Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation
Accept (poster)
Summary: The authors present a novel approach for SBI by utilizing neural networks (NN) to estimate a generalized cost in GBI. Their proposed method, ACE, demonstrates superior computational efficiency compared to previous approaches, without compromising competitive performance across various evaluation metrics. The authors conducted extensive benchmarking of their method under diverse experimental settings, ranging from 1D to 10D. Furthermore, they conducted experiments using real intracellular recordings, adding an additional dimension to their research. Strengths: The paper is well written and maintains a smooth flow throughout. Furthermore, the contribution made by the authors appears to be a novel and valuable improvement upon existing ideas in the field. Moreover, the authors have thoughtfully acknowledged and addressed the limitations of their work, placing their findings within the broader context of the field. It is worth noting that their code repository is well-organized, adding to the overall credibility and reproducibility of their research. Weaknesses: 1. To enhance clarity, it would be beneficial to improve the visual presentation of Figure 2. When referring to subfigure C, the arrangement could be adjusted to prevent the automatic focus on the third panel. 2. Propositions 1 and 2 appear somewhat forced, as their results do not seem entirely compelling or noteworthy. It may be worth revisiting these propositions. In the appendix the authors state, "The proofs closely follow standard proofs that regression converges to the conditional expectation," thus highlighting the limited importance of these propositions. 3. A minor typo on line 116, "parameter- or". Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: nothing, great work! Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors do address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate the reviewer’s encouraging and positive comments about our work, in particular highlighting the novelty and value of our proposed contribution to SBI, our extensive benchmark experiments, and the quality of the writing and code for better communication and reproducibility. In response to their suggestions regarding clarity, we will: - Rearrange Figure 3C such that the first panel is bigger and more prominent to prevent automatic focus on the third panel, - Move the propositions to the Appendix, such that in the main text we keep the presentation more concise and just state the well-known convergence proof, and refer to the Appendix for more details. - Fix the noted typo (thanks for the detailed read!) --- Rebuttal Comment 1.1: Comment: Thank you for the comments.
Summary: This paper studies the problem of simulation-based inference - which is encountered in a wide range of scientific problems - where one is interested in performing Bayesian inference using simulators with implicit likelihoods. Scientific problems can have two unique properties - a) the predictive quality of the simulation is more important than the posterior b) the model can be misspecified. These can be handled through Generalized Bayesian Inference - which generalizes traditional Bayesian Inference by admitting generalized likelihood functions which are defined by some general loss / cost function defined on the parameters. A challenge with employing Generalized Bayesian Inference in the simulation-based case is that estimating the cost function can be computationally expensive. The authors propose learning an amortized neural network estimator for the cost function to reduce the overhead for enabling GBI in simulation-based inference. This amortized cost estimator can be combined with standard inference algorithms (MCMC) to sample from the Generalized Bayesian posterior. The overall proposed approach consists of 3 stages: 1) Collecting data 2) Training ACE 3) Sampling using ACE. The authors present several experiments on a variety of benchmark simulation-based inference tasks and present a case study with the Hodgkin-Huxley simulator. Strengths: * The problem of simulation-based inference is an important one for various scientific problems and misspecification is an often under-studied but important aspect in practice. The paper attempts to address this important issue relevant to various communities. * The proposed approach is novel within the context of simulation-based inference. (Although, as I mention below, similar ideas have been employed in some related areas) * Despite some caveats that I discuss below, the idea is quite neat, and relatively simple (which is a good thing!) 
Replacing the estimation of the cost with an amortized predictor seems like a natural extension to enable generalized posteriors in the simulation-based inference setting. The simpler algorithm also makes it easier for practitioners to adopt with little overhead. * The experiments cover a fairly wide variety of tasks used in prior work in the SBI setting. The case study on the HH model was a nice example of leveraging the method on realistic tasks. * The presentation in the paper was generally quite clear. The paper is well written and easy to follow. * I also appreciate the authors open-sourcing the code to aid reproducibility. Weaknesses: * A fundamental weakness I see with the method is that it relies on the neural estimator to generalize well from finitely many simulations. This might not be true in practice (certainly isn’t guaranteed). Put differently, an implicit assumption for the approach to work is that the ACE is trained with _enough_ data to be able to provide informative guidance to the sampler learned in the next stages. As with other work relying on NN estimators, this is a fundamental failure mode. * On a similar note, the method appears to be practical only for relatively low-dimensional problems. The NN learning issues would crop up if the dimensionality of the problem is increased. * Similar ideas of learning amortized neural estimators have been recently explored in the uncertainty estimation literature [1,2]. In particular, the secondary predictor learned in [1] amortizes a similar quantity. [1] DEUP: Direct Epistemic Uncertainty Prediction. Lahlou et al., TMLR 2023. [2] Epistemic Neural Networks. Osband et al., 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * There are already some results with different simulation budgets - but a more careful study of the effect of the number of samples available to train the ACE might be quite useful. 
* Could you comment on the connections to the work on uncertainty estimation mentioned above? * Do the authors have any thoughts on extending the approach to higher dimensional settings? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors already discuss several limitations in the paper including a) need for another method to sample from the posterior induced by ACE, b) accommodation of arbitrary distances, c) additional hyper parameters. In addition to these I would also add access to sufficient samples for training ACE and being limited to low dimensional settings as limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work, and for kindly noting the importance, novelty, and simplicity of the contribution, as well as our effort to present the idea clearly, with code included for reproducibility. We enjoyed the concise and accurate summary of the problem setting surrounding scientific simulators, and we agree that simpler algorithms (with good performance) are more likely to be adopted by practitioners. We address the reviewer’s questions and suggestions point by point here: **Problem of neural network estimation with finite training data**: We agree wholeheartedly with the reviewer’s concern that the reliance of neural networks on having enough training data is a fundamental limitation of our (and all other) neural network-based SBI algorithms. We will explicitly acknowledge this as a limitation in the discussion. We also agree that a careful characterization of performance vs. simulation budget is a critical sensitivity to measure for any algorithm. We demonstrate precisely the reviewer’s point in our experiments: with a simulation budget of 200 (Suppl. Fig. A3), ACE’s cost estimation accuracy is poor, even though posterior predictive distance is still on par with or better than other neural methods in this low-training-data regime. With a moderate increase in simulation budget (to 1000, Suppl. Fig. A4), we already see a marked improvement in cost estimation. This provides guidance on the simulation budget for problems with similar parameter and data dimensionality. We additionally characterize this dependence in our real-world problem, noting that ACE achieves very good performance compared to NPE even with 10 times less training data (100k vs. 1 million), and is further improved with 1 million training samples. 
**Extending to higher-dimensional problems**: We acknowledge that here we only provide evidence for good performance in low to moderate dimensionality problems (up to 10 parameters), and we agree that the finite data issue will be further exacerbated in high-dimensional problems. We will include these points in the discussion of limitations as well. However, we note that thus far most successful applications of simulation-based inference are in regimes of parameter dimensionality around 10. For example, the benchmark tasks presented in Lueckmann et al. 2019 all have parameter dimensionality smaller than or equal to 10. In addition, compared to methods such as NLE and NPE, ACE casts the density estimation problem into a regression problem, where the output of the neural network is always kept as 1-dimensional. Therefore, on a task with high dimensionality, e.g., 30 parameters and 30 data dimensions, NPE needs to learn a rather complex 30-D to 30-D transformation, which can be challenging for a normalizing flow with invertibility constraints, whereas ACE only performs a 60-D to 1-D regression, drastically simplifying the problem. Other approaches employed by existing methods, such as adding a preprocessing/embedding network to first reduce parameter and/or data dimensionality, or extending the algorithm to perform sequential inference that concentrates on a specific observation, can be applied here as well. We will include the above in the discussion section, and we thank the reviewer for raising the possibility of future works along these lines. **Relations to work on uncertainty estimation**: Thanks for bringing to our attention a related area of literature. From our understanding (and correct us if we’re wrong), both referenced papers train neural networks to predict the _predictive_ uncertainty of a neural network. 
The “standard” deep NN solves a regression task by predicting the expected (mean) output given an input, while the uncertainty networks in the cited works additionally predict deviations from that, noting various sources of uncertainty (e.g., approximation uncertainty vs. misspecification/epistemic uncertainty). As noted in Lahlou et al 2023, this encapsulates Bayesian NNs, which produce samples from the posterior predictive distribution directly. Furthermore, since the predictive uncertainty can be computed per input without retraining, it is amortized. Conditional density estimation indeed tackles a similar problem, i.e., predicting not just the mean output, but (samples from) a conditional distribution. Furthermore, standard methods in SBI (such as NLE) currently do not naturally account for misspecification. Therefore, both uncertainty-aware / epistemic neural networks and ACE-GBI are relevant for the high-level problem of producing a diversity of good predictive samples under possible misspecification. On the other hand, they have very different motivations and regimes of operation, i.e., inverting (misspecified) stochastic mechanistic simulator models vs. giving black-box neural networks awareness of sources of uncertainty. As such, our overall goal, the implementation of amortized estimation of the predictive distance, and its usage in SBI are all quite different from the referenced works on amortized prediction of uncertainty in deep NNs (predictive uncertainty vs. parameter uncertainty, targeting cost vs. posterior variance, using MCMC for sampling vs. directly predicting the full distribution). Nevertheless, there are interesting potential connections between predictive uncertainty, Bayesian NNs, and GBI, which can be explored in future works for robustifying both deep NN and SBI approaches to misspecification and other sources of uncertainty. 
We thank the reviewer for raising these potential connections, and will note the above discussion in the paper as well. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for the response and sorry for the late reply. > Relations to work on uncertainty estimation Thanks for the elaborate explanation. To clarify, my comment on the uncertainty estimation methods was merely to highlight the connection between the two, not necessarily as a shortcoming. Apologies for the miscommunication. I think a concise version of this summary would be good to have in the paper. I appreciate the authors’ responses, which partially address the concerns I raised about the reliability of NNs trained on finite data and high-dimensional problems. But these fundamental limitations still remain, so I will keep my score, recommending acceptance.
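To make the three-stage pipeline from this review's summary concrete (collect prior simulations, fit an amortized cost estimator, run MCMC on the induced generalized posterior), here is a deliberately minimal sketch. It is not the authors' implementation: a k-nearest-neighbour average stands in for the trained cost network, and the 1-D Gaussian simulator, the uniform prior, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Toy simulator: Gaussian observation centred on theta (illustrative).
    return theta + rng.normal(0.0, 0.5, size=np.shape(theta))

# Stage 1: collect (theta, x) pairs from the prior.
n_sim = 2000
thetas = rng.uniform(-3.0, 3.0, size=n_sim)
xs = simulate(thetas)

# Stage 2: an "amortized" cost estimator C(theta, x_o) ~ E[d(x, x_o)],
# here a k-nearest-neighbour average instead of a trained neural network.
def cost_estimate(theta, x_o, k=50):
    idx = np.argsort(np.abs(thetas - theta))[:k]
    return np.mean((xs[idx] - x_o) ** 2)

# Stage 3: Metropolis-Hastings on the GBI posterior ~ exp(-beta * C) * prior.
def sample_gbi(x_o, beta=2.0, n_steps=3000):
    theta = 0.0
    logp = -beta * cost_estimate(theta, x_o)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, 0.5)
        if -3.0 <= prop <= 3.0:  # uniform prior: reject proposals outside support
            logp_prop = -beta * cost_estimate(prop, x_o)
            if np.log(rng.uniform()) < logp_prop - logp:
                theta, logp = prop, logp_prop
        samples.append(theta)
    return np.array(samples)

gbi_samples = sample_gbi(x_o=1.0)
```

The key point of stage 3 is the one the paper exploits: once the cost estimator is trained, sampling requires no further simulator calls.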
Summary: This paper proposes amortized cost estimation (ACE) for generalized Bayesian inference (GBI) for SBI. The paper trains a neural network to approximate the cost function. The paper demonstrates results on baseline synthetic SBI examples followed by a real-world application using experimental data from the Allen Cell Types Database. Strengths: * The paper is written clearly and is structured well. * The idea appears novel. While GBI was previously defined in [13], its application within the SBI literature is interesting and is therefore of value to the community. * The Algorithm box makes the implementation of the algorithm clear to the reader. Weaknesses: * The main weakness of the paper seems to be the experimentation. While the figures are neatly presented, it is not that clear what the reader is supposed to conclude from the results: * For example, it was not clearly defined what the difference between specified and misspecified samples means within Figure 3. It would be helpful to mathematically define these differences rather than describe them within the text. * What is the definition of the GT-GBI? It first appears on line 173 and it is not clear what the true distance is and how it is obtained. This misunderstanding makes it more difficult to understand Figure 3. * Another question is why the first three rows only show MSE while the last row shows MMD. This is confusing and makes one question why the Figure is not consistent with the metric. * Additionally, for the MSE/MMD it seems that as $\beta$ is increased, the MSE/MMD goes down, but the C2ST seems to go up. Why is this the case? What happens in the limit that $\beta$ goes to infinity? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Compared to GBI, equation (2) seems to not include the KL divergence term. Can this approach really be called GBI if it is not being regularized by some prior? Why was the KL term not included? 
* Following on from the question regarding the prior, is the addition of the noise ($\epsilon$) implicitly adding a prior? It might be the case that adding this noise is also enforcing some implicit distance metric between nearby parameters. I.e. does it assume that there is a local Euclidean distance metric? * Why does this approach require multi-chain slice-sampling? Is random-walk Metropolis-Hastings not sufficient? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * The idea behind the paper is interesting, but the experimental results are difficult to interpret. Further clarifications on the above would help increase the score. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting the novelty of our work and the clear structure of the paper, and their positive score of 3s (good) in soundness, presentation, and contribution. We appreciate their requests for further clarifications and the opportunity to increase the score. We apologize for the confusion caused by the lack of explicitly presented information in these cases. We address each concern below and will modify the paper accordingly. We hope this clarifies the reviewer’s concerns and that it allows them to recommend our paper for acceptance. **Difference between well-specified and misspecified observations**: This was a clear oversight on our part, as we failed to refer to Appendix A4.3 on line 615 in the main text, which contains the mathematical definitions for the synthetic well-specified and misspecified observations. We do so now in the main text and expand on the original description, and summarize A4.3 here for convenience: Well-specified observations for all tasks were prior simulations. For the Uniform 1D, 2 Moons, and Linear Gaussian task, misspecified observations were prior simulations that were successively perturbed with Gaussian noise with fixed variance (i.e., Gaussian random walk) until the sample was outside the range (i.e., [min, max]) defined by 100k prior simulations in all dimensions. For the Gaussian Mixture task, misspecified samples were generated by replacing the second of the two Gaussians in the simulator with N(12.5 * sign(theta), 0.5^2*eye). **Definition of GT-GBI and true distance**: We apologize for, and will correct the mistake on line 172: It should have been “true cost”, not “true distance”. Throughout the paper, we use “distance” to refer to the output of the distance function (e.g., MSE) between two points in data space, while “cost” refers to the _expectation of the distance_ between all simulations generated by a parameter (theta) and an observation in data space (x). 
We compute the ground-truth cost function (Eq. 2) for all benchmark tasks, either computing the integral analytically, or numerically via quadrature (i.e., summing over a fine grid). This is then used to define the unnormalized GBI posterior in Eq. 1 and sampled using MCMC or rejection sampling, referred to as GT-GBI samples. This will be clarified in the main text. **MSE vs. MMD in Fig. 3**: 3 of the benchmark tasks use MSE as the distance function in data space, since the Euclidean distance between two points is easily measured. In the fourth task (Gaussian Mixture), each observation is a set of 5 independently sampled data points; therefore, the distance function must measure the statistical distance between two distributions (which MSE cannot, but MMD can). In addition, users may want to use different distance functions, and it is a feature of ACE that it works for a diverse set of distance functions. The first and second columns of Fig. 3 quantify the average distance the posterior predictive simulations achieve for each of the algorithms, hence the first 3 rows have the y-axis labeled as MSE, and the last as MMD. More details in Appendix A4.2. **MSE/MMD goes down while C2ST goes up with increasing beta**: Due to the exponential in Eq. 1, as beta increases, the GBI posterior becomes more concentrated near parameter regions with low cost. Therefore, high-beta posterior predictive simulations have lower distance to the observation (thus lower MSE & MMD). At the same time, since the cost estimation network only learns to approximate cost using finite simulations, if the ground truth posterior is very narrow, then small errors in cost estimation can lead to large changes in C2ST, resulting in generally higher C2ST with increasing beta. In the limit of beta approaching infinity, the true GBI posterior would collapse onto the minimizer of the cost function (i.e., the parameter that produces simulations with the lowest average distance to the observation). 
Samples from this ground-truth posterior would achieve the lowest cost by definition, while samples from the ACE posterior would be close, but slightly higher in cost, and the two sets of samples coming from two different delta functions would be perfectly separable, resulting in a C2ST score of 1 but comparably small predictive distances. We will expand on the current discussion of this in the results section (line 235). **Prior regularization and addition of noise**: Our Eq. 2 does not contain the KL term because it is only a loss to train the neural network to learn an approximation of the cost function. Prior regularization in our GBI posterior happens in Eq. 1, via standard Bayesian updating that balances the (generalized) likelihood and the prior probabilities. In ref 13 (Bissiri et al 2016), Eq. 7 is a loss over the entire posterior approximation, such that it can be optimized directly, thus including a prior KL term. In section 2.3, those authors note that when the generalized likelihood is defined by a cost function, the solution to the posterior approximation problem exactly follows our Eq. 1, thus the two formulations are equivalent. The addition of noise to the training samples in data space during learning of the cost function does not have any effect on regularizing the posterior density in parameter space. Instead, it expands the range of prior simulations such that the simulation support may cover observed data for which the simulator is otherwise misspecified, and does not assume local Euclidean distance. **Alternate MCMC methods**: Multi-chain slice-sampling was used for computational efficiency in our experiments. Any MCMC algorithm, such as Metropolis-Hastings, is applicable, since the cost estimation network simply provides the (generalized) log-likelihood. We will note this in the discussion of limitations (line 335). --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed response. 
Please do include those clarifications in the main paper/supplementary materials. I will happily increase my score.
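The distance/cost distinction clarified in the rebuttal above can be illustrated with a toy example (the Gaussian simulator below is a hypothetical stand-in, not one of the paper's benchmark tasks): for a simulator x ~ N(theta, sigma^2) with MSE as the distance, the cost E[(x - x_o)^2] has the closed form (theta - x_o)^2 + sigma^2, and a Monte Carlo average of distances over simulations converges to it.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5            # simulator noise scale (toy value)
theta, x_o = 1.2, 0.3  # a parameter and an observation (toy values)

# Distance: a function of two points in data space (here, 1D MSE).
def distance(x, x_o):
    return (x - x_o) ** 2

# True cost: expectation of the distance over simulations from theta.
# For x ~ N(theta, sigma^2): E[(x - x_o)^2] = (theta - x_o)^2 + sigma^2.
true_cost = (theta - x_o) ** 2 + sigma ** 2

# Monte Carlo estimate of the cost from finite simulations, which is
# what a regression-based cost estimator targets in expectation.
sims = rng.normal(theta, sigma, size=100_000)
mc_cost = distance(sims, x_o).mean()
```

For the actual benchmark tasks, the rebuttal states this ground-truth cost is obtained analytically or by quadrature over a fine grid, then plugged into the GBI posterior of Eq. 1 to produce GT-GBI samples.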
Summary: The paper presents a new technique - amortized cost estimation (ACE) - that, as it says on the tin, amortizes a broad class of loss functions used in generalized Bayesian inference (GBI) in place of the (log) likelihood. After training on a moderate-to-large number of model simulations (10K-100K in the examples of the paper), the amortized loss can then be used in place of the real loss to perform GBI (e.g., via MCMC). Crucially, this step does not require further simulations from the model. The authors show that the amortized loss generally matches the ground-truth loss, and show that their amortized method inherits the advantages of GBI, in that it is typically more robust to misspecification than standard and amortized Bayesian inference, as in neural likelihood estimation and neural posterior estimation. Strengths: - **Quality:** The general polish and quality of the paper is high. - **Clarity:** The paper is well written and generally clear. The aim and methodology are very well explained and motivated, with some points that would benefit from further expansion. - **Significance:** The idea of the paper (amortizing GBI for simulator-based inference, with the additional goal of dealing with misspecification) is very timely and there are surely many applications. - **Originality:** The idea at the core is not particularly original - amortizing the loss in GBI seems a natural direction for the field at the moment - but clearly worth pursuing. Similarly, the execution seems fairly straightforward (which is not a bad thing). ### Post-rebuttal: Thanks for having addressed most of my comments. I appreciated the additional experiments with tempered NLE, with varying noise augmentation, and the recommendations for the choice of $\beta$. The proposed heuristic seems both reasonable and practical. I understand that given the limited time, it was not feasible to conduct an additional set of experiments. 
Overall, I am satisfied with the changes and I increased my score accordingly from 6 to 7. Weaknesses: Most of my concerns have to do with the treatment of the hyperparameter $\beta$ (inverse temperature), a key element of GBI. The role of $\beta$ could be explained and discussed more (it is mentioned in the **Limitations**). Many experiments show the results for a range of $\beta$ (which also changes from experiment to experiment), and it is unclear how the practitioner should choose its value. For example, as $\beta \rightarrow \infty$, (generalized) Bayesian inference tends to maximum-a-posteriori (or minimum-generalized-loss) point estimation. The paper is ambiguous on what the user should be trying to achieve. I appreciate that some of these issues are not of ACE (the authors' method) but of GBI, but the paper would benefit from directly addressing these points. The comparison between GBI and Bayesian inference is somewhat unfair in that Bayesian inference is not allowed an inverse temperature hyperparameter (i.e., likelihood tempering), but in principle it could be easily applied (at least for NLE). Is ACE+GBI truly better here, or is the advantage just given by the fact that ACE/GBI has an extra free hyperparameter? In some cases, standard Bayesian inference can do better with large $\beta$. We know this can be the case, for example, with neural network posteriors (possibly a case of prior misspecification). - With NLE, just run MCMC with a scaling factor $\beta$ on the amortized log likelihood. - Amortizing different temperatures with NPE would be more complex. Naively, one would need to retrain the network for different values of $\beta$. One could also subtract the log prior from the NPE posterior to get the log likelihood, and then rescale it, but that might lead to instabilities. Similarly to $\beta$ (but less importantly), the paper could explore further the role of the noise $\sigma$ added to $S$ observations. 
This parameter seems to be less relevant though - it is very nice that the authors use a fixed value throughout the paper, and this value can be determined from prior-predictive checks. Nonetheless, a bit of exploration / insight could help. For the rest, the empirical evaluation is acceptable, but it would have been nice to see at least a couple of applications to real-world data (even another simple one), since this is arguably (also according to the authors) where GBI shines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you expand on the role of $\beta$, e.g. how you chose (and a practitioner should choose) reasonable ranges of the hyperparameter? - What about having an inverse temperature to the log likelihood of standard Bayesian inference (as per standard tempering, also seen helping in Bayesian deep learning applications)? Can you show that in your examples (for NLE, and possibly NPE)? - Consider adding a lesion study or analysis about the role of $\sigma$ and $S$. - Not strictly necessary, but it would be nice to see the method at work on another (simple) example with real data. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address the limitations of their method (and I already asked above to expand on the role of $\beta$). The work has no particular potential negative social impact (at least not above any other general method for statistical inference). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
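The likelihood tempering this reviewer suggests for NLE — scale only the (amortized) log-likelihood by $\beta$ in the MCMC target — can be sketched on a conjugate toy model where the tempered posterior is available in closed form. The Gaussian log-likelihood below is a hypothetical stand-in for an NLE network; all names and values are illustrative.

```python
import numpy as np

def log_prior(theta):
    # Standard normal prior, up to an additive constant.
    return -0.5 * theta ** 2

def log_likelihood(theta, x_o=2.0, sigma=1.0):
    # Stand-in for an amortized (NLE) log-likelihood: exact Gaussian here.
    return -0.5 * ((x_o - theta) / sigma) ** 2

def tempered_log_posterior(theta, beta):
    # Likelihood tempering: only the likelihood term is scaled by beta.
    return beta * log_likelihood(theta) + log_prior(theta)

# In this conjugate case the tempered posterior is Gaussian with mean
# beta * x_o / (beta + 1): it interpolates from the prior mean (beta -> 0)
# toward the maximum-likelihood point x_o (beta -> inf).
grid = np.linspace(-5.0, 7.0, 12001)
for beta in (0.1, 1.0, 10.0):
    w = np.exp(tempered_log_posterior(grid, beta))
    post_mean = (grid * w).sum() / w.sum()
    assert abs(post_mean - beta * 2.0 / (beta + 1.0)) < 1e-3
```

This is exactly the "tempered NLE" baseline the authors implement in the rebuttal below; the sketch only makes the MCMC target explicit.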
Rebuttal 1: Rebuttal: We appreciate the reviewer’s encouraging remarks regarding the timeliness and broad applicability of our work in leveraging GBI for (misspecified) SBI problems, as well as noting the high quality of execution and presentation in our paper. Furthermore, they raised several interesting questions and comments regarding the effect of the beta and sigma hyperparameters in our original results, for which we conducted additional experiments and discuss below. **Experiments with tempered NLE**: We very much agree with the reviewer that the standard Bayesian posterior from neural SBI methods (e.g., NLE) could in theory benefit from a temperature hyperparameter, and therefore could constitute a more fair experiment vs. ACE. Therefore, we implemented “tempered NLE” by adding a similar beta parameter (inverse temperature) to the log-likelihood term for MCMC sampling, i.e., beta*NLE_loglikelihood + logprior. In short, we find that increasing beta for tempered NLE does not improve posterior predictive distance as it does for GT/ACE-GBI, and is more often detrimental. Results on all tasks are presented in Fig. 1 of the attached pdf (10k training budget). The one exception is the 2 Moons misspecified observation task, where higher beta improves NLE, but ACE still systematically outperforms tempered NLE at all betas. It’s an open question whether tempering NLE is detrimental due to targeting of entirely different objectives (likelihood vs. predictive distance), or if tempering exacerbates errors in the normalizing flow’s approximation of the likelihood. We will include the new results and discussions for potential future investigations in the paper, and we thank the reviewer for this interesting suggestion. **Experiments with varying noise augmentation (sigma)**: Similarly, we agree that the role of sigma could be of interest to explore further. 
We therefore conducted additional experiments by varying our data augmentation procedure under a simulation budget of 10k: we trained ACE with sigma of 0, 2, and 5 (original results with sigma=2), as well as entirely removing data augmentation, i.e., neither noised simulations nor real observations were seen during training. Overall, varying noise augmentation during training has barely any effect on predictive distance across all tasks (Fig. 2 in attached pdf). This may be due to the fact that the number of augmented samples (100) is small compared to the simulation budget (10k), providing reassurance regarding the robustness of our main results. We also studied the effect of removing training augmentation with a training budget of 1000 (not shown), which slightly decreased performance in two tasks where the observation is misspecified. This supports our original motivation for noise and real data augmentation as expanding the prior simulation range to combat misspecification, though its effect is more pronounced with smaller training simulation budgets. As the reviewer pointed out, we kept sigma constant throughout our original experiments, and it is straightforward to determine a good value from prior predictive checks. These new experiments nevertheless provide interesting insights and sanity checks, which will be included in the results and discussion sections of the paper, and we thank the reviewer for their suggestions. **Discussion of beta and guideline for practitioners**: We agree that the non-trivial role of beta can be mentioned earlier in the work, and will discuss it in more detail in the methods section in the context of GBI. As the reviewer pointed out, while the issue is not unique to ACE but applies to GBI in general, using our proposed method nevertheless requires the practitioner to choose a beta value suitable for their goals: Taking the log of Eq. 1, we see that beta weighs the cost function against the impact of the log-prior regularization.
In this particular case, a Monte Carlo estimate of the cost function (i.e., distances) is a quantity that can be computed on simulated data alone (unlike log-likelihood in e.g., tempered NLE). As such, a practitioner should choose a value of beta that scales the cost function relative to the log prior probability, both of which can be computed on prior simulations before training ACE. In addition, the choice of beta should consider how broadly the cost function is distributed, which can be straightforwardly estimated with a 1D histogram. Since the cost is always greater or equal to 0, larger betas penalize larger costs more heavily due to the exponential. Considering the above, one reasonable heuristic is to choose beta such that it scales the mean or median of the empirical distribution of distances (computed on random pairs of prior simulations) to be in the same range as the mean or median of log-prior probabilities of those same prior samples. As a concrete example, for the Uniform 1D problem, the log-prior is -1.1 (uniform), while the median of prior simulation distances from 10k random pairs is 0.077. Therefore, a beta of 10-20 is a good start, and increasing by a factor of 5-10 further trades off posterior sample diversity for predictive simulation distance / performance. Lastly, many GBI methods for setting beta have been developed (see, e.g., Wu, Martin, Bayesian Analysis 2023). Similar methods could be applied to ACE and, since ACE amortizes inference over beta, it is even amenable to methods which require repeated posterior sampling for different values of beta. Thanks to the reviewer for raising this important practical issue, and we will include a summary of the above in the method and discussion sections. 
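As an illustrative sketch (not the authors' code) of the heuristic described above, a starting beta can be estimated from prior simulations alone; `prior_simulations` and `log_prior` below are hypothetical placeholders for prior-predictive simulator outputs and the log prior probabilities of the corresponding parameters:

```python
import numpy as np

# Illustrative sketch of the proposed beta heuristic (not the authors' code).
# `prior_simulations` and `log_prior` are placeholder stand-ins.
rng = np.random.default_rng(0)
prior_simulations = rng.normal(size=(10_000, 3))  # placeholder prior-predictive data
log_prior = np.full(10_000, -1.1)                 # e.g., a uniform prior, as in Uniform 1D

# Median distance between random pairs of prior simulations, i.e., a cheap
# empirical summary of the cost distribution, computable before training ACE.
i = rng.integers(0, len(prior_simulations), size=10_000)
j = rng.integers(0, len(prior_simulations), size=10_000)
median_dist = np.median(
    np.linalg.norm(prior_simulations[i] - prior_simulations[j], axis=1)
)

# Choose beta so that beta * median(distance) matches the typical magnitude of
# the log prior; larger betas then trade sample diversity for predictive distance.
beta_start = np.abs(np.median(log_prior)) / median_dist
print(float(beta_start))
```

Increasing `beta_start` by a factor of 5 to 10, as suggested above, then shifts the posterior further toward low-cost parameters.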
**Another application with real data**: Given the time constraint of the rebuttal phase, we are not able to conduct an additional set of experiments on a real problem, which requires a suitable dataset and simulator model, in-house expertise, and comparison with existing methods. However, we agree that further applications on real problems are important, and are currently being pursued. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks for having addressed most of my comments. I appreciated the additional experiments with tempered NLE, with varying noise augmentation, and the recommendations for the choice of $\beta$. The proposed heuristic seems both reasonable and practical. I understand that given the limited time, it was not feasible to conduct an additional set of experiments. Overall, I am satisfied with the changes and I will argue for acceptance of the paper. I will increase my score accordingly. --- Reply to Comment 1.1.1: Title: Question regarding score increase Comment: Thanks again for the feedback and we are happy to hear that the reviewer will increase the score and argue for acceptance! Unfortunately, the score has not been changed on OpenReview yet, so we would kindly like to ask if you simply forgot to increase the score, if you need any further clarification on the paper, or if this is a technical issue with OpenReview. Thank you very much for your time!
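For concreteness, the tempered-NLE target discussed in this thread (beta times the learned log-likelihood plus the log-prior) can be sketched as an unnormalized log-posterior; `nle_log_likelihood` and `log_prior` below are placeholder stand-ins, not the authors' implementation:

```python
import numpy as np

def nle_log_likelihood(theta, x_obs):
    # Stand-in for a trained normalizing flow's log q(x_obs | theta);
    # here a simple isotropic Gaussian, for illustration only.
    return -0.5 * float(np.sum((x_obs - theta) ** 2))

def log_prior(theta):
    # Placeholder standard-normal log-prior (up to an additive constant).
    return -0.5 * float(np.sum(theta ** 2))

def tempered_log_posterior(theta, x_obs, beta=1.0):
    # beta = 1 recovers the standard Bayesian posterior; beta > 1 yields a
    # "colder" posterior that concentrates on high-likelihood parameters.
    return beta * nle_log_likelihood(theta, x_obs) + log_prior(theta)
```

Any off-the-shelf MCMC sampler can then target `tempered_log_posterior` as its unnormalized log-density.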
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their constructive and detailed engagement with our work, resulting in many helpful comments, questions, and opportunities for clarification, as well as ideas for future work. We are especially grateful for several reviewers’ acknowledgement that the problem we tackle is “well-motivated” (bf2N), “important and relevant” (HKbp), that our contribution to the SBI & GBI literature is novel (CFba, HKbp, 9aMX), “timely” (bf2N), and will be of value to the community (CFba, HKbp, 9aMX), while also pointing out that the method itself being simple is a good thing (bf2N, HKbp). We are happy to hear that all 4 reviewers thought the paper was “well-written” and clearly structured, and appreciated the open-source code (HKbp, 9aMX). At the same time, we agree with many of the concerns and questions raised by the reviewers, and have responded individually to each review in detail. We summarize here the main points and results from new experiments, referring to individual reviewers’ comments whenever applicable. We hope that these follow-up discussions and results clarify their concerns, allowing all reviewers to recommend our work for acceptance. **Tempered NLE (rev:bf2N)**: We implemented the tempered Bayesian posterior with NLE (neural likelihood estimation), and tested it on all tasks with the same betas as used for GBI. Overall, we find that tempering NLE does not lower the posterior predictive distances that we aim to minimize, and is in fact often detrimental compared to standard NLE. “Colder” posteriors (higher beta) slightly improved NLE in just one of the eight scenarios (2 Moons, misspecified observations), but it is still outperformed by ACE in this (and all other) tasks. Results in attached pdf, Fig. 1. We will update Fig. 3, method and discussion sections in the paper.
**Noise augmentation lesion study (rev:bf2N)**: We varied the augmentation noise variance (sigma=0,2,5) during training for ACE in the 10k simulation budget experiment and observed no impact on subsequent performance. We further removed all training data augmentation, including observed data points, and saw barely any effects (attached pdf, Fig. 2). We therefore conclude that at sufficient simulation budgets, removing augmentation would not degrade performance. We did see a slight decrease in performance for two tasks with misspecified observations when all augmentation was removed for ACE trained with 1k simulation budget, supporting the idea that data augmentation aids learning outside of regions produced by the simulator. These results will be discussed in the paper. **On choosing beta (rev:bf2N) and when beta approaches infinity (rev:bf2N & CFba)**: We agree that earlier acknowledgement of the additional hyperparameter increases transparency, and will do so in the method section as well. We also agree that choosing beta is an important practical concern, and provide here a heuristic for choosing starting values: a good “baseline value” is one that scales the average distance across a subset of the training data (precomputed on prior simulations) to be in the same range as the (log) prior probability, both of which can be computed on prior simulations. From there, increasing beta sacrifices sample diversity for predictive distance; as the reviewers pointed out, when beta approaches infinity, posterior samples converge onto the minimizer of the cost function. We also note that, since our method is amortized over beta, beta-selection methods which require posterior sampling for several different values can be performed at low computational cost. See individual responses for more details.
**Prior regularization via dKL (rev:CFba)**: In ref 13, the GBI loss is for optimizing the entire posterior approximation directly, hence prior regularization is included explicitly as the KL term. In contrast, our loss function (Eq. 2) is only for training the cost estimation network, while prior regularization occurs via the standard Bayesian updating in Eq. 1. These two views are consistent, as is also stated in ref 13, Sec. 2.3. **Clarification on experimental details and results (rev:CFba)**: We apologize for erroneous or missing references to methodological details and definitions. We now refer to the definition of well-specified and misspecified observations in the Appendix (A4.3), have fixed inconsistencies when referring to distance vs. cost functions and how ground truth was obtained, and further clarify the MSE vs. MMD comparison and their differing directions of change relative to C2ST with increasing beta. Full details in the individual response. **Relationship to uncertainty estimation and Bayesian NNs (rev:HKbp)**: We discuss how posterior predictive samples and their distance to the observation (our optimized quantity) conceptually relate to learning predictive uncertainty in works on uncertainty estimation (e.g., epistemic neural networks) and Bayesian NNs, while noting the very different goals and implementations between these two areas of literature. **Acknowledgement of limitations regarding finite data, high-dimensional problems, hyperparameter sensitivities**: We thank the reviewers for noting further limitations in our work that we had failed to discuss, and will explicitly acknowledge them in the discussion section, including: errors in neural network approximation when learning from finite data (rev:HKbp), potential issues and strategies when applying to higher dimensional problems (rev:HKbp), and sensitivity to various hyperparameters, e.g., beta, sigma, and training budget. Detailed discussions in individual responses.
**Clarification and improvements on presentation and visualization**: We make a number of clarifications or changes following questions and suggestions raised by all reviewers, including an adjustment to Fig. 2C (rev:9aMX), moving the propositions to the Appendix for clarity (rev:9aMX), and further explanations on results in Fig. 3 (rev:CFba). Pdf: /pdf/10f0adf221cf415851dc9d118105540b8b379cdd.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
How to Leverage Imperfect Demonstrations in Offline Imitation Learning
Reject
Summary: This paper addresses the challenge of learning good imitative policies from offline data, in which abundant imperfect demonstrations are mixed with few expert ones. Unlike previous work that measures the state-action similarity between imperfect and expert data, the present work proposes iLID, which leverages trajectories in imperfect data that lead to expert states in several steps. The sample complexity analysis indicates that this approach benefits the performance of imitation policy, and the empirical results suggest that iLID outperforms baselines including state-of-the-art offline imitation learning methods. Strengths: * As illustrated in Figure 1, the original idea of selecting imperfect demonstrations leading to expert states is novel and makes intuitive sense. * The policy optimization problem is well-posed, and is straightforward to implement with alternating dual ascent. * Empirical results suggest a large performance gain for iLID compared to state-of-the-art baselines, especially when the dataset contains very few expert demonstrations. In particular, the ablation study in Figure 3 does a good job of explaining why the constrained optimization problem leads to a better policy than the naive direct imitation approach. Weaknesses: The quality of presentation can be improved. 1) $\tilde{\mathcal{D}}$ in equation (6) overloads the notation that was originally presented in Section 3.1 without time indices. 2) Remarks in Section 3.1 state that the sample complexity of the proposed approach is better than the vanilla BC, but there’s no citation for the BC sample complexity. 3) The explanation on the behavior interference for the complementary dataset $\tilde{\mathcal{D}}$ did not make full sense and requires further clarification. 
In particular, it is unclear why more recent actions are preferred when the same state appears multiple times in the trajectory, even though the underlying MDP does not have any discount factor in the definition of the value function. (What would happen if the discount factor $\gamma$ in equation (7) is set to 1 for all the experiments?) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * In Figure 4 and Table 1, it appears that all the methods resulted in poor performance for the Door task. Is it a failure case, and if so what was so difficult about the task? * It was not easy for me to follow the proof of Theorem 3.1. There are quite a few questions about the proof steps: 1) What is the outer expectation over in the definition of $\epsilon$ and $\delta$? Is it over the randomness in the choice of the datasets? If so, do you require any assumptions on the distributions of $\mathcal{D}_e$ and $\tilde{\mathcal{D}}$, such as independence? 2) How do you get from the 1st inequality to the 2nd in equation (13)? Specifically, how do you show $\mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e)) \mathbb{1}(s \notin \mathcal{S}_1(\tilde{\mathcal{D}}))\right] \leq \mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e))\right] \mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\tilde{\mathcal{D}}))\right]$? 3) Whether $\epsilon = \mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e))\right]$ and $\delta = \mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\tilde{\mathcal{D}}))\right]$ hold or not seem to depend on the distribution of $ \mathcal{D}_e $ and $ \mathcal{\tilde{D}} $. 
For example, $\epsilon = \mathbb{E}\left[ \mathbb{E}_s [ \mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e)) ]\right] = \mathbb{E}_s \left[ \mathbb{E} [ \mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e)) ]\right] = \sum_s \frac{1}{|S|} \mathbb{E} [\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e)) ] = \sum_s \frac{1}{|S|} \mathbb{P}(s \notin \mathcal{S}_1(\mathcal{D}_e))$ , and the quantity $\mathbb{P}(s \notin \mathcal{S}_1(\mathcal{D}_e))$ may differ for different $s \in \mathcal{S}$ depending on the distribution of $\mathcal{D}_e$. 4) The 1st equality of equation (15) is not obvious. How do you show that $\mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e)) \mathbb{1}(s \in \mathcal{S}_1(\tilde{\mathcal{D}}))\right] = \mathbb{E}\left[\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e))\right] \mathbb{E}\left[\mathbb{1}(s \in \mathcal{S}_1(\tilde{\mathcal{D}}))\right]$? Do you assume that the events $\mathbb{1}(s \notin \mathcal{S}_1(\mathcal{D}_e))$ and $\mathbb{1}(s \in \mathcal{S}_1(\tilde{\mathcal{D}}))$ are independent for all $s \in \mathcal{S}$? 4) The 1st inequality of equation (15) does not make sense. Did you mean $\epsilon(1 - \delta) V^{\pi_e}$ rather than $\epsilon(1 - \delta) V^{\pi_e}(s)$? Where does the inequality come from? 5) Can you elaborate on how to derive $\mathbb{E}_{s' \sim \tilde{D}(\cdot | s)} V'(s')$ in equation (16)? Does it hold because we only have to consider the case $s \in \mathcal{S}_1(\mathcal{\tilde{D}}) \backslash \mathcal{S}_1(\mathcal{D}_e)$ there, which would correspond to the 2nd case in equation (2)? Please elaborate on the points above and clarify any underlying assumptions that are used implicitly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have not explicitly provided limitations in the paper. One conceivable limitation is that one requires the datasets of expert and imperfect demonstrations to be labeled as such, although it seems unavoidable for any methods of this kind. Another limitation could be that the resulting policy may still fail to discover diverse modes to accomplish the task, if the expert demonstration only has a single mode. For instance, in a goal-reaching navigation task similar to the one depicted in Figure 1, the expert policy still needs to exhibit the two different paths so that iLID learns to discover both modes to approach the goal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of the contribution of this paper! Below are detailed responses to each comment: --- ## Q1: About the proof steps **(1) What is the outer expectation over in the definition of $\epsilon$ and $\delta$? Do you require any assumptions on the distributions of $\mathcal{D}_e$ and $\tilde{\mathcal{D}}$?** Regarding the first question, the reviewer's understanding is accurate: the outer expectation is taken w.r.t. the randomness in the choice of datasets $\mathcal{D}_e$ and $\tilde{\mathcal{D}}$. Regarding the second question, as stated in Section 2, the tuples in $\mathcal{D}_e$ are assumed to be sampled from the expert's state-action distribution, and for tuple $(s,a,s')\in\tilde{\mathcal{D}}$, $s$ and $s'$ follow $\mu$ and $\mathcal{S}(\mathcal{D}_e)$ respectively. Accordingly, the *starting* states in the state-action tuples of $\tilde{\mathcal{D}}$ are independent of $\mathcal{D}_e$, while the resultant states are contingent on $\mathcal{D}_e$. This assumption empowers us to characterize the diverse transitions leading to given expert states with the necessary statistical properties. **(2) How do you get from the 1st inequality to the 2nd in equation (13)?** The derivation is from the independence of the distributions of the starting states $\mathcal{S}_1(\mathcal{D}_e)$ and $\mathcal{S}_1(\tilde{\mathcal{D}})$, both of which are independently sampled from the initial state distribution $\mu$ (as elaborated in the preceding question). **(3) Whether $\mathbb{E}[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))]=\epsilon$ or $\mathbb{E}[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))]=\delta$ hold or not seem to depend on the distributions of $\mathcal{D}_e$ and $\tilde{\mathcal{D}}$.** The reviewer's understanding is correct, and it is indeed the reason why we impose the assumption prior to Theorem 1: the starting states $\mathcal{S}_1(\mathcal{D}_e)$ and $\mathcal{S}_1(\tilde{\mathcal{D}})$ follow the uniform distribution $\mu$.
Notably, this assumption can be relaxed to yield a more generalized result, as detailed in our response to Reviewer 71T4. **(4) The 1st inequality of equation (15) does not make sense.** Sorry for the typo. The correct expression in equation (15) should be $\epsilon (1-\delta)V^{\pi_e}$. We will diligently review the proof and rectify any such errors. **(5) Elaboration on how to derive $\mathbb{E}_{s'\sim\tilde{\mathcal{D}}(\cdot|s)}V'(s')$ in equation (16).** Denote $\mathcal{D}_c\doteq\mathcal{S}_1(\tilde{\mathcal{D}})/\mathcal{S}_1(\mathcal{D}_e)$. We detail the derivation from the 3rd equality to the 4th equality as follows: $$\mathbb{E}\_{s\sim\mu} \Big[\mathbb{1}(s\in\mathcal{D}_c) \mathbb{E}\_{a\sim\tilde{\pi}(\cdot|s),s'\sim T(s,a)}\big[V'(s')\big]\Big]$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s) \mathbb{E}\_{a\sim\tilde{\pi}(\cdot|s),s'\sim T(s,a)}\big[V'(s')\big]$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s)\sum\_a \frac{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}((\tilde{s},\tilde{a})=(s,a))}{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde{s} = s)} V'(T(s,a))$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s)\frac{\sum\_{(\tilde s,\tilde a, \tilde s')\in\tilde{\mathcal{D}}}V'(\tilde s')\sum\_a\mathbb{1}((\tilde s,\tilde a)=(s,a))}{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde{s} = s)}$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s)\frac{\sum\_{(\tilde s,\tilde a, \tilde s')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde s=s)V'(\tilde s')}{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde{s} = s)}$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s)\frac{\sum\_{(\tilde s,\tilde a, \tilde s')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde s=s)\sum\_{s'}\mathbb{1}(\tilde s'=s')V'(s')}{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde{s} = s)}$$ $$=\sum\_{s\in\mathcal{D}\_c}\mu(s)\sum\_{s'}\frac{\sum\_{(\tilde s,\tilde a, \tilde s')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde 
s=s,\tilde s'=s')V'(s')}{\sum\_{(\tilde{s},\tilde{a},\tilde{s}')\in\tilde{\mathcal{D}}}\mathbb{1}(\tilde{s}=s)}$$ $$=\sum\_{s\in\mathcal{D}_c}\mu(s)\mathbb{E}\_{s'\sim\tilde{\mathcal{D}}(\cdot|s)}\big[V'(s')\big]$$ $$=\mathbb{E}\_{s\sim\mu}\Big[\mathbb{1}(s\in\mathcal{D}_c)\mathbb{E}\_{s'\sim\tilde{\mathcal{D}}(\cdot|s)}\big[V'(s')\big]\Big].$$ --- ## Q2: About presentation **(1) $\tilde{\mathcal{D}}$ in equation (6) overloads the notation that was originally presented in Section 3.1 without time indices.** To clarify it, we have added an explanation, "Slightly abusing notation, we denote $\tilde{\mathcal{D}}$ as the buffer of selected diverse data", in the data selection section. We will undertake a check of notations used throughout the paper and provide the appropriate explanation to ensure clarity and consistency. **(2) Lack of citation for the sample complexity of BC.** The result can be found in Theorem 2 of (Xu et al. 2021). We will revise the manuscript accordingly. (Xu et al. 2021) [On generalization of adversarial imitation learning and beyond](https://arxiv.org/pdf/2106.10424.pdf). **(3) Why are more recent actions preferred when the same state appears multiple times in the trajectory?** To clarify, we add ablation studies on all 4 MuJoCo environments by setting $\gamma=1$. As implied in Figure 3 in the PDF, without properly controlling the priorities among selected behaviors, the actions necessitating multiple steps to reach expert states may suffer from vulnerability to environmental ***uncertainty*** and hamper performance. --- ## Q3: About the Door task The door task involves undoing the latch and swinging the door open. The latch has significant dry friction and a bias torque that forces the door to stay closed.
Without environmental interaction, it is highly challenging for the offline agent to develop an understanding of the latch as no information about the latch is explicitly provided, and the position of the door is also randomized. --- --- Rebuttal 2: Comment: Dear authors, Thank you for the detailed clarification, and I apologize for the delay in my response. The math in the proof of Theorem 3.1 is much clearer now. I was originally concerned that the uniform distribution assumptions on $\mathcal{S}_1(D_e)$ and $\mathcal{S}_1(\tilde{D})$ would be too strong. However, it is nice that the authors have been able to relax the assumption to yield a similar (albeit looser) bound in the response to reviewer 71T4. >To clarify, we add ablation studies on all 4 MuJoCo environments by setting $\gamma = 1$. As implied in Figure 3 in the PDF, without properly controlling the priorities among selected behaviors, the actions necessitating multiple steps to reach expert states may suffer from vulnerability to environmental uncertainty and hamper performance. Aside from the typo in Figure 3 in the new PDF (you meant $\gamma$, not $\lambda$, didn't you?), it visually describes, to some extent, that more recent causal actions leading to expert states are preferred when *stochasticity* exists in the environment. My original confusion was caused by the discrepancy between the theoretical part of the paper assuming deterministic transitions and the algorithmic part assuming a stochastic environment. I encourage the authors to make the distinction clear in writing the final version of the manuscript. Nevertheless, the authors have addressed most of the concerns I have had in the original review, and thus I will leave the score unchanged. The only minor comment I have is on the limitations of the work.
I am curious whether the authors agree with the potential limitations that I wrote in my initial review, and if so I would appreciate it if they mentioned them in the final version of the paper. --- Rebuttal Comment 2.1: Comment: We deeply appreciate the reviewer's meticulous reexamination of our responses and the in-depth suggestions for refining our work. Regarding Figure 3 in the attached PDF, we are sorry for the typo, and the correct notation should be $\gamma$ instead of $\lambda$. Regarding the alignment between the theoretical motivation and the algorithmic design, we will emphasize this point in the revised version. As an example, we have augmented Section 3.2 with an elucidation, "it is worth noting that while our theoretical motivation is framed in deterministic cases, iLID can be applied to general stochastic environments." Regarding the limitations, we concur with the reviewer's remarks and will integrate them into our final version. In particular, the second mentioned limitation actually stems from the isolation between the manifolds of expert and suboptimal data. With no state similarity between expert data and suboptimal data, our algorithm can hardly abstract positive multi-mode diverse behaviors, and its performance will reduce toward BCE. In contrast, if a different mode for achieving the goal exists in the suboptimal data, even with a single-mode expert demonstration, our approach can effectively identify this alternative mode from the causal behaviors of the goal state (overlapping with the expert states). More specifically, we will add the following limitation section to the revised manuscript. ## Limitation of iLID The main limitation of this paper lies in the requirement of the existence of state similarity between the suboptimal data and the expert data. Without any state similarity, our algorithm can hardly abstract positive or multi-mode diverse behaviors, and the performance will reduce to BC. 
However, it is worth noting that in the context of offline IL, if the states within suboptimal data deviate significantly from the given expert data, it is extremely challenging to assess the values of suboptimal behaviors with no prior information, because the non-expert behaviors can deviate arbitrarily from desirable behaviors. Another limitation of this work is that one requires the datasets of expert and imperfect demonstrations to be labeled. Our method may not reason about the stochasticity of the expert behaviors. For example, humans can take suboptimal actions when these actions bear lesser importance in the task (Ziebart et al. 2008). In addition, there is a lack of theoretical guarantees in general MDPs, which we will continue to explore by utilizing the analytical idea introduced in this work and replacing the initial distribution with the state-action occupancy measure. **Reference** (Ziebart et al. 2008) [Maximum Entropy Inverse Reinforcement Learning](https://cdn.aaai.org/AAAI/2008/AAAI08-227.pdf)
Summary: The submission presents a novel method called Offline Imitation Learning with Imperfect Demonstrations (iLID) for Offline Imitation Learning, which aims to improve policy learning from both expert and imperfect demonstrations. Compared with previous IL methods, which only consider the state-action pairs during learning, this paper also considers the dynamics of the non-expert data. To this end, the submission proposes a data selection technique that uses a discriminator on the resultant state of a behavior, while integrating lightweight constrained behavior cloning. Empirical studies show that the proposed method outperforms other baselines. Strengths: 1. Overall the paper is well-written. 2. The motivation for including dynamics in behavior cloning makes sense and is easy to follow. 3. The experimental tasks and baselines are sufficient. Weaknesses: 1. Some notations are confusing, and their descriptions do not lead to the corresponding theoretical results. 2. The assumption of Theorem 3.1 is too strong but is not discussed. 3. The proposed algorithm and the motivation in the introduction (Figure 1) are isolated. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 1. In line 76 the initial state distribution $\mu$ is defined as the mapping function $\mathcal{S} \rightarrow [0, 1]$. Then how is it possible to sample $s_i \sim \mu$ but $s'_i \sim \mathcal{S}(\mathcal{D}_e)$ in the definition of $\widetilde{\mathcal{D}}$? After that, in Theorem 3.1, $\mu$ is again defined by $U(\mathcal{S})$, but its meaning is completely different from the meaning of the initially defined mapping function. 2. $\mu$ is usually defined as a subset of the state space $\mathcal{S}$. The assumption in Theorem 3.1 that $\mu=U(\mathcal{S})$ is too strong and almost impossible for decision-making environments. 3.
There exists no empirical study or statement to illustrate how the proposed algorithm reflects the motivation in the introduction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: There is no discussion about the limitation or societal impact in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and detailed feedback! Below are detailed responses to each comment, and new comments on them are very welcome! --- ## Q1: About the notation, $\mu$ We are sorry for this typo in the preliminary section. Throughout the paper, $\mu$ is used as a distribution or overloaded as the corresponding probability measure (function). We will diligently review the manuscript and rectify any such errors. --- ## Q2: About the assumption, $\mu=U(\mathcal{S})$ Thank you for pointing this out. It is important to highlight that we introduce this assumption primarily for the characterization of achieving strong performance at each individual state, especially within the context of this deterministic motivational setting. As suggested by the reviewer, we proceed to remove this assumption and derive a more generalized result as follows. Note that the assumption is only used in Eqs. (13), (15), and (16) in the Appendix, which we now refine by removing it. Denote the maximum probability of the initial state not being in $\mathcal{S}_1(\mathcal{D}_e)$ and the minimum probability of the initial state not being in $\mathcal{S}_1(\tilde{\mathcal{D}})$ as $$\epsilon_{\mathrm{max}}\doteq \max \Big\\{\mathbb{E}\_{\mathcal{D}_e}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\big]\,\Big\vert\,\mu(s)>0,s\in\mathcal{S}\Big\\},\quad \delta\_{\mathrm{min}}\doteq\min \Big\\{\mathbb{E}\_{\tilde{\mathcal{D}}}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))\big] \,\Big\vert\, \mu(s)>0,s\in\mathcal{S}\Big\\}.$$ Regarding Eq.
(13), we have $$\mathbb{E}\left[(a)\right]$$ $$=\mathbb{E}\Big[\mathbb{E}\_{s\sim\mu}\Big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\cdot\mathbb{1}(s\notin\mathcal{S}\_1(\tilde{\mathcal{D}}))\cdot\big( V^{\pi_e}(s) - V^{\tilde{\pi}}(s)\big)\Big]\Big]$$ $$\overset{(a)}{\le} H\mathbb{E}\left[\mathbb{E}\_{s\sim\mu}\left[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\cdot\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))\right]\right]$$ $$\overset{(b)}{\le} H\mathbb{E}\_{s\sim\mu}\left[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\big]\cdot\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))\big]\right]$$ $$\overset{(c)}{\le} H \sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\big]^2\big] \cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))\big]^2\big]}$$ $$\overset{(d)}{\le} H \sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))^2 \big]\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))^2 \big]\big]}$$ $$= H \sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\big]\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}})) \big]\big]}$$ $$=H\sqrt{\delta\epsilon},$$ where $(a)$ follows from $V^\pi(s)\le H$, $(b)$ from the independence of $\mathcal{S}_1(\mathcal{D}_e)$ and $\mathcal{S}_1(\tilde{\mathcal{D}})$, $(c)$ from the Cauchy-Schwarz inequality, and $(d)$ from $\mathbb{E}[X]^2\le\mathbb{E}[X^2]$. Regarding Eq.
(15), the following holds: $$\mathbb{E}\Big[\mathbb{E}\_{s\sim\mu}\Big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\cdot\mathbb{1}(s\in\mathcal{S}_1(\tilde{\mathcal{D}}))\cdot V^{\pi_e}(s)\Big]\Big]$$ $$=\mathbb{E}\_{s\sim\mu}\Big[\mathbb{E} \big[\mathbb{1}(s\notin\mathcal{S}\_1(\mathcal{D}\_e))\big]\cdot\mathbb{E}\big[\big(1-\mathbb{1}(s\notin\mathcal{S}\_1(\tilde{\mathcal{D}}))\big)\big]\cdot V^{\pi\_e}(s)\Big]$$ $$\le \epsilon\_\max (1-\delta\_\min)V^{\pi_e},$$ where the last inequality is from the definitions of $\epsilon\_\max$ and $\delta\_\min$. Similarly, for Eq. (16), it can easily be seen that $$\mathbb{E}\left[\mathbb{E}\_{s\sim\mu}\left[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}\_e))\cdot\mathbb{1}(s\in\mathcal{S}\_1(\tilde{\mathcal{D}}))\cdot V^{\tilde{\pi}}(s)\right]\right]\le\epsilon\_\max (1-\delta\_\min)\mathbb{E}\_{s'\sim\rho^{\pi\_e}}\big[V'(s')\big].$$ Using the above results and following the proof steps in Theorem 1, we can obtain a generalized result that does not rely on $\mu=U(\mathcal{S})$: $$V^{\pi\_e} - \mathbb{E}\big[V^{\tilde{\pi}}\big] \le \left(\frac{H}{2}+\frac{1}{H}\right)(1-\delta\_\min)\epsilon\_\max + H\sqrt{\delta\epsilon},$$ where $\epsilon$ and $\delta$ are defined in Theorem 1. As $\delta\_\min$ and $\delta$ rely on the data coverage of $\tilde{\mathcal{D}}$, this aligns with the fact that a large $\tilde{\mathcal{D}}$ can effectively combat the error accumulation in BC. --- ## Q3: About motivations We respectfully disagree with the reviewer about "there exists no empirical study or statement to illustrate how the proposed algorithm reflects the motivation in the introduction." Figure 1 showcases the benefit (generalization to unseen states) of ***the diverse behaviors that can lead to expert states***. Section 3 is dedicated to illustrating our methodology, which learns ***a state-only identifier to distinguish positive diverse behaviors based on their resultant states*** and leverages the selected data properly.
Our empirical evaluation is carried out in a setting where the agent can only access notably limited expert data (1 trajectory) along with (low-quality) mixed suboptimal data. The pronounced performance advantage of our method serves as compelling evidence of the algorithm's efficacy in extracting positive behaviors from suboptimal data, and it substantiates the alignment between our proposed approach and the motivations outlined in the introduction. --- --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses and further discussions. However, I still have some concerns regarding the second and third questions. ## Q2: The Updated Theorem 3.1 After the removal of the strong assumption that $\mu = U(\mathcal{S})$, the updated expectation term in (a) is bounded by $H\sqrt{\delta\epsilon}$, which is even tighter than the original bound in (a), i.e., $H\delta\epsilon$. I am confused about how this is possible. Also, to obtain the last equation in (d), don't we still need the assumption that $\mu = U(\mathcal{S})$? ## Q3: Motivation and Methodology The motivation of this work, as presented in Figure 1, is that "when the agent encounters a state unobserved in expert demonstrations, compared to taking a random action, a more reasonable way is to return to the states where it knows expert behaviors." My concern is that there is no empirical study to support this statement. In the experiments (including the appendix), there are only performance comparisons. Does the non-expert data singled out by the proposed method actually help guide the policy towards expert data? What if there is no overlap between the non-expert and expert data? --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's further response and in-depth clarification. Below are detailed responses to each follow-up comment, and new comments on them are very welcome!
--- ## Q2: The Updated Theorem 3.1 **(1) Is the updated bound of (a) tighter than the original bound?** The answer is no. Please recall the definitions $\epsilon\doteq\mathbb{E}[\mathbb{E}\_{s\sim\mu}[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))]]$ and $\delta\doteq\mathbb{E}[\mathbb{E}\_{s\sim\mu}[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))]]$, which imply $\epsilon,\delta\in[0,1]$. The updated bound $H\sqrt{\delta\epsilon}$ is in fact looser than the original bound $H\delta\epsilon$. **(2) Do we still need $\mu=U(\mathcal{S})$ to obtain the last equation in (d)?** The answer is no. To clarify, we elaborate on the derivation from (c) to (d) below. First, the inequality (d) is derived from the fact $\mathbb{E}[X]^2\le\mathbb{E}[X^2]$: $$\mathbb{E}[(a)]\overset{(c)}{\le}H\sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))\big]^2\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))\big]^2\big]}\overset{(d)}{\le} H \sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))^2\big]\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))^2\big]\big]}.$$ Then, using the facts $\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))^2 =\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))$ and $\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))^2=\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))$, we have $$H\sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e))^2 \big]\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}}))^2 \big]\big]}=H \sqrt{\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\mathcal{D}_e)) \big]\big]\cdot\mathbb{E}\_{s\sim\mu}\big[\mathbb{E}\big[\mathbb{1}(s\notin\mathcal{S}_1(\tilde{\mathcal{D}})) \big]\big]}=H\sqrt{\delta\epsilon},$$ where the
second equality holds from the definitions of $\epsilon$ and $\delta$. The assumption $\mu=U(\mathcal{S})$ is never used in the above proof. --- ## Q3: Motivation and Methodology **(1) Does the non-expert data singled out by the proposed method actually help guide the policy toward expert data?** Since our proposed method selects ***the causal behaviors*** of identified expert states, the extracted non-expert training data are indeed the ***sub-trajectories that lead to expert states***. Therefore, it is natural that this data (which moves toward expert states) will guide the learned policy back toward expert states from non-expert states. To better illustrate the efficacy of our method in guiding the policy toward expert states, we further examine ***the ratio of expert states*** among the states visited by the policies of BC and of our method. The experiments are carried out under the settings of Table 1, and we regard the states with discriminator output $d(s)>0.5$ as the (approximated) expert states. The results below correspond to the ratio of visited expert states across ten episodes.

| Environment | BC | iLID (ours) |
| :---------- | :----- | :---------- |
| Ant | 7.98% | 57.83% |
| HalfCheetah | 3.67% | 25.07% |
| Hopper | 5.91% | 78.78% |
| Walker2d | 53.14% | 90.52% |

Because it learns only from the given expert data, BC is likely to take (near-)random actions in states beyond the expert data manifold. The results demonstrate that our method enjoys a higher incidence of visits to expert states, affirming the claim. **(2) What if there is no overlap between the non-expert and expert data?** If there is no similarity between the non-expert and expert ***states***, without any prior knowledge of the non-expert data, the performance of our algorithm would reduce to BC.
It corresponds to the complementary experiment on Halfcheetah with the Random data and a single expert trajectory, where our method only selects 100 non-expert state-actions with just one identified expert state. However, it is important to note that assessing values of suboptimal behaviors in this case is extremely challenging for offline IL, because the non-expert behaviors can deviate arbitrarily from desirable behaviors (as illustrated in Figure 1 in the attached PDF, all baselines fail in this case). In addition, the existence of state similarity (even state-action similarity) between the non-expert and expert data is a commonly used assumption in existing works of offline IL ([Kim et al. 2022](https://openreview.net/pdf?id=BrPdX1bDZkQ);[Xu et al. 2022](https://proceedings.mlr.press/v162/xu22l/xu22l.pdf)). As shown in the complementary results, it is also applicable in widely used RL benchmarks (please refer to our response to Reviewers b1iR and K4fk for more details).
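The elementary facts invoked in the updated bound above (that $H\delta\epsilon \le H\sqrt{\delta\epsilon}$ for $\delta,\epsilon\in[0,1]$, that $\mathbb{E}[X]^2\le\mathbb{E}[X^2]$, and that indicators are idempotent under squaring) admit a quick numerical sanity check. The snippet below is purely illustrative and not part of the proof:

```python
import random

# Illustrative sanity check of three facts used in the updated bound:
# (i)   H*delta*eps <= H*sqrt(delta*eps) for delta, eps in [0, 1]
#       (the generalized bound is looser, as stated in the reply above),
# (ii)  E[X]^2 <= E[X^2] (Jensen's inequality, step (d)),
# (iii) an indicator squared equals itself (used to simplify step (d)).
random.seed(0)
H = 10.0
for _ in range(1000):
    delta, eps = random.random(), random.random()
    assert H * delta * eps <= H * (delta * eps) ** 0.5  # (i): sqrt(x) >= x on [0, 1]

xs = [1 if random.random() < 0.3 else 0 for _ in range(10000)]  # Bernoulli indicators
mean = sum(xs) / len(xs)
mean_sq = sum(x * x for x in xs) / len(xs)
assert mean ** 2 <= mean_sq  # (ii)
assert mean_sq == mean       # (iii): x^2 == x for x in {0, 1}
print("all inequality checks passed")
```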
Summary: The paper addresses the problem of offline imitation learning (IL) from demonstrations that are noisy/suboptimal. To this end, the authors propose iLID, which is a two-step process: a data selection step that only retains those $(s,a)$ transitions from suboptimal demonstrations which lead to states in the expert demonstrations, thereby maintaining a supplementary data buffer; then policy learning is performed by behavior cloning on samples from the supplementary buffer while also regularizing the policy to not stray too far from a BC expert policy (on samples from the expert data buffer). The authors establish competitive upper bounds on the suboptimality and sample complexity of iLID. Extensive experiments on 8 complex robotic tasks show that iLID outperforms 5 competing baselines, and limited sensitivity analysis is performed. Strengths:
- The paper tackles the important but challenging issue of offline imitation learning from suboptimal demonstrations. In this regard, the paper addresses a pertinent problem in the research area.
- The rationale behind the formulation is simple yet powerful. Specifically, the data selection step trains a discriminator to select only those transitions in the suboptimal data $(s,a) \in \mathcal{D}_\mathcal{s}$ which lead to a state in the expert data $(s,a) \in \mathcal{D}_\mathcal{e}$ within some specified $K$ timesteps. This is a simple way of leveraging offline data to distil only useful knowledge from suboptimal data for policy learning and may aid the agent in correcting towards expert behavior from non-expert states. The formulation of the policy learning step as a regularized version of BC yields demonstrable improvements in training time.
- Empirical results on the D4RL robotics benchmark dataset are impressive (Table 1) and hold across all but one environment.
- Overall, the paper is very well-written and provides helpful illustrations and examples to present ideas.
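The data-selection step summarized above lends itself to a compact sketch. The snippet below is a hypothetical, simplified rendering (scalar states and exact state matching stand in for the learned state-only discriminator); it is not the authors' implementation:

```python
# Simplified sketch of the selection rule: keep (s, a) pairs from a suboptimal
# trajectory whose state reaches an (approximate) expert state within K steps.

def is_expert_state(s, expert_states, tol=1e-6):
    # Stand-in for a learned state-only identifier (e.g., d(s) > 0.5).
    return any(abs(s - e) <= tol for e in expert_states)

def select_transitions(trajectory, expert_states, K):
    """trajectory: list of (state, action) pairs in time order."""
    selected = []
    for t, (s, a) in enumerate(trajectory):
        # Look ahead up to K steps for a successor state in the expert set.
        successors = [s2 for (s2, _) in trajectory[t + 1 : t + 1 + K]]
        if any(is_expert_state(s2, expert_states) for s2 in successors):
            selected.append((s, a))
    return selected

# Toy example with scalar states; the expert visits states 3.0 and 4.0.
traj = [(0.0, "a0"), (1.0, "a1"), (3.0, "a2"), (5.0, "a3")]
print(select_transitions(traj, expert_states=[3.0, 4.0], K=2))
# [(0.0, 'a0'), (1.0, 'a1')]
```

Here the first two transitions are kept because they reach the expert state 3.0 within K=2 steps, while the later transitions never reach an expert state.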
Weaknesses: Experimentally, seeding the imperfect dataset with expert data (~1-20%) seems like a strong assumption, given that the data selection method explicitly selects $(s,a)$ pairs based on whether they lead to expert states $s \in \mathcal{D}_\mathcal{e}$. If the expert and suboptimal trajectories only share the seeded expert transitions, i.e., $\mathcal{D}_\mathcal{e} \cap\mathcal{D}_\mathcal{s} = \mathcal{D}_\mathcal{\text{seeded}}$ (a realistic assumption in real-world cases), then the proposed selection criterion will select only the (seeded) “expert” transitions from $\mathcal{D}_\mathcal{s}$ to add to the supplementary data $\tilde{\mathcal{D}}$ (since only states from the seeded expert trajectories in $\mathcal{D}_\mathcal{s}$ would lead to successor states in $\mathcal{D}_\mathcal{e}$). In this case, does iLID reduce to just pure BC (Pomerleau, 1988) on just the expert data with additional policy regularization? Further, Figure 3a also shows that iLID does rely heavily on expert demonstrations for it to perform well. Some ways to address this:
- An experiment showing results when the imperfect data is not seeded with expert demonstrations would best clarify this issue.
- An experiment showing results for a varying # of expert trajectories (as in Fig 3a) for different environments (including the unseeded case).
- As an alternative methodology, to bypass seeding, the imperfect demonstration set could be generated by rolling out trajectories from the noise-injected expert BC policy $\tilde{\pi}_{\mathcal{e}}$.
Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors:
- The only major concern is seeding suboptimal data with expert data. How does iLID perform across different environments when this is not done at all?
- (Table 1) Why does BC with expert data (BCE) perform worse than BC with union data (BCU)? Do they use different numbers of demonstrations/samples? Compounding errors should have affected both?
- Can the runtime for DemoDICE be included in Figure 5 (missing figure caption?) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: While the iLID formulation is interesting, some weaknesses associated with data seeding could be discussed in more detail as per suggestions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your in-depth comments and suggestions! Below are detailed responses to each comment, and new comments on them are very welcome! **Q1: The only major concern is seeding suboptimal data with expert data. How does iLID perform across different environments when this is not done at all?** Following the reviewer's suggestion, we exclude expert data from the diverse dataset and conduct a series of extensive experiments across varying environments (Ant, Halfcheetah, Hopper, Walker2d), data qualities (Random, Replay, and Medium), and numbers of expert trajectories (1, 3, and 5). We show all the results in Figure 1 in the PDF and select two sets of the results in the tables below (above: in Ant, below: in Halfcheetah). Clearly, the results demonstrate that iLID can effectively unearth valuable behaviors even in the absence of expert data within diverse datasets. Furthermore, we would like to underscore a critical point: to enhance the algorithm's exploitation of diverse data, it is paramount to meticulously calibrate the $\sigma$ and $\gamma$ parameter values. When expert data is relatively scarce, it proves beneficial to increase $\sigma$, thus providing more data support during offline learning. On the contrary, when dealing with a relatively large amount of expert data, a reduction in $\sigma$ is more judicious. Moreover, it's worth noting that lower-quality diverse data often calls for a diminished $\gamma$, which aids in mitigating the challenges of behavior interference.

| Suboptimal traj. | # expert traj. | BCE | BCU | DWBC | CLARE | DemoDICE | iLID (ours) |
| :--------------: | :------------: | :---: | :---: | :---: | :---: | :-------: | :-----------: |
| Random | 1 | -2.37 | 31.55 | 2.86 | 31.73 | 32.03 | **33.60** |
| Random | 3 | -2.18 | 31.48 | 23.67 | 17.71 | **49.63** | 45.13 |
| Random | 5 | 32.00 | 31.48 | 41.46 | 31.75 | 39.48 | **64.97** |
| Replay | 1 | -2.37 | 67.73 | 9.76 | 61.42 | 72.74 | **80.20** |
| Replay | 3 | -2.18 | 69.82 | 30.11 | 64.33 | 79.97 | **96.00** |
| Replay | 5 | 32.00 | 62.93 | 24.61 | 62.72 | 82.17 | **97.65** |
| Medium | 1 | -2.37 | 87.67 | -0.08 | 86.24 | **92.07** | 82.96 |
| Medium | 3 | -2.18 | 81.89 | 8.95 | 85.78 | 81.70 | **89.33** |
| Medium | 5 | 32.00 | 88.83 | -1.95 | 86.52 | 85.85 | **99.11** |

| Suboptimal traj. | # expert traj. | BCE | BCU | DWBC | CLARE | DemoDICE | iLID (ours) |
| :--------------: | :------------: | :---: | :---: | :---: | :-------: | :------: | :-----------: |
| Random | 1 | -0.32 | 2.25 | 0.89 | -0.25 | 2.24 | **2.43** |
| Random | 3 | 5.40 | 2.25 | 4.33 | 2.75 | 2.22 | **5.77** |
| Random | 5 | 4.05 | 2.25 | 2.16 | 4.74 | 2.23 | **5.86** |
| Replay | 1 | -0.32 | 23.56 | 9.15 | 31.07 | 30.96 | **34.49** |
| Replay | 3 | 5.40 | 28.89 | 12.72 | **35.77** | 19.62 | 34.66 |
| Replay | 5 | 4.05 | 35.10 | 19.33 | 37.05 | 28.25 | **38.63** |
| Medium | 1 | -0.32 | 42.60 | 9.31 | **42.56** | 41.94 | 42.47 |
| Medium | 3 | 5.40 | 42.86 | 5.90 | 42.48 | 39.88 | **43.10** |
| Medium | 5 | 4.05 | 42.74 | 7.24 | 42.38 | 41.50 | **43.38** |

**Q2: Why does BC with expert data (BCE) perform worse than BC with union data (BCU)?** BCE is trained only on the provided expert trajectories (in our experiments, BCE only uses 1 expert trajectory), whereas BCU utilizes both expert trajectories and diverse trajectories, resulting in differing sample sizes between the two methods.
Hence, the performance of BCE is easily hampered by the limited data coverage of the expert dataset, leading to significant extrapolation errors. **Q3: Can the runtime for DemoDICE be included in Figure 5?** The reason we opted not to incorporate DemoDICE in Figure 5 stems from the divergence in frameworks. DemoDICE's official code is built upon the TensorFlow framework, while our work is grounded in PyTorch. This framework discrepancy makes the algorithms not directly comparable. We are actively working on reproducing DemoDICE using PyTorch and plan to integrate its result into forthcoming iterations of our work.
Summary: The paper proposes an algorithm for offline imitation learning on a mixture of “perfect” expert demonstrations and “imperfect” sub-optimal demonstrations. The authors provide a theoretical motivation for their approach and exhibit results on various MuJoCo and Adroit tasks. The authors also conduct ablation studies to justify their design choices. Strengths:
- The paper proposes a novel and interesting approach for learning from suboptimal data. It also provides some theoretical motivation for why filtering data based on resultant states might be better than simply using the state-action distribution.
- The paper is written clearly and concisely.
- In addition to the novelty in filtering the data, the paper proposes a novel constrained BC algorithm where a discount factor is used to reduce the impact of stochasticity in the MDP during the optimization process.
- The authors compare their approach with several recently published offline imitation learning algorithms capable of learning from suboptimal data. The algorithms are evaluated across 4 MuJoCo tasks and 4 Adroit tasks with two variations (low and high expert data) for each task.
- The authors justify their design choices using ablation studies showing the impact of the number of expert demonstrations, the number of rollback steps considered, the use of the discount factor, and a comparison between runtimes of iLID and other algorithms.
Weaknesses:
- Though the paper has some good ablation studies, it would be interesting to have a study of the effect of the quality of the suboptimal data on the performance of the method. The paper currently only considers random trajectories as suboptimal data, which might not be very useful for harder tasks. Instead, imagine demonstrations that can complete a part of the task but not the entire task (for instance, can pick up the hammer but not hit the nail).
These can probably be collected by rolling out an expert agent and taking random actions in between (varying the percentage of random actions can give different levels of suboptimality). Such a study can help (1) highlight the relevance of the work in the real world where collecting perfect demos might be hard but it is often possible to do parts of the task, and (2) highlight the importance of expert demonstrations in this problem setting (for instance, can we reduce the amount of expert data if the amount of suboptimality in the remaining data reduces). - Fig. 4 plots the average return against the number of training steps. However, since the models are completely trained on offline data, a better metric might be a comparison of maximum performance attained by different algorithms. Also, training it for too long might result in the model overfitting on the data, thus, reducing the average return over time (as can be seen in quite a few tasks in Fig. 4). - It would be great if the ablations in Fig. 3 could be shown on a few more tasks in the appendix. - The paper is missing a limitations section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be great if the authors could address the points mentioned in the “Weaknesses” section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is missing a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
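The data-generation protocol suggested above (expert rollouts with a tunable fraction of random actions) could be sketched as follows; the environment, policies, and all names here are toy stand-ins introduced purely for illustration:

```python
import random

# Generate graded suboptimal demonstrations by rolling out an expert policy
# while taking a random action with probability p (higher p = more suboptimal).

def noisy_rollout(env, expert_policy, random_action, p, horizon):
    s = env.reset()
    traj = []
    for _ in range(horizon):
        a = random_action() if random.random() < p else expert_policy(s)
        traj.append((s, a))
        s = env.step(s, a)
    return traj

# Toy deterministic environment on the integer line; the expert always moves +1.
class Line:
    def reset(self):
        return 0
    def step(self, s, a):
        return s + a

random.seed(1)
traj = noisy_rollout(Line(), expert_policy=lambda s: 1,
                     random_action=lambda: -1, p=0.3, horizon=5)
print(len(traj))  # 5
```

Sweeping `p` from 0 to 1 would produce demonstration sets at different suboptimality levels, as the reviewer suggests.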
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of the contribution and novelty of this paper! Below are detailed responses to each comment: **Q1: It would be interesting to have a study of the effect of the quality of the suboptimal data on the performance of the method.** Following the reviewer's suggestion, we employ the Random, Replay, and Medium datasets in D4RL, which, ranging from low to high quality (detailed below), serve as the suboptimal demonstrations (without seeding with expert data).

> Random, Medium, and Replay use samples from 1) a randomly initialized policy, 2) a policy trained to approximately 1/3 the performance of the expert, and 3) the replay buffer of a policy trained up to the performance of the medium agent, respectively.

We carry out a series of experiments in four MuJoCo environments with varying numbers of expert trajectories (from 1 to 5). The selected results are presented in the tables below (above: in Ant, below: in Halfcheetah; please refer to Figure 1 in the PDF for the complete set of results). The results demonstrate the efficacy of iLID in effectively extracting positive behaviors from suboptimal demonstrations across a spectrum of quality levels. As expected, the reduction in suboptimality corresponds to a decrease in the required number of expert trajectories (exemplified by the superior performance of 1 expert trajectory on the Replay data in Ant, compared to 3 expert trajectories on the Random data). However, surprisingly, albeit with higher overall scores, the performance of iLID on the Medium data frequently falls short in comparison to its performance on the Replay data, revealing the larger state coverage and richer information embedded within the Replay data.

| Suboptimal traj. | # expert traj. | BCE | BCU | DWBC | CLARE | DemoDICE | iLID (ours) |
| :--------------: | :------------: | :---: | :---: | :---: | :---: | :-------: | :-----------: |
| Random | 1 | -2.37 | 31.55 | 2.86 | 31.73 | 32.03 | **33.60** |
| Random | 3 | -2.18 | 31.48 | 23.67 | 17.71 | **49.63** | 45.13 |
| Random | 5 | 32.00 | 31.48 | 41.46 | 31.75 | 39.48 | **64.97** |
| Replay | 1 | -2.37 | 67.73 | 9.76 | 61.42 | 72.74 | **80.20** |
| Replay | 3 | -2.18 | 69.82 | 30.11 | 64.33 | 79.97 | **96.00** |
| Replay | 5 | 32.00 | 62.93 | 24.61 | 62.72 | 82.17 | **97.65** |
| Medium | 1 | -2.37 | 87.67 | -0.08 | 86.24 | **92.07** | 82.96 |
| Medium | 3 | -2.18 | 81.89 | 8.95 | 85.78 | 81.70 | **89.33** |
| Medium | 5 | 32.00 | 88.83 | -1.95 | 86.52 | 85.85 | **99.11** |

| Suboptimal traj. | # expert traj. | BCE | BCU | DWBC | CLARE | DemoDICE | iLID (ours) |
| :--------------: | :------------: | :---: | :---: | :---: | :-------: | :------: | :-----------: |
| Random | 1 | -0.32 | 2.25 | 0.89 | -0.25 | 2.24 | **2.43** |
| Random | 3 | 5.40 | 2.25 | 4.33 | 2.75 | 2.22 | **5.77** |
| Random | 5 | 4.05 | 2.25 | 2.16 | 4.74 | 2.23 | **5.86** |
| Replay | 1 | -0.32 | 23.56 | 9.15 | 31.07 | 30.96 | **34.49** |
| Replay | 3 | 5.40 | 28.89 | 12.72 | **35.77** | 19.62 | 34.66 |
| Replay | 5 | 4.05 | 35.10 | 19.33 | 37.05 | 28.25 | **38.63** |
| Medium | 1 | -0.32 | 42.60 | 9.31 | **42.56** | 41.94 | 42.47 |
| Medium | 3 | 5.40 | 42.86 | 5.90 | 42.48 | 39.88 | **43.10** |
| Medium | 5 | 4.05 | 42.74 | 7.24 | 42.38 | 41.50 | **43.38** |

**Q2: ...a better metric might be a comparison of maximum performance attained by different algorithms...** Thank you for pointing this out; we will include the information (the average of the highest 10 scores) in the appendix to assess the potential of the proposed algorithm.
Combined with the average returns of the policy at the last iteration of training, we can provide a more comprehensive understanding of the algorithm's performance, in terms of both stability and potential. **Q3: The ablations could be shown on a few more tasks in the appendix.** We conduct complementary ablation studies across four MuJoCo environments. As showcased in Figure 3 within the attached PDF, the results further corroborate the significance of the constrained BC procedure. It not only accelerates training but also contributes to the proposed algorithm's stability. **Q4: The paper is missing a limitations section.** The main limitation of this paper lies in the requirement of state similarity between the suboptimal data and the expert data. However, it is worth noting that in the context of offline IL, when the states within suboptimal data deviate significantly from the given expert data, it is extremely challenging to assess the values of suboptimal behaviors with no prior information. Another limitation of this work is the lack of theoretical guarantees in general MDPs, which we leave for future work. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for the rebuttal and additional experiments. All my concerns have been addressed and I am raising my score by a point. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you so much for further reviewing our response and increasing the score!
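The "average of the highest 10 scores" metric promised in the reply to Q2 above is simple to compute from an evaluation curve; a minimal sketch with hypothetical data (not the authors' code):

```python
# Average of the k highest evaluation scores over training checkpoints,
# intended to capture an algorithm's peak (rather than final) performance.

def top_k_average(scores, k=10):
    top = sorted(scores, reverse=True)[:k]
    return sum(top) / len(top)

curve = [float(x) for x in range(20)]  # hypothetical per-checkpoint returns 0..19
print(top_k_average(curve))  # 14.5 (mean of 10..19)
```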
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for the insightful and constructive feedback! We also thank the reviewers for their appreciation of the novelty (Reviewer K4fk), contribution (Reviewer Y7ma), and writing (Reviewers b1iR and 71T4) of our work. Per the reviewers' suggestions, we have: (1) carried out new experiments using suboptimal demonstrations that have different qualities and are not seeded with expert data, (2) added more ablation studies, (3) sharpened the motivating theoretical results by removing the assumption on the uniform distribution, and (4) replied to each reviewer's comment point by point. New comments on these responses are very welcome! Pdf: /pdf/1ce520942cd8c3483088dadf4f9c51ef9838d893.pdf
NeurIPS_2023_submissions_huggingface
2023
Characterizing the Impacts of Semi-supervised Learning for Weak Supervision
Accept (poster)
Summary: The authors propose a design space to analyze the effects of WS + SSL, how to combine them, and in which regimes SSL helps WS. The paper's biggest conclusion is: training using unlabeled data in SSL is mainly unhelpful, except when the main issue is bad labeling function (LF) accuracy. The design space is based on three major axes: thresholding, SSL technique, and re-labeling. Strengths:
- The paper presents a clear framework to analyze the effect of using SSL on WS
- The framework provides a design space that is clear and intuitive, based on three axes that help to disentangle the effects of WS and SSL
- Important results are presented on when SSL is or is not helpful for improving WS
Weaknesses:
- Perhaps the analysis can also be conducted on Snorkel's competitive alternatives (e.g., FlyingSquid https://github.com/HazyResearch/flyingsquid)
- While discussing label noise, it's important to clarify that WS requires labeling functions to have better-than-random accuracy (ACC > 50%)
- An important ablation and piece of information that needs to be included is LF accuracy.
- For the NLP task that uses embeddings, it is interesting to also analyze how SSL interacts with smoothness-based methods to improve LF coverage: LIGER https://arxiv.org/pdf/2203.13270.pdf
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations are addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
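For concreteness, the "thresholding" axis of the design space mentioned in the summary (keep only confidently weakly labeled examples; hand the rest to SSL as unlabeled data) could look roughly like the following sketch; all names are hypothetical and this is not the paper's implementation:

```python
# Split weakly labeled data by label-model confidence: examples above the
# cutoff keep their hard label; the rest become unlabeled data for SSL.

def threshold_split(soft_labels, tau=0.8):
    """soft_labels: per-example class-probability lists from a label model."""
    labeled, unlabeled = [], []
    for i, probs in enumerate(soft_labels):
        conf = max(probs)
        if conf >= tau:
            labeled.append((i, probs.index(conf)))  # (example index, hard label)
        else:
            unlabeled.append(i)
    return labeled, unlabeled

soft = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]
print(threshold_split(soft))  # ([(0, 0), (2, 1)], [1])
```

The remaining two axes would then decide which SSL technique consumes the unlabeled split and whether its pseudo-labels overwrite the weak labels (re-labeling).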
Rebuttal 1: Rebuttal: Thanks for your time and helpful feedback! We see your comments as covering two main points, which we will respond to below: **(W1) Testing on more label models (i.e., FlyingSquid, LIGER)** **FlyingSquid (FS).** For our experiments, we chose label models (i.e., Snorkel/MajorityVoting) by looking at the results from the WRENCH paper and picking the ones which led to the strongest performance for Vanilla end model training. For FS, we saw its performance was similar in most cases, while considerably worse on several tasks (e.g., Spouse, Chemprot, Trec); hence we did not prioritize it. Of course, the implicit assumptions here are that: (1) Vanilla performance is also an indicator of performance once our design space is applied on top; (2) our high-level conclusions should translate over across different LMs. To your point, it could be worth exploring assumptions (1)-(2) further. As a preliminary experiment, we ran both Vanilla end model training and the method we identified as being the best across datasets (in Sec 4.2) on top of FS labels. As shown in Table 2 in the pdf (attached to the global response), the results for FS and Snorkel are extremely similar on Imdb and Yelp. On most other datasets, Snorkel’s better performance for Vanilla translates over to better performance once our method is applied (which still improves upon Vanilla for both Snorkel/FS). The only exception is on Youtube, where FS overtakes Snorkel once our method is applied (though within fairly wide error bars). Of course, more comprehensive verification of (1)-(2) would require running many more methods from our design space on FS. Nonetheless, we believe these results provide some positive initial signal about our assumptions. **LIGER.** We agree that it would be interesting to apply our analyses on top of this approach given that it directly expands the coverage of LFs.
Based on our results, we’d suspect that applying SSL on top of LIGER’s expanded LFs is likely to make less of an impact than on the labels produced by the raw LFs, but it would be nice to verify this! While we have not yet been able to try such experiments, we would be happy to try including it in future revisions. **(W2) Discussion/Ablation of Label Noise** *“The paper's biggest conclusion is: training using unlabeled data in SSL is mainly unhelpful, except when the main issue is bad labeling function (LF) accuracy”* Just to be on the same page here, we actually see the takeaway as being the other way around: “when the main issue is bad LF accuracy (as opposed to incomplete coverage)”, we find that SSL is less likely to be helpful. Namely, our results in Sec. 4.5 show that on standard WS benchmarks (where SSL is not too helpful), label noise is the main issue: removing label noise and training only on covered examples allows us to recover the performance of fully supervised learning. However, in settings with lower coverage, label noise is no longer the only significant issue and SSL is indeed more helpful. *“It’s important to clarify that WS requires label function to have better accuracy than random”* Thanks for pointing this out. Especially for theoretical treatments of WS, assumptions about the informativeness of LFs are important and we’d be happy to discuss this more in future revisions. For Snorkel at least, the assumption is that LFs are **on average** better than random. Other approaches may have stronger assumptions though, and we will try to clarify this! *“An important ablation and information need also to be included is LF accuracy.”* We agree that the precisions of LFs are important to consider. In the current manuscript, we do include the precision of the aggregated weak labels (i.e. the label model outputs) for each of the datasets and their full LF sets.
In future revisions, we can add this information for each of the reduced coverage setups as well as some summary stats for the individual LFs (similar to Table 5 in the WRENCH paper). One experiment we’ve started to run since the submission is to try subsampling the least accurate LFs from a candidate pool instead of the most accurate. Unsurprisingly, this results in worse (and arguably less realistic) training sets but does allow us to compare LF sets of similar coverage but different precision levels. From the preliminary results in Figure 1 of the attached pdf, we observe that on Semeval and Massive18, we get qualitatively similar results to the ones we saw before: Thresh+SSL meaningfully outperforms Thresh+Re-label at lower coverage levels but less so at higher ones. In future revisions, we’re happy to extend such an analysis to more datasets! --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! My initial concerns have been addressed, and I have changed my initial review score.
Summary: This paper studies the impacts of semi-supervised learning (SSL) for programmatic weak supervision (WS) in a systematic way. The authors define a modular design space with three key methodological considerations (thresholding, SSL technique, and re-labeling) to study the use of SSL for WS. Their results show that fairly simple methods from their design space can match the performance of complex state-of-the-art methods. Also, SSL is not necessary to obtain the best performance on most WS benchmarks, but is more effective when the end model is small or WS can only label a small portion of training examples. Strengths: 1. The paper provides a systematic design space to study WS. Most of the previous baselines can be placed into the design space, which makes a thorough analysis of existing WS methods possible. Also, the paper conducts extensive experiments on multiple design choices and a wide range of datasets - the benchmark results can be beneficial to the community. 2. The best method discovered by the design space can match the performance of more complex methods. This may interest practitioners, since they can use the proposed design space to find effective WS methods instead of doing a lot of heuristic hand-crafting. Weaknesses: 1. The findings of the paper are not surprising. Though the authors claim that SSL is not necessary, the experimental results show that SSL can still bring benefits on many datasets - I cannot see why the paper emphasizes SSL is not necessary if it still helps. Another finding of the paper is that SSL is effective when WS can only label a small portion of training examples. This is easy to foresee as SSL can further exploit a large number of unlabeled examples. 2. While this is an analysis paper, all the analyses are empirical and there is no theoretical analysis. Therefore, the conclusions are drawn from the experimental results on the benchmark datasets.
It's unclear whether these datasets are representative and extensive enough to cover all cases. One claim made by this work is that "SSL is not necessary to obtain the best performance on most WS benchmarks". A possible reason for this is that the benchmark datasets are rather easy - according to Table 2, most of the datasets are binary classification; for those with more classes, the coverage of WS labeling is very high. I'm skeptical whether this is true for datasets in real applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper involves a lot of acronyms, which makes it occasionally hard to follow. For "GT" discussed in Section 4.5, does it refer to "Ground Truth"? I didn't find a definition of this acronym in the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have already discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and helpful feedback! To respond to some of your questions and comments: **(W1a)** *“The experimental results show that SSL can still bring benefits on many datasets - I cannot see why the paper emphasizes SSL is not necessary if it still helps.”* We’d like to highlight that on the standard benchmarks, improvements enjoyed by additionally applying SSL mostly do not exceed the stdev error bars when compared to Thresh+Re-label. This is surprising, as we note, because previous works test on these benchmarks and suggest that SSL is a central reason for improved performance. For instance, the WRENCH paper posits (in their Sec. 7) that *“the superiority of COSINE suggests that uncovered data should also be used in training an end model; this inspires [the exploration of] new DM training strategies combined with SSL techniques”.* That being said, in Sec 4.5, our point is exactly that SSL can often be more impactful in scenarios that differ from these standard benchmarks. Though we tried to be explicit about these important nuances in this manuscript, we will work to make them even more upfront in future revisions! **(W1b)** *“Another finding of the paper is that SSL is effective when WS can only label a small portion of training examples. This is easy to foresee as SSL can further exploit a large number of unlabeled examples.”* First, we agree that this is not overly surprising, but we still view this as an important practical result given the increasing ubiquity of SSL approaches, e.g. 
we hope this can provide some guidance for questions such as: “at what coverage levels is SSL useful?” and “is it better to learn from fewer (more-accurate) LFs + SSL or from a larger overall set of LFs?” We also note that there are at least two more non-obvious conclusions when it comes to *high-coverage settings*: (i) even when we allow for thresholding which yields smaller datasets (i.e., more unlabeled data), SSL still helps minimally; and (ii) given we are using programmatic WS, the unlabeled examples, despite their small number, may come from key parts of the input space that are systematically abstained on by LFs; one might expect that SSL could help address this biased coverage of the LFs, but this also does not happen. **(W3)** *“A possible reason for this is that the benchmark datasets are rather easy - according to Table 2, most of the datasets are binary classification; for those with more classes, the coverage of WS labeling is very high.”* We agree with your assessment of the commonly used datasets in Table 2. Indeed, this skepticism is precisely what motivated us to study WS settings that notably differ from previous benchmarks in Sec 4.5. In particular, we chose to look at: * LF sets that span a greater range of coverage levels (on the same dataset) * Tasks that have considerably larger label spaces (i.e., Massive18, Banking77) * Tabular tasks instead of text-based ones While we do not claim that these factors now capture all aspects of real applications, our conclusions from these experiments are already that the impact of SSL can indeed change, as you allude to. **(Q1)** Apologies for any confusion here! GT indeed corresponds to ground-truth in Sec 4.5, which we should definitely take care to define explicitly. We also appreciate the general feedback about acronyms a lot and plan to more carefully reduce our reliance on them in future revisions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply!
I agree with the authors' response on the emphasis of Sec 4.5, which provides many interesting insights on when SSL can be beneficial. I do think it would be beneficial to make this part more upfront in revision. However, I still hold my argument that some findings of the paper are not surprising. I prefer to keep my rating unchanged, as it already indicates an inclination toward acceptance.
Summary: This paper empirically studies the interface between (programmatic) weak supervision (WS) and semi-supervised learning (SSL). Several existing works have tried to leverage SSL techniques and other tricks to improve the performance of weakly supervised learning, since the two settings are fairly similar on the surface. This paper provides a taxonomy of the key techniques from this line of work, arguing that most methods differ along three main axes: thresholding, SSL technique, and re-labeling. The paper empirically explores combinations across these three axes to study if and when the SSL techniques themselves really improve WS. The best single method of the design space matches or exceeds existing methods from the literature on 5/7 benchmark datasets, indicating that the design space is large enough to be representative of the current literature. Surprisingly, though, the ablation results indicate that the SSL techniques themselves are not really the main cause of good performance in the regime of the most common benchmark datasets (relatively high coverage of weak rules). Instead, thresholding and re-labeling are shown to be responsible for most of the gain of these methods. The SSL techniques are shown to help more when the coverage is (artificially) reduced. Strengths: - Well-written empirical study that addresses an important question for the WS field, and could influence which directions are explored by future research. - The key empirical takeaway, that existing SSL methods are mainly helpful in the low-coverage regime, can drive more research in both weak supervision and semi-supervised learning. Many WS papers have been focused on using SSL techniques to improve performance, but the empirical results from this careful study indicate that the main benefits usually come from other tricks like thresholding.
This could spur more research into why these other techniques work well and also into new SSL methods that use unlabeled data to improve learning from structured label noise. - The design space of combinations is empirically shown to be powerful, exceeding or matching the state-of-the-art results on benchmark datasets. - The WRENCH benchmark datasets tend to have fairly high coverage rates (in my opinion this is one of the few weaknesses of WRENCH). In response, this paper details a carefully thought-out way of artificially inducing lower coverage to study how WS methods perform in this empirically relevant setting, and also provides two new benchmark datasets that have lower coverage rates. Weaknesses: - Fairly large validation sets are used to select best DM from each training iteration and also to select best combination from the design space. If we have access to (say) 300 labeled examples for validation, can't we use 100 of them for training as well? It's important to note that the authors are following most if not all other works in the WS field with this choice, and the authors appropriately cite works that mix the supervised and weakly-supervised settings. But it would be interesting to see how many of the results remain the same when much smaller (e.g., something like num_classes * 10) validation sets are used. I've found that more complicated methods like DM as LF are less robust in this setting, so that may affect some of the results (but probably not the key takeaway). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors appropriately discuss the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and helpful feedback, we’re glad you appreciated the paper! Regarding your point about validation sets: like you said, we chose to follow the standard setups established by other works, as our main questions were about assessing various trends we saw from the WS literature. However, we agree that these default validation sizes are probably large and it would be valuable to explore the importance of validation data as an additional factor for which methods from our design space succeed/fail. One natural hypothesis would be that as less validation data is used, performance would decrease more sharply for methods that involve more components, i.e., especially the SSL methods since they tend to introduce the most additional hyperparameters. As such, it may be even more difficult for SSL to be usefully applied.
Summary: Getting labeled training data is a bottleneck in the development of machine learning pipelines. Two families of methods that address this are weak supervision (WS), which aggregates multiple sources to produce weak labels, and semi-supervised learning (SSL), which combines a (weakly) labeled dataset and an unlabeled dataset. While there have been works that study how to utilize ideas from WS and SSL to best produce high quality models from unlabeled data, the ways they combine these ideas has been fairly ad-hoc and unstructured. This paper conducts a systematic analysis of how to merge ideas from WS and SSL, breaking it down into three design choices: thresholding (what weakly labeled data to revert to unlabeled), the SSL technique for using unlabeled data, and re-labeling, which creates a self-loop of updating weak labels. The authors find that many existing methods that merge both WS and SSL can be expressed in this design framework, and their design space search finds combinations of strategies (thresholding/ssl/re-labeling) that match existing methods' performance. They study if simpler techniques, such as thresholding based on model confidence, can match more complex techniques. They also uncover that in many cases, the SSL step does not help that much unless there is a significant performance gap due to poor and biased labeling function coverage. Strengths: Originality: - This structured survey of ways to use SSL for WS is very novel. Quality: - Paper is sound with extensive empirical evaluation. The authors seek to thoroughly understand why they observe that SSL isn't helpful, and design compelling experiments to validate their hypothesis. Significance: - I think such a paper is very valuable to those who work on WS/SSL. 
Personally, I have also thought about how to best use SSL for WS and have not been convinced why one way is better than the other - even trying to prove this theoretically has been technically challenging, as these methods all ultimately use the same amount of information (some labeling functions and an unlabeled dataset; maybe some validation data, etc.). And so perhaps the solution here is not to drill into finding a better method but to systematically view these methods as combinations of simpler design choices. This perspective will also be highly useful for practitioners. - Random thought: These findings on unlabeled data being unnecessary are potentially connected to the data pruning literature, where one can discard low-confidence points and still have a good model; however, there are cases when these unlabeled points are especially hard and thus worth training on --- maybe this is one perspective to understand why some datasets have a gap between GT-cov and GT-full? Weaknesses: Clarity: - Descriptions of methods/baselines were a bit unclear and hard to follow at times. The paper describes these methods only via text and fig 1. It would be helpful to put in the appendix some formalization of these if appropriate; for instance, the two paragraphs on re-labeling L at line 149 took a bit of time to understand (stuff around decoupling SSL and re-labeling, dynamic vs one-time scheduling). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Can this framework be applied outside of weak supervision for generic noisy soft labels? - It seems strange that Massive18, Banking77 and the tabular datasets have this gap between (1) and (2) but WRENCH datasets don't. Any idea why? - In figure 4, why are you comparing Thresh+SSL and Thresh+Re-label? Shouldn't we be studying the marginal benefit of adding SSL? - See above for suggestions around clarity. Rebuttal acknowledgement: I have read and acknowledged the author rebuttal. It has addressed my concerns above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time and helpful feedback, we’re glad you appreciated the paper! To respond to your questions and comments: **Clarity.** We really appreciate your feedback here. In future revisions, we will take care to make these descriptions more explicit in the Appendix. **Connections to example difficulty.** Interesting thought! Intuitively, using (human-written) labeling functions could risk bias towards labeling “easy points” instead of “hard points” given that they often take the form of simple rules. Indeed, our results in Appendix E more generally suggest that the gaps between GT (Cov) and GT (Full) are due to coverage bias, i.e. distributional differences between what labeling functions can label and the full data distribution. My guess is that in some cases where the GT (Cov)-GT (Full) gap exists, coverage bias could be closely tied to difficulty, e.g., LFs fail to cover examples close to the ideal decision boundary, leading to a sub-optimal model. There may be other cases though where this link does not hold; perhaps the uncovered examples are not difficult but simply poorly represented by covered ones (i.e., they are “far away” from both the ideal decision boundary and all covered examples). In future work, it would be interesting to investigate/disentangle these scenarios further! **Questions** **(1)** Correct, many of the techniques in our design space could be ported over to the setting where there are generic noisy labels. The exception would be anything that involves the label model, such as in the version of self-training where the end model is used as an LF. In the settings typically studied for learning with generic noisy labels, one also isn’t usually given estimated soft-labels to start with, so some other technique would have to be applied to get confidence estimates (i.e., if one wants to use confidence-based thresholding). 
**(2)** For Massive18/Banking77, one perspective on why a gap exists is that these are more complex tasks (at least in terms of the label cardinality). The datasets with relatively small label spaces in WRENCH (e.g., Yelp, Imdb, AGNews) are the ones that show smaller gaps between (1) and (2), whereas datasets like Chemprot (|Y|=10) and Semeval (|Y|=9) do show larger gaps. As for the tabular datasets, we suspect that not having a pre-trained model might be an explanation. For text datasets, it may be that pre-trained models/representations can help “smooth” over the gaps in LF-coverage. **(3)** The main conclusion we wanted to show with this set of plots is that using SSL can add value in different scenarios (e.g., as coverage levels decrease), contrary to what happens on the original WS benchmarks. For this, we saw using simpler Thresh+SSL methods as sufficient though it is certainly possible that Thresh+SSL+Re-label may further improve upon Thresh+SSL. While we don’t expect the high-level conclusions to change, the coverage range in which SSL is helpful could expand as a result. We plan to include such experiments in future revisions. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I think this is great work and I hope it gets accepted.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their time and thoughtful feedback! We will respond to each review individually, using this "global response" space to upload our pdf containing the Tables/Figures for new results (which are referenced in our responses). Pdf: /pdf/eb8e35fc9f7be89cf336069b011c26c0907749bd.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a systematic study of how useful SSL is in weak supervision. Specifically, the authors analyze SSL and weak supervision (WS) along three axes and explore various approaches along each axis. First, the paper analyzes what to consider as `unlabeled` data for SSL training. The second axis refers to what SSL method to utilize. Finally, the paper analyzes various ways of refining the WS weak labels through relabeling. In short, this paper is a large-scale analysis of SSL and WS which tries to understand where the improvements of SSL and WS reported in prior work stem from. All methods are thoroughly tested on 8 classification WS benchmarks. Strengths: - The presentation of the paper is great. The study is thoroughly detailed and easy to understand. - The motivation is also excellent. Shedding light on how to achieve an effective WS training setup and analyzing its interaction with SSL is much needed. - The paper performs the analysis on as many as 8 datasets and performs a thorough ablation study and analyses. - There are some very interesting findings, such as the fact that SSL does not help too much in various setups. In a way it makes sense that a high coverage for weak labeling means that the unlabeled data may not contain enough information for the model to learn. Weaknesses: - I believe the search space considered in the paper could be more comprehensive. I appreciate that the authors acknowledge this in the paper (Limitations); however, given that the paper is effectively an in-depth analysis of various ways of combining WS with SSL, I believe that these analyses are needed. Specifically, besides soft label renormalization and contrastive learning, the paper mentions that augmentations are less straightforward for NLP tasks. However, prior work such as UDA [1], MixText [2] or AUM-ST [3] have effectively used augmentations in SSL setups. I am just a bit concerned that the strength of the SSL methods is not adequate.
Given the aim of the paper, these methods should have been taken into account. - Along the same lines, the novelty of the paper seems a bit limited. Once again, the types of in-depth analyses this study conducts are much needed, but I believe the scale of the study could be a bit larger. [1] - Unsupervised Data Augmentation for Consistency Training [2] - MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification [3] - Leveraging Training Dynamics and Self-Training for Text Classification Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Given the LLM craze and their outstanding few-shot capabilities, I would be curious to see what this analysis would look like when having a strong LLM weak labeler. What do you think? - I think reducing coverage by downsampling is not adequate. The downsampled set will still have the same distribution, whereas real-world limited coverage would imply different distributions. Can you comment a bit more on how you did the downsampling? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discussed the limitations at length. There is no negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and helpful feedback! To respond to your questions and comments: **Scale of study / Stronger SSL methods.** We agree that it is possible that more sophisticated SSL techniques could lead to a conclusion that SSL is more useful. Based on your feedback, we implemented and ran UDA (using En-De backtranslations) on three of the text classification benchmarks (i.e., Youtube, Trec, AGNews). Overall, the results shown in Table 1 in the pdf (attached to the global response) do not change the conclusions from our paper: using UDA does not improve upon the best methods we previously found, even when combining it with Thresholding + Re-labeling. We’re happy to expand these results to other tasks and will incorporate them into future revisions! While UDA can outperform other SSL methods on traditional SSL benchmarks, we suspect that our results here are because the SSL settings induced by WS are quite different in that for labeled set L and unlabeled set U: (i) the |L| : |U| ratio is much higher; (ii) L and U are not i.i.d.; (iii) L contains (feature-dependent) noisy labels. Our analysis in Figure 3 (from the submission) perhaps adds another perspective here, showing that the unlabeled data in these tasks may simply not be necessary for learning a strong model: assuming access to ground-truth labels, the final performance changes little whether one trains on the full training set or just the covered examples. **LLMs in WS.** We note first that there are many settings in which few-shot learning with LLMs may not be as effective. For example, when the task involves private data and/or narrow domain knowledge, LLMs are less likely to have been trained on sufficiently similar data or to be able to capture all the relevant nuances of the problem (e.g. a company's specific policies/ontology for classifying customer intents).
Hence, we believe our focus on weak supervision settings where the supervision comes primarily from human-written labeling functions remains relevant to real-world practice. Nonetheless, we see using LLMs in WS as an interesting setting. One nuance for our work would be that using an LLM directly as a few-shot labeler (by default) does not leave any examples unlabeled. Thus, applying SSL would require some additional thresholding/abstaining mechanism. Meanwhile, LLMs may also allow for several new techniques to perform thresholding (e.g. via prompting an LLM to provide reasoning/confidence along with individual labels) as well as SSL (e.g. generating augmentations for consistency regularization). These would all be interesting to look into in future work! **Clarification of downsampling.** In our programmatic supervision setups, the distribution is indeed biased according to where the labeling functions label. We totally agree that downsampling a high-coverage WS training set at a *per-data point level* would not adequately capture the biases of actual low-coverage settings. That is why we instead downsample at a *per-LF level*, which reflects the real-world process that more bias can be incurred when labeling fewer points with smaller LF sets. We provided more details about the specific subsampling procedures in Appendix D, but are happy to clarify any points in further back-and-forths. --- Rebuttal Comment 1.1: Comment: Thank you for the response, clarifications, and for providing your view on the applicability of LLMs in WS! I still believe additional SSL comparisons could further improve the paper. However, given the comparison provided and after reading other reviewers' comments, I think this is enough to warrant acceptance.
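The per-LF downsampling described in this rebuttal can be sketched with a toy example. This is an illustrative sketch, not the authors' code; the abstain value `-1` and the function names are assumptions. It shows why dropping whole LFs, rather than random data points, shrinks coverage while keeping the LF-induced bias intact:

```python
import random

def downsample_lfs(lf_votes, k, seed=0):
    """Keep only k of the labeling functions; coverage shrinks
    according to which LFs remain, preserving their labeling bias."""
    rng = random.Random(seed)
    kept = rng.sample(range(len(lf_votes)), k)
    return [lf_votes[j] for j in kept]

def coverage(lf_votes, n_examples, abstain=-1):
    """Fraction of examples labeled by at least one LF."""
    covered = set()
    for votes in lf_votes:
        covered.update(i for i, v in enumerate(votes) if v != abstain)
    return len(covered) / n_examples
```

In contrast, a per-data-point downsampling of the covered set would leave the distribution of labeled examples unchanged, which is exactly the inadequacy the reviewer raises.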
null
null
null
null
null
null
NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
Accept (poster)
Summary: This paper proposes a knowledge distillation approach for defending against adversarial attacks in multi-exit networks. They have two different objectives: (i) leveraging self-distillation to improve adversarial robustness, (ii) reducing adversarial transferability among the submodules of the network. Since multi-exit networks have submodules that are correlated with each other, distilling knowledge from the output of clean data at the last exit to all submodules increases the chances of adversarial transferability. To avoid this, this paper proposes a two-fold approach: (i) Neighbor knowledge distillation (NKD): generates the teacher prediction at exit i by averaging the predictions of clean data from exits i-1, i and i+1 and distills it to the adversarial predictions at exit i; (ii) Exit-wise orthogonal knowledge distillation (EOKD): the output of clean data at exit i is distilled to the adversarial example at exit i. An orthogonal labeling operation on clean predictions makes the teacher predictions orthogonal across all exits. NKD helps create high-quality teacher predictions, improving the adversarial robustness of multi-exit networks. At the same time, NKD creates different teacher predictions for each exit, reducing the risk of adversarial transferability. EOKD further reduces the chances of adversarial transferability. Strengths: 1. The paper is well-written. The approach is well explained and the claims have been substantiated with results. 2. NEO-KD achieves the best adversarial accuracy against max-average and average attacks in all budget setups. 3. The paper successfully demonstrates a reduction in adversarial transferability among exits in multi-exit networks Weaknesses: 1. In the anytime prediction setup, NEO-KD shows low performance at the later exits for small datasets like MNIST, CIFAR-10 2. NEO-KD shows low performance at early exits for mid-scale datasets like TinyImageNet.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please demonstrate results with ViT-Tiny or ViT-Small. Please use a large-scale dataset like ImageNet to show the efficacy of the method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Large-scale demonstration is missing, given that it is an empirical paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the time and effort, and for providing helpful comments that are also very clear. Below, we provide responses to the reviewer's comments. ### **NEO-KD shows low performance at specific exits in some cases.** We agree with the reviewer that on some datasets, NEO-KD occasionally does not achieve the best performance at specific exits. However, we respectfully disagree that NEO-KD achieves low performance: NEO-KD performs the best in all 7 exits for CIFAR-100 considering top-1 accuracy, and performs the best in 4 out of 5 exits on Tiny-ImageNet. For small datasets (e.g., CIFAR-10), NEO-KD only loses against the baseline in 0 or 1 exit. Even in these exits, NEO-KD still achieves the second-best performance most of the time, where the gap with the best performance is marginal. NEO-KD also achieves the best average accuracy in all cases. Hence, when a system designer has to decide which algorithm to use for constructing a robust multi-exit neural network, it is easy to answer that the proposed NEO-KD is the best. Overall, we believe that this strength of the proposed scheme, as well as the fact that NEO-KD is the very first work to strategically integrate (i) _multi-exit networks_, (ii) _self-distillation_, and (iii) _adversarial training_, deserves merit in both the multi-exit network and adversarial training communities. ### **Additional experiments.** We appreciate the suggestion. To comply with the reviewer's comment, we conducted additional experiments using ImageNet with 1000 object classes. Due to the strict timeline, we considered ImageNet(100), where 100 samples are selected from each class of ImageNet to construct the training set. Considering that there are 1000 classes in ImageNet, the number of training samples we use is 100,000. This dataset has been adopted in much of the multi-exit network literature to prove the effectiveness of an algorithm [ICCV'19], [AAAI'21].
The results in the table below show that the proposed NEO-KD performs the best on the larger-scale dataset with 1000 classes, further confirming its advantage.

| Exit | 1 | 2 | 3 | 4 | 5 | Average |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| Adv. w/o Distill | 18.54% | 24.66% | 27.63% | 28.28% | 29.71% | 25.76% |
| NEO-KD (ours) | 21.86% | 28.04% | 31.15% | 31.93% | 33.87% | 29.37% |

We also appreciate the reviewer for suggesting experiments with ViT models. However, due to the strict timeline for implementing and training the baselines as well as our scheme with ViT models, we were not able to provide the corresponding results. We agree that there exist some works that aim to combine multi-exit networks and transformers, which is itself a challenging research topic in the multi-exit network community. However, since our solution integrates multi-exit networks, self-distillation, and adversarial training altogether (which has not been considered in the literature), adding another transformer-side dimension beyond these components makes the problem setup extremely complicated and causes significant resource issues, compared to the case of only combining a transformer with a multi-exit network (with no adversarial training and no self-distillation). Please also consider that we already used the most commonly adopted model in the multi-exit network literature, i.e., MSDNet. Overall, we have now considered 5 datasets -- MNIST, CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet -- and trained SmallCNN and 3 different variations of MSDNet (with 3, 5, 7 exits). [ICCV'19] Phuong et al., ``Distillation-Based Training for Multi-Exit Architectures,'' ICCV 2019. [AAAI'21] Wang et al., ``Harmonized Dense Knowledge Distillation Training for Multi-Exit Architectures,'' AAAI 2021. Again, thank you for your time and effort in providing helpful comments to improve our paper. We would appreciate further opportunities to answer any remaining concerns you might have.
--- Rebuttal 2: Title: final discussions Comment: Dear Reviewer, As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion. Please note we take note of unresponsive reviewers. Best regards, \ SAC --- Rebuttal 3: Title: Thanks Comment: Thanks to the authors for their detailed response. I have one follow-up question: is the method applicable in the case of a hybrid distillation framework like the one shown in [1]? I would encourage the authors to provide some insight and possibly results on that. [1] Analyzing the confidentiality of undistillable teachers in knowledge distillation, NeurIPS 2021. --- Rebuttal Comment 3.1: Comment: Thanks for the response. The paper you suggested proposes a hybrid distillation methodology as follows: (i) First, a skeptical student strategy is proposed that adopts intermediate shallow classifiers: this prevents information leakage from the teacher to the student. (ii) Secondly, the student adopts self-distillation to improve the learnability of the student. We need to highlight that our scheme is a self-distillation-based strategy that does not have an external teacher network. The main goal of [1] is to prevent the model stealer (i.e., the student) from extracting the knowledge of the teacher: the technical details of [1] were developed to achieve this. However, since everything is conducted by a single multi-exit network in our case based on self-distillation during adversarial training, we actually do not need to worry about data leakage from the teacher to another student.
Summary: This paper proposes a knowledge-distillation-based method to improve the adversarial robustness of multi-exit neural networks. Extensive experiments are conducted to show the effectiveness of the proposed method. Strengths: 1. Extensive experiments are conducted to show the effectiveness of the proposed method. 2. This method is a novel combination of previous methods. 3. The paper is well-organized and easy to follow. 4. Technical details are provided in the supplementary materials. 5. An ablation study is provided. Weaknesses: 1. This method aims to improve the adversarial robustness of DNNs, but the paper lacks a comparison with some general defense methods, e.g., TEAT [1]. 2. What is the advantage of distillation methods over other adversarial defense methods on this task? 3. "One challenge is that simply applying existing self-distillation techniques increases the adversarial transferability across different submodels, since the same knowledge from the last exit is distilled to all the other exits, increasing the dependency among different submodels in the network." I cannot find any support for this claim in the paper. 4. What is the difference between the proposed method and other related work? This paper lacks a detailed/formal comparison with other related methods. [1] Dong, Yinpeng, et al. "Exploring memorization in adversarial training." ICLR 2022. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. "Considering our distillation method, NEO-KD is currently not directly applicable to object detection." Can the authors briefly explain why it cannot be applied to object detection? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are clearly stated.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive comments and valuable feedback. Below, we provide answers to the comments raised by the reviewer. ### **Comparison with TEAT** The baselines in the main paper were generally adversarial defense methods designed for multi-exit networks. As the reviewer suggested, we conducted additional experiments with TEAT [ICLR'22] and compared it with our method. Since TEAT was originally designed for single-exit networks, we first adapted TEAT to the multi-exit network setting. Instead of the original TEAT, which generates adversarial examples considering the final output of the network, we modified TEAT to generate adversarial examples that maximize the average loss over all exits in the multi-exit network. Table R5 above shows the results using the max-average attack on the CIFAR-10/100 datasets. It can be seen that our NEO-KD, which is designed for multi-exit networks, achieves higher adversarial test accuracy compared to the TEAT variants (PGD-TE and TRADES-TE) designed for single-exit networks. The results highlight the necessity of developing adversarial defense techniques geared to multi-exit networks rather than adapting general defense methods used for single-exit networks. ### **Advantages of distillation-based approaches over adversarial defense methods** Our distillation-based approach for robust multi-exit networks has the following key advantages over conventional adversarial defense methods. First, it directly tackles the unique challenges that arise in multi-exit networks. Multi-exit networks are highly vulnerable to simple attacks (e.g., an attack targeting a single exit) since they have high dependencies between exits. Existing adversarial defense approaches cannot directly handle this unique problem of multi-exit networks, as they focus on single-exit networks. Even when these methods are adapted to multi-exit networks, they face the same challenges, as can also be seen in our results with TEAT above.
On the other hand, distillation-based methods provide a great platform to tackle this issue, as each exit can be distilled with different knowledge to reduce the dependencies among exits. Our NEO-KD strategically utilizes distillation based on neighbor ensembling, exit-wise distillation, and orthogonal distillation. Another advantage of the distillation approach for defending against adversarial attacks is its compatibility with other defense strategies. Specifically, our distillation-based scheme is orthogonal to existing defense methods, as the proposed distillation losses can be used with any classification loss (including adversarial training losses). As can be seen from the experimental results in the main paper, our NEO-KD combined with an existing adversarial loss (via the max-average attack [ICLR'20]) brings large performance gains, showing its good compatibility. ### **Do the prior self-distillation methods really increase the adversarial transferability?** This claim is supported by the results in Fig. 4 of our main paper. The average adversarial transferability of SKD (which adopts self-distillation) is 33.36%, significantly larger than the transferability of the baseline without distillation, which is 23.68%. This makes sense since the same knowledge from the last exit is distilled to all other exits, which results in increased dependencies among all submodels. On the other hand, our NEO-KD achieves the lowest adversarial transferability (20.12%) based on the proposed neighbor distillation and exit-wise orthogonal distillation, which are designed to reduce the transferability among submodels. This advantage of NEO-KD in adversarial transferability contributes to a better adversarial test accuracy compared to the baselines, as shown in the main paper.
### **More detailed comparison with related works** Since this paper focuses on the robustness of multi-exit neural networks, we mainly compared our NEO-KD approach with other self-distillation and defense methods used in the context of multi-exit networks, as discussed in the related work section (Section 2) of the main paper. Below, we provide a more detailed comparison with existing works relevant to conventional adversarial training. In the context of defense schemes against adversarial examples, conventional adversarial training methods [ICML'19, ICLR'22, CVPR'23] proposed for single-exit networks have mainly focused on creating new adversarial training losses restricted to single-exit networks. Therefore, adversarial transferability between exits has not been a key issue in prior works on single-exit networks. Recent methods targeting multi-exit networks [ICLR'20] have also proposed new adversarial training losses, but they still do not directly handle adversarial transferability, which is an inherent problem of multi-exit networks. Our NEO-KD is an orthogonal approach focusing on directly mitigating adversarial transferability across exits and can be combined with existing adversarial training losses. In summary, our work provides distinct advantages over prior works by focusing on mitigating adversarial transferability, a unique challenge of multi-exit networks, thereby improving their robustness. ### **Application to object detection tasks** When applying distillation to object detection in multi-exit networks, we need to distill predictions of box regression and classification from the teacher (last exit) to the students (early exits). However, since the box regression predictions can differ between the teacher and the students, a student may fail to detect an object box that the teacher can identify, making the approach difficult to apply to object detection.
We believe our work could pave the way for developing distillation-based adversarial training for multi-exit networks on more complex tasks, such as object detection. Again, thank you for the positive and helpful comments. We will make all of these points clearer in the revised manuscript.
Summary: This paper proposes a knowledge-distillation-based adversarial training method designed for multi-exit neural networks. The authors propose neighbor knowledge distillation to improve robustness against adversarial attacks, and exit-wise orthogonal knowledge distillation to reduce adversarial transferability across different submodels. Moreover, the proposed method is plug-and-play and can be used with prevailing training strategies for multi-exit networks. Strengths: 1. This paper brings self-distillation into adversarial training and makes a good combination with multi-exit networks. 2. The experimental results on several datasets and attacks demonstrate its superiority, and the considered scenarios are sufficient. Weaknesses: 1. Though the experimental results are extraordinary, the motivation and operation of EOKD confuse me. I did not really get the meaning of the orthogonal labeling operation $O(\cdot)$, and the paper does not clearly elaborate the relation between orthogonal labeling and adversarial transferability reduction. Moreover, I would like to know whether EOKD changes the behavior of the output of every exit on clean examples. 2. The settings of the baselines need to be discussed. The authors conduct experiments with distillation-based baselines (SKD and ARD) by utilizing the prediction of the last exit, instead of every exit, a random exit, or another specific exit. Is there any reason to choose the last exit in the baseline methods? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I hope the authors can clarify the motivation of EOKD and the orthogonal labeling operation more clearly. 2. Please explain the setting of the baselines, which only use the prediction of the last exit for knowledge distillation. 3. I would like to know the training speed of the proposed NEO-KD, as there are many extra self-distillation terms, which will complicate the computational graph.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As a new adversarial training method, this paper builds a comprehensive solution, and I think the paper has discussed its limitations sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the helpful comments, especially on the unclear aspects of the paper. In the response below, we would like to clarify all the ambiguous points raised by the reviewer. ### **Motivation of EOKD** Multi-exit networks are highly vulnerable to simple attacks (e.g., an adversarial attack targeting a single exit), as the submodels in the network are highly correlated by sharing some model parameters. The motivation of EOKD is to _reduce the dependencies among exits_ while taking advantage of knowledge distillation during adversarial training of multi-exit networks. To achieve this, EOKD is equipped with two key components: (i) exit-wise distillation and (ii) orthogonal distillation. The idea of the first component is to distill the knowledge of clean data to the output of the adversarial sample in an exit-wise manner, which intuitively reduces the interdependencies between exits by distilling different knowledge to each exit (a clear difference from prior self-distillation strategies [ICCV'19a], [ICCV'19b] that distill the same knowledge to all exits). The second component of EOKD is based on the orthogonal labeling operation $O(\cdot)$, which encourages the predictions for the non-ground-truth classes of individual exits to be mutually orthogonal by providing orthogonal soft labels. ### **Motivation of orthogonal labeling and its operation.** The EOKD loss function is defined as $EOKD_{i,j}=\ell(f_{\theta_i}(x^{adv}_j), O(f_{\theta_i}(x_j)))$ as in Eq. (6) of our main paper. To see how the orthogonal labeling operation $O(\cdot)$ works, consider a toy example with a 3-exit network (i.e., $L=3$) on a 4-way classification task (i.e., $C=4$). Let $[p^i_1, p^i_2, p^i_3, p^i_4]$ be the softmax output of the clean sample at the $i$-th exit, for $i=1,2,3$.
If class 1 is the ground-truth, the orthogonal labeling operation $O(\cdot)$ jointly produces the following results from each exit: $[\hat{p}^1_1, \hat{p}^1_2, 0, 0]$ from exit 1, $[\hat{p}^2_1, 0, \hat{p}^2_3, 0]$ from exit 2, $[\hat{p}^3_1, 0, 0, \hat{p}^3_4]$ from exit 3, where $\hat{p}$ indicates the normalized probability of $p$ so that the values in each vector sum to one. Here, it can be seen that except for the prediction $\hat{p}_1^i$ for class 1, the non-ground-truth predictions become orthogonal across different exits, with no overlap. This strategy enables the model to distill the ground-truth information to all exits while maximizing the distinction of the knowledge distilled to each exit, reducing the dependencies among exits (and thus reducing adversarial transferability). More generally, for each exit, $O(\cdot)$ randomly selects $\lfloor (C-1)/L \rfloor$ labels among the total of $C$ classes so that the selected labels are non-overlapping across different exits (except for the ground-truth label), where the probabilities of the selected labels are normalized to sum to one. This makes the predictions for the non-ground-truth classes orthogonal across all exits. By doing so, the essential knowledge of the ground-truth class is preserved while the knowledge of the other classes is orthogonally distilled, promoting the reduction of dependencies between exits. In Fig. 4 of the main paper, it can be seen that EOKD effectively reduces the adversarial transferability. This also leads to an improved adversarial test accuracy compared with the baselines, as can be seen in Tables 1, 2, 3, and 4 of our main manuscript. ### **Effect of EOKD on clean examples** Tables R1 & R2 above show the comparison between NKD and NEO-KD in terms of clean/adversarial test accuracy using CIFAR-10. As can be seen from the results, applying EOKD slightly compromises the clean accuracy but yields a large performance gain in adversarial test accuracy.
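A minimal NumPy sketch of this orthogonal labeling, following the description above (the function name and the random-split details are illustrative, not the paper's implementation):

```python
import numpy as np

def orthogonal_labeling(clean_probs, gt_class, seed=None):
    """Sketch of the orthogonal labeling operation O(.).

    For each of the L exits, keep the ground-truth probability plus
    floor((C-1)/L) non-ground-truth classes chosen disjointly across
    exits, zero out the rest, and renormalize so each vector sums to one.
    """
    rng = np.random.default_rng(seed)
    L, C = len(clean_probs), len(clean_probs[0])
    non_gt = [c for c in range(C) if c != gt_class]
    rng.shuffle(non_gt)                 # random, non-overlapping class split
    k = (C - 1) // L
    labels = []
    for i, p in enumerate(clean_probs):
        p = np.asarray(p, dtype=float)
        keep = {gt_class, *non_gt[i * k:(i + 1) * k]}
        out = np.where([c in keep for c in range(C)], p, 0.0)
        labels.append(out / out.sum())  # renormalize the kept probabilities
    return labels
```

With $L=3$ and $C=4$ as in the toy example, each exit keeps the ground-truth class plus exactly one non-ground-truth class, and the non-ground-truth supports are disjoint across exits.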
### **Why do baselines (SKD, ARD) use the last exit for distillation?** We would like to first clarify why the last exit is utilized in implementing the baselines (SKD [ICCV'19a], ARD [AAAI'20]), and then provide additional experimental results using different distillation strategies for the baselines (e.g., using another exit or an ensemble of exits instead of the last one). Existing self-distillation schemes [ICCV'19a, ICCV'19b] for multi-exit networks improve the performance on clean samples by self-distilling the knowledge of the last exit, as the last exit has the best prediction quality. Therefore, following the original philosophy, we also used the last exit in implementing the SKD baseline. Regarding ARD [AAAI'20], since it was proposed for single-exit networks, we likewise utilized the high-performing last exit when applying ARD to multi-exit networks. Nevertheless, as per the reviewer's suggestion, we performed additional experiments to consider comprehensive baselines using various exits for distillation. Table R3 above shows the results of SKD and ARD using a specific exit or an ensemble of all exits for distillation. The results show that our scheme consistently outperforms all of the baselines. ### **Training speed** In Table R4, we compare the training time of our scheme and the considered baselines for one epoch on the CIFAR-100 dataset. It is observed that our scheme requires 4% more time than basic adversarial training (i.e., Adv. w/o Distill), and only 1.6% more time than the other KD-based baselines (SKD, ARD). Considering that the main focus of multi-exit networks is latency during inference rather than training, this small additional computation during training is a reasonable cost for the improved adversarial test accuracy over the baselines. Again, thank you for your time and effort in reviewing our paper.
Your raised concerns made us think deeply and broadly, and we feel we have managed to clarify all the issues raised. We would appreciate further opportunities to answer any remaining concerns you might have. --- Rebuttal 2: Title: final discussions Comment: Dear Reviewer, As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion. Please note we take note of unresponsive reviewers. Best regards, \ SAC --- Rebuttal 3: Comment: The reviewer would like to appreciate the responses from the authors. However, the motivation is still somewhat confusing to me and I am inclined to keep my original score. --- Rebuttal Comment 3.1: Comment: Dear Reviewer PqqG, Thanks for your response, but we would be grateful if you could be a bit more specific about which part of the motivation is still confusing. In the response above, we have tried to clearly illustrate the motivations of EOKD and the orthogonal labeling process in detail using a simple **toy example with a 3-exit network and a 4-way classification task**, which is also supported by the experiments in the main manuscript, and we honestly feel there is nothing further we can add to make it clearer. We had a discussion period of around two weeks, and we must say it feels unfair that the reviewer did not provide any comment until only 6 hours of the discussion period remained (with regional time differences); we were disappointed especially because the reviewer's only remaining concern is clarification of the motivation, which is generally easy to address. Again, we would be grateful if you could let us know which part of the motivation is unclear to you. Best, Authors
Summary: The paper presents a novel method called Neighbor Exitwise Orthogonal Knowledge Distillation (NEO-KD) for improving the adversarial robustness of multi-exit networks. The method's motivation lies in addressing the issue that existing knowledge distillation schemes are not ideal for multi-exit networks, as they can increase adversarial transferability across submodels and thereby decrease adversarial robustness. The authors argue that the choice of the knowledge to distill and the specific exit to target significantly influence the robustness of multi-exit networks. - NKD improves the network's defense by ensuring the outputs of adversarial examples mimic the outputs of clean data, distilling the combined clean-data predictions of neighboring exits, leading to enhanced robustness and superior feature quality at the corresponding exits. - EOKD concentrates on minimizing adversarial transferability between different network submodels by distilling the output of clean data to the output of adversarial samples in an exit-by-exit manner, while promoting orthogonality in the non-maximal predictions of individual exits. - Experiments demonstrate that NEO-KD outperforms existing solutions across a range of commonly adopted datasets, including MNIST, CIFAR-10/100, and Tiny-ImageNet. - NEO-KD demonstrates superior performance under different prediction setups (anytime and budgeted), and exhibits reduced adversarial transferability. - Ablation studies demonstrate the benefits of its individual components and its robustness against stronger adversarial attacks. Overall, the paper offers a substantial contribution to improving the adversarial robustness of multi-exit networks. Strengths: 1) The paper presents a novel approach called NEO-KD that uses Neighbor Knowledge Distillation (NKD) and Exitwise Orthogonal Knowledge Distillation (EOKD) for enhancing adversarial robustness in multi-exit networks.
This approach is distinct in its application of knowledge distillation techniques to the unique challenges posed by multi-exit networks. 2) An extensive set of experiments was conducted to validate the proposed method using different datasets and adversarial attacks. This includes the use of anytime and budgeted prediction setups, and an analysis of adversarial transferability. The results show the NEO-KD method's performance relative to baseline techniques. 3) The paper addresses an important problem in the area of multi-exit networks: how to enhance adversarial robustness. The proposed solution, NEO-KD, not only improves robustness, but also optimizes computation costs and reduces adversarial transferability. Weaknesses: The approach is tested on four standard datasets: MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet. While these are common benchmarks, the effectiveness of NEO-KD on larger datasets such as ImageNet is not demonstrated. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The paper incorporates the application of an ensemble strategy at inference time in the budgeted prediction setup. It would be interesting to learn more about your decision process for using this approach. Specifically, how did you determine the confidence threshold for selecting the exit? Are there any trade-offs or implications if the threshold were set differently? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - The experiments were performed using relatively smaller datasets (CIFAR-10, CIFAR-100, MNIST, and Tiny-ImageNet), and the performance on larger datasets such as ImageNet remains unexplored.
- The choice and implications of the confidence threshold used in the ensemble strategy during the budgeted prediction setup are not clearly explained. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the time and effort. In the response below, we provide answers to the comments raised by the reviewer. ### **Results on larger datasets** We appreciate the suggestion. We conducted additional experiments using ImageNet with 1000 object classes. Due to the strict timeline, we considered ImageNet(100), where 100 samples are selected from each class of ImageNet to construct the training set. Considering that there are 1000 classes in ImageNet, the number of training samples we use is 100,000. This dataset has been adopted in much of the multi-exit network literature to prove the effectiveness of an algorithm [ICCV'19], [AAAI'21]. The results in the table below show that the proposed NEO-KD performs the best on the larger-scale dataset with 1000 classes, further confirming its advantage.

| Exit | 1 | 2 | 3 | 4 | 5 | Average |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| Adv. w/o Distill | 18.54% | 24.66% | 27.63% | 28.28% | 29.71% | 25.76% |
| NEO-KD (ours) | 21.86% | 28.04% | 31.15% | 31.93% | 33.87% | 29.37% |

### **Accuracy-computation trade-off controlled by the confidence threshold** In a budgeted prediction setup, given a limited computing budget, the trained model has to make predictions for all samples within the budget. For instance, if a sufficient budget is provided, we can obtain high performance by classifying many samples at the later exits. On the other hand, if the budget is very small, the model should classify most of the samples at the early exits while compromising performance. In other words, the model should make predictions efficiently within limited budget constraints by classifying 'easy' samples at early exits and 'hard' samples at later exits. Here, the _confidence threshold_, where confidence is the maximum value of the softmax prediction probability vector, is the criterion for determining whether to make the prediction for a sample at the current exit or pass it to the next exit.
Note that each exit has a different confidence threshold, which is determined using a validation set (clarified below). Specifically, given a sample, if the confidence of the sample at a specific exit is greater than the predefined threshold at that exit (easy sample), the prediction is made at the current exit; if it is lower than the threshold (hard sample), the feature of the sample is passed to the next exit for prediction there. Therefore, the confidence threshold controls the trade-off between computational cost and accuracy. As the threshold increases, fewer samples are confident enough to be predicted at the earlier exits, so more samples are forwarded to the later exits; consequently, more resources are used for classifying test samples, yielding higher performance. Conversely, when the threshold is decreased, more samples are predicted at the earlier exits, which consumes fewer resources at the cost of performance. ### **How to determine the confidence threshold** As per the reviewer's suggestion, we would like to provide a detailed explanation of how to determine the confidence threshold for each exit using a validation set before the testing phase. First, in order to obtain confidence thresholds for various budget scenarios, we allocate a number of validation samples to each exit. For simplicity, consider a toy example with a 3-exit network (i.e., $L = 3$) and assume the validation set contains 3000 samples. Then, each exit can be assigned a different number of samples: for instance, (2000, 500, 500), (1000, 1000, 1000), and (500, 1000, 1500). As more samples are allocated to the early exits, a scenario with a smaller budget is obtained, while allocating more data to the later exits leads to a scenario with a larger budget. More specifically, to see how to obtain the confidence threshold for each exit, consider the low-budget case of (2000, 500, 500).
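Under our reading of this procedure, the per-exit threshold search can be sketched as follows (`exit_thresholds` and the array layout are illustrative, not the actual implementation; the per-exit confidences are assumed precomputed for all validation samples):

```python
import numpy as np

def exit_thresholds(confidences, allocation):
    """Sketch of the validation-based, per-exit threshold search.

    `confidences[i][j]` is the max-softmax confidence of validation
    sample j at exit i; `allocation[i]` is how many samples should exit
    there. At each exit, the threshold is the allocation[i]-th largest
    confidence among the samples still in flight; those samples then
    exit, and the rest are deferred to the next exit.
    """
    remaining = np.arange(len(confidences[0]))
    thresholds = []
    for i, n in enumerate(allocation):
        conf = np.asarray(confidences[i], dtype=float)[remaining]
        order = np.argsort(-conf)              # descending confidence
        thresholds.append(conf[order[n - 1]])  # n-th largest value
        remaining = remaining[order[n:]]       # defer the rest
    return thresholds
```

For an allocation such as (2000, 500, 500), exit 1's threshold would be the 2000th largest confidence over all 3000 samples, exit 2's the 500th largest over the remaining 1000, and so on; sweeping over different allocations yields the different budget scenarios.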
The model first makes predictions on all 3000 samples at exit 1 and sorts the samples by confidence. The 2000th largest confidence value is then set as the threshold for exit 1. Likewise, the model makes predictions on the remaining 1000 samples at exit 2, and the 500th largest confidence becomes the threshold for exit 2. Following this process, the thresholds for all exits are determined. During the testing phase, we make predictions on test samples based on the predefined per-exit thresholds and calculate the total computational budget for the combination (2000, 500, 500). In this way, we obtain accuracy and computational budget for different allocations (i.e., various budget scenarios). Fig. 2 and 3 in the main paper show the results for 100 different budget scenarios. [ICCV'19] Phuong et al., ``Distillation-Based Training for Multi-Exit Architectures,'' ICCV 2019. [AAAI'21] Wang et al., ``Harmonized Dense Knowledge Distillation Training for Multi-Exit Architectures,'' AAAI 2021. Again, we thank the reviewer for the insightful comments and valuable feedback. We would be happy to address any remaining questions the reviewer might have. --- Rebuttal 2: Title: Post-Rebuttal Comment: Comment: Based on the detailed response from the authors addressing my concerns and clarifications, and taking into consideration the feedback from other reviewers and the rebuttal: I'm glad to see the added tests on ImageNet(100) that address my worries about scalability. Their clarification on the confidence threshold in the budgeted prediction setup clears up my earlier questions.
Given how the authors addressed the feedback, and the paper's importance for multi-exit networks, I'm changing my rating from 'Borderline accept' to 'Weak Accept'. --- Rebuttal Comment 2.1: Comment: Thank you very much for acknowledging our efforts and raising your score. Best, Authors of Paper 11001
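The validation-based threshold selection described in Rebuttal 1 above can be sketched in a few lines. This is a minimal illustration with hypothetical confidence values and a hypothetical function name, not the authors' implementation:

```python
import numpy as np

def select_thresholds(confidences, allocation):
    """Pick a per-exit confidence threshold from a validation set.

    confidences: (num_exits, num_samples) array of max-softmax confidences
        of every validation sample at every exit (hypothetical values here).
    allocation: samples to classify at each exit, e.g. (2000, 500, 500).
    Returns one threshold per exit; the last exit accepts all leftovers.
    """
    thresholds = []
    remaining = np.arange(confidences.shape[1])      # indices not yet classified
    for exit_idx, budget in enumerate(allocation[:-1]):
        conf = confidences[exit_idx, remaining]
        order = np.argsort(conf)[::-1]               # sort by confidence, descending
        thresholds.append(float(conf[order[budget - 1]]))  # budget-th largest value
        remaining = remaining[order[budget:]]        # forward the rest to the next exit
    thresholds.append(0.0)                           # last exit predicts everything left
    return thresholds

# Toy example: 3 exits, 3000 validation samples, low-budget allocation.
rng = np.random.default_rng(0)
confs = rng.random((3, 3000))
ths = select_thresholds(confs, (2000, 500, 500))
```

At test time, a sample exits at the first exit whose threshold its confidence meets; with the allocation above, exactly 2000 of the 3000 validation samples clear the exit-1 threshold.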
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments, which have greatly helped us improve the paper. Due to the length limit of each response, we share here the additional experimental results suggested by **Reviewer PqqG** and **Reviewer KNG2**. For the other reviewers (Reviewer iEEe and Reviewer grwc), all results are provided in our response to each reviewer. In all experiments, every approach, including the baselines and our NEO-KD, is trained using adversarial examples generated through the max-average attack.
### Tables \& References for **Reviewer PqqG**
* **Table R1:** Clean test accuracy of NKD and NEO-KD.
|Exit|1|2|3|Average Clean Accuracy|
|:---|:---:|:---:|:---:|:---:|
|NKD|76.81%|79.03%|81.98%|79.27%|
|NEO-KD|75.77%|78.37%|81.37%|78.50%|
* **Table R2:** Adversarial test accuracy of NKD and NEO-KD.
|Exit|1|2|3|Average Adversarial Accuracy|
|:---|:---:|:---:|:---:|:---:|
|NKD|46.48%|46.63%|44.64%|45.92%|
|NEO-KD|46.53%|47.65%|50.71%|48.30%|
* **Table R3:** Adversarial test accuracy of SKD and ARD according to the exit selected as the teacher prediction.
|Exit|1|2|3|Average|
|:---|:---:|:---:|:---:|:---:|
|SKD (exit 1)|32.27%|36.92%|38.57%|35.92%|
|SKD (exit 2)|35.33%|35.10%|37.82%|36.08%|
|SKD (exit 3)|39.36%|41.39%|38.39%|39.71%|
|SKD (ensemble)|38.63%|41.80%|40.13%|40.19%|
|ARD (exit 1)|35.64%|38.10%|42.12%|38.62%|
|ARD (exit 2)|35.35%|38.24%|40.00%|37.86%|
|ARD (exit 3)|39.37%|41.98%|43.53%|41.63%|
|ARD (ensemble)|35.22%|38.35%|40.76%|38.11%|
|NEO-KD (ours)|41.67%|45.38%|45.54%|44.20%|
* **Table R4:** Comparison of training time between our NEO-KD and the baselines.
|Method|Adv. w/o Distill|SKD|ARD|NEO-KD|
|:---|:---:|:---:|:---:|:---:|
|Training time (min/epoch) |13.40|13.80|13.80|14.02|
[ICCV'19a] Phuong et al., Distillation-based training for multi-exit architectures. [ICCV'19b] Li et al., Improved techniques for training adaptive deep networks. [AAAI'20] Goldblum et al.,
Adversarially robust distillation.
### Tables \& References for **Reviewer KNG2**
**Table R5:** Comparison of adversarial test accuracy against the max-average attack between TEAT methods and our NEO-KD.
* (a) CIFAR-10
|Exit|1|2|3|Average|
|:---|:---:|:---:|:---:|:---:|
|PGD-TE [ICLR'22]|**48.73%**|46.00%|46.85%|47.19%|
|TRADES-TE [ICLR'22]|45.05%|39.64%|42.10%|42.26%|
|NEO-KD (ours)|46.53%|**47.65%**|**50.71%**|**48.30%**|
* (b) CIFAR-100
|Exit|1|2|3|4|5|6|7|Average|
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|PGD-TE [ICLR'22]|24.07%|24.39%|25.14%|25.35%|26.29%|25.57%|24.60%|25.06%|
|TRADES-TE [ICLR'22]|17.62%|18.52%|18.61%|18.98%|18.95%|19.67%|20.35%|18.96%|
|NEO-KD (ours)|**28.37%**|**28.78%**|**29.02%**|**29.49%**|**30.06%**|**28.45%**|**28.54%**|**28.96%**|
[ICLR'22] Dong et al., ``Exploring memorization in adversarial training,'' ICLR 2022. [ICLR'20] Hu et al., Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. [ICML'19] Zhang et al., Theoretically principled trade-off between robustness and accuracy. [CVPR'23] Dong et al., The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training. [ICCV'19a] Phuong et al., Distillation-based training for multi-exit architectures. [ICCV'19b] Li et al., Improved techniques for training adaptive deep networks.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Puzzlefusion: Unleashing the Power of Diffusion Models for Spatial Puzzle Solving
Accept (spotlight)
Summary: This paper applies a conditional diffusion model to the problem of spatial puzzle solving and shows real applications on Cross-cut Jigsaw Puzzles (CJP), Voronoi Jigsaw Puzzles (VJP), and room layout arrangement (RLA). Unlike previous methods that rely on enumerating and verifying pairwise alignments and may struggle as the complexity of the global arrangement increases, this approach aligns all pieces in one pass. Qualitative and quantitative evaluations show the proposed method outperforms competitive methods. Strengths: + The writing is easy to follow. + The proposed method demonstrates the potential of generative models for spatial puzzle solving, which may inspire work on more complex spatial puzzles. + Experiments with noisy spatial puzzles demonstrate the robustness of the proposed methods. Weaknesses: + The application and the method are not properly motivated. I cannot come up with a case where we need to put together multiple floorplan regions. It looks a little strange. Besides, the other two applications, i.e., CJP and VJP, are too simple. + I did not see how the method solves the complex cases raised in line 30. Full MagicPlan (RPLAN) has at most 10 (8) room patches, while Small MagicPlan (RPLAN) has at most 6 (6) room patches, so they have only minor differences in complexity. The search space of these examples is small and may not need a learning-based method. + Methodology: The canonical denoising diffusion probabilistic model requires quite a few denoising steps, and the proposed method does not apply any modifications to remedy the low time efficiency. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Experiments: + It seems better to derive more analysis in the ablation studies. For example, in Tab. 3 lines 5 and 8, why does applying the matching loss on all corners yield slightly lower performance than the optimal choice? Besides, feature redundancy at each corner seems critical; what is the intuition behind this design?
Can it be explained by providing more training examples to the model? + There should be more experimental results. Does reference [21] have more competitive results on Room layout arrangement? It is better to report the results of [21] in Tab. 1. + The evaluation scheme is not clear, since the ideal result is deterministic, how does the method cope with the variability of the DDPM output during the evaluation? + Typo: Tab. 2 Caption: quantitative Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss some limitations about the performance and demands on the data. I would like to see more discussions about the quality of the results, which can be more inspiring. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive, insightful, and valuable comments and suggestions, which are crucial for improving the quality of our manuscript. --- **1. Applications of the room layout arrangement and hardness of jigsaw puzzle solving.** >The room layout arrangement is an emerging problem in the real estate industry, with companies like Ricoh, Zillow, and MagicPlan actively pursuing solutions. For instance, there are mobile applications that capture individual room layouts, which must then be manually integrated to produce full house floorplans. Our method is a step toward automating this process. Please also refer to L11-L13, L22-L24, L106-109, and L201-207 of the paper, which discuss the practical impact of the proposed research. > >Moreover, solving a CJP or VJP demands at least several minutes of human effort, whereas our method accomplishes the task within mere seconds. > >We also believe that extending our method to similar tasks and datasets can help solve otherwise overwhelming tasks such as reassembling shredded documents or restoring fragmented objects and images, and potentially even more exciting future directions, such as designing electrical boards from provided components. --- **2. Non-learning based methods for Full MagicPlan (RPLAN).** >As highlighted by Shabani et al. [20], their method needs several minutes to process a small-scale house. However, their processing time grows exponentially, to approximately one hour for a house with seven rooms and even a few days for a house with ten rooms. > >Analytically, suppose each room has two doors; then the number of possible room arrangements obtained by aligning pairs of doors equals the number of spanning trees of a complete graph with $n$ vertices, which is $n^{(n-2)}$.
The count becomes approximately 1k, 262k, and 100 million for a floorplan with 6, 8, and 10 rooms, respectively. This demonstrates the severe scalability challenge of non-learning based methods. --- **3. The canonical denoising diffusion probabilistic model requires quite a few denoising steps.** >We thank the reviewer for the great comment. Our current method uses standard denoising diffusion probabilistic models, which are slow and require many sampling steps. However, we consider the use of diffusion models a distinct advantage of our approach, because any advances in accelerated sampling techniques and efficient diffusion models, currently a vibrant area of research, will be directly applicable to our method. This aspect is beyond the scope of our paper due to space constraints and the diverse array of available sampling approaches. --- **4. It seems better to derive more analysis in the ablation studies.** >We thank the reviewer for the interesting questions. We will add the following analysis to the main paper. >- Matching loss on doors and corners: One reason may be that the corners of rooms within a floorplan do not necessarily align. For instance, a wall of the dining hall might divide two rooms, thereby not conveying significant high-level information. In contrast, doors hold more informative value because they connect the rooms. >- Feature redundancy: While the redundancy in the diffusion process can be viewed as adding more randomness and possibly more augmentation, as the reviewer mentions, it also serves us in two ways. 1) It aids the model's architecture design by providing the ability to establish explicit connections among corners within the transformer, for example by utilizing Polygon Self Attention. 2) During the final prediction stage, the redundancy allows averaging or voting over predictions, akin to an ensemble model. ___ **5. Comparison with Lambert et al.
[21] for layout arrangement.** >While the high-level problem is the same, Lambert et al. [21] assume that panorama images are given and have some overlap. They leverage this information to generate Aligned Bird's Eye View Texture Maps, subsequently employing these maps for global pose estimation. However, this is not possible in our targeted application domains: in a mobile application such as MagicPlan, panorama images cannot be uploaded to the cloud due to privacy/security concerns and in order to minimize the amount of data transferred from a mobile device (see L296-300). Our approach requires only room layouts, which are extremely compact and do not convey any private information. --- **6. How does the method cope with the variability of the DDPM output during the evaluation?** >We thank the reviewer for the question and will clarify the following in the paper. We run our system five times and report the mean. --- **7. Typo in Table 2 caption.** >We thank the reviewer for catching the typo and will make the correction. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying my concerns. I still think the method is reasonable, but the motivation is not entirely sound to me. For example, if users of MagicPlan can take the time to draw a room layout, roughly assembling different rooms will not be a problem. The authors did mention some applications that are more interesting and reasonable in their responses; it would be interesting to see some of those applications. For scaling to more complex cases, it would be more convincing to add more complex results to the paper, for example, examples with dozens of pieces. Besides, it is also important to add some discussion of the low-quality results of the present method. I believe this will be interesting to the readers. --- Reply to Comment 1.1.1: Comment: We again thank the reviewer for the valuable input regarding the experiments, comments, and responses. We are glad that the reviewer's concerns have been clarified.
We further respond to the points raised in the discussion. --- **1. For scaling to more complex cases, it would be more convincing to add more complex results in the paper, for example, examples with dozens of pieces.** We thank the reviewer for the question. To further address the reviewer's concern, we created a new version of the Voronoi dataset with 200K training samples and 2K test samples, and we increased the number of pieces to the range of 15 to 50. The metrics obtained for our model are 65.48, 70.40, and 55.31 for Overlap, Precision, and Recall, respectively. --- **2. Besides, it is also important to add some discussion of the low-quality results of the present method. I believe this will be interesting to the readers.** We thank the reviewer for the suggestion. While we have provided some failure samples in Fig. 5 of the supplementary, as well as in the first figure of the attached PDF, we will add more cases along with a discussion in the final manuscript. --- **3. If users of MagicPlan can take time to draw a room layout, roughly assembling different rooms will not be a problem.** Creating an efficient data-capture pipeline for the masses differs significantly from data acquisition in controlled research environments. Operators are burdened with multiple intricate steps and lengthy instructions. These steps involve capturing images and annotating layouts and details such as addresses, floor and unit numbers, room types, cardinal headings (NSEW), presence of obstacles, and key architectural elements such as windows, doors, and fixtures like basins, bathtubs, and laundry machine bases. Our collaboration extends across seven global companies focused on applying computer vision techniques in real estate and construction. Automating as many steps as possible through robust techniques is paramount. Additionally, there is ongoing research on automatically extracting room layouts from panoramic images, further enhancing the automation process.
By combining our research with these advancements, a fully automated system is on the horizon. We will emphasize this perspective in our paper, reinforcing the significance of our real-world-applicable research. --- **4. The authors did mention some applications that are more interesting and reasonable in their responses. It would be interesting to see some of those applications.** We agree with the reviewer that our paper can motivate several interesting future applications. However, it is worth noting that references such as Lambert et al. ECCV 2022 [21] (room arrangement), Harel et al. CVPR 2021 [6] (crossing-cut puzzles), and Shabani et al. ICCV 2019 [20] (room arrangement) each address a single task. In comparison, our approach not only tackles those tasks more effectively and efficiently, but also introduces an additional layer of complexity through more intricate tasks, such as Voronoi puzzles.
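The counting argument in point 2 of the rebuttal above (the number of door-aligned room arrangements equals the number of spanning trees of the complete graph $K_n$, which is $n^{n-2}$ by Cayley's formula) can be checked numerically via Kirchhoff's matrix-tree theorem. This sketch is purely illustrative and not part of the paper:

```python
import numpy as np

def spanning_trees_complete_graph(n):
    """Count spanning trees of K_n with Kirchhoff's matrix-tree theorem:
    the count equals any cofactor of the graph Laplacian L = n*I - J."""
    laplacian = n * np.eye(n) - np.ones((n, n))
    minor = laplacian[1:, 1:]            # delete one row and one column
    return round(np.linalg.det(minor))   # determinant of the remaining minor

# Matches Cayley's formula n^(n-2): roughly 1k, 262k, and 100 million
# arrangements for floorplans with 6, 8, and 10 rooms, as stated above.
counts = {n: spanning_trees_complete_graph(n) for n in (6, 8, 10)}
```

The exponential growth of these counts is exactly the scalability argument made against non-learning enumeration-based methods.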
Summary: This paper introduces a novel approach to puzzle solving and room floorplan arrangement by utilizing a conditional generation process based on a denoising diffusion model. The model effectively reconstructs the original polygonal coordinates, representing the spatial arrangement, during the reverse diffusion process. Notably, the proposed method demonstrates robustness to data noise, which enhances its practical applicability. In order to train the diffusion model, the authors introduce two new datasets: a synthetic jigsaw dataset and a real floorplan dataset with room layout pieces. The experimental results showcase outstanding performance in both puzzle solving and room floorplan arrangement tasks. Strengths: 1. The utilization of positional information as the signal in the diffusion process is a relatively novel idea. 2. Introduces an interesting approach by framing spatial arrangement, puzzle solving, and floorplan registration tasks as a conditional generative process. Weaknesses: 1. The paper exhibits a writing problem: the descriptions of the jigsaw solving and layout arrangement tasks lack strong connections. 2. It would be beneficial to provide more detailed explanations about the jigsaw part directly in the main paper. 3. The training of the diffusion model requires a large amount of data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the proposed method handle puzzles that are cut into squares? 2. Can a baseline be composed of TransVector-based approaches with multiple iterations of optimization? 3. Could you provide more details on the "Averaging/voting" process in Figure 2 of the supplement? 4. How does the computational cost of the proposed method compare to existing methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive, insightful, and valuable comments and suggestions, which are crucial for improving the quality of our manuscript. --- **1. Additional details of the jigsaw part in the main paper and lack of connection between layout arrangement and jigsaw puzzles.** >We thank the reviewer for the comment and will do our best to clarify the details of the layout arrangement and jigsaw puzzle tasks. Both tasks assemble scattered geometric pieces into a single coherent shape, an image, or the floorplan of a functional house. Careful spatial reasoning is key to the success of a method. We will clarify this in the paper and would appreciate any further specific feedback on clarity. --- **2. The training of the diffusion model requires a large amount of data.** >We agree with the reviewer; indeed, one of our key contributions is the introduction of a new large-scale real-world dataset, MagicPlan, which enables training our diffusion-based model. This contribution is particularly valuable given the scarcity of substantial real-estate data, owing to privacy and licensing constraints. In addition, this challenge does not apply to jigsaw puzzle tasks, whose datasets are often synthetic and scale up easily. --- **3. Can the proposed method handle puzzles that are cut into squares?** >We thank the reviewer for the question and have conducted experiments showing that our method can solve square jigsaw puzzles. > >Specifically, we solved 3x3 pictorial square puzzles with an implementation similar to that for the pictorial CJP. Our model achieves the following numbers: >- 90.98% in *Direct Comparison*, which denotes the portion of puzzle pieces within the reassembled puzzle that have been positioned correctly, >- 88.87% in *Neighbour* score, meaning the percentage of paired neighboring pieces that are correct, >- 74.54% in *Perfect Reconstruction*, which measures the percentage of perfectly reassembled puzzles.
> >We want to emphasize that our approach randomly initializes and solves the arrangements in a continuous space. In contrast, prior methods such as [35,36] focus solely on a subset of potential discrete permutations. For instance, Table 4 in reference [35] considers at most 1000 permutations selected from the entire pool of 9! (~363k) possible permutations. Even with this reduced set, the challenge posed by the number of permutations persists, especially compared to the results obtained when considering a mere 10 or 100 permutations. A few qualitative samples are provided in the attached PDF file. We will add this to the main paper. ___ **4. Can a baseline be composed of TransVector-based approaches with multiple iterations of optimization?** >We thank the reviewer for the suggestion and have implemented this new baseline as an additional experiment. Drawing inspiration from the iterative approach employed by Housegan++ [56] during training, we introduced a 50% probability for each room to remain fixed. For rooms designated as fixed, we feed the ground-truth position to the network, enabling it to learn to arrange the non-fixed rooms by leveraging information from the fixed ones. > >During the testing phase, previously predicted locations were fed back to the network with a 50% probability per room, effectively providing input constraints and facilitating iterative design refinement. This process was repeated a total of 10 times. The performance improved, yielding an MPE of 39.12 and a GED of 2.18 on the full RPLAN dataset, and 47.85 and 5.49, respectively for MPE and GED, on the full MagicPlan dataset. Despite these noteworthy improvements, the results remain significantly inferior to our diffusion-based approach. We will add this to the paper. --- **5.
Could you provide more details on the "Averaging/voting" process in Figure 2 of the supplement?** >We thank the reviewer for the question and will clarify the following details in the paper. For puzzle solving, the location and rotation of each piece are determined by averaging the locations and rotations predicted at its corners. For room layout arrangement, however, where rotations adhere exclusively to the Manhattan directions (0, 90, 180, and 270 degrees), a voting system is employed for rotation: each corner casts a vote for the Manhattan direction closest to its predicted value, and the final rotation of the piece is the angle that accumulates the most votes. For position, we use an approach similar to puzzle solving: the predicted positions of all corners within a room are averaged to establish the position of the room within the floorplan. > >When visualizing the diffusion process, there are two options: using the averaged output of the entire piece for corner positions, or relying solely on the output of each corner itself. Figure 2 in the supplementary material and the video from 00:57 to 1:26 provide both visualizations. --- **6. How does the computational cost of the proposed method compare to existing methods?** >The diffusion model and TransVector implementations each use approximately 4 million parameters. Harel et al., on the other hand, is a non-learning-based method with no learned parameters. The approach of Shabani et al. [20] combines a heuristic technique for generating candidates with a deep learning model for scoring those candidates. While their model has fewer parameters, the heuristic component demands notably more time because it generates all possible candidates via door connections.
The training and inference times of our method and the competing methods are given at L183-187 and L234-238 of the paper. --- Rebuttal Comment 1.1: Comment: I appreciate your intention to clarify the details of both the jigsaw puzzle and layout arrangement tasks. Allocating some space to underscore the shared concept between these tasks ("assemble scattered geometric pieces into a single coherent shape, an image, or a floorplan") would enhance the overall coherence of the narrative. Thank you for providing more experimental results that demonstrate the ability of your method to handle square jigsaw puzzles, and for the new baseline, the test-time iterated TransVector approach. All of my concerns are addressed. --- Reply to Comment 1.1.1: Comment: We again thank the reviewer for the valuable input regarding the experiments, comments, and responses. We will enhance the explanation of the shared concept in all three tasks and include the additional experiments. Please let us know if you have further questions or comments.
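The "averaging/voting" scheme described in point 5 of the rebuttal above can be sketched as follows. This is a hypothetical re-implementation for illustration only: each corner votes for the Manhattan direction nearest its predicted rotation, and corner positions are simply averaged:

```python
import numpy as np

MANHATTAN = np.array([0.0, 90.0, 180.0, 270.0])  # allowed room rotations (degrees)

def piece_pose(corner_positions, corner_rotations):
    """Aggregate per-corner predictions into one pose for a room/piece."""
    # Position: average the predicted positions of all corners of the piece.
    position = np.mean(np.asarray(corner_positions, dtype=float), axis=0)
    # Rotation: each corner votes for its nearest Manhattan direction
    # (using circular distance), and the majority vote wins.
    angles = np.asarray(corner_rotations, dtype=float)
    dist = np.abs((angles[:, None] - MANHATTAN + 180.0) % 360.0 - 180.0)
    votes = MANHATTAN[np.argmin(dist, axis=1)]
    labels, counts = np.unique(votes, return_counts=True)
    return position, float(labels[np.argmax(counts)])

# Four noisy corner predictions for one room: the rotation vote snaps to 90.
pos, rot = piece_pose([[0, 0], [2, 0], [2, 2], [0, 2]], [88.0, 92.0, 95.0, 181.0])
```

For general (non-Manhattan) puzzles, the same aggregation would average rotations instead of voting, as described for puzzle solving in the rebuttal.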
Summary: This paper investigates the use of diffusion models to solve jigsaw puzzles. Puzzle pieces are modeled as sequences of corners, and the diffusion process consists of adding noise to / denoising the positions and orientations of the corners, conditioned on the original piece shapes. To have fragments snap into place, an additional corner-matching loss is added. The diffusion process is tested on synthetic puzzles as well as on a floorplan dataset, showing that the proposed method improves over the existing literature. Strengths: Given the combinatorial nature of jigsaw puzzles, using a denoising diffusion process, which runs in a fixed number of steps, is a promising idea. It opens the door to using diffusion processes to solve discrete combinatorial tasks, an area where learning-based methods are still struggling to dominate (setting aside deep MCTS). The execution in this paper is well done, and the additional corner-matching loss is sound. The experiments are OK. Code was provided. Weaknesses: Not much! - One complaint I have is that the evaluation metrics are not very interpretable. The MPE depends on the size of the puzzle and the pieces (a 2 px error is not the same on a 5 px fragment and on a 150 px fragment), GED is even worse, and precision/recall tend to saturate on the proposed datasets. Also, no quantitative results are mentioned for pictorial CJP. - I had the impression that the literature on jigsaw puzzle solving using deep learning is a bit sparse ([26, 35, 36]), given the number of relevant papers that a Google Scholar search returns when querying "deep learning jigsaw puzzle". More comparison with how deep learning has been used to solve the problem would have been nice, to highlight the radical difference this approach proposes.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible to solve the regular n x n jigsaw puzzle (for example, by assigning each fragment to its closest fixed position) and compare against regular methods that solve the combinatorial problem (like [35,36] and similar methods) using fragment position accuracy and puzzle solving accuracy? This would allow for much more interpretable results. - What are the scores for pictorial CJP? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper addresses limitations in the conclusion by mentioning the requirement of large-scale training sets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive, insightful, and valuable comments. --- **1. Interpretability of the metrics.** >We thank the reviewer for pointing this out. While these metrics might not be ideal, it is worth mentioning that for layout arrangement, every pixel corresponds to a real-world distance (in meters/feet) that can be determined from the room layout scale of the house, so MPE can easily be converted to a real-world distance, as highlighted by Shabani et al. [20]. In addition, to further address the reviewer's concern, we have devised a new **"Weighted MPE" (WMPE)** metric, which normalizes the error of each room/piece by the square root of its area: >$ \text{WMPE} =\frac{1}{N} \sum_{i=1}^{N} \frac{\text{Positional Error of } x_{i}}{\sqrt{\text{Area of } x_{i}}} $ > >The WMPE scores are 0.109 (ours) and 0.451 (TransVector) for the full RPLAN dataset, and 0.341 (ours) and 0.564 (TransVector) for the full MagicPlan dataset. We will add the results to the paper. >The GED metric was borrowed from Nauata et al. [56] to assess room connectivity, which is established through doors/frames and is a pivotal factor in house design. The precision/recall metrics have been the standard in the recent jigsaw puzzle literature [6,26]. --- **2. Quantitative results of CJP.** >Due to the time complexity of Harel et al., especially for pictorial puzzles, they did not report any numbers for their method on pictorial data. For the same reason, to compare Harel et al. to ours, we calculated the metrics on 20 samples of pictorial CJP. Our method achieves 0.953, 0.978, and 0.930 for IoU, Precision, and Recall, respectively, compared to 0.935, 0.934, and 0.947 for Harel et al. Thank you for pointing this out; we will add the details to the paper. --- **3. Additional related works on jigsaw puzzles.** >We thank the reviewer for the suggestion.
We will add and discuss more references on deep-learning-based jigsaw puzzle papers. However, the majority of these papers tackle square puzzles and cannot be compared on our tasks [25]. Our work pursues a broader and more adaptable approach for more challenging tasks, and thus compares against similarly adaptable approaches (e.g., Harel et al. or Shabani et al.). --- **4. Is it possible to solve the regular n x n jigsaw puzzle?** >Yes. Although the focus of our paper is mainly on the geometric side of puzzles, our method can be applied to pictorial square puzzles as well. We ran an experiment solving 3x3 pictorial square puzzles with implementation details similar to those of the pictorial CJP. Our model achieves the following metrics: >- 90.98% in *Direct Comparison*, which denotes the portion of puzzle pieces within the reassembled puzzle that have been positioned correctly, >- 88.87% in *Neighbour* score, meaning the percentage of paired neighboring pieces that are correct, >- 74.54% in *Perfect Reconstruction*, which measures the percentage of perfectly reassembled puzzles. > >We emphasize that our approach tackles puzzles with randomly initialized pieces in a continuous space. In contrast, prior methods such as [35,36] focus solely on a subset of potential discrete permutations. For instance, Table 4 in reference [35] considers at most 1000 permutations selected from the entire pool of 9! (~363k) possible permutations. Even with this reduced set, the challenge posed by the number of permutations persists, especially compared to the results obtained when considering a mere 10 or 100 permutations. A few qualitative samples are provided in the attached PDF file. We will add this to the main paper. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses my concerns. --- Reply to Comment 1.1.1: Comment: We again thank the reviewer for the valuable input regarding the experiments, comments, and responses.
We are glad that our responses addressed all the reviewer's concerns. We will add the discussed clarifications and experiments to the final manuscript.
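The WMPE metric defined in the rebuttal above could be computed with a minimal sketch like the following (the function name and plain-list inputs are illustrative assumptions, not the authors' implementation):

```python
import math

def wmpe(positional_errors, areas):
    """Weighted MPE: each room/piece's positional error is normalized
    by the square root of that room/piece's area, then averaged."""
    assert len(positional_errors) == len(areas)
    return sum(e / math.sqrt(a) for e, a in zip(positional_errors, areas)) / len(areas)
```

For example, errors of 2.0 and 3.0 pixels on pieces of area 4.0 and 9.0 give a WMPE of (2/2 + 3/3) / 2 = 1.0.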
Summary: The paper presents a diffusion based method to tackle the jigsaw puzzle solving task. This task has applications in artwork restoration, room layout estimation, etc. The paper also introduces a room layout and arrangement dataset. The authors compare their work with previous state of the art approaches and a transformer based baseline, and outperform all these methods. Strengths: 1. The paper presents an interesting approach to tackle the jigsaw puzzle solving problem using diffusion models. The corner coordinates and rotation of puzzle pieces are used as input, and the diffusion model is tasked with predicting the noise in those inputs. 2. Unlike prior methods, this method is not limited to simple puzzles and does not require pairwise comparisons between puzzle pieces, which makes it more efficient. 3. The method achieves better performance compared to previous state of the art methods and a transformer based baseline that directly predicts the output instead of doing iterative denoising. 4. Ablations show that all new modules/losses introduced in the paper contribute towards the final performance. Weaknesses: ### Paper clarity comments 1. Section 3 is a bit hard to follow. 2. What does line 140 mean: “position/rotation of the rth room/piece stored at the ith corner”? Does it mean the piece that has corner i as one of the corners? ### Missing ablation 3. Authors state that redundant representation helps their system, and have shown one ablation for it. However, it would be interesting to see an ablation that shows how the method performs when the position and orientation estimates of the piece are not concatenated with all the corners but are passed as separate inputs. ### Discrepancy between paper and supplementary 4. Some discrepancy between figure and supplementary video: In fig 2, it seems that random noise is added to each corner independently and that noise changes the shape of each piece. 
In the supplementary video, however, it seems that the same random noise is added to all the corners of each piece, which results in the shape of each piece staying consistent. So which one is it? 5. If random noise is added to each corner independently for each piece, how do the authors maintain the ordering of the corners when passing it to the transformers? Won’t the de-noising process change the ordering of corners? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: ### Copying my comments from weakness section 1. Authors state that redundant representation helps their system, and have shown one ablation for it. However, it would be interesting to see an ablation that shows how the method performs when the position and orientation estimates of the piece are not concatenated with all the corners but are passed as separate inputs. 2. Some discrepancy between figure and supplementary video: In fig 2, it seems that random noise is added to each corner independently and that noise changes the shape of each piece. In the supplementary video, however, it seems that the same random noise is added to all the corners of each piece, which results in the shape of each piece staying consistent. So which one is it? 3. If random noise is added to each corner independently for each piece, how do the authors maintain the ordering of the corners when passing it to the transformers? Won’t the de-noising process change the ordering of corners? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes, authors have addressed the limitations well Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive, insightful, and valuable comments and suggestions, which are very crucial for improving the quality of our manuscript. --- **1. Section 3 is a bit hard to follow** >We thank the reviewer for the comment and will do our best to clarify our writing. We would appreciate any further specific comments or suggestions. --- **2. What does line 140 mean: ''position/rotation of the rth room/piece stored at the ith corner''? Does it mean the piece that has corner i as one of the corners?** > We thank the reviewer for the question and will clarify this in the paper. ($r$) denotes the index of a room/piece in a puzzle. For example, ($r$) is either 1, 2, or 3 for a puzzle of 3 pieces. Similarly, ($i$) denotes the index of a corner in a room/piece. Therefore, ($C^r_i$) denotes the i-th corner of the r-th room/piece. Note that our approach infers the position/rotation information at every corner of a room/piece and uses their average to determine the final position/rotation of the room/piece. --- **3. it would be interesting to see an ablation that shows how the method performs when the position and orientation estimates of the piece are not concatenated with all the corners but passed as separate inputs.** >That is indeed an interesting experiment. In order to study network performance while considering position and orientation as separate inputs, we conducted two experiments. In the first experiment, we utilized a single input token per piece to predict both the position and the rotation of that piece, while incorporating corner information as separate tokens. In the second set of experiments, we employed one input token for each corner to predict the position and rotation of each piece, with corner information again represented as distinct tokens. We applied both of these methods to the MagicPlan dataset. > >For the first setup, we achieved MPE and GED values of 49.91 and 4.96, respectively. 
Similarly, for the second setup, the corresponding values were 42.45 for MPE and 3.22 for GED. The performance drop is significant in the first setup, and although the second setup yields more comparable results, it still demonstrates lower performance when compared to our setup (MPE/GED of 40.81/3.09). Additionally, it involves twice the number of input tokens compared to our setup, resulting in higher computational costs. We will provide details of this experiment in the paper. --- **4. It seems that the same random noise is added to all the corners of each piece.** >Different noises (but from the same Gaussian distribution) are added to different corners, with each corner contributing to a prediction for the position or rotation of the corresponding piece. The final prediction is derived by averaging the predictions from all corners of a piece. Nevertheless, when visualizing the diffusion process, there are two options: utilizing the averaged output of the entire piece for corner positions, or solely relying on the output of each corner itself. Figure 2 in the supplementary material and the video from 00:57 to 1:26 effectively illustrate this concept. ___ **5. How do authors maintain the ordering of the corners when passing it to transformers?** >We thank the reviewer for the question. The feature embedding contains the corner index information (the second term of Eq. 3 in Section 4.2), which allows our system to keep track of the order of corners. We will clarify our explanation at Lines 156-158 of the main paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and for addressing my concerns. I have increased my final score to 5. --- Reply to Comment 1.1.1: Comment: We again thank the reviewer for valuable input regarding the experiments, comments, and responses. We sincerely thank the reviewer for updating the score, and we are glad that our responses addressed all the reviewer's concerns. 
We will make sure to incorporate your suggestions by adding further clarification and experiments in our final manuscript.
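As a rough illustration of the corner-averaging described in points 2 and 4 of this rebuttal, the final pose of a piece could be derived from per-corner predictions as follows (the function name and the circular mean used for angles are assumptions for the sketch, not the paper's exact code):

```python
import numpy as np

def piece_pose_from_corners(corner_positions, corner_rotations):
    """Each corner of a piece carries its own position/rotation estimate;
    the final piece pose is the average over all of its corners."""
    position = np.mean(np.asarray(corner_positions, dtype=float), axis=0)
    # average rotations on the unit circle to handle wrap-around at 2*pi
    angles = np.asarray(corner_rotations, dtype=float)
    rotation = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    return position, rotation
```

With four corner estimates at (0,0), (2,0), (2,2), (0,2) and identical rotation estimates of 0.1 rad, this returns the piece center (1,1) and rotation 0.1.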
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable and insightful comments. We are also grateful to the reviewers for their positive comments on our work. We have addressed the reviewers' points in our individual responses to each reviewer; please let us know if there are any new questions. Pdf: /pdf/a0e2eb29b11ac4af34dc4c0f54116786c23e2f3e.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a diffusion model for solving 3 different spatial puzzle tasks: cross-cut jigsaw, voronoi jigsaw, and room layout arrangement. Their method achieves SOTA results while being much faster than previous works, allowing them to handle larger puzzles than previous methods could. They also demonstrate greater robustness to noisy data inputs compared to previous methods, despite not being trained on noisy data. Ablations show the contributions of proposed losses and network components. They also present two new datasets: a synthetic one for the Voronoi puzzle task, and a real room layout one from MagicPlan. Strengths: - The paper is overall well-written and easy to understand. - In terms of originality, this paper is the first to tackle these kinds of layout arrangement problems using diffusion models. - The proposed method is sound and clearly described. - The method is a significant improvement over existing methods while using a very different approach. Weaknesses: - No major weaknesses - At zero noise level (Figs 5 and 6), the proposed method seems to be less precise at aligning the pieces compared to Harel et al. (while the general layout is correct, there are small alignment errors). Of course, Harel et al. fails for more difficult cases, but for these easier cases, it seems slightly more accurate than the proposed method. (I am however not very familiar with this task so I am not sure how important these small inaccuracies are in practice.) - Some details are unclear (see “Questions”) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - As shown in Fig. 7 of the supplementary PDF, the method produces slightly different results due to stochasticity; how many samples were generated when computing the results in Tables 1 and 2? - L243 says that TransVector achieves a better MPE score than Shabani et al, but that doesn’t appear to be the case in Table 1? 
- In Table 2, are the CJP and VJP results obtained using the same model, or is a separate model trained for each task? - For Table 1, separate models are trained for MagicPlan and RPLAN, but does a model trained on MagicPlan exhibit some generalization to RPLAN and vice versa? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations were discussed adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive, insightful, and valuable comments and suggestions, which are very crucial for improving the quality of our manuscript. --- **1. At zero noise level (Figs 5 and 6), the proposed method seems to be less precise at aligning the pieces compared to [6].** >We agree that Harel et al. achieves slightly more precise alignment through their heuristic method in the case of zero noise. However, this is not an issue. We conducted an experiment (see the attached pdf Fig. 2) showing that our approach easily achieves equally precise alignment by a simple post-processing heuristic, which is a minor variant of a loop merging process of an existing work [1]. Our method is also noticeably faster than [6]. We will add the details of the post-processing step and more experimental results to the paper. >[1] Chen, Jiacheng, et al. "Floor-sp: Inverse cad for floorplans by sequential room-wise shortest path." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. --- **2. How many samples were generated when computing the results in Tables 1 and 2?** >We thank the reviewer for the question and will clarify this in the paper. We ran our system 5 times and reported the mean. --- **3. Typo on line 243.** >We apologize for the oversight and thank you for bringing that to our attention. That sentence is indeed a typo, and we will drop the phrase from the text. ___ **4. Are the CJP and VJP results obtained using the same model, or is a separate model trained for each task?** >We thank the reviewer for the question and will clarify this in the paper. The models are independent and trained separately. --- **5. Does a model trained on MagicPlan exhibit some generalization to RPLAN and vice versa?** >We thank the reviewer for this great question. We conducted additional experiments, which will be added to the paper. Concretely, we trained our model with MagicPlan and tested it with RPLAN. 
(MPE/GED) scores are (15.62/1.90), compared to (10.55/0.97) when trained and tested with RPLAN. Similarly, (MPE/GED) scores are (48.48/4.68) when trained with RPLAN and tested with MagicPlan, compared to (40.81/3.09) when trained and tested with MagicPlan. The performance drops are understandable, given that MagicPlan encompasses real-world data contributed by users, while RPLAN contains synthetic noise-free floorplans, designed by professional architects. It is worth noting that our cross-dataset results are still noticeably better than the in-dataset results of the TransVector baseline. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and additional experiments. The rebuttal addressed all of my concerns. --- Reply to Comment 1.1.1: Comment: We again thank the reviewer for valuable input regarding the experiments, comments, and responses. We are glad that our responses addressed all the reviewer's concerns. We will incorporate the discussed clarifications and experiments into the final manuscript.
null
null
null
null
null
null
Topology-Aware Uncertainty for Image Segmentation
Accept (poster)
Summary: This paper proposes a framework that utilizes a probabilistic approach to extract structure-wise uncertainty estimates. This is achieved by extending the DMT to a probabilistic setting that models each structure as a sample from a probability distribution, thus capturing the intra-structural uncertainty. The proposed method then incorporates inter-structural uncertainty through a regression network which jointly reasons over the structures, using a Graph Neural Network (GNN). Additionally, a specialized inference procedure and post-processing steps are used to generate a structure-wise uncertainty heatmap, which can improve segmentation and quantify uncertainty more effectively. Strengths: - The authors demonstrate the versatility of their method by applying it to multiple segmentation network backbones and datasets. The proposed method was found to improve the quality of segmentation and produce high fidelity uncertainty maps for each network, making it backbone-agnostic. - The introduction of a probabilistic DMT and a GNN to reason about inter-structural uncertainty is a significant advancement in this field. - The proposed method for quantifying structure-wise uncertainty in segmentation networks has significant implications for medical image analysis. The ability to better understand and represent the uncertainty can potentially lead to improved segmentation quality, and could be crucial in medical applications where decision-making is based on these segmentation outputs. Weaknesses: - Small dataset size: DRIVE dataset 40 images. ROSE 39 images. PARSE 100 volumes. Larger dataset should be considered to evaluate the proposed method. - Limited Explanation of Hyperparameter: while the authors have considered a range of hyperparameters in main body and supplementary, the paper could benefit from a more detailed discussion of the influence of these hyperparameters on the method's performance and how the optimal values were determined. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please add larger dataset and evaluate the proposed method. Also, explain more about the hyperparameters. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please see the 'questions' part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! Please find our responses to specific queries below. **Q1:** Please add a larger dataset to evaluate the proposed method. **A1:** As recommended, we conduct these experiments. Please see 1. in the global response ‘Author Rebuttal by Authors’ above. **Q2:** Limited Explanation of Hyperparameter: while the authors have considered a range of hyperparameters in main body and supplementary, the paper could benefit from a more detailed discussion of the influence of these hyperparameters on the method's performance and how the optimal values were determined. **A2:** This is a good question and we provide a detailed discussion as requested. The main hyperparameters in this work are $u, \gamma, \alpha, \beta$. We describe the importance of each below: - $u$ : This is the parameter for the Bernoulli distribution, and we introduce it in L221 of the main paper. In our Prob. DMT module, for every structure, we have a choice to either retain the structure as obtained from DMT, or generate a sample skeleton using the perturb-and-walk algorithm. We model this choice using the Bernoulli distribution. Essentially, in some runs we would like the original DMT structures to also interact with the others. Thus a low value of $u$ works best. We found $u = 0.3$ to give the best performance, that is, for every structure there is a 30% chance that its DMT form is used and a 70% chance that a sample variant is used. We find that values in $0.15 \leq u \leq 0.3$ give comparable performance. - $\gamma$ : This hyperparameter is used in the weighted combination of distance $Q_d$ and likelihood $f_n$ to obtain $Q(c’)$, which is used to determine the next pixel location. We introduce $\gamma$ in L210 of the main paper. It maintains a tradeoff between the distance regularizer $Q_d$ and the perturbed likelihood $f_n$. 
The higher the value of $\gamma$, the stronger the distance regularization, and consequently the generated path becomes closer to a straight line. This is not desirable, as a straight line would lose the original composition of the structure. Additionally, because of the perturbation in the likelihood, we do not want the path to go astray. And so, to ensure path completeness, we require $\gamma$ to be non-zero. Through experiments, we obtain the best performance when $\gamma = 0.2$. We provide ablation study results of different $\gamma$ values in Section 12 of the supplementary. - $\alpha, \beta$ : These are prior hyperparameters of the Inverse Gamma (IG) distribution which we introduce in L204 of the main paper. We perturb the likelihood using a Gaussian model. As the variance of the Gaussian model is unknown, we use Bayesian probability theory to sample the variance from the IG distribution (its conjugate prior). And so, $\alpha$ is the shape parameter and $\beta$ is the scale parameter of this IG distribution. Ideally we would like a small perturbation of the likelihood and not a strong one. This is because a strong perturbation would wholly corrupt the likelihood and we would not be able to sample a reasonable skeleton. At the same time, the perturbation should not be too small, otherwise we will not obtain a significant variant. The mean of the IG distribution is $\frac{\beta}{\alpha - 1}$ (when $\alpha > 1, \beta > 0$), which on average is the value of the sampled variance for the Gaussian distribution. We achieve the best performance when $\alpha = 2.0$ and $\beta = 0.01$. The resulting sampled variance for the Gaussian model thus generates reasonable perturbation. We provide ablation study results of different $\alpha, \beta$ values in Section 12 of the supplementary. At the extreme ends of the graph plot in Fig. 13 of the supplementary, the sampled variance is either too low or too high, resulting in a decrease in performance. 
We hope the above discussion is helpful. We will definitely add this to the revised version of the paper. Thank you very much for your review! We hope we were able to clarify your comments, and we would be happy to discuss further! --- Rebuttal Comment 1.1: Comment: The rebuttal solves all my concerns. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you very much for your response! We are glad that the rebuttal was able to solve all of your concerns. If further clarifications are needed for reevaluating the score, we would be happy to continue the discussion! Sincerely, Authors#14252
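The perturbation scheme discussed for $\alpha, \beta$ above could be sketched as follows: a variance is drawn from an Inverse-Gamma($\alpha$, $\beta$) prior (mean $\beta/(\alpha-1)$ for $\alpha > 1$, i.e. 0.01 at the reported optimum) and used to add Gaussian noise to the likelihood map. The function name and array-based interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def perturb_likelihood(likelihood, alpha=2.0, beta=0.01, rng=None):
    """Draw a variance from Inverse-Gamma(alpha, beta) -- the reciprocal
    of a Gamma(shape=alpha, rate=beta) sample -- then perturb the
    likelihood map with zero-mean Gaussian noise of that variance."""
    rng = np.random.default_rng() if rng is None else rng
    variance = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta)
    return likelihood + rng.normal(0.0, np.sqrt(variance), size=likelihood.shape)
```

Because an IG sample is the reciprocal of a Gamma sample with the rate and scale roles swapped, no special IG sampler is needed.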
Summary: This paper proposed a topology-aware uncertainty estimation method to segment curvilinear objects. The main contribution focuses on the application of discrete Morse theory (DMT). On several public datasets, the proposed method achieves SOTA performance and the visual results demonstrate the connectivity of vessels or other objects can be enhanced. Strengths: 1. Clear and well-organized paper. 2. The object connectivity is improved by the proposed sound framework. Weaknesses: 1. This paper is mainly based on [24]. The technical contribution is a bit marginal here. Please clearly state the main differences and the take-home insights. 2. Some non-deep methods are also good at achieving better topologies. See: [1] Liu, Siqi, et al. "Rivulet: 3D neuron morphology tracing with iterative back-tracking." Neuroinformatics 14 (2016): 387-401. Please include them for discussion and comparison. 3. The title is a bit over-claimed. "Topology-Aware Uncertainty for Curvilinear Object Segmentation" might be more suitable. 4. Using graph models to capture the topology information is not new. See: [2] Shin, Seung Yeon, et al. "Deep vessel segmentation by learning graphical connectivity." Medical image analysis 58 (2019): 101556. More discussions and comparison should be added. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See the above weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See the above weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We will revise our manuscript accordingly. Please find our responses to specific queries below. **Q1:** This paper is mainly based on [24]. The technical contribution is a bit marginal here. Please clearly state the main differences and the take-home insights. **A1:** While we tackle the same problem as [24] (i.e., structure-wise uncertainty estimation), our method is significantly different from theirs. We described the key differences in L87 of the main paper, and we elaborate below. We refer to the method in [24] as Hu et al. - Hu et al. uses classic DMT to deterministically generate skeletons, thus failing to model intra-structural uncertainty. As we show in Fig. 2a) and Fig. 5 of the main paper, DMT structures often differ from the true GT structure. This is a common problem because of the tortuous nature of the structures. The uncertainty formulation in Hu et al. mainly relies on the persistence of a structure, thus failing to capture the uncertainty with respect to the structure composition itself. In the ablation study in Table 3, we show how our proposed Prob. DMT results in improvement over DMT. Also as stated in L219, DMT is just one specific instance of our Prob. DMT. - We propose a joint inference model to jointly predict uncertainties of all the structures. This joint inference framework avoids explicit enumeration/sampling over the exponential size space of hypotheses. This is in contrast to Hu et al.’s method whose main aim was to generate multiple segmentation hypotheses, thus suffering from the enumeration problem. To reduce this computational burden, they used a global persistence value to prune structures in each run. This pruning was coarse, and such a global thresholding/pruning is harsh in practice, leading to suboptimal uncertainty estimation. 
- Our approach incorporates inter-structural uncertainty using GNNs, recognizing that structures in image space interact with each other and are not isolated. During uncertainty estimation, it is therefore crucial to consider their spatial context, i.e., inter-structural uncertainty. In the Table 3 ablation study, we show how incorporating GNN for inter-structural uncertainty results in improvement. - Finally, from Fig. 1, 8, 14 and Tables 1, 6, it is evident that Hu et al.’s method tends to produce over-confident uncertainty estimates --- they assign zero uncertainty (100% confidence) to most structures. On the other hand, our method, accounting for both intra- and inter-structural uncertainties, produces higher fidelity uncertainty estimates. We believe the above points strongly differentiate our work from [24], both in terms of methodology as well as performance. **Q2:** Some non-deep methods are also good at achieving better topologies. See: [1] Liu, Siqi, et al. "Rivulet: 3D neuron morphology tracing with iterative back-tracking." Neuroinformatics 14 (2016): 387-401. Please include them for discussion and comparison. **A2:** Thank you for providing this citation. While Rivulet aims to enhance segmentation quality, our primary goal is structural-level uncertainty estimation, with segmentation improvement coming naturally. Hence directly comparing the two does not seem straight-forward. Furthermore, while Rivulet also generates centerline skeletons similar to DMT, DMT has the important property that it decomposes the likelihood into a set of constituent structures (each structure is a path between a saddle-maxima pair). This decomposition is crucial as we ultimately estimate the uncertainty for each of these structures. Rivulet does not provide any such decomposition and hence cannot be used as a substitute in our framework. **Q3:** The title is a bit over-claimed. "Topology-Aware Uncertainty for Curvilinear Object Segmentation" might be more suitable. 
**A3:** We understand the sentiment and will update the title to "Topology-Aware Uncertainty for Curvilinear Structure Segmentation" in the revised version. **Q4:** Using graph models to capture the topology information is not new. See: [2] Shin, Seung Yeon, et al. "Deep vessel segmentation by learning graphical connectivity." Medical image analysis 58 (2019): 101556. More discussions and comparison should be added. **A4:** Thank you for providing this citation. Indeed, the referenced paper uses graph models for vessel segmentation, however, our method utilizes GNNs in a different way. In the referenced paper, the vertices of the graph are pixels sampled from vessel centerlines. In contrast, our method generates a structure-level graph, treating entire structures (collections of pixels) as vertices. Thus we directly model the structures in the graph, instead of a sampled subset of points. Moreover, our graph model generates uncertainty estimates, while the referenced paper focuses on classifying pixels as vessels. Thank you so much for your review! We would be happy to discuss further! --- Rebuttal Comment 1.1: Comment: Overall, the authors‘ responses solved most of my concerns. Regarding the vessel segmentation tasks, please add the discussion of future works in the final version. For example, the evaluation metric? the main obstacle for clinical applications? Thanks. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you very much for your response! We will definitely incorporate the discussion from the rebuttal to the revised version of the manuscript. We are glad that we were able to solve your concerns. If further clarifications are needed for reevaluating the score, we would be happy to continue the discussion! Sincerely, Authors#14252
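For contrast with the pixel-level graph of Shin et al., a structure-level graph of the kind described in A4 could be built roughly as follows. The minimum-distance adjacency rule and the threshold are illustrative assumptions for this sketch; the paper's actual construction may differ.

```python
import math

def build_structure_graph(structures, adjacency_threshold=2.0):
    """Build a structure-level graph: each vertex is an entire skeleton
    structure (a list of pixel coordinates), and an edge connects two
    structures whose closest pixels lie within a distance threshold."""
    def min_dist(a, b):
        return min(math.dist(p, q) for p in a for q in b)
    nodes = list(range(len(structures)))
    edges = [(i, j) for i in nodes for j in nodes[i + 1:]
             if min_dist(structures[i], structures[j]) <= adjacency_threshold]
    return nodes, edges
```

The key difference from a pixel-level graph is that each vertex here aggregates a whole structure, so a GNN over this graph can regress one uncertainty value per structure rather than per pixel.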
Summary: This paper proposes a novel method for the estimation of uncertainty of the structures in the segmentation results from existing methods/models, in order to facilitate the subsequent proofreading process. To this end, it models the intra-structure and inter-structure uncertainties in two modules. While the former considers the geometry, contrast and model’s confidence, the latter considers neighbours and thus context. The former is implemented using the Probabilistic discrete Morse theory (DMT), which samples the Morse skeletons using the inverse Gamma distribution. The latter uses a graph neural network (GNN) for the prediction of the uncertainty of all the structures jointly through regression using the attenuation loss. The proposed method has been validated over three publicly accessible datasets and compared with several state-of-the-art. Relatively better results have been obtained in different metrics. A number of ablation studies have also been carried out on variants of the DMT and GNN and some hyperparameters. Some insights have been accumulated into how the proposed method behaves under different configurations. Strengths: 1. The topic of the paper is interesting and important for the subsequent validation of the results for image segmentation and finds many applications in the real world such as medical image segmentation and analysis, object classification and recognition, industrial quality assurance, etc. 2. A novel method has been proposed to post-process the segmentation results produced by some existing models for their proofreading and validation. It includes two main modules: intra-structure uncertainty and inter-structure uncertainty estimation. While the former is modelled using the probabilistic DMT, the latter is modelled using the graph neural network. The method is well motivated and supported by solid theory. 3. The proposed method has been validated over three publicly accessible datasets and compared with several state-of-the-art. 
Relatively better results have been obtained in different metrics. The t-test has also been performed to check whether the improvement is significant. 4. A number of ablation studies have been carried out on the variants of DMT and GNN and some hyperparameters in the process of sampling of Morse skeletons. Some insights have been accumulated into how the proposed method behaves under different conditions, which would instruct how it can be applied in the real world. 5. The proposed uncertainty estimation has been applied to re-calibrate the segmentation results obtained. The experimental results have shown that it does help identify the false positives and false negatives. Weaknesses: 1. While the paper targets the proofreading of the segmentation results by experts, its necessity could be emphasised: the automatic algorithms cannot guarantee the correctness of the segmentation, especially for medical imaging where the anatomy and structures may vary from one subject to another, and their prior knowledge is not always available. This is also the process for the relevant researchers/experts to learn and accumulate insights about the variation of the structures of the subjects for individualised diagnosis, treatment and medicine. 2. Some details, elaborations, clarifications and discussions are missing from place to place. For example, while the construction of the graph is described later, its main idea could be summarised in Introduction. “structure” is widely used throughout the paper but its count has never been discussed. While the ground truth is again used to train the GNN, some discussions could be made about its special requirement: GT plays a crucial role in the validation and guiding the estimation of the uncertainty for false positives and false negatives and thus, may have special requirements regarding quality and reliability. It is not clear how to guarantee this, especially when the datasets are large and include subtle structures. 
On the other hand, is it possible to use the salient structures to guide the search for faint structures in the spirit of Canny edge detection? “Persistence value” in L239-240: how is it calculated, and where does it come from? No details or references are given. 3. The key steps for the re-calculation of the Morse skeletons in probabilistic DMT may require further investigation. The next-point selection criterion should be stated explicitly (maximize the sum of the inverse distance and the likelihood over the candidates), and the authors should discuss why this criterion is feasible, especially when the distance is not really directly comparable with the likelihood. The definition of Q(c’) may require further investigation: what is the rationale for this definition? While the first term is an inverse distance, is it comparable to the likelihood in the second term? Is this optimal? Is there any other alternative? Is it possible to conclude that the first term will always lie in the unit interval [0, 1]? Overall, this is still a heuristic, which may not hold in some cases. “This process is done separately and in parallel for every structure.” in L222-223: how many Morse skeletons were sampled for each structure, and can their means and variances be used directly as the uncertainty? The final results may be affected by the number of runs and their combinations. 4. The computational complexity and time have not been analysed and reported. Thus, it is not clear how efficient the proposed method is and how much time it requires to process a set of given images. 5. Further analysis of the experimental results would help. For example, the proposed method is effective in improving the results of the existing methods in ECE, clDice, ARI and VOI, but not always in Dice. It is not clear why. Any further insights and explanations would really help. 6.
More ablation studies could be carried out on other parameters and components such as the dimensionality of the input feature vector, crops/bounding boxes on the structure, and alternatives to the shortest distance for structure inference. 7. The claim that the proposed method may be applicable to non-medical applications (civil engineering, road network and railway track segmentation) has not been validated. 8. It is not clear how prior knowledge can be used to guide the image segmentation, uncertainty estimation, and result re-calibration, rather than just relying on heuristics in skeleton re-calculation and structure inference, while such heuristics may not hold in some cases. Also, it is not clear how such heuristics contribute to the final errors. Some further investigation would be encouraged. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see the comments/concerns in weaknesses. More detailed comments are as follows: 1. L41: accept or reject/correct structural proposals efficiently: relative to what, prior knowledge or experience? 2. L132, each structure: some elaboration would help: what is it: a line segment, blob, junction, keypoint, or anything else? 3. L140-14, At each training iteration, it takes one sample skeleton for each structure: Does this mean that the number of iterations will be determined by the total number of structures and the number of sampled skeletons for each structure? How to count the number of structures? Any details? 4. L170, a perturb-and-walk algorithm: refs? 5. L192-194, At every step, we always walk to the neighboring pixel with the highest likelihood value: This is a heuristic only, which may not hold in some cases. 6. L224, The output of Prob. DMT is effectively one sample skeleton: is the number of sample skeletons the same as the number of runs? Can all the sample skeletons be directly used to calculate the uncertainties? How to count the number of structures? 7.
L227-228, network that takes as input each structure: This is confusing: does it take as input each structure each time, or all the structures together? If the former is the case, do you have to run the model many times? What is the motivation for this? Can the structures be combined so that the model runs once to directly estimate the uncertainties? 8. L237, are smaller crops/bounding boxes: what are the sizes of the boxes? Any details? 9. Sec. 3.3: shortest distance: This is a heuristic only, which may not hold in some cases. 10. L299, others -> the others Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations of the proposed work are discussed in the supplementary materials. The potential negative social impacts are not highly related and thus are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We will revise our manuscript accordingly and address your questions below. **Q1:** Guarantee of GT quality/reliability? **A1:** We use public datasets which are considered reasonably good, though are not guaranteed to be flawless. Our method can aid in refining the datasets: structure-wise uncertainty can reveal potential annotation errors. **Q2:** Can salient structures guide search for faint structures like Canny edge detection (CED)? **A2:** Our work is primarily on uncertainty estimation, tested with human-involved proofreading. An alternative is a human-free automated system that utilizes uncertainty to accept/reject structures. Your analogy of CED suggests high-confidence (low uncertainty) structures could support inclusion of nearby faint (low confidence) ones. This is intriguing and merits further exploration. **Q3:** Details/ref of Persistence Value L239. **A3:** We apologize, the missing reference is an oversight on our part. Persistence value (from persistent homology [4]) is defined as the difference of function (likelihood) values of 2 critical cells (saddle-maxima pair). It captures the importance of a structure, thus making it a valuable feature in our framework. **Q4:** Rationale of $Q(c’)$? Is the first term (inverse distance) comparable to the second term (likelihood)? Is this optimal? Alternatives? Is the first term always in [0,1]? Overall it’s a heuristic which may not always hold. **A4:** We provide the rationale for $Q(c’)$, especially the inverse-distance term $Q_d$, in L196, L215. As we perform the walk algorithm on a perturbed input, it’s possible the path would go astray and not reach the destination. $Q_d$ acts as a regularizer to likelihood $f_n$, guiding the path from source $c_s$ to destination $c_m$, ensuring path completeness. $Q_d$ will always be in [0,1] as it’s the inverse of Euclidean distance. 
Given 2 different pixel locations, the minimum Euclidean distance will be at least 1, and so $Q_d$ will be at most 1. The range of $f_n, Q_d$ are [0,1], so they are comparable. Their combination is weighted by $\gamma$ for the final $Q(c’)$ metric. Fig.13 (supple.) shows ablation study on $\gamma$ emphasizing $Q_d$’s importance. When $\gamma = 0, Q_d$ is not used, leading to decrease in performance. Notice $\gamma > 0$ results in sharp improvement, empirically showing $Q_d$ is essential in the next point selection criteria $Q(c’)$. You are correct that $Q(c’)$ is heuristic, but we find it works well in practice. That said, our future goal is to explore a theoretically guaranteed algorithm which can estimate a distribution of Morse complexes from noisy observations. **Q5: a)** L222: How many skeletons are sampled per structure? Does this equal the no. of runs? Do runs and their combinations influence results? **b)** L140: Does no. of training iterations depend on no. of sampled skeletons? **c)** L227: Can structures be combined to run the model once for direct uncertainty estimation? **A5: a)** In L294, we mention we take 5 runs, i.e., we sample 5 Morse skeletons per structure. L261 ‘Inference procedure’ outlines how T runs of the framework generate uncertainty estimates. While results vary with run count and resulting combinations, we find that T > 5 did not result in statistically significant improvement. Note that T runs are typical for uncertainty-estimation techniques, including the probabilistic methods we compare against. **b)** No. of epochs during training is independent of T. Every epoch, we sample 1 skeleton per structure. **c)** During inference, we conduct T runs, sampling a different skeleton in each run. As you mentioned, the alternative is to pre-generate multiple skeletons at once, and feed them together to obtain uncertainty in one pass. However, we want a flexible framework. 
If N skeletons are pre-generated, the network will rigidly require N structures, limiting flexibility for users with specific time/memory needs. Our approach lets users tweak T without retraining, unlike the alternative where altering N mandates retraining. **Q6:** Sec 3.3 shortest distance is a heuristic which may not always hold. Alternatives? **A6:** In L269, we emphasize that the shortest distance is applied solely to foreground pixels, a stronger constraint than using the shortest distance by itself. This ensures that structures separated by background do not mistakenly assign uncertainties to one another. **Q7:** How can prior knowledge guide segmentation, uncertainty and re-calibration, rather than just based on heuristics in skeleton-recalculation and inference? **A7:** We are unclear about what you mean by ‘using prior knowledge’. We assume you mean how annotators’ knowledge can be integrated via active learning. Presently, we focus on generating structure-wise uncertainty estimates that can highlight uncertain structures and solicit further input from annotators. How to better learn from user input is a good yet different question that deserves future study. We hope this answers your query, and would be happy to address more comments during the discussion period. As for heuristics, we believe you are referring to $Q(c’)$. We explain it in Q4/A4, and would like to reiterate that despite being a heuristic, we obtain good results in our experimentation. **Q8:** L41: accept/reject structural proposals: based on prior knowledge or experience? **A8:** This is subjective: in medical contexts, clinicians rely on expertise, while in non-medical contexts like ROADS [1], an average user can apply their judgment. Thus decisions depend on various factors like context, task, participants, etc. **Q9:** L132: What is a structure? **A9:** In L160, we describe structures as Morse structures: V-paths connecting saddle-maxima pairs.
A structure is thus a piece of a larger curvilinear structure (Fig.4, 5 visualize some structures that we encounter). Thank you so much for your review! We would be happy to discuss further! --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses to the comments raised. While all the datasets used in the experiments are medical vessel datasets, some a priori knowledge may exist about the distribution, topology, and geometry of the vessels of the eyes and heart. Can such knowledge be used to guide the segmentation or uncertainty estimation process, rather than using only heuristics to infer the structures? --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you very much for your response and the clarification on a priori knowledge. To the best of our knowledge, there are no well-defined generalizable characteristics of vessels that can be used as a priori knowledge. This is because vessel configurations depend on various factors. For instance, the structure of vessels varies significantly across individuals based on age, genetics, health conditions, etc. Pathological conditions can further alter the topology and geometry of vessels. For example, conditions like retinopathy or arteriosclerosis can drastically change the appearance and structure of vessels. Relying on a priori knowledge might not always be suitable in such cases. Vessels are also not static structures. Their size, branching pattern, and even direction might change based on various physiological factors such as blood flow, oxygen demand, and tissue growth or repair. Then there are also technical details: medical datasets can be captured at various resolutions and scales, which influences topology/geometry characteristics. Considering the above, it is challenging to obtain generalizable constraints to use as a priori knowledge. Solely relying on them can introduce bias or inaccuracies.
Thus, we find that while $Q(c')$ is a heuristic, it works well in practice and we obtain good results in our experimentation. Integrating a priori knowledge based on the specificities of the dataset would require further investigation. Additionally, in this rebuttal, we also conduct an experiment on a non-vessel, non-medical ROADS dataset (as mentioned in 1. under the global response ‘Author Rebuttal by Authors’ above). If you have further thoughts on this, we would be glad to continue the discussion! Sincerely, Authors#14252
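The next-point criterion debated in this thread can be illustrated with a small sketch. This is not the paper's implementation: the exact combination of the inverse-distance term $Q_d$ and the likelihood $f_n$, the weighting by $\gamma$, and the 8-connected neighborhood are assumptions here, under one plausible reading $Q(c') = \gamma \cdot Q_d(c') + f_n(c')$.

```python
import numpy as np

def next_point_score(candidate, destination, likelihood, gamma=0.5):
    # Q_d: inverse Euclidean distance to the destination. For two distinct
    # pixel locations the distance is at least 1, so Q_d lies in (0, 1].
    q_d = 1.0 / max(1.0, float(np.linalg.norm(np.subtract(candidate, destination))))
    # Hypothetical combination Q(c') = gamma * Q_d + f_n; the paper's exact
    # weighting may differ.
    return gamma * q_d + likelihood[candidate]

def greedy_walk_step(current, destination, likelihood, gamma=0.5):
    # Move to the in-bounds 8-connected neighbor maximizing Q(c').
    h, w = likelihood.shape
    neighbors = [(current[0] + dy, current[1] + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)
                 and 0 <= current[0] + dy < h and 0 <= current[1] + dx < w]
    return max(neighbors, key=lambda c: next_point_score(c, destination, likelihood, gamma))
```

With a flat likelihood map, the distance term alone pulls the walk toward the destination, which is the regularizing role the rebuttal ascribes to $Q_d$ on perturbed inputs.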
Summary: This work aims to contribute to proofreading by proposing uncertain structures in a topological sense. The work proposes a method to quantify a form of structure-wise uncertainty from segmentations, where the framework explicitly models structures as samples from a probability distribution. First, the structures are extracted via discrete Morse theory (DMT). Next, the uncertainty is modeled via a joint prediction model that estimates the uncertainty of a structure in consideration of the surrounding structures. Furthermore, the authors propose a novel probabilistic DMT concept to model intra-structure uncertainty. The method is then successfully experimentally validated. Strengths: - The work presents a real generalization of the work on DMT for segmentation [25]. E.g., if I understand it correctly, if one chooses perturbation 0, the result of the method will be exactly the DMT result. This is a nice property and a strong extension of prior work. - The motivation is clear and interesting. - The method is well described and formalized. Weaknesses: 1) **Experimentation:** The authors state to propose a "topology-aware" method. However, the authors do not evaluate their results on popular topology-related metrics. E.g., Betti number errors in dimensions 0,1, and 2 (for the 3D dataset). Evaluation of the performance with respect to these metrics will improve the Experimentation. Comparison to the recently published Betti matching error (1), which considers the spatial agreement of the topological structures, would further increase the interpretability of results. 2) **Method** If I understand the method correctly, the DMT calculation at the end could be seen as a post-processing step or additional network to improve connectivity on the results; it has been shown for curvilinear structure segmentation that such an additional step improves segmentation performance. 
Clearly, such a concept makes this a multi-step procedure which has limitations compared to the cited method by Hu et al. 3) The definition of "topological structures" is not very clear, and to me, it appears to not align with some definitions in algebraic topology, especially in dimension-1. Intuitively I would expect a "topological structure" to be (e.g., in dimension-1) a closed loop. This appears not to be the case here. If I misunderstand this, could the authors clarify how they represent the cycles, and how this is different from their representation of features in dimension-0 and provide more explanations? **References**: [] are references from manuscript (1) Stucki, N. , et al. "Topologically Faithful Image Segmentation via Induced Matching of Persistence Barcodes." ICML (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: This is a minor question regarding the utility of the method. The authors motivate by and mention that their contribution is a way to simplify and improve proofreading. Do you have practical experiments in a proofreading setting that you can share? Is there an experiment with human experts, e.g., readers in ophthalmology? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not provide a dedicated limitations section. I would like to learn more about the limitations of their method in the context of stricter definitions in algebraic topology. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! Please find our responses to specific queries below. **Q1:** The work presents a real generalization of the work on DMT for segmentation [25]. **A1:** We would like to clarify that the goal of this paper is very different from [25]. [25] and other topology-preserving segmentation methods focus only on a segmentation network. In contrast, our method assumes a given segmentation network, and focuses on estimating the uncertainty of the given segmentation output at a structural level. Topology-preserving segmentation is only one of the many applications of our work. **Q2:** Evaluate results on popular topology-related metrics, e.g., Betti Number errors and Betti Matching error. The comparison would further increase the interpretability of results. **A2:** Thank you for the suggestion. We now provide these results; please see 2. in the global response ‘Author Rebuttal by Authors’ above. **Q3:** If I understand the method correctly, the DMT calculation at the end could be seen as a post-processing step or additional network to improve connectivity on the results; it has been shown for curvilinear structure segmentation that such an additional step improves segmentation performance. Clearly, such a concept makes this a multi-step procedure which has limitations compared to the cited method by Hu et al. **A3:** As we clarify in Q1/A1, given a segmentation network, our goal is to capture the uncertainty of its prediction at a structural level. As the segmentation network is not part of our contribution (and instead is an input to our method), our work cannot be considered as a post-processing/multi-step approach. This is also reflective of real-world scenarios where segmentation networks are often black-boxes, with users being allowed to only access results of the network and not its internals. 
Thus using only the results of the given segmentation network, our method is able to generate structure-wise uncertainty estimates to streamline the proofreading process. **Q4:** The definition of "topological structures" is not very clear, and to me, it appears to not align with some definitions in algebraic topology, especially in dimension-1. Intuitively I would expect a "topological structure" to be (e.g., in dimension-1) a closed loop. This appears not to be the case here. If I misunderstand this, could the authors clarify how they represent the cycles, and how this is different from their representation of features in dimension-0 and provide more explanations? **A4:** This is an excellent question and we would like to clarify the relationship between Morse theory and the theory of persistent homology [4]. In L147, we describe Discrete Morse theory (DMT), and specifically in L159-161, we state that the “structures” are zero- and one-dimensional Morse structures. Morse structures are essentially critical points and special paths connecting them. Morse theory has a very strong relationship with the theory of persistent homology; given a Morse complex, one can exactly compute the persistent homology [5,6]. Discrete Morse theory has been used in the literature for the computation and simplification of persistent homology. Regarding the representation of cycles/loops (dim-1) in the sense of algebraic topology, while we do not explicitly model them, our method implicitly induces their topological correctness. We express in the paper that our method is topology-aware because of the strong relationship Morse complexes have with topology. When the prediction of the Morse structures is correct, the topology in all dimensions is guaranteed to be correct. **Q5:** Do you have practical experiments in a proofreading setting that you can share? Is there an experiment with human experts, e.g., readers in ophthalmology? 
**A5:** Yes, in L308 of the main paper, we had included results of proofreading experiments comparing our method with Hu et al.’s on the ROSE dataset. Our method was able to improve the segmentation result significantly with a relatively fewer number of clicks. We conducted these experiments with a group of researchers. As future work, we have plans to include clinicians to test our framework. **Q6:** The authors do not provide a dedicated limitations section. I would like to learn more about the limitations of their method in the context of stricter definitions in algebraic topology. **A6:** We included the limitations section in Section 14 of the supplementary. As answered in Q4/A4 above, our method uses Morse theory instead of persistent homology, and that there is a strong relationship between the two. As future work, we find that beyond curvilinear segmentation, general object segmentation can also benefit from structure-wise uncertainty (structures in this case would be smaller patches/volumes of the object). Discrete Morse theory can be used in this setting, however, we would need to make use of topological features other than the stable manifold. Thank you very much for your review! We hope we were able to clarify your comments, and we would be happy to discuss further! --- Rebuttal Comment 1.1: Comment: Dear authors, overall the concerns are appropriately addressed and I recommend accepting the paper. I would really encourage the authors to elaborate more on the theoretical limitations in the manuscript and add the nice additional Experimentation, specifically the performance in terms of Betti matching error, to the main manuscript. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you very much for your response and recommendation! We will definitely incorporate the clarifications from the rebuttal to the revised version of the manuscript. We are glad that we were able to address your concerns. Sincerely, Authors#14252
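The persistence value discussed in this exchange (the difference of function values of a saddle-maxima pair) can be made concrete with a minimal 1D sketch. This union-find computation of 0-dimensional superlevel-set persistence is a standard construction from persistent homology, not the paper's DMT implementation; names are illustrative.

```python
def persistence_pairs_1d(values):
    # Each local maximum is "born" at its height; it "dies" when its
    # superlevel-set component merges, at a separating minimum (the 1D
    # analogue of a saddle), into a component with a higher peak.
    # Persistence = peak height minus merge height; the global maximum
    # never dies.
    order = sorted(range(len(values)), key=lambda i: -values[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i
        roots = sorted({find(j) for j in (i - 1, i + 1) if j in parent},
                       key=lambda r: values[r])
        if roots:
            high = roots[-1]   # neighboring component with the highest peak
            parent[i] = high   # i joins that component
            for low in roots[:-1]:
                # the lower peak dies here: (peak height, persistence value)
                pairs.append((values[low], values[low] - values[i]))
                parent[low] = high
    return pairs
```

For example, `[1, 5, 2, 4, 0, 3, 1]` has peaks 5, 4, 3; the peak of height 4 merges at the saddle of height 2 (persistence 2) and the peak of height 3 merges at height 0 (persistence 3), which matches the "importance of a structure" interpretation given in the rebuttal.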
Rebuttal 1: Rebuttal: We thank the reviewers for their time and insightful feedback. We are encouraged that all the reviewers appreciated the novelty of the contribution, and found our work to be methodologically sound and effective. We have uploaded a 1-page PDF where we add results of additional experiments as requested by the reviewers: **1. Validate the proposed method on a large non-medical dataset (oww6, ns8Y)** We conduct additional experiments on ROADS [1] --- a large, non-medical dataset containing 1171 aerial images (1108/14/49 train/val/test), each of 1500 x 1500 resolution. It is a challenging dataset due to obstruction from nearby trees, shadows, varying texture/color of roads, road class imbalance etc. The quantitative and qualitative results are provided in Table 6 and Fig. 14 of the rebuttal PDF respectively. Table 6 shows that our method outperforms the other probabilistic methods on both ECE and segmentation metrics. Fig. 14 shows that our method generates better fidelity structure-wise uncertainty maps compared to Hu et al. Our heatmaps assign non-zero uncertainty to several false positives/negatives in the backbone UNet’s outputs. This is because we reason about every structure while Hu et al. limits the structure space via pruning. **2. Evaluate on topology-related metrics: Betti Number error [2] and Betti Matching error [3] (qp6i)** We provide results on these metrics in Table 7 of the rebuttal PDF. Our method consistently improves the segmentation result in terms of topology. This is consistent with our results in Table 1 of the main paper where our method outperforms the other methods on topology-based metrics like clDice, ARI and VOI. Note that for the 3D PARSE dataset, we were unable to provide Betti Matching error results as its official implementation handles only 2D inputs. **3. 
Conduct ablation study on other parameters like dimensionality of input feature vector and crops/bounding boxes (oww6)** We now include ablation studies on the dimensionality of the input feature vector, and size of the crops/bounding boxes, and report this in Table 8 of the rebuttal PDF. We obtain the best results when the input feature vector size is 32 and the bounding box is 32 x 32. For lower values (16 and 16 x 16), the performance reduces, while for higher values (64 and 64 x 64) we did not observe any statistically significant improvement. Thus to maintain the tradeoff between complexity and performance, we respectively use 32 and 32 x 32 for these hyperparameters. **4. Please report computational complexity and time (oww6)** We report the inference time for 5 runs on a 256 x 256 input image patch as follows: Prob.-UNet: 0.196 sec; PHiSeg: 1.811 sec; Hu et al.: 5.485 sec; Ours: 7.433 sec. The module which takes the most time is the DMT / Prob.DMT computation. Presently, this is the most optimized version as we have implemented it as an external module in C++. We will work towards porting the code to run on GPU to bring down the runtime even more. Following [7], the computational complexity of DMT is $O(n \log n)$, where $n$ is the number of pixels in the image. Since Prob. DMT additionally computes structure variants, the complexity is $O(n \log n + m)$ where $m$ is approximately the number of foreground pixels, and typically $m \ll n$ for curvilinear structure datasets. The linear term $m$ is added as we traverse each foreground pixel only once when generating the sample skeleton. **5. The proposed method is effective in improving the results on ECE, clDice, ARI and VOI, but not always on Dice. Please provide insights and explanations (oww6)** This observation is accurate for datasets having curvilinear structures. This is because improvements in segmentation are obtained by recovering broken connections and false negative structures.
As curvilinear structures are inherently thin (only a few pixels wide), the recovered connections and false negatives are also quite thin and hence do not affect the Dice score greatly. That being said, we would like to emphasize that we do achieve statistically significant improvement even in Dice for all the 2D datasets (DRIVE, ROSE, ROADS). In the case of the 3D PARSE dataset, we obtain numerically better results for Dice although it is not statistically significant. **6. L237: What are the sizes of smaller crops/bounding boxes? (oww6)** We provided these details in L77 of the supplementary. The size of the bounding box was 32 x 32 for 2D datasets, and 32 x 32 x 32 for the 3D dataset. In the Table 8 ablation study, we found this value to have a good tradeoff between computational complexity and performance. **7. Refs for L170 perturb-and-walk algo? (oww6)** We give our proposed algorithm the custom name of “perturb-and-walk”, and hence there is no reference. Nonetheless, its design is inspired by the random walk algorithm, for which we cite references in L206. We further reply individually to each reviewer to address their specific questions. **References used throughout the rebuttal:** [1] Volodymyr Mnih. Machine learning for aerial image labeling. University of Toronto (Canada), 2013 [2] Hu, Xiaoling, et al. "Topology-preserving deep image segmentation." NeurIPS, 2019 [3] Stucki, Nico, et al. "Topologically faithful image segmentation via induced matching of persistence barcodes." ICML, 2023 [4] Edelsbrunner, Letscher, and Zomorodian. "Topological persistence and simplification." Discrete & Computational Geometry 28 (2002) [5] Robins, Vanessa, Peter John Wood, and Adrian P. Sheppard. "Theory and algorithms for constructing discrete Morse complexes from grayscale digital images." TPAMI (2011) [6] Mischaikow, Konstantin, and Vidit Nanda. "Morse theory for filtrations and efficient computation of persistent homology." 
Discrete & Computational Geometry 50 (2013): 330-353 [7] Dey, Tamal K., Jiayuan Wang, and Yusu Wang. "Graph reconstruction by discrete Morse theory." arXiv preprint arXiv:1803.05093 (2018) Pdf: /pdf/025d61b491910640f1510b254349714d1212f0c2.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Tree-Rings Watermarks: Invisible Fingerprints for Diffusion Images
Accept (poster)
Summary: This paper proposes a method to add a watermark to images generated by a diffusion model during the generation process. It subtly influences the entire sampling process, resulting in a model fingerprint that is invisible to humans. In terms of the specific approach, the watermark embeds a pattern into the initial noise vector used for sampling. These patterns are structured in Fourier space so that they are invariant to convolutions, crops, dilations, flips, and rotations. After image generation, the watermark signal is detected by inverting the diffusion process (DDIM inversion) to retrieve the noise vector, which is then checked for the embedded signal. Strengths: - Although not practical, I think the idea is interesting. - Good writing. This paper is well written and easy to follow. Weaknesses: - I consider the method proposed in this paper to be image watermarking. My biggest concern is that the method is not practical. Image watermarking algorithms have been extensively studied; what are the advantages of this method over those methods? What's more, the method has some fatal flaws, such as that the modified initial noise no longer follows a standard Gaussian distribution, which will have an impact on the diversity and quality of the synthesis results, the necessity of using DDIM inversion to recover the watermark, etc. - Although three construction strategies are proposed in this paper, the resulting initialized noise does not follow the standard Gaussian distribution, which has an impact on the diversity and quality of the final synthesis results. However, there is no relevant experimental analysis in this paper. Please provide quantitative and qualitative experiments regarding the diversity and quality (CLIP Score and FID do not accurately reflect this, so a user study is required) of the synthesis results. - Stable Diffusion uses classifier-free guidance in the synthesis process, and the guidance scale is generally set to 7.5.
This paper's use of empty text to recover the initial noise through DDIM inversion is not able to reconstruct it accurately. The experiments in this paper also demonstrate this, so there is a big vulnerability in the method presented in this paper. - For multi-key capacity, experimental analysis needs to be provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The negative impact of using this technique needs to be declared and discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
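To make the mechanism under review concrete, here is a minimal numpy sketch of embedding and detecting a ring-shaped key in the Fourier transform of the initial noise. It illustrates the idea only, not the paper's implementation: the ring radius/width, key value, and detection statistic are all assumptions, and the diffusion model and DDIM inversion are skipped entirely.

```python
import numpy as np

def ring_mask(h, w, radius=10.0, width=2.0):
    # Pixels at (roughly) a fixed distance from the spectrum center; a ring
    # is constant along circles, hence invariant to rotations of the image.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return (dist >= radius - width) & (dist <= radius + width)

def embed_ring(noise, key=1.0, radius=10.0, width=2.0):
    # Overwrite the ring region of the (centered) spectrum with the key,
    # then transform back to obtain the watermarked initial noise.
    fft = np.fft.fftshift(np.fft.fft2(noise))
    fft[ring_mask(*noise.shape, radius, width)] = key
    return np.real(np.fft.ifft2(np.fft.ifftshift(fft)))

def ring_distance(noise, key=1.0, radius=10.0, width=2.0):
    # Detection statistic: mean deviation of the ring region from the key.
    # Small values indicate the watermark is present.
    fft = np.fft.fftshift(np.fft.fft2(noise))
    return float(np.mean(np.abs(fft[ring_mask(*noise.shape, radius, width)] - key)))
```

In the paper's pipeline, detection would run on the noise vector recovered via DDIM inversion of the generated image rather than on the noise itself, which is exactly where the reviewer's concern about empty-prompt inversion under classifier-free guidance applies.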
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and the time you've dedicated to providing it. Below, we address specific points you raised: > I consider the method proposed in this paper as image watermarking. My biggest concern is that the method is not practical. Image watermarking algorithms have been extensively studied; what are the advantages of this method over those? Moreover, the method has some fatal flaws, such as the modified initial noise no longer following a standard Gaussian distribution, which will have an impact on the diversity and quality of the synthesis results, and the necessity of using DDIM inversion to recover the watermark. Thank you for your feedback. Our watermarking approach is unique in that it applies directly to generative models. Instead of a classical image watermark, the watermark is embedded through a minimal change in the output distribution. Further, we believe the approach is practical and inexpensive because it can be applied at inference time without re-training large models. We show that this results in a watermark that is more robust than existing approaches currently deployed in Stable Diffusion. We'd be happy to compare to additional training-free image watermarks; which ones do you have in mind? We further agree that our approach implies that we have to evaluate whether the distribution of generated images is negatively affected, which we do extensively with FID- and CLIP-based metrics, measuring the diversity and quality of generated images. While we're happy to include additional metrics of image quality, such as the human study you mention, so far we see no indication that the proposed watermark is "fatally flawed" or that it introduces any noticeable image artifacts. We'd be glad if you could clarify your argument, and we'd revise our study if there is a clear direction in which you think it can be improved.
Please see Figure 2 in the attached PDF, which contains typical examples of images generated with and without the watermark. We do not see any evidence of quality degradation in these or other examples. As an aside, our method is based on DDIM inversion, but it is unclear to us why this would be considered a limitation. We're happy to discuss further. > Although three construction strategies are proposed in this paper, the resulting initialized noise does not satisfy the standard Gaussian distribution, which has an impact on the diversity and quality of the final synthesis results. However, there is no relevant experimental analysis in this paper. Please provide quantitative and qualitative experiments regarding the diversity and quality (CLIP Score, FID does not accurately reflect this, so user study is required) of the synthesis results. FID and CLIP scores are widely regarded as reasonable estimates of image diversity and quality, and we study the impact of the proposed watermark extensively under these metrics. As such, our submission contains substantive experimental analysis. While we acknowledge your viewpoint that FID and CLIP Score may not provide a complete measure of the diversity of the generated images, arguing that there is no signal in any experimental evaluation of image quality short of a human study is not a viewpoint shared by the broader community. We agree that a human study would be great to round out our evaluation, but obtaining IRB approval and completing human evaluations within the rebuttal period is not practical. We do intend to conduct more comprehensive human evaluations in future work. In the meantime, we invite you to review some typical qualitative results presented in Appendix Figure 6 and Figure 2 of the rebuttal PDF. > Stable Diffusion uses classifier-free guidance in the synthesis process, and the guidance is generally set to 7.5.
This paper uses empty text to recover the initial noise through DDIM inversion, which is not able to reconstruct it accurately. The experiments in this paper also prove this, so there is a big vulnerability in the presented method. We indeed use only empty text to recover the initial noise. However, we find that this is actually sufficient to recover the original noise closely enough that detection is possible. 7.5 is indeed the guidance scale that we use in all experiments in this submission. We additionally ablate larger guidance strengths in Fig. 4b, finding that even larger guidance scales still lead to a robust recovery. We'd be glad to clarify our writing to state unambiguously that our experiments show that this works. > For multi-key capacity, experimental analysis needs to be provided. We've now formalized the detection process; please see our global response. In this framework, detection produces rigorous p-values. Armed with this setup, the question of multiple keys, which we raised as a limitation in our current submission, reduces to a multiple-comparisons test. Standard methods, such as the Holm–Bonferroni approach, can now be applied to rigorously test against an arbitrary number of keys. Thank you again for your thoughtful review. We made a significant effort to address your feedback, including new experiments, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address? --- Rebuttal 2: Title: Official Comment by Authors Comment: We're again thankful for your continued feedback. We conducted a preliminary check of the potential capacity for multiple keys. In these experiments, we employ a Bonferroni correction for multiple hypotheses and set the threshold at FPR = 1e-6. As indicated in the table below, even with such a low FPR, our method consistently delivers reliable identification accuracy.
Note further that the actual average p-value for *Tree-Ring$_{\text{Rand}}$* is in fact p = 7.27e-162, which, in theory, implies a capacity of around 1e156 users when using the Bonferroni correction.

| Method | 50 users/keys | 100 users/keys | 500 users/keys | 1000 users/keys |
|:----------------------------:|:--:|:---:|:---:|:----:|
| *Tree-Ring$_{\text{Rand}}$* | 1.00 | 1.00 | 1.00 | 1.00 |
| *Tree-Ring$_{\text{Rings}}$* | 0.99 | 0.99 | 0.98 | 0.98 |

Thank you for bringing this up; this result will be significant for our next draft. --- Rebuttal Comment 2.1: Title: Response to authors Comment: Thanks to the authors for providing a detailed response! The response addressed most of my concerns, so I will raise my score. I have an additional question: since this work is essentially image watermarking for diffusion-synthesized images, what are its advantages over watermarks added directly with pre-trained image watermarking models? In my opinion, although this method can be considered training-free, existing pre-trained image watermarking models also require no additional training from the user and may be better in terms of run time and performance. Overall, I still recognize the interest of the present work. --- Reply to Comment 2.1.1: Title: Official Comment by Authors Comment: Thank you for your insightful question. The immediate advantage of the training-free approach that we present is that there is never a need to worry about domain shifts, as there would be for a pre-trained encoder, which might fail when watermarking images from a domain it was not trained for. Our proposed approach is easily implemented in code and applicable to all diffusion models, no matter the domain. Further, "training-free" is only one of the advantages of our approach. Our proposed method is also more time-efficient during generation.
The incremental time introduced by the tree-ring approach is negligible, incorporating just two additional Fourier transforms in the small latent space. However, as demonstrated in Table 1 of the recent ICCV 2023 work [1], many state-of-the-art pre-trained image watermarking models are notably inefficient. Four of the tested models led to additional processing times ranging from 0.11 to 0.45 seconds per image. They also demand greater GPU memory or resource allocation to load and run such watermarking models. This increased overhead could become a significant concern for companies serving a large user base. Another efficiency advantage of our method is that for multiple users, there's no need to fine-tune the model for individual keys, as our approach is "training-free." Conversely, in studies like [1], a separate fine-tuning process (on the VAE latent decoder) is necessary for each user or key. This means that their experiment with 1,000 users requires the fine-tuning of 1,000 distinct models with unique keys. In contrast, our tree-ring approach accomplishes this effortlessly. Thank you for your interest in our paper. Please let us know if you have any further questions. [1] Fernandez, Pierre, et al. "The stable signature: Rooting watermarks in latent diffusion models." arXiv preprint arXiv:2303.15435 (2023).
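The multi-key identification discussed in this thread amounts to a multiple-comparisons test over per-key detection p-values. A minimal sketch, with hypothetical p-values standing in for the paper's detection statistic:

```python
import random

def identify_user(p_values, alpha=1e-6):
    """Bonferroni-corrected identification over K candidate keys:
    report the best-matching key only if its detection p-value clears
    the corrected threshold alpha / K; otherwise report no match."""
    k = len(p_values)
    best = min(range(k), key=lambda i: p_values[i])
    return best if p_values[best] <= alpha / k else None

# Hypothetical p-values for 1000 keys: one embedded key, 999 nulls.
# 1e-160 is the order of magnitude reported in the thread for Tree-Ring_Rand.
random.seed(0)
p = [random.uniform(0.01, 1.0) for _ in range(1000)]
p[417] = 1e-160
print(identify_user(p))  # → 417
```

Because the corrected threshold only shrinks linearly in the number of keys while the matching p-value is vanishingly small, this test tolerates an enormous key capacity, as the authors note above.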
Summary: This paper proposes a watermarking scheme for diffusion-based image generative AI. The watermark is embedded in the initial noise pattern before diffusion. Detection proceeds by reversing the diffusion to get an estimate of the initial noise pattern. Strengths: S1. The idea is simple and super original. S2. The watermarking technique seems to be very robust. Weaknesses: W1. Terminology. Being a watermarker for a long time, I am surprised by some atypical wordings. Mainly: - fingerprint / fingerprinting: Fingerprint usually denotes a passive forensics technique (like a robust hash), whereas watermarking is an active technique (the content/model is modified). I found the title very confusing, for instance. - to imprint: the correct term is 'to embed' in the watermarking literature. W2. Bold claims. I disagree with the following statements: - Line 35: "This is the first watermark that is truly invisible as no post-hoc [changes] are made to the image". I strongly disagree. The differences between watermarked and non-watermarked generated images are highly visible since their visual contents are not the same (see Fig. 2). How can you claim to be invisible? The thing is that nobody has access to the non-watermarked generated images, therefore we do not care about visibility. We only care about the generation quality (FID and CLIP score). - Line 47: "the watermark can only be detected by parties in control of the image generation model". This is a drawback, not an advantage. - Line 81: "Stable Signature applies this idea". If "this idea" relates to training a model with watermarked images (à la Yu et al. [2022]), then this statement is wrong. W3. No theoretical background. - Line 157: "Curiously, we will observe that ..." I understand that "curiously" as "we do not know why, but this happens to hold." - The threshold for a required FPR is *experimentally* set. First, over 1,000 unwatermarked images ONLY, for an FPR of 1%.
A population size of 1,000 yields an accuracy no better than +/- 1% (95% confidence) in the estimation of a small probability. So, 1,000 is definitely not enough... and this is just FPR = 1%. See https://en.wikipedia.org/wiki/Sample_size_determination By the way, since the secret key is a Gaussian vector (even for the tree-ring), there might be a way to set the threshold theoretically. At least, in the watermarking literature, this is the case for a Gaussian secret vector and cosine similarity, thanks to the regularized incomplete beta function. - Some properties are outlined but they are not exactly put into practice: Line 149: Rotation - Done -> this explains the rings. Line 150: Translation - Not Done. The magnitudes of the Fourier coefficients are invariant to translation, but neither the embedding nor the detection works with magnitudes. Therefore, the robustness to translation remains unexplained. Line 151: Dilation (i.e. stretching) - Not Done. Same comment. I am surprised that Rand and Rings show some robustness, since stretching should desynchronize the watermark signal. Line 152: Color jitter - Done -> don't rely on the DC coefficient. W4: Flawed benchmark. Fernandez et al. [2023] is cited but not included in the comparison. I do not see any reason. On the contrary, some old schemes called "DwtDct" and "DwtDctSvd" are included. The attached reference is the bible, Cox et al. 2007. I do not remember this book proposing these transformations. Dwt alone -> Yes. Dct alone -> Yes (but it is well known that they are not robust to geometric attacks). However, DwtDct (i.e. the composition Dwt o Dct) -> Does not make sense. DwtDctSvd -> even worse. It would have been much better to compare to recent image watermarking schemes like those cited in Lines 70-80 (especially HiDDeN and followers). Conclusion: the benchmark does not include key schemes but includes two nonsensical techniques. W5.
I much prefer the metric TPR@1%FPR because it makes sense from a practical point of view, whereas AUC delivers poor information. For instance, Fig. 3 is hard to decipher: AUC = 0.948 is green-lighted, whereas such an AUC corresponds to a TPR of 0.1 in some other plots. Same for Table 2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1. Why is the detection score the L1 distance? A watermarker would have used cosine similarity. Q2. How do you explain the robustness to translation and crop/scaling? Is the crop random (i.e. not a central crop)? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Some features are presented as advantages whereas they are limitations. L1. Line 192: "However, these baselines methods are designed for steganography". No, these are watermarking schemes. They offer multi-bit watermarking, whereas you only offer zero-bit watermarking. L2. The scheme is extremely sensitive to Gaussian noise. A jitter of 2 is OK, but a noise std of 0.1 (out of 255?) pulls down the watermark. L3. Line 285: "Further, the proposed watermark is by design only verifiable by the model owner ... this has advantages". I do not think so. For security reasons, the model cannot be disclosed. Fernandez et al. does not have this drawback. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
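The analytic threshold the reviewer alludes to in W3 is indeed available in closed form for that pairing: if the extracted vector is an isotropic Gaussian independent of the key, the cosine similarity c satisfies (1 + c)/2 ~ Beta((n-1)/2, (n-1)/2), so p-values follow from the regularized incomplete beta function. A minimal sketch, illustrative only and not the paper's detector:

```python
import numpy as np
from scipy.stats import beta

def cosine_pvalue(key, extracted):
    """One-sided p-value of the observed cosine similarity, under the null
    that `extracted` is an isotropic Gaussian independent of `key`."""
    n = key.size
    c = key @ extracted / (np.linalg.norm(key) * np.linalg.norm(extracted))
    # Under the null, (1 + c) / 2 ~ Beta((n - 1) / 2, (n - 1) / 2), so the
    # threshold is set by the regularized incomplete beta function
    return beta.sf((1 + c) / 2, (n - 1) / 2, (n - 1) / 2)

rng = np.random.default_rng(0)
k = rng.standard_normal(256)
print(cosine_pvalue(k, rng.standard_normal(256)))            # unrelated: large p
print(cosine_pvalue(k, k + 0.1 * rng.standard_normal(256)))  # near-copy: tiny p
```

With such an analytic null, the FPR threshold needs no Monte Carlo calibration over unwatermarked images.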
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and the time you've dedicated to providing it. In the limited space below, we have made an effort to address the specific points you raised: >Some atypical wordings. Thank you for pointing this out; we are happy to update our terminology. >Bold claim about invisibility of the watermark. We do think that Fig. 2 is a poignant example of our "invisible" approach. We, of course, agree that the generated image shown in Fig. 2 is different from the watermarked version. But we contend that this difference is the point of our paper. Instead of conceptualizing the watermarking process as one where a "non-watermarked generated image" is hidden and modified, we argue that such a "base" image does not exist - the watermarked generation process directly produces only watermarked images. These images are truly invisibly watermarked because they are direct generations from the diffusion model and contain no noticeable artifacts. Importantly, note that running the non-watermarked generator a second time with a different random seed $x_T$ would *also* result in a different image. The fact that different versions of the generator produce different images does not mean that there is any change in utility for the user. We think this is an interesting reformulation of the problem. Instead of a "classical watermark" that manipulates/watermarks the image in the pixel domain, where changes result in visible image distortion, we propose a watermark that embeds itself in the initial noise $x_T$. After this vector passes through the image generation pipeline, the resulting image contains no visible distortions, even when a high watermark strength is used. This is clearly a much more limited setup than general watermarking (as only diffusion images can be watermarked), but one that is uniquely suited to watermarking generated data.
We do think that this is a core contribution of our work, and we want to make sure that we get the description right for all readers. Please let us know what you think of this; we are glad to be able to discuss it. >Set the threshold theoretically. We have now greatly increased the theoretical underpinning of the detection scheme and include a formalized way of calculating the p-value. More details can be found in the global response. >Some properties are outlined but they are not exactly put into practice. We have added two more augmentations: a PyTorch random affine transformation with an absolute translation fraction of 0.2 in both the horizontal and vertical directions, and a random affine transformation with a stretching factor of 20. The results are shown in the table below. We find that Tree-Ring watermarks are robust under these augmentations, except for *Tree-Ring$_{\text{Rand}}$* under translation.

|Method|Translation|Stretching|
|:-:|:-:|:-:|
|DwtDct|.537/.000|.691/.006|
|DwtDctSvd|.463/.000|.357/.003|
|RivaGan|.999/1.00|.996/.942|
|*Tree-Ring$_{\text{Zeros}}$*|.998/.976|.998/.983|
|*Tree-Ring$_{\text{Rand}}$*|.764/.089|.952/.638|
|*Tree-Ring$_{\text{Rings}}$*|.911/.138|.966/.599|

>Flawed benchmark. We really do appreciate the work of Fernandez et al. [2023]. However, their [code](https://github.com/facebookresearch/stable_signature) was made available only a couple of weeks ago, well after our original submission. Nonetheless, we have now evaluated their approach in our experiments. The results can be found in the table below. We find that the watermark proposed therein, Stable Signature, is very robust under many attacks, but not as robust to a number of augmentations, such as rotation, blurring, and noise.
|Method|Clean|Rotation|JPEG|Cr.&Sc.|Blurring|Noise|Color Jitter|Translation|Stretching|Avg|
|:-:|:-----:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Stable Signature|1.00|.658|.989|1.00|.565|.731|.976|1.00|.999|.880|
|Tree-ring|1.00|.935|.999|.961|.999|.944|.983|.911|.966|.966|

>Naming scheme of DWT-based watermarks. For better or worse, we chose to follow the naming convention of the currently deployed Stable Diffusion watermark, which can be found at https://github.com/ShieldMnt/invisible-watermark and is also followed by Fernandez et al. [2023]. We'd be very happy to receive additional references or alternative naming schemes for these algorithms that you consider more accurate, which we will gladly include. >I prefer the metric TPR@1%FPR. Thanks for the suggestion; we're happy to modify our draft. >Q1. A watermarker would have used cosine similarity. We have conducted some experiments with a cosine similarity metric but observed it to be less resilient than the L1 distance in our experiments. We intend to delve further into this matter in the future. >Q2. How do you explain the robustness to translation and crop/scaling? Is the crop random (i.e. not a central crop)? Yes, we used random crops. That being said, against strong translations (see above), our approach is not so robust. Thank you for bringing this up! We are revising our formula to rely on either magnitude or correlation and are currently running experiments to verify that this improves robustness to non-central crops. We will post another update as soon as the experiments finish. >The scheme is extremely sensitive to Gaussian noise. ... a noise std = 0.1 (out of 255?). The std corresponds to 0.1 multiplied by 255, i.e. a value of 25.5; as such, *the approach is actually very robust to Gaussian noise*. The magnitude of this perturbation is depicted in Appendix Figure 8(e), showing that the perturbation is substantial.
We're sorry for the confusion and will clarify the exact unit conversions in our draft. Thank you again for your thoughtful review. We made a significant effort to address your feedback, including new experiments, and we would appreciate it if you would consider raising your score in light of our response, or leaving a comment so that we can update you on the experiments that are still running. --- Rebuttal 2: Title: Official Comment by Authors Comment: Thank you once again for your insightful feedback. We have now obtained results using magnitude as a metric, as well as after applying a scalar correction, as detailed in the table below. Both approaches enhance robustness against translation and cropping. However, there remain certain trade-offs in other adversarial scenarios. We look forward to exploring these in future studies.

|Method|Clean|Rotation|JPEG|Cr.&Sc.|Blurring|Noise|Color Jitter|Translation|Stretching|Avg|
|:----------------:|:-----:|:--------:|:-----:|:---------:|:--------:|:-----:|:------------:|:-----------:|:----------:|:---:|
|Stable Signature|1.000|0.658|0.989|1.000|0.565|0.731|0.976|1.000|0.999|0.880|
|Tree-ring|1.000|0.935|0.999|0.961|0.999|0.944|0.983|0.911|0.966|0.966|
|Tree-ring$_{mag}$|0.999|0.907|0.932|0.969|0.926|0.716|0.837|0.968|0.931|0.909|
|Tree-ring$_{correction}$|0.999|0.908|0.970|0.979|0.981|0.788|0.878|0.954|0.924|0.931|

Meanwhile, as suggested by reviewer ZCXQ, we conducted a preliminary check of the key capacity (evaluating how many simultaneous users can be supported), employing a Bonferroni correction and setting the threshold at FPR = 1e-6. Using 1,000 keys and an FPR of 1e-6 closely follows the multiple-key setup of Fernandez et al. (2023). As indicated in the table below, even with such a low FPR, our method consistently delivers reliable identification accuracy.
Note further that the actual average p-value for *Tree-Ring$_{\text{Rand}}$* is in fact p = 7.27e-162, which, in theory, implies a capacity of around 1e156 users when using the Bonferroni correction. We think this is a great addition to our draft, and hope it answers some of the final outstanding questions brought up during the review.

| Method | 50 users/keys | 100 users/keys | 500 users/keys | 1000 users/keys |
|:----------------------------:|:--:|:---:|:---:|:----:|
| *Tree-Ring$_{\text{Rand}}$* | 1.00 | 1.00 | 1.00 | 1.00 |
| *Tree-Ring$_{\text{Rings}}$* | 0.99 | 0.99 | 0.98 | 0.98 |

--- Rebuttal Comment 2.1: Comment: > We do think that Fig.2. is a poignant example of our "invisible" approach. I agree that your approach is interesting. I am just saying that the word *invisible* is not appropriate. The way you "sell" your approach is confusing. > observed it to be less resilient compared to the L1 distance in our experiments I understand here that you kept the L1 distance. Yet, the computation of the p-value that you present in the rebuttal is for the L2 distance. > We have now obtained results using magnitude as a metric Be more precise. Do you mean that you are now using the magnitudes of the Fourier coefficients? Be careful: if the Fourier coefficients are C-Gaussian (complex Gaussian distribution), their amplitudes are not Gaussian distributed. > as well as after applying a scalar correction I don't understand. --- Reply to Comment 2.1.1: Title: Official Comment by Authors Comment: We appreciate your continued feedback. > I agree that your approach is interesting. I am just saying that the word invisible is not appropriate. The way you "sell" your approach is confusing. In our thinking, the watermark is "invisible" on a per-sample basis. Yet, we are ultimately not opposed to refining our wording in a future version. In the meantime, may we kindly ask: what term would you suggest? > I understand here that you kept the L1 distance.
Yet, the computation of the p-value that you present in the rebuttal is for the L2 distance. Our answer that L1 distance was better than cosine similarity was based on preliminary experiments conducted before the switch to an L2-based metric during this review process. To be more precise, here is a table with the exact results for both the L1 and L2 distance (for simplicity, with empirical AUC). There is only a small difference between L1 and L2, which we consider a reasonable change in exchange for the analytic computation of the CDF that it enables. This is a change we made during the review process, based on feedback from you and other reviewers.

|Method|Clean|Rotation|JPEG|Cr.&Sc.|Blurring|Noise|Color Jitter|Translation|Stretching|Avg|
|:----------------:|:-----:|:--------:|:-----:|:---------:|:--------:|:-----:|:------------:|:-----------:|:----------:|:---:|
|*Tree-Ring$_{\text{Rings}}$* with L1|1.000|0.935|0.999|0.961|0.999|0.944|0.983|0.911|0.966|0.966|
|*Tree-Ring$_{\text{Rings}}$* with L2|1.000|0.965|0.999|0.996|1.000|0.957|0.986|0.906|0.981|0.977|

> magnitude as a metric Yes, for now, we have provided empirical measurements of TPR/FPR in our response here. The magnitudes of complex Gaussian coefficients are Rayleigh distributed; if we were to use magnitude-based detection (which is not a given, due to the trade-offs), we would use a Monte Carlo estimate of the modified distribution to estimate p-values. > scalar correction A translation results in a phase shift in Fourier space, which is equivalent to multiplication by a complex exponential. Thus, before computing the distance, we first determine the optimal complex exponential using a least-squares solution and then adjust the key with this exponential. When calculating the p-value, we accordingly reduce the degrees of freedom by one. This reduction has a minimal impact, as the degrees of freedom are greater than one hundred in practice, but it optimizes the robustness to scalar multipliers.
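The scalar correction described in this reply could look roughly as follows. This is an illustrative reading of the description (fitting a single complex scalar in least squares before measuring the distance), not the authors' code:

```python
import numpy as np

def scalar_corrected_distance(key, observed):
    """Fit a single complex scalar a minimizing ||observed - a * key||
    (closed-form least squares), then return the residual distance.
    Per the rebuttal's description, this absorbs the complex-exponential
    factor that a translation introduces in Fourier space."""
    a = np.vdot(key, observed) / np.vdot(key, key)
    return np.linalg.norm(observed - a * key)

rng = np.random.default_rng(1)
k = rng.standard_normal(128) + 1j * rng.standard_normal(128)
shifted = np.exp(1j * 0.7) * k                # a pure phase rotation of the key
print(np.linalg.norm(shifted - k))            # naive distance: large
print(scalar_corrected_distance(k, shifted))  # corrected distance: near zero
```

Because one complex parameter is estimated from the data, the subsequent p-value computation loses one degree of freedom, as noted above.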
Overall, from our discussion, we seem to have been able to clarify your central questions, such as the comparison with Fernandez et al. (2023), the theoretical underpinning of the detection threshold, and investigations of further augmentations, as well as many other smaller points (such as robustness to noise, why we compare to the DWT-based watermarks that are currently deployed, and changes in wording and related work). We appreciate your precise and exceedingly helpful review, and we believe we have addressed the reasons for your initial borderline rating. There is, of course, always more to do, and we are happy to continue doing so. Yet, reading your initial review and our responses again, the remaining points appear to be smaller follow-up questions.
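As a closing technical note on this thread: the analytic L2-based p-value the authors mention can be sketched under the simplifying assumption that, absent the watermark, the recovered coefficients are i.i.d. N(0, σ²) and independent of the key; the scaled squared distance is then noncentral chi-square, and unusually small distances are evidence of the watermark. A sketch of that reasoning, not the paper's exact statistic:

```python
import numpy as np
from scipy.stats import ncx2

def l2_detection_pvalue(key, recovered, sigma=1.0):
    """Lower-tail p-value for an L2 watermark test: under the null, the
    recovered coefficients are i.i.d. N(0, sigma^2) independent of the key,
    so ||recovered - key||^2 / sigma^2 is noncentral chi-square with
    df = n and noncentrality ||key||^2 / sigma^2; unusually SMALL
    distances count as evidence for the watermark."""
    n = key.size
    stat = np.sum((recovered - key) ** 2) / sigma ** 2
    return ncx2.cdf(stat, df=n, nc=np.sum(key ** 2) / sigma ** 2)

rng = np.random.default_rng(2)
k = rng.standard_normal(128)
print(l2_detection_pvalue(k, rng.standard_normal(128)))            # null: moderate p
print(l2_detection_pvalue(k, k + 0.3 * rng.standard_normal(128)))  # marked: tiny p
```

The degrees of freedom here equal the number of key coefficients, consistent with the "greater than one hundred in practice" remark above.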
Summary: This paper proposes a method for watermarking images created by diffusion models, a popular class of generative models. Whereas traditional watermarking methods operate directly on images (e.g. in pixel space or Fourier/wavelet representations), the proposed watermark is embedded in a Fourier representation of the diffusion model’s initial noise distribution. As a result, the watermark does not introduce perceptible artifacts, is robust to common image transformations, and can be detected with knowledge of the model weights via a noise inversion process. This method is a promising step towards allowing the operators of generative models to identify synthetic media produced by their systems. Strengths: The proposed method is novel and operates by intervening in the sampling process (akin to recent language-model watermarks [1]) rather than “post-hoc” on generated images. This kind of “distributional” watermarking of generative models is increasingly emerging as an attractive alternative to traditional methods. The proposed method appears straightforward to implement and does not require re-training the diffusion model. Solid experimental evaluation against reasonable watermarking baselines, including ablations on key generation & detection parameters. The authors provide code for reproducing their experiments. [1] Kirchenbauer, John and Geiping, Jonas and Wen, Yuxin and Katz, Jonathan and Miers, Ian and Goldstein, Tom. “A Watermark for Large Language Models”. https://arxiv.org/abs/2301.10226 Weaknesses: In line 157, the authors briefly mention a key result: “Curiously, we will observe below that the invariant properties above are preserved in xT even when image manipulations are done in pixel space of x0.” I think this could be discussed in more depth, as the success of the proposed approach essentially rests on this seemingly counterintuitive property. Do the authors have any theories as to why this property holds? Watermark capacity is barely touched on.
It is common knowledge that watermark robustness, imperceptibility, and information-carrying capacity typically exist in a trade-off. The authors essentially propose a 1-bit watermark in the case of “Tree-Rings-Zeros,” while the actual capacity of the “Rand” and “Rings” variants is murkier (it is not clear how much two keys need to differ in order to be distinguishable, and how many simultaneous keys could operate in practice). While the authors’ main concern is differentiating between watermarked and un-watermarked content — for which capacity may not be essential — I think that at the very least, the difference in capacity between the proposed and baseline approaches should be mentioned. For instance, RivaGAN embeds 32-bit messages (which may be sufficient to “assign a unique key to every user of the API,” in the authors’ words). It is therefore reasonable to wonder whether RivaGAN might achieve better imperceptibility and robustness if trained with a lower capacity closer to that of the proposed method. I don’t think it is necessary for the authors to perform additional experiments along these lines, but a brief mention of the capacities of each method would better contextualize their existing results. Robustness of the proposed method to multiple simultaneous transformations looks low (Figure 7 in appendix) — it looks like even two simultaneous transformations can lower TPR@1%FPR to under 10%? It would be nice to see some discussion of this in the appendix (e.g. if any transformation combinations prove particularly effective at breaking the watermark without significantly degrading image quality). It would also be nice to see versions of Figure 7 for both baseline watermarks in the appendix. While model weight watermarking approaches are addressed briefly in the related work section (lines 94-97), the authors do not state why such approaches are not viable for the task at hand. 
Naively, it seems reasonable that a watermark embedded within the model weights during training could be at least as strong as one embedded in the diffusion noise at sampling. An extra sentence or two explaining why such approaches are not considered would be good (e.g. training-free methods are more desirable, existing model weight watermarking methods lack robustness or produce characteristic outputs for only a few "trigger" inputs, etc.). Technical Quality: 3 good Clarity: 3 good Questions for Authors: (These are more comments than questions, and aren't critical to my review/score of the paper) From Tables 1-4, it looks like RivaGAN is actually very competitive with the proposed method on robustness aside from the rotation transform, on which it is not trained. Given that RivaGAN’s attention mechanism supposedly encourages it to hide image modifications in textures, it is possible that a RivaGAN trained on rotation transformations might achieve robustness results on par with the proposed method while allowing for 32-bit watermark capacity. I don’t think this necessarily merits a mention in the paper, but it seems worth pointing out. One class of adversaries against which the proposed method might fare substantially better than post-hoc methods like RivaGAN is “generative autoencoder” attacks that use off-the-shelf pretrained image models [2]. An experiment with this kind of transformation probably falls into the realm of future work, but could potentially present a very strong argument for the kind of distributional watermarking approach proposed in the paper over post-hoc image modification. [2] Zhao, Xuandong and Zhang, Kexun and Wang, Yu-Xiang and Li, Lei. “Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats”. https://arxiv.org/abs/2306.01953 Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and the time you've dedicated to providing it. Below, we address specific points you raised:

> It is therefore reasonable to wonder whether RivaGAN might achieve better imperceptibility and robustness if trained with a lower capacity closer to that of the proposed method.

We fully acknowledge that there is a trade-off between the robustness and capacity of RivaGAN. This aspect is indeed worth investigating in future research. However, it's important to note that fine-tuning these hyperparameters would necessitate retraining the model for each desired capacity.

> It would be nice to see some discussion of this in the appendix (e.g. if any transformation combinations prove particularly effective at breaking the watermark without significantly degrading image quality). It would also be nice to see versions of Figure 7 for both baseline watermarks in the appendix.

Thank you for bringing this to our attention. We plan to conduct thorough and extensive experiments in the upcoming version. In the meantime, we'd like to underscore that the perturbations we are applying have a significant impact. Consequently, even combining two attacks can drastically alter the image, making them less practical due to their detectability. Given this, it would be valuable for us to explore attacks like [2], as you mentioned in the question section, as they present an intriguing attack vector for future investigation.

> While model weight watermarking approaches are addressed briefly in the related work section (lines 94-97), the authors do not state why such approaches are not viable for the task at hand. Naively, it seems reasonable that a watermark embedded within the model weights during training could be at least as strong as one embedded in the diffusion noise at sampling. An extra sentence or two explaining why such approaches are not considered would be good (e.g. training-free methods are more desirable, existing model weight watermarking methods lack robustness or produce characteristic outputs for only a few "trigger" inputs, etc.).

We have observed that model weight watermarking techniques primarily rely on backdoor-style watermarks. For instance, the model owner can watermark the weights during training by incorporating specific trigger prompt-image pairs into the training data, and later verify ownership using those trigger prompts. These watermarking strategies don't apply to all outputs of the model; they pertain only to a subset generated using the designated trigger prompts.

> From Tables 1-4, it looks like RivaGAN is actually very competitive with the proposed method on robustness aside from the rotation transform, on which it is not trained. Given that RivaGAN’s attention mechanism supposedly encourages it to hide image modifications in textures, it is possible that a RivaGAN trained on rotation transformations might achieve robustness results on par with the proposed method while allowing for 32-bit watermark capacity. I don’t think this necessarily merits a mention in the paper, but it seems worth pointing out.

We really appreciate this suggestion. We will include this information in the future version. It indeed holds intriguing potential for future verification. Thank you for your feedback on this submission. Hope our response resolves your questions. Please let us know if you have other questions and comments that we can address.

---

Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their thorough rebuttal. In my opinion the proposed method has merit even using empirical calibration of the detection threshold, but I appreciate the authors' reformulation of detection in terms of P-values. In particular, this should allow for characterizing the watermark's capacity for multiple keys via multiple-comparisons tests.
Given that key capacity is an important consideration for watermarking methods -- including recent works addressing watermarking image generative models, such as the Stable Signature method referenced by reviewer KNue -- I think including even a small experiment along these lines would greatly strengthen the final paper. I am willing to increase my score should the authors agree to follow through on this.

---

Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: We're again thankful for your continued feedback. We conducted a preliminary check of the potential capacity for multiple keys. In these experiments, we employ a Bonferroni correction for multiple hypotheses and set the threshold at FPR = 1e-6. As indicated in the table below, even with such a low FPR, our method consistently delivers reliable identification accuracy. Note further that the actual average p-value for *Tree-Ring$_{\text{Rand}}$* is in fact p = 7.27e-162, which, in theory, implies a capacity for around 1e156 users when using the Bonferroni correction.

| Method | 50 users/keys | 100 users/keys | 500 users/keys | 1000 users/keys |
|:----------------------------:|:--:|:---:|:---:|:----:|
| *Tree-Ring$_{\text{Rand}}$* | 1.00 | 1.00 | 1.00 | 1.00 |
| *Tree-Ring$_{\text{Rings}}$* | 0.99 | 0.99 | 0.98 | 0.98 |
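The Bonferroni-corrected multi-key identification behind this table can be sketched in a few lines. This is a hypothetical illustration, not the authors' code; the per-key p-values are assumed to come from the detection test, and the function name is our own:

```python
import numpy as np

def identify_key(p_values, alpha=1e-6):
    """Return the index of the detected key, or None if no watermark is found.

    With K candidate keys tested simultaneously, the Bonferroni correction
    runs each individual test at level alpha / K, which keeps the
    family-wise false-positive rate at or below alpha.
    """
    p_values = np.asarray(p_values, dtype=float)
    threshold = alpha / p_values.size   # Bonferroni-corrected per-key level
    best = int(np.argmin(p_values))
    return best if p_values[best] <= threshold else None

# Toy usage: key 2 carries an extremely small p-value, as in the reported
# Tree-Ring experiments; the other keys look like chance.
detected = identify_key([0.41, 0.73, 7.27e-162, 0.18])
missed = identify_key([0.31, 0.52, 0.94])
```

Under this scheme, the theoretical key capacity is roughly `alpha` divided by the typical watermark p-value, which is consistent with the 1e156-user estimate quoted above.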
Summary: This paper proposes watermarking the generated image from diffusion models by watermarking the initial noise, and the reverse DDIM process is directly used as the watermark extraction. Experiments show the influence on the visual quality and robustness of the proposed method against various distortions.

Strengths:
1) This paper is easy to read.
2) Watermarking the AIGC is an interesting problem.

Weaknesses:
1) The threat model shall be clarified. For example, who conducts the watermarking process and executes the verification stage? If the user’s ability/behavior (generation process) is unknown as mentioned in L236, why would the user like to use the tree-rings as his initial noise rather than a clean initial noise?
2) As shown in Fig 2, compared with other post-processing watermarking, the generated image by the proposed method is changed by a large margin, although the objective FID score is better.
3) What is the influence on the editability of the watermarked noise? In other words, given different prompts, it would be better to show some visual examples of generated images by clean and tree-rings noise, respectively.
4) Section 2.1 shall be reorganized. For example, in L100, a forward diffusion process is from clean data $X_0$ to noise.
5) Some related work is missing. [1] focuses on watermarking the generative models, which also tries to watermark the internal noise.

[1] Ong, Ding Sheng, et al. "Protecting intellectual property of generative adversarial networks from ambiguity attacks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.

Technical Quality: 3 good Clarity: 3 good Questions for Authors:
1) What is the influence on the editability of the watermarked noise?
2) If there are two different tree rings embedded into two initial noises, will it affect the integrity of the proposed method? This may be related to the capacity mentioned in Limitations.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Mentioned in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and the time you've dedicated to providing it. Below, we address specific points you raised:

> The threat model shall be clarified.

Thanks for pointing this out. We will add the following threat model section in the future version.

#### Threat Model

The goal of watermarking is to allow for image generation without quality degradation while giving the model owner the ability to identify if a given image is generated from their model. Meanwhile, the watermarked image is subjected to a number of image manipulations and modifications that appear in everyday use. We formalize this as an adversary who tries to remove the watermark in the generated image to evade detection using common image manipulations (i.e., augmentations), but note that informally, we are also interested in watermark robustness across common usage. Ultimately, this setup leads to a threat model with two agents that act sequentially.

- Model Owner (Generation Phase): Gene owns a generative diffusion model $\epsilon_\theta$ and allows images $x$ to be generated through an API containing the private watermarking algorithm $\mathcal{T}$. The watermarking algorithm $\mathcal{T}$ should have a negligible effect on the generated distribution so that quality is maintained and watermarking leaves no visible trace. For conditional diffusion models, the API also allows conditioning $c$ during generation by users, multiple possible DDIM step settings, and guidance strength.
- Forger: Fiona generates an image $x$ through the API, then tries to evade the detection of $\mathcal{T}$ by applying strong data augmentations that convert $x$ to $x'$. Later, Fiona uses $x'$ for a prohibited purpose and claims that $x'$ is her intellectual property.
- Model Owner (Detection Phase): Given access to $\epsilon_\theta$ and $\mathcal{T}$, Gene tries to determine if $x'$ originated from $\epsilon_\theta$. Gene has no knowledge of the text used to condition the model or other hyperparameters like guidance strength and the number of generation steps.

> As shown in Fig 2, compared with other post-processing watermarking, the generated image by the proposed method is changed by a large margin, although the objective FID score is better.

We, of course, agree that the generated image, shown in Fig. 2, is different from the watermarked version. This is a hallmark of our "invisible" approach. This difference occurs because the two images are generated with a different initial random $x_T$. Importantly, had the unwatermarked diffusion model been run again with a different random Gaussian seed $x_T$, it would *also* have produced a different image from the original. Likewise, the watermarked image is produced from a different initial $x_T$ than the unwatermarked image, causing a different image to be produced. This image is of the same quality as the original unwatermarked image, and in this sense, it is truly invisibly watermarked because the image is a direct generation from the diffusion model. We think this is an interesting reformulation of the problem. Instead of a “classical watermark” that manipulates/watermarks the image in the pixel domain, where changes result in visible image distortion, we propose a watermark that embeds itself in the initial noise $x_T$. After this vector passes through the image generation pipeline, the resultant image contains no visible distortions, even when a high watermark strength is used. This is clearly a much more limited setup than general watermarking (as only diffusion images can be watermarked), but one that is uniquely suited to watermarking for generated data. We hope to discuss this further, as we think this is a core part of the argument and departure from classical post-processing watermarks, so we do want to get our explanations here just right.

> What is the influence on the editability of the watermarked noise?
We have shown some generated examples in Appendix Figure 6. In Figure 2 in the rebuttal PDF, we also provide more examples generated with more "challenging" prompts and their p-values derived from the formulation in the global response.

> If there are two different tree rings embedded into two initial noises, will it affect the integrity of the proposed method? This may be related to the capacity mentioned in Limitations.

We've now formalized the detection process; please see our global response. In this framework, the detection produces rigorous p-values. Armed with this setup, the question of multiple keys, which we raised as a limitation in our current submission, can be simply reduced to the question of a multiple-comparisons test. Standard methods, such as the Holm–Bonferroni approach, can now be applied to rigorously test against an arbitrary number of keys. Thank you again for your thoughtful review. We made a significant effort to address your feedback, including experiments, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?

---

Rebuttal Comment 1.1: Comment: Thanks for the author's response. Due to the argument on the evaluation of the visual quality and the incorrectness of the description of the diffusion process, I tend to keep my score. Nevertheless, I like this paper and hope that it can be improved comprehensively.

---

Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you for your response and continued interest in our paper. Regarding the feedback:

> Section 2.1 shall be reorganized. For example, in L100, a forward diffusion process is from clean data $X_0$ to noise.

Apologies for not mentioning this explicitly in our last response, due to size limitations. We, of course, took your feedback into account. L100 contained confusing wording in the description of the forward process.
We have rewritten this sentence to

> Given a data point $x_0$ sampled from the real data distribution $q(x_0)$, a forward diffusion process is a fixed Markov chain with T steps, adding a predefined amount of Gaussian noise in every step. This forward process can be sampled efficiently using the closed-form solution [...]

We appreciate your feedback and will restructure this section to ensure clarity. Concerning the question of visual quality, we are unsure about the nature of the concern. We include visual examples in Appendix Figure 6 and in Figure 2 in the rebuttal PDF, and include detailed qualitative evaluations of visual quality in other parts of our submission. We are happy to provide additional details, given more specific feedback. Finally, you might be interested in finding that we have now run additional experiments that address the question you raised about capacity limitations, and have found that empirical experiments verify the large capacity of the watermark (meaning that many keys/initial noises can be used simultaneously); see the table below, where our method consistently delivers reliable identification accuracy. Note further that the actual average p-value for *Tree-Ring$_{\text{Rand}}$* is in fact p = 7.27e-162, which, in theory, implies a capacity for around 1e156 users when using the Bonferroni correction.

| Method | 50 users/keys | 100 users/keys | 500 users/keys | 1000 users/keys |
|:----------------------------:|:--:|:---:|:---:|:----:|
| *Tree-Ring$_{\text{Rand}}$* | 1.00 | 1.00 | 1.00 | 1.00 |
| *Tree-Ring$_{\text{Rings}}$* | 0.99 | 0.99 | 0.98 | 0.98 |
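As an aside on the forward process quoted earlier in this reply, the closed-form sampling it refers to can be illustrated in a few lines of numpy. This is a generic DDPM-style sketch with an assumed linear beta schedule, not the paper's code:

```python
import numpy as np

def forward_diffusion_sample(x0, t, alpha_bar, rng):
    """Sample x_t directly from x_0 via the closed-form marginal
    q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I),
    instead of simulating all t noising steps of the Markov chain."""
    abar = alpha_bar[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # abar_t = prod_{s<=t} (1 - beta_s)
x0 = np.ones(8)
xT = forward_diffusion_sample(x0, T - 1, alpha_bar, np.random.default_rng(0))
```

At the final step the signal coefficient `sqrt(alpha_bar[-1])` is essentially zero, so `xT` is (approximately) the pure Gaussian noise in which the watermark key would be planted.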
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and for writing thoughtful reviews of our work. We've added a section about deriving P-values below and also attached a PDF containing additional figures. Based on questions about theoretically justified threshold values, we've formalized the detection event as a hypothesis test from which we can derive explicit P-values via a noncentral chi-squared distribution. Such a formal, interpretable P-value can be used to understand whether the observed watermark could have occurred in a natural image by random chance. This derivation allows the user to set a threshold of detection, i.e., the watermark is 'detected' when $p$ is below a chosen P-value threshold $\alpha$. By doing so, one can control the false positive rate $\alpha$, making false accusations statistically unlikely. Below, we provide a detailed formulation:

### Deriving P-values

We construct a statistical test for the presence of the watermark that produces a rigorous P-value. The forward diffusion process is designed to map images onto Gaussian noise, and so we assume a null hypothesis in which the entries in the array $x_T'$ obtained for a natural image are Gaussian. We find that this assumption holds quite well in practice; see Figure 1 in the rebuttal PDF. For any test image $x_0'$, we compute the approximate initial vector $x_T'$ and then set $y = \mathcal{F}(x'_T)$. We then define the following null hypothesis

$$
H_0: \textit{$y$ is drawn from a Gaussian distribution $\mathcal{N}(\mathbf{0}, \sigma^2 I_\mathbb{C})$.}
$$

Here, $\sigma^2$ is an unknown variance, which we estimate for each image using the formula $\sigma^2 = \frac{1}{|M|} \sum_{i \in M} |y_i|^2$. To test this hypothesis, we define the score

$$
\eta = \frac{1}{\sigma^2} \sum_{i \in M} |k_i^* - y_i|^2.
$$

When $H_0$ is true, the distribution of $\eta$ is exactly a *noncentral* $\chi^2$ *distribution* [1], with $|M|$ degrees of freedom and non-centrality parameter $\lambda = \frac{1}{\sigma^2}\sum_{i} |k_i^*|^2$. We declare an image to be watermarked if the value of $\eta$ is too small to occur by random chance. The probability of observing a value as small as $\eta$ is given by the cumulative distribution function $\Phi_{\chi^2}$ of the noncentral $\chi^2$ distribution:

$$
p = \Pr\left(\chi^2_{|M|, \lambda} \leq \eta \,\middle|\, H_{0} \right) = \Phi_{\chi^2}(\eta).
$$

$\Phi_{\chi^2}$ is a standard statistical function [2], available in `scipy` and many other statistics libraries. We show qualitative examples of the proposed watermarking scheme and accompanying P-values in Figure 2 in the rebuttal PDF. For each prompt, we show the generated image with and without the watermark, and also a watermarked image subjected to a transformation. For each image, we report a P-value. As expected, these values are large for non-watermarked images, and small (enabling rejection of the null hypothesis) when the watermark is present. Transformations reduce the watermark strength, as reflected in the increased P-value.

[1] P. B. Patnaik. The Non-Central X2- and F-Distribution and their Applications. Biometrika, 36(1/2): 202–232, 1949. ISSN 0006-3444. doi: 10.2307/2332542. URL https://www.jstor.org/stable/2332542.

[2] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53 of Stochastic Modelling and Applied Probability. Springer, New York, NY, 2003. ISBN 978-1-4419-1822-2 978-0-387-21617-1. doi: 10.1007/978-0-387-21617-1. URL http://link.springer.com/10.1007/978-0-387-21617-1.

Pdf: /pdf/f778695a0711f28442b935c4a83e41ff7aa5050d.pdf
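The p-value derivation above maps directly onto `scipy.stats.ncx2`. Below is a minimal sketch with hypothetical array names (`y`, `key`, `mask` are our own, not the authors'); following the derivation, it uses $|M|$ degrees of freedom for the complex entries in the mask:

```python
import numpy as np
from scipy.stats import ncx2

def watermark_p_value(y, key, mask):
    """P-value of the watermark test: a small p suggests the watermark.

    y    : complex Fourier coefficients of the inverted noise x_T'
    key  : complex watermark key k* (same shape as y)
    mask : boolean array selecting the watermarked region M
    """
    y_m, k_m = y[mask], key[mask]
    m = y_m.size
    sigma2 = np.mean(np.abs(y_m) ** 2)         # per-image variance estimate
    eta = np.sum(np.abs(k_m - y_m) ** 2) / sigma2
    lam = np.sum(np.abs(k_m) ** 2) / sigma2    # non-centrality parameter
    # Under H0, eta follows a noncentral chi^2 with m dof and nc lam; the
    # p-value is the probability of observing a value as small as eta.
    return ncx2.cdf(eta, df=m, nc=lam)

# Toy check: a "watermarked" y near the key versus pure Gaussian noise.
rng = np.random.default_rng(0)
mask = np.ones(64, dtype=bool)
key = np.full(64, 3.0 + 0.0j)
noise = rng.standard_normal(64) + 1j * rng.standard_normal(64)
p_watermarked = watermark_p_value(key + 0.1 * noise, key, mask)
p_natural = watermark_p_value(noise, key, mask)
```

As expected from the derivation, the p-value is tiny when the key is present and unremarkable for noise drawn from the null.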
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models
Accept (poster)
Summary: The authors investigate the infinite-width limit of a DEQ, and prove that the output converges to a Gaussian process. Their result importantly leverages the intermediary analysis of a finite-depth, finite-width DEQ. Their main technical result is that the limits of infinite width and infinite depth commute for such networks, which they build upon to establish the convergence to a Gaussian process. Numerical checks are presented to bolster the claim. Strengths: The paper is very clearly written and easy to follow, with the main technical points being sufficiently discussed, and the relevant context being provided. Cautious numerical evidence is further provided to bolster the claims. Overall, the paper is mostly technical in nature, and although it does not discuss the generalization properties of infinite-width DEQs, this result should be interesting to some in the NeurIPS machine learning theory community. Weaknesses: I have not read the proof, and am not familiar with the literature on DEQs, and therefore give a low confidence score. The presentation is sound and I am convinced by the numerical checks. As a very minor remark, while I do understand discussion about the generalization ability of infinite-width DEQs is out of the scope of the present work, I do feel like the inclusion of some simple empirical comparisons with other infinite-width limits of neural networks (NNGPs and NTKs) would benefit the overall reach of the work. I have a number of questions, which I list below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors:
- The authors discuss how previous works show that for MLPs and ResNets, the infinite width and depth limits do not commute, while they show they do for DEQs. However, little discussion is provided as to why this difference arises: is it because of the shared weights of the DEQ, or the input injection at each layer? I would find further intuition and comparison to MLPs helpful and insightful.
- It is not clear why $\sigma_u$ does not enter in Lemma 4.2. Naively, the $\sigma_u \to 0$ limit should correspond to an MLP, for which the two limits do not commute. Is it the case that (14) holds for any $\sigma_u>0$?
- (Minor) To my awareness, the recursions (7-13) for the infinite-width GP kernel of a DEQ are new. Could the authors provide more intuition as to how the kernel of an infinite-width but finite-depth DEQ qualitatively differs from the MLP GP kernel ($\sigma_u=0$)? For instance, a plot of the spectrum of the kernel for various $\sigma_u$ in the supplementary material would help build up intuition.
- (Minor) l73: the sentence is written twice.

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The paper is purely theoretical in nature and as such does not pose any foreseeable societal impact. The technical limitations of the work are clearly stated therein. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Question 1**: Thank you for your insightful comments and questions. The reason the limits in DEQs commute lies in the strategic utilization of input injection and careful selection of variance parameters. Let us provide further intuition for a clearer understanding. In DEQs, the input injection is a crucial aspect that allows us to carefully choose the covariance parameter $\sigma_w$ to control the magnitudes of $h^{\ell}$ as the depth approaches infinity. By selecting a relatively small $\sigma_w$, we ensure that the magnitude of $h^{\ell}$ remains close to that of $h^{1}$. This choice guarantees that $h^{\ell}$ converges uniformly in the width $n$ to the desired fixed point $h^{*}$. On the other hand, traditional MLPs lack the input injection mechanism, which often leads to unstable behaviors as the depth grows. The absence of input injection in MLPs can cause the preactivation vectors to become unbounded or vanish as the depth increases, resulting in a failure of the limits to commute and a loss of expressivity [13,14,8]. Similarly, even though ResNets use skip connections, they can suffer from these issues if poor scaling strategies are employed [8,9]. Therefore, both input injection (or skip connections) and appropriate scaling strategies play crucial roles in ensuring that the limits in DEQs commute. The insights gained from our study can shed light on other neural networks when considering depth becoming large. For achieving well-defined deep neural networks, it is essential to control the magnitudes of preactivation and post-activation vectors. One way to accomplish this is to add input injection (or skip connections) combined with an appropriate choice of small covariance parameters. Regarding the shared weights, they are not the critical reason for ensuring commutative limits.
Instead, neural networks with shared weights, such as DEQs, are popular mainly because they can achieve competitive performance using significantly less memory storage. However, the analysis for networks with shared weights differs from those with independent weights. For instance, as demonstrated in [9], the commutative limits for ResNet with independent weights were established using results from stochastic differential equations, but these cannot be directly applied to DEQs or other neural networks with shared weights. **Response to Question 2**: Thank you for pointing that out, and we appreciate your valuable input. You are correct in identifying that we omitted the condition for $\sigma_u$, and we apologize for the oversight. To clarify, the result in Lemma 4.2 indeed holds for any fixed $\sigma_u > 0$. When $\sigma_u$ is chosen close to zero, DEQs behave like a trivial neural network. Specifically, as $\sigma_u$ approaches zero, the preactivation $g^1=Ux$ becomes very close to zero. Given the assumption that $\phi(0)=0$, the activation function output $h^1=\phi(g^1)$ will also be close to zero. As a result, in the subsequent layers, when computing $g^2=Wh^1$ and $h^2=\phi(g^2+g^1)$, both $g^2$ and $h^2$ will remain close to zero. Consequently, this DEQ effectively reduces to a trivial neural network that outputs a vector close to zero. We will update the manuscript to include this important clarification, and we thank you for helping us improve the accuracy and rigor of our work. Your feedback has been invaluable in strengthening our paper and making it more accessible to readers. **Response to Question 3**: We greatly appreciate your insights into our paper. Your suggestion regarding the qualitative distinction between the kernel of an infinite-width but finite-depth DEQ and the MLP GP kernel ($\sigma_u=0$) is valuable. 
To provide enhanced intuition on this aspect, we have thoughtfully incorporated plots depicting the spectrum of the kernel for various $\sigma_u$ values. These elucidating plots can be found in Figure 3 (right) within the updated PDF file featured in the "global" response section. As you delve into these plots, a discernible trend emerges. With increasing $\sigma_u$, the smallest eigenvalue of the kernel consistently exhibits a rising trajectory, a pattern observed in both theoretical analyses and simulation results. Particularly noteworthy is the scenario where $\sigma_u=0$ (or close to $0$). Here, the smallest eigenvalue of the kernel is close to zero, reflecting the diminishing pre-activation vector $g^{\ell}$ as depth grows. This phenomenon arises due to the application of a small $\sigma_w$ that ensures fixed-point existence, consequently yielding a shared weight matrix $W$ satisfying $\|W\| := \gamma < 1$ (with high probability). This, in turn, leads to the gradual trivialization of the covariance function or kernel. Conversely, the introduction of positive $\sigma_u$ solves this issue. Lemma F.2 ensures the persistence of the pre-activation rather than its eventual vanishing as depth progresses. By leveraging input injection (i.e., $\sigma_u > 0$), the neural network's stability is substantially enhanced compared to scenarios devoid of this input injection. **Response to Question 4**: Thank you for pointing out the repetition in sentence l73. We will remove the duplicate sentence in the revised version of the paper. Your feedback is much appreciated. --- Rebuttal Comment 1.1: Title: Acknowledgements Comment: I thank the authors for the clarification, and do not change my scoring.
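The fixed-point view of DEQs discussed in this exchange can be sketched in a few lines of numpy. This is a hypothetical illustration of the simplest iterative scheme, not the paper's code; a small $\sigma_w$ makes the layer map a contraction, mirroring the spectral-norm condition mentioned above:

```python
import numpy as np

def deq_fixed_point(x, W, U, phi=np.tanh, tol=1e-9, max_iter=500):
    """Equilibrium h* solving h = phi(W h + U x) by forward iteration.

    For 1-Lipschitz phi and spectral norm ||W|| < 1, the update is a
    contraction, so the iteration converges to a unique fixed point
    regardless of initialization; the input injection U x is what keeps
    that fixed point non-trivial.
    """
    h = np.zeros(W.shape[0])
    inj = U @ x                        # input injection, shared by every layer
    for _ in range(max_iter):
        h_next = phi(W @ h + inj)
        if np.linalg.norm(h_next - h) < tol:
            break
        h = h_next
    return h_next

rng = np.random.default_rng(0)
n, n_in = 50, 10
sigma_w, sigma_u = 0.25, 1.0           # small sigma_w => contraction w.h.p.
W = sigma_w * rng.standard_normal((n, n)) / np.sqrt(n)
U = sigma_u * rng.standard_normal((n, n_in)) / np.sqrt(n_in)
x = rng.standard_normal(n_in)

h_star = deq_fixed_point(x, W, U)
residual = np.linalg.norm(h_star - np.tanh(W @ h_star + U @ x))
```

In practice, more efficient root-finding schemes (Newton-like methods, Anderson acceleration) are used instead of plain iteration, but the equilibrium they compute is the same.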
Summary: This paper focuses on the DEQ (deep equilibrium) model, an infinitely deep neural network with shared weight matrices across layers. The authors show that the network approaches a Gaussian process as the network width goes to infinity, and the limit of infinite width and infinite depth commute, also, the Gaussian vectors remain non-degenerate as network depth tends to infinity for any pairs of distinct inputs. These results do not hold for previously well-studied wide neural networks without shared weights. Strengths: The paper presents meticulous analysis on the infinite width and infinite depth limits of the DEQ model (networks with tied weights) and specifically the rate of convergence of the two limits. The theoretical analysis is supported by numerical results. Weaknesses: 1. While technically sound, it is unclear what are the potential insights and contribution of this work to the field. I recommend that the authors add a paragraph in the conclusion section (or if possible include some numerics) on the potential implications of their main results (that the infinite depth and width limits commute for DEQ and that the structure of the limiting kernel is preserved). For example, does this explain why DEQ’s achieve competitive results with the state-of-the-art? How does the result shed light on the infinite width limit of vanilla RNNs? 2. This work focuses on properties of the DEQ at initialization, and there is no learning in the network. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I am not familiar with the line of work on DEQs, but based on the references I think it refers to the method of computing the fixed point of the infinitely deep networks with tied-weights, whereas in the paper it simply refers to the network structure? 2. Line 91: What are the conditions on A here (elements in A need to be subgaussian?) 3. 
Theorem 3.1: Here you introduce the covariance function, however, it may be useful to point out that the covariance function here is different than the covariance of f in NNGP and NTK, in NNGP the readout weights of the network is learned whereas in NTK all the weights are learned. If I understand correctly, here the covariance function is simply f at initialization with all the weights being Gaussian. As you also mention NTK in line 129 it may be helpful to stress the distinction. 4. Fig 4: why is there a large jump at the initial stage in the theory but not in simulation for the smallest eigenvalue over iterations? 5. Fig 4 the last panel: can you confirm this result with simulations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors should add a paragraph on limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Weakness 1**: Thank you for your input. We establish the NNGP correspondence for DEQs, shedding light on why they can compete with advanced models. This is due to their tendency to exhibit Gaussian behavior when the width is large. One implication is its potential application to Gaussian processes. To explore this, we conducted experiments comparing GPs utilizing the NNGP kernel with trained DEQs of varying widths on real datasets. As depicted in Figure 1 of the updated PDF in the "global" response, we observe that GPs with the NNGP kernel outperform finite-width trained DEQs, and DEQs' performance converges to NNGP as width increases, akin to feedforward networks [12]. Moreover, our methodology can yield similar outcomes as in [19,1], establishing NNGP correspondence for RNNs. Furthermore, our analysis can extend to broader networks, especially deep ones, as we successfully demonstrate limit commutativity and present a new approach to prove the strict positive definiteness of the NNGP kernel. **Response to Weakness 2**: Thank you for your valuable comments. Our current focus is establishing NNGP correspondence for DEQs, not training analysis. However, our findings offer a foundation for studying training dynamics in both infinite and finite widths. In [10], the training dynamics are described via an ordinary differential equation governed by the NTK. [10] also underscores that the NNGP kernel is part of the NTK and that a strictly positive NTK determines convergence of the training process in both finite width [5] and infinite width [10]. Hence, the strict positive definiteness of the NNGP kernel established in Theorem 4.5 significantly determines training behavior and stability across infinite and finite widths. Additionally, we have conducted experiments comparing NNGP predictions to trained DEQs with varying widths on real datasets. The results are depicted in the PDF file of the "global" response.
Figure 1 illustrates that the predictions of trained DEQs improve and converge to NNGP predictions as the width increases, akin to feedforward networks [12]. **Response to Question 1**: Thank you for your question, and we appreciate the opportunity to clarify the concept of Deep Equilibrium Models (DEQs). As introduced in [3], DEQs represent a distinctive neural network structure whose latent feature vector $h^*$ is given implicitly as the limit of the post-activation vector $h^{\ell}$ as $\ell\rightarrow\infty$. This characteristic categorizes DEQs as infinite-depth neural networks with shared weights. Furthermore, $h^*$ can be interpreted as a fixed or equilibrium point of the equilibrium equation, as illustrated in Equation (4). Notably, the equilibrium equation can be much more complex in practice [3]. The fixed point can be computed through iterative transitions or efficient root-finding techniques such as Newton-like methods [3] and Anderson acceleration [17]. Remarkably, our results apply to DEQs regardless of how one computes the fixed point. **Response to Question 2**: Thank you for your question. As indicated in Theorem A.1 and Theorem A.2 from Appendix A, we assume that $A_{ij}$ are i.i.d. standard Gaussian. This assumption aligns with our random initialization introduced in Equation (5). However, it is worth noting that the results can be extended to subgaussian entries with some level of dependence for rectangular matrices. For more detailed and advanced results, we recommend referring to the works [15,16]. **Response to Question 3**: Thank you for your question, and we appreciate the opportunity to clarify the distinction between the covariance function in our paper and its relation to NNGP and NTK. In the infinite-width limit, a neural network with random weights is equivalent to a Gaussian process with a specific kernel, known as the NNGP correspondence [10,12]. 
These kernels are utilized in Bayesian inference or Support Vector Machines, yielding results comparable to those of trained neural networks [12]. Our work shows that DEQs also exhibit this NNGP correspondence, as demonstrated in Theorem 4.4, where the NNGP kernel is the covariance function $\Sigma^*$ defined in Theorem 4.1 and Lemma 4.1. Conversely, in the same limit, [10] illustrates that the dynamics of neural networks under training can be described by an ordinary differential equation governed by another kernel called the NTK. Notably, the NTK is distinct from but related to the NNGP kernel. For instance, a strictly positive NNGP kernel implies a strictly positive NTK, but not vice versa [10]. **Response to Question 4**: Thank you for raising this matter, and we apologize for any confusion caused by the misleading labels and captions in Figure 4. To clarify, the second subfigure in Figure 4 illustrates the distribution of $\lambda_{\min}(K^*)$ across 1000 networks instead of $\lambda_{\min}(K^{\ell})$ through simulation. In response to this issue, we have redrawn Figure 4, and the revised plots, now denoted as Figure 3 (left) in the updated PDF file of the "global" response, offer a more accurate portrayal. These updated plots demonstrate a consistent alignment between theory and simulation. Notably, both the theory and simulation curves exhibit a large jump at the initial stage. That is because $K^1 = \sigma_u^2 XX^T/n_{in}$ is generally degenerate due to dependence within the training data, while $K^{\ell}$ is non-degenerate for all $\ell\geq 2$ due to the nonlinear activation $\phi$. **Response to Question 5**: Certainly, we have validated this outcome through simulations. The corresponding plots are now available in the updated PDF file within the "global" response section. Specifically, Figure 3 (middle) demonstrates a clear and consistent increase in the smallest eigenvalue of $K^*$ as $\sigma_w$ increases, mirroring the theoretical computations. 
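As a concrete illustration of the fixed-point computation discussed in the response to Question 1 above, here is a minimal sketch of iterating a small DEQ layer to equilibrium. This is a hypothetical toy example (tanh activation, input injection, and a weight scaling chosen only so the update map is a contraction); the paper's Equation (4) may differ in detail.

```python
import numpy as np

def deq_fixed_point(W, U, x, phi=np.tanh, tol=1e-8, max_iter=1000):
    """Iterate h <- phi(W h + U x) until it reaches an (approximate) equilibrium point h*."""
    h = np.zeros(W.shape[0])
    for _ in range(max_iter):
        h_next = phi(W @ h + U @ x)
        if np.linalg.norm(h_next - h) < tol:
            return h_next
        h = h_next
    return h

rng = np.random.default_rng(0)
n, n_in = 50, 10
# Scale W so the update map is a contraction; then the iteration provably converges.
W = rng.standard_normal((n, n)) / (4 * np.sqrt(n))
U = rng.standard_normal((n, n_in)) / np.sqrt(n_in)
x = rng.standard_normal(n_in)

h_star = deq_fixed_point(W, U, x)
# h* satisfies the equilibrium equation up to the tolerance.
residual = np.linalg.norm(h_star - np.tanh(W @ h_star + U @ x))
```

The same fixed point could instead be found with a Newton-like root finder or Anderson acceleration, as the rebuttal notes; the NNGP result is agnostic to the solver.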
--- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Sorry for the late response. I have read the rebuttal and I appreciate the clarification and new numerical validations. Correspondingly I would like to raise my score to 5. I do find the result that the limits commute very interesting, my concern is about the DEQ model itself. It does not seem to be a commonly used model and it is vague how the results may be applicable to other related more practical architectures. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for revising your score to 5. I appreciate your feedback and the recognition of the clarification and additional numerical validations provided in our rebuttal. We understand your concern about practical applications, and we are committed to exploring how our findings can be applied in more common architectures in the future. Your insights are valuable, and we appreciate your support. Sincerely, The Authors
Summary: This paper examines the infinite width behaviour of DEQs, a kind of neural network architecture that can be viewed as an infinite-depth RNN. They show that contrary to regular MLPs, the limit of infinite width and infinite depth in DEQs commutes. They back their theory up with some numerical experiments. Strengths: - The authors ask an interesting theoretical question and derive a surprising result (that the limits commute). - Paper is generally well written Weaknesses: - The central importance of the commutation of the limits could perhaps be better argued for. As it stands, this seems interesting to me theoretically, but I don't see any major practical advances stemming from this result. It would be good to know if the authors see things differently. - I found some of the Figures hard to understand, particularly Fig 3 and Fig 4. Would appreciate more detail here. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is happening in the rightmost figure in Fig 3? It looks like there is an orange curve that goes all the way to the bottom, but the blue curve does not stay with it. I'm having trouble understanding the significance of this. - Why does the relative error bottom out at different values in Fig 1, left? - What is the legend in Fig 1 left? Is it width of the network? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Weakness 1**: We appreciate the reviewer's insightful comments. It is widely recognized in the most recent literature [12,19,10] that as the width of neural networks approaches infinity, they exhibit Gaussian behavior, a phenomenon known as the Neural Network and Gaussian Process (NNGP) correspondence. This understanding has led to the successful application of NNGP kernels to Gaussian Processes (GPs), resulting in remarkable performance on real-world datasets [12,19]. Furthermore, the spectrum of the NNGP kernel plays a pivotal role in determining the global convergence of gradient-based training methods for neural networks [10,5]. However, it is important to note that this NNGP correspondence cannot be guaranteed in cases where the depth of neural networks becomes significantly large. For instance, as highlighted by [9], standard feedforward networks display heavy-tailed behavior instead of Gaussian behavior when depth outpaces width in convergence. To address this challenge, we employ input injection in DEQs and carefully select variance parameters $\sigma_w$ to ensure the commutation of depth and width limits within DEQs. This allows us to establish the NNGP correspondence for DEQs. Additionally, we demonstrate the strict positivity of the NNGP kernel. These findings contribute fundamental insights for future studies focused on the training and generalization of DEQs. To complement our theoretical findings, a new experiment was added using the NNGP for regression on the MNIST dataset; the result is provided in Figure 1 in the PDF file of the "global response". We can see from Figure 1 that the NNGP outperforms trained finite-width DEQs, but the performance of DEQs tends to converge to that of the NNGP as the width increases. **Response to Weakness 2**: Thank you for your feedback, and we apologize for any confusion caused by the figures. To address this concern, we have redrawn both Figure 3 and Figure 4 to ensure improved clarity. 
The updated figures and detailed captions are now available in the PDF file provided in the "global" response section. Additionally, we provide detailed answers to your follow-up questions to ensure a better understanding of the figures and the underlying concepts. **Response to Question 1**: Thank you for pointing out this observation. The rightmost figure in Figure~3 depicts the convergence of the relative error $\|H^{\ell}(H^{\ell})^T/n-K^*\|/\|K^*\|$ as the depth $\ell$ increases. The orange curve represents the fitted convergence behavior of the relative error, an exponential function denoted as $\gamma^{\ell}$. The reason the blue curve does not follow the orange curve to the bottom is that the relative error is influenced not only by the depth but also by the width of the neural network. The error introduced by the finite width contributes to the deviation of the blue curve from the orange curve. Given the potential for confusion and misinterpretation arising from this width-related effect, we removed the orange curve from our updated plots, which aims to provide a clearer and more precise visualization of the convergence behavior. For your convenience, we have included the revised plots in the updated PDF file accessible through the "global" response section. **Response to Question 2**: Thank you for raising this observation. The variation in the bottoming-out values in Fig 1 (left) is influenced by several factors. Primarily, the discrepancies can be understood through the lens of Lemma~F.2, wherein the bound $\|h^{\ell+1}-h^{\ell}\|\leq \gamma^{\ell} \|h^1\|$ holds. Incorporating Theorem A.1 alongside the Lipschitz continuity of $\phi$, we deduce that $\|h^1\|=\|\phi(Ux)\|= \mathcal{O}(\sqrt{n})$. Consequently, it follows that $\|h^{\ell+1}-h^{\ell}\|= \mathcal{O}(\gamma^{\ell}\sqrt{n})$, implying that the error exhibits relatively greater magnitudes as the width increases. 
Furthermore, it is worth noting that practical considerations such as numerical computational errors can contribute to the disparities and fluctuations observed in the relative error values. These nuances might, in part, account for the variations observed in the plot. To rectify the confusion stemming from the plot, we have made enhancements in the new depiction. By incorporating more intermediate width values (e.g., $[50,100,200,400,800,1000,2000]$), the revised plot presents a clearer and more plausible portrayal of the data. Additionally, to mitigate the influence of the width $n$, we have reformulated the plot using the normalized relative error $\|h^{\ell+1}-h^{\ell}\|/\|h^{\ell}\|$. The result is a smoother and more consistent depiction of the convergence behavior for different widths, as shown in the right figure within Figure 2 of the updated PDF file. **Response to Question 3**: Your understanding is accurate. The legend accompanying the left side of Figure~1 denotes the width of the network. We have taken steps to refine the presentation: the new plot features revised legend labels following the format "width xxx." This adjustment provides a clearer and more intuitive grasp of the width values, avoiding confusion. We value your perceptive observation and your valuable contribution. --- Rebuttal Comment 1.1: Comment: Thanks for your responses - and apologies for the very late reply. On my first point - I think the authors do a good job outlining the theoretical significance of their work, in that it's very interesting that these limits commute, and not expected a priori. I was wondering whether the authors in addition foresee a practical advance stemming from this theoretical insight. Not having one is not a deal-breaker, but I was curious. Thank you very much for improving and clarifying the figures. I think this is an improvement to the paper. 
However, I am planning to maintain my score - I think this paper should be accepted, but I do not think this result on its own qualifies as "excellent impact". I think it is solid but I am not sure if the impact of this work will be very large, given that DEQs are not a very commonly used model, and it is unclear what the practical impact of this particular result will be. I am increasing my confidence score to a 4 as I'm confident this paper should be accepted.
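As a concrete illustration of the covariance recursion discussed in the rebuttals above, the following sketch iterates a DEQ-style kernel map with input injection to its fixed point. This is a hedged toy example: it assumes a ReLU activation and the standard arc-cosine expectation formula, and the exact recursion for $\Sigma^*$ in the paper's Theorem 4.1 may differ in its details.

```python
import numpy as np

def relu_expectation(k11, k12, k22):
    """E[ReLU(u)ReLU(v)] for a centered Gaussian (u, v) with Cov = [[k11, k12], [k12, k22]]."""
    c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(c)
    return np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)

def nngp_fixed_point(x1, x2, sigma_w=1.0, sigma_u=1.0, n_iter=200):
    """Iterate K <- sigma_w^2 E[phi(z) phi(z')] + sigma_u^2 <x, x'> / n_in to its fixed point."""
    n_in = len(x1)
    inj11 = sigma_u**2 * (x1 @ x1) / n_in
    inj12 = sigma_u**2 * (x1 @ x2) / n_in
    inj22 = sigma_u**2 * (x2 @ x2) / n_in
    k11, k12, k22 = inj11, inj12, inj22  # start from the input-injection kernel
    for _ in range(n_iter):
        k11, k12, k22 = (
            sigma_w**2 * relu_expectation(k11, k11, k11) + inj11,
            sigma_w**2 * relu_expectation(k11, k12, k22) + inj12,
            sigma_w**2 * relu_expectation(k22, k22, k22) + inj22,
        )
    return k11, k12, k22

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
k11, k12, k22 = nngp_fixed_point(x1, x2)
```

For ReLU the diagonal map reduces to $k \leftarrow \sigma_w^2 k/2 + \text{inj}$, a contraction for $\sigma_w^2 < 2$, which illustrates why the choice of variance parameters matters for the fixed point to be well-defined.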
Summary: This paper takes the theoretical tools for infinite width limits of fully connected deep neural networks (e.g. NNGP limits, tensor programs etc), and applies them to a deep equilibrium-type neural network. This model is seen as the depth → infinity limit of a feedforward-type network. Convergence to a Gaussian process is proven for this model, and the typical type of recurrence relation for the covariance function is obtained. The theoretical results are validated with several experiments that confirm the findings, and shed light on how quickly the convergence to the fixed point occurs. Strengths: * The paper is well written and easy to follow. The proofs are explained well and seem to be free of any major errors. * The experiments do a good job validating the theory. They cover a broad range of possible questions one might ask about the model. * The fact that the infinite depth limit exists in a non-degenerate way is a nice finding (as opposed to other types of feedforward networks where the infinite depth limit may be degenerate in some way). This means that the idea of feeding the input $x$ into each layer directly may be a stabilizing force that can be used for very deep networks. Weaknesses: * The main result of this paper is largely what one would expect using the infinite width technology, that is to say there are no "surprises" that happen along the way. That is not to say that something is lacking in the methods applied here, just that the result is more or less what you would expect. To really make this an outstanding paper, it would be nice to additionally see some explanation of why the resulting model is interesting...for example is there some way in which it outperforms ordinary feedforward nets or resnets or is there a way in which it is more efficient. 
Perhaps this kind of thing appears in the existing literature on deep equilibrium models (I am not an expert on this type of model) * Usually the big strength of the infinite width limits is that the NTK is a constant which enables a theoretical understanding of how training happens, not just initialization. This analysis of the training kernel is not carried out in this work. * The fact that the limit with or without shared weights is identical suggests that some important theoretical considerations may be missing here...surely there is some kind of important difference between these models that this infinite width limit is not capturing. In particular, I think the training kernel for the model with or without shared weights would be different even though the conjugate kernels studied here are the same. It would be interesting to compare/contrast these more, either by experiments or by more theory to get a handle on what this model is (or isn't) actually doing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Have you thought about the training kernel (NTK) for this model? Is there a reason the usual analysis would not go through? * Is there any analysis on the differences between the DEQ and the standard feedforward model in the wide width limit? What are the advantages/disadvantages? This may be found somewhere in the existing literature, but this was not clear to me and the fact that this DEQ model would have some advantages would be an important part of motivating the work done here. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: * One limitation that always comes up in these limiting type things is the question of how large real neural networks have to be for the theory to actually work. Some discussion or experiments specifically addressing this (e.g. showing the error in the predictions as a function of network size) could help explicitly address this. * Related to above: It would also be interesting to understand the fluctuations around the infinite width limit and how much they effect things (which may be quite different for this model vs for ordinary feedforward networks), but that is likely beyond the scope of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Weakness 1**: Thank you for your valuable review. We appreciate your interest in the efficiency of DEQs compared to other advanced neural networks, and your consideration of the nature of our main result. Implicit networks [1], such as DEQs, have recently gained significant attention for their versatility, encompassing models like MLPs, convolutional nets, ResNets, and RNNs [1,2]. Moreover, DEQs have also stood out for their competitive performance while economizing computational resources [2,4]. In terms of expected outcomes, it is important to note that, unlike shallower networks, deep networks can show instability as their depth increases. For instance, as noted in [5], feedforward networks may display heavy-tailed distributions rather than Gaussian behavior when depth converges faster than width. This instability can subsequently impact network expressivity [7,8]. To address these challenges, we utilize input injection in DEQs and carefully select variance parameters $\sigma_w$ to ensure the commutativity of the two limits. This forms the basis for establishing a significant NNGP correspondence for DEQs. Our strategies and mathematical tools could potentially extend to other networks, particularly those with substantial depth. **Response to Weakness 2**: Thank you for your valuable comments. You are correct in noting that our current work does not encompass an analysis of training, since the primary focus of our paper is to establish the NNGP correspondence for DEQs. However, our findings offer a solid foundation for probing training dynamics in scenarios of both infinite and finite widths. Furthermore, as highlighted in [3], the dynamics of networks under training can be described through an ordinary differential equation governed by the NTK. [3] also underscores that the NNGP kernel is part of the NTK and that a strictly positive NTK determines convergence of the training process in both finite-width [6] and infinite-width [3] scenarios. 
Therefore, the strict positive definiteness of the NNGP kernel established in Theorem 4.5 significantly influences training behavior and stability across infinite and finite widths. Hence, we intend to explore the training dynamics of DEQs in our future research. **Response to Weakness 3**: Thank you for your valuable response. While it holds true that the identical Gaussian process is attained in the limit regardless of shared weights, as noted in Remark 4.2 and other literature [9,10], the absence of shared weights leads to a lack of correlation across layers. Regarding the training kernel or NTK, you correctly identify the distinction between the NTK with and without shared weights. However, it is crucial to emphasize that the central thrust of our paper is to establish the NNGP correspondence for DEQs. In the context of finite-depth neural networks, akin to our current study, a line of studies has primarily focused on scrutinizing NNGPs across diverse network architectures, rather than delving into the NTK [10,11,5]. While training dynamics and generalization are undeniably significant, we consider them a promising avenue for future investigation given time constraints. **Response to Question 1**: Thank you for your insightful question. The analysis of the training kernel (NTK) is indeed part of our future work. The analysis of the training kernel for DEQs could encounter challenges analogous to those encountered in analyzing the NNGP kernel for DEQs. Specifically, the training kernel may not be well-defined if the two limits (infinite width and depth) do not commute. As DEQs are defined as infinite-depth neural networks, it is crucial for the two limits to commute. However, we are optimistic that the methodologies and mathematical frameworks in this paper can serve to ensure the commutativity of these two limits, thereby rendering the training kernel or NTK well-posed. **Response to Question 2**: Thanks for your questions. 
The main motivation behind using DEQs lies in their practical benefits. Existing literature has highlighted that DEQs can attain competitive performance compared to other networks while demanding significantly fewer resources [2,4]. However, the theoretical analysis of DEQs is still an ongoing field of exploration. Limited research has delved into the well-posedness and training process of DEQs [12,13,14]. We intend to investigate the training and generalization ability of DEQs as part of our future work. **Response to Limitation 1**: Thank you for your valuable feedback. While this paper primarily focuses on asymptotic analysis to establish the NNGP correspondence for DEQs, addressing the practical applicability of the theoretical findings requires non-asymptotic analysis [6,15]. To complement our theoretical analysis, we have conducted experiments comparing NNGP predictions with trained DEQs of varying widths on realistic datasets. The results are depicted in the updated PDF file in the "global" response section. Figure 1 illustrates that the test accuracy of trained DEQs increases and converges towards that of NNGP as the width increases. This trend aligns with observations made in the context of feedforward networks [11]. **Response to Limitation 2**: We appreciate your insights. While fluctuations around the infinite width limit are interesting, our study centers on depth-width interaction and NNGP correspondence. We value your suggestions for future work and plan to consider them in subsequent research. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I acknowledge the rebuttal from the authors.
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate your thorough review of our paper and your valuable comments and suggestions. We have carefully considered each of your points and have addressed them in our separate responses to your questions. In light of your input, we have taken significant steps to enhance the clarity and comprehensibility of our work. Specifically, we have conducted new experiments that involve a comparison between NNGP predictions and trained DEQs of varying widths on real datasets. These results, along with updated plots, have been incorporated into the uploaded PDF file. Acknowledging the confusion caused by misleading labels and captions in certain plots, we have revisited these figures, refining the labels and captions to achieve heightened clarity. Our goal is to ensure that these figures accurately represent our findings, eliminating any potential ambiguity. The revised figures are now included within the updated PDF file. For your convenience, we have also included references in this global response that were cited in our rebuttal. This addition aims to provide you with a seamless understanding of the context to which we refer. We deeply appreciate your time and commitment to appraising our manuscript. Your insights have been instrumental in refining our work. Best regards, The Authors === **References** [1] Sina Alemohammad, Zichao Wang, Randall Balestriero, and Richard Baraniuk. The recurrent neural tangent kernel. arXiv preprint arXiv:2006.10246, 2020. [2] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International conference on machine learning, pages 242–252. PMLR, 2019. [3] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. Advances in Neural Information Processing Systems, 32, 2019. [4] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. 
In International Conference on Learning Representations, 2018. [5] Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International conference on machine learning, pages 1675–1685. PMLR, 2019. [6] Laurent El Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, and Alicia Tsai. Implicit deep learning. SIAM Journal on Mathematics of Data Science, 3(3):930–958, 2021. [7] Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, and Hongyang Gao. A global convergence theory for deep relu implicit networks via over-parameterization. In International Conference on Learning Representations, 2021. [8] Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. On the impact of the activation function on deep neural networks training. In International conference on machine learning, pages 2672–2680. PMLR, 2019. [9] Soufiane Hayou and Greg Yang. Width and depth limits commute in residual networks. arXiv preprint arXiv:2302.00453, 2023. [10] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018. [11] Kenji Kawaguchi. On the theory of implicit deep learning: Global convergence with implicit layers. In International Conference on Learning Representations, 2020. [12] Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In International Conference on Learning Representations. [13] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in neural information processing systems, 29, 2016. [14] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations. [15] Terence Tao. 
Topics in random matrix theory, volume 132. American Mathematical Soc., 2012. [16] Roman Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018. [17] Homer F Walker and Peng Ni. Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis, 49(4):1715–1735, 2011. [18] Ezra Winston and J Zico Kolter. Monotone operator equilibrium networks. Advances in neural information processing systems, 33:10718–10728, 2020. [19] Greg Yang. Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems, 32, 2019. Pdf: /pdf/b23ab2af2ded6d244c6dede9a40883bbe32830a1.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Meta-Learning with Neural Bandit Scheduler
Accept (poster)
Summary: This paper proposes scheduling tasks for meta-learning using contextual bandits. Strengths: 1. The idea of applying contextual bandits to select tasks for meta-learning is novel 2. The experimental results are strong, especially when the task distribution is skewed Weaknesses: 1. The method is computationally inefficient in two aspects: a) The arm context requires computing both the adapted parameters and the meta-parameters for every task. While some of these gradients can be reused, the computational cost can still be significant as the number of training tasks increases. b) The networks $f_1,f_2$ used to compute the benefit score take parameters as inputs, which is huge in practice. Although, in Remark 3, the authors mention applying average pooling to reduce the cost, it is unclear how it is done in practice. Moreover, $f_2$ requires computing the gradients of $f_1$ for two different parameter sets. 2. There is no training time comparison between different methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What’s the definition of $\mathbf{\theta}$ in Equation (7)? Is it $\theta_1$? 2. Why not simply use $\tilde r_{k,i}+\tilde e_{k,i}$ as the benefit score? In that case, there would be no need to learn $f_2$. Moreover, I believe that, with a little more effort, $f_1$ could be removed as well. 3. Could you please detail the average pooling process in Remark 3? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This work does not appear to involve potential negative social impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable questions and comments. Here, we will try our best to address the questions and concerns in the form of Q\&A. Since we are unable to submit the improved manuscript based on reviewers' comments, we will describe these modifications to the current manuscript instead. **To better answer the reviewer's questions, we also include additional experiments in the 1-page PDF file.** *If you have any additional questions or comments, please kindly let us know. Thank you!* **Q1: What’s the definition of $\theta$ in Equation (7)? Is it $\theta_{1}$?** Yes, the $\theta$ notation here refers to $\theta_{1}$, because the exploration module $f_{2}$ takes the gradients of $f_{1}$ (with respect to $\theta_{1}$) as input. We have updated the manuscript for this part to avoid any confusion. **Q2: Why not simply use $\tilde{r} + \tilde{e}$?** *As we may not fully understand this question, we would really appreciate it if the reviewer could let us know whether our initial response below adequately addresses your concerns. If not, please kindly let us know, and we will try our best to make ourselves clear. Thank you!* Recall that for an arm (task), $\tilde{r}$ is the output of $f_{1}$, which refers to the estimated arm reward; it measures the instant benefit (reward) of including this task (arm) in the meta-training. Then, $\tilde{e}$ refers to the corresponding exploration score, which measures the uncertainty of $\tilde{r}$. Note that since $\tilde{e}$ is the output of $f_{2}$, we need the exploration model $f_{2}$ to estimate $\tilde{e}$ on the fly. In this way, $\tilde{r} + \tilde{e}$ equals the sum of the $f_1$ and $f_2$ outputs. Next, as mentioned in the paper, we apply $\alpha\cdot \tilde{r} + \tilde{e}$ to balance exploitation and exploration, where higher $\alpha$ values lead to higher levels of exploitation. 
Meanwhile, $\alpha$ is also used to balance our two exploration objectives (lines 208-224). Please also refer to the experimental results in Table 3 (Appendix) for the effects of $\alpha$. Based on Table 3 (Appendix), we can generally choose $\alpha\in [0.5, 0.7]$, because in this way we can balance exploitation and exploration as well as balance our two exploration objectives (lines 83-91, Appendix). This also supports our claim that $f_{2}$ is necessary for our proposed BASS model. Furthermore, we would like to include additional discussion of our adaptive exploration strategy. Our proposed BASS adopts $f_{2}$ to adaptively learn the exploration score, which can be either positive or negative. The intuition is that the exploitation model (i.e., $f_{1}$) can produce overly high estimates of the arm reward, and applying the upper confidence bound (UCB) exploration strategy can amplify this mistake, as the UCB is non-negative. For notational simplicity, let us denote the expected reward of an arm $x$ as $\mathbb{E}[r] = h(x)$, where $h$ is the unknown reward mapping function. The corresponding reward estimate is denoted as $\hat{r} = f_{1}(x)$, where $f_{1}(\cdot)$ is the exploitation model. When the estimated reward is lower than the expected reward ($f_{1}(x) < h(x)$), we apply "upward" exploration (i.e., a positive exploration score) to increase the chance of arm $x$ being explored. Conversely, if the estimated reward is higher than the expected reward ($f_{1}(x) > h(x)$), we apply "downward" exploration (i.e., a negative exploration score) instead to counteract the excessively high reward estimate. In this work, we apply the exploration model $f_{2}$ to adaptively learn the relationship between the network gradients of $f_{1}$ and the reward estimation residual $h(x) - f_{1}(x)$.
Moreover, we also incorporate the task adaptation difficulty level into the exploration score, for a refined exploration strategy under the meta-learning task scheduling settings. We have also added the above discussion to the Appendix for readers' reference. **Q3: Details of the average pooling process in Remark 3?** As we have discussed in the paper, since the dimensionality of the original meta-parameters can be high, inspired by CNNs, we apply average pooling to embed them into low-dimensional vectors. For instance, suppose the dimensionality of the original meta-parameters is 10,000 and our average pooling step is 100. Here, the first 100 elements of the original meta-parameters are averaged, and this mean value becomes the first element of the average-pooled representation vector. Applying this process to the remaining elements of the original meta-parameters, we obtain the average-pooled representation vector (of dimensionality $10,000 / 100 = 100$) for the original meta-parameters. We have also updated Remark 3 in the manuscript for better presentation and clarity. Meanwhile, please see Table 3 in the uploaded PDF file for reference, where we compare the model performance with different levels of average pooling. Here, an overly small dimensionality of the average-pooled vector representation (e.g., 20) can lead to sub-optimal performance of the BASS framework. Meanwhile, we see that setting the dimensionality to 50 generally leads to good performance, because this very simple yet effective method can preserve the local characteristics of the meta-parameters. **Q4: Running time of BASS?** In Figure 1 of the attached 1-page PDF file, we include additional running-time comparisons with ATS under 6 different settings. This is because ATS is generally the second-best task scheduling method, and it is also the only method among the baselines that can achieve adaptive task scheduling in meta-learning.
Compared with ATS, we can observe that BASS achieves significant running-time improvements, requiring as little as 50\% of ATS's running time. We have also updated the manuscript Appendix by adding these results. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed feedback. I'm still confused about Q2. What I originally meant was that $\tilde e_{k, i}$ can always be computed from $f_1$ during the training time. In that case, why do we introduce another function $f_2$ rather than directly using $\tilde e_{k, i}$ as the exploration reward? --- Reply to Comment 1.1.1: Title: Further clarification on Q2 Comment: Thank you so much for getting back to us, and we would like to further clarify your concerns on this question. First, we would like to apologize for the possible confusion caused by our previous response. Here, based on Eq. (7) in the main body, the actual outputs of $f_{1}$, $f_{2}$ are $\hat{r}$ and $\hat{e}$ (symbols with the "hat"), respectively. Then, as in Eq. (8) and (9) in the main body, $\tilde{r}$ and $\tilde{e}$ (symbols with the "tilde") are the unbiased approximations of the reward and exploration score, respectively. These unbiased approximations are used to train the bandit scheduler (lines 225-232, main body). To achieve adaptive exploration, we need to obtain the exploration score (please also see our previous response on why we need an adaptive exploration strategy). Based on the definition of $\tilde{e}$ in Eq. (9) of the paper main body, it consists of three values: the unbiased reward approximation $\tilde{r}$, the output of $f_{1}$, and the validation loss $\mathcal{L}\_{k, i}$. While the output of $f_{1}$ and $\mathcal{L}\_{k, i}$ are calculated during the inference phase (lines 6-10, Algorithm 1 in the main body), the unbiased reward approximation $\tilde{r}$ (in Eq. (8)) is still unknown.
In this case, if we do not use $f_{2}$ to learn $\tilde{e}$ but choose to directly calculate $\tilde{e}$, we would first need to calculate the unbiased reward estimates $\tilde{r}$ (in Eq. (8)) for **all the candidate arms** in the candidate pool $\Omega_{\text{task}}^{(k)}$, whose cardinality can be very large. In this case, the computational cost can be prohibitive, since the calculation of the unbiased reward estimates $\tilde{r}$ involves the meta-adaptation ($J$-step gradient descent, Eq. (1) in the main body) on all the validation tasks $\mathcal{T}^{\text{valid}}\in \Omega_{k}^{\text{valid}}$. Alternatively, we apply the exploration model $f_{2}$ to calculate the exploration score estimates, which significantly reduces the computational cost. Afterwards, as shown in lines 11-15 of Algorithm 1, for the sake of training the bandit scheduler, we only need to derive the unbiased reward estimates $\tilde{r}$ (in Eq. (8)) for **the chosen arms** $\Omega_{k}$. Since the size of the candidate pool $| \Omega_{\text{task}}^{(k)} |$ is considerably larger than the number of chosen arms $| \Omega_{k} |$, applying $f_{2}$ can dramatically reduce the running time. Therefore, $f_{2}$ is necessary to achieve efficient adaptive exploration for task scheduling in meta-learning. We understand that our notation may confuse the reviewer and other readers, e.g., through the subtle visual differences between "hat" symbols and "tilde" symbols. Thus, we will update the corresponding notation and narrative to offer better clarity. **If you have any further concerns, please kindly let us know. Thanks again for pointing out this issue.**
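To make the selection rule from the Q2 discussion above concrete, here is a minimal, hypothetical sketch of ranking candidate arms by $\alpha\cdot \tilde{r} + \tilde{e}$ with a signed exploration score. The functions `f1`, `f2`, and the gradient placeholder below are toy stand-ins for the paper's exploitation and exploration networks, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for BASS's two modules. In the paper these are neural
# networks; the simple functions below are placeholders for illustration.
def f1(context):
    """Exploitation model: returns an estimated arm reward."""
    return float(context.mean())

def f2(grad_f1):
    """Exploration model: returns a SIGNED exploration score. A negative
    score allows 'downward' exploration when f1 over-estimates a reward,
    unlike a non-negative UCB bonus."""
    return float(np.tanh(grad_f1.sum()))

def select_arms(contexts, alpha=0.6, B=3):
    """Rank candidate arms by alpha * r_hat + e_hat and pick the top B.
    Higher alpha weights exploitation more heavily."""
    scores = []
    for x in contexts:
        r_hat = f1(x)
        grad = np.ones_like(x) / x.size  # placeholder for the gradient of f1
        e_hat = f2(grad)
        scores.append(alpha * r_hat + e_hat)
    return np.argsort(scores)[::-1][:B]  # indices of the B highest-scoring arms

contexts = rng.normal(size=(10, 5))  # 10 candidate arms, 5-dim contexts
chosen = select_arms(contexts, alpha=0.6, B=3)
```

Because the exploration score may be negative, an arm whose reward is over-estimated can be ranked down, which a non-negative UCB-style bonus cannot achieve.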
Summary: This paper considers the problem of task scheduling in meta-learning. Under the gradient-based meta-learning framework, the authors propose BASS, which uses contextual bandits parameterized by neural networks. The tasks in each batch (arms) are selected in an optimistic manner, with the reward estimated using a validation set and its uncertainty estimated using errors in the neural network predictions. A theoretical analysis of the regret is done in the overparameterized neural network setting. Experiments are done on Mini-ImageNet, CIFAR100, and Drug datasets, showing competitiveness with SOTA and outperforming them when there is task imbalance. Strengths: 1. The authors provide detailed mathematical motivation for their algorithmic choices. 2. The use of contextual bandits for meta-learning is novel, as far as I know. 3. The authors provide theoretical analysis of their algorithm. 4. Experiments are done on a range of datasets, with ablations. 5. BASS is able to lead to statistically significant improvements when there is task imbalance and is competitive otherwise. Weaknesses: 1. There is no discussion of the computational burden of BASS. It would strengthen the work to provide wall clock times or plots of the performance as a function of time. 2. It is not clear how BASS would scale, as it takes the parameters of one neural network as input into another neural network. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the computation time of BASS compare to the other algorithms? 2. How would BASS scale with the size of the classifier, e.g. if we were to meta-learn an initialization for 100 layers? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper proposes a new meta-learning algorithm, so I think the authors are ok in not including a broader impact section. The authors do adequately discuss limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable questions and comments. Here, we will try our best to address the questions and concerns in the form of Q\&A. Since we are unable to submit the improved manuscript based on reviewers' comments, we will describe these modifications to the current manuscript instead. **To better answer the reviewer's questions, we also include additional experiments in the 1-page PDF file.** *If you have any additional questions or comments, please kindly let us know. Thank you!* **Q1: Running time of BASS?** Thank you for the feedback. We have included additional running-time comparisons with ATS [1] under 6 different settings. This is because ATS is generally the second-best task scheduling method, and it is also the only baseline method that can achieve adaptive task scheduling in meta-learning. Please refer to the running-time results shown in Figure 1 of the PDF file. From the figure, we can observe that BASS achieves significant running-time improvements, requiring as little as 50\% of ATS's running time. We have also updated the manuscript Appendix by adding these results. **Q2: How would BASS scale with the size of the classifier?** In general, it can be challenging to train large meta-models. In this case, we propose an approximation that applies average pooling to reduce the dimensionality of the bandit scheduler's input in practice (Remark 3). This method is simple, but it preserves the characteristics of the meta-parameters, and we can adjust the reduced dimensionality based on the complexity of the meta-model. Please see our additional experimental results in Table 3 of the uploaded PDF file for reference. Here, we see that setting the dimensionality to 50 generally leads to good performance, and the BASS framework can also handle higher-dimensional average-pooled vector representations (e.g., dimensionality $= 500$).
Meanwhile, for the experiments in the paper, the effectiveness of BASS is tested on two different types of meta-models (fully-connected networks for the Drug data set, and CNNs for the Mini-ImageNet and CIFAR-100 data sets). Moreover, compared with the state-of-the-art baseline ATS, we can achieve effective task scheduling with a relatively small computational cost. Please also see the running-time results for reference (Figure 1 in the PDF file). **REFERENCE** [1] Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, and Chelsea Finn. Meta-learning with an adaptive task scheduler. Advances in Neural Information Processing Systems, 34:7497–7509, 2021. --- Rebuttal Comment 1.1: Title: Thanks to the authors for the rebuttal Comment: After reading it and the other reviews, I have decided to raise my score from 6 to 7. I would encourage the authors to include a comprehensive comparison of computation time for all baselines in the final paper, as it seems most reviewers had this question. --- Reply to Comment 1.1.1: Title: Thank reviewer zT5d for the discussion Comment: We sincerely thank the reviewer for your constructive comments and suggestions. We will definitely include a more detailed running-time comparison with the baselines in the paper Appendix, along with other improved / newly added content.
Summary: The paper presents a task scheduling approach in meta-learning under a contextual bandit framework. The proposed methodology, named BASS, treats each meta-learning task as an arm, prioritizing the selection of these arms according to exploration and exploitation scores. These scores are computed by a trainable neural network with the input being the meta-model's status. The authors provide theoretical proof demonstrating the convergence of the regret bound. Empirical results indicate that BASS outperforms baselines in meta-learning tasks, particularly those involving noisy or skewed datasets. Strengths: **S1. Clarity** The authors present their work with high clarity. The notations, a critical aspect in avoiding confusion in this complex subject, are well-defined. Also, the authors have strategically placed numerous remarks in the main section, which significantly assist readers in comprehending the principal concepts. **S2. Advantages from the use of bandits** The application of the contextual bandit in place of the previously used greedy update offers multiple strengths. - It eliminates the need for unstable bi-level optimization - It naturally manages the balance between exploration and exploitation. The way the extrinsic and intrinsic rewards are formulated is both intriguing and intuitive. Notably, the exploration factor, which takes into account both task uncertainty and difficulty, stands out as a significant advantage of this work. - As a consequence of the bandit application, the regret bound can be theoretically limited. The authors have put in considerable effort to substantiate their theoretical analysis in the appendix. Weaknesses: **W1. Experimental results** While the proposed method demonstrates significant enhancements over baseline models for noisy or skewed training tasks, it appears to yield marginal or negligible improvement on standard datasets, as acknowledged by the authors.
Incorporating additional data regarding computation cost could provide valuable insight, especially when comparing the effect of noise or skewness on different datasets and assessing the increased computation demand posed by BASS. **Minor comment** It is recommended to incorporate the 'Related Works' section into the main body of the manuscript instead of including it in the appendix. Given that all submissions adhere to the same page limit, placing this section in the appendix can disrupt the contextualization of the work within the existing literature. **Acknowledgment Following Rebuttal** The author's rebuttal successfully addressed concerns about the experiment. The inclusion of additional results on average-pooling is particularly noteworthy. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Q1.** The explanation provided from Line 179 to Line 185 is somewhat unclear. Could the authors provide further clarification on this matter? **Q2.** The notations used in Line 108 and Line 326 appear to be inconsistent. Could these be typos? **Q3.** It's great that BASS employs all parameters of the meta-model to infer its status. However, BASS uses average pooling to reduce millions of parameters to about 50. Can this average-pooled feature truly encompass all necessary information about the status of the meta-model? **Q4.** In relation to Q3, the use of average-pooling seems to be due to the high dimensionality of the input. Have the authors considered employing other encoding techniques, such as an MLP with smaller hidden layers? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed their limitations regarding the computational cost and the marginal improvement for standard dataset in the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable questions and comments. Here, we will try our best to address the questions and concerns in the form of Q\&A. Since we are unable to submit the improved manuscript based on reviewers' comments, we will describe these modifications to the current manuscript instead. **To better answer the reviewer's questions, we also include additional experiments in the 1-page PDF file.** *If you have any additional questions or comments, please kindly let us know. Thank you!* **Q1: Could the authors provide further clarification on Line 179 to Line 185?** Recall that for task (arm) $\mathcal{T}\_{k, i}$, the exploration module $f_{2}$ takes two gradient vectors $\nabla_{\theta} f_{1}(\chi_{k, i}^{s})$, $\nabla_{\theta} f_{1}(\chi_{k, i}^{q})$ as input. This paragraph (lines 176-185, main body) explains the intuition behind this design. Here, we give two illustrative cases: (1) The variance of the corresponding data distribution $\mathcal{D}\_{\mathcal{T}\_{k, i}}$ is high. In this case, the support set $D\_{k, i}^{s}$ and the query set $D\_{k, i}^{q}$ can be considerably different, which makes the corresponding arm contexts $\chi\_{k, i}^{s}$, $\chi\_{k, i}^{q}$ divergent. As a result, the gradient vectors $\nabla_{\theta} f_{1}(\chi_{k, i}^{s})$, $\nabla_{\theta} f_{1}(\chi_{k, i}^{q})$ will likely be distinct from each other. (2) Alternatively, suppose the support set $D_{k, i}^{s}$ and the query set $D_{k, i}^{q}$ are not significantly distinct (which means that $\chi_{k, i}^{s}$ does not significantly differ from $\chi_{k, i}^{q}$). Then, if these two gradient vectors still change dramatically when adapting to task $\mathcal{T}\_{k, i}$, we can conclude that the exploitation model $f_{1}$ is not well adapted to this task $\mathcal{T}\_{k, i}$.
The above are two scenarios where more exploration is needed for the task $\mathcal{T}\_{k, i}$, in order to help $f_{1}$ better learn the reward for this task. In such cases, the information from these two gradient vectors can help us make better exploration decisions, since they encode information regarding the dynamics of the meta-model parameters and the exploitation model $f_{1}$ parameters. We apply the exploration model $f_{2}$ to learn from these two gradient vectors. We have also updated this part of the manuscript for better clarity and presentation. **Q2: The notations used in Line 108 and Line 326 appear to be inconsistent. Could these be typos?** Thank you so much for spotting this typo; it should be $\mathcal{P}(\mathcal{T})$. We have updated the manuscript, and will also double-check the manuscript for other potential typos / mistakes. **Q3: BASS uses average pooling to reduce millions of parameters to about 50. Can this average-pooled feature truly encompass all necessary information about the status of the meta-model?** Here, we include additional experiments with different levels of average pooling, such that after the average pooling, the input dimensionality falls into $\\{20, 50, 100, 500\\}$. Please see Table 3 in the uploaded PDF file for the experiment results. Here, an overly small dimensionality of the average-pooled vector representation (e.g., 20) can lead to sub-optimal performance of the BASS framework. Meanwhile, we see that setting the dimensionality to 50 generally leads to good performance, which means the average pooling method can effectively preserve the characteristics of the meta-parameters.
**Q4: Have the authors considered employing other encoding techniques, such as an MLP with smaller hidden layers?** Thank you for your suggestion. We include additional experimental results using an MLP to map the original context into a lower-dimensional space instead of using our proposed average pooling (Remark 3). Please see Table 4 in the uploaded PDF file for the experiment results. Here, we use a one-layer MLP with ReLU activation to embed the original meta-parameters into low-dimensional vector representations. We can see that the MLP-based method can indeed lead to some performance improvement. But in general, the performance difference between the MLP-based embedding and the average-pooled vector representation is small. Meanwhile, we also note that the MLP-based mapping approach is considerably more time-consuming than the average pooling approach, since we also need to train the additional embedding layer, which has a large number of trainable parameters. In contrast, the computational cost of average pooling is negligible. We have also included these experiment results in the paper Appendix. **Q5: Running time results?** Please refer to the running-time results shown in Figure 1 of the PDF file. Here, we include running-time comparisons with ATS under 6 different settings. This is because ATS is generally the second-best task scheduling method, and it is also the only baseline method that can achieve adaptive task scheduling in meta-learning. From the figure, we can observe that BASS achieves significant running-time improvements, requiring as little as 50\% of ATS's running time. **Q6: Move "Related Works" section to the main body?** Thank you for the suggestion. We agree with the reviewer that the "Related Work" section should be moved to the main body for the sake of consistency. Therefore, we have updated the manuscript and moved this section to the main body.
--- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I'd like to thank the authors for addressing my concerns and for conducting additional experiments. It's noteworthy how effectively the vanilla average-pooling compresses the status of the meta-model. Based on this, I have adjusted my rating accordingly from 6 to 7. --- Reply to Comment 1.1.1: Title: Thank reviewer Z8mk for the discussion Comment: We would like to sincerely thank the reviewer again for the discussion, and we will definitely update the new / improved contents to the paper.
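The average-pooling scheme discussed in Q3/Q4 above (Remark 3 in the paper) can be sketched in a few lines. This is a minimal illustration assuming the parameter count divides evenly by the pool size, as in the 10,000 / 100 = 100 example from the rebuttal; it is not the authors' implementation:

```python
import numpy as np

def average_pool(params, pool_size):
    """Average-pool a flattened meta-parameter vector: each non-overlapping
    window of `pool_size` consecutive entries is replaced by its mean."""
    params = np.asarray(params, dtype=float)
    # For simplicity, assume the parameter count divides evenly by pool_size.
    assert params.size % pool_size == 0
    return params.reshape(-1, pool_size).mean(axis=1)

theta = np.arange(10_000, dtype=float)  # stand-in for flattened meta-parameters
context = average_pool(theta, pool_size=100)
assert context.shape == (100,)
# First window averages elements 0..99, giving 49.5.
assert np.isclose(context[0], 49.5)
```

Each element of the pooled context thus summarizes one local window of the meta-parameters, which is why the rebuttal argues the method preserves their local characteristics.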
Summary: This paper proposed a task scheduling framework, BASS, based on the status of the meta-model under a contextual bandit setting. BASS addressed the performance bottleneck of meta-models by balancing exploitation and exploration, and handled the data scarcity in the early stages of meta-training by planning future meta-learning iteration strategies. The experimental and theoretical analysis showed the effectiveness of BASS. Strengths: 1 [Writing & Presentation] This paper is well-written and easy to follow. 2 [Motivation] The previous work about task schedulers aimed to improve meta-training strategies based on various pre-defined criteria and assumptions, ignoring the global knowledge. It may result in a sub-optimal meta-model affected by noise perturbation or skewed task distributions. Motivated by this limitation, the authors proposed a novel framework to solve the meta-learning task scheduling problem. 3 [Contribution] Different from the existing methods that exploit the current/local knowledge for greedy scheduling, the proposed method leveraged a novel method to adaptively learn the relationship between the meta-model parameters and meta-model generalization ability. 4 [Experiments] The experiments demonstrated the effectiveness of BASS compared with seven strong baselines on three real datasets. The experimental results showed superiority in terms of accuracy and efficiency. Besides, BASS can explore the ‘tail’ tasks and enjoys good performance in the ensemble inference setting, which further enhances the generalization ability of meta-learning models. Weaknesses: 1 [Algorithm] In Algorithm 1, what is the detailed setting of the initialization? Intuitively, the meta-model parameters would rely on the various data distributions. 2 [Theory] In Section 4, the $k$-iteration regret is the assumption of Theorem 4.2. However, the detailed setting of the regret was not given.
For example, the explanation of the independence of regret bounds is not well illustrated. 3 [Experiment] In Section 5.1, the experimental analysis was on three datasets: the Drug dataset is a textual dataset, and both ImageNet and CIFAR are visual/image datasets. Since the proposed method focused on the generalization ability, it would be better to verify the performance on various categories of datasets. 4 [Experiment] In Table 1, the performance of BASS for 1-shot increased only slightly. Does it mean the meta-model is invalid? If so, what is the main point of BASS leading to this result? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1 [Refer to Weakness 1] What are the assumptions on various datasets? Are the assumptions independent of each other? Will the results vary with different initialization? 2 [Refer to Weakness 1] What are the constraints or the policy of the batch size $B$? How will the performance vary with different settings of $B$? 3 [Refer to Weakness 2] What is the bound of the regret? What is the distribution of arms? Does the performance change with different settings/assumptions of the regret/arms? 4 [Refer to Weakness 3] How is the performance on various kinds of datasets, e.g., image datasets, textual datasets, vision datasets, or datasets represented by deep features? Furthermore, the data dimension, data sparsity, and even the image resolution may also affect the performance. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The target problem is interesting and worth studying. However, the detailed explanation of the theoretical analysis needs to be clarified, and more experiments should be conducted to verify the superiority of the proposed method.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable questions and comments. Here, we will try our best to address the questions and concerns in the form of Q\&A. Since we are unable to submit the improved manuscript based on reviewers' comments, we will describe these modifications to the current manuscript instead. **To better answer the reviewer's questions, we also include additional experiments in the 1-page PDF file.** *If you have any additional questions or comments, please kindly let us know. Thank you!* **Q1: What are the assumptions on various datasets? Are the assumptions independent of each other? Will the results vary with different initialization?** Analogous to other bandit-based works (e.g., Neural-UCB), in this paper, our regret bound considers the worst-case scenario. Therefore, as long as the separateness assumption (Assumption 4.1) is satisfied, we do not impose additional assumptions on the data or task distribution. Meanwhile, the separateness assumption is applied because we need the training data of the scheduler to be non-degenerate, in order to derive the performance guarantee. Please also see lines 244-254 (main body) as well as Appendix Section C for the discussion on Assumption 4.1. For the meta-model, we first consider it to be an $L_{\mathcal{F}}$-layer fully-connected (FC) network (of width $m_{\mathcal{F}}$) with Gaussian initialization for the theoretical analysis (lines 237-239). In particular, we follow the settings in [1] for the Gaussian initialization of the weight matrices. For the weight matrix elements in the meta-model's first $(L_{\mathcal{F}}-1)$ layers, we draw each of them from the Gaussian distribution $\mathcal{N}(0, 2/m_{\mathcal{F}})$. Then, for the weight matrix elements of the last layer (the $L_{\mathcal{F}}$-th layer), we draw each of them from the Gaussian distribution $\mathcal{N}(0, 1)$.
Meanwhile, as we mentioned in lines 239-240 of the main body, our results can be generalized to other meta-model architectures (e.g., CNN and ResNet). For those architectures, we can also apply analogous Gaussian initialization procedures. Please see the parameter initialization details for these two architectures in Appendix Section B and Appendix Section C of [1]. To avoid possible confusion for the readers, we have added the above details to the Appendix. **Q2: What are the constraints or the policy of the batch size $B$? How will the performance vary with different settings of $B$?** For the regret bound, we do not have constraints on $B$. Instead, for the second term of the regret bound (RHS of Eq. 10), we see that the term $B$ is in the numerator. Since we are considering the worst-case scenario, with larger $B$, the bandit-trained meta-parameters ($\mathbf{\Theta}^{(K)}$) will be more likely to deviate from the optimal ones ($\mathbf{\Theta}^{(K), *}$), which may lead to a larger regret bound. Here, we include additional experiments with different batch sizes $B$, in comparison with ATS and the uniform sampling approach. This is because ATS is generally the second-best method. Please see Table 2 in the uploaded PDF file for detailed settings and results. Based on the results, we see that with larger $B$ values, the accuracy of BASS as well as the baselines generally improves, and BASS still maintains the best performance. **Q3: What is the bound of the regret? What is the distribution of arms? Does the performance change with different settings/assumptions of the regret/arms?** As we have discussed in the answer to Q1, since the data / task distributions are unknown, our regret bound considers the worst-case scenario. In this case, as long as the conditions in Section 4 are met, our derived regret bound can deal with various arm / task / data distributions.
**Q4: Experiment results for 1-shot settings?** First, we would like to recall that under the 5-shot settings, where information for task adaptation is relatively sufficient, our proposed BASS can achieve considerable improvements over the existing baselines, which shows the effectiveness of our proposed approach. Then, with the accuracy results and the corresponding standard deviation results, we would like to note that our proposed BASS achieves statistically comparable (or better) performance in comparison with the baselines. This is because under the 1-shot settings, regardless of the scheduling approach, the meta-learning models have relatively insufficient information for task adaptation, which leads to unsatisfactory performance and makes the 1-shot settings more challenging. Meanwhile, our improvements over the uniform sampling approach also support the effectiveness of our task scheduling strategy. In particular, to deal with the challenging 1-shot settings, we offer a practical solution in our case study (Subsec. 5.3, main body) by utilizing ensemble inference to improve the meta-model's generalization ability. In this way, our proposed BASS can achieve a more significant advantage over the baselines. **Q5: How about the performance on other datasets with different specifications?** Thank you for your suggestion. We agree that data set specifications (e.g., the image resolution) can considerably affect the model performance. Therefore, we include additional experiments on the "DomainNet" data set [Moment Matching for Multi-Source Domain Adaptation (ICCV 2019)], which has a higher resolution ($128\times128$) for the image data. Please see our general response for the detailed experiment settings (bullet point 6). For the experimental results, please see Table 5 in the uploaded PDF file.
Here, we see that with a higher image resolution of the "DomainNet" data set, BASS can still maintain the best performance compared with the baselines. **REFERENCE** [1] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "A Convergence Theory for Deep Learning via Over-Parameterization." arXiv preprint arXiv:1811.03962 (2018). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. My concerns are addressed properly. I raised my score from 5 to 6. --- Reply to Comment 1.1.1: Title: Thank Reviewer nu6A for the feedback Comment: We would like to thank Reviewer nu6A again for the comments and suggestions. We will definitely update the manuscript to include the discussion as well as the additional experiment results in the paper Appendix.
Rebuttal 1: Rebuttal: We would like to take the chance to thank all the reviewers for your constructive feedback and detailed comments on our work. Your suggestions will definitely make this paper more solid. Here, to better resolve the questions from the reviewers, we have included supplementary experiments for additional reference. **Please see the attached PDF file for details.** Our experiments include: 1. [Figure 1] We include the running time comparison with the adaptive scheduler ATS, because ATS generally achieves the best performance among the baselines. * We can see that BASS achieves a significant improvement in terms of running time, taking as little as 50\% of ATS's running time. * (Intuition) This improvement is because our proposed BASS only needs one round of the optimization process to update the meta-model and BASS. In contrast, from Algorithm 1 of the ATS paper [2], we see that ATS requires two optimization rounds: one to update the scheduler (lines 8-12, Algorithm 1 in [2]) with the temporary meta-model, and one to update the actual meta-model (lines 13-14, Algorithm 1 in [2]). (Please also refer to Figure 2 in our paper main body.) 2. [Table 1] We include experiments with different levels of skewness. * Here, we see that with lower skewness levels (the skewness magnitude decreases from Pattern 1 to Pattern 3), the accuracy of BASS as well as the baselines continues to improve, while BASS still maintains the best performance. 3. [Table 2] We include additional experiments with different batch sizes $B$, in comparison with ATS and the uniform sampling approach. * Here, we see that with larger $B$ values, the accuracy of BASS as well as the baselines generally improves, and BASS still maintains the best performance. 4. 
[Table 3] We include additional experiments with different levels of average pooling, such that after the average pooling, the dimensionality of the pooled vector representation will fall into $\\{20, 50, 100, 500\\}$. * Here, overly small dimensionality of the average-pooled vector representation (e.g., 20) can lead to sub-optimal performance of the BASS framework. Meanwhile, we see that setting the dimensionality to 50 can generally lead to good enough performance. 5. [Table 4] Meanwhile, we also include additional experimental results using MLP to map the original context into the lower dimensional space instead of using our proposed average pooling (Remark 3). * Here, we use the one-layer MLP with the ReLU activation to embed the original meta-parameters to the low-dimensional vector representations. We can see that the MLP-based method can indeed lead to some performance improvement. But in general, the performance difference between MLP-based embedding and the average-pooling vector representation is subtle. * We also note that the MLP-based mapping approach is considerably more time consuming compared with the average pooling approach, since we also need to train the additional embedding layer, which has a large number of trainable parameters. 6. [Table 5] We include additional experiments on the new "DomainNet" data set [1]. * Within the "real" domain, we filter 100 classes that have at least 600 images. In this way, with each class being a task with 600 images, we will have a total of 100 tasks. Compared with image data sets in our paper (Mini-ImageNet and CIFAR-100), we increase the image resolution of "DomainNet" by resizing its images to 128$\times$128 pixels. * Following the settings in our paper, we divide tasks into the portions of 64 : 16 : 20 that correspond to the training set, validation set and the test set respectively. For the few-shot settings, we formulate the problem to be 5-shot, 5-way / 7-way. 
We include uniform sampling and ATS as baselines, since they generally perform the best among baselines. * Here, we see that with a higher image resolution of the "DomainNet" data set, BASS can still maintain the best performance compared with the baselines. We will also update all these additional experiment results to the paper Appendix. **REFERENCE** [1] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In ICCV, pages 1406–1415, 2019. [2] Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, and Chelsea Finn. Meta-learning with an adaptive task scheduler. Advances in Neural Information Processing Systems, 34:7497–7509, 2021. Pdf: /pdf/daead9f8ea32a5433308ea49c9b57d0f71b32c8d.pdf
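The average-pooling compression in bullet points 4-5 can be sketched as follows. This is only our illustrative reading of the approach (the function name and the even-split bucketing are our assumptions, not the paper's exact implementation): a long flat vector of meta-parameters is divided into `dim` contiguous buckets and each bucket is averaged, so no extra trainable parameters are introduced — which is why it is cheaper than the one-layer MLP alternative in Table 4.

```python
def avg_pool_context(params, dim):
    """Compress a flat parameter vector into `dim` bucket averages (no trainable weights)."""
    n = len(params)
    pooled = []
    for i in range(dim):
        lo, hi = i * n // dim, (i + 1) * n // dim  # contiguous, near-even buckets
        chunk = params[lo:hi]
        pooled.append(sum(chunk) / len(chunk))
    return pooled
```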
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes to adaptively sample the meta-training tasks by optimizing the task scheduling strategy based on the status of the meta-model. The proposed method treats task scheduling as a contextual multi-arm bandit problem with a reward function balancing exploitation and exploration. The authors provide theoretical analysis of the regret bound of the proposed method and conduct experiments on real data sets to show its effectiveness. Strengths: **DISCLAIMER:** I have not checked the proof thoroughly and cannot verify the correctness of the theorems. * The presentation of the paper is clean and clear. * It seems to be novel to formulate curriculum learning as a contextual bandit in the context of meta-learning. Weaknesses: * The authors claim the proposed method can deal with the data scarcity problem at the early stage of meta-learning. However, I cannot find further elaboration anywhere in the paper, nor any experimental results supporting this claim. * Although the proposed method involves extra computational effort, a quantitative comparison of computation cost is not included in the experiment results. * If the proposed method outperforms the existing methods, especially when the data is noisy and skewed, the authors should include experiments with different levels of noise $\epsilon$ and different levels of skewness to show the trends of improvements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How does the K-round regret bound in Theorem 4.2 compare with existing work? * How does the performance of the proposed method compare with existing methods when the data is noise-free and unskewed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitation in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable questions and comments. Here, we will try our best to address the questions and concerns in the form of Q&A. Since we are unable to submit the improved manuscript based on the reviewers' comments, we will describe these modifications to the current manuscript instead. **To better answer the reviewer's questions, we also include additional experiments in the 1-page PDF file.** *If you have any additional questions or comments, please kindly let us know. Thank you!* **Q1: How is the K-round regret bound in Theorem 4.2 compared with the existing work?** To the best of our knowledge, this is the first work to incorporate bandit ideas for the task scheduling problem in meta-learning, and it is also the first work that introduces a performance guarantee for the task scheduling problem. Meanwhile, if we directly applied existing bandit methods to the task scheduling problem instead of our carefully designed modeling, we would (1) introduce additional assumptions for the sake of analysis (e.g., Lin-UCB [1] assumes the reward mapping function $h(\cdot)$ is linear), or (2) introduce additional terms into the regret bound (e.g., Neural-UCB [2] needs an additional $\tilde{d}$ in the regret bound, which is the effective dimension of the related NTK matrix). In our discussion in the main body (lines 266-269), we also show that the uniform sampling approach can lead to $\mathcal{O}(1)$ regret under the worst-case scenario, which is significantly worse compared with our derived regret bound (Eq. (10)). Please also refer to our discussion on the regret bound (lines 261-276 of the main body). **Q2: How is the performance of the proposed method compared with the existing method when the data is noise-free and unskewed?** Please refer to Table 2 (Subsec. B.1.1) in the Appendix for the experimental results when the data is noise-free and unskewed. 
We show that BASS can still achieve the best performance compared with the baselines. Please also see the results in Table 1 (Appendix) for the experimental results with different noise levels. Meanwhile, we also include experiments with different levels of skewness. Please see Table 1 in the uploaded PDF file for details. Here, we see that with lower skewness levels (the skewness level reduces from Setting 1 to Setting 3), the accuracy of BASS as well as the baselines continues to increase, and BASS still maintains the best performance. **Q3: The claim of the data scarcity problem?** In the abstract and the conclusion, we mention that "BASS can deal with the data scarcity problem at the early stage of meta-training, and plan for the future meta-training iterations". Here, we mean that, due to insufficient knowledge regarding the task and data distributions in the early stage of meta-training, existing greedy meta-task schedulers may lead to sub-optimal meta-models, since they tend to make scheduling decisions solely based on limited existing knowledge, without performing exploration for potential benefits. Alternatively, with the exploration strategy, our method is less likely to be significantly affected by the insufficient-knowledge issue (e.g., the skewed data distribution example). That is why we need exploration for task scheduling, especially during the early stage of meta-training when the learner's knowledge is limited. In the experiments, we have shown that BASS can achieve the same accuracy as the baselines using a smaller number of training rounds (Figure 4, main body). This fact can be interpreted as support for the exploitation-exploration strategy used in our approach, which is able to maximize the long-term benefit. 
Since this narrative may confuse the reviewer and the future readers, we have updated the abstract as well as the conclusion sections for better presentation, by mentioning the importance of exploration at the early stage of meta-training, due to the insufficient knowledge with respect to the task and data distributions. **REFERENCE** [1] Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In AISTATS, pages 208–214, 2011. [2] Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural contextual bandits with ucb-based exploration, 2020. --- Rebuttal Comment 1.1: Title: RE: Comment: Thanks for the clarification and added experiments. I raised my rating to 6. --- Reply to Comment 1.1.1: Title: Thank Reviewer XGJX for the discussion Comment: We would like to thank the reviewer XGJX again for the valuable discussion, and we will definitely add the clarification as well as the additional experiment results to the paper Appendix.
Optimal Extragradient-Based Algorithms for Stochastic Variational Inequalities with Separable Structure
Accept (poster)
Summary: This paper studies the stochastic monotone variational inequality problem. The authors propose the Accelerated Gradient - Extragradient (AG-EG) algorithm, which is shown to have the optimal convergence rates for the strongly monotone VI problem. Strengths: This paper studies the important problem of stochastic VIs; the algorithm, especially the idea of restarting, is interesting. Weaknesses: Dependence on the Diameter of the Set: A major differentiating point of this paper, as claimed by the authors in Remark 2.4, is that the convergence rate bounds have no dependence on the constraint set. However, there is a dependence on the initial distance of the iterate to the solution. Showing that the iterates are bounded is essentially the only requirement to convert the results of papers like [Chen et al. 2017] to the unconstrained setting. Can the authors describe the main difficulties and technical novelties involved in showing this? Rate dependence on the noise term: As stated in the appendix, the optimal dependence on the noise term has been achieved by a multistage algorithm in [Fallah et al. 2020], where the dependence on the noise term is $\mathcal{O}(\sigma^2/n)$, whereas this paper seems to have a dependence of the form $\mathcal{O}(\sigma^2/\sqrt{n})$. Am I missing something? Is the optimality of the proposed algorithm only in terms of the first term, where the dependence on the problem parameters $L, \mu$ etc. is optimal? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We address your comments and questions as follows. --- **Q1**: Showing that the iterates are bounded is essentially the only requirement to convert the results of papers like Chen et al. [2017] to the unconstrained setting. Can the authors describe the main difficulties and technical novelties involved in showing this? **A1**: We appreciate your question. While the optimal rate for monotone variational inequalities (VIs) has been achieved by Chen et al. [2017], achieving the optimal rate for strongly-monotone VIs remains largely unsolved. The boundedness of iterates has been one of the main barriers to proving the optimal rate for strongly-monotone VIs. The standard gradient descent-ascent method is known to *diverge* for many convex-concave problems [1]. Moreover, in the noisy setting, *even extragradient may diverge* [2]. Thus it is critical to show that the iterates are bounded. It is a challenge as many seminal works on stochastic VIs only consider bounded domains [3,4]. In the VI literature, showing the boundedness of the iterates is often the crux of the proof (see for instance, [5]). The main proof technique used in our work is the “bootstrapping” argument shown in step 3 of Sec. D.3. By summing up Eq. (33), a rearrangement of the terms implies a relation of $\mathbb{E}||z_t - z^*||^2$ with a summation of itself, the result follows by the bootstrapping argument. In sharp contrast, Chen et al. [2017] adopt "the enlargement of a maximal monotone operator" technique to deal with unbounded domains, which seems complicated and unnecessary for our purpose. Our bootstrapping method leads to a simpler and more concise analysis. Moreover, Chen et al. [2017] only considered the monotone VI setting rather than the strongly-monotone VI setting. 
In addition, we also consider specific instances of VI problems (i.e., bilinear games, bilinearly coupled SC-SC) with lower-bound matching results by utilizing scaling reduction techniques. These results were not obtained by Chen et al. [2017] either. [1] Mescheder, Lars, Andreas Geiger, and Sebastian Nowozin. "Which training methods for GANs do actually converge?." International conference on machine learning. PMLR, 2018. [2] Chavdarova, Tatjana, et al. "Reducing noise in GAN training with variance reduced extragradient." Advances in Neural Information Processing Systems 32 (2019). [3] Nemirovski, Arkadi, et al. "Robust stochastic approximation approach to stochastic programming." SIAM Journal on optimization 19.4 (2009): 1574-1609. [4] Juditsky, Anatoli, Arkadi Nemirovski, and Claire Tauvel. "Solving variational inequalities with stochastic mirror-prox algorithm." Stochastic Systems 1.1 (2011): 17-58. [5] Gorbunov, Eduard, et al. "Clipped stochastic methods for variational inequalities with heavy-tailed noise." NeurIPS 2022. --- **Q2**: The optimal dependence on the noise term has been achieved by a multistage algorithm in Fallah et al. [2020], where the dependence on the noise term is $\mathcal{O}\left(\sigma^2 / n\right)$, whereas this paper seems to have a dependence of the form $\mathcal{O}\left(\sigma^2 / \sqrt{n}\right)$. Is the optimality of the proposed algorithm only in terms of the first term, where the dependence on the problem parameters $L, \mu$ are optimal? **A2**: We believe there is a misunderstanding by the reviewer. This is due to the different definitions of the $\varepsilon$-optimal point. In our paper, the $\varepsilon$-optimal point is defined by $||z - z^*|| \leq \varepsilon$. In contrast, Fallah et al. [2020] define the optimal point by $||z - z^*||^2 \leq \varepsilon$. Thus, our $\varepsilon^2$ is equivalent to their $\varepsilon$. 
Therefore, the complexity term that depends on the noise variance $\sigma^2$ in Corollary 2.8, i.e., $\frac{\sigma^2}{\mu^2 \varepsilon^2}$ should be translated into $\frac{\sigma^2}{\mu^2 n}$ rather than $\frac{\sigma^2}{\mu^2 \sqrt{n}}$. This also applies to Corollaries 2.9, 3.1, and 3.3. To sum up, the optimality of our proposed algorithm is not only in terms of the first term but also in the second noise term. We apologize for the confusion caused by the different definition of $\varepsilon$-optimal point, and will make it clear in the revision. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: I thank the authors for their rebuttal. I have increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for considering our rebuttal and increasing the score. Your feedback is greatly appreciated.
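The unit conversion in A2 above can be written out explicitly (this is just a restatement of the rebuttal's argument, not new analysis):

```latex
% Our criterion vs. Fallah et al.'s criterion:
\|z - z^*\| \le \varepsilon
\quad\Longleftrightarrow\quad
\|z - z^*\|^2 \le \varepsilon^2 =: \varepsilon'.

% The noise term in Corollary 2.8 gives the oracle complexity
n \asymp \frac{\sigma^2}{\mu^2 \varepsilon^2},

% and solving for the error after n stochastic oracle calls yields
\mathbb{E}\,\|z_n - z^*\|^2 = \mathcal{O}\!\left(\frac{\sigma^2}{\mu^2 n}\right),
```

i.e., the same $\mathcal{O}(\sigma^2/n)$ noise dependence as Fallah et al. [2020] once both results are expressed under the squared-distance criterion.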
Summary: The author(s) studied the extragradient algorithm for separable strongly monotone VI problems. It was shown that the new analysis can achieve optimal error bounds in various settings. Strengths: - The new analysis gives optimal error bounds in various settings, which is a decent contribution to the field. - The paper is well-written and easy to follow in general. Weaknesses: - I encourage the author(s) to include a table comparing the error bounds and assumptions used with prior works. It seems that the error bound obtained by the author(s) is not entirely new; existing analyses can also achieve the same error bound under other conditions. - There is no related work section, which makes it hard for readers to understand the position of this work in the literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I suggest the author(s) include a table of error bounds and assumptions to compare with existing works. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not find any negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your comments and questions as follows. --- **Q1**: I suggest the author(s) to include a table of error bounds and assumptions to compare with existing works. **A1**: Thank you for your suggestion. We have added a revised table comparing error bounds and assumptions with existing works in our uploaded pdf file. --- **Q2**: There is no related work section, which makes it hard for readers to understand the position of this work in the literature. **A2**: Due to the space limit in the submission, we have deferred the related work section to the supplementary material. We will restructure our content to accommodate a comprehensive related work section in the main content. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: Thanks for adding the table, the contribution becomes more clear to me (as an non-expert in this field). I am keeping my score unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback! We will be sure to integrate the table and all other revisions as promised into the final version.
Summary: The paper studies variational inequalities with separable structures (sum of the gradient of a strongly convex function and a monotone operator). This class subsumes bilinear coupling SC-SC minimax optimization and bilinear games. The authors propose an extragradient algorithm with acceleration by shifting the convexity part from the gradient to the operator. This algorithm matches the lower bounds for strongly monotone VIs (up to some log factors). When specialized in bilinear problems, the algorithm can be coupled with scheduled restarting to match the existing lower bounds for bilinear SC-SC and bilinear games. Strengths: - The proposed algorithm matches several lower bounds at the same time. - The algorithms do not require the bounded domain assumption as in some of the previous works. - The technical details of the paper are excellent and easy to follow. I checked some of the proofs in the appendix and did not find any issues. - The discussion provided in the paper is generally well written. Weaknesses: - Scheduled restart may not always be desirable, although this is not necessarily a weak point. For bilinear problems, the algorithm does need scheduled restart to achieve the optimality. The claim in the conclusion that all three lower bounds are matched in one algorithm is therefore not really accurate. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. In Jordan et al (2023) the separable structure means sum of the gradient of a general convex function and a strongly monotone operator. Could the authors discuss the difference between this and the one considered in the paper? How would shifting the convexity technique change? 2. Some existing papers talk about optimal bounds with the attention to the log factors. I am not sure to what extent the authors pointed this out in the related work and the paper. Could the authors discuss? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discussed some limitations of the proposed algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your comments and questions as follows. --- **Q1**: For bilinear problems, the algorithm does need scheduled restart to achieve optimality. The claim in the conclusion that all three lower bounds are matched in one algorithm is therefore not really accurate. **A1**: We appreciate your feedback and the opportunity to clarify our claim. First, our algorithm with scheduled restarting can achieve optimal results for general variational inequalities, bilinear games, and bilinearly coupled strongly-convex-strongly-concave (SC-SC) settings. Please refer to the table in the uploaded pdf file for the corresponding upper and lower bounds. So our claim in the conclusion that all three lower bounds are matched in one algorithm is valid. In addition, we agree that the scheduled restarting technique may not be ideal. For bilinear game problems, a technique such as scheduled restarting is essential for matching the lower bound. Alternative approaches exist, such as the work by Azizian et al. [2020b], which uses EG with momentum. However, the accelerated rate in their work is limited to scenarios with large condition numbers. It remains an open question whether a technique can simultaneously accelerate the bilinear part and the minimization part without scheduled restarting. This question is left for future work. --- **Q2**: In Jordan et al. [2023] the separable structure means sum of the gradient of a general convex function and a strongly monotone operator. Could the authors discuss the difference between this and the one considered in the paper? **A2**: Thanks for your question. Jordan et al. [2023] use "separable structure" to refer to the sum of the gradient of a general convex function and a strongly monotone operator, while we use it to refer to the sum of the gradient of a strongly convex function and a general monotone operator. 
These two settings are equivalent because we can shift the “strong convexity'' component from the strongly convex function to the monotone operator. --- **Q3**: Some existing papers talk about optimal bounds with the attention to the log factors. **A3**: Our upper bounds and existing lower bounds all contain log factors. For example, in Corollary 2.8, Eq. (17), Corollary 3.1, and Corollary 3.3, the complexity is presented with log factors being explicitly considered. In Table 1 in our original supplementary material, and the table in our uploaded pdf file, we have omitted dependency on log factors due to layout reasons. We have added discussions on the log factors in the caption of our table in the uploaded pdf file. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thoughtful rebuttal. I maintain my current score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your positive feedback. We're glad to know that our response has effectively addressed your questions.
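The equivalence described in A2 above amounts to a one-line identity (standard convex-analysis bookkeeping, stated here for the reader's convenience):

```latex
\underbrace{\nabla f(z)}_{f \ \mu\text{-strongly convex}}
+ \underbrace{H(z)}_{\text{monotone}}
\;=\;
\underbrace{\nabla\!\Big(f(z) - \tfrac{\mu}{2}\|z\|^2\Big)}_{\text{gradient of a convex function}}
+ \underbrace{\Big(H(z) + \mu z\Big)}_{\mu\text{-strongly monotone operator}}
```

Since $\nabla\big(\tfrac{\mu}{2}\|z\|^2\big) = \mu z$, subtracting the quadratic from the function and adding $\mu z$ to the operator leaves the sum unchanged, so results stated for either decomposition transfer to the other.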
Summary: The work presents a stochastic accelerated gradient-extragradient (AG-EG) algorithm for strongly monotone variational inequalities (VIs) that is designed by combining extragradient and Nesterov acceleration. The major contribution of the work is extending the formulation to the case when the constraint set is convex but possibly unbounded, while still matching the best-known convergence results in the literature. Two variants of the AG-EG algorithm are presented by the authors: a direct approach and another with scheduled restarting. With AG-EG Scheduled Restarting, the convergence rates also match the lower bounds for the special cases of VIs arising in bilinearly coupled strongly-convex-strongly-concave saddle point problems and bilinear games. Strengths: The optimal convergence rate is established for strongly monotone VIs. The paper is written well for most parts, except for a few issues which are listed in the weaknesses section. Weaknesses: - The notation in Algorithm 1 is confusing for the update equations. - The work does not demonstrate the efficacy of the approach through empirical studies on an application, e.g., reinforcement learning, regularized empirical risk minimization, quadratic games, etc. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Section 2 a simplified deterministic setting is shown, although the final results are presented for the stochastic case. This includes Assumption 1 as well. It leads to confusion about the assumptions: do they hold in expectation or for each random realization of the function? - The authors may add a table comparing the complexity results with the SOTA algorithms for solving stochastic VI problems for better readability and presentation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your comments and questions as follows. --- **Q1**: The notation in algorithm 1 is confusing for the update equations **A1**: In Algorithm 1, we introduce notation to distinguish different points in the AGEG method. Specifically, $z_{t-\frac{1}{2}}$ refers to the extrapolated point, while $z^{\text{md}}$ and $z^{\text{ag}}$ denote the "middle" and "aggregated" points, respectively. In addition, we use notations $\zeta_{t-\frac{1}{2}}$, $\zeta_{t}$, and $\xi_{t - \frac{1}{2}}$ to account for noise during various steps. With our notation, $z_x$ depends on $\zeta_{y\leq x}$ and $\xi_{y \leq x}$ but remains independent of $\zeta_{y > x}$ and $\xi_{y > x}$. We provide definitions of $f(x; \xi), h(x, y, \zeta)$, and $g(y; \xi)$ in Sec. 1 as stochastic oracles of $F(x), H(x, y)$, and $G(y)$, respectively. --- **Q2**: The work does not demonstrate the efficacy of the approach through empirical studies on an application **A2**: During the rebuttal period, we conducted a comparison between our stochastic AGEG algorithms (with restarting) and stochastic extragradient (SEG) algorithms (with restarting) on quadratic games. Our experiments focused on synthetic quadratic games with varying parameter settings and noise scales. Detailed results have been provided in our uploaded pdf file. We can see that stochastic AGEG with restarting outperforms SEG with restarting by a large margin. Due to the time limit, we will add more empirical results in our final version. --- **Q3**: Section 2 presents a simplified deterministic setting, although the final results are given for the stochastic case. This included the Assumption 1 as well. It leads to confusion about the assumptions, is it in expectation or holds for each random realization of function? **A3**: Throughout this paper, our notation $\mathcal{F}$ and $\mathcal{H}$ represents functions in their expected forms. 
Consequently, Assumption 2.1 holds in expectation. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying the notation and the additional empirical results. I am keeping my score unchanged.
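The quadratic-game experiment described in A2 above compares stochastic AGEG against stochastic EG. As a self-contained illustration of why extragradient-type updates matter in this problem class, here is a minimal deterministic toy sketch on the scalar bilinear game $\min_x \max_y \; xy$ (this script is ours, not the authors' experiment code): plain gradient descent-ascent diverges, while the extra half-step of extragradient makes the iterates contract toward the saddle point.

```python
def gda_step(x, y, eta):
    """Gradient descent-ascent on f(x, y) = x * y: known to diverge."""
    return x - eta * y, y + eta * x

def eg_step(x, y, eta):
    """Extragradient: evaluate the operator at an extrapolated half-step."""
    xh, yh = x - eta * y, y + eta * x        # half step
    return x - eta * yh, y + eta * xh        # full step uses F at (xh, yh)

def run(step, iters=1000, eta=0.1):
    x, y = 1.0, 1.0
    for _ in range(iters):
        x, y = step(x, y, eta)
    return (x * x + y * y) ** 0.5            # distance to the saddle (0, 0)
```

For this game one can check that GDA multiplies the squared distance by $1 + \eta^2 > 1$ each step, whereas EG multiplies it by $1 - \eta^2 + \eta^4 < 1$, so the two trajectories separate quickly.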
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your time and constructive comments. Based on the feedback of Reviewer iB8j and Reviewer oFAu, we have added a table comparing the complexity results and assumptions with prior works. In addition, to address the feedback of Reviewer iB8j on empirical studies, we conducted an experiment examining the performance of stochastic AGEG vs. stochastic EG. Please find the table and figure in the uploaded PDF file. For all other comments and feedback, we have provided point-by-point responses in the separate rebuttals. Pdf: /pdf/d37eb5f532c62b7acac7006d80e4fa1071936ff1.pdf
NeurIPS_2023_submissions_huggingface
2023
Revisiting Logistic-softmax Likelihood in Bayesian Meta-Learning for Few-Shot Classification
Accept (poster)
Summary: This paper tackles the mathematical difficulties that the softmax function poses for classification tasks in Bayesian meta-learning. The previously proposed logistic-softmax function, an alternative that can be optimized tractably, tends to exhibit an inherent lack of confidence in prediction. To solve this problem, the authors propose to add a temperature scaling parameter to the function. This simple change not only solves the confidence problem when the temperature is set below 1, but also enjoys both theoretical and empirical advantages, as verified in the paper. For efficient optimization of the otherwise intractable posterior, the authors additionally use a data augmentation technique to derive a fully analytical mean-field inference method for the Bayesian meta-learning model. Strengths: + While simple, the revised logistic-softmax function indeed enjoys better theoretical and empirical properties, all of which are verified in the paper with proofs or experiments. + The proposed logistic-softmax function with temperature has the potential to be applied to broader research areas such as multi-class classification. + The writing is clear, making the paper easy to follow. Weaknesses: - When we focus solely on few-shot classification, I have to say that all Bayesian meta-learning algorithms are not comparable to even very simple algorithms like ProtoNet or a CE baseline (I mean, if well-tuned, not the performance in Table 1) with the same Conv-4 architecture. Also, as standard algorithms now use ResNet-12 as the backbone while all Bayesian meta-learning algorithms still use a ConvNet, the performance is not comparable to that reported in most recent papers. This raises the doubt of whether these Bayesian meta-learning algorithms would still perform well with a ResNet-12 backbone. If the reason not to use ResNet-12 is the computational burden, then the biggest problem may be how to make Bayesian inference more tractable. 
Also, another problem of interest is to figure out why Bayesian meta-learning, with its complicated training steps, cannot perform well on few-shot learning even when equipped with nice theoretical properties. Since this paper does not answer these questions, I think its value is restricted. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Theorem 3.3, $a$ is said to be the mean function with no restriction, so the mean at different sample points can differ. However, in line 444 of the appendix, all samples have the same mean; can the authors explain this? - How does your method compare with non-Gaussian-process models like the neural process [1]? It seems that the neural process is more flexible and efficient than the Gaussian process. [1] Garnelo et al. Neural processes. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations, which I found reasonable. I suggest the authors think more about the role of the Gaussian process in Bayesian meta-learning, as well as the role of Bayesian meta-learning in few-shot classification. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
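For reference, the temperature-scaled modification discussed in this review plausibly takes the following form (a sketch inferred from the summary, with $\sigma$ the logistic function and $\tau$ the temperature; the paper's exact parameterization may differ):

```latex
% Original logistic-softmax (conditionally conjugate, but underconfident):
p(y = k \mid \mathbf{f}) = \frac{\sigma(f_k)}{\sum_{c=1}^{C} \sigma(f_c)}
% With temperature scaling applied to the logits:
p_{\tau}(y = k \mid \mathbf{f}) = \frac{\sigma(f_k / \tau)}{\sum_{c=1}^{C} \sigma(f_c / \tau)}
% As \tau \to 0^{+}, confidence increases; setting \tau < 1 counters the
% inherent lack of confidence noted in the summary.
```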
Rebuttal 1: Rebuttal: Thank you for the valuable advice. As an overview, our paper mainly focuses on two themes: 1. theoretical analysis of the logistic-softmax function; 2. application of the logistic-softmax function to Bayesian meta-learning. We admit that the second part mainly follows the research line of the Bayesian meta-learning community while paying less attention to the larger context of few-shot classification. Although we do not offer a comprehensive remedy for the problems faced by Bayesian methods in FSC, we believe our theoretical analysis of logistic-softmax and the derivation of the mean-field approximation provide valuable insights and tools for the efforts of the Bayesian methods community. In the revised version, we will add more discussion of the role of Gaussian processes in Bayesian meta-learning, as well as the role of Bayesian meta-learning in FSC, to the Limitations section. Now we answer your questions below. > Q1. When we just focus on few-shot classification, I have to say that all Bayesian meta-learning algorithms are not comparable to even very simple algorithms like ProtoNet or a CE baseline (I mean, if well-tuned, not the performance in Table 1) with the same Conv-4 architecture. Thank you for the comment. In Table 1, the result for ProtoNet is the state-of-the-art performance under the same training protocol in the existing literature. Therefore, we may not fully understand the meaning of 'well-tuned' in your concern and would love to engage in further discussion. In addition, though some Bayesian methods may not be competitive in terms of accuracy, we would like to point out that the major advantage of Bayesian methods is their capability for uncertainty calibration in a probabilistic framework. This is often desirable in fields relying on risk measurement, especially in the context of few-shot classification. > Q2. 
Also, as standard algorithms now use ResNet-12 as the backbone while all Bayesian meta-learning algorithms still use a ConvNet, the performance is not comparable to that reported in most recent papers. This raises the doubt of whether these Bayesian meta-learning algorithms would still perform well with a ResNet-12 backbone. Thank you for the question. We would like to emphasize that our work mainly focuses on the theoretical properties of the modified logistic-softmax. Its application to Bayesian meta-learning is largely an attempt to verify our theoretical analysis with empirical results. Therefore, we use the most common backbone structure and training protocol inherited from Bayesian meta-learning. Since OVE and DKT (and other Bayesian meta-learning methods) differ from our method essentially only in their likelihood functions, we use the same ConvNet setup as theirs for a better comparison of the logistic-softmax function against others. Besides, we refer to the DKT paper [1], which uses both ResNet-10 and a ConvNet as the backbone and reaches competitive results with both. As a similar Bayesian meta-learning method, we believe our method can extend to different neural network structures as well. [1] Patacchiola, Massimiliano, et al. "Bayesian meta-learning for the few-shot setting via deep kernels." Advances in Neural Information Processing Systems 33 (2020) > Q3. If the reason not to use ResNet-12 is the computational burden, then the biggest problem may be how to make Bayesian inference more tractable. Thank you for raising this question. We agree that the biggest problem is to make Bayesian inference more tractable. In the context of Bayesian meta-learning, the main challenge is to compute the integral for posterior inference. Our derivation of the mean-field approximation aims to resolve this problem, as we provide a closed-form expression for the variational parameters. 
In practice, our inference method is more efficient than the Gibbs sampling one while reaching comparable classification accuracy. > Q4. Also, another problem of interest is to figure out why Bayesian meta-learning with complicated training steps cannot perform well on few-shot learning, even if equipped with nice theoretical properties. Since this paper does not answer these questions, I think the value of this paper is restricted. We acknowledge that a theoretical explanation for the sub-optimal results of Bayesian methods in meta-learning is not explored in our paper. However, we would like to emphasize that the main focus of this paper is the modified logistic-softmax likelihood, whose good theoretical properties invite potential applicability in various domains beyond few-shot learning. Nonetheless, we agree that this is a thrilling and urgent topic, and we would love to explore it in future work. > Q5. In theorem 3.3, $a$ is said to be the mean function with no restriction, so the mean of different sample points can be different. However, in line 444 in the appendix, all different samples have the same mean, can the author explain this? We really appreciate the reviewer pointing out this typo and thank them for the chance to clarify it here. The mean function does not have to be constant w.r.t. different sample points. We have proofread the proof of Thm. 3.3, and our analysis remains correct when the mean function is not constant. > Q6. How does your method compare with non-Gaussian processes like neural process [1]? It seems that neural process is more flexible and efficient than Gaussian process. Thank you for reminding us. The neural process is a promising and efficient method, and we will add some discussion on the extension to non-Gaussian processes, such as the neural process, in the revised paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. 
I'd like to remind the authors that even in the original paper of ProtoNet, the reported accuracy is much better than that in Table 1 of this paper. I maintain the score. --- Reply to Comment 1.1.1: Comment: Thank you for responding to us. In terms of your concern, we first note that in the original ProtoNet paper, the accuracy result on CUB is in the 50-way, zero-shot setting, which is different from ours. Therefore, the accuracy reported in the original ProtoNet paper is not discussed in our paper. Moreover, we also notice that ProtoNet and OVE [1] have the same first author, and OVE acknowledges the ProtoNet result we report on the 1-shot 5-way and 5-shot 5-way CUB problems. Finally, the ProtoNet result we report on CUB with 1-shot and 5-shot was first provided by the DKT paper [2] and has been adopted by much of the literature [1][3][4] ever since. All results, as the DKT paper puts it, "are trained from scratch with the same backbone and learning schedule". Therefore, we believe the ProtoNet result in Table 1 is rigorous in our setting. However, we do feel the need to revise our paper to point out this research line in the experiment part. Thank you for reminding us! [1] Snell, Jake, and Richard Zemel. "Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes." International Conference on Learning Representations. 2020. [2] Patacchiola, Massimiliano, et al. "Bayesian meta-learning for the few-shot setting via deep kernels." Advances in Neural Information Processing Systems 33 (2020): 16108-16118. [3] Wang, Ze, et al. "Learning to learn dense gaussian processes for few-shot learning." Advances in Neural Information Processing Systems 34 (2021): 13230-13241. [4] Sendera, Marcin, et al. "Hypershot: Few-shot learning by kernel hypernetworks." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.
Summary: The paper revisits the design of the logistic-softmax function in the context of classification in Bayesian machine learning. In particular, the paper shows that a logistic-softmax function with a temperature can be more expressive than the conventional softmax function. However, due to the intrinsic nature of the logistic-softmax function, the inference might be non-conjugate, resulting in an intractable solution. To mitigate this issue, the paper adopts the data augmentation technique proposed in the original logistic-softmax paper and proposes a mean-field variational inference as an approximation to make the inference more efficient. The paper demonstrates the capability of the logistic-softmax with temperature in the context of few-shot meta-learning and shows that approaches integrating this method can achieve comparable performance on two classification benchmarks: CUB-200-2011 and mini-ImageNet. Strengths: - The paper has carried out a thorough investigation of the properties of the logistic-softmax function with a temperature parameter. In particular, the paper shows that the logistic-softmax function with a temperature can easily model a one-hot vector or a uniform one (Theorem 3.1), or that it can converge to a softmax function under certain conditions (Theorem 3.2). In addition, the logistic-softmax function of interest can model a richer family of distribution functions (Theorem 3.3). - The explanation of the paper is clear with easy-to-follow intuition. The formulations are also presented clearly, which increases the paper's clarity. Weaknesses: The weakness of the paper might lie in the applications of the logistic-softmax function. Currently, the paper targets meta-learning, but to me, the paper is about the logistic-softmax function, and meta-learning is merely an application used for demonstration. Since the authors explicitly target meta-learning, I cannot ask for other explorations. 
However, the paper might be strengthened further if it included other applications of such a function for classification with GPs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors discuss further some potential applications of the logistic-softmax function of interest (besides the meta-learning application mentioned in the paper)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - This is an incremental improvement over the original paper on the logistic-softmax function. - As mentioned in section 7, the current analysis is carried out in the context of Bayesian meta-learning. It might be worth investigating other settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your advice on this paper. We answer your questions below: > Q1. Currently, the paper targets meta-learning, but to me, the paper is about the logistic-softmax function, and meta-learning is merely an application used for demonstration. However, the paper might be strengthened further if it included other applications of such a function for classification with GPs. We agree that logistic-softmax is the main focus of this paper. Although meta-learning is one of its typical applications, we acknowledge that logistic-softmax could be employed in other domains. In future research, we are committed to exploring additional applications of logistic-softmax within and beyond GP classification tasks. > Q2. Could the authors discuss further some potential applications using the logistic-softmax function of interest (besides meta-learning mentioned in the paper)? Thank you for raising this question. We believe our logistic-softmax function has the potential to replace the softmax function in various domains. To begin with, our function can be applied to Gaussian process classification tasks, including class-imbalanced scenarios [1], active learning [2], and time-series data analysis [3]. In these cases, logistic-softmax brings the desired conditional conjugacy to make inference tractable and provides more flexibility in data modeling than softmax, as indicated in our paper. Moreover, the logistic-softmax function can be a great choice in modern Bayesian methods, such as Bayesian neural networks and neural network Gaussian processes, although further adaptation is needed. Furthermore, our logistic-softmax function might be capable of replacing softmax beyond the Bayesian domain, since we prove its flexibility over the softmax function. 
For example, as the logistic-softmax function captures positive signals for multiple classes, it may have prospective advantages in scenarios like multi-label classification [4] and multi-label contrastive learning [5], ushering in new paradigms thanks to its unique property. [1] Ye, Changkun, et al. "Efficient Gaussian Process Model on Class-Imbalanced Datasets for Generalized Zero-Shot Learning." 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. [2] Zhao, Guang, et al. "Efficient active learning for Gaussian process classification by error reduction." Advances in Neural Information Processing Systems 34 (2021) [3] Constantin, Alexandre, Mathieu Fauvel, and Stéphane Girard. "Mixture of multivariate gaussian processes for classification of irregularly sampled satellite image time-series." Statistics and Computing 32.5 (2022) [4] Lanchantin, Jack, et al. "General multi-label image classification with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [5] Zhang, Shu, et al. "Use all the labels: A hierarchical multi-label contrastive learning framework." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. > Q3. This is an incremental improvement from the original paper of the logistic-softmax function. We acknowledge that our work is built upon the original paper on logistic-softmax, but we would like to emphasize that the main theoretical findings of our research have not been explored in the existing literature to the best of our knowledge. Furthermore, the application of logistic-softmax in Bayesian meta-learning with mean-field posterior inference is a novel contribution of our work. Through empirical evaluation, we have achieved state-of-the-art performance on several benchmarks, providing empirical validation for our theoretical analysis. 
While we admit the modification is simple, we believe our theoretical and methodological findings support the significance and relevance of our research. --- Rebuttal Comment 1.1: Title: Discussion Comment: Thank you, the authors, for addressing my concerns. --- Reply to Comment 1.1.1: Title: Thank you for your quick response Comment: Thank you for your quick response. With all your concerns now resolved, would you be willing to consider increasing the score?
Summary: This paper proposes to modify the logistic-softmax likelihood by including a temperature coefficient in GP-based meta-learning for few-shot classification. This is motivated by the observation that predictions made by the logistic-softmax likelihood lack confidence. Theoretically, they demonstrate that the temperature can be used to control the confidence of logistic-softmax. Furthermore, it is proved that the softmax likelihood is a special case of logistic-softmax. They also propose to use mean-field variational inference to approximate the posterior, which is computationally more efficient than the typically adopted Gibbs sampler. It is demonstrated that their method achieves superior performance in uncertainty quantification. However, the performance improvements measured by accuracy are not as impressive as those in uncertainty quantification. Strengths: The problem is well motivated, and all the required theoretical derivations have been included. New theoretical findings are presented. Weaknesses: The idea of including temperature is not novel, but its application to a new domain is novel. Including one more recent method for few-shot classification, referenced in my questions, could improve the benchmark. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Q1. I understand that the typical benchmark for few-shot classification uses 5 classes. Have you seen improvements with your method when there are more than 5 classes? Like 10 or 20? (Answering this question won’t impact my decision negatively.) Q2. Can you also include https://arxiv.org/pdf/2101.06395.pdf method in your benchmark? Q3. I understand that the temperature has been tuned for your experiments, but it would be nice to see the variation in accuracy and uncertainty estimation as the temperature changes. Can you demonstrate this relationship for a subset of your experiments? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your advice on this paper. We answer your questions below: > Q1. I understand that the typical benchmark for few-shot classification uses 5 classes. Have you seen improvements with your method when there are more than 5 classes? Like 10 or 20? (Answering this question won’t impact my decision negatively.) We understand your concern about our method's effectiveness on 10 or 20-way few-shot problems. We acknowledge that we have not run experiments in those settings because most benchmarks in Bayesian meta-learning do not include results for 10 or 20-way few-shot problems. In future research, we will investigate the potential impact of the choice of class number, but we believe more classes will not fundamentally change the effect of our current method. > Q2. Can you also include https://arxiv.org/pdf/2101.06395.pdf method in your benchmark? The referenced paper proposes a novel idea for few-shot learning by calibrating the data distribution. Although the method brings promising results on similar benchmarks, its framework is somewhat different from the Bayesian meta-learning methods considered in this paper. In fact, the calibration method is compatible with our proposed method, since we essentially introduce a Bayesian framework to train classifiers, and the calibration method works with arbitrary classifiers. However, we would love to add some discussion of the referenced paper to give a more comprehensive view of typical few-shot classification methods. > Q3. I understand that the temperature has been tuned for your experiments, but it would be nice to see the variation in the accuracy and uncertainty estimation as the temperature changes. Can you demonstrate this relationship for a subset of your experiments? Thank you for the question. We have added some results on the CUB dataset with different temperature parameters. We hope the additional empirical evidence resolves your concern. 
| Temperature | 0.2 | 0.5 | 0.75 | 1 | 1.5 | | ----------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | | CUB 1 shot | 65.76 $\pm$ 0.40 | 65.16 $\pm$ 0.28 | 64.02 $\pm$ 0.29 | 60.85 $\pm$ 0.38 | 59.43 $\pm$ 0.25 | | CUB 5 shot | 79.10 $\pm$ 0.33 | 78.48 $\pm$ 0.18 | 77.20 $\pm$ 0.13 | 75.98 $\pm$ 0.33 | 72.13 $\pm$ 0.20 | --- Rebuttal Comment 1.1: Comment: Thanks for your response. All my questions have been addressed. I will maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you for your quick response Comment: Thank you for your quick response. With all your concerns now resolved, would you be willing to consider increasing the score?
Summary: In this work, the logistic-softmax likelihood is redesigned, allowing control of the a priori confidence level through a temperature parameter. The modified logistic-softmax is shown to encompass softmax as a special case and induces a larger family of data distributions. By integrating this modified likelihood into a deep kernel-based Gaussian process meta-learning framework with data augmentation, well-calibrated uncertainty estimates are achieved in experiments, and competitive results are obtained on standard benchmark datasets. Post rebuttal: I have read the authors' rebuttal and I appreciate the authors' effort in addressing my concerns. Strengths: +The logistic-softmax function with temperature is a nice idea and has the potential to be used in multiple domains. +The theoretical analysis of the logistic-softmax likelihood is solid. +Some promising results are presented. +The paper is well written and easy to follow. Weaknesses: -The proposed modified logistic-softmax function with temperature could be fundamental for different machine learning problems; it is not clear why it is specifically used for Bayesian meta-learning for few-shot classification. It is suggested to test it on simpler tasks first before applying it to tasks like Bayesian meta-learning. -The proposed method performs marginally worse than the state of the art in maximum calibration error on a few benchmark datasets. It is not clear why this happens. Theoretically, an adaptive temperature is expected to give better performance on all tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Have you considered other forms of the temperature in the modified logistic-softmax likelihood, for example, using it as a power of the functions? Q2: Can you give a bit more detail on how the modified logistic-softmax function would work in multi-label classification? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been discussed in the paper: the performance of the proposed modified logistic-softmax is only evaluated in Bayesian meta-learning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your questions below. > Q1. The proposed modified logistic-softmax function with temperature can be fundamental for different machine learning problems; it is not clear why it is specifically used for Bayesian meta-learning for few-shot classification. It is suggested to test it on simpler tasks first before applying it to tasks like Bayesian meta-learning. Thank you so much for pointing this out. Logistic-softmax was initially proposed to address the conjugacy issue that arises in multi-class Gaussian process classification. As the Bayesian framework is advantageous for uncertainty calibration, some researchers have focused on adapting the multi-class Gaussian process to few-shot classification tasks, where Bayesian meta-learning is one of the prevalent paradigms. Specifically, OVE attempts to apply it to Bayesian meta-learning but fails to achieve optimal results. Therefore, we target Bayesian meta-learning because it is one of the initial applications of the original logistic-softmax likelihood in the literature and is consistent with our research line. Nonetheless, our theoretical analysis has broad applicability across various domains beyond its motivation from the Bayesian framework. Meanwhile, we do realize that the presentation of our paper could be adjusted for better clarity, and we thank you for pointing this out. We will elaborate on why we apply the logistic-softmax function to Bayesian meta-learning and explain our motivation more clearly in the revised paper. In addition, we will add more discussion of other potential applications of logistic-softmax. > Q2. The proposed method performs marginally worse than the state of the art in maximum calibration error on a few benchmark datasets. It is not clear why this happens. Theoretically, an adaptive temperature is expected to give better performance on all tasks. Thank you for raising this concern. 
We notice that our result on maximum calibration error is worse on only one dataset. One possible explanation for this phenomenon is that the adaptive temperature is tuned on the validation set (line 295). It is highly possible that the test dataset contains some outliers that this particular temperature is unable to handle effectively. To provide some context, the MCE result of 0.036 is from a bin with a small number of samples. > Q3. Have you considered other forms of the temperature in the modified logistic-softmax likelihood, for example having it as the power of the functions? Thank you for reminding us; we will add a section discussing possible forms of the temperature in the revised paper. We chose the current form because we are motivated by temperature in contrastive learning, where the most popular choice is to scale the logits directly. In addition, we also believe that it is the simplest way to maintain the conditionally conjugate structure, while using the power of the function may lead to an intractable integral unless further adapted. > Q4. Can you give a bit more detail of how the modified logistic-softmax function in multi-label classification? Thank you for raising this question. In general, multi-label classification is often treated as a combination of binary classification problems. For example, [1-3] use independent classifiers for each label and design a sum of binary cross-entropy (BCE) losses as the training objective. At a higher level, this phenomenon is due to the lack of an appropriate likelihood function that identifies multiple positive labels at the same time, so researchers have to rely on the One-vs-Rest scheme. 
Specifically, if we use the softmax function (at a low temperature) to process the logits, the output fails to depict the probability of multiple labels, as softmax converges to a one-hot vector (for example, if the positive labels include cat and dog and the logits are 5 and 4.9, softmax at a low temperature outputs 0.99 and 0.01). Our modified logistic-softmax function can leverage the upside of a low temperature while preserving an appropriate probability output (for logits of 5 and 4.9, logistic-softmax at a low temperature outputs 0.51 and 0.49). Although we acknowledge the potential barriers in the application to multi-label classification (e.g., designing a novel loss function paired with logistic-softmax), we hold a strong belief in its potential to motivate new paradigms in multi-label classification. [1] Lanchantin, Jack, et al. "General multi-label image classification with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [2] Wang, Haoran, et al. "Can multi-label classification networks know what they don’t know?." Advances in Neural Information Processing Systems 34 (2021) [3] Panos, Aristeidis, Petros Dellaportas, and Michalis K. Titsias. "Large scale multi-label learning using Gaussian processes." Machine Learning 110 (2021) --- Rebuttal Comment 1.1: Title: Response acknowledged Comment: I thank the authors for providing the response --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you for your response. If you have no other concerns, would you be willing to consider increasing the score?
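The numerical contrast described in this reply can be checked directly. The sketch below assumes the logistic-softmax takes the form $\sigma(f_k/\tau)/\sum_c \sigma(f_c/\tau)$ (the paper's exact parameterization may differ) and reproduces the qualitative behavior: at a low temperature, softmax collapses to a near one-hot vector, while logistic-softmax keeps two strong positive labels at roughly equal probability.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits, tau):
    # numerically stable softmax with temperature tau
    m = max(logits)
    exps = [math.exp((x - m) / tau) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logistic_softmax(logits, tau):
    # normalize per-class sigmoids instead of exponentials
    sigs = [sigmoid(x / tau) for x in logits]
    s = sum(sigs)
    return [v / s for v in sigs]

logits = [5.0, 4.9]  # two positive labels, e.g. cat and dog
tau = 0.01           # low temperature
print(softmax(logits, tau))           # approximately [1.0, 0.0]
print(logistic_softmax(logits, tau))  # approximately [0.5, 0.5]
```

Both sigmoids saturate near 1 for large positive logits, so the normalized output stays close to uniform over the strongly activated classes instead of collapsing to one-hot.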
Rebuttal 1: Rebuttal: We extend our sincere appreciation to all reviewers for their time, effort, and insightful feedback. We are encouraged by their recognition of the significance of our work in introducing an effective modification to control the confidence of logistic-softmax, uncovering novel theoretical properties and broad applicability, deriving an efficient mean-field inference method, conducting comprehensive numerical experiments, and maintaining clear and concise writing. In the following, we respond meticulously to each of the reviewers' comments. Our aim is to ensure that we address all the concerns and offer clarity and reassurance where needed. Should any additional questions arise, we invite reviewers to engage in further discussion. Once again, we express our gratitude for your time and dedication in reviewing our work.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Accept (poster)
Summary: The authors present a theoretical analysis of optimization for path-regularized parallel ReLU networks. They demonstrate how to represent this non-convex problem as a regularized convex problem. While convex problems in general admit polynomial time solutions, the size of this convex problem is exponential in the rank $r$ of the data matrix. However, its size is polynomial in the data dimension $d$ and number of examples $n$ if those are treated as being independent of the rank $r$. The authors then establish approximation guarantees when operating on a rank $r$ approximation of a (potentially) full rank data matrix. For matrices with quickly decaying singular values, this allows the convex problem to be efficiently solved at a decent accuracy. They then run experiments on a toy task and image classification datasets to demonstrate that their method can outperform gradient descent and other optimizers in both final performance and time. Strengths: - Theoretical understanding of neural networks is an important and impactful topic. - Paper is well written and introduces novel analysis. - Empirical results support theory. Weaknesses: - Although the authors claim that their solution is polynomial time in both data dimension $d$ and number of examples $n$, it is exponential in the rank $r$ of the data matrix. While they prove that using a low rank approximation will approximate the optimal solution well if the low rank approximation is good, their approximation bound will be poor if the data matrix is inherently high dimensional. So their solution still has exponential complexity in the "implicit" dimensionality of the inputs. Typos: - line 166: I see $RR$ in one of the expressions. Should this be $\sqrt{m_1m_2}R$ instead? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the algorithm scale to larger problem sizes?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. Please see our responses below. $\textbf{Responses to the comments on complexity and large problem sizes:}$ There are multiple ways to further reduce the computational complexity of solving our convex program, as detailed below. 1. First of all, we can change the architecture to CNNs to reduce computational complexity as detailed in Remark 3 of Appendix A.10. To extend our convex approach to convolutional neural networks (CNNs), we basically need to separate the data matrix $\mathbf{X}$ into patches as $\\{\mathbf{X}\_b\\}\_{b=1}^B$, where $\mathbf{X}\_b \in \mathbb{R}^{n \times h}$ and $h$ denotes the filter/patch size. However, notice that since CNNs operate on the patch matrices $\\{\mathbf{X}\_b\\}\_{b=1}^B$ instead of the full data matrix $\mathbf{X}$, the number of hyperplane arrangements $P_1$ is upper-bounded by a fully polynomial term, $P_1 \leq \mathcal{O}(n^{ r_c})$, where $r_c:=\max_b \mbox{rank}(\mathbf{X}\_b)\leq h \ll \min\\{n,d\\}$, even when the data matrix is full rank, i.e., $r=\min\\{n,d\\}$. For instance, for a CNN with $m_1$ $3 \times 3$ filters, $r_c \leq 9$ independent of $n,d$. As a consequence, the weight sharing structure in CNNs dramatically limits the number of possible hyperplane arrangements and therefore substantially reduces the complexity of solving our convex program. 2. In Section 3.1, we propose an $\epsilon$-approximate training approach that has polynomial-time complexity even when the data matrix is full rank. Here, you can select the rank $r$ by plugging the desired approximation error and network structure into equation 10. We show that the approximation error proved in Theorem 2 can be arbitrarily small for practically relevant problems.
As an example, consider a parallel architecture training problem with $\ell_2$ loss function, then the upper bound becomes $(1+\frac{\sqrt{m_1 m_2}R\sigma\_{r+1}}{\beta})^2$, which can be arbitrarily close to one due to the presence of a noise component (with small $\sigma_{r+1}$) in most datasets in practice (see Figure 4 for an empirical verification). This observation is also valid for several benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, which exhibit exponentially decaying singular values and therefore effectively have a low-rank structure. In addition, singular values can be computed to set the target rank and the value of the regularization coefficient to obtain any desired approximation ratio using Theorem 2. **We included a section in Appendix A.9 to further clarify these issues.** 3. As noted in Appendix A.1, we use a sampling-based approach where one can randomly sample a tiny subset of all possible hyperplane arrangements and then solve the convex program with this subset. Thus, although the resulting approach isn't exact, the training complexity won't be exponential in $\prod_{j=1}^l m_j$ anymore. The experimental results in Section 4 and Appendix A.1 show that this approximation in fact works extremely well, specifically better/faster than training the standard non-convex architecture with solvers such as SGD and Adam. You can find the detailed explanation of the sampling procedure below. We note that the convex program can also be approximately solved by using a subset of diagonal matrices $\\{\\{\mathbf{D}\_{1ij}\\}\_{i=1}^{\bar{P}\_1}\\}\_{j=1}^{m_1}$ and $\\{\mathbf{D}\_{2l}\\}\_{l=1}^{\bar{P}\_2}$.
In particular, for the first ReLU layer, we can randomly sample $m_1\bar{P}\_1$ vectors $\mathbf{w}\_{ij}$ from an arbitrary probability distribution, e.g., for multivariate standard Gaussian $\mathbf{w}\_{ij}\sim \mathcal{N}(\mathbf{0},\mathbf{I}\_d)$ and then set $\mathbf{D}_{1ij}=\mathrm{diag}(\mathbf{1}[\mathbf{X} \mathbf{w}\_{ij}\geq 0]), \forall i \in [\bar{P}_1]$, $\forall j \in [m_1]$. Likewise, for the second ReLU layer, we can randomly sample $\\{(\mathbf{W}\_{1l},\mathbf{w}\_{2l})\\}\_{l=1}^{\bar{P}\_2}$ and then set $\mathbf{D}\_{2l}=\mathrm{diag}\left(\mathbf{1}\left[( \mathbf{X} \mathbf{W}\_{1l})\_+\mathbf{w}\_{2l}\geq 0\right] \right), \forall l \in [\bar{P}\_2]$. Then, we can solve the convex program using only these hyperplane arrangements. We also remark that even though this is an approximation, it is extremely efficient and works much better/faster than standard non-convex training as shown in our experimental results in Section 4 and Appendix A.1. $\textbf{Typos:}$ We thank the reviewer for pointing out these typos. We will correct them in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns were adequately addressed.
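The first-layer half of the sampling procedure described in the rebuttal above can be sketched in a few lines of NumPy. The dimensions and the Gaussian sampling distribution below are placeholders, and only the first ReLU layer's sign patterns are formed; the second-layer step would sample weight pairs analogously:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m1, P1_bar = 200, 10, 4, 25   # placeholder problem sizes

X = rng.standard_normal((n, d))

# Sample m1 * P1_bar candidate directions w_ij ~ N(0, I_d) and record
# the induced ReLU sign patterns 1[X w >= 0]; each boolean column plays
# the role of one diagonal matrix D_1ij = diag(1[X w_ij >= 0]).
W = rng.standard_normal((d, m1 * P1_bar))
patterns = (X @ W >= 0)   # shape (n, m1 * P1_bar), dtype bool

# The convex program is then solved over these sampled arrangements
# only, instead of enumerating all of them.
print(patterns.shape)
```

Each column of `patterns` is one sampled hyperplane arrangement; the downstream convex solver (not shown) would restrict its variables to this subset.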
Summary: This paper studies the problem of training parallel networks with three-layer subnetworks under path regularization. It is shown that the non-convex optimization problem of minimizing the regularized loss can be cast as a convex optimization problem, which can be solved efficiently. Strengths: 1. The study of parallel networks is interesting. 2. Finding efficient algorithms for training neural networks is important and the proposed solution seems novel and performs well. Weaknesses: 1. In all the experiments the networks have $m_2=1$? What happens if $m_2$ is large? Does it affect the training time? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "Weaknesses" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. Please see our responses below. $\textbf{Responses to the comments on the number of neurons } m_2\textbf{:}$ As noted in the paper, we performed all the experiments in a completely fair setting where the convex and nonconvex models have the same number of parameters and there is a one-to-one mapping between these parameters. Therefore, if we increase $m_2$, this would increase the training times for both the nonconvex and the equivalent convex model. However, as shown in our experiments, the training of convex models will still be significantly faster compared to nonconvex training thanks to the benefits of convexity.
Summary: The paper shows how parallel ReLU networks can be trained to approximate optimality in polynomial time by convex programming when employing path regularization (rather than weight decay or other regularization of the network weight parameters) and a low-rank approximation of the data matrix. Some numerical experiments demonstrate practical benefits of the approach. -- update: I have read and acknowledged all other reviews and the authors' rebuttals, see discussion. -- Strengths: Positive and practical results on optimal training of neural networks are quite rare, so I see this work as an interesting contribution. The numerical experiments demonstrate that the proposed approach can work in practice as well (however, it should be mentioned in the main paper, not only as an aside in the supplementary document (Sect. A.1, ll. 491ff), that the image classification experiments did not use all hyperplane arrangements for the convex program, but only a small sampled subset, thus deviating from the theoretical model to achieve practical speed-up). Weaknesses: I have read this paper (or a precursor preprint) a year or two ago, so the results aren't exactly new anymore, which may be a weakness. The techniques (to obtain a convex program solvable in poly-time) are very similar to earlier works on two-layer ReLU networks, which may make this work seem somewhat derivative; I don't think this is necessarily a weakness, but it may open a way to significantly shorten the very technical parts and refer to the earlier work (by Ergen & Pilanci) instead. In the (main) paper, Tables and Figures are not placed near where they are mentioned/discussed in the text body, which should be fixed as it breaks the flow of reading in a distracting way. Some statements are repetitive and should be consolidated accordingly (e.g., "Remark 1" essentially repeats what was mentioned at the end of the first paragraph of Sect. 2, and the sentence in lines 86-87 repeats Footnote 1).
Throughout, there are various small typos and missing words (short ones like "of"). Finally, it appears to have become common practice to submit papers to NeurIPS (and ICML) whose actual, main content is put in a separate "Supplementary Material" document whose length exceeds that of the supposed main paper. This paper is only a partial exception -- the main paper provides enough information and details to follow the ideas, but all (admittedly very technical) proofs can then be found in the very long Appendix (supplementary document) along with some further additional information. Thus, it may be considered a weakness of the paper to have such a long Appendix, because this format bears the danger of the formally most important parts of the work (proofs; algorithm details and specific setups) not being reviewed thoroughly due to the short review period and high review load of reviewers at these conferences. I cannot exclude myself from this -- I simply did not have the time to rigorously check all the details in the long supplementary document, and therefore cannot give a definitive answer regarding the proofs' correctness beyond "believing" everything appears to be well in order, especially since it is so reminiscent of the earlier work on two-layer ReLU networks, which I had read carefully at some point. In this regard, I cannot help but wonder if a full journal paper would not be the better way to publish results that simply do not fit into the 9-page limit. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it correct that the entries in the last column of Table 2 are all identical? Also, in Remark 2, l. 166, is it meant to say "RR" in the numerator? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have been adequately addressed, as far as I can tell. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We will address all of your concerns regarding the paper structure and typos in the revised version. Please see our responses below. $\textbf{Our contributions over prior works:}$ We first would like to clarify our contributions over the following prior works: [17] Ergen and Pilanci. Global optimality beyond two layers: Training deep relu networks via convex programs. ICML 2021. [29] Pilanci and Ergen. Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks. ICML 2020. * One of the main crucial differences is the training problem we consider. Particularly, [17] analyzes a training problem with $\ell_2$-norm regularization while we focus on path regularization. We note that in the earlier work (Pilanci and Ergen, 2020) [29], the authors introduced a convex training approach for two-layer $\ell_2$-norm regularized networks. However, the authors were not able to extend this result to deeper architectures since it is not clear how the rescaling should be applied to obtain a finite dimensional dual problem. To avoid this issue, **[17] introduces additional unit Frobenius norm constraint on the first layer weights (see eq (2) in [17])**. In other words, since finding the right scaling for 3-layer networks is challenging, the authors only regularize the last two layers' weights and include an additional constraint on the first layer weights. Moreover, when they extend their approach to deeper architectures, say an $L$-layer network, **they have to impose constraints on the first $L-2$ layer weights, which makes the assumption even stronger**. Therefore, as already noted by the authors of [17] in the conclusion section, we believe that this is a significantly restricted setting and doesn't reflect realistic neural network problems. 
Unlike [17], we realize that the correct way of regularizing these deep parallel architectures is to use path regularization, which yields a simple and interpretable convex program without requiring any additional constraints. We also note that our path regularized architecture reduces to the same problem in [29] when $L=2$ and therefore subsumes the results there in a cleaner manner. * Another major difference is the introduction of a guaranteed approximation scheme which results in significantly better time complexity. As already observed by the authors, the approach in [17] has exponential time complexity when the data matrix is full rank, which is unavoidable. However, in this paper, we develop an approximation scheme which has fully polynomial-time complexity with respect to all dimensions for all datasets, i.e., even for full rank data, and prove strong approximation guarantees for this algorithm in Theorem 2. **To the best of our knowledge, this is the first convex optimization based and fully polynomial-time training algorithm for arbitrarily deep networks with strong theoretical guarantees**. We also demonstrated the efficacy of this algorithm via a simulation in Figure 4. * Finally, [17] only considers the case where the second hidden layer has only one neuron, i.e., $m_2 = 1$, and therefore fails to analyze standard three-layer or deeper networks. Note that this is a significantly restrictive assumption on the architecture. In contrast, we study standard deep networks with an arbitrary number of hidden neurons in the second layer. $\textbf{Responses to the comments on Table 2:}$ We note that the last two columns are not identical since one of them is a function of the rank of the data $r$, while the other depends on the fixed constant $\kappa$. $\textbf{Responses to the comments on Remark 2:}$ We thank the reviewer for pointing out this issue and apologize for this typo. The correct version should have only one $R$ and we will revise this in the updated paper.
--- Rebuttal Comment 1.1: Comment: I thank the authors for their replies to the issues raised by the other reviewers and myself. Overall, I think most concerns were adequately addressed. Regarding my question on Table 2, there seems to have been a misunderstanding, as the authors' reply doesn't match the question (and, it seems, doesn't pertain to that table). The question is why in the *last* column of table 2, the number pairs for each K are identical (0.0007 and 4.947) and if this was perhaps a mistake? So I'd ask the authors to check this and fix/explain accordingly. Overall I maintain my recommendation to accept the (revised) manuscript. --- Reply to Comment 1.1.1: Title: Re: Table 2 Comment: We apologize for the confusion regarding our response on Table 2. Here, we wanted to emphasize that the equivalent convex formulation is independent of the number of subnetworks $K$. Thus, although the performance of the original nonconvex formulation improves with respect to the number of subnetworks due to the benign impact of the overparameterization level, our convex approach solves the same optimization problem for each case and outperforms the original nonconvex formulations. This illustrates that our convex formulation doesn't require extreme levels of overparameterization to be optimized properly unlike standard nonconvex training approaches.
Summary: This study considers training parallelized multi-layer neural networks with path-wise norm regularization. Through a duality argument, the authors reduce the regularized empirical minimization problem, which is highly non-convex, to a convex programming problem. They show that when the data matrix has a small rank, the solving time is only polynomial. For the general case, where the data matrix does not have a small rank, an approximation method that also runs in polynomial time is proposed through a matrix approximation technique. Numerical experiments are conducted to show the effectiveness of the proposed regularization and the solving algorithm. Strengths: - The theoretical results seem to be solid and clearly written. - The reduction of the neural network training problem beyond two layers to convex programming is novel, and the proposed algorithm seems to work effectively in the experiment section. Weaknesses: The exhibited results are interesting from the theoretical view, but I think they are restrictive from the practical view. - Whereas the reduction to convex programming is interesting, the obtained time complexity $O(d^3m_1^3m_2^32^{3(m_1+1)m_2}n^{3(m_1+1)r})$ will be extremely large for practical models even if it is polynomial. Indeed, the numerical experiments seem to be limited to narrow networks. - Apart from the above concern, although an approximate algorithm is provided, obtaining the low-rank approximation of the data matrix requires much time when using a large dataset (especially those to which deep learning is applied). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Do you consider that the time complexity obtained in Proposition 2 is just a theoretical one, and it will be faster in practice? - I have several concerns about the parameter dependence of the obtained time complexity in Proposition 2. First, does it not depend on $K$, the number of parallelized networks?
Second, the time complexity exhibited in Corollary 1 seems to be inconsistent with Proposition 2. When we apply Corollary 1 to 3-layer networks, the term $2$ is raised to the power $3(m_1+m_2)$, but it becomes $3(m_1+1)m_2$ in Proposition 2. Which is correct? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. Please see our responses below. $\textbf{Responses to the comments on complexity:}$ As stated by the reviewer, our computational complexity is a theoretical upper bound. We already demonstrated in our experiments (Figure 5 and Table 3 in the appendix) that our convex solvers are significantly faster/more efficient than standard non-convex training with conventional solvers such as SGD/Adam. Therefore, we believe that our convex solver can replace standard non-convex training with SGD/Adam in practical settings. Also, there are multiple ways to further reduce the computational complexity of solving our convex program, as detailed below. 1. First of all, we can change the architecture to CNNs to reduce computational complexity as detailed in Remark 3 of Appendix A.10. To extend our convex approach to convolutional neural networks (CNNs), we basically need to separate the data matrix $\mathbf{X}$ into patches as $\\{\mathbf{X}\_b\\}\_{b=1}^B$, where $\mathbf{X}\_b \in \mathbb{R}^{n \times h}$ and $h$ denotes the filter/patch size. However, notice that since CNNs operate on the patch matrices $\\{\mathbf{X}\_b\\}\_{b=1}^B$ instead of the full data matrix $\mathbf{X}$, the number of hyperplane arrangements $P_1$ is upper-bounded by a fully polynomial term, $P_1 \leq \mathcal{O}(n^{ r_c})$, where $r_c:=\max_b \mbox{rank}(\mathbf{X}\_b)\leq h \ll \min\\{n,d\\}$, even when the data matrix is full rank, i.e., $r=\min\\{n,d\\}$. For instance, for a CNN with $m_1$ $3 \times 3$ filters, $r_c \leq 9$ independent of $n,d$. As a consequence, the weight sharing structure in CNNs dramatically limits the number of possible hyperplane arrangements and therefore substantially reduces the complexity of solving our convex program. 2.
In Section 3.1, we propose an $\epsilon$-approximate training approach that has polynomial-time complexity even when the data matrix is full rank. Here, you can select the rank $r$ by plugging the desired approximation error and network structure into equation 10. We show that the approximation error proved in Theorem 2 can be arbitrarily small for practically relevant problems. As an example, consider a parallel architecture training problem with $\ell_2$ loss function, then the upper bound becomes $(1+\frac{\sqrt{m_1 m_2}R\sigma\_{r+1}}{\beta})^2$, which can be arbitrarily close to one due to the presence of a noise component (with small $\sigma_{r+1}$) in most datasets in practice (see Figure 4 for an empirical verification). This observation is also valid for several benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, which exhibit exponentially decaying singular values and therefore effectively have a low-rank structure. In addition, singular values can be computed to set the target rank and the value of the regularization coefficient to obtain any desired approximation ratio using Theorem 2. **We included a section in Appendix A.9 to further clarify these issues.** 3. As noted in Appendix A.1, we use a sampling-based approach where one can randomly sample a tiny subset of all possible hyperplane arrangements and then solve the convex program with this subset. Thus, although the resulting approach isn't exact, the training complexity won't be exponential in $\prod_{j=1}^l m_j$ anymore. The experimental results in Section 4 and Appendix A.1 show that this approximation in fact works extremely well, specifically better/faster than training the standard non-convex architecture with solvers such as SGD and Adam. You can find a detailed explanation of the sampling procedure below.
We note that the convex program can also be approximately solved by using a subset of diagonal matrices $\\{\\{\mathbf{D}\_{1ij}\\}\_{i=1}^{\bar{P}\_1}\\}\_{j=1}^{m_1}$ and $\\{\mathbf{D}\_{2l}\\}\_{l=1}^{\bar{P}\_2}$. In particular, for the first ReLU layer, we can randomly sample $m_1\bar{P}\_1$ vectors $\mathbf{w}\_{ij}$ from an arbitrary probability distribution, e.g., a multivariate standard Gaussian $\mathbf{w}\_{ij}\sim \mathcal{N}(\mathbf{0},\mathbf{I}\_d)$, and then set $\mathbf{D}_{1ij}=\mathrm{diag}(\mathbf{1}[\mathbf{X} \mathbf{w}\_{ij}\geq 0]), \forall i \in [\bar{P}_1]$, $\forall j \in [m_1]$. Likewise, for the second ReLU layer, we can randomly sample $\\{(\mathbf{W}\_{1l},\mathbf{w}\_{2l})\\}\_{l=1}^{\bar{P}\_2}$ and then set $\mathbf{D}\_{2l}=\mathrm{diag}\left(\mathbf{1}\left[( \mathbf{X} \mathbf{W}\_{1l})\_+\mathbf{w}\_{2l}\geq 0\right] \right), \forall l \in [\bar{P}\_2]$. Then, we can solve the convex program using only these hyperplane arrangements. We also remark that even though this is an approximation, it is extremely efficient and works much better/faster than standard non-convex training as shown in our experimental results in Section 4 and Appendix A.1. $\textbf{Complexity and number of parallel networks K:}$ We thank the reviewer for pointing these out. As emphasized in the proof of Theorem 1, the number of subnetworks $K$ need not be larger than a certain threshold, denoted $K^* \leq n+1$, to enable strong duality. In the worst-case scenario, we have $K^*=n+1$. Due to this observation, the computational complexity of our convex program depends on $K^* \leq n+1$, not $K$. $\textbf{Responses to the comments on Proposition 2:}$ We thank the reviewer for pointing out this issue and apologize for this typo. The generic formulation in Corollary 1 is the correct one and we will revise Proposition 2 in the updated paper. --- Rebuttal Comment 1.1: Comment: Thanks for the author's reply.
My concerns were adequately addressed, so I raised my score to 7.
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This work studies the training of 3-layer parallel ReLU networks with path regularization. Specifically, the authors show that minimizing the non-convex learning objective with an $\ell_2$ pathwise weight decay is equivalent to a convex program that can be solved optimally with polynomial-time complexity (Proposition 1), except when the data matrix is full-rank. The authors also propose a more efficient polynomial-time algorithm (Theorem 2) for arbitrary data by using a local approximation of the data matrix in the training objective. The paper also provides numerous empirical results on toy and real data to support the theoretical results. Strengths: + This work provides a convex duality for parallel ReLU networks with path regularization. + An optimal, polynomial-time algorithm for arbitrary data, including full-rank data. Weaknesses: - This work is basically an extension of techniques and results in [17] to a slightly different regularization, so there is a concern about novelty. - The performance of the proposed algorithm does not improve over others on Fashion-MNIST, and why is the gain much higher on CIFAR-10, which is a more challenging dataset? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors claim that "we proved the polynomial-time trainability of deep ReLU networks without requiring any impractical assumptions unlike [17, 29]." Can the authors compare and contrast the assumptions made in this work and those works? 2. How practical is the algorithm in Theorem 2? I can't tell from Proposition 1 what the closed-form mapping is and how one constructs optimal network weights from it. Can the authors clarify? 3. In Table 1, the bounds are exactly the same between this work and [29] for 2-layer networks. Why? 4. How does the low-rank approximation error propagate to the optimality? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. Please see our responses below. $\textbf{Our contributions over [17,29]:}$ * One of the main crucial differences is the training problem we consider. Particularly, [17] analyzes a training problem with $\ell_2$-norm regularization while we focus on path regularization. We note that in the earlier work (Pilanci and Ergen, 2020) [29], the authors introduced a convex training approach for two-layer $\ell_2$-norm regularized networks. However, the authors were not able to extend this result to deeper architectures since it is not clear how the rescaling should be applied to obtain a finite dimensional dual problem. To avoid this issue, **[17] introduces additional unit Frobenius norm constraint on the first layer weights (see eq (2) in [17])**. In other words, since finding the right scaling for 3-layer networks is challenging, the authors only regularize the last two layers' weights and include an additional constraint on the first layer weights. Moreover, when they extend their approach to deeper architectures, say an $L$-layer network, **they have to impose constraints on the first $L-2$ layer weights, which makes the assumption even stronger**. Therefore, as already noted by the authors of [17] in the conclusion section, we believe that this is a significantly restricted setting and doesn't reflect realistic neural network problems. Unlike [17], we realize that the correct way of regularizing these deep parallel architectures is to use path regularization, which yields a simple and interpretable convex program without requiring any additional constraints. We also note that our path regularized architecture reduces to the same problem in [29] when $L=2$ and therefore subsumes the results there in a cleaner manner. 
* Another major difference is the introduction of a guaranteed approximation scheme which results in significantly better time complexity. As already observed by the authors, the approach in [17] has exponential time complexity when the data matrix is full rank, which is unavoidable. However, in this paper, we develop an approximation scheme which has fully polynomial-time complexity with respect to all dimensions for all datasets, i.e., even for full rank data, and prove strong approximation guarantees for this algorithm in Theorem 2. **To the best of our knowledge, this is the first convex optimization based and fully polynomial-time training algorithm for arbitrarily deep networks with strong theoretical guarantees**. We also demonstrated the efficacy of this algorithm via a simulation in Figure 4. * Finally, [17] only considers the case where the second hidden layer has only one neuron, i.e., $m_2 = 1$, and therefore fails to analyze standard three-layer or deeper networks. Note that this is a significantly restrictive assumption on the architecture. In contrast, we study standard deep networks with an arbitrary number of hidden neurons in the second layer. $\textbf{Responses to the comments on closed form mapping and practicality:}$ We first note that the explicit formulation for the closed-form mapping (in Proposition 1) between the parameters of the nonconvex and convex formulations can be found at the beginning of page 19 of the appendix (fullpaper.pdf). Also notice that, as already demonstrated in our experiments (Figure 5 and Table 3 in the appendix), our convex solvers are significantly faster and more efficient than standard non-convex training with conventional solvers such as SGD/Adam. Therefore, we believe that our convex solver can replace standard non-convex training with SGD/Adam in practical settings. $\textbf{Responses to the comments on Table 1:}$ This is due to the fact that our path regularized architecture reduces to the same problem as in [29] when $L=2$. 
However, the analysis in [29] does not extend to arbitrarily deep networks because of the inconsistencies between $\ell_2$ regularization and the scaling of the parameters in Lemma 1. **Therefore, our path regularized architecture not only subsumes the results in [29] when $L=2$ but also extends them to arbitrarily deep networks in a cleaner manner.** $\textbf{Responses to the comments on low rank approximation and optimality:}$ We show that the approximation error proved in Theorem 2 can be arbitrarily small for practically relevant problems. As an example, consider a parallel architecture training problem with the $\ell_2$ loss function; then the upper bound becomes $(1+\frac{\sqrt{m_1 m_2}R \sigma_{r+1}}{\beta})^2$, which can be arbitrarily close to one due to the presence of a noise component (with small $\sigma_{r+1}$) in most datasets in practice (see Figure 4 for an empirical verification). This observation is also valid for several benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, which exhibit exponentially decaying singular values and therefore effectively have a low-rank structure. In addition, singular values can be computed to set the target rank and the value of the regularization coefficient to obtain any desired approximation ratio using Theorem 2. **We included a section in Appendix A.9 to further clarify these issues.**
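As a rough illustration of how a bound of this form could be evaluated in practice (a sketch under our own naming, not code from the paper), one can compute the singular values of the data matrix and plug $\sigma_{r+1}$ into the $\ell_2$-loss expression $(1+\sqrt{m_1 m_2}\,R\,\sigma_{r+1}/\beta)^2$; with a fast-decaying spectrum the bound approaches one as the target rank grows:

```python
import numpy as np

def theorem2_style_bound(X, r, m1, m2, R, beta):
    """Evaluate (1 + sqrt(m1*m2) * R * sigma_{r+1} / beta)^2, the
    l2-loss approximation-ratio bound driven by the (r+1)-th singular
    value of the data matrix X (0-indexed: sigma[r])."""
    sigma = np.linalg.svd(X, compute_uv=False)  # descending order
    sigma_r1 = sigma[r] if r < len(sigma) else 0.0
    return (1.0 + np.sqrt(m1 * m2) * R * sigma_r1 / beta) ** 2

# Synthetic data with exponentially decaying singular values,
# mimicking the "effectively low-rank" regime mentioned above.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
S = np.zeros((50, 20))
S[:20, :20] = np.diag(2.0 ** -np.arange(20))
X = U @ S @ V.T

# Illustrative hyperparameters m1, m2, R, beta (not from the paper).
print(theorem2_style_bound(X, r=10, m1=8, m2=4, R=1.0, beta=0.1))
```

Larger target ranks pick up smaller tail singular values, so the bound shrinks toward one.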
A Unified Framework for Rank-based Loss Minimization
Accept (poster)
Summary: This paper presents rank-based loss as a popular replacement for empirical loss. The work develops how optimization of a rank-based loss can be carried out by a proximal alternating direction method of multipliers. The authors also establish the algorithm's convergence under certain conditions. Experiments on synthetic and real datasets, together with numerical simulations showing how the framework behaves, are included in the paper and support the theoretical results. Strengths: The paper is mathematically strong. Weaknesses: The contribution doesn't seem significant enough. The applicability of the result is not clearly shown. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would like the authors to compare the method with similar-purpose frameworks, even if not using rank-based loss, and see their differences in performance. I would appreciate the inclusion of the applicability of the result to a real machine-learning problem. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer kXbu for the constructive comments! **The applicability in machine learning** It is noteworthy that rank-based loss is a highly valuable and extensively researched concept in the field of machine learning. Commonly encountered variations of rank-based losses include spectral risk measures [1], empirical human risk [4], and the average of ranked-range aggregate loss [3]. These losses have been demonstrated to be significant and applicable in various scenarios, as expounded in the literature. Spectral risk measures, which include the well-known CVaR loss, the maximum risk [5], and the average top-k risk [2], are widely used risk metrics in the fields of machine learning and finance. Empirical human risk is an aggregate loss framework inspired by cumulative prospect theory [6], which considers fairness performance across different groups. The average of ranked-range aggregate loss is particularly effective in handling outliers, ensuring the robustness of the model against anomalous observations in the dataset. This paper considers a unified framework, equation (1), which encompasses all the aforementioned rank-based losses. Furthermore, we introduce a method to tackle problems within this unified framework. Our proposed algorithm exhibits comparable or even superior performance compared to algorithms specifically tailored for the three individual frameworks, as demonstrated in the experimental results. The versatility of our framework, along with the promising results obtained in experiments, highlights the potential of our approach in addressing various rank-based loss scenarios in machine learning applications. **Comparison with the existing method** We apologize if we did not understand your first question correctly. To the best of our knowledge, there does not exist any method that can address a unified framework akin to equation (1). Our proposed algorithm is the first one that is applicable to this unified framework. 
Nevertheless, we have compared different algorithms for solving (1) with different risk measures in the current version of our paper. In the experiments, we conducted comprehensive comparisons with algorithms used or proposed in prior work for the three application scenarios, i.e., LSVRG, SGD, and DCA. Our algorithm consistently demonstrated stability and superior performance in these experiments. Additionally, we examined the commonly used empirical risk minimization (ERM), which can be seen as not using rank-based loss, and presented the results in Figure 1 of Section 5.1 and Appendix C.1. In this setting, SGD refers to the standard stochastic gradient descent method, which is a commonly used optimization algorithm in machine learning. Once again, we emphasize that our algorithm offers a comprehensive and effective solution for various rank-based loss scenarios within the unified framework presented in equation (1). The experimental results consistently demonstrate the robustness and efficacy of our proposed algorithm. **The real machine-learning problem** In our study, we primarily focused on the binary classification scenario to emphasize the effectiveness of our algorithm in solving problems within the equation (1) framework, with particular attention to comparing the objective function values. However, we also included classification accuracy results to provide a comprehensive evaluation. In Tables 1 and 2, we displayed the classification accuracies of the spectral risk measure minimization model on the 'SVMguide' and 'AD' binary classification datasets (detailed sources and statistical information are provided in the appendix). The results show that our algorithm achieves comparable classification accuracy with existing methods. Additionally, in Table 3, we demonstrated the results of empirical human risk minimization on the 'UTKFace' dataset for the binary gender classification task. 
The results indicate that our algorithm outperforms existing methods in terms of test accuracy. Regarding fairness metrics, detailed explanations are available in Appendix B.5. For our analysis, we used race groups (white G1 and other race G2) to compare fairness metrics. The results reveal that our algorithm performs favorably on most fairness metrics compared to existing methods. **References** [1] Mehta, R., Roulet, V., Pillutla, K., Liu, L., & Harchaoui, Z. (2023, April). Stochastic Optimization for Spectral Risk Measures. In International Conference on Artificial Intelligence and Statistics (pp. 10112-10159). PMLR. [2] Fan, Y., Lyu, S., Ying, Y., and Hu, B. (2017). Learning with average top-k loss. Advances in neural information processing systems, 30. [3] Hu, S., Ying, Y., Lyu, S., et al. (2020). Learning by minimizing the sum of ranked range. Advances in Neural Information Processing Systems, 33:21013–21023. [4] Leqi, L., Prasad, A., and Ravikumar, P. K. (2019). On human-aligned risk minimization. Advances in Neural Information Processing Systems, 32. [5] Shalev-Shwartz, S. and Wexler, Y. (2016). Minimizing the maximal loss: How and why. In International Conference on Machine Learning, pages 793–801. PMLR. [6] Tversky, A. and Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5:297–323. --- Rebuttal Comment 1.1: Title: acknowledgement of having read the authors' reply Comment: I thank the authors for addressing all the points I indicated and providing a reply to them.
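To make the rank-based losses discussed in this thread concrete, here is a minimal sketch of the weighted sorted-loss form: ERM, the average top-$k$ loss, and the maximum loss all arise from the same $\sum_i \sigma_i \boldsymbol{l}_{[i]}$ template with different weight vectors. The specific losses and weights below are illustrative choices, not taken from the paper, and follow the increasing-weight convention $\sigma_1 \leq \dots \leq \sigma_n$ (losses sorted in ascending order):

```python
import numpy as np

def rank_based_loss(losses, sigma):
    """sum_i sigma_i * l_[i], where l_[1] <= ... <= l_[n] are the sorted
    individual losses (increasing-weight convention)."""
    return float(np.dot(sigma, np.sort(losses)))

# Illustrative individual losses for n = 5 samples.
losses = np.array([0.1, 2.0, 0.5, 1.2, 0.3])
n = len(losses)

# ERM: uniform weights over all samples.
erm = rank_based_loss(losses, np.full(n, 1.0 / n))
# Average top-2: weight 1/2 on each of the two largest losses.
top_k = rank_based_loss(losses, np.array([0.0, 0.0, 0.0, 0.5, 0.5]))
# Maximum loss: all weight on the single largest loss.
max_loss = rank_based_loss(losses, np.array([0.0, 0.0, 0.0, 0.0, 1.0]))

print(erm, top_k, max_loss)
```

Putting more weight on the largest losses interpolates between the empirical mean and the worst-case loss, which is exactly the knob that spectral risk measures turn.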
Summary: This submission focuses on efficient minimization of a group of loss functions called rank-based losses. It proposes to consider several related losses from the perspective of a general unified framework with a regularizer. Then, focusing on the case of monotone increasing loss functions and weakly convex regularizers, it proposes an ADMM-based algorithm and shows its convergence rate under common assumptions. Furthermore, when the regularizer is non-smooth, it then extends the proposed algorithm with weaker assumptions and also shows its convergence rate. Sufficient numerical verification shows the satisfactory empirical performance of the proposed algorithm compared with several existing methods. Strengths: Originality: - The task provides a new perspective on an important problem and the proposed methods improving ADMM are novel. - Authors clearly address where exactly the improvements from existing methods are. - Related literature review is adequate and detailed. Quality: - The submission is technically rigorous. - Claims are well-supported by its clear presentation. Adequate theoretical and empirical results are presented. Clarity: - The submission is clearly written and easy to follow. It is very well organized and the story line expands naturally. Significance: - The results are important to the field. Others are very likely to use the results as a baseline method or build extensions upon it. Weaknesses: - It would be more persuasive to elaborate on potential practical limitations of the proposed method. For example, under what cases the proposed method may not function efficiently. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is equation (1) first proposed by this paper, or has it been mentioned in related publications? - In Figure 1 (h), it seems like existing methods can take advantage of more data at the beginning of optimization. How does this trend change when more samples are available? 
Is there a theoretical explanation available for the phenomenon? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact needs to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer G6DJ for the positive feedback and comments! **The potential practical limitations of the proposed method** To utilize our algorithm effectively, it is essential for the individual loss to exhibit monotonicity, as this allows us to employ the PAV algorithm to solve the $z$-subproblems. On the other hand, when dealing with a considerably large sample size, the computational efficiency of the PAV algorithm might be affected, potentially limiting the overall effectiveness of the entire algorithm. We will point out the limitations in our paper. **More information about equation (1)** Similar formulations have appeared in other papers. For example, in [1], the authors presented a similar objective, \begin{equation} \mathcal{R}_\sigma(\boldsymbol{w})+\frac{\mu}{2}\|\boldsymbol{w}\|_2^2 \quad \text{for } \mathcal{R}_\sigma(\boldsymbol{w}) = \sum_i \sigma_i \boldsymbol{l}_{[i]}(\boldsymbol{w}), \end{equation} with $0 \leq \sigma_1 \leq \dots \leq \sigma_n \leq 1$ and $\sum_{i=1}^n \sigma_i=1$. We replace $\boldsymbol{l}_{[i]}(\boldsymbol{w})$ by $\boldsymbol{l}_{[i]}\left(- \boldsymbol{y} \odot (X\boldsymbol{w})\right)$ in order to obtain easily solvable subproblems of the ADMM algorithm. Moreover, in equation (1), we only require $\sigma_i\geq 0$, which includes a wider range of rank-based losses, and allow $\sigma_i$ (in $\sigma_i(z_i)l(z_i)$) to have different values depending on whether $z_i$ is larger or smaller than the reference point, which includes the human risk loss. We sincerely apologize for the typos in equation (1), which should read $\boldsymbol{l}:\mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^d \to \mathbb{R}$. Here $\boldsymbol{l}:\mathbb{R}^n \to \mathbb{R}^n$ represents a vector-valued mapping whose $i$-th element represents the individual loss for the $i$-th sample. **The explanation about Figure 1 (h)** This is a very interesting observation. 
Following your question, we varied the random seed to generate different datasets to eliminate the impact of randomness, and we still observed the same phenomenon across different datasets; moreover, we increased the sample size to explore trends, but the phenomenon remained unchanged. The corresponding experimental plots are included in the submitted PDF file. The potential reason for this phenomenon could be that existing methods update their model parameters using a mini-batch of the sample that is independent of the sample size, resulting in lower computational costs per iteration. This enables other algorithms to update parameters more frequently, leading to faster convergence at the beginning of optimization. In contrast, the proposed algorithm uses the full batch of the sample in each iteration, so each iteration is slower. This, in turn, leads to inferior solutions compared to existing methods at the beginning of optimization. In Appendix B.2, we provide a detailed explanation of the time complexity of the PAV algorithm, which is $O(n + nT)$ for top-k loss and AoRR loss, where $T$ represents the maximum number of iterations when solving each PAV subproblem and $n$ is the sample size. As a result, as the sample size increases, the time required by our proposed algorithm increases. Nevertheless, we can still achieve higher accuracy within a reasonable time frame compared to existing algorithms. In future research, we will aim to develop a variant of the proposed algorithm that uses a mini-batch of samples instead of using the full batch to achieve better initial solutions at the beginning of optimization. **Reference** [1] Mehta, R., Roulet, V., Pillutla, K., Liu, L., & Harchaoui, Z. (2023, April). Stochastic Optimization for Spectral Risk Measures. In International Conference on Artificial Intelligence and Statistics (pp. 10112-10159). PMLR. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed replies to all of my and the other reviewers' concerns. The authors clarified my concerns over the differences between Eq. (1) and similar equations in the related literature. The consideration of the faster updates of existing methods in the experiments is very thoughtful.
Summary: This paper presents a new ADMM algorithm that focuses on three specific cases of rank-based losses. The algorithm's convergence is theoretically analyzed in the paper. Additionally, the authors conducted comprehensive experiments to compare the new algorithm with traditional approaches. The results indicate that the proposed algorithm outperforms traditional methods in terms of both efficiency and effectiveness. Strengths: 1. The authors proposed a new algorithm that can address three specific cases of rank-based losses, whereas traditional algorithms like SGD, DCA, and LSVRG can only handle one. Additionally, the new algorithm permits the regularization term to be a weakly convex function. 2. The authors' theoretical analysis of the new algorithm's convergence is a significant contribution that has not been achieved in previous work. 3. To demonstrate the advantages of the new algorithm, the authors conducted comprehensive experiments. The results show that the new algorithm generally outperforms existing methods in terms of both efficiency and effectiveness. Overall, this study provides valuable insights into the development of more efficient and effective algorithms for rank-based losses. Weaknesses: Significant Issues (Detailed Explanation Required) 1.The representation of the loss vector function in line 23 is unclear. If function l maps an n-dimensional column vector to a real number, how a comparison of magnitude can be made later on? 2.As is widely known, due to the unknown underlying distribution of the data, we can only optimize the discrete form of risk expectation, which is the arithmetic mean of n observations. The theoretical basis for doing so is that the arithmetic mean converges in probability to the expectation. Therefore, could you please explain the theoretical basis for the discrete form of spectral risk that you employed in lines 78-79. 3.Could you please clarify how the conclusion stated in line 116 was derived? 
If it is a result obtained from citing other papers, please provide the source. If it is a result you have proven yourself, please provide a detailed proof. 4.In line 159, definitions for two consecutive blocks disorder are provided, but definitions for three or more consecutive blocks disorder are missing. 5.In line 6 of Algorithm 2, it is not possible to select the disordered blocks because definitions for three or more consecutive blocks disorder have not been provided. 6.In line 6 of Algorithm 2, you assume that the optimal solutions for are equal to each other. Could you please provide a more detailed proof or a citation if the conclusion is derived from another paper? 7.In line 225, you stated that Assumptions 4 and 5 are weaker than Assumption 2. Please provide a detailed explanation for this claim. 8.Could you please provide specific information on the loss functions and regularizers used in the experiments in Sections 5.2 and 5.3? Minor issue (needs improvement). 1.The "l" in line 23 and the "l" in line 71 have different meanings. It’s suggested to use distinct notations to indicate this difference. 2.In Equation (1), the independent variable of is , where is a d-dimensional column vector. However, the independent variable of is written as an n-dimensional column vector in line 24. 3.Line 71 should use lowercase "d" instead of uppercase. 4.In line 85, please provide a brief explanation of the term "fairness". 5.There is a missing negative sign in line 127. 6.In line 154, "" should be in lowercase. 7.The explanations in lines 169-170 appear to contradict the explanations in lines 187-188. 8.To avoid confusion among readers, please use a consistent notation for the loss function "" in lines 71, 125, and 194. 9.Please provide an explanation for "dist" as used in line 249 at its initial occurrence in line 212. 10.The placement of "Dataset" in Tables 1 and 2 is inconsistent. 
11.Please use the notation "new-ADMM" to distinguish the modified version of the traditional ADMM algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the above comments about the Weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer GkCz for the feedback and comments! **The confusion in notations for the loss function** We apologize for the confusion. We should only use the definitions of $l$ and $\boldsymbol{l}$ as follows: 1. $l: \mathbb{R}\to\mathbb{R}$: a function that represents the loss for an individual sample. 2. $\boldsymbol{l}: \mathbb{R}^n\to\mathbb{R}^n$: a vector-valued function whose $i$-th element represents the individual loss for the $i$-th sample. We will correct all the places where the loss function is used incorrectly. **Theoretical basis for the discrete form of spectral risk** The discrete form of the spectral risk converges to the spectral risk of the population distribution, with the convergence governed by the Wasserstein distance. Please refer to [Proposition 1, 1]. **Conclusion about the Moreau envelope** The proximal operator of a $c$-weakly convex function $g$ is $\operatorname{prox}_{g,\gamma}(w)=\arg\min_{x\in\mathbb{R}^d}\left\{g(x)+\frac{1}{2\gamma}\|x-w\|^2\right\}$. According to the definition of a weakly convex function in line 106, and given $0 < \gamma < \frac{1}{c}$, the function $\mathbb{R}^d \ni x \mapsto g(x)+\frac{1}{2\gamma} \|x-w\|^2 \in \mathbb{R}$ is strongly convex for all $w\in\mathbb{R}^d$, thus its argmin is always a singleton. Furthermore, $M_{g,\gamma}(w)=\min_x\left\{g(x)+\frac{1}{2\gamma}\|x-w\|^2\right\}=g(\operatorname{prox}_{g,\gamma}(w))+\frac{1}{2\gamma}\|\operatorname{prox}_{g,\gamma}(w)-w\|^2$ is well-defined. A detailed proof can be found in [Proposition 3.1, 2]. **Explanation of the PAV algorithm** Sorry for the missing definition of three or more consecutive out-of-order blocks, which is analogous to that of two consecutive out-of-order blocks. That is, if $v_{[s_k,s_{k+1}]} > v_{[s_{k+1} + 1,s_{k+2}]} > \cdots > v_{[s_{k+t}+1,s_{k+t+1}]}$, then $\{[s_k,s_{k+1}],[s_{k+1} + 1,s_{k+2}], \cdots , [s_{k+t}+1,s_{k+t+1}]\}$ are consecutive out-of-order blocks. 
For example, if $v_{[1,2]}<v_{[3,3]}<v_{[4,5]}>v_{[6,6]}>v_{[7,7]}<v_{[8,8]}$, then $\{[4,5], [6,6], [7,7]\}$ form consecutive out-of-order blocks. Sorry that the statement in lines 169-170 is unclear. We should point out that in $\theta_i(z_i)=\sigma_i l(z_i)+\frac{\rho}{2}(z_i-m_i)^2$, $\sigma_i$ is constant for the spectral risk loss or the ranked-range loss, but $\sigma_i$ is a function of $z_i$ in the human risk loss, as it depends on whether the value of $z_i$ is larger or smaller than the reference point $B$ (see eq. (4)). So in the cases where $\sigma_i$ is a constant, $\theta_i$ is a convex function, and the acceleration procedure via merging multiple consecutive out-of-order blocks is applicable. However, for human risk minimization, $\theta_i$ is not convex, and we do not adopt this acceleration. In line 6 of Algorithm 2, we do not assume "that the optimal solutions for are equal to each other." This is a step of the PAV algorithm. In this step, we solve the problem $\min_z \sum_{i=s_k}^{s_{k+t+1}}\theta_i(z)$, whose optimal solution is $v_{[s_k,s_{k+t+1}]}$. This step is called "merging the consecutive out-of-order blocks." When the $\theta_i$ are all convex functions, the PAV algorithm returns a global minimum, which is proved in [3]. For human risk minimization, the PAV algorithm finds a point that satisfies the first-order condition, which is proved in [Theorem 3, 4]. **Statement about assumptions** Sorry that the statement that Assumptions 4 and 5 are weaker than Assumption 2 may not be accurate. It is preferable to state that Assumptions 4 and 5 are more practical than Assumption 2, as they are easier to verify in practice. As mentioned in lines 226-234, the full row rank property of the data matrix is often assumed in the high-dimensional classification setting, and Assumption 5 is satisfied by weakly convex functions that are Lipschitz continuous. **Specific information on experiments** Sure. 
We will present the formulations of Section 5.2 in the manuscript. In Section 5.2, we used the logistic loss and an $\ell_2$-norm regularizer. In Section 5.3, we used both the logistic loss and the hinge loss with $\ell_2$-norm regularizers as in [5]. Detailed information on loss functions, regularizers, and other settings can be found in Appendix B.5. **Typos and minor issues** Thank you for pointing out our typos and for the suggestions in the minor issues. We will make the necessary corrections in the subsequent manuscript. Here we provide a detailed explanation of two points. For Minor issue 4, fairness means that our predictions remain consistent across different groups. For example, in the examples used in the experimental section, we examine whether there are differences in predictions between different races when predicting gender. The fairness metrics are detailed in Appendix B.5. For Minor issue 5, since we set $D=-\text{diag}(\boldsymbol{y})X$ in line 125, the expression $\boldsymbol{z}=D\boldsymbol{w}$ in line 127 is correct. **References** [1] Mehta, R., Roulet, V., Pillutla, K., Liu, L., & Harchaoui, Z. (2023, April). Stochastic Optimization for Spectral Risk Measures. In International Conference on Artificial Intelligence and Statistics (pp. 10112-10159). PMLR. [2] Hoheisel, T., Laborde, M., & Oberman, A. On proximal point-type algorithms for weakly convex functions and their connection to the backward Euler method. Optimization Online. [3] Best, M. J., Chakravarti, N., & Ubhaya, V. A. (2000). Minimizing separable convex functions subject to simple chain constraints. SIAM Journal on Optimization, 10(3), 658-672. [4] Cui, X., Jiang, R., Shi, Y., and Yan, Y. (2023). Decision making under cumulative prospect theory: An alternating direction method of multipliers. arXiv preprint arXiv:2210.02626. [5] Hu, S., Ying, Y., Lyu, S., et al. (2020). Learning by minimizing the sum of ranked range. Advances in Neural Information Processing Systems, 33:21013–21023.
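To make the block-merging step of the PAV algorithm discussed in this thread concrete, here is a minimal sketch specialized to quadratic per-coordinate objectives, where each merged block's minimizer is simply a weighted mean (the paper's $\theta_i$ additionally contains the loss term $\sigma_i l(z_i)$, so this is only an illustration of the pooling logic, not the paper's implementation):

```python
def pav_isotonic(y, w=None):
    """Pool-adjacent-violators for min sum_i w_i (z_i - y_i)^2
    subject to z_1 <= ... <= z_n. Each block's minimizer is its
    weighted mean; adjacent out-of-order blocks are merged until
    the block values are non-decreasing."""
    w = [1.0] * len(y) if w is None else list(w)
    blocks = []  # each block: [value, total weight, number of points]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge while the last two blocks violate the order constraint.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, n2 = blocks.pop()
            v1, w1, n1 = blocks.pop()
            blocks.append([(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2, n1 + n2])
    # Expand blocks back to a full-length solution.
    z = []
    for v, _, n in blocks:
        z.extend([v] * n)
    return z

print(pav_isotonic([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The single left-to-right pass with merging is what gives PAV its near-linear complexity per sweep; for non-quadratic convex $\theta_i$, the weighted-mean update would be replaced by a one-dimensional minimization over the merged block.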
Summary: This paper proposes a unified framework for rank-based loss minimization based on the ADMM algorithm. The paper proposes to apply a pool adjacent violators (PAV) algorithm to solve one of the subproblems of ADMM. Numerical experiments show that the proposed algorithm outperforms the existing ones. Strengths: + The problem of rank-based loss minimization is very important. + The proposed PAV algorithm looks interesting. + Convergence of the algorithm is theoretically analyzed. Weaknesses: - It seems that Eq.(1) assumes a linear model and it is unclear how to generalize to non-linear settings. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can the proposed method be applied to nonlinear models? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer Gyag for the feedback and comments! Eq. (1) indeed assumes a linear model. One reason for making this assumption is that currently, many studies or experimental parts related to rank-based loss predominantly concentrate on linear models [1-3]. So far, there has been limited research on ADMM with nonlinear constraints. However, as stated below, our method can be applied to non-linear models. **Applying the method to a non-linear model** The linear model in Eq. (1) can be replaced with a non-linear model. The rank-based loss in Eq. (1) can be rewritten as follows: \begin{equation} \Omega(\boldsymbol{w}):=\sum_{i=1}^{n} \sigma_i \boldsymbol{l}_{[i]}\left(- \boldsymbol{y} \odot (h(\boldsymbol{w}))\right) \end{equation} where $h(\boldsymbol{w}):\mathbb{R}^d\to\mathbb{R}^n$ is a differentiable non-linear function. Let us explain how to adapt the ADMM algorithm. Now the constraint in eq. (9) should be $\boldsymbol{z} = Dh(\boldsymbol{w})$, where $D = -\text{diag}(\boldsymbol{y})$. Information about the samples $X$ is encompassed in $h(\boldsymbol{w})$, but to maintain consistency in notation, we still retain $D$. The $z$-subproblem is the same as the linear model version. In order to keep our theory still valid, we need to make modifications in the following two aspects: **1. Solving $w$-subproblem** Due to the inclusion of the non-linear term $h(\boldsymbol{w})$, the $\boldsymbol{w}$-subproblem may no longer be strongly convex, leading to potential inability to exactly solve the $\boldsymbol{w}$-subproblem. To overcome this limitation, we assume $\boldsymbol{w}$-subproblem is solved inexactly such that $\text{dist}\left(0,\partial_{\boldsymbol{w}} L\_\rho\left(\boldsymbol{z}^{k+1},\boldsymbol{w}^{k+1},\boldsymbol{\lambda}^{k}\right)+r(\boldsymbol{w}^{k+1}-\boldsymbol{w}^{k})\right)\leq O(\epsilon)$, where $\epsilon$ is the accuracy in Theorems 1, 2. 
Moreover, we assume the solution to the $\boldsymbol{w}$-subproblem is an $\epsilon_k$ optimal solution ($\epsilon_k\geq 0$) and $\sum\_{k=1}^{\infty}\epsilon\_k<\infty$. We say $\hat{x}$ is an $\tilde{\epsilon}$ optimal solution for $\min_x f(x)$ if $f(\hat{x})\leq \min_x f(x) +\tilde{\epsilon}$. In this way, we can obtain the descent of $\boldsymbol{w}$-subproblem: $$ L_\rho(\boldsymbol{\boldsymbol{z}}^{k+1},{\boldsymbol{w}}^k,\boldsymbol{\lambda}^k)\geq L_\rho(\boldsymbol{\boldsymbol{z}}^{k+1},{\boldsymbol{w}}^{k+1},\boldsymbol{\lambda}^k) +\frac{r}{2}\\|{\boldsymbol{w}}^{k+1}-{\boldsymbol{w}}^k\\|^2-\epsilon_k. $$ **2. Convergence guarantee of our algorithm** We need the following additional assumptions to guarantee the convergence of the algorithm: - $h(\boldsymbol{w})$ and $\nabla h(\boldsymbol{w})$ are Lipschitz continuous in any sublevel sets. - $\boldsymbol{\lambda}^k \in Im(D\nabla h(\boldsymbol{w}^k))~ \forall k$. The first controls $\\|\boldsymbol{w}^{k+1}-\boldsymbol{w}^k\\|$ by $\\|h(\boldsymbol{w}^{k+1})-h(\boldsymbol{w}^{k})\\|$ or $\\|\nabla h(\boldsymbol{w}^{k+1})-\nabla h(\boldsymbol{w}^{k})\\|$. The latter is introduced to replace Assumption 4 in our paper. As a result, our main findings can be rewritten as follows: **Theorem 1 (modified)** Under the same assumptions and settings as in Theorem 1, along with the two additional assumptions mentioned above, Algorithm 1 can find an $\epsilon$-KKT point $(\boldsymbol{\boldsymbol{z}}^{k+1},{\boldsymbol{w}}^{k+1},\boldsymbol{\lambda}^{k+1})$ within $O(1/\epsilon^2)$ iterations, i.e., \begin{equation} \text{dist}\left(-\boldsymbol{\lambda}^{k+1},\partial\Omega\left(\boldsymbol{\boldsymbol{z}}^{k+1}\right)\right)\leq \epsilon,\quad \text{dist}\left(\left(D\nabla h(\boldsymbol{w}^{k+1})\right)^T{\boldsymbol{\lambda}}^{k+1},\partial g\left(\boldsymbol{w}^{k+1}\right)\right)\leq O(\epsilon),\quad \\|\boldsymbol{\boldsymbol{z}}^{k+1}-D{\boldsymbol{w}}^{k+1}\\|\leq \epsilon. 
\end{equation} **Theorem 2 (modified)** Under the same assumptions and settings as in Theorem 2, together with the two additional assumptions above (Assumption 4 is no longer required), Algorithm 1 finds an $\epsilon$-KKT point $(\boldsymbol{z}^{k+1}, {\tilde{\boldsymbol{w}}^{k+1}},\boldsymbol{\lambda}^{k+1})$ within $O(1/\epsilon^4)$ iterations, i.e., \begin{equation} \text{dist}\left(-\boldsymbol{\lambda}^{k+1},\partial\Omega\left(\boldsymbol{z}^{k+1}\right)\right)\leq \epsilon,\quad \text{dist}\left(\left(D\nabla h(\tilde{\boldsymbol{w}}^{k+1})\right)^T{\boldsymbol{\lambda}}^{k+1},\partial g\left(\tilde{\boldsymbol{w}}^{k+1}\right)\right) \leq O(\epsilon),\quad \\|\boldsymbol{z}^{k+1}-D\tilde {\boldsymbol{w}}^{k+1}\\|\leq \epsilon. \end{equation} **The potential practical limitations of the proposed method** For our algorithm to be effective, the individual loss must be monotone; this property enables the application of the PAV algorithm for solving the $z$-subproblems. Loss functions satisfying this criterion include the logistic, hinge, and exponential losses. On the other hand, for substantially large sample sizes the PAV algorithm can become a computational bottleneck, which may constrain the efficiency of the overall algorithm. We will point out these limitations in our paper. **References** [1] Mehta, R., Roulet, V., Pillutla, K., Liu, L., & Harchaoui, Z. (2023). Stochastic Optimization for Spectral Risk Measures. In International Conference on Artificial Intelligence and Statistics (pp. 10112-10159). PMLR. [2] Hu, S., Ying, Y., & Lyu, S. (2020). Learning by minimizing the sum of ranked range. Advances in Neural Information Processing Systems, 33, 21013-21023. [3] Leqi, L., Prasad, A., & Ravikumar, P. K. (2019). On human-aligned risk minimization.
Advances in Neural Information Processing Systems, 32. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for the response. The nonlinear extension looks interesting, and with it incorporated the paper overall looks solid to me. I am willing to raise the rating to 6.
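For readers unfamiliar with the PAV routine the rebuttal relies on for the $z$-subproblems: pool-adjacent-violators projects a sequence onto monotone (here, non-decreasing) sequences by repeatedly merging adjacent out-of-order blocks. The sketch below is the generic weighted isotonic least-squares variant, purely illustrative and not the authors' implementation:

```python
def pav(y, w=None):
    """Pool Adjacent Violators: project y onto non-decreasing sequences
    under a weighted least-squares objective."""
    w = [1.0] * len(y) if w is None else list(w)
    blocks = []  # each block: [mean, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge while the previous block's mean exceeds the current one's
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out
```

Each input element is appended as its own block and violating neighbors are pooled into their weighted mean, so the whole pass runs in linear amortized time, which is why PAV scales to moderately large $n$ before becoming the bottleneck the rebuttal mentions.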
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their insightful and constructive suggestions, which helped to significantly improve our paper! **Additional experiments regarding Figure 1** To explain the phenomenon observed in Figure 1(h), we conducted experiments with increased sample sizes. The corresponding results are included in the PDF file. Pdf: /pdf/520fe9e832914042a289417e555c2a6f147bfa8d.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees
Accept (poster)
Summary: This paper proposes a verifiable RL framework that learns a composition of NN policies for stochastic control systems, along with a formal supermartingale certificate for the probability of satisfying a reach-avoid specification. It decomposes the global reach-avoid task into a DAG whose edges denote subtasks, each solved by a policy+RASM pair. Finally, it composes the low-level subtask policies into a global safe policy with probability guarantees. The authors evaluate this framework in a relatively simple Stochastic Nine Rooms environment. Strengths: This paper is well written and easy to follow. The authors did a good job on the paper presentation. They study an important safe RL problem with a step-wise safety chance constraint, which is suitable for safety-critical systems. The proposed compositional safe RL framework is novel to me. The proposed algorithm and approach are sound. Control policy learning with formal guarantees in probability is significant. Weaknesses: 1. My biggest concern is the scalability of this approach, which might originate from the DAG representation. If the global RL task is too complex, the graph might be too large to handle. 2. Because of the scalability problem, the experiments in this paper look much simpler than in other RL papers. 3. The authors may have missed a few recent papers addressing a similar safe RL problem with chance constraints, as in a) Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments; as far as I can tell, this reference also considers continuous control problems with safety probability chance constraints, and its guarantees are also based on the supermartingale property, although the approaches are quite different. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What are the relations between (either additive or multiplicative) RASMs and barrier certificates for stochastic (either continuous or discrete) systems?
The definition of the barrier certificate can be found in the following references. b) Stochastic safety verification using barrier certificates. c) A Barrier Function Approach to Finite-Time Stochastic System Verification and Control. 2. What is the complexity of the proposed algorithm? It should be feasible to conduct a big-O complexity analysis, as the algorithm is built on top of topological sorting, binary search, etc. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: This paper may face a scalability problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. In what follows, we answer the two questions raised by the reviewer. **Relation between RASMs and stochastic barrier functions.** On the high level, the main difference between RASMs and stochastic barrier functions is that RASMs consider reach-avoid specifications, whereas stochastic barrier functions only consider safety specifications. However, if we are only interested in safety, additive and multiplicative RASMs reduce to stochastic barrier functions by letting $X_t = \emptyset$ and $\epsilon = 0$ in Definition 2 for additive RASMs, and $\gamma = 1$ in Definition 3 for multiplicative RASMs. Here, we are referring to discrete-time stochastic barrier functions defined in Prajna et al. “Stochastic safety verification using barrier certificates”, CDC 2004. This was discussed in Zikelic et al. “Learning Control Policies for Stochastic Systems with Reach-Avoid Guarantees”, AAAI 2023, who introduced additive RASMs. Our multiplicative RASMs in Definition 3 syntactically resemble exponential stochastic barrier functions defined in Santoyo et al. “A Barrier Function Approach to Finite-Time Stochastic System Verification and Control”, Automatica 2021. Exponential stochastic barrier functions also impose a multiplicative expected decrease condition. However, they consider finite time horizon systems, where the time horizon N is given and known a priori, and show that exponential barrier functions provide bounds on safety probability which are tighter by a factor which is exponential in time horizon N . However, as $N \rightarrow \infty$, their bound reduces to the bound of Prajna et al. In contrast, in our Theorem 2 we show that our multiplicative RASMs provide tighter bounds on safety (or more generally, reach-avoid) probability even in unbounded (i.e. indefinite) or infinite time horizon systems. 
**Computational complexity.** The worst-case complexity of our algorithm is $\mathcal{O}(|E| \cdot RA)$, where E is the number of edges in the abstract graph and RA is an upper bound on the computational complexity of learning and verifying a reach-avoid policy. This is because both the topological sort and the forward pass on the abstract graph can be done in time which is linear in the number of edges. The computational complexity of RA is $\mathcal{O}(I \cdot (L + V))$, where \ $I$ = bound on the number of learner-verifier iterations, \ $L$ = complexity of the learner, and \ $V$ = complexity of the verifier. \ In general, the learner-verifier procedure is not guaranteed to converge (in which case our algorithm does not output a policy), so we introduce a parameter $I$ which is the maximal number of learner-verifier loop iterations per edge. Given that we consider learning over continuous state spaces and given that our loss function is not convex, we cannot bound $L$ and also need to introduce a timeout parameter. Finally, we have $V = \mathcal{O}((D / \tau)^n \cdot N)$ where $D$ is the diameter of the state space, $\tau$ is the mesh of the discretization used by the verifier, $n$ is the dimension of the state space, and $N$ is the number of neurons in the policy and RASM neural networks. This is because, for each cell in the discretization grid, interval-arithmetic abstract interpretation can verify all RASM defining conditions in time linear in the size of the networks. On the other hand, in the worst case there are $\mathcal{O}((D / \tau)^n)$ cells in the discretization grid. This results in a final bound on computational complexity $\mathcal{O}(|E| \cdot I \cdot (L + (D / \tau)^n \cdot N))$ with the notation above. We will incorporate the above discussions into the final version of the paper. We also thank the reviewer for pointing out the recent ICML 2023 paper. We will discuss the comparison to our work in the final version. 
In particular, this paper considers a model-free setting and jointly learns a policy and a barrier function-like certificate, however it does not provide guarantees on correctness of the learned certificate. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. They have sufficiently addressed my questions. I will keep my score for now due to the scalability weakness.
Summary: This paper introduces CLAPS (Compositional Learning for Probabilistic Specifications), a new method for learning a composition of neural network policies in stochastic environments, together with a formal certificate which guarantees that a reach-avoid specification over the policy's behavior is satisfied with the desired probability. The proposed approach is evaluated empirically on a stochastic Nine Rooms environment. Strengths: 1) Different from previous works discussed by the authors, the CLAPS method is applicable to stochastic control systems that may be defined via non-linear dynamics functions. 2) The approach is compositional. Complex tasks are analysed and solved in terms of simple tasks, which are then combined in order to achieve the overall goal. 3) The literature is discussed in detail. The bibliography, at 61 items, is an excellent overview of current methods in safe RL. Weaknesses: 1) A lot of material appears in the appendix. I did not check it, but the paper appears to be consistent. 2) This work builds on top of [61] as regards the use of supermartingales. However, the bounds obtained are stricter, which seems to justify the claims to novelty (together with the compositional approach). Minor, p. 3: When discussing the equation for the stochastic feedback loop system, u_t is described as the control action even though no u_t appears in the equation. On the other hand, the function \pi is not described. I understand that \pi: X -> U is the policy and \pi(x_t) = u_t. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: p. 4: When the authors say "if the probability of a random trajectory...", do they actually mean "if the probability of any trajectory..." Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: p. 9: The authors mention that "the systematic decomposition used in our algorithm has advantages over manual task decompositions". They might discuss these advantages further. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. In what follows, we answer the question raised by the reviewer. Indeed, on p. 4 we mean “if the probability of any trajectory”. This sentence refers to trajectories sampled from the probability space of all trajectories starting in initial state $x_0$, where the probability space is defined by the Markov chain semantics of the system under a control policy. We will clarify this in the final version of the paper. We will also address the minor remark on p. 3 and write $\pi: X \rightarrow U$ and $u_t = \pi(x_t)$ when defining the dynamics. We will also further discuss the advantages of systematic decomposition on p. 9. --- Rebuttal Comment 1.1: Comment: I am happy with the rebuttal provided by the authors. Thanks!
Summary: This paper introduces CLAPS, a compositional method designed for learning and verifying neural network policies in stochastic control systems. By considering control tasks with specifications expressed in the SPECTRL language, CLAPS decomposes the task into an abstract graph of reach-avoid tasks. It utilizes reach-avoid supermartingales to offer formal guarantees on the probability of reach-avoidance in each subtask. Additionally, the paper establishes proof demonstrating that RASMs (Reach-Avoid Supermartingales) provide a significantly more stringent lower bound on the probability of reach-avoidance compared to prior approaches. The experimental evaluation conducted in the Stochastic Nine Rooms environment showcases the ability of CLAPS to derive guarantees for global compositional policies. Strengths: + Compositional Learning is an important research direction. The approach presented in this paper provides correctness guarantees for individual sub-policies that can be used to collectively ensure the correctness of the global policy, making the learning and verification approach valuable for applications where safety and reliability are critical. The ability to verify and validate each component of the policy offers a robust and trustworthy framework for developing complex control systems. + Taking inspiration from exponential barrier certificates developed in the control theory community, the paper introduces a conceptually similar concept to improve Reach-Avoid Supermartingales, which provides a more strict lower bound on the probability of reach-avoidance guarantees (compared with prior work [61]). + The main algorithm is easy to understand, follow, and implement. Weaknesses: The proposed approach falls short of advancing the state-of-the-art in compositional learning and verification. 
While Reach-Avoid Supermartingales in prior work make sense for providing probabilistic correctness guarantees for infinite time horizon systems, the expectation for a global policy composed of sub-policies is that each sub-policy terminates within a finite time horizon. Simpler techniques such as statistical verification, which relies on drawing a large number of samples and employing concentration inequalities, can achieve high-probability correctness guarantees for finite horizons. Consequently, the paper lacks a compelling argument showcasing why their probabilistic verification approach surpasses a straightforward statistical verification approach for compositional policies. Strengthening the paper would involve demonstrating that the probabilistic verification approach indeed outperforms statistical verification in practice. In the specific context of the Stochastic Nine Rooms environment examined in this paper, it seems that the robot can reach the goal within a finite time horizon. Given the convergence of PPO sub-policies, it is reasonable to anticipate that statistical verification methods can yield substantially higher probabilistic guarantees compared to the reported result of 33% in the paper's probabilistic verification approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Can the proposed Reach-Avoid Supermartingales (RASMs) approach be formally characterized in relation to exponential barrier certificates? + Have you considered applying Statistical Verification of Learning-Based Cyber-Physical Systems (https://cpsl.pratt.duke.edu/sites/cpsl.pratt.duke.edu/files/docs/zarei_hscc20.pdf) to the Nine Rooms environments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper would benefit from a more comprehensive discussion and extensive experimentation regarding the advantages of the probabilistic verification approach over statistical verification within the context of compositional policy learning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. In what follows, we answer the two questions raised by the reviewer. **Comparison of RASMs and exponential barrier functions.** Stochastic barrier functions were introduced for proving probabilistic safety in stochastic dynamical systems, i.e. without the additional reachability condition of reach-avoid specs (Prajna et al. “Stochastic Safety Verification Using Barrier Certificates”, CDC 2004). If we are only interested in probabilistic safety, RASMs reduce to stochastic barrier functions by setting $X_t = \emptyset$ together with $\epsilon = 0$ in Definition 2 for additive RASMs, and $\gamma = 1$ in Definition 3 for multiplicative RASMs. This was discussed in Zikelic et al. “Learning Control Policies for Stochastic Systems with Reach-Avoid Guarantees”, AAAI 2023, who introduced additive RASMs. To the best of our knowledge, exponential barrier functions for stochastic dynamical systems have been considered only for finite time horizon systems in which the time horizon N is fixed and known a priori (Santoyo et al. “A Barrier Function Approach to Finite-Time Stochastic System Verification and Control”, Automatica 2021). Exponential barrier functions also impose a multiplicative expected decrease condition, similar to our multiplicative RASMs. This allows them to provide tighter bounds on safety probability compared to classical stochastic barrier functions, which are tighter by a factor which is exponential in the time horizon N (Theorem 2 in Santoyo et al.). However, as $N \rightarrow \infty$, their bound reduces to the bound of Prajna et al. In contrast, Theorem 2 in our paper shows that our multiplicative RASMs provide tighter bounds on safety (or more generally, reach-avoid) probability even in unbounded (i.e. indefinite) or infinite time horizon systems. 
**Comparison of CLAPS and statistical verification.** We clarify that, while satisfaction of a reach-avoid specification (and, more generally, any SpectRL specification) can be witnessed by a finite trace, one needs infinite traces to witness that a reach-avoid specification is not satisfied. For instance, in the Nine Rooms environment, one can design a policy under which a trace remains stuck in one room indefinitely without violating safety constraints and without reaching the target room. However, one cannot witness this by sampling finite traces, since any finite trace can also be extended to an infinite trace that eventually leaves the room. One way to overcome this limitation would be to sample traces of some fixed length and then treat all longer traces as either satisfying or violating the specification. However, such an approach would either be overestimating or underestimating the probability of a trace satisfying the specification and would not provide statistical guarantees. This means that statistical methods are applicable to and effective in finite time horizon systems, but they are not applicable to our setting. Our algorithm (CLAPS) provides formal guarantees even for unbounded (i.e. indefinite) or infinite time horizon systems. This is because the guarantees provided by RASMs do not impose any restrictions on the time horizon. We will incorporate the above discussions into the final version of the paper, as we believe they will strengthen the paper. --- Rebuttal Comment 1.1: Title: Still confused about the experiment settings Comment: I thank the authors' response. However, I am still confused about the experiment settings and the argument made in the rebuttal. In this paper > Our method learns a policy along with a formal certificate which guarantees that a specification is satisfied with the desired probability. 
In the rebuttal > We clarify that, while satisfaction of a reach-avoid specification (and, more generally, any SpectRL specification) can be witnessed by a finite trace, one needs infinite traces to witness that a reach-avoid specification is not satisfied. Does your argument retain its validity if I'm only concerned about verifying whether a specification is **satisfied** with a certain desired probability? I concur with your assessment that your work has strength in providing probabilistic correctness assurances for systems with infinite time horizons, such as verifying the balance of a pendulum. However, in the context of a global policy composed of sub-policies, it's expected that each sub-policy concludes within a finite time horizon. Expecting a sub-policy to run infinitely seems unreasonable. Upon reaching a sub-goal, why not transition to the subsequent sub-policy for the next sub-goal? What's the rationale behind verifying a sub-policy that **satisfies** its specification with the desired probability over an infinite time horizon? While statistical methods are only suited for and effective in finite time horizon systems, they seamlessly align with your scenario if your objective is to ensure that a specification is **satisfied** with the desired probability, and each sub-policy is anticipated to reach a known sub-goal. Even if traces longer than a certain threshold are treated as violating a specific reach-avoid specification, statistical verification methods could conceivably provide significantly higher probabilistic assurances than the reported 33% result. In this context, why can this experimental outcome be used to validate the effectiveness of your approach? --- Reply to Comment 1.1.1: Title: Response Comment: We thank the reviewer for the response. The key assumption required to use statistical verification methods is that a finite threshold is known a-priori.
As mentioned in the scenario, it is conceivable that this shortcoming can be side-stepped by stipulating that 'traces longer than a certain threshold are treated as violating a specific reach-avoid specification'. However, in applications where we cannot make an accurate guess for such a threshold, we would have no way to distinguish between bad (low satisfaction probability) sub-policies and a bad threshold guess. For such situations it is useful to have methods that provide formal guarantees without making an assumption about the finite threshold. As a consequence of this assumption, CLAPS should be experimentally compared to other formal guarantee methods, where we provide better results (in particular the comparison of multiplicative and additive RASMs in Table 1). Additionally, the satisfaction bound for CLAPS (Theorem 2) does not depend on the number of environment interactions, which further highlights its applicability when the satisfaction threshold might be arbitrarily long. We agree that an additional discussion of the benefits and drawbacks compared to statistical methods should be included in the paper, and we will add this to the final version, including a clear statement that statistical methods are likely to provide higher probabilistic assurances when the threshold is guessed correctly but may fail if the threshold is under-approximated. Our view is that statistical and formal methods should be viewed as complementary: statistical methods give an estimate of the satisfaction of the reach-avoid spec, whereas we provide a lower bound.
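For concreteness about the trade-off debated above: the finite-horizon statistical approach amounts to Monte Carlo estimation with a concentration bound, where traces exceeding a guessed horizon are conservatively counted as violations. The sketch below (toy 1-D dynamics and all names are ours, purely illustrative) uses a one-sided Hoeffding bound:

```python
import math
import random

def mc_reach_avoid_bound(simulate, n_traces, horizon, delta=0.05):
    """Return a (1 - delta)-confidence lower bound on the probability that a
    trace reaches the goal within `horizon` steps while staying safe.
    Traces that fail to reach the goal by `horizon` count as failures,
    i.e., the under-approximation discussed in the thread above."""
    successes = sum(simulate(horizon) for _ in range(n_traces))
    p_hat = successes / n_traces
    # one-sided Hoeffding: P(p < p_hat - eps) <= exp(-2 n eps^2) = delta
    eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n_traces))
    return max(0.0, p_hat - eps)

def toy_walk(horizon, goal=5, unsafe=-3, rng=random):
    """Toy 1-D random walk: success iff it hits `goal` before `unsafe`."""
    x = 0
    for _ in range(horizon):
        x += rng.choice((-1, 1))
        if x <= unsafe:
            return 0
        if x >= goal:
            return 1
    return 0  # horizon exhausted: treated as a violation
```

This makes the rebuttal's point tangible: the returned bound is valid only under the guessed horizon, and an under-approximated horizon silently deflates it, whereas the RASM bound is horizon-free.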
Summary: The paper presents CLAPS, a compositional RL algorithm that also ensures correctness guarantees when learning from temporal specifications. The core contribution of this work revolves around guarantees. Prior works in compositional RL with guarantees apply to deterministic environments and/or ones with linear dynamics. This work presents an algorithm that is suitable for stochastic environments with non-linear dynamics. The algorithm builds on two prior works, namely [61], which learns policies for control systems with non-linear dynamics with reach-avoid guarantees, and [30], which compositionally learns policies (without guarantees) from temporal specifications. An empirical evaluation on the 9-rooms environment over a suite of source and target states shows the efficacy of the approach in learning long-horizon tasks with guarantees. Strengths: RL from temporal logic to learn long-horizon tasks is a promising and thriving research community. This paper makes a strong contribution in that space by learning policies with guarantees in stochastic systems with non-linear dynamics. Weaknesses: The empirical evaluation can be strengthened. Please find concrete comments below: 1. The evaluations have been presented for one environment only. The 9-room environment is somewhat low-dimensional. I am curious about how the approach may scale to higher dimensional environments. 2. Is there a comparison between the sample complexity of CLAPS and prior compositional approaches such as DiRL? This could offer a study of the tradeoffs for guarantees. 3. Would it be possible to compare CLAPS to [27] on the benchmarks that would be common to both? It would be interesting to see how the formal guarantees of both approaches compare. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I am quite unclear as to how the guarantee extends compositionally.
From my understanding, a critical challenge in obtaining guarantees from compositional approaches such as [27] is that one needs to ensure the guarantee at the transition point between two edges, i.e., in the region where one policy ends and the other begins. Could you clarify how that is accounted for in the proof of Theorem 5? The example I have in mind is as follows: Say one of the subgoal regions is split by a wall, such that a learned policy entering the subgoal region ends before the wall while the policy exiting the subgoal region exits from the other side of the wall. The probability of connecting these two policies is clearly not 1. How is the guarantee in CLAPS accounting for such scenarios? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See above. I can change my score based on the clarification of the question above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. In what follows, we answer the questions raised by the reviewer. We clarify that Theorem 5 in the Appendix only states that a trajectory satisfies a SpectRL specification if and only if it satisfies abstract reachability in the abstract graph associated with the specification. Therefore, the probability of a SpectRL specification being satisfied is equal to the probability of abstract reachability being satisfied. However, Theorem 5 does not consider composition of reach-avoid guarantees associated to two edges. The composition guarantees are obtained by our composition of edge policies described in l.321-343 and proved correct in Theorem 3. For each edge $(v_1,v_2)$ in the abstract graph, CLAPS synthesises an edge policy $\pi_{v_1,v_2}$ together with a lower bound $p_{v_1,v_2}$ on the probability of satisfying the reach-avoid specification associated to the edge (line 10 in Algorithm 1). The lower bound $p_{v_1,v_2}$ is the worst-case lower bound for *any initial state* in the region associated with the source vertex $v_1$. Hence, when composing guarantees of edges $(v_0, v_1)$ and $(v_1, v_2)$, CLAPS considers the worst-case state in the region associated to $v_1$ in which the agent may end up upon executing the policy associated to $(v_0,v_1)$, before moving on to the policy associated to $(v_1,v_2)$. Hence, the lower bound computed by CLAPS on the probability of satisfying the specification obtained by composing edges is $p_{v_0,v_1} \cdot p_{v_1,v_2}$. Our composition described in l.321-343 and proved correct in Theorem 3 formalises this reasoning and ensures that the guarantees obtained by composing edge policies are indeed correct. We will clarify this further in the final version of the paper. 
In the example described by the reviewer, this means that CLAPS would compute (1) a lower bound $p_{v_0,v_1}$ on the probability of satisfying the reach-avoid specification of the first edge, and (2) a lower bound $p_{v_1,v_2}$ on the probability of satisfying the reach-avoid specification of the second edge (which would be attained in a state before the wall), and would conclude a lower bound of $p_{v_0,v_1} \cdot p_{v_1,v_2}$ on the probability of satisfying the composition of the two reach-avoid specifications.
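The compositional bookkeeping the rebuttal describes (worst-case per-edge lower bounds multiplied along a path, with a forward pass over the topologically sorted abstract graph) can be sketched as follows; the graph encoding and function name are hypothetical, not the authors' code:

```python
from collections import defaultdict

def best_reach_avoid_bound(edges, source, target):
    """Forward pass over a DAG. `edges` maps (u, v) -> lower bound p_{u,v}
    on that edge's reach-avoid probability. Returns the best lower bound
    over all source->target paths, composing bounds multiplicatively."""
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for (u, v), p in edges.items():
        adj[u].append((v, p))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    best = {n: 0.0 for n in nodes}
    best[source] = 1.0
    for u in order:
        for v, p in adj[u]:
            best[v] = max(best[v], best[u] * p)
    return best[target]
```

Because each $p_{u,v}$ is a worst-case bound over the whole source region of the edge, the product along a path is itself a sound lower bound regardless of which state the previous policy hands over, which is the key point of the rebuttal.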
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Text Promptable Surgical Instrument Segmentation with Vision-Language Models
Accept (poster)
Summary: This manuscript presents a CLIP-assisted semantic image segmentation method for surgical instruments. In terms of methodology, the proposed work can be viewed as an adaptation of CLIPSeg to a domain-specific problem. Compared with CLIPSeg, a mixture-of-prompts strategy is used to augment the text prompt information. Hard sample mining is also employed to further improve segmentation performance, and the image segmentation network architecture has been optimized. The proposed method is evaluated on two public surgical instrument segmentation datasets and is shown to outperform some fully-supervised and CLIP-assisted methods. **Commented after rebuttal**: I would like to thank the authors, other reviewers, and the ACs. I have carefully read through the rebuttals and the comments from other reviewers. The discussion with the authors was constructive. Most of my raised concerns have been properly addressed. Strengths: The proposed method yields improved performance on two surgical instrument segmentation datasets compared with some existing fully-supervised/CLIP-assisted methods. Employing text prompts brings additional flexibility in terms of the categories of segmentation targets; this is a major advantage of CLIP-assisted segmentation over conventional fully-supervised segmentation. The proposed method is introduced in sufficient detail and is easy to follow. Weaknesses: The take-home information for readers may be limited/unclear. Although the domain-specific adaptations on top of vanilla CLIPSeg lead to improved performance on surgical instrument segmentation, these contributions themselves may not be of sufficient interest to readers: hard sample mining, feature pyramids, and generating multiple text prompts are already common practice and should be well-known in the community. The authors are encouraged to highlight their key take-home information for readers, and to argue how the proposed work brings new knowledge.
In the abstract and introduction, the authors highlight the improved flexibility of CLIP-assisted segmentation models when dealing with new segmentation targets. However, this point is not sufficiently evaluated and discussed in the experiments: how do CLIPSeg and CRIS, which are also CLIP-driven, work in this context? The authors may also want to discuss/comment on a closely-related existing work [1]. 1. CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection Technical Quality: 3 good Clarity: 3 good Questions for Authors: Judging by the ablation study (Tables 3-6), the contributions of individual components, except for the text encoder, seem to be quite marginal (the ablated models already seem quite strong). Therefore, from the authors' point of view, which factor accounts for most of the improved performance over CLIPSeg and CRIS? Or are there other components that make a difference? Does the proposed method employ a stronger segmentation backbone or a better training technique? The authors are encouraged to provide more details about the compared methods to avoid confusion. From the authors' perspective, what is the core take-home information for readers? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Automated instrument image segmentation may affect the outcome of a surgery, which may have real-world impact on clinicians and patients. The authors are encouraged to discuss the potential impact of their work on patients, clinicians, device manufacturers, and society. The authors are also encouraged to discuss whether the proposed method exacerbates/mitigates potential risks and biases in image segmentation. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### [fFdZ-W1, Q2] Take-home information for readers. To our knowledge, we are the first to introduce a text-promptable method for surgical instrument segmentation. Through problem-driven thinking, we proposed the Mixture of Prompts (MoP) and Hard Instrument Area Reinforcement (HIAR) modules tailored to specific challenges in this field. MoP addresses instrument similarity by integrating detailed text descriptions, hence enhancing the classification among instruments. HIAR improves segmentation in areas where instruments and tissues often overlap. Both MoP and HIAR are inspired by advances in computer vision segmentation, but the concrete ideas of mixing the strength of multiple prompts and reinforcing hard regions in a masked auto-encoder are actually novel in the computer vision literature. Finally, while designed for this specific task, we believe our method's applicability also extends to other segmentation tasks, underscoring its efficacy. ### [fFdZ-W2] The performance of CRIS and CLIPSeg in cross-dataset validation. For a comprehensive analysis, we have added comparisons against CRIS and CLIPSeg under cross-dataset validation. In the following tables, it can be seen that CRIS and CLIPSeg exhibit a significant drop in performance in cross-dataset validation compared to their corresponding counterparts and ours, indicating weaker generalization capabilities for both methods. 
Cross-dataset validation for CRIS and CLIPSeg on EndoVis2017:

Method | Ch_IoU | ISI_IoU | mc_IoU
------------------|--------|---------|--------
CRIS | 69.94 | 67.83 | 38.21
Cross-CRIS | 61.33 | 59.87 | 31.52
CLIPSeg | 70.15 | 65.02 | 33.42
Cross-CLIPSeg | 60.73 | 57.26 | 28.71
Ours | 79.90 | 77.83 | 56.22
Cross-Ours | 72.18 | 70.44 | 49.09

Cross-dataset validation for CRIS and CLIPSeg on EndoVis2018:

Method | Ch_IoU | ISI_IoU | mc_IoU
------------------|--------|---------|--------
CRIS | 74.10 | 72.29 | 46.04
Cross-CRIS | 53.91 | 52.18 | 29.46
CLIPSeg | 74.95 | 69.86 | 39.70
Cross-CLIPSeg | 51.35 | 50.77 | 27.62
Ours | 84.92 | 83.61 | 65.44
Cross-Ours | 66.25 | 64.92 | 37.34

### [fFdZ-W3] Discuss the Universal Model. Thank you for pointing out the "Universal Model" paper. Our work primarily focuses on surgical instrument segmentation, while the Universal Model targets organ segmentation and tumor detection, so the two differ in research domain. This is why it was not initially cited or discussed in our manuscript. However, consistent with our approach, that paper also leverages the CLIP vision-language model to enhance the performance of medical image segmentation models. We will cite this work in our paper and discuss it in the appropriate sections. ### [fFdZ-Q1] More explanation for the ablation study. In Tabs. 3-6 of the paper, we present ablation study results. Starting from our final model, we remove each single module to observe performance changes. Notably, without multi-scale feature augmentation (MSFA), there is a 2.4% performance drop; without Mixture of Prompts (MoP), the drop is 2.1%; and without Hard Instrument Area Reinforcement (HIAR), it is 2.5%. These drops are not marginal but significant when compared with the performance gap between the previous two SOTA methods (S3Net [5] and MATIS [3] in Tab. 1 of the paper). Moreover, our modules are complementary: omitting both MoP and HIAR leads to a 3.9% drop. 
Removing MSFA, MoP, and HIAR altogether results in a 5.2% decrease; this baseline is still 2.6% higher than CRIS (Ch_IoU result in Tab. 1 of our paper). This is because our baseline benefits from data augmentation strategies (random crop, horizontal flip, random rotate) and employs a ViT-based CLIP model trained on Laion2B. We will elaborate on these details in the revised paper to avoid ambiguity. ### [fFdZ-L1] The potential impact on patients, clinicians, device manufacturers, and society. Enhanced automated surgical instrument segmentation offers manifold benefits to patients, surgeons, manufacturers, and society. Precise instrument tracking boosts surgical safety and precision, mitigating unintended tissue damage. Such automation eases surgeons' tasks, letting them concentrate on intricate procedures, and augments training for novices. Manufacturers integrating this technology could produce smarter surgical tools, aligning with AI-driven healthcare innovations to elevate patient care. ### [fFdZ-L2] Whether the proposed method exacerbates/mitigates potential risks and biases in image segmentation. Model bias due to non-representative distributions of gender or race in the training data does not apply to instrument segmentation datasets. Nevertheless, there could still be a human-driven bias in surgical instrument segmentation training data due to surgeons' preference for specific surgical instruments in certain scenarios over other tools. This bias can be mitigated by our text-promptable approach, which allows for text prompts aimed at these scenarios (e.g., providing specific surgical scenario/function descriptions for certain surgical instruments). A full study of the risks and biases of our method is beyond this paper's current focus; hence we leave further investigation to future work. --- Rebuttal Comment 1.1: Comment: Hi authors, Thank you so much for the clarification. 
Most of my concerns are properly addressed, and I am very pleased to see the proposed method yielding improved performance by large margins over existing prompt-driven methods in both IID and OOD settings. I am also pleased to see the in-depth analysis of the potential impacts and biases, which comprises responsible research and innovation. Just two minor comments: 1. I am still somewhat unclear about the core take-home information. Despite the authors' claim to be the first to apply prompt-driven segmentation to surgical instrument segmentation, unless the drastic differences between segmenting surgical instruments and segmenting radiological/RGB images are adequately explained, I would not be 100% convinced that this by itself comprises a major contribution in the context of NeurIPS. 2. Still, the improved segmentation performance is attributable to a synergy among multiple components, instead of the prompt-driven mechanism alone. The authors are expected to make this point very clear to readers, and emphasize/argue a) why a synergy among these components is crucial for the targeted task; b) why the proposed synergy is applicable to other domains that are relevant to the readers of NeurIPS (e.g., segmenting RGB and radiological images). --- Reply to Comment 1.1.1: Comment: Thank you very much for acknowledging our rebuttal and paper. Below, we provide answers to your newly raised comments: ## Q1 Thanks for your question. Below we summarize the challenges in surgical images: - In surgical images, instruments commonly exhibit significant similarity, necessitating a model that can distinguish their subtle differences for accurate segmentation. - As surgeries progress, surgical instruments might cut, suture, or otherwise manipulate tissues, altering their shapes. This can lead to tissues obscuring the instruments and potentially making some parts of the tissues even resemble the instruments. 
- In surgical settings, the use of diminutive endoscopic devices with small lenses inherently constrains imaging quality due to hardware limitations. - The areas into which the endoscope is inserted, like the gastrointestinal tract, are continuously moving, which adds complexity to the segmentation of surgical instruments. Given the above, on the one hand, compared with objects in natural RGB scenes, which often have distinct and rigid boundaries, the continuous morphological changes of instruments and tissues during surgery make instrument segmentation easily affected by tissue occlusions, variations in illumination, etc. On the other hand, radiological images pose different segmentation challenges: the presence of various imaging modalities, intrinsic noise, limited contrast, and the potential for artefacts collectively complicate precise segmentation. Overall, we firmly believe that research on surgical instrument segmentation is both challenging and meaningful. Our method has achieved significant improvements over text-promptable methods (e.g., CRIS, CLIPSeg) developed on natural images. We follow a problem-driven approach: in response to the aforementioned challenges in surgical images, we introduce the mixture of prompts module to address instrument similarity by integrating detailed text descriptions; moreover, the hard instrument area reinforcement module further amplifies the model's precise segmentation performance, especially in tricky regions where distinctions between instruments and tissues become blurred. Finally, although our method is designed for surgical instrument segmentation, it should also have merit in natural image segmentation, especially when encountering challenges similar to those in surgical contexts, for instance fine-grained segmentation or heavy occlusion. 
Therefore, our method holds potential for broader applications in computer vision tasks, and NeurIPS would be an excellent venue to showcase our work. ## Q2 Thanks for this question. First, we emphasize that our method is not merely a synergy of a few components. The text-promptable pipeline itself contains inherent novelties rather than being a simple adaptation of existing approaches. For instance, we devise a multi-scale fusion scheme for the image encoder, and the mask decoder integrates both attention-based and convolution-based prompting schemes so that text features can guide visual features in segmentation prediction (see Tab. 3 & 4 in our paper for their improvements). Building upon the text-promptable method, we introduce the mixture of prompts, substantially enhancing model performance, especially in the prediction of novel instruments (see Tab. 5 in our paper). Additionally, to overcome classification inaccuracies during segmentation, we incorporate the hard instrument area reinforcement module (see Tab. 6 in our paper). All these modules, as mentioned in the above answer, follow a problem-driven paradigm in the surgical instrument segmentation domain. Next, although our modules have been primarily validated on surgical instrument segmentation, we believe they can offer insights or potential benefits for segmentation tasks in other domains (e.g., RGB and radiological images), especially when facing challenges akin to those found in surgical images. For example, in tasks such as fine-grained segmentation of natural images, our mixture of prompts module could be effectively employed, and for scenarios with heavy occlusions, our hard instrument area reinforcement module could be particularly well suited. Our approach exhibits strong adaptability, as demonstrated on the CholecSeg8k dataset, where it segments not only instruments but also various tissues. 
Experimentally, our method surpasses the current SOTA in performance (see results in the global response). In summary, our method was borne out of a problem-driven necessity. To address these challenges, we introduce various modules which, when synergized, significantly enhance performance. We believe our approach harbors immense potential for broader applications within the computer vision domain, especially for methods centered on text prompts.
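As a generic aside on the prompt-mixing idea discussed in this thread: a common CLIP-style way to combine several text prompts for one class (distinct from the paper's learned Mixture of Prompts module) is to average their normalized embeddings before matching against image features. The sketch below uses random vectors as stand-ins for real CLIP embeddings:

```python
import numpy as np

def ensemble_prompts(text_embs):
    """Average L2-normalized prompt embeddings, then renormalize.

    A generic CLIP-style prompt ensemble -- an illustrative stand-in,
    not the paper's learned Mixture of Prompts module.
    """
    e = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    mean = e.mean(axis=0)
    return mean / np.linalg.norm(mean)

rng = np.random.default_rng(0)
# Hypothetical embeddings for three prompt variants of one instrument.
prompts = rng.normal(size=(3, 512))
image_emb = rng.normal(size=512)
image_emb /= np.linalg.norm(image_emb)

joint = ensemble_prompts(prompts)
score = float(joint @ image_emb)  # cosine similarity in [-1, 1]
```

The ensembled embedding can then be scored against visual features exactly as a single-prompt embedding would be.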
Summary: This paper proposes a novel text-promptable surgical instrument segmentation approach to overcome challenges associated with the diversity and differentiation of surgical instruments, using the large CLIP model and a text-promptable mask decoder. The experiments show the effectiveness of the proposed method on surgical instrument segmentation. However, I am very concerned about the novelty of the proposed modules, given the limited settings evaluated. Strengths: 1, The paper is well-written and easy to understand. 2, The experiments show the effectiveness of the proposed modules. Weaknesses: 1, The novelty is limited, as most of the proposed modules have already been explored in traditional segmentation tasks. 2, The dataset is too limited to prove the effectiveness of the proposed modules. Furthermore, there is no module specifically designed for surgical instrument segmentation. 3, Lacking a comparison of inference time. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1, How about comparing the proposed method with the Segment Anything model? 2, What is the difficulty of surgical instrument segmentation? I understand that a lack of segmentation datasets might lead to overfitting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### [9oei-W1, W2] Few novelties and modules already explored in traditional segmentation. We respectfully disagree. First, to our knowledge, we are the first to introduce a text-promptable method for surgical instrument segmentation. Second, through problem-driven thinking, we proposed the Mixture of Prompts (MoP) and Hard Instrument Area Reinforcement (HIAR) modules tailored to specific challenges in this field. MoP addresses instrument similarity by integrating detailed text descriptions, hence enhancing the classification among instruments. HIAR improves segmentation in areas where instruments and tissues often overlap. Both MoP and HIAR are inspired by advances in computer vision segmentation, but the concrete ideas of mixing the strength of multiple prompts and reinforcing hard regions in a masked auto-encoder are actually novel in the computer vision literature. Finally, while designed for this specific task, we believe our method's applicability also extends to other segmentation tasks, underscoring its efficacy. ### [9oei-W2] The dataset is too limited. It is common practice to evaluate methods on the two established datasets, EndoVis 2017 and 2018 (see [36, 18, 11, 47, 5, 3]). To further validate our approach, we have added experimental results on EndoVis2019 and CholecSeg8k. Please refer to our global response section. ### [9oei-W3] Lacking the comparison of inference time. We assess computational complexity and inference speed by evaluating floating-point operations (FLOPs) and frames per second (FPS), respectively, using a single A100 GPU. We run the experiment on EndoVis2017, resizing the input image to the default input size of each method (e.g., $800 \times 800$ for ISINet, $224 \times 224$ for MATIS, $416 \times 416$ for CRIS, and $448 \times 448$ for CLIPSeg and ours). 
From the table below, it is evident that our model's computational complexity (FLOPs) and inference speed (FPS) align with the other adapted text-promptable approaches (i.e., CRIS and CLIPSeg), achieving real-time performance suitable for clinical applications. Compared to conventional segmentation methods (i.e., ISINet and MATIS), ours is clearly more efficient than ISINet, while marginally slower than MATIS [3], likely due to MATIS's small input size.

Method | FLOPs (G) | FPS
--------------|-----------|-----
ISINet [11] | 264 | 19
MATIS [3] | 66 | 27
CRIS [39] | 196 | 19
CLIPSeg [24] | 127 | 23
Ours | 125 | 22

### [9oei-Q1] Compare with Segment Anything. We value the recommendation to compare our method with the Segment Anything Model (SAM), although it appeared only recently. While SAM is proficient with various input prompts, its officially released implementation only supports visual prompts (point, box, and mask) and lacks text prompting. Since visual prompts differ from our task setting, we aim to evaluate the performance of a text-promptable SAM. We leverage a community-based solution (lang-segment-anything from luca-medeiros), which enables this function for SAM. We let SAM use the same set of prompts as we do and present the results in the following tables. We found that SAM struggles with medical prompts, performing significantly worse than ours. This suggests it has difficulty with medical concepts without fine-tuning. Additionally, we notice that the text encoder's output in this unofficial implementation of SAM might not align well with its visual encoder's output, potentially leading to decreased performance. 
SAM results on EndoVis2017:

Method | Ch_IoU | ISI_IoU | mc_IoU
------------|--------|---------|--------
SAM | 17.77 | 14.32 | 10.28
Ours (448) | 77.79 | 76.45 | 54.78

SAM results on EndoVis2018:

Method | Ch_IoU | ISI_IoU | mc_IoU
------------|--------|---------|--------
SAM | 22.08 | 17.88 | 12.33
Ours (448) | 82.67 | 81.54 | 65.48

### [9oei-Q2] What is the difficulty of surgical instrument segmentation? Will the lack of segmentation datasets lead to overfitting? As described in the introduction of the paper, surgical instrument segmentation faces two typical difficulties. The first is the continual emergence of new instruments, necessitating frequent model retraining to accommodate these novelties. The second pertains to misclassification when segmenting similar instruments and their boundaries. Our paper proposes solutions tailored to these challenges: To address the emergence of new surgical instruments, we pioneer the definition of surgical instrument segmentation in a text-promptable format, enabling open-set segmentation. We also introduce the Mixture of Prompts (MoP) strategy to enhance segmentation robustness. MoP addresses instrument similarity by integrating detailed text descriptions, hence enhancing the classification among instruments. Moreover, we introduce the Hard Instrument Area Reinforcement (HIAR) module, which further improves segmentation in areas where instruments and tissues often overlap. HIAR deepens the model's understanding of challenging regions while reducing confusion between similar instruments. The lack of surgical instrument segmentation datasets indeed poses an overfitting risk. A larger dataset could enhance both the model's performance and its robustness. This, however, is not a free lunch but comes with a much higher annotation cost. The cross-dataset experiments presented in Tabs. 1 & 2 of the paper demonstrate that our model can bypass this situation thanks to its potential for open-set instrument segmentation. 
For instance, when our model is trained on EndoVis2017, it can adeptly handle previously unseen classes, such as the suction instrument (SI) in EndoVis2018, by utilizing only their text prompts without retraining. --- Rebuttal Comment 1.1: Comment: Hi authors, Thanks for your hard work on the rebuttal. Overall, I am familiar with general segmentation tasks rather than surgical segmentation, so it is challenging for me to make a proper judgment in this area. Therefore, I have carefully read the other reviewers' comments and see their appreciation of this work. Finally, I change my original score to borderline acceptance. I cannot give a higher score because I still lack specific domain knowledge. I hope the SPCs and ACs can see more comments from the other reviewers. --- Reply to Comment 1.1.1: Comment: Many thanks for acknowledging our rebuttal and paper; we will continue to improve our paper according to your comments in the revised version.
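For reference, the IoU-style numbers quoted in the tables of this thread all reduce to intersection-over-union between predicted and ground-truth masks; a minimal per-class version is sketched below (the challenge metrics Ch_IoU, ISI_IoU, and mc_IoU add dataset-specific averaging conventions not shown here):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU between two integer label maps.

    A generic sketch; the EndoVis challenge metrics add extra
    averaging conventions on top of this. Classes absent from
    both maps get NaN.
    """
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        inter = np.logical_and(p, g).sum()
        ious.append(float(inter / union) if union > 0 else float("nan"))
    return ious

# Toy 2x2 label maps with three classes.
pred = np.array([[0, 1], [1, 2]])
gt = np.array([[0, 1], [2, 2]])
ious = per_class_iou(pred, gt, num_classes=3)  # [1.0, 0.5, 0.5]
```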
Summary: This paper presents a text prompt-based surgical instrument segmentation method, which is more scalable for handling the diverse targets in endoscopy videos. Strengths: - originality: this is the first work to use text prompts for surgical instrument segmentation - quality: the performance of the proposed method surpassed previous methods - clarity: the paper is well organized - significance: the method can handle unseen targets, which is desired in clinical scenarios. Weaknesses: - The EndoVis challenge is organized every year. However, this paper validated the method on old datasets (18-19). Why not use the latest dataset and compare it to the challenge winning solutions? e.g., 21-22 http://opencas.dkfz.de/endovis/challenges/2022/ - Reference formats are not consistent. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Does the hard instrument area identified by the model align with the surgeons' perspective? - Since the best performance in Tables 1-2 still leaves large room for further improvement, what are the typical failure modes? What is the potential reason for the model to generate such segmentation errors? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: A more common task is to segment both instruments and tissues. It would be great to validate the method in this setting. Here is a public dataset https://www.kaggle.com/datasets/newslab/cholecseg8k Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### [Rt9K-W1] Why not validate the method on the latest EndoVis datasets and compare with the challenge winners? Please refer to our global response for the justification of choosing the EndoVis2017 and 2018 datasets. Our method does in fact compare with winning solutions, like TernausNet-11 [36] in Tab. 1 of the paper, which was that year's winner. Additionally, we surveyed contemporary papers and compare with SOTA methods like S3Net [5] and MATIS [3], underscoring our approach's superiority. ### [Rt9K-W2] Reference formats are not consistent. We will correct this in our revised submission. ### [Rt9K-Q1] Does the hard instrument area identified by the model align with the surgeons' perspective? Referring to Fig. 2 in the PDF file in the global response section, we visualize the hard instrument areas and observe that these areas predominantly reside at the instrument's clasper and shaft positions (red rectangles). The clasper interacts deeply with the tissue, making the image complex and the segmentation challenging. The shaft, on the other hand, presents issues because different instruments often have similar shaft appearances, leading to model misinterpretations. After consulting with surgeons, they agreed that the clasper is a hard area to identify, aligning with our findings, but they did not find the shaft challenging. The difference arises because the instrument clasper, influenced by factors like lighting, can more easily be mistaken for tissue, whereas surgeons infer the classification of the instrument shaft from the clasper, making the shaft a non-challenging area for them. However, for models, grasping the relationship between the clasper and shaft might not be as intuitive, leading to misclassification. This observation suggests that future work should focus on modeling the relationship between the clasper and shaft to enhance segmentation performance across different parts. 
### [Rt9K-Q2] Typical failure modes and reasons. As described in the introduction of the paper, surgical instrument segmentation faces two typical difficulties. The first is the continual emergence of new instruments, necessitating frequent model retraining to accommodate these novelties. The second pertains to misclassification when segmenting similar instruments and their boundaries. Our paper proposes solutions tailored to these challenges: To address the emergence of new surgical instruments, we pioneer the definition of surgical instrument segmentation in a text-promptable format, enabling open-set segmentation. We also introduce the Mixture of Prompts (MoP) strategy to enhance segmentation robustness. MoP addresses instrument similarity by integrating detailed text descriptions, hence enhancing the classification among instruments. Moreover, we introduce the Hard Instrument Area Reinforcement (HIAR) module, which further improves segmentation in areas where instruments and tissues often overlap. HIAR deepens the model's understanding of challenging regions while reducing confusion between similar instruments. Despite these proposed advancements, challenges persist to a certain extent (see Fig. 9 & 10 in the supplementary material). The reasons are manifold: the alignment between text and image features is not impeccable, and our current method treats different phrases within prompts uniformly, leading to potential identification challenges for new instruments. Additionally, while HIAR alleviates misclassification issues, it does not eradicate them entirely. Some ambiguous (hard) areas can be indistinguishable even to human eyes, requiring further exploration for a holistic solution. 
In the future, we will focus on refining text-image alignment methodologies, leveraging weighting mechanisms to accentuate pivotal distinctions in prompts across instruments, ultimately diminishing segmentation misclassification, and bolstering instrument edge segmentation through advanced techniques. ### [Rt9K-L1] It would be great to validate the method on CholecSeg8k. Thanks for this suggestion. We have done so accordingly; please refer to the CholecSeg8k experiment section in the global response. --- Rebuttal Comment 1.1: Comment: Thanks very much for your detailed explanation. My major concern has been addressed. One minor question: if the paper is accepted, could you please promise to make the complete training and inference code publicly available by Dec.? --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply. If our paper is accepted, we promise to release the code by December.
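As a generic illustration of the hard-area idea discussed in this thread (an OHEM-style top-k pixel selection, not the paper's actual HIAR module, whose design differs), training can be biased toward the highest-loss pixels:

```python
import numpy as np

def topk_pixel_loss(loss_map, keep_frac=0.25):
    """Mean loss over the hardest `keep_frac` fraction of pixels.

    OHEM-style hard-region mining -- an illustrative sketch,
    not the paper's HIAR module.
    """
    flat = np.sort(loss_map.ravel())[::-1]  # per-pixel losses, descending
    k = max(1, int(keep_frac * flat.size))
    return float(flat[:k].mean())

# Toy 2x2 per-pixel loss map with one very hard pixel.
loss_map = np.array([[0.1, 0.2], [0.3, 4.0]])
hard = topk_pixel_loss(loss_map, keep_frac=0.25)  # keeps only the worst pixel -> 4.0
```

In practice such a term is typically added to (not substituted for) the full-image loss, so easy regions still receive gradient signal.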
Summary: The paper introduces a novel approach for surgical instrument segmentation in minimally invasive surgeries. By leveraging text prompts and vision-language models, the proposed method achieves improved segmentation performance. The approach shows promise for practical use in robotic-assisted surgery. Strengths: The present work contributes an innovative and effective approach for text-promptable surgical instrument segmentation in minimally invasive surgeries. The paper presents a meticulous study of previous work, which is important to the development of the present work. Also, the technical aspects are clearly explained and have been evaluated using the correct metrics. Another strength of this paper is the introduction of a mixture-of-prompts mechanism. By leveraging multiple text prompts for each surgical instrument, the authors enhance the segmentation performance of their model. The experimental evaluation of the proposed model on the EndoVis2017 and EndoVis2018 datasets demonstrates its superior performance compared with other works and promising generalization capability. In summary, the work is an interesting application of deep learning in the medical area, and it also has remarkable novelty. Weaknesses: Regarding the ablation study, it would be good if the authors could explain why they chose 448x448 as the image size. Aren't some details lost using this size? It would be good if the authors could give more details about the datasets, e.g., the average duration of each video. It would be very positive to also include more datasets, such as EndoVis2019, EndoVis2020 or EndoVis2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Do you plan to include more datasets such as EndoVis2019, EndoVis2020 or EndoVis2021? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: limitations are not mentioned, authors should include the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### [NUuZ-W1] Why choose $448 \times 448$ as the input image size? Previous methods use different input image sizes, ranging from $224 \times 224$ to the original image size (i.e., $1024 \times 1280$). Given our adoption of a ViT-based image encoder, the input size must conform to ViT's patching requirements, i.e., being divisible by 14, 16, and 32 to fit various ViT models. As such, we choose $448 \times 448$ as the default setting, balancing efficiency and accuracy. Notice that, as with all other methods, our predictions are always reshaped to the original image size for fair evaluation. ### [NUuZ-W2] More details about the datasets. Below are detailed descriptions of the various datasets. - EndoVis2017 consists of ten sequences of abdominal porcine procedures, each containing 300 frames sampled at 1 Hz. For training data, the first 225 frames from eight sequences are used, and the remaining 75 frames are kept for testing. Two more sequences with the full 300 frames are also reserved for testing. - EndoVis2018 includes 19 sequences; 15 of them form the training set, while the remaining 4 form the test set. Each sequence originates from a single porcine training procedure. Redundant frames are manually removed to ensure precisely 300 frames in each sequence. - EndoVis2019 (Robust MIS) is derived from 30 minimally invasive surgical procedures, including 10 rectal resection, 10 proctocolectomy, and 10 sigmoid resection procedures. A total of 10,040 images are extracted from these procedures. The dataset consists of both training and test cases. Each case contains a 10-second video snippet with 250 endoscopic image frames and a reference annotation for the last frame. - CholecSeg8K is derived from Cholec80, which contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. Each video in CholecSeg8K is recorded at 25 FPS and has annotations for instruments and operation phases. 
Each video clip contributes 80 image frames, and for each frame the dataset includes raw image data, annotations, and colour masks. In total, the dataset comprises 101 directories with a collection of 8,080 frames.
### [NUuZ-W3, Q1] Experiments on more datasets.
Thank you for your suggestion. To further validate our approach, we have added experimental results on EndoVis2019 and CholecSeg8k. For detailed information, please refer to our global response section.
---
Rebuttal Comment 1.1: Title: Final remarks Comment: Thank you for your comprehensive response to my review of your paper. I appreciate the clarifications and additional details you've provided in response to the concerns I raised. Your explanations have clarified several aspects of your work. I appreciate your explanation regarding the choice of the input image size (448x448) and your consideration of the requirements imposed by the ViT-based image encoder. The provided information clears up any confusion I had regarding this matter. Thank you for providing detailed descriptions of the datasets used in your experiments. The additional information you've shared regarding EndoVis2017, EndoVis2018, EndoVis2019, and CholecSeg8K is invaluable in understanding the scope and diversity of the data you've employed. I am pleased to see that you've taken our suggestion into consideration and included experimental results on EndoVis2019 and CholecSeg8K. This expansion of your evaluation enhances the robustness and generalizability of your findings, and we believe it will strengthen the paper's contribution. Based on your detailed responses and the additional information you've provided, I am confident that your paper offers a positive contribution to the domain of text-promptable surgical instrument segmentation.
---
Reply to Comment 1.1.1: Comment: Thank you very much for your recognition. We will continue to improve our paper and incorporate the information from the rebuttal.
--- Rebuttal 2: Comment: This is a friendly reminder from the AC that you need to respond to the rebuttal, since the authors spent quite a lot of time preparing the rebuttal.
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments. As the reviewers note, our key idea of using textual prompts to perform surgical instrument segmentation is novel (wnjR, NUuZ) and interesting (NUuZ). Our paper is overall well organized (Rt9K) and easy to understand (9oei). Below, we first answer the common question raised by several reviewers on our experimental datasets; next, we address the concerns from each reviewer separately.
## Experiments on more datasets
Several reviewers point out the need to validate our method on more datasets. We deeply value these comments. First, we would like to emphasize that it is common practice [36, 18, 11, 47, 5, 3] to evaluate surgical instrument segmentation performance on the two established datasets, EndoVis 2017 [1] and 2018 [2]. We followed this practice and should not be disadvantaged for it. Next, we explain our considerations for selecting additional datasets, and then report our results on these datasets.
### Dataset selection
Although the EndoVis Challenge is an annual event, datasets specific to surgical instrument segmentation are not released every year. Specifically, the EndoVis2019 Robust-MIS dataset merely differentiates between tissues and instruments, which does not align with our study's focus on segmenting different instrument types. Moreover, the instance segmentation task in EndoVis2019 also does not match our paper's problem. The challenges in EndoVis2020 and 2021 do not address instrument segmentation either. Regarding the datasets for the EndoVis2022 and 2023 challenges, namely SAR-RARP50 and SIMS, their usage in publications is restricted until the release of their respective competition reports. Given the SAR-RARP50 report's pending release and the ongoing SIMS competition, these datasets are not incorporated into our research.
To further substantiate our approach, we choose to conduct experiments on the instrument binary segmentation task on the EndoVis2019 dataset [R1], as well as the instrument and tissue segmentation task on the CholecSeg8k dataset [R2].
### Comparison to state of the art
1. **EndoVis2019**: Consistent with the competition's evaluation protocol [R1], we use the Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) to assess segmentation performance. As the following table shows, our method (input size 448) notably surpasses the competition's top performers, with a +3% increase in DSC and a +2% increase in NSD, demonstrating the superiority of our method. It is worth noting that our approach is designed for multi-class segmentation but is tested here on binary-class segmentation. Despite this, the performance improvement of our method over the SOTA underscores its efficacy.

Method | DSC | NSD
----------------|------|------
haoyun [R1] | 0.89 | 0.89
CASIA-SRL [R1] | 0.78 | 0.89
Ours (448) | 0.92 | 0.91

2. **CholecSeg8k**: Following the protocols from SP-TCN [R4], we split the dataset into training and testing sets (videos 12, 20, 48 and 55 for testing, the others for training) and use mean Intersection over Union (mean IoU) as the evaluation metric. As the following table shows, our method achieves 71.03% mean IoU, surpassing the current SOTA (SP-TCN) by 1.65%, even though SP-TCN leverages temporal information from videos to boost performance while our method relies solely on individual images. It is worth noting that for CholecSeg8k, we use the same prompt generation method described in our paper to obtain prompts for both tissues and instruments. The result demonstrates that prompts for tissues are appropriately generated following our method, further attesting to its generalizability.
Method | mean IoU
-------------------------|----------
Swin base [R3] | 0.6842
Swin base + SP-TCN [R4] | 0.6938
Ours (448) | 0.7103

## References
[R1] Ross, Tobias, et al. "Robust medical instrument segmentation challenge 2019." arXiv preprint arXiv:2003.10299 (2020).
[R2] Hong, W-Y., et al. "CholecSeg8k: a semantic segmentation dataset for laparoscopic cholecystectomy based on Cholec80." arXiv preprint arXiv:2012.12453 (2020).
[R3] Liu, Ze, et al. "Swin Transformer: Hierarchical vision transformer using shifted windows." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[R4] Grammatikopoulou, Maria, et al. "A spatio-temporal network for video semantic segmentation in surgical videos." International Journal of Computer Assisted Radiology and Surgery (2023): 1-8.
Pdf: /pdf/0405b4a32f674227ce525b877c99404f717def79.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a novel idea of utilizing text prompts and vision-language models to make surgical instrument segmentation more flexible and robust to diversity. The proposed method and custom modules achieve strong results on two endoscopic datasets. Strengths: 1. The paper tackles an important problem in surgical instrument segmentation, which aims to enhance robot-assisted surgery systems. The key idea of using text prompts to improve generalization and adaptability to new instruments is novel. 2. The method is technically sound, leveraging recent advances in vision-language models like CLIP. The image and text encoder setup makes sense. The text promptable mask decoder uses attention and convolution schemes nicely for decoding. 3. Several custom modules are proposed to boost segmentation performance: 1) Mixture of prompts leverages multiple prompts effectively. 2) Hard instrument area reinforcement focuses on challenging regions. 4. Comprehensive experiments on two datasets demonstrate superior performance over state-of-the-art methods. The cross-dataset generalization results are promising. The ablation studies validate the efficacy of individual model components like multi-scale feature extraction, mixture of prompts, and hard area reinforcement. Weaknesses: 1. The problem definition and goal can be further sharpened. How does text-based prompting specifically help with increasing instrument variety and subtle inter-class differences? This needs more elaboration upfront. 2. Some architectural details are unclear - like how exactly text features are integrated into the convolutional prompting scheme. More implementation specifics will help reproducibility. 3. The computational complexity and inference speed are not analyzed. This could be important for practical usage. 4. More in-depth experimentation on real-world surgical videos and systems would be preferred to further demonstrate applicability. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Here are some suggestions for authors to consider: 1. Explain how text prompting helps with adaptation to new instruments and resolving inter-class confusions, with examples. 2. The method sections explain each component logically but can be more crisp and coherent in places. Authors could provide more implementation specifics for text feature integration into convolutional prompting and gating network. 3. Consider reporting computational complexity and inference speeds. How does it compare with prior arts? 4. Consider detecting and segmenting novel instruments not seen during training by using only text prompts. Typos: 1. line 58: "launguage" should be "language". 2. Mixed use of "visual-textual" and "visual-textural" at multiple places. Authors should also consider discussing the limitations, such as those listed below, in the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. The method has only been evaluated on two datasets with limited surgical scenarios. Performance on more diverse real-world data is unclear. 2. It relies on high-quality textual prompts, which may not always be available or easy to construct in practice. 3. The requirement of retraining with new text prompts for new instruments reduces adaptability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
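The reviewer's Q3 (reporting computational complexity and inference speed) can be made concrete with a minimal timing harness. This is a toy sketch with a stand-in model and hypothetical sizes, not the paper's actual network or measurement protocol:

```python
import time
import numpy as np

def dummy_forward(x):
    """Stand-in for a segmentation network forward pass (hypothetical)."""
    return np.tanh(x @ np.ones((x.shape[-1], 8)))

x = np.random.rand(16, 448)  # toy input batch, not the real image format

# warm-up iterations so one-time costs do not pollute the measurement
for _ in range(5):
    dummy_forward(x)

n_runs = 100
t0 = time.perf_counter()
for _ in range(n_runs):
    dummy_forward(x)
fps = n_runs / (time.perf_counter() - t0)
print(f"throughput: {fps:.1f} forward passes / s")
```

Averaging over many runs after a warm-up is the standard way to get stable FPS numbers; for GPU models one would additionally synchronize the device before reading the clock.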
Rebuttal 1: Rebuttal: ### [wnjR-W1, Q1] Explain how text prompting helps with adaptation to new instruments and resolving inter-class confusions, with examples.
For adaptation to new instruments: unlike previous instrument segmentation methods [36, 18, 11, 47, 5, 3] that require model retraining with new data, our text-promptable method offers the open-set potential to directly segment/infer new instruments in images based on their textual descriptions. For example, in Tab. 2 of the paper we show that, despite "suction instruments" appearing only in EndoVis2018, our model trained on EndoVis2017 attains a high IoU of 79.77% (SI in Tab. 2 of the paper). This underscores our approach's adaptability to new instruments. For resolving inter-class confusion, on the other hand, our text prompting emphasizes unique traits of instruments in their descriptions, aiding finer differentiation between instrument classes. For example, while Bipolar forceps and Prograsp forceps are visually similar (see Fig. 1 in the pdf file in the global response section), they can be distinguished by the "elongated tweezer-like design" and "curved scissor-like handles" in their respective textual descriptions. These textual distinctions help enhance the visual classification.
### [wnjR-W2, Q2] More details for text feature integration into convolutional prompting and the gating network.
Given the visual and textual features, $F_I \in \mathbb{R}^{N \times D}$ and $F_T \in \mathbb{R}^{1 \times D}$, respectively, we transform $F_T$ via a fully connected (FC) layer: $\tilde F_{T} = FC(F_{T})$. Since $F_T$ is a vector of dimension $D$, the FC layer maps it to a vector $\tilde F_{T}$ of dimension $D \times k \times k + 1$, with "$k \times k$" representing the convolution kernel size and "$+1$" the extra dimension accounting for the bias.
This allows the decomposition of $\tilde F_{T}$ into convolution weights $w \in \mathbb{R}^{1 \times D \times k \times k}$ and a bias $b \in \mathbb{R}^{1}$, which are used in the subsequent convolutions. The gating network consists of a 3-layer residual block [13]. We duplicate $F_T \in \mathbb{R}^{1 \times D}$ to $F_T^p \in \mathbb{R}^{N \times D}$ to match $F_I$'s dimension and concatenate them before inputting to $\mathcal G$. The output of $\mathcal G$ is three weight maps corresponding to the three score maps in $\mathcal S$. These weights are normalized using softmax along the prompt dimension (i.e., pixel-wise across the weight maps). We compute the weighted sum of the score maps in $\mathcal S$ to derive the final score map. In the revised version, we will refine our paper and release the code.
### [wnjR-W3, Q3] Analysis of computational complexity and inference speed.
We assess computational complexity and inference speed by measuring floating point operations (FLOPs) and frames per second (FPS), respectively, using a single A100 GPU. We run the experiment on EndoVis2017, resizing the input image to the default input size of each method (e.g., $800 \times 800$ for ISINet, $224 \times 224$ for MATIS, $416 \times 416$ for CRIS, and $448 \times 448$ for CLIPSeg and Ours). From the table below, it is evident that our model's computational complexity (FLOPs) and inference speed (FPS) are in line with the other adapted text-promptable approaches (i.e., CRIS and CLIPSeg), achieving real-time performance suitable for clinical applications. Compared to conventional segmentation methods (i.e., ISINet and MATIS), ours is clearly more efficient than ISINet, while it is marginally slower than MATIS [3], likely due to MATIS's small input size.
Method | FLOPs (G) | FPS
--------------|-----------|-----
ISINet [11] | 264 | 19
MATIS [3] | 66 | 27
CRIS [39] | 196 | 19
CLIPSeg [24] | 127 | 23
Ours | 125 | 22

### [wnjR-W4, L1] Experiments on more diverse real-world data.
Thank you for your suggestion. The EndoVis2017 and 2018 datasets are indeed drawn from real-world surgical videos. To further validate our approach, we have added experimental results on EndoVis2019 and CholecSeg8k, which both consist of data from real-world surgical videos. For detailed information, please refer to our global response section.
### [wnjR-Q4] Use textual prompts to segment unseen instruments.
In our study, the cross-dataset experiments between EndoVis2017 and EndoVis2018 (Tabs. 1 & 2 of the paper) indeed underscore the potency of our method for segmenting unseen instruments. For instance, when our model is trained on EndoVis2017, it can adeptly handle previously unseen classes, such as the suction instrument (SI) in EndoVis2018, by utilizing only their textual prompts without retraining.
### [wnjR-L2] High-quality textual prompts potentially limit practical use.
Firstly, we have designed well-crafted question templates to guide LLMs like GPT to automatically derive high-quality textual prompts (see Section 6.1 in our supplementary material). Given these predefined question templates, adapting to new instruments becomes straightforward. Secondly, our method with simple prompts (not using LLMs) still performs very well (see Tab. 5 in the paper), outperforming the SOTA by large margins. Thirdly, once our model is trained, it is equipped with the open-set potential to segment new instruments without retraining/finetuning (see the cross-dataset experiment in Sec 4.3).
### [wnjR-L3] Retraining with new prompts for different instruments limits adaptability.
As highlighted in [wnjR-W1, Q1] and [wnjR-L2], our text-promptable approach can adapt to new instruments without retraining/finetuning the model.
### [wnjR-Misc] Typos.
Thanks! We will rectify them in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I appreciate the additional experimental results and analysis. Please try to incorporate the details mentioned in the rebuttal into the revised version. Since more datasets are involved, I think it will be beneficial to also include more cross-dataset results to better demonstrate the robustness of the proposed method for segmenting unseen instruments. I presume it is relatively easy to complete as no retraining is needed. In general, the authors' rebuttal resolved most of my concerns. I'm raising my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for recognizing the value of our rebuttal. We will include the additional experimental results from the rebuttal phase in our revised paper. We appreciate your suggestion, and we also plan to incorporate more cross-dataset validation results in the revised paper. We conducted a quick cross-dataset experiment using a model trained on EndoVis2017 and tested it on the EndoVis2019 dataset. Our method achieved DSC=0.90 and NSD=0.90, which surpasses the competition's previous best result of DSC=0.89 and NSD=0.89. This further underscores the exceptional performance of our approach. We will provide more cross-dataset experimental results in our subsequent revised paper.
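One possible reading of the text-feature-to-convolution scheme described in [wnjR-W2, Q2] above, sketched with toy sizes and random matrices standing in for the learned FC layer and features (an illustration of the mechanism, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, H, W = 8, 3, 4, 4                 # toy dimensions (hypothetical)

F_T = rng.normal(size=(D,))             # textual feature F_T
F_I = rng.normal(size=(D, H, W))        # visual feature map (N = H*W tokens)

# FC layer mapping D -> D*k*k + 1; random weights stand in for the learned layer
W_fc = 0.1 * rng.normal(size=(D * k * k + 1, D))
out = W_fc @ F_T
w = out[:-1].reshape(1, D, k, k)        # text-conditioned conv weights
b = out[-1]                             # text-conditioned bias ("+1" dimension)

# naive 'same' convolution producing a single-channel score map
pad = k // 2
F_pad = np.pad(F_I, ((0, 0), (pad, pad), (pad, pad)))
score = np.empty((H, W))
for i in range(H):
    for j in range(W):
        score[i, j] = np.sum(w[0] * F_pad[:, i:i + k, j:j + k]) + b

assert score.shape == (H, W)
```

The point of the construction is that the convolution kernel itself is generated from the text feature, so each prompt induces its own scoring filter over the visual features.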
Integration-free Training for Spatio-temporal Multimodal Covariate Deep Kernel Point Processes
Accept (poster)
Summary: This work proposes to fuse Deep Kernel Learning (DKL) into the Deep Mixture Point Processes (DMPP), resulting in Deep Kernel Mixture Point Processes (DKMPP) which can handle complex relationships between events and covariates in a more flexible and expressive manner. The authors also leverage the denoising score matching technique and come up with a training procedure that does not require integration and computation of second-order derivative. The proposed method is evaluated on a variety of datasets and compared with other baseline point process models to demonstrate its advantages. Strengths: • The authors extend the score matching to point processes and make the training more scalable by leveraging denoising technique, which, I believe, is a fair contribution to the field of point process learning and can be applied to a variety of scenarios. • The authors conduct a wide range of experiments using both synthetic and real-world datasets, demonstrating the efficiency and effectiveness of the proposed method. The choice of model hyperparameters is appropriately studied. • The organization and the presentation of the proposed method are clear and easy to understand. Weaknesses: On lines 158-159, the authors argue that Euclidean distance in the kernel may not be a suitable measure of similarity, especially for **high-dimensional inputs**. I think this statement needs more clarification. If I understand correctly, the input $\textbf{s}$ to the kernel function $k_{\phi}$ in the original DMPP lies in the space $\mathcal{R} \times \mathcal{R}^2$, but in DKMPP, based on (7), the input to the kernel function is first mapped by a deep neural network $g_{w_2}: \mathcal{R} \times \mathcal{R}^2 \rightarrow \mathcal{R}^D$ where $D$ can be much larger than the original dimension of $\textbf{s}$. In other words, the **high-dimension** of the input is induced by our design choice of $g_{w_2}$ instead of the data, is that right? 
Can $\textbf{s}$ itself be high-dimensional in spatio-temporal point processes? Minor comments: • Since $\lambda(\textbf{s}|\mathcal{D})$ in (6) and (7) is an approximation of the true intensity of DMPP, I recommend using a different notation such as $\hat{\lambda}(\textbf{s}|\mathcal{D})$. • Figures 1(c)-(d) are displayed but never referred to. I suggest the authors either remove the figures or compare 1(c)-(d) to the true intensity function. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: • If $f_w$ is a network that outputs nonnegative mixture weights and $k_{\phi}$ is positive semi-definite, why do we need the link function $\eta(\cdot)$ to ensure non-negativity in (7) but not in (6)? • Why do all point process models yield relatively low accuracy on the NYC Complaint data? Does this mean point process models are not good at processing textual information? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are adequately addressed. I’m not aware of any direct negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: "On lines 158-159, the authors argue that Euclidean distance in the kernel may not be a suitable measure of similarity, especially for high-dimensional inputs. I think this statement needs more clarification. If I understand correctly, the input $\mathbf{s}$ to the kernel function $k_\phi$ in the original DMPP lies in the space $\mathcal{R}\times\mathcal{R}^2$, but in DKMPP, based on (7), the input to the kernel function is first mapped by a deep neural network $g_{w_2}: \mathcal{R}\times\mathcal{R}^2\to\mathcal{R}^D$ where $D$ can be much larger than the original dimension of $\mathbf{s}$. In other words, the high-dimension of the input is induced by our design choice of $g_{w_2}$ instead of the data, is that right? Can $\mathbf{s}$ itself be high-dimensional in spatio-temporal point processes?"

A: The original intention behind the deep kernel [1] was to overcome the limitations of Euclidean distance and enhance the expressive power of the kernel. If the raw data, denoted as $\mathbf{x}$, has high dimensionality, [1] has shown that Euclidean distance is an unsuitable measure of similarity. If we first pass the raw data $\mathbf{x}$ through a deep neural network to obtain a feature (which may also be high-dimensional), the advantage lies in the deep kernel's ability to learn metrics by optimizing the input-space transformation in a data-driven manner. Thus, we emphasize the significance of handling high-dimensional raw data $\mathbf{x}$ for which Euclidean distance is unsuitable as a similarity measure. Regarding spatio-temporal point processes, we understand your concern, as the dimensionality of $\mathbf{s}$ is indeed not very high. However, in our experiments we found that even for this three-dimensional problem, Euclidean distance is not optimal: the deep kernel's ability to learn metrics from data outperforms Euclidean distance. We will provide further clarification on this statement in the camera-ready version.

[1] Wilson, A.
G., Hu, Z., Salakhutdinov, R., & Xing, E. P. (2016, May). Deep kernel learning. In Artificial intelligence and statistics (pp. 370-378). PMLR.

> Q2: "Since $\lambda(\mathbf{s}|\mathcal{D})$ in (6) and (7) is an approximation of the true intensity of DMPP, I recommend using a different notation such as $\hat{\lambda}(\mathbf{s}|\mathcal{D})$."

A: Thanks for your suggestion. We agree and will correct this in the camera-ready version.

> Q3: "Figures 1(c)-(d) are displayed but never referred to. I suggest the authors either remove the figures or compare 1(c)-(d) to the true intensity function."

A: Figures 1(c)-(d) are referred to; please see lines 344-350.

> Q4: "If $f_w$ is a network that outputs nonnegative mixture weights and $k_\phi$ is positive semi-definite, why do we need the link function $\eta(\cdot)$ to ensure non-negativity in (7) but not in (6)?"

A: In the original DMPP (Eq. (6)), $f_w$ is a deep neural network that outputs nonnegative mixture weights, so no link function is needed. However, in our proposed DKMPP (Eq. (7)), we remove the constraint that the mixture weight $f_{w_{1}}$ must be non-negative. Therefore, to ensure the non-negativity of the intensity function, DKMPP introduces a link function $\eta(\cdot)$. We may write $\tilde{f}_{w_1}$ in Eq. (7) to distinguish it from $f_w$ in Eq. (6) and avoid confusion. We will provide further clarification on this statement in the camera-ready version.

> Q5: "Why do all point process models yield relatively low accuracy on the NYC Complaint data? Does this mean point process models are not good at processing textual information?"

A: The low accuracy observed with point process models on the NYC Complaint dataset stems from the dataset's specific characteristics, rather than from an inherent limitation of point process models in processing textual information: on the NYC Vehicle Collisions dataset, we also used textual covariate information and achieved very good accuracy.
The overall low performance on the NYC Complaint dataset is due to its poor data quality. All models perform poorly on this dataset, primarily for the following reasons: the predictions are made within a relatively short period, and the number of events in each sequence fluctuates significantly. Consequently, even when our model's predicted values closely align with the average over multiple sequences, the accuracy remains low for each individual sequence. Even in this situation, however, our model still outperforms the other baselines.
---
Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for answering my questions. The responses mostly make sense to me and I'm inclined to maintain the score as is.
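To make the deep-kernel and link-function points from the responses above concrete, here is a toy sketch of an Eq. (7)-style intensity, with random matrices standing in for the learned networks $g_{w_2}$ and the mixture weights $f_{w_1}$ (an illustrative reading, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16                                           # embedding dimension (toy choice)
A1 = rng.normal(size=(D, 3))
A2 = rng.normal(size=(D, D))

def g(x):
    """Toy stand-in for the deep feature map g_{w_2}: R x R^2 -> R^D."""
    return np.tanh(A2 @ np.tanh(A1 @ x))

def k_deep(s, u, ell=1.0):
    """RBF kernel on learned features rather than raw Euclidean distance."""
    d = g(s) - g(u)
    return np.exp(-d @ d / (2.0 * ell ** 2))

def intensity(s, points, weights):
    """eta( sum_j f_j * k(s, u_j) ): softplus link keeps the intensity
    non-negative even though the mixture weights f_j are unconstrained."""
    z = sum(f * k_deep(s, u) for f, u in zip(weights, points))
    return np.log1p(np.exp(z))                   # softplus as the link eta(.)

points = [rng.normal(size=3) for _ in range(5)]  # representative points u_j
weights = rng.normal(size=5)                     # unconstrained (can be negative)
s = rng.normal(size=3)
assert intensity(s, points, weights) > 0
assert np.isclose(k_deep(s, s), 1.0)
```

Because the weights may be negative, dropping the link function could produce a negative "intensity"; this is exactly why Eq. (7) needs $\eta(\cdot)$ while Eq. (6), with non-negative $f_w$, does not.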
Summary: This paper argues that there are two common approaches for modeling intensity functions, traditional and covariate-based methods, and focuses on the latter. In detail, the intensity function is designed in a kernel convolution form, $\lambda(s|\mathcal{D})=\int f_w(\mathbf{u},\mathbf{Z(u)})k_{\phi}(\mathbf{s},\mathbf{u})d\mathbf{u}$, in which contextual information is embedded in $\mathbf{Z(u)}$. In practice, the integration is replaced by summation. This formulation makes the integral $\int \lambda(s|\mathcal{D})ds$ intractable. Moreover, the kernels lack expressiveness, both because of the unknown relationship between covariates and event occurrence and because of the improper use of Euclidean distance. To solve this issue, the authors propose to use deep kernels modeled by neural networks. To address parameter estimation, the authors further propose the use of a score-matching-based estimator. Strengths:
1. The theoretical part is reasonable and complete, and the proposed model effectively addresses both the difficulty of integrating the intensity function and the need to maintain its strong expressiveness.
2. The score-matching-based modeling method is novel and interesting and plays a positive role in promoting research in the field of point processes.
3. The advantages of the model are adequately and effectively demonstrated by experiments.
Weaknesses:
1. The motivation for using a score-based approach is not clear. In fact, the score-based approach is a special generative model. In that case, why not use another generative model, such as a GAN or VAE? The authors point out that the score-based approach can effectively solve the parameter estimation problem, but it seems that other generative models can also solve it.
2. The proposed model seems to be a combination of existing frameworks, which hinders its novelty to some extent.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I'm confused about the reason why the authors use score-based methods. 2. Actually, there are some efforts concerning the embedding of generative models into point processes. Is there any comparison between the proposed model and some existing methods? See, for example, [1-2]. [1] Xiao S, Farajtabar M, Ye X, et al. Wasserstein learning of deep generative point process models[J]. Advances in neural information processing systems, 2017, 30. [2] Mehrasa N, Jyothi A A, Durand T, et al. A variational auto-encoder model for stochastic point processes[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 3165-3174. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: "The motivation for using a score-based approach is not clear. In fact, the score-based approach is a special generation model. In that case, why not use another generation model......" "I'm confused about the reason why the authors use score-based methods." "Actually, there are some efforts concerning the embedding of generative models into point processes......" A: It appears that the reviewer may have confused score matching with generative models. While in recent years score matching has been used for training generative models such as diffusion models, score matching itself is originally an estimator used for estimating model parameters, rather than being a generative model. When score matching was originally invented [1], it was designed for estimating parameters of non-normalized statistical models, rather than for generative models. This is clearly indicated in the title of [1]. Similarly, in our current work, we use score matching as an estimator for point process parameters, as point processes themselves can be understood as non-normalized models: the compensator in the log-likelihood (the second term in Eq. (8)) can be understood as an intractable normalizing constant. As indicated in lines 46-49, 53-56, 184-188 in our paper, the compensator in the log-likelihood is typically an intractable integral that usually requires numerical integration methods, leading to numerical errors and computational inefficiency. Therefore, we adopted score matching to avoid the computation of the compensator, hence the name "integration-free" for our paper. In conclusion, our focus is to find an alternative estimator to MLE that does not require integration, rather than designing a generative model for point processes. [1] Hyvärinen, Aapo, and Peter Dayan. "Estimation of non-normalized statistical models by score matching." Journal of Machine Learning Research 6.4 (2005). 
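To illustrate that score matching is an estimator rather than a generative model, consider a 1-D toy problem: for data $x \sim \mathcal{N}(0,1)$, the denoising variant regresses a score model onto $-(\tilde{x}-x)/\sigma^2$, and with a linear score model the least-squares fit recovers the score of the noise-smoothed density, $-\tilde{x}/(1+\sigma^2)$. This is a toy sketch of that fact, not the paper's estimator for point processes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 0.5

x = rng.normal(size=n)                      # data ~ N(0, 1)
x_tilde = x + sigma * rng.normal(size=n)    # perturbed data ~ N(0, 1 + sigma^2)
target = -(x_tilde - x) / sigma ** 2        # denoising score matching target

# linear score model s(x) = a*x + b, fit by least squares (the DSM objective)
A = np.stack([x_tilde, np.ones(n)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, target, rcond=None)

# true score of the smoothed density N(0, 1 + sigma^2) is -x / (1 + sigma^2)
print(a, -1.0 / (1.0 + sigma ** 2))
```

The fitted slope matches the smoothed density's score without ever evaluating a normalizing integral, which is the same reason score matching makes the point process compensator unnecessary here.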
> Q2: "The proposed model seems to be a combination of existing frameworks, which actually hinders its nolvity to some extend." A: We politely disagree. All research builds upon existing works. As far as we know, few works have attempted to utilize score matching for the estimation of point processes. Our work appears to be the first attempt to apply score matching to covariate-based deep spatio-temporal point processes. --- Rebuttal Comment 1.1: Comment: After re-reading the author's detailed response and the paper, I agree with the author's emphasis on the work, that is, the focus of the paper is on designing an alternative estimator to MLE rather than on proposing a generative model-based point processes. To this end, I raised my score.
Summary: The paper proposes an enhanced version of Deep Mixture Point Processes with a flexible neural network-based kernel. The intractable training process of the point process with deep kernel is handled by a score-matching technique with the denoising method. Strengths: 1. The proposed deep kernel goes beyond the parametric kernel and substantially improves the model flexibility and expressiveness. 2. Addresses the learning challenges of MLE (a long-standing problem in neural point process training) by proposing a score-matching technique which achieves better modeling performance and computational efficiency. This is also supported by the experimental results. Weaknesses: 1. Related works are not comprehensive enough. There is plenty of work on point processes equipped with deep kernels, such as Okawa et al. [1] and Zhu et al. [2]. These works can be reviewed to make the paper more comprehensive. 2. More experiments can be included. For example, a baseline of a Hawkes process equipped with a deep kernel can be included. Also, model performance on synthetic point process data (such as a self-exciting point process or self-correcting process) would make the numerical results more convincing. --- [1] Maya Okawa, Tomoharu Iwata, Yusuke Tanaka, Hiroyuki Toda, Takeshi Kurashima, and Hisashi Kashima. Dynamic Hawkes processes for discovering time-evolving communities’ states behind diffusion processes. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1276–1286, 2021. [2] Shixiang Zhu, Haoyun Wang, Zheng Dong, Xiuyuan Cheng, and Yao Xie. Neural spectral marked point processes. In International Conference on Learning Representations, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How do we calculate the model density $p_\theta(\tilde{S}_m)$ in equation 12? 2.
Could the authors include additional synthetic experiments, such as modeling self-exciting point process data, because such data is becoming ubiquitous and gaining popularity in recent research and real-world applications? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The current structure still imposes parametric constraints on the kernel. An alternative form of the kernel can be considered [1]. --- [1] Shixiang Zhu, Haoyun Wang, Zheng Dong, Xiuyuan Cheng, and Yao Xie. Neural spectral marked point processes. In International Conference on Learning Representations, 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: "Related works are not comprehensive enough. There is plenty of work on point processes equipped with deep kernel, such as Okawa [1] and Zhu. [2]. These works can be reviewed to make the paper more comprehensive." A: Thank you for your suggestion. Because we cannot make changes to the manuscript during the rebuttal stage, we will review those works you mentioned in the camera-ready version. > Q2: "How do we calculate the model density $p_{\theta}(\tilde{S}_m)$ in equation 12?" A: $p_{\theta}(\tilde{S}_m)$ and $p_{\theta}(S_m)$ share the same $p_{\theta}(\cdot)$ that represents the probability density function (likelihood function) corresponding to the parameterized point process model. The only difference between the two lies in the sequences used: $p_{\theta}(S_m)$ utilizes clean sequences, while $p_{\theta}(\tilde{S}_m)$ in equation 12 utilizes noisy sequences. > Q3: "Could the authors include additional synthetic experiments, such as modeling self-exciting point process data, because such data is becoming ubiquitous and gaining popularity in recent research and real-world application." A: This is a misunderstanding. We would like to clarify that our proposed model is not a history-dependent point process model but rather a covariate-dependent point process model. In other words, we focus on the impact of covariates on point process dynamics, rather than the influence of past events on subsequent point process dynamics. However, we appreciate your valuable feedback, and in future work, we will consider the mutual influences among events, such as self-exciting point processes or self-correcting processes. --- Rebuttal Comment 1.1: Title: Reply to the author rebuttal Comment: The authors' rebuttal appropriately addresses my concerns and questions. I believe the neural network-based kernel method and the score-matching technique would contribute to the point process community in the future. In light of this, I would raise my score.
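The clean-versus-noisy-sequence distinction in the answer to Q2 mirrors the standard denoising score matching setup (Vincent, 2011); that general form is stated below as background only, since whether the paper's Eq. 12 matches it exactly cannot be verified from this thread:

```latex
J_{\mathrm{DSM}}(\theta)
  = \mathbb{E}_{x \sim p_{\mathrm{data}},\; \tilde{x} \sim q_\sigma(\tilde{x}\mid x)}
    \left[\, \tfrac{1}{2} \big\| s_\theta(\tilde{x})
      - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}\mid x) \big\|^2 \,\right],
\qquad
\nabla_{\tilde{x}} \log q_\sigma(\tilde{x}\mid x) = \frac{x - \tilde{x}}{\sigma^2}
\ \text{ for Gaussian } q_\sigma ,
```

where $x$ plays the role of a clean sequence, $\tilde{x}$ a noisy one, and $s_\theta$ the model score. The regression target depends only on the known noise kernel $q_\sigma$, so no intractable normalizer (or, in the point process setting, compensator) appears in the objective.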
null
null
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their valuable efforts in providing insightful comments and constructive feedback. We are pleased that the reviewers have recognized the significance of our paper in solving an interesting covariate point process estimation problem, proposing efficient score-based estimators, conducting comprehensive numerical experiments, and maintaining clear and concise writing. In the following, we address reviewers' comments point by point. We hope that our responses adequately address the concerns raised by the reviewers. Should any further doubts or questions arise, please do not hesitate to reply. Thank you once again for your time and effort in reviewing our work.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Online Map Vectorization for Autonomous Driving: A Rasterization Perspective
Accept (poster)
Summary: This paper tackles the emerging task of vision-based online HD mapping. It has two main contributions: (1) AP_{raster}, a new metric that is shown to be better suited to evaluate methods in this field, and (2) MapVR, a plug-in rasterization and loss module for any existing vector-based online mapping system. Several intuitive examples demonstrate the value of the new metric. Furthermore, MapVR is extensively validated on three different datasets, where it maintains or improves the existing metrics, while significantly boosting the proposed AP_{raster} metric. Strengths: Both contributions are simple and well-motivated. The empirical evaluation is extensive, covering 4 task settings over 3 datasets. In particular, the design choices of MapVR are carefully ablated. In addition, the presentation and organization of the draft are good. Overall, this paper makes a crucial contribution to a field that is beginning to rapidly expand by identifying limitations in the current metrics employed in the literature and proposing an alternative. Weaknesses: No major weaknesses stood out to me. The presentation of the draft could be slightly improved (see “Questions” section). AP_{raster} may be sensitive to hyper-parameter choices, but I believe a supplementary experiment may be sufficient to demonstrate that the chosen hyper-parameters are reasonable, since they are well-motivated. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. What is the computational overhead of MapVR during training? 2. L30-32 and L96-106 have some text repetition. Would it be possible to rephrase the latter, focusing on the differences between different vector-based techniques (e.g. VectorMapNet and MapTR), instead of the motivation for MapVR? 3. In Tables 1, 2 and 3, would it be possible to highlight the “avg.” columns with a grey background, as in Table 5? 4.
L243 “covering the most complex driving scenes in the real world” is a strong and unsubstantiated claim regarding this proprietary dataset. Please rephrase this. Update: Thank you for the thorough response. I appreciate the effort taken to answer each question. My concerns have been addressed in the rebuttal, and I am maintaining my initial positive rating. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed in main text - the main limitation I see is that AP_{raster} may be sensitive to hyper-parameter choices, which could be mentioned in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We sincerely thank you for appreciating our work as well as for your thoughtful and constructive comments, which we believe will significantly enhance the presentation of our manuscript. &nbsp; ### Sensitivity of $\text{AP}_\text{raster}$ to Hyper-Parameters We agree with the reviewer that $\text{AP}_\text{raster}$ relies on certain hyper-parameters, more specifically, the rasterization resolution (480x240) and the dilation pixel number (2 pixels). However, we think it does not constitute a severe limitation, but instead offers better evaluation flexibility. - _First_, the two hyper-parameters have clear practical meanings and are easy to set. In our nuScenes experiments, the bird's-eye-view (BEV) perception range is 60m x 30m. Setting the evaluation rasterization resolution at 480x240 offers a localization precision of 0.125m, satisfying the requirement of autonomous driving. A dilation of 2 pixels can make the evaluation criteria tolerant to a deviation of at most 4 pixels, which equals 0.5m. Users can easily adapt these hyper-parameters to their own needs. - _Second_, we conducted sensitivity analyses in the table below, which demonstrate that $\text{AP}_\text{raster}$ is robust with respect to the two hyper-parameters. Therefore, the performance improvements brought by MapVR under $\text{AP}_\text{raster}$ are genuine and significant. | | MapTR | MapTR+MapVR (Ours) | |---|:---:|:--:| | AP-raster (320x160, dilation=2) | 34.9 | 41.6 | | **AP-raster (480x240, dilation=2, default metric)** | 24.3 | 31.2 | | AP-raster (480x240, dilation=1) | 15.1 | 20.0 | | AP-raster (640x320, dilation=2) | 18.7 | 24.8 | - _Finally_, it's worth noting that most computer vision evaluation metrics (e.g., COCO's AP, nuScenes' NDS, etc.) have inherent hyper-parameters, and their choices always involve a trade-off between different considerations.
The proposed $\text{AP}_\text{raster}$ is not an exception and we believe it does provide a valuable and practical tool for assessing the performance of HD map vectorization methods. &nbsp; ### Computational Overhead of MapVR During Training The extra differentiable rasterizer is implemented with CUDA to maintain training efficiency. The table below summarizes the training overhead. | Methods | Modality | Backbone | Training Time / Iter | GPU Memory Usage | |:---------------------|:--------:|:--------------:|:--------------------:|:----------------:| | MapTR | C | Res-50 | 0.82 s | 14021 MB | | MapTR | C & L | Res-50 | 1.18 s | 28557 MB | | MapTR + MapVR (Ours) | C | Res-50 | 0.91 s | 14169 MB | | MapTR + MapVR (Ours) | C & L | Res-50 | 1.37 s | 28673 MB | &nbsp; ### On the Presentation of the Manuscript We cannot be more grateful for your thoughtful suggestions. We will incorporate the following changes into the camera-ready version of the manuscript or future submissions. &nbsp; **To rephrase Line #96-106, focusing on the differences between map vectorization techniques.** Following your advice, we will revise Line #96-106 as below: > Rasterization methods [37, 40, 12, 58, 19, 59, 36, 50] generate HD maps via semantic segmentation in BEV, which have good sensitivity to details. However, the lack of vital instance-level information and lane topology limits the utility of rasterized maps in downstream tasks like navigation and planning. On the other hand, map vectorization addresses this limitation by producing vectorized map elements. HDMapNet [16] and SuperFusion [6] employ post-processing to group pixels from rasterized maps into vectorized elements. 
~~Moreover, the latest approaches – VectorMapNet [26] and MapTR [20], directly predict map elements as vectorized point sets with neural networks, attaining superior performance.~~ _Moreover, VectorMapNet [26] proposes to directly predict map elements as vectorized point sets in an auto-regressive manner, achieving superior performance. And MapTR [20] - the current state of the art, further proposes a unified permutation-equivalent modeling approach to model the HD map elements, achieving superior accuracy. Furthermore, MapTR [20] achieves real-time efficiency with its one-stage and parallel framework._ However, _despite recent progress,_ vectorized maps still often exhibit minor deviations that can be critical in autonomous driving, where safety is of utmost importance. ~~We believe integrating HD rasterization into vectorization can improve precision while retaining vectorized representation.~~ &nbsp; **To highlight the “avg.” columns in Table 1-3 with a grey background.** Sure, we will do so in the updated version. &nbsp; **To rephrase Line #243.** The suggested sentence will be rephrased into: > covering ~~the most~~ _very_ complex driving scenes in the real world. &nbsp; ### Additional Discussions on Limitations Due to the space limitation, we did not discuss the limitations in the submitted manuscript. In the camera-ready version where one additional content page is allowed, we will incorporate a discussion on the potential limitations. **_Please refer to `our global response to all reviewers` for additional discussions on potential limitations._** &nbsp; &nbsp; We hope that our response has addressed your concern. And we welcome further dialogue to discuss any concerns and clear up any doubts that may still exist. &nbsp; --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. I appreciate the effort taken to answer each question. My concerns have been addressed in the rebuttal, and I am maintaining my initial positive rating.
--- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for spending time to read our rebuttal. We are glad that your concerns have been addressed. Once again, we sincerely thank you for recognizing the merits of our work!
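The pixel-precision and dilation-tolerance arithmetic used in the rebuttal's sensitivity discussion above can be sketched numerically (a minimal sketch; the helper names are illustrative and not from the paper's code):

```python
def raster_precision(range_m, resolution_px):
    """Metres covered by one rasterized pixel along one axis."""
    return range_m / resolution_px

def dilation_tolerance(range_m, resolution_px, dilation_px):
    """Worst-case matching tolerance in metres: dilating both prediction and
    ground truth by `dilation_px` pixels tolerates up to 2 * dilation_px of
    deviation (the rebuttal's 'at most 4 pixels' for a 2-pixel dilation)."""
    return 2 * dilation_px * raster_precision(range_m, resolution_px)

# nuScenes setting from the rebuttal: 60 m x 30 m BEV range, 480 x 240 raster.
assert raster_precision(60.0, 480) == 0.125      # 0.125 m per pixel
assert raster_precision(30.0, 240) == 0.125      # same scale on the short axis
assert dilation_tolerance(30.0, 240, 2) == 0.5   # 2-pixel dilation -> 0.5 m
```

The assertions reproduce the rebuttal's quoted numbers (0.125 m localization precision, 0.5 m dilation tolerance), which is why the two hyper-parameters are argued to have clear physical meanings.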
Summary: This work targets online map creation for autonomous driving. The authors claim vector-based approaches exhibit artifacts due to lack of geometric supervision in current loss functions. The main idea is to add a differentiable rasterization layer to any model that predicts vectors and add an additional segmentation loss (dice loss) during training. Their method does not require any additional model parameters, and can be added to any existing vector-based model. They perform experiments on the nuScenes dataset using camera only, and camera + lidar, and show their additional supervision can improve performance of the current state-of-the-art vector-based model. Strengths: * Simple idea, easy to execute and shows modest improvements for MapTR with camera+lidar. * The proposed method can be applied to any vector-based architecture. * Technical section described clearly, notation looks sound. * Convincing analysis and visual evidence for the need of the rasterization-based metric (Figure 4.) Weaknesses: * I am borderline on if the current contribution is significant enough for a paper. The results are sound, particularly for AP-raster since the baselines do not optimize for this. However, the story and writing needs more focus on the AP-raster metric if that is the real contribution (i.e. should all future map models focus on this particular metric?). * The main ideas presented are architecture agnostic but experiments are only done using MapTR. * I found the direction regularization loss to be unrelated to the main idea of rasterization. How does the baseline (MapTR) perform with this extra loss, or how does MapVR perform without this? A majority of the main results in Table 1 are very close - this is my primary factor for my current rating. * I was hoping ablations included evaluation using both metrics (AP-raster+AP-chamfer). 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Table 1: It would be nice to have AP-raster numbers for HDMapNet and VectorMapNet. In the current draft these two methods do not provide much information as of now. * Should all future vector models be evaluated and assessed with AP-chamfer and AP-raster metrics? Is the proposed AP-raster superior? * Figure 2: the authors argue that equidistant parameterization causes inaccuracy in modeling map primitives. Couldn't the same be said for resolution of rasterization? Why not simply increase the number of points? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: No limitations/societal impact section provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript. Your comments have provided us with valuable insights to improve our work. We have carefully considered each of your points, and we hope that our responses below could address your concerns. &nbsp; ### Significance of $\text{AP}_\text{raster}$ Short answer: **We believe $\text{AP}_\text{raster}$ is of great significance to the autonomous driving community, and we highly recommend future works to use** $\text{AP}_\text{raster}$ **for map vectorization evaluation.** Actually, in our team, we already transitioned to $\text{AP}_\text{raster}$. And we believe it is also valuable for the emerging community, as it guides subsequent works toward the actual needs of autonomous driving. To highlight its significance, we have dedicated the whole of Section 3 to introducing this evaluation metric. And we will amend the narratives to make our points clearer. We further explain the significance of $\text{AP}_\text{raster}$ as follows. - First, as described in Section 3.1, the existing AP_chamfer metric is not scale-invariant and overlooks shape and geometrical details. Fig. 4 illustrates these issues with a few practical examples, where AP_chamfer fails to reasonably judge the prediction quality while AP_raster clearly does a better job. - Second, from the visualizations in Fig. 6, it can be observed that our proposed rasterization-based approach yields substantially better results, although it does not outperform MapTR by a large margin in terms of AP_chamfer. This also indicates that AP_chamfer falls short of providing a comprehensive and precise evaluation as compared to AP_raster. - Third, while the judgment of significance can be subjective, `Reviewers YAhf and U7tP` strongly support us on the significance of $\text{AP}_\text{raster}$. Their comments reflect the value of this new metric to the community to some extent.
Nevertheless, we fully appreciate your concern and welcome further conversations. &nbsp; ### Compatibility of MapVR We agree we should have integrated MapVR with more baselines. However, as map vectorization is an emerging field, existing studies are limited. By the time of submission (May 2023), VectorMapNet and MapTR are the only two peer-reviewed works that directly use networks to predict the vectorized map elements. Since MapTR outperforms VectorMapNet significantly in both accuracy and efficiency, we selected MapTR as the baseline. Besides, as our work does not involve any assumption or modification in network architecture, it should be generalizable to other future map vectorization solutions. &nbsp; ### Why Not Increase the Number of Equidistant Points to Avoid Parameterization Error? The attempt was made in MapTR, but resulted in a decline in performance. You can find more information about this in Table 5 of MapTR's appendix [20]. This issue has also been mentioned in the footnote of Page 2 in our manuscript. &nbsp; ### Regarding Direction Regularization Loss We wish to clarify that **MapTR's official results were obtained with a similar direction loss**. Therefore the comparisons in Tables 1-3 are fair. MapTR's direction loss is computed between predicted points and ground-truth equidistant points; while our MapVR's direction loss is self-regularized along predicted adjacent segments. With the supervision via differentiable rasterization, MapVR largely diminishes the need for equidistant predicted points, which was a constraint in MapTR. As a result, our designed direction loss aids in allocating more points in areas with greater curvature and fewer points in straight-line regions, ultimately enhancing the precision. In conclusion, the direction regularization loss is not unrelated to the rasterization idea, but serves as an important component that works in conjunction with it for better map vectorization. 
The table below presents extra experiments regarding the direction loss. | |MapTR (AP-chamfer/AP-raster)|MapTR+MapVR(Ours) (AP-chamfer/AP-raster)| |--|:--:|:--:| |w/o direction loss|48.2 / 23.7| 48.5 / 29.5 | |w/ MapTR's direction loss|50.3 / 24.3|---- / ----| |w/ our direction loss|---- / ----| 51.2 / 31.2 | &nbsp; ### Regarding Blanks in Table 1 Neither HDMapNet nor VectorMapNet provides source code for the "C & L" modality. Besides, HDMapNet does not provide any checkpoint for evaluation. Therefore, we did not provide their performance under $\text{AP}_\text{raster}$. Fortunately, VectorMapNet provides code and a checkpoint for the camera-only modality. The table below shows the additional results (highlighted in bold), which we will incorporate into future versions. |Method|Modality|Backbone|#Epochs|AP_chamfer_avg|AP_raster_avg|FPS| |--|:--:|:--:|:--:|:--:|:--:|:--:| |VectorMapNet|C|Res-50|110|40.9|**15.0**|2.9| |MapTR|C| Res-50 |110|58.7|35.0|18.4| |MapTR+MapVR (Ours)|C|Res-50|110|58.8|38.5|18.4| It can be seen that: for methods that are inferior under $\text{AP}_\text{chamfer}$, the gap under the stricter metric $\text{AP}_\text{raster}$ will be even larger. &nbsp; ### Ablations Under Both Metrics (a) Rasterization resolution | resolution | AP-chamfer/AP-raster | |--|:--:| |X | 50.3 / 24.3 | |64x32 | 45.1 / 21.5 | |180x90 | 50.6 / 30.4 | |256x128 | 51.2 / 31.2 | |320x160 | 50.9 / 30.9 | &nbsp; (b) Line rasterization softness | $\tau$ | [Divider] AP-chamfer/AP-raster | |--|:--:| |0.5|48.0 / 29.2| |2.0|54.4 / 33.1| |6.0|52.8 / 31.4| &nbsp; (c) Direction loss. _Please refer to the response above ('Regarding Direction Regularization Loss')._ &nbsp; (d) Geometry-aware rasterization | | [Ped_crossing] AP-chamfer/AP-raster | |--|:--:| |all as lines|34.9 / 21.8| |lines and polygons|47.7 / 37.5| &nbsp; (e) MapVR vs.
parallel segm | |AP-chamfer/AP-raster| |--|:--:| |MapVR|51.2 / 31.2| |parallel segm|48.1 / 26.7| &nbsp; &nbsp; We sincerely hope the above address your concerns. Please don't hesitate to continue the discussion if you have further queries. &nbsp; --- Rebuttal Comment 1.1: Comment: Thank you for clarifying some of these important details and running additional experiments during the short rebuttal period. I would highly recommend adding these into the revision! It's interesting to see the field go to vectorized representation then back to using the raster metric but I am more convinced now on the usefulness of this work. I'll raise my score post-rebuttal --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for taking time to read our response! We are so glad and grateful to have your acknowledgement! We will surely add them into our final version. And we are very grateful for your constructive comments that help us improve the manuscript.
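The softness parameter $\tau$ ablated in the rebuttal above suggests a distance-based soft line rasterization. The paper's actual Eq. 2 is not reproduced in this thread, so the following is only a generic sketch of that idea (function names and the exponential form are assumptions, not the paper's implementation):

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Euclidean distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def soft_line_mask(px, py, segment, tau):
    """Soft occupancy of a pixel centre w.r.t. a line segment: 1 on the
    segment, decaying with squared distance; tau controls the softness.
    In a differentiable rasterizer this value is differentiable w.r.t. the
    segment endpoints, which is what lets a segmentation-style loss supervise
    the predicted vector points."""
    d = point_segment_distance(px, py, *segment)
    return math.exp(-d * d / tau)

seg = (0.0, 0.0, 10.0, 0.0)
assert soft_line_mask(5.0, 0.0, seg, 2.0) == 1.0   # pixel centre on the line
assert soft_line_mask(5.0, 5.0, seg, 2.0) < 1e-5   # distant pixel is ~0
```

Under this form, a small $\tau$ yields thin, hard lines and a large $\tau$ yields blurry ones, which is consistent with the rebuttal's observation that an intermediate softness works best.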
Summary: Online HD mapping is essential for autonomous driving because maps provide detailed and precise environmental information for perception and planning. The existing mapping literature has limitations in both methods and metrics. To address them, the authors propose a new method called MapVR, integrating the philosophy of rasterization into map vectorization, and a new rasterization-based evaluation metric. Experiments are conducted on four existing datasets (nuScenes Map (basic), nuScenes Map (extended), Argoverse2, and 6V-mini-v0.4) and show that incorporating rasterization into map vectorization greatly enhances performance with no extra computational cost during inference. Strengths: 1. Writing: Overall the paper is well-written, and it is easy to understand its goal. 2. Motivation: Particularly, the authors provide a very clear introduction that explains the limitations of existing mapping algorithms and metrics. The work is highly valuable to the field of HD mapping. 3. Proposed method: To address the limitations of existing mapping algorithms, the authors propose to combine the two camps (i.e., map rasterization and map vectorization) in a unified framework. In particular, the proposed MapVR applies differentiable rasterization, inspired by recent advances in graphics and vision, to the map vectorization task to bridge vectorized outputs and rasterized HD maps. The proposed strategy enables more refined and comprehensive supervision and yields predictions with improved precision. The authors offer a new perspective on HD mapping. Moreover, the work could motivate relevant fields in autonomous vehicle research. 4. Metric: the authors deliver a clear explanation of the proposed metric in Figures 3 and 4. Particularly, from Figure 4 one can immediately spot the effect of the two different metrics. Importantly, the proposed metric can capture fine-grained differences for quality evaluation. 5.
Experimental results: the authors conduct experiments on multiple datasets, i.e., nuScenes Map (basic), nuScenes Map (extended), Argoverse2, and 6V-mini-v0.4. In Tables 1-4, MapVR clearly demonstrates its superiority over existing baselines (i.e., HDMapNet, VectorMapNet, MapTR). The tables provide a clear benchmarking protocol for the community. It is worth noting that MapVR can enhance map vectorization without adding any extra computational cost during inference. 6. Ablation study: the reviewer particularly finds the ablation “1. **Why Not Introduce HD Supervisory Signals from an Auxiliary Segmentation Task?**” important. It shows that a simple auxiliary task can benefit rasterization prediction. The results are valuable for future research. Weaknesses: Overall, the paper is in really good shape. However, the reviewer has the following comments. 1. Differentiable rasterization: In sec. 4.2, the authors discussed two rendered masks for line-shaped and polygon-shaped elements. Can the two rendered masks cover all elements in HD maps? On a relevant note, do the authors identify failure cases due to the conversion? 2. Rasterization resolution: what is the implication of having 256x128 as the best performance? Is it related to the HD map size? Do we have a systematic way to identify the proper rasterization resolution? 3. While the proposed method achieves the best quality, the numerical results are still far from perfect. The reviewer strongly suggests the authors conduct a thorough failure case analysis that would shed light on future research. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No.
The authors do not provide adequate discussions on the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; We are immensely grateful for the comprehensive and positive acknowledgment received. And we deeply appreciate your insightful comments and suggestions. Below we would like to respond to your queries and address your concerns point-by-point. &nbsp; &nbsp; ### Discussions on Failure Cases and Limitations We greatly appreciate your constructive feedback. If our work is accepted for NeurIPS 2023, we will incorporate discussions on failure cases and limitations within the one additional page allowed in the camera-ready version. **_Please refer to `our global response to all reviewers` for (i) additional visualization and analysis on failure cases and (ii) additional discussions on limitations._** &nbsp; ### Regarding Line-Shaped and Polygon-Shaped Differentiable Rasterization Our empirical findings indicate that the two proposed rasterization strategies can effectively cover all elements in HD maps. To the best of our knowledge, the majority of HD map elements can be abstracted into one of three primitive shapes: points, lines, and polygons. As thoroughly explained in our manuscript, the most common HD map elements, such as lanes, curbsides, stoplines, crosswalks, intersections, and parking areas, can be categorized as lines and polygons, which can be well covered by the two designed rasterization strategies. Other elements like traffic lights, traffic signs, and cones are ideally abstracted as points within HD maps. These elements can be interpreted as small, circular-shaped polygons, thus integrating them seamlessly into our designed rasterization strategies. This strategy allows us to integrate our proposed method into an 'all-in-one' vectorized map perception framework without any complex network design. It should be noted that the differentiable rasterization procedure is accomplished intuitively via Eq. 2 and Eq. 3 in our manuscript, rather than learned parametrically.
We did not notice any error caused by the differentiable rasterizer. &nbsp; ### Regarding Rasterization Resolution In our understanding, this hyper-parameter is related to the perception range as well as the perception precision. Taking our nuScenes experiments as an example: the perception range is ±30m along the y-axis (vertically) and ±15m along the x-axis (horizontally), yielding a bird's-eye-view (BEV) perception range of 60m x 30m. With a rasterization resolution of 256x128 pixels, each rasterized pixel corresponds to a real-world area of 0.23m x 0.23m. This precision is generally considered sufficient for accurate fine-grained localization supervision. To determine the appropriate rasterization resolution, it is crucial to proportionally align the rasterization resolution with the BEV perception range. For example, if we keep the per-pixel size unchanged, a perception range of 120m x 45m would correspond to a rasterization resolution of 480x180. In addition, fine-tuning the rasterization resolution within a small neighboring range should help achieve better performance. It is also noteworthy that, as per Table 5a in our manuscript, the performance is robust against a certain range of rasterization resolutions (180x90 ~ 320x160). &nbsp; &nbsp; We hope the above response addresses your concerns. Once again, we thank you for your insightful comments and strong recognition of our work, and we are more than happy to have further discussions with you. &nbsp; --- Rebuttal Comment 1.1: Title: Response to the rebuttal. Thanks! Comment: Dear Authors, Thanks for the detailed response! Most of the questions are addressed! I have a follow-up question and suggestion: 1. I am particularly interested in failure example #3, shown in the rebuttal PDF. I would like to learn from the authors the reason behind it because it is not a challenging case. 2. Regarding Line-Shaped and Polygon-Shaped Differentiable Rasterization: Thanks for the detailed response.
I feel it would be a good summary if the authors could provide several examples of lanes, curbsides, stoplines, crosswalks, intersections, parking areas, traffic lights, traffic signs, and cones in the appendix of the final version to showcase that the proposed method can reconstruct these elements well. --- Reply to Comment 1.1.1: Title: Authors' Response to Reviewer's Follow-Up Inquiry and Suggestion Comment: Thanks for reading our rebuttal. We are glad that most of your concerns have been addressed. As per your follow-up inquiry and suggestion, we respond below. - We attribute _failure case #3_ to two reasons. - _First_, there is ambiguity in the connectivity of the crosswalk ground truth. It is unknown whether the left and right vertical crosswalks should be connected to the horizontal crosswalk or not. In _failure case #3_, the left crosswalk is separate and the right crosswalk is linked to the horizontal one, yet there is no visual clue to distinguish these two patterns. This ambiguity causes failure in detecting the left and right crosswalks. - _Second_, intersections with a yellow box on the road surface are underrepresented in the nuScenes dataset. This causes unsatisfactory results in _failure case #3_, as the network rarely sees such cases during training. - Thank you for your advice. We will surely update the final version accordingly. &nbsp; Once again, thank you for your valuable advice as well as your strong recognition of our work.
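For concreteness, the rasterization-resolution arithmetic described in this rebuttal (a 60m x 30m BEV range at 256x128 pixels gives roughly 0.23m per pixel, and a 120m x 45m range at an unchanged ~0.25m per-pixel size gives 480x180) can be sketched as below. The helper names are illustrative only, not taken from the authors' code.

```python
def per_pixel_size(range_m, resolution_px):
    """Real-world extent (in metres) covered by one rasterized BEV pixel."""
    return tuple(r / p for r, p in zip(range_m, resolution_px))

def resolution_for_range(range_m, pixel_size_m):
    """Raster resolution needed to keep a fixed per-pixel size."""
    return tuple(round(r / s) for r, s in zip(range_m, pixel_size_m))

# nuScenes setup from the rebuttal: 60m x 30m BEV at 256x128 pixels.
print(per_pixel_size((60.0, 30.0), (256, 128)))           # (0.234375, 0.234375)
# The rebuttal's scaling example: 120m x 45m at ~0.25m per pixel.
print(resolution_for_range((120.0, 45.0), (0.25, 0.25)))  # (480, 180)
```

This makes explicit why the two hyper-parameters (range and resolution) should be scaled together: holding the per-pixel size fixed preserves localization precision as the perception range grows.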
Summary: -- Strengths: -- Weaknesses: -- Technical Quality: 3 good Clarity: 3 good Questions for Authors: -- Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: -- Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
null
Rebuttal 1: Rebuttal: &nbsp; We are incredibly grateful for all the helpful comments received. Here, we provide some additional content that could not fit into the nine-page manuscript. This additional content will be included in the camera-ready version, in which one additional page is allowed. &nbsp; &nbsp; ## Potential Limitations The potential limitations of our work primarily concern the reliance on certain hyper-parameters. &nbsp; ### Sensitivity of $\text{AP}_\text{raster}$ to Hyper-Parameters Our designed $\text{AP}_\text{raster}$ relies on certain hyper-parameters, more specifically, the rasterization resolution (480x240) and the dilation pixel number (2 pixels). However, we think this does not constitute a severe limitation, but instead offers better evaluation flexibility. - _First_, the two hyper-parameters have clear practical meanings and are easy to set. In our nuScenes experiments, the bird's-eye-view (BEV) perception range is 60m x 30m. Setting the evaluation rasterization resolution at 480x240 offers a localization precision of 0.125m, satisfying the requirements of autonomous driving. A dilation of 2 pixels makes the evaluation criterion tolerant to a deviation of at most 4 pixels, which equals 0.5m. Users can easily adapt these hyper-parameters to their own needs. - _Second_, we conducted sensitivity analyses in the table below, which demonstrate that $\text{AP}_\text{raster}$ is robust with respect to the two hyper-parameters. Therefore, the performance improvements brought by MapVR under $\text{AP}_\text{raster}$ are genuine and significant.
| | MapTR | MapTR+MapVR (Ours) | |---|:---:|:--:| | AP-raster (320x160, dilation=2) | 34.9 | 41.6 | | **AP-raster (480x240, dilation=2, default metric)** | 24.3 | 31.2 | | AP-raster (480x240, dilation=1) | 15.1 | 20.0 | | AP-raster (640x320, dilation=2) | 18.7 | 24.8 | - _Finally_, it is worth noting that most computer vision evaluation metrics (e.g., COCO's AP, nuScenes' NDS, $\text{AP}_\text{chamfer}$, etc.) have inherent hyper-parameters, and their choices always involve a trade-off between different considerations. The proposed $\text{AP}_\text{raster}$ is no exception, and we believe it provides a valuable and practical tool for assessing the performance of HD map vectorization methods. &nbsp; ### Sensitivity of MapVR to Hyper-Parameters MapVR also introduces hyper-parameters that could affect performance. The major concern is the rasterization resolution. Again, this hyper-parameter is most related to the perception range as well as the perception precision. Take our nuScenes experiments as an example. The perception range is ±30m along the y-axis (vertically) and ±15m along the x-axis (horizontally), yielding a bird's-eye-view (BEV) perception range of 60m x 30m. With a rasterization resolution of 256x128 pixels, each rasterized pixel corresponds to a real-world area of 0.23m x 0.23m. This precision is generally considered sufficient for accurate fine-grained localization supervision. It should be quite simple to customize the hyper-parameters under a particular setup. To determine the appropriate rasterization resolution, it is crucial to proportionally align it with the BEV perception range. For example, if we keep the per-pixel size unchanged, a perception range of 120m x 45m would correspond to a rasterization resolution of 480x180. In addition, fine-tuning the rasterization resolution within a small neighboring range should help achieve better performance.
It is also noteworthy that, as per Table 5a in our manuscript, performance is robust across a certain range of rasterization resolutions (180x90 ~ 320x160). &nbsp; &nbsp; ## Failure Case Analysis Based on Fig. 6 in our manuscript as well as the visualization results in the appendix, it is evident that our method perceives quite accurately when the road structure is highly regularized and there is no occlusion. **In the `pdf` file attached to this 'global rebuttal' message, we present a few failure cases for analysis.** These cases reveal challenges predominantly at complex road intersections, where occlusions, whether from vehicles, construction, or a limited field of view, hamper our system's perception in the bird's-eye-view (BEV). Such occlusions often result in inaccuracies in the predicted vectorized maps. Yet, since road structures typically follow regular patterns, there is an opportunity for improvement. Current map vectorization techniques may benefit from integrating road structure priors and enhancing their reasoning capabilities. Future research could explore merging perception frameworks with road prior information or leveraging knowledge from the standard definition map (SDMap) to address these challenges. Moreover, enhancing map perception during nighttime remains an exciting direction for upcoming works. &nbsp; Pdf: /pdf/e8345a281398a277ad2d68ce3b60a906584ccdb2.pdf
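The dilation-based tolerance described in this rebuttal (a 2-pixel dilation tolerating deviations of up to 4 pixels, i.e., 0.5m at the 480x240 evaluation resolution) can be illustrated with a toy pixel-precision check on binary rasters. This is a minimal sketch of the tolerance idea only, not the authors' actual $\text{AP}_\text{raster}$ implementation, and the function name is made up for illustration.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilated_pixel_precision(pred_mask, gt_mask, dilation=2):
    """Fraction of predicted pixels falling within `dilation` steps of
    the ground-truth raster -- a toy stand-in for the tolerance idea
    behind AP_raster, not the paper's metric."""
    gt_tol = binary_dilation(gt_mask, iterations=dilation)
    pred = pred_mask.astype(bool)
    if not pred.any():
        return 0.0
    return float((pred & gt_tol).sum() / pred.sum())

# A line predicted 1 pixel off the ground truth is fully tolerated
# with dilation=2, while a 3-pixel offset falls outside the tolerance.
gt = np.zeros((16, 16), bool); gt[8, 2:14] = True
near = np.zeros((16, 16), bool); near[9, 2:14] = True
far = np.zeros((16, 16), bool); far[11, 2:14] = True
print(dilated_pixel_precision(near, gt))  # 1.0
print(dilated_pixel_precision(far, gt))   # 0.0
```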
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors explore and analyze the existing HD map framework and its evaluation metrics, and propose to adopt rasterization on top of the existing vectorization-based HD map for more precision. They also propose a rasterization-based evaluation metric rather than the existing Chamfer-distance one. The experimental results on a number of datasets are better compared to the existing methods. Strengths: 1. The task of HD map construction is very important in the 3D autonomous driving community. Interestingly, the authors propose to adopt rasterization on top of the existing vectorization-based HD map for more precision. 2. The paper is easy to follow. 3. The authors conduct experiments on different outdoor datasets, including the widely-used nuScenes, Argoverse2, and their own 6V-mini-v0.4. Weaknesses: 1. Computation/memory footprint comparison. The authors didn't compare their work in terms of memory/speed with existing 3D HD map methods. The time consumption might not be trivial, since the memory/time cost of the customized differentiable rasterizer might be heavy, from my point of view. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the questions that I describe in the Weaknesses part. I would also consider the rebuttal and other reviews. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; Thank you for acknowledging the significance, presentation, and thorough evaluation of our work. We also appreciate your kind suggestion on the computation/memory footprint comparison. We hope the information provided below helps address your concerns, and we will include it in our final version. &nbsp; &nbsp; ## Computation/Memory Footprint (Evaluation Metric) Our empirical tests indicate that our rasterization-based evaluation metric $\text{AP}_\text{raster}$, though more complex, runs at similar speeds to the Chamfer-distance-based metric $\text{AP}_\text{chamfer}$. Specifically, our metric takes about 3 minutes to evaluate on the nuScenes Map validation set, while the Chamfer-distance-based metric takes around 2 minutes. Notably, our proposed evaluation metric can be further accelerated with multi-threading. Both evaluation metrics run sequentially on the CPU, so their memory consumption is negligible. &nbsp; &nbsp; ## Computation/Memory Footprint (Neural-Network Model) ### Training Stage We concur with the reviewer that the runtime/memory consumption would be expensive if the customized differentiable rasterizer were implemented in pure PyTorch/TensorFlow. Given this, we implement the differentiable rasterizer (both forward and backward propagation) in CUDA. Please refer to our code implementation in the supplementary materials for details. Upon acceptance, we intend to release all our code implementations. This will allow our work to serve as a foundation for future research and development. With the CUDA-accelerated differentiable rasterizer, our proposed MapVR incurs only a marginal increase in memory footprint while maintaining computational efficiency during training. The following table presents a detailed comparison between the training costs of our method, `MapTR+MapVR`, and its baseline, `MapTR`. Experiments were conducted with 8x NVIDIA A100 GPUs under the training setups specified in our manuscript.
&nbsp; | Methods | Modality | Backbone | Training Time / Iter | GPU Memory Usage | |:---------------------|:--------:|:--------------:|:--------------------:|:----------------:| | MapTR | C | Res-50 | 0.82 s | 14021 MB | | MapTR | C & L | Res-50 | 1.18 s | 28557 MB | | MapTR + MapVR (Ours) | C | Res-50 | 0.91 s | 14169 MB | | MapTR + MapVR (Ours) | C & L | Res-50 | 1.37 s | 28673 MB | &nbsp; ### Inference Stage The differentiable rasterization is not needed anymore once the training is complete. Hence, our method can improve map vectorization with no additional computation or memory usage during inference. In Table 1 and Table 2 of our manuscript, we have compared the inference speed of different methods for map vectorization. The table below is an updated version of Table 1, which provides more details on inference time, memory consumption, etc. Results are obtained with 1x NVIDIA 3090 GPU. &nbsp; | Method | Modality | Backbone | #Epochs | AP_chamfer_avg | AP_raster_avg | FPS | GPU Memory | |:-------------------|:--------:|:--------:|:-------:|:--------------:|:-------------:|:----:|:----------:| | HDMapNet | C | Effi-B0 | 30 | 23.0 | - | 0.8 | 3264 MB | | HDMapNet | C & L | Effi-B0 | 30 | 31.0 | - | 0.5 | - | | VectorMapNet | C | Res-50 | 110 | 40.9 | 15.0 | 2.9 | 3232 MB | | VectorMapNet | C & L | Res-50 | 110 | 45.2 | - | - | - | | MapTR | C | Res-50 | 24 | 50.3 | 24.3 | 18.4 | 3115 MB | | MapTR | C | Res-50 | 110 | 58.7 | 35.0 | 18.4 | 3115 MB | | MapTR | C & L | Res-50 | 24 | 62.7 | 45.2 | 7.2 | 16760 MB | | MapTR+MapVR (Ours)| C | Res-50 | 24 | 51.2 | 31.2 | 18.4 | 3115 MB | | MapTR+MapVR (Ours)| C | Res-50 | 110 | 58.8 | 38.5 | 18.4 | 3115 MB | | MapTR+MapVR (Ours)| C & L | Res-50 | 24 | 63.5 | 51.1 | 7.2 | 16760 MB | Note: Entries marked with '-' indicate that official results are not reported, and codes or model checkpoints are unavailable. &nbsp; &nbsp; We hope that our response has addressed your concern. 
We welcome further dialogue to address any remaining concerns or doubts. &nbsp; --- Rebuttal Comment 1.1: Title: Authors have addressed most of my concerns. Comment: Thanks for the answers and clarification in the rebuttal, which covered most of my concerns. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you so much for your reply. We are so glad to hear that! We will add these results to our final version. We sincerely appreciate your effort and your advice during the review phase.
null
null
null
null
null
null
Diversifying Spatial-Temporal Perception for Video Domain Generalization
Accept (poster)
Summary: In this work, the authors propose a Spatial-Temporal Diversification Network (STDN) for video domain generalization. First, they introduce a spatial grouping method to summarize the spatial clues in each frame. Then, they further build up spatial-temporal relations in a multi-scale manner. Finally, they show the effectiveness of the method via different experiments on video domain generalization. Strengths: 1. The domain generalization problem is important for video understanding in practice. 2. The paper is written in a good structure, basically. 3. The experiments somewhat show the effectiveness of the design. Weaknesses: 1. Design. I am not quite convinced by the proposed design in the paper. Basically, the spatial grouping (or clustering) and spatial-temporal relation modeling are not particularly designed for domain generalization. They could be used for the traditional video classification problem without any difficulty. Why are these designs important for domain generalization? 2. Experiment. 2.1 The setting is not quite challenging, actually. The datasets basically belong to the same domain. It would be interesting to see a cross-domain setting, like action recognition from dark videos in the UG2+ Challenge. 2.2 It would be interesting to show the results of the traditional video classification setting on popular benchmarks, like Kinetics400 or Something-Something V1 or V2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see the weakness section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Why is the proposed diversity-based modeling important for video domain generalization (VDG)?** RE: Our model is designed for VDG, although it is applicable to traditional video classification. The core idea of our proposed designs, including the Spatial Grouping Module and the Spatial-Temporal Relation Module, is to perceive diverse class-correlated cues in videos. A detailed answer to the question is as follows: (1) As illustrated in our manuscript (Lines 31-32, Figure 1), previous video classification models usually suffer from overfitting to domain-specific cues in the source domain (these domain-specific cues are easy to fit, e.g., class-correlated contexts, as demonstrated by [19, 20, 21]). In VDG, videos from the source and target domains follow different distributions, which means that some class-correlated cues in the source domain may be unseen or not correlated with categories in the target domain. As a result, a model that overfits to cues specific to the source domain will fail to generalize to the target domain. (2) Since the target domain is unseen, it is impossible to explicitly learn cues invariant across the source and target domains from the data. Thus, we propose an alternative approach. (3) We propose to discover diverse class-correlated cues in the source domain, such that our model can leverage different types of cues for recognition in the target domain. A set of rich and diverse class-correlated cues is more likely to include recognition cues that are invariant (shared) across the source and target domains, compared with previous models that overfit domain-specific cues (as discussed in (1)). Thus, our proposed model can generalize better to the target domain. Regarding traditional video classification, it is a very different task from VDG.
In traditional video classification, the training and test videos follow the same distribution (namely, an identical domain), so the class-correlated cues specific to the training domain remain effective in the test domain (this is very different from VDG). In our reply to Q3, we quantitatively demonstrate that our STDN can effectively generalize to videos from unseen test domains and perform well in traditional video classification. #### **Q2. Experiments on dark videos** RE: Our proposed STDN effectively generalizes to dark videos by learning from normal videos. Specifically, we conduct experiments on the challenging HMDB->ARID benchmark following ACAN [R5] (ARID is the dataset from the UG2+ challenge, which consists of dark videos). On HMDB->ARID, methods are implemented using the I3D backbone, and these experiments are conducted using the same augmentation setting. As shown in the table below, our proposed STDN outperforms VideoDG [13] (the previous SOTA) on HMDB->ARID, which demonstrates the effectiveness of our model. | Baseline (TRN [6]) | VideoDG [13] (SOTA) | STDN (Ours) | | :-: | :-: | :-: | | 41.1 | 41.3 | 44.4 | [R5] Xu et al.: Aligning Correlation Information for Domain Adaptation in Action Recognition. TNNLS 2023. #### **Q3. Results on traditional video classification** RE: The results of our proposed STDN vs. TRN (the network our model is founded on) on the validation set of Something-Something-v2 (SSv2) are given in the table below. We use the same training recipe for both methods (e.g., ResNet50 backbone, 5 segments, 80 epochs). As shown in the table, our STDN obtains an improvement over TRN in traditional video classification. Together with our superior performance on video domain generalization (Tables 1 & 2 in our main manuscript), this demonstrates that our STDN can effectively generalize to videos from unseen test domains and perform well in traditional video classification.
| Baseline (TRN [6]) | STDN (Ours) | | :-: | :-: | | 42.6 | 43.2 | --- Rebuttal Comment 1.1: Comment: Thanks for the feedback. The Rebuttal addresses my main concerns. I change my rating to borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for your time and efforts. We are encouraged by your recognition.
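Since this rebuttal repeatedly compares against TRN, the network the authors build on, a minimal sketch of TRN-style multi-scale relation sampling may help readers unfamiliar with it: relations are formed over ordered tuples of frames at several tuple sizes (time scales). This is a generic illustration of that sampling idea under my own naming, not the authors' implementation.

```python
from itertools import combinations

def multiscale_frame_tuples(num_frames, scales):
    """Enumerate ordered frame-index tuples at each time scale,
    as in TRN-style multi-scale temporal relation modeling."""
    return {k: list(combinations(range(num_frames), k)) for k in scales}

# 5 sampled segments, relations over 2-frame and 3-frame tuples.
tuples = multiscale_frame_tuples(5, (2, 3))
print(len(tuples[2]), len(tuples[3]))  # 10 10
```

Each tuple would then be fed through a scale-specific relation function, and the per-scale outputs aggregated for classification.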
Summary: In this manuscript, the authors propose a novel Spatial-Temporal Diversification Network (STDN) for video domain generalization. More precisely, the proposed method introduces a Spatial Grouping Module and a Spatial-Temporal Relation Module to discover various groups of spatial cues within individual frames and to model spatial-temporal dependencies. Experimental results on three benchmarks show the effectiveness of the proposed method. Strengths: This paper is well-written and well-organized. The proposed method achieves state-of-the-art results across three benchmarks. The proposed method is straightforward and interesting. Weaknesses: The Temporal Relation Module is derived from the work cited as [6]. It would be beneficial for the authors to acknowledge this in their manuscript. In my opinion, the proposed spatial grouping module is similar to spatial attention and the KNN model. The authors are suggested to conduct an ablation study to compare these methods for a more thorough analysis. The proposed method is founded on Temporal Segment Networks (TSN). However, the comparative methods use ensembles with various backbones. It would be advisable for the authors to replicate this approach for a fairer comparison. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N.A. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Discussion about TRN [6]** RE: Thank you, we will add discussion about this. Although our proposed Spatial-Temporal Relation Module (STRM) is founded on TRN, they are designed for different tasks with different motivations. Our STRM is designed for video domain generalization, while TRN is for generic video classification. Specifically, our key idea is to explicitly perceive diverse class-correlated cues in videos by modeling multi-scale spatial-temporal dependencies, aiming to alleviate the overfitting of domain-specific cues in the source domain and make the model generalize better in unseen test domains. By contrast, TRN is a multi-scale temporal modeling module that does not address the unique challenges of VDG. In our STRM, we contribute two novel technical designs for perceiving diverse cues: 1) we model multi-scale spatial dependencies within individual frames, which enriches the spatial diversity; 2) we propose a relation discrimination loss to constrain that different temporal dependency modeling functions capture different temporal cues, which enriches the temporal diversity. By enriching the diversity in both space and time dimensions, our STRM discovers diverse class-correlated cues from source videos for effective generalization to unseen target domains. #### **Q2. Analysis of our proposed Spatial Grouping Module (SGM)** RE: We conduct further comparison experiments to demonstrate the effectiveness of our SGM. We compare our SGM with two methods as follows: (1) Spatial Attention (SA): following CBAM [R4], we use $K$ spatial attention modules on top of the backbone to extract $K$ features for subsequent feature modeling. (2) $K$-means Grouping (KG): we use $K$-means algorithm to partition spatial features into $K$ groups, and then the $K$ cluster centers are used as features for subsequent feature modeling. We implement the two methods with our proposed Spatial-Temporal Relation Module (STRM). 
The results on UCF->HMDB are given in the table below, which shows that our SGM outperforms the two comparative methods. Our superiority is attributed to the fact that we force the network to learn different types of features under the guidance of our entropy-based losses, which constrain the distinction between different spatial groups. | STRM+SA | STRM+KG | STRM+SGM (Ours) | | :---: | :---: | :---: | | 56.0 | 55.5 | 58.3 | [R4] Woo et al.: CBAM: Convolutional Block Attention Module. ECCV 2018. #### **Q3. Comparison with methods using various backbones** RE: We would like to clarify that our comparison is fair, since we use ResNet50 as the backbone for both the comparative methods and our STDN. In order to compare with general domain generalization methods on video domain generalization (VDG), we combine these methods with classical temporal modeling modules (i.e., TSN [5], TRN [6], TSM [7] and APN [13]) to adapt to video data. Our proposed STDN has temporal modeling capabilities, so extra temporal modeling modules are unnecessary. All comparative methods and our STDN are based on TSN, i.e., using the sparse temporal sampling strategy. In addition, we conduct experiments based on other backbones. The following table shows a comparison between our STDN and two existing methods (TRN and VideoDG) on UCF->HMDB. Our superior performance demonstrates the effectiveness of our design. | Backbone | TRN | VideoDG [13] (SOTA) | STDN (Ours) | | :---: | :---: | :---: | :---: | | ViT-B/32 | 61.3 | 61.8 | 64.8 | | I3D | 68.0 | 68.7 | 72.1 | --- Rebuttal Comment 1.1: Title: Official comments by Reviewer YkHU Comment: Thanks for the response. However, I still have some concerns about STRM. From my perspective, I cannot see a specific design in STRM for generalization. --- Reply to Comment 1.1.1: Comment: Thank you for your comment. We would like to present a detailed analysis to clarify that our STRM is a module specifically designed for video domain generalization.
Our key idea is to perceive diverse spatial-temporal cues, which is critical and specific to video domain generalization (line 42-49), since it alleviates the overfitting of domain-specific cues in the source domain. Thus, our proposed STRM is diversity-driven, which enriches the feature diversity in both space and time dimensions. **(1)** Time dimension: We extract diverse temporal relation features between frames by explicit dependency modeling at multiple time scales. *More importantly*, we propose a relation discrimination loss to ensure the diversity of temporal relation features, i.e., it constrains the discrimination between temporal relation features across different scales (line 206-218). As shown in the table below, our TRM without the relation discrimination loss $L_{rel}$ has a low value of normalized MSE (i.e., low feature diversity). By contrast, introducing the loss $L_{rel}$ obtains significant improvement in terms of both normalized MSE and ACC. The normalized MSE quantitatively measures the feature diversity by the difference between temporal relation features across different time scales (please refer to Figure 5 for more analysis). | UCF->HMDB | TRM w/o $L_{rel}$ | TRM (Ours) | | :-: | :-: | :-: | | ACC | 53.1 | 55.3 | | Normalized MSE | 0.0434 | 0.3329 | To further verify the effectiveness of our diverse temporal relation features, we evaluate a trained STDN (4 time scales) by dropping features of a specific time scale (STDN-T-$i$ denotes that the $i$-th time scale is dropped for each video). As shown in the table below (UCF->HMDB), dropping any one of the 4 time scales will cause performance degradation compared with the full STDN, which demonstrates the effectiveness of our diversity-driven temporal relation modeling. 
| STDN-T-1 | STDN-T-2 | STDN-T-3 | STDN-T-4 | Full STDN | | :-: | :-: | :-: | :-: | :-: | | 59.2 | 58.1 | 59.4 | 58.9 | 60.2 | **(2)** Space dimension: To unleash the diversity in the space dimension, we extract diverse spatial relation features within individual frames by explicit dependency modeling at multiple space scales. As shown in the table below, our full STRM outperforms our STRM without spatial relation modeling in terms of both ACC and normalized MSE. This demonstrates that our proposed spatial relation modeling can enrich the feature diversity and improve the generalization performance. Here SGM denotes our Spatial Grouping Module. | UCF->HMDB | SGM+TRM | SGM+STRM | | :-: | :-: | :-: | | ACC | 56.7 | 58.3 | | Normalized MSE | 0.5124 | 0.5441 | To further verify the effectiveness of our diverse spatial relation features, we evaluate a trained STDN (3 space scales) by dropping features of a specific space scale (STDN-S-$i$ denotes that the $i$-th space scale is dropped for each video). As shown in the table below (UCF->HMDB), dropping any one of the 3 space scales will cause performance degradation compared with the full STDN, which demonstrates the effectiveness of our diversity-driven spatial relation modeling. | STDN-S-1 | STDN-S-2 | STDN-S-3 | Full STDN | | :-: | :-: | :-: | :-: | | 59.5 | 59.3 | 59.7 | 60.2 | In summary, our STRM is a diversity-driven module that possesses spatial-temporal dependency modeling capability with the relation discrimination loss as guidance, thus it is specifically designed for video domain generalization.
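The "normalized MSE" diversity numbers quoted in this reply can be computed, in spirit, as a mean pairwise MSE between normalized per-scale feature vectors. The exact normalization the authors use is not specified in the rebuttal, so the following is a hedged sketch only, with an L2 normalization assumed and the function name invented for illustration.

```python
import numpy as np

def normalized_mse_diversity(scale_feats):
    """Mean pairwise MSE between L2-normalized per-scale feature
    vectors; higher means more diverse (less redundant) features.
    A guess at the rebuttal's 'normalized MSE', for illustration."""
    feats = [np.asarray(f, float) for f in scale_feats]
    feats = [f / (np.linalg.norm(f) + 1e-8) for f in feats]
    n = len(feats)
    pair_mses = [np.mean((feats[i] - feats[j]) ** 2)
                 for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pair_mses))

print(normalized_mse_diversity([[1.0, 0.0], [0.0, 1.0]]))  # ~1.0 (orthogonal)
print(normalized_mse_diversity([[1.0, 0.0], [2.0, 0.0]]))  # ~0.0 (collinear)
```

Under this reading, a value near zero (as for TRM without $L_{rel}$) indicates that features across time scales have collapsed to nearly the same direction, while larger values indicate genuinely different cues per scale.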
Summary: The paper addresses the problem of video domain generalization for the classification task. The core idea of the paper is to enhance the diversity of class-correlated cues in both spatial and temporal dimensions, with the assumption that this diverse pool is more likely to capture domain-generalizable features. To capture diversity in the spatial dimension, a spatial grouping module is proposed, which forms K integrated features representing K groups of features by aggregating the spatial features in each group. Two different entropy-based losses are used to ensure diversity in spatial cues. Next, the paper learns spatial relation features by sampling the integrated features from different space scales. These spatial relation features, limited to space only, are then leveraged to learn temporal relation features. To improve the effectiveness of the temporal relation features, a relation discrimination loss is also used to avoid collapse of the learned temporal relation features. The overall loss for optimization is composed of a task-specific loss, two entropy losses, and a relation discrimination loss. Experiments have been conducted on three different datasets, and the results claim better performance than the existing method and other image-based DG baselines. Strengths: 1) Generalizing to novel domains for the video modality is an important and challenging task, and it carries several applications in the real world. Also, not much work has been done on video domain generalization. 2) The idea of spatial-temporal relation features is interesting for capturing diverse class-correlated cues in search of domain-invariant features in video data. 3) Results claim to demonstrate better performance against competing methods on all three datasets, including EPIC-Kitchens-DG, UCF-HMDB, and Jester-DG. 4) Ablation studies show the clear performance contribution of different components of the proposed method.
Weaknesses: 1) The analysis of spatial grouping using t-SNE in Fig. 6 is not very convincing. It is not clear how this diagram justifies the claim that the spatial grouping mechanism actually groups the spatial features. It is important to better understand, either by visualization or some other quantitative measure, what the clustering ability of the spatial grouping mechanism is. 2) It is not very clear how the diverse class-correlated cues in space and time, for which the spatial-temporal diversity module is developed, constitute the domain-invariant information that the paper claims to extract (L:6). Fig. 6 uses Grad-CAM visualization to show the attention heatmaps, but it is very general and could apply to any classification task. 3) In Table 1, the improvement from UCF to HMDB over VideoDG [13] is not very encouraging. The paper doesn't discuss any potential reasons for this. In fact, the performance improvement from UCF to HMDB is less than that of VideoDG [13] if MixStyle [67], which is an off-the-shelf component, is not used in the overall framework. 4) What is the performance of the method when only using MixStyle [67] on the datasets used in Table 3? Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper tackles the problem of generalizing the video classification task to novel (unseen) domains. Overall, the paper presents interesting technical contributions aimed at capturing diverse spatial-temporal features across multiple time scales, and the results show that the method is effective against the existing method and image-based baselines. However, there are some concerns/questions (listed in weaknesses 1-4), due to which my initial rating for the paper is 'weak accept'. Also: What 'model selection criterion' is used to report the results? What is the performance of the method under different types of domain shifts, and how does the performance vary as the domain shift increases?
What would be the performance gain compared to the baseline with a different backbone, such as a vision transformer-based one? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The supplementary material mentions that the work doesn't consider the multi-modal nature of video data, as it contains different modalities such as RGB data, optical flow, and audio. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. About the t-SNE visualization** RE: In Figure R1 (please see the PDF in the global response), we show an improved version of Figure 6, which includes a quantitative analysis of cluster separation. According to Figure R1, we qualitatively (t-SNE) and quantitatively (Davies-Bouldin Index) verify that our proposed Spatial Grouping Module (SGM) extracts spatial features with better clustering than the baseline (we use $K$-means for both methods to cluster spatial features for visualization). The better clustering of our SGM indicates that the features from different spatial groups encode different spatial information. In addition, we use Grad-CAM visualization to qualitatively compare our SGM with the baseline. As shown in Figure R2, our SGM focuses on different class-correlated cues by using features from different spatial groups, while the baseline focuses on very similar regions across different groups. This demonstrates that our SGM can extract diverse class-correlated spatial cues. #### **Q2. How does the proposed model extract domain-invariant cues?** RE: We would like to clarify that our proposed STDN is not an approach that explicitly extracts domain-invariant cues between the source and target domains, since target data are not accessible during training in video domain generalization (VDG). Accordingly, we propose an alternative approach for VDG. Our STDN perceives diverse class-correlated cues in the source domain, such that the model can leverage various types of cues for recognition in the target domain. The set of rich and diverse class-correlated cues is more likely to include recognition cues that are invariant (shared) across the source and target domains, compared with previous models. Our quantitative analysis in Figure 5 demonstrates that our model can improve the feature diversity, namely that diverse information is encoded in our learned features.
Moreover, the state-of-the-art performance on three benchmarks (Table 1&2) and the ablation study (Table 3) demonstrate better generalization in target domains, which indicates that our model effectively discovers domain-invariant cues across the source and target domains. #### **Q3. Comparison with VideoDG [13]** RE: The VideoDG method actually involves a strong data augmentation technique, i.e., Robust Adversarial Domain Augmentation (RADA). By dropping RADA from VideoDG and dropping MixStyle from our STDN, we conduct a fair comparison between our STDN and VideoDG. As shown in the table below, our STDN outperforms VideoDG on both UCF->HMDB and HMDB->UCF under the same augmentation setting. | | UCF->HMDB | HMDB->UCF | | :-: | :-: | :-: | | VideoDG w/o RADA [13] | 54.3 | 71.4 | | STDN w/o MixStyle (Ours) | 58.3 | 76.2 | It is an interesting question why VideoDG obtains a larger improvement over the baseline on UCF->HMDB than on other benchmarks. We conjecture that it is because the adopted RADA augmentation well simulates the distribution shift from UCF to HMDB (especially with the Adversarial Pyramid Network for temporal modeling, as shown by the ablation study in Table 1 of [13]). Even though VideoDG performs well on UCF->HMDB, our STDN outperforms it. Besides, our STDN obtains superior performance on the other benchmarks, which verifies the effectiveness and versatility of our design. #### **Q4. Performance with only MixStyle [37]** RE: The performance of using only MixStyle is given in the following table. As shown in the table, our proposed STDN obtains a significant improvement over MixStyle, which demonstrates the effectiveness of our proposed design. | | UCF->HMDB | HMDB->UCF | | :-: | :-: | :-: | | MixStyle | 55.7 | 73.5 | | Full STDN (Ours) | 60.2 | 77.1 | #### **Q5. Model selection criterion** RE: Following VideoDG [13], we conduct model selection according to the validation set of the source domain. #### **Q6.
Analysis of different types of domain shifts** RE: We are glad to discuss this interesting question. As demonstrated by the experimental results, our proposed STDN obtains substantial improvements over previous SOTAs under different domain shifts, e.g., environment change (EPIC-Kitchens-DG), subclass change (Jester-DG) and large illumination shift (HMDB->ARID, as shown in Q2 of Reviewer SFFk). These results demonstrate that our method is a promising solution for video domain generalization. Regarding the performance variation at different levels of domain shift, it is very challenging to analyze this using existing video domain generalization benchmarks. We thank you for this valuable idea for our future work, and we will attempt to quantitatively analyze this problem in the future (e.g., by constructing new benchmarks). #### **Q7. Results based on other backbones** RE: We conduct a comparison with TRN [6] and VideoDG [13] (previous SOTA) on UCF->HMDB using ViT-B/32 and I3D as backbones. As shown in the table below, our proposed STDN outperforms VideoDG based on both backbones, which demonstrates the effectiveness of our proposed STDN. | Backbone | TRN [6] | VideoDG [13] (SOTA) | STDN (Ours) | | :-: | :-: | :-: | :-: | |ViT-B/32 | 61.3 | 61.8 | 64.8 | | I3D | 68.0 | 68.7 | 72.1 | --- Rebuttal Comment 1.1: Comment: I have gone through the rebuttal and other reviews. The authors have provided adequate responses to most of the questions, including comparisons with VideoDG [13], performance with only MixStyle [37] and clarification on the model selection criterion. It would be better to include some clear examples in the response to Q2. The authors are strongly encouraged to include the rebuttal responses in the main draft of the paper. I would be inclined toward accepting this paper and would like to hear the opinion of fellow reviewers on the rebuttal. --- Reply to Comment 1.1.1: Comment: Thanks for your recognition.
We are delighted and encouraged that our responses have resolved most of your concerns. We would like to provide you with an intuitive explanation of our idea using the following example. Suppose people always play football on professional fields in the source domain; previous models would then prefer to recognize the action by the static fields. However, when people play football on a basketball court in the target domain, those models would not recognize the action. To address this, our work proposes to capture rich and diverse cues, e.g., the fields and the act of kicking a ball (invariant across the two domains), leading to effective recognition in unseen target domains. In addition to Figure 4, we show an extra example in Figure R2 (the PDF in the global response), which demonstrates that our model captures different spatial cues with different spatial groups separately. We will provide clearer examples and also include the rebuttal responses in our main manuscript.
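The Davies-Bouldin Index that the rebuttal above uses to quantify cluster separation (Figure R1) is a standard clustering metric. As a reference, here is a minimal NumPy sketch, independent of the authors' code; the function and variable names are illustrative:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin Index: lower values indicate more compact,
    better-separated clusters."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # s[i]: mean distance of cluster i's members to its centroid
    s = np.array([np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
                  for i, c in enumerate(clusters)])
    k = len(clusters)
    # for each cluster, take the worst (largest) compactness-to-separation ratio
    worst = [max((s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                 for j in range(k) if j != i)
             for i in range(k)]
    return float(np.mean(worst))
```

Lower values are better; in the rebuttal's comparison, the SGM features would be expected to yield a lower index than the baseline features under the same $K$-means labels.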
Summary: The paper proposes the Spatial-Temporal Diversification Network (STDN) for Video Domain Generalization (VDG). VDG is a new problem that is similar to video domain adaptation, but more challenging because no unlabeled videos from the target domain are provided. STDN is mainly designed around two modules: Spatial Grouping (soft clustering) and Spatial-Temporal Relation (a similar idea to TRN [6]). Experiments are done on 3 different benchmarks: UCF-HMDB, EPIC-Kitchens-DG, Jester-DG, with good improvements over baselines. The written presentation is clear and mostly easy to read. Strengths: * The motivation of the proposed method is clearly presented and experimental results are solid, i.e., good improvements over baselines. * Various ablations and qualitative analyses provide a better understanding of the proposed method. * The written presentation is clear and easy to read and understand. Weaknesses: * Missing a direct comparison with [14]; even though [14] may use an additional audio modality, an attempt to compare with [14] would make the paper more solid. * Although the problem of general domain generalization has been studied recently, video domain generalization is less explored, which can be either good (this paper and a few others [13,14] are among the first ones) or bad (the problem is too small with limited impact). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Given the proposed method works well on VDG, it is convincing that STDN can learn a diverse set of spatial and temporal features. It is natural to ask if STDN also works on generic video classification problems, e.g., on Kinetics or Something-Something-v2? It would be great if it works; if not, knowing where it falls short may give further insights for video understanding. - The Spatial-Temporal Relation Module shares some similarity with TRN [6]; it would be nice to have a few sentences to compare and contrast.
* minor comments: - citation format needs further correction, e.g., [77] Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The reviewer does not foresee any potential negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Comparison with AVRNA [14]** RE: We conduct experiments to compare our STDN with AVRNA on UCF->HMDB. Since AVRNA involves the audio modality but our work focuses on the RGB modality, we implement two variants of AVRNA for comparison as follows: (1) Hard Norm Alignment loss (HNA): apply the HNA loss (Eq. (4) of [14]) for normalizing video features. (2) Relative Norm Alignment loss (RNA): first divide the source domain into two subdomains following [R2, R3] (as the target domain is not accessible), and then apply the RNA loss (Eq. (3) of [14]) for aligning feature distributions of the two subdomains. We implement these two variants based on TRN, and all of these experiments are conducted under the same augmentation setting. As shown by the table below, our STDN significantly outperforms the two variants of AVRNA on UCF->HMDB, which demonstrates the effectiveness of our model. | Baseline (TRN) | HNA | RNA | STDN (Ours) | | :-: | :-: | :-: | :-: | | 53.1 | 54.3 | 55.8 | 58.3 | [R2] Zhang et al.: Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning. NeurIPS 2022 [R3] Yang et al.: Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors. ICLR 2023 #### **Q2. Significance of studying video domain generalization** RE: In our opinion, video domain generalization (VDG) is a critical research area to develop robust video classification models capable of effectively generalizing to unseen test domains. The setting of VDG aligns closely with real-world applications, since models would face unfamiliar scenarios in practice. Compared with the widely studied general domain generalization that focuses on image data, VDG focuses on more complex video data with an extra time dimension. 
Therefore, VDG would suffer from large and complex domain shifts (e.g., variations of motion, unexpected absence or misalignment of short-term snippets), which cannot be addressed by general domain generalization methods. Thus, advanced spatial-temporal modeling methods should be developed to address VDG. Hoping to further the development of this field, our work designs two new benchmarks for evaluating VDG methods, with numerous reproduced baselines. As shown in Table 1&2, general domain generalization methods perform poorly in VDG. By contrast, our work proposes a diversity-based spatial-temporal modeling approach tailored to the challenges of VDG, which achieves superior performance on all three benchmarks. #### **Q3. Performance on generic video classification** RE: The results of our proposed STDN vs. TRN on the validation set of Something-Something-v2 (SSv2) are given in the table below. We use the same training recipe for both methods (e.g., ResNet-50 backbone, 5 segments, 80 epochs). As shown in the table, our STDN can obtain an improvement over TRN in generic video classification. Together with our superior performance on video domain generalization (Table 1&2), this demonstrates that our STDN can effectively generalize to videos from unseen test domains and perform well in generic video classification. | Baseline (TRN [6]) | STDN (Ours) | | :-: | :-: | | 42.6 | 43.2 | #### **Q4. Discussion about TRN [6]** RE: Thank you, we will add this discussion to our manuscript. Overall, our proposed Spatial-Temporal Relation Module (STRM) is different from TRN, although STRM is designed based on TRN. Our STRM is designed for video domain generalization (VDG), while TRN is for generic video classification.
Specifically, our STRM proposes to perceive diverse class-correlated cues in videos by modeling multi-scale spatial-temporal dependencies, aiming to alleviate the overfitting to domain-specific cues in the source domain and make the model generalize better in unseen test domains. By contrast, TRN is a multi-scale temporal modeling module that does not address the unique challenges of VDG. Technically, our STRM differs from TRN in two aspects: 1) our STRM models multi-scale spatial dependencies within individual frames, which enriches the spatial diversity; 2) we propose a relation discrimination loss to ensure that different temporal dependency modeling functions capture different temporal cues, which enriches the temporal diversity. By enriching the diversity in both the space and time dimensions, our STRM discovers diverse class-correlated cues from source videos for effective generalization to unseen test domains. #### **Q5. Citation format** RE: Thank you, we will improve this. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: The rebuttal partly addressed my concerns. More specifically, I appreciate the direct comparison with [14]; however, I am not convinced that the difference with TRN is significant. Since I appreciate the effort the author(s) put into the comparison with [14], I am leaning toward accepting this paper, but my rating is still borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for your recognition. We would like to present more details about our work, in order to clarify the differences between our STRM and TRN. To tackle the unique challenges of video domain generalization (line 31-32), our key idea is to perceive diverse class-correlated cues in videos. Although the design of our STRM draws valuable inspiration from the classical TRN, they have important technical differences, as summarized in the following table.
Due to these differences, our STRM can perceive diverse spatial-temporal cues, whereas TRN cannot. | | Multi-scale Temporal relation | Multi-scale Spatial relation | Relation discrimination | | :-: | :-: | :-: | :-: | | TRN [6] | $\checkmark$ | | | | STRM (Ours) | $\checkmark$ | $\checkmark$ | $\checkmark$ | We emphasize that our STRM is a straightforward yet effective adaptation of TRN for video domain generalization, which injects diversity-driven modeling capability into TRN. Specifically, our STRM improves upon TRN to endow it with spatial-temporal dependency modeling capability. More importantly, our relation discrimination loss constrains the temporal relation learning process, which is critical for learning diverse features. In what follows, we illustrate the differences between our STRM and TRN in detail. **(1)** We propose the *relation discrimination loss* $L_{rel}$ to ensure the diversity of temporal relation features, i.e., it enforces discrimination between temporal relation features across different scales (line 206-218). It is a simple yet effective loss that improves the temporal diversity for better video domain generalization. As shown in the table below, the classical TRN has a low value of normalized MSE (i.e., low feature diversity). By contrast, our TRM with the loss $L_{rel}$ significantly outperforms TRN in terms of both normalized MSE and ACC. The normalized MSE quantitatively measures the feature diversity via the difference between temporal relation features across different time scales (please refer to Figure 5 for more analysis). | UCF->HMDB|TRN [6]|TRM (Ours, TRN+$L_{rel}$)| | :-: | :-: | :-: | |ACC|53.1|55.3| |Normalized MSE|0.0434|0.3329| **(2)** To enrich the spatial diversity, we extract *spatial relation* features by explicit dependency modeling at multiple space scales, while the original TRN ignores the space dimension.
As shown in the table below, our full STRM outperforms our STRM without spatial relation modeling in terms of both ACC and normalized MSE. This demonstrates that our proposed spatial relation modeling can enrich the feature diversity and improve the generalization performance. | UCF->HMDB | SGM+TRM | SGM+STRM | | :-: | :-: | :-: | | ACC | 56.7 | 58.3 | | Normalized MSE | 0.5124 | 0.5441 | In addition, our STRM addresses *a non-trivial technical challenge* of applying our Spatial Grouping Module (SGM). - Our SGM proposes to extract various spatial cues of different types, leading to features of $K$ spatial groups as the output. How to integrate the features of these $K$ spatial groups and produce an integrated frame-level feature for further temporal modeling is a non-trivial challenge. - There are some optional schemes for feature integration: a) Avg: average features over different spatial groups; b) Cat: concatenate features of different spatial groups; c) SGM-t: a modified SGM that conducts grouping over spatial features from all frames of each video. We conduct an empirical analysis of these schemes on UCF->HMDB, and the results are shown in the following table. | SGM+Avg+TRM|SGM+Cat+TRM|SGM-t+TRM|SGM+STRM (Ours)| | :-: | :-: | :-: | :-: | |55.8|56.7|56.0|58.3| - Compared with these schemes, our STRM is more reasonable, and our STRM outperforms the other schemes by large margins as shown in the table above. **(3)** Overall: As shown in the following table, our full STRM significantly outperforms the classical TRN on UCF->HMDB, which is attributed to our diversity-driven design (as discussed in (1) and (2) above). |SGM+TRN [6]|SGM+STRM (Ours)| | :-: | :-: | |55.1|58.3| In summary, our STRM is a straightforward yet effective approach inspired by TRN, which improves spatial-temporal diversity for effective video domain generalization.
In addition to our STRM, our work has two other important contributions as follows (please refer to line 48-60 for more details): 1) We propose Spatial Grouping Module to enrich the spatial diversity by embedding a clustering-like process, which is an important technical contribution. 2) We design two new benchmarks with numerous reproduced baselines (existing works publish only a limited number of benchmarks), which will further the development of the video domain generalization field.
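The "normalized MSE" reported in the rebuttal above measures how different the relation features at different time scales are. The paper's exact normalization is not given in this thread, so the following NumPy sketch is only an illustrative proxy: the mean squared difference between L2-normalized feature vectors over all scale pairs.

```python
import numpy as np

def normalized_mse_diversity(feats):
    """feats: list of (D,) relation features, one per time scale.
    Illustrative proxy for a 'normalized MSE' diversity score: mean squared
    difference between L2-normalized features across all scale pairs --
    higher values mean the scales encode more distinct (diverse) cues."""
    z = [f / (np.linalg.norm(f) + 1e-12) for f in feats]
    n = len(z)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([np.mean((z[i] - z[j]) ** 2) for i, j in pairs]))
```

Identical features score 0 and more distinct features score higher, matching the direction of the numbers quoted in the rebuttal (low for TRN, higher with $L_{rel}$ and spatial relation modeling).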
Rebuttal 1: Rebuttal: Thanks to all reviewers for your constructive comments. We are encouraged that the reviewers found that our work studies an important and practical problem (Reviewer m6Ue, SFFk) with a clear motivation (Reviewer JaYX), proposes a straightforward and interesting idea (Reviewer m6Ue, YkHU), presents comprehensive and solid experimental analysis or obtains state-of-the-art performance (Reviewer bwaF, JaYX, m6Ue, YkHU), and is well written (Reviewer bwaF, JaYX, YkHU, SFFk). We have carefully addressed your concerns and provided detailed responses to each review. Pdf: /pdf/06db4e372d18e613c55751f47857f2fc5aab930c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents STDN, a spatio-temporal diversification network designed for domain generalization. It introduces a spatial grouping module that effectively groups features from individual frames across different spatial regions. Additionally, a spatio-temporal relation module is proposed to model spatial-temporal correlations at multiple scales. The experiments demonstrate the network's good performance on three benchmarks. Strengths: - The paper is well written and easy to follow - The results on the four setups are comprehensive (I do have concerns on the results, please also read the next section) Weaknesses: Regarding the methodology: - The author asserts that domain-specific cues are crucial for achieving good generalization (L43-45). Subsequently, the paper states that spatial grouping is employed to enhance the diversity of spatial modeling. It is necessary to provide further justification as to why this diversity aids in learning domain-specific features instead of introducing noise. - The term "domain-specific feature" is frequently used in this paper; however, it is never explicitly defined. Moreover, the results fail to substantiate that the proposed method effectively learns these domain-specific features. Taking the basketball example into consideration, the backboard can be regarded as the domain-specific feature within the training set due to its construction, and the basketball itself might not be, as you can kick a basketball; so what is the domain-specific feature? Should it be data-driven or manually defined? When the paper claims that the proposed method can learn more representative features, supporting evidence must be provided. Currently, aside from visualization (which will be discussed later), there is a lack of evidence to support the theory that the proposed method effectively learns domain-specific features.
- It is important to note that improved results do not necessarily demonstrate that the proposed method resolves the domain generalization problem. For instance, if a stronger backbone were employed, significantly better performance could be achieved under the same experimental conditions; however, this would not imply that the stronger backbone more effectively addresses the domain generalization issue. To this end, the most straightforward experiment would be comparing against baselines using an I3D backbone, as I3D learns spatio-temporal features without grouping and multi-scale modeling. Concerning the results: - Firstly, it should be noted that the works listed for comparison in Table 1 are incomplete. The authors could easily find numerous works on UCF-HMDB that exhibit considerably better performance. - Given the substantial emphasis placed on spatial modeling and spatio-temporal modeling, it is crucial to compare the proposed method against previous works that utilize 3D backbones, such as CoMix (refer to https://arxiv.org/pdf/2110.15128.pdf), which demonstrates superior performance. Additionally, it is important to include the set of baselines cited by this paper and establish a fair comparison, such as using ResNet101 as the backbone. - It is worth pointing out that there may be instances where Grad-CAM highlights the "domain-specific" features, yet the network makes incorrect classifications. Thus, Grad-CAM alone cannot serve as conclusive evidence for improved domain generalization. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comments above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Overall clarification of our idea** We would like to clarify that our key idea is to discover diverse class-correlated cues in videos, aiming to *alleviate the overfitting to domain-specific cues* in the source domain, NOT to focus on learning domain-specific cues, as stated in our introduction (Line 31-32, 43-48). As diverse class-correlated cues are discovered from the source domain, our STDN can leverage various types of cues for recognition in the target domain. Compared with previous models, the set of diverse class-correlated cues is more likely to include recognition cues that are invariant (shared) between the source and target domains, leading to better generalization in the unseen target domain. #### **Q1. Concern about noise** RE: First, as illustrated in the overall clarification, our key idea is to perceive diverse class-correlated cues (NOT to focus on learning domain-specific cues) for video domain generalization. To further justify the effectiveness of the diverse features learned by our Spatial Grouping Module, we evaluate a trained STDN (4 spatial groups) by dropping features from a specific spatial group (STDN-$i$ denotes that the $i$-th group is dropped for each video). As shown in the table below (UCF->HMDB), dropping any one of the 4 spatial groups causes performance degradation compared with the full STDN, which demonstrates that our learned diverse spatial features encode effective class-correlated information rather than noise. | STDN-1 | STDN-2 | STDN-3 | STDN-4 | Full STDN | | :-: | :-: | :-: | :-: | :-: | | 59.7 | 59.5 | 59.7 | 59.1 | 60.2 | #### **Q2. About the domain-specific cues** RE: First, as illustrated in the overall clarification, our proposed STDN aims to perceive diverse class-correlated cues (NOT to focus on learning domain-specific cues) for video domain generalization (VDG). Second, we clarify the definition of domain-specific cues.
In our work, domain-specific cues are learned from the data rather than manually defined, similar to [48, 19, 20, 21]. In principle, domain-specific cues are recognition cues that are associated with video categories in one domain (e.g., source) but do NOT have a correlation with the (identical) categories in another domain (e.g., target). Taking the EPIC-Kitchens-DG (EPIC) benchmark as an example, videos from different domains are recorded in different environments (native kitchens), and thus the domain-specific cues learned from an EPIC domain should be cues from the environment that are statistically correlated with the video categories (e.g., specific decorations). Our work is motivated by the observation that previous video classification models are prone to overfit to some domain-specific cues in the source domain (e.g., class-correlated contexts, as demonstrated by [19, 20, 21]), which impairs the generalization performance in unseen target domains. Accordingly, we propose a diversity-based approach to tackle VDG. In Figure 5, we quantitatively demonstrate that our proposed STDN can improve the feature diversity, i.e., diverse information is encoded. #### **Q3. Experiments based on the I3D backbone** RE: The results based on the I3D backbone on UCF->HMDB are shown in the table below, and all methods are implemented under the same augmentation setting. As shown in the table, our proposed STDN outperforms VideoDG (previous SOTA), which demonstrates the effectiveness of our proposed design. | Baseline (TRN [6]) | VideoDG [13] (SOTA) | STDN (Ours) | | :-: | :-: | :-: | | 68.0 | 68.7 | 72.1 | #### **Q4&Q5. Incomplete comparison on UCF-HMDB (Table 1)** RE: The works listed for comparison on UCF-HMDB are complete. The works you mentioned that have better performance (e.g., CoMix [41]) are designed for video domain **adaptation** (VDA), which is a different task from video domain **generalization** (VDG).
In VDA, unlabeled videos from the target domain are accessible for training, while VDG (our setting) cannot access any target videos. A comparison between VDA and VDG methods is not fair, since VDA methods can leverage extra target videos for training and naturally achieve better performance. As for the comparison results using a 3D backbone, please refer to Q3. #### **Q6. About the Grad-CAM visualization** RE: First, as illustrated in the overall clarification, our key idea is to perceive diverse class-correlated cues (NOT to focus on learning domain-specific cues) for video domain generalization. The Grad-CAM visualization in Figure 4 is a qualitative analysis to intuitively illustrate that our model can discover diverse class-correlated cues. We also conduct a quantitative analysis in Figure 5, and the results show that our proposed STDN can improve the feature diversity. Furthermore, the comparison experiments on three different benchmarks (Table 1&2) and the ablation study (Table 3) quantitatively demonstrate the effectiveness of our proposed diversity-based modeling. In our reply to Q1, we quantitatively demonstrate that our model learns effective class-correlated cues rather than noise. --- Rebuttal Comment 1.1: Title: Thanks for the additional information Comment: I carefully read the rebuttal and it addressed most of my concerns. I am willing to raise my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your time and efforts. We are encouraged by your recognition.
Deep Fractional Fourier Transform
Accept (spotlight)
Summary: This paper introduces the Fractional Fourier Transform to provide comprehensive unified spatial-frequency perspectives for deep learning, and further introduces a basic operator, the Multi-order Fractional Fourier Convolution. Besides, the paper experimentally evaluates the effectiveness of MFRFC on various computer vision tasks, including object detection, image classification, guided super-resolution, denoising, dehazing, deraining, and low-light enhancement. Strengths: There are several strengths here: 1. This paper is of high novelty and originality. The Fourier transform is very important and is employed by many deep learning-based methods. But the Fractional Fourier transform, a generalized version of the Fourier transform, is less explored in the deep learning era. This paper introduces FRFT into deep learning, analyses the properties of FRFT, and designs a basic convolutional operator. 2. This paper achieves a fast implementation of FRFT, which is very important for the future development of FRFT in the deep learning pipeline. 3. This paper validates the effectiveness of the MFRFC operator on various computer vision tasks, including high-level tasks (object detection and image classification) and low-level tasks (guided super-resolution, denoising, dehazing, deraining, and low-light enhancement). 4. This paper is clearly written and easy to follow. Weaknesses: There are several weaknesses here: 1. The FRFT itself has a long history and has been explored in many research areas. The discussion in the related work part is limited and not comprehensive. For example, the recent work [1] also employs FRFT in image super-resolution. 2. It is necessary to explain the design of the MFRFC operator and why such a design is optimal. The MFRFC operator employs three paths. But is it possible to apply more paths in the operator? How do the performance and parameter counts compare between different numbers of paths? 3.
The MFRFC operator integrates multiple paths/domains, which is superior to a single path. But how does the operator with only the fractional branch perform compared to the other single paths? Can a single fractional path achieve comparable performance to the full MFRFC operator? [1] Adaptive Image Super-Resolution Algorithm Based on Fractional Fourier Transform. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I only have several concerns here: 1. Does there exist a constant optimal fractional order for each task? Besides, the authors explore the optimal fractional order via learnable parameters. Does there exist another traditional method that determines the optimal fractional order? 2. Non-stationary image signals can be better decoupled with FRFT, as shown in Figure 1. Can you show some specific examples? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**1: The research history of FRFT in many research areas.** Thanks for the suggestion; we will add a comprehensive discussion of the research history of FRFT in other research areas to the related work. The work you mentioned is quite simple and is not deep learning based, so we did not include it in the submission. We will also add it to the related work.

**2: The design choice of the MFRFC operator.** We devise the MFRFC operator from two main perspectives. (1) The number of branches. Our work takes FFC [1], a spatial-and-frequency two-branch operator, as the baseline. To verify the effectiveness of FRFT, we add one more fractional-order branch. The two-branch baseline is designed in the same manner as our three-branch MFRFC operator, with the only difference being the fractional branch. Such a design is enough to demonstrate the superiority of FRFT over the Fourier transform in a fair setting. (2) The order of the fractional domain. The order selection of the fractional domain is more important. To this end, we experiment with different fractional orders in the supplementary material. Besides, we also conduct experiments with an adaptive order. The adaptive-order version achieves nearly optimal performance among different fractional orders. Besides, our selected order=0.5 in the main manuscript also achieves relatively good performance among different fractional orders. [1] Fast Fourier Convolution. NeurIPS, 2020.

**3: The performance of the operator with only the fractional branch.** It is a good question, and we explore the performance of the operator with only the fractional branch in the supplementary material. We choose two representative tasks: object detection with Faster RCNN as the backbone and guided image super-resolution with PanNet as the backbone. The experimental results show that a single branch performs slightly worse than the integrated operators SFC and MFRFC.
For the comparison between different single branches, the spectral branches (order=0.5 and order=1) perform slightly better than the spatial branch (which is also the baseline).

**4: About the optimal fractional order.** (1) We hold the belief that the optimal fractional order of an image depends on the task. In other words, it depends on what we want to disentangle from the image. This means that the optimal fractional order is related to the data and the task. Thus, for tasks in the deep learning paradigm, a fixed optimal fractional order may not exist, since the data are different and uncertain. (2) A previous method [1] employs entropy maximization to determine the optimal fractional order in the hyperspectral anomaly detection task. But that method is coupled to the specific task and data format, and it is not deep learning based. In contrast, setting an adaptive order with learnable parameters is a more general approach in the deep learning paradigm. [1] Hyperspectral anomaly detection by fractional Fourier entropy. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.

**5: Specific examples.** It is a good question, since specific examples are intuitive for a better understanding of FRFT. We show the decoupling capacity of FRFT for non-stationary signals with a previous work [1]. This work explores decoupling degradation from the degraded image with FRFT. FRFT can well decouple the degradation and the clean image, filter out the degradation, and return the clean image in this case. This well supports our work with specific examples. [1] Optimal image restoration with the fractional Fourier transform. JOSA A, 1998.

---

Rebuttal Comment 1.1: Title: Response to Authors' Comments Comment: Many thanks for your response. After carefully reading the authors' rebuttal and the other reviewers' comments, my concerns have been addressed. Overall, this paper makes sufficient contributions with convincing experiments, and I suggest it be accepted.
Summary: + The paper provides an implementation framework using deep learning for the fractional Fourier transform.

Strengths: + A good deep algorithm for FRFT.

Weaknesses: - Baselines for different applications are a bit dated. - The paper could have focused on just one application and treated it in more depth. - The writing is not that good; it needs to be precise and provide enough motivation. Addressed.

Technical Quality: 2 fair Clarity: 2 fair

Questions for Authors: - Why is there no fast algorithm for FRFT? - Can you comment on the accuracy of deep FRFT on simulated signals compared to others? Answered.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**1: Baseline methods for different tasks.** (1) Our method is not SOTA-oriented. Instead, the key contribution of this paper is that we unlock FRFT in the deep learning paradigm, solving the biggest challenges for the popularization of FRFT: vague characteristics and a missing fast implementation. The Fourier transform is very useful and vastly employed in the deep learning era. As a generalized and improved version of the Fourier transform, FRFT surely has great application prospects and exploration value. (2) We validate the effectiveness of FRFT on various tasks with classical and commonly acknowledged baselines. We select the classical methods as baselines for three reasons: 1. We follow the setting of previous basic operator-based methods [1][2], which also employ classical methods as baselines. 2. The selected baselines are representative and commonly acknowledged works in the related tasks. In addition, since ours is a general operator, implementing it on standard benchmarks is fairer. 3. The performance of classical methods is highly likely to be reproducible. (3) To further address your concern, we also apply our operator to recent SOTA baselines. We choose GPPNN [3] and LACNET [4] for the guided image super-resolution task on the WorldView-II dataset, NBNet [5] and Restormer [6] for the image denoising task on the SIDD dataset, and URetinex [7] and SNR [8] for the low-light image enhancement task on the LOL dataset. As can be seen from the following three tables, our method still largely elevates the performance of these SOTA baselines. We will add these experiments and conduct more experiments on the remaining tasks in the final version. Besides SOTA baselines, our method may be critical to small networks in practical deployment, which shows the irreplaceable and important role of this basic tool and operator.
Guided image super-resolution (WorldView-II):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| GPPNN | Original | 41.16 | 0.968 |
| | SFC | 40.14 | 0.967 |
| | MFRFC | 41.47 | 0.972 |
| LACNET | Original | 41.45 | 0.972 |
| | SFC | 41.52 | 0.973 |
| | MFRFC | 41.74 | 0.975 |

Image denoising (SIDD):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| NBNet | Original | 39.75 | 0.959 |
| | SFC | 39.86 | 0.959 |
| | MFRFC | 39.97 | 0.960 |
| Restormer | Original | 40.02 | 0.960 |
| | SFC | 40.05 | 0.960 |
| | MFRFC | 40.19 | 0.961 |

Low-light image enhancement (LOL):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| URetinex | Original | 21.33 | 0.835 |
| | SFC | 21.82 | 0.837 |
| | MFRFC | 22.35 | 0.839 |
| SNR | Original | 24.61 | 0.842 |
| | SFC | 24.84 | 0.846 |
| | MFRFC | 24.98 | 0.848 |

[1] Fast Fourier Convolution. NeurIPS, 2020. [2] Deep Fourier Up-sampling. NeurIPS, 2022. [3] Deep gradient projection networks for pan-sharpening. CVPR, 2021. [4] LAGConv: Local-context adaptive convolution kernels with global harmonic bias for pansharpening. AAAI, 2022. [5] NBNet: Noise basis learning for image denoising with subspace projection. CVPR, 2021. [6] Restormer: Efficient transformer for high-resolution image restoration. CVPR, 2022. [7] URetinex-Net: Retinex-based Deep Unfolding Network for Low-light Image Enhancement. CVPR, 2022. [8] SNR-Aware Low-light Image Enhancement. CVPR, 2022.

**2: The paper could have focused on just one application.** (1) Our paper is the first to comprehensively introduce the characteristics and properties of FRFT in the deep learning paradigm and to design a simple and general operator. Since FRFT is a basic tool and our operator is general, we naturally need to verify the effectiveness and generality of our method on various tasks and classical baselines. (2) Our paper unlocks FRFT in deep learning. With this pioneering work, closer and deeper combinations of FRFT with specific tasks are expected to be easier and have great application prospects.
**3: The writing quality.** Do you have specific suggestions for the writing? Reviewer c5M1 highly praises the writing: "The writing is clear and lucid. The experimental settings are clearly described." Reviewer HR4i confirms our writing: "This paper is clearly written and easy to follow."

**4: Why is there no fast algorithm for FRFT?** (1) The fast discrete implementation of FRFT is difficult in theory [1][2][3]. Intuitively, the formulation of FRFT shown in Equation 1 of the main manuscript is much more complicated than that of the Fourier transform. (2) There exists no fast implementation in practice. Previous discrete implementations of FRFT are much slower than FFT in practical use. There also exists no official package for FRFT. [1] The fractional order Fourier transform and its application to quantum mechanics. IMA Journal of Applied Mathematics, 1980. [2] Digital computation of the fractional Fourier transform. IEEE Transactions on Signal Processing, 1996. [3] Two dimensional discrete fractional Fourier transform. Signal Processing, 1998.

**5: FRFT on simulated signals.** There may be a misunderstanding of our work. Our method validates the effectiveness of FRFT on various tasks. The processed signals are 2-D image signals rather than simulated signals. Could you clarify what you mean by simulated signals?

---

Rebuttal Comment 1.1: Comment: Satisfied with the rebuttal and the other reviews. Raising my rating.

---

Rebuttal 2: Comment: We hope that we have resolved your concerns. We are looking forward to your feedback.
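For context on the discussion of fast implementations: one standard definition of the discrete FRFT is the fractional power of the unitary DFT matrix, which can be brute-forced in O(N^2) per transform — precisely the kind of slow route a fast algorithm must beat. A minimal sketch under that definition, not the paper's implementation; the helper names `dft_matrix` and `dfrft` are hypothetical:

```python
# Illustrative brute-force discrete FRFT via the fractional matrix power of
# the unitary DFT matrix (one standard DFRFT definition). NOT a fast
# algorithm -- every transform costs a dense O(N^2) matrix-vector product.
import numpy as np
from scipy.linalg import fractional_matrix_power

def dft_matrix(n):
    """Unitary DFT matrix F, so that F @ x == np.fft.fft(x, norm='ortho')."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def dfrft(x, order):
    """Apply the order-th fractional power of the DFT matrix to x."""
    F = dft_matrix(len(x))
    return fractional_matrix_power(F, order) @ x

x = np.random.default_rng(0).standard_normal(16)

# Order 1 recovers the ordinary (unitary) DFT.
assert np.allclose(dfrft(x, 1.0), np.fft.fft(x, norm="ortho"))

# Index additivity: two half-order transforms compose to a full DFT.
half = fractional_matrix_power(dft_matrix(16), 0.5)
assert np.allclose(half @ (half @ x), np.fft.fft(x, norm="ortho"), atol=1e-6)
```

The dense matrix power is fine for a sanity check at small N but scales far worse than FFT, which is consistent with the rebuttal's point that previous discrete FRFT implementations are much slower in practice.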
Summary: The paper discusses the Fractional Fourier Transform (FRFT) in the context of deep learning-based computer vision methods. FRFT is a unified continuous spatial-frequency transform which reflects spatial and frequency representations of images. Based on FRFT, the paper proposes a new convolutional operator (MFRFC) which is more suitable for image signal processing. With MFRFC, networks achieve consistent performance improvements across several computer vision tasks.

Strengths: + The paper proposes a good idea in being the first to introduce the well-known FRFT into recent deep learning-based methods, and it achieves substantial performance improvements on various tasks and networks. + The proposed method is simple yet effective. The fast implementation of the 2D discrete FRFT is vital to the research community. The experimental results show empirical improvement in visual quality in image restoration tasks. + The writing is clear and lucid. The experimental settings are clearly described.

Weaknesses: - The concept and properties of FRFT are broad. This paper introduces certain properties of FRFT but does not comprehensively investigate it. The proposed operator MFRFC is very effective, but it may also be regarded as just one of many possible solutions. - The paper lacks comparisons with other spectral-related methods that also try to incorporate spectral information into the deep learning pipeline (for example, the FFC method mentioned in the related work). - MFRFC has three different order paths: a spatial (p=0) path, a spectral (p=1) path, and a fractional-order (p=0.5) path. The authors should also evaluate how the order of the fractional-order path affects performance. - Visual comparison is incomplete regarding the conducted experiments. The visual results for image denoising and low-light enhancement are missing.

Technical Quality: 4 excellent Clarity: 3 good

Questions for Authors: The introduced FRFT is a spatial-frequency analysis tool for signals.
However, there also exist other spatial-frequency analysis tools, including the wavelet transform and the Gabor transform, among which the wavelet transform has been widely employed in deep learning methods. What is the difference between FRFT and these spatial-frequency analysis tools?

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent

Limitations: The authors have discussed the limitations from different aspects. However, they do not include sufficient discussion of the computational cost, which should be included. Since the authors claim that they solved the fast implementation of FRFT, they are encouraged to clearly compare the speed of the original discrete FRFT, the authors' fast implementation, and the fast implementation of the discrete Fourier transform (FFT).

Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**1: Properties of FRFT.** It is true that the properties of FRFT are broad. Here, we introduce the main properties inherent in FRFT. For more properties associated with specific tasks, we believe that with our pioneering work, it will be easier to explore them in the near future. A similar case applies to the Fourier transform: the Fourier transform itself has only several main properties, yet with the fast implementation FFT, researchers can easily explore its properties in specific tasks [1][2]. With our work, similar combinations can also be made with FRFT. [1] Learning frequency-aware dynamic network for efficient super-resolution. ICCV, 2021. [2] Frequency and spatial dual guidance for image dehazing. ECCV, 2022.

**2: Comparisons with other spectral-related methods.** In fact, we have compared with other spectral-related methods. Previous similar spectral-related methods mainly work in a two-branch manner with spatial and frequency branches, employing the Fourier transform for the frequency branch. In our method, the SFC baseline works in a similar way with spatial and frequency branches. SFC is designed in the same manner as our three-branch MFRFC operator, with the only difference being the fractional branch. Such a design is enough to demonstrate the superiority of FRFT over the Fourier transform in a fair setting.

**3: Relationship between order and performance.** We explore the relationship between order and performance in the supplementary material, including different orders and an adaptive order. We draw two conclusions from our empirical results. First, different fractional orders in the MFRFC operator can all significantly elevate performance over the original baseline, with only slight differences between orders. Second, the adaptive-order version achieves nearly optimal performance among different fractional orders.
Besides, our selected order=0.5 in the main manuscript also achieves nearly optimal performance among different fractional orders.

**4: Difference with other spatial-frequency analysis tools.** (1) The Gabor and wavelet transforms are also time-frequency analysis tools. The key difference is that the Gabor and wavelet transforms are in fact special cases of the short-time Fourier transform: the window function of the Gabor transform is a Gaussian, and the window function of the wavelet transform is adaptive. In contrast, FRFT is a generalized version of the Fourier transform. (2) The Gabor transform has the inherent limitations of the short-time Fourier transform, such as a fixed window function and a poor time-frequency resolution trade-off. Besides, the Gabor transform has more hyper-parameters, which makes it hard to optimize. The wavelet transform also suffers from the difficulty of selecting a proper wavelet basis function.

**5: Speed of the fast implementation.** We compare the parameters and FLOPs of MFRFC and vanilla convolution in Table 2 of the main manuscript. As for speed, a baseline equipped with the previous discrete FRFT is about 20 times slower than the original baseline on average, while a baseline equipped with our fast implementation is only about 5\% slower on average. Our method can substantially elevate the performance of the baseline method with negligible additional computational burden in terms of parameters, FLOPs, and running speed. Besides, we also provide the code for FRFT in the supplement.

---

Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for the authors' thoughtful reply. The rebuttal addressed my concerns well. I was originally positive about the paper; after checking the other reviews and the rebuttal, I decided to raise my rating. Besides, it would be better to add these analyses from the rebuttal to the released version.
Summary: This paper proposes a fractional Fourier transform-based module which can simultaneously exploit information from spatial and frequency perspectives. With a fast implementation of FRFT, the multi-order MFRFC module can be easily incorporated into existing convolutional networks for different tasks.

Strengths: 1. A unified spatial-frequency analysis module based on the fractional Fourier transform is proposed. 2. The proposed MFRFC module is applied to several tasks including denoising, deraining, classification, dehazing, detection, etc., where the proposed module brings performance gains.

Weaknesses: 1. The main issue is that the methods for different tasks are not new; e.g., DnCNN is much inferior to recent denoisers in terms of quantitative metrics. So it should be evaluated whether the proposed module still works for recent denoisers with better performance. I suspect the performance gains would be less significant. Currently, I am at the borderline leaning toward accepting this work, but if my guess is correct, the contribution of this work would not be so significant, and I may lower my rating. 2. It is not clear how the MFRFC module is applied in existing methods for different tasks; e.g., for DnCNN, are all the convolutional layers replaced?

Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see weaknesses

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**1: Baseline methods for different tasks.** (1) Our method is not SOTA-oriented. Instead, the key contribution of this paper is that we unlock FRFT in the deep learning paradigm, solving the biggest challenges for the popularization of FRFT: vague characteristics and a missing fast implementation. The Fourier transform is very useful and vastly employed in the deep learning era. As a generalized and improved version of the Fourier transform, FRFT surely has great application prospects and exploration value. (2) We validate the effectiveness of FRFT on various tasks with classical and commonly acknowledged baselines. We select the classical methods as baselines for three reasons: 1. We follow the setting of previous basic operator-based methods [1][2], which also employ classical methods as baselines. 2. The selected baselines are representative and commonly acknowledged works in the related tasks. In addition, since ours is a general operator, implementing it on standard benchmarks is fairer. 3. The performance of classical methods is highly likely to be reproducible. (3) To further address your concern, we also apply our operator to recent SOTA baselines. We choose GPPNN [3] and LACNET [4] for the guided image super-resolution task on the WorldView-II dataset, NBNet [5] and Restormer [6] for the image denoising task on the SIDD dataset, and URetinex [7] and SNR [8] for the low-light image enhancement task on the LOL dataset. As can be seen from the following three tables, our method still largely elevates the performance of these SOTA baselines. We will add these experiments and conduct more experiments on the remaining tasks in the final version. Besides SOTA baselines, our method may be critical to small networks in practical deployment, which shows the irreplaceable and important role of this basic tool and operator.
Guided image super-resolution (WorldView-II):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| GPPNN | Original | 41.16 | 0.968 |
| | SFC | 40.14 | 0.967 |
| | MFRFC | 41.47 | 0.972 |
| LACNET | Original | 41.45 | 0.972 |
| | SFC | 41.52 | 0.973 |
| | MFRFC | 41.74 | 0.975 |

Image denoising (SIDD):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| NBNet | Original | 39.75 | 0.959 |
| | SFC | 39.86 | 0.959 |
| | MFRFC | 39.97 | 0.960 |
| Restormer | Original | 40.02 | 0.960 |
| | SFC | 40.05 | 0.960 |
| | MFRFC | 40.19 | 0.961 |

Low-light image enhancement (LOL):

| Model | Methods | PSNR | SSIM |
| :----- | ----: | ----: | :----: |
| URetinex | Original | 21.33 | 0.835 |
| | SFC | 21.82 | 0.837 |
| | MFRFC | 22.35 | 0.839 |
| SNR | Original | 24.61 | 0.842 |
| | SFC | 24.84 | 0.846 |
| | MFRFC | 24.98 | 0.848 |

[1] Fast Fourier Convolution. NeurIPS, 2020. [2] Deep Fourier Up-sampling. NeurIPS, 2022. [3] Deep gradient projection networks for pan-sharpening. CVPR, 2021. [4] LAGConv: Local-context adaptive convolution kernels with global harmonic bias for pansharpening. AAAI, 2022. [5] NBNet: Noise basis learning for image denoising with subspace projection. CVPR, 2021. [6] Restormer: Efficient transformer for high-resolution image restoration. CVPR, 2022. [7] URetinex-Net: Retinex-based Deep Unfolding Network for Low-light Image Enhancement. CVPR, 2022. [8] SNR-Aware Low-light Image Enhancement. CVPR, 2022.

**2: Implementation of MFRFC.** We apply the MFRFC in the middle layer of the network. We find that this is enough to incorporate fractional-domain information into the network and achieves significant performance improvement while keeping the computational burden introduced by our method negligible.

---

Rebuttal Comment 1.1: Comment: It is necessary to choose baseline models for verification, but I think it is not sufficient to support the effectiveness of the proposed method on a wide range of algorithms. Considering the new experiments on SOTA algorithms, I would like to keep my rating for acceptance.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: 1. This work delves into a novel fundamental operator for deep learning, the Fractional Fourier Transform (FRFT), exploring a new perspective of signal processing between two orthogonal domains (the spatial and frequency domains). 2. This work implements a fast and differentiable Multi-order Fractional Fourier Convolution, an elegant combination of deep learning and traditional image processing. 3. Notably, as a theoretical extension of FFC, it is expected to make positive contributions to the research community.

Strengths: This work provides a solid foundation for delving into this unexplored territory for deep learning. A unified analysis tool in the spatial-frequency domain, known as the Fractional Fourier Transform (FRFT), has been introduced, accompanied by sufficient theoretical analysis on the following points: + The spatial and frequency domains have been extensively explored, while the intermediate chaotic region between the two has been underestimated. + The authors have developed a fast implementation of the 2D FRFT, enabling comprehensive image processing from multiple perspectives in the spatial-frequency plane. + Sufficient experimentation: the operator was experimentally evaluated on a range of vision tasks, and the results demonstrate substantial performance improvements. + Overall, the study is solid, and the authors also provide an executable code implementation. It is a valuable supplement to deep learning-based vision toolkits.

Weaknesses: There are also a few concerns: - The three-branch design in MFRFC is indeed reasonable. However, it is worth considering whether there is a justifiable explanation for utilizing 1x1 convolutions to process signals in fractional domains. - Additionally, have the authors considered handling this chaotic information through MLP layers instead of a unified convolutional operator? - The introduced FRFT is mainly applied to CNN-based architectures.
The transformer has also demonstrated strong capacity and performance. Thus, the authors may also need to discuss the possibility of introducing FRFT into transformer architectures. The basic operator for transformers may differ from that for CNNs. - How does FRFT perform in low-level versus high-level vision tasks? FFC only conducts experiments on high-level vision tasks. Why did the authors choose low-level vision tasks for their experiments? - Several minor issues need to be addressed: (1) Inconsistent abbreviation in figure citations: "Figure" in line 248 and "Fig." in line 249. (2) Figure 1 is easy to understand, but its details could be further polished. (3) The caption for Table 2 is abnormally long compared to the other figures and tables. (4) The evaluation for the guided image super-resolution task adopts two more metrics than all other evaluated low-level tasks, which may be redundant.

Technical Quality: 4 excellent Clarity: 3 good

Questions for Authors: In Figure 2, compared to the spatial and frequency domains, the fractional amplitude spectra exhibit a scaling effect. Why is this phenomenon present?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes. The limitations and potential negative societal impact of this paper have been addressed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**1: 1x1 convolutions in fractional domains.** FRFT is a generalized and extended version of the Fourier transform. For the Fourier transform, spectral theory demonstrates the existence of an operator duality between convolution in the spatial domain and element-wise multiplication in the spectral domain, and thus 1x1 convolution is the default setting in the spectral domain [1][2]. Correspondingly, we also employ 1x1 convolution in the fractional Fourier domain. [1] Fast Fourier convolution. NeurIPS, 2020. [2] Deep Fourier Up-sampling. NeurIPS, 2022.

**2: FRFT in CNN, MLP, and Transformer architectures.** (1) Our work is based on FFC [1] with a theoretical extension. Following this baseline work, we design a convolutional operator for CNN-based architectures. Most previous networks are CNN-based, endowing our method with promising application prospects. (2) The Fourier transform has been explored in both MLP [2] and Transformer [3] architectures. Our work is an improved version of the Fourier transform and can also be explored in these architectures as future work. For example, GFNet [3] replaces the self-attention sub-layer with a frequency filter layer via the Fourier transform; such an operation can be directly replaced with FRFT. Besides, FRFT can be applied to more architectures and tasks since our work unlocks FRFT in deep learning. [1] Fast Fourier convolution. NeurIPS, 2020. [2] Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS, 2020. [3] Global filter networks for image classification. NeurIPS, 2021.

**3: FRFT in both low-level and high-level vision tasks.** (1) FRFT is a basic tool and our designed MFRFC is a general operator. Thus we conduct comprehensive experiments on both low-level and high-level vision tasks to validate the effectiveness of our method. (2) Besides the consideration of comprehensive verification on different tasks, we find support that FRFT is closely related to low-level vision tasks [1].
This also motivates us to verify our method on low-level vision tasks. [1] Optimal image restoration with the fractional Fourier transform. JOSA A, 1998.

**4: Scaling effect in the fractional amplitude spectra.** The scaling effect in the fractional amplitude spectra is an inherent characteristic of FRFT. It relates to the projection of the fractional-domain signal onto the spatial domain, as shown in Figure 1 of the main manuscript. As the fractional order varies from 0 to 1, the energy projected onto the spatial domain decreases, manifesting as the scaling effect. This phenomenon is also explained as the energy distribution property of FRFT in the supplementary material.

---

Rebuttal Comment 1.1: Comment: Thanks for your response; all of my concerns have been well addressed. Therefore, I decided to raise my score.
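The convolution/multiplication duality invoked in the rebuttal's first point — the reason a 1x1 (element-wise) operation is the natural spectral-branch operator — can be checked numerically. A minimal illustrative sketch, not the paper's code:

```python
# Convolution theorem check: circular convolution in the spatial domain
# equals element-wise multiplication of the spectra in the Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
k = rng.standard_normal(N)

# Direct circular convolution: c[n] = sum_m x[m] * k[(n - m) mod N].
direct = np.array([sum(x[m] * k[(n - m) % N] for m in range(N)) for n in range(N)])

# Spectral route: multiply the two spectra element-wise, then invert.
spectral = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

assert np.allclose(direct, spectral)
```

This is exactly why a learned per-frequency multiplier (a 1x1 convolution across channels in the spectral domain) corresponds to a global spatial-domain convolution.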
null
null
null
null
null
null
Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds
Accept (poster)
Summary: This manuscript introduces a new federated learning algorithm, EPISODE++, to accelerate the convergence speed of federated learning and provides theoretical proofs for the upper bound of convergence speed. Strengths: The method proposed in this manuscript improves the EPISODE algorithm by addressing the issue of client heterogeneity while allowing partial client participation. The manuscript also provides theoretical proofs for the convergence speed. Weaknesses: The experimental section does not explain how the hyperparameters for each baseline method were selected, and it does not discuss how the performance of the method is affected when the number of clients reaches hundreds. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Figure 1 (a) and (b), the left two figures show that NaiveParallelClip has lower training loss than other methods when the number of clients is 8, but its testing accuracy is lower than that of EPISODE++. How can this phenomenon be explained? 2. How does the performance of the method change when there are more clients (resulting in larger variances for $G_{r+1}$ and $G_{r+1}^i$)? Does it outperform classical methods like fedprox? 3. How were the hyperparameters (learning rate, training steps) chosen for each method? 4. How does the proposed method perform when the client data are i.i.d.? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The method proposed in this manuscript only slightly modifies the EPISODE method, and the experimental section is not comprehensive. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
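As context for the review's questions on data heterogeneity and the i.i.d. setting: heterogeneous versus near-i.i.d. client data is commonly simulated with a Dirichlet label split in the federated learning literature. A minimal sketch under that common recipe; the `dirichlet_partition` helper is hypothetical, not from the paper:

```python
# Illustrative Dirichlet label-skew partition for federated learning
# experiments: small alpha -> highly heterogeneous clients, large alpha ->
# near-i.i.d. clients. Sketch only, not the paper's data pipeline.
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Assign sample indices to clients; per-class client shares are drawn
    from Dirichlet(alpha), so alpha controls the degree of label skew."""
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, chunk in zip(clients, np.split(idx, cuts)):
            client.extend(chunk.tolist())
    return clients

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
clients = dirichlet_partition(labels, n_clients=8, alpha=0.1, rng=rng)

# Every sample lands on exactly one client.
assert sorted(i for c in clients for i in c) == list(range(1000))
```

Sweeping `alpha` from small (e.g. 0.1) to large (e.g. 100) interpolates between the heterogeneous and i.i.d. regimes the review asks about.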
Rebuttal 1: Rebuttal: Thank you for your effort in providing valuable feedback. Below, we have individually addressed the questions in your review. **Q1: “In Figure 1 (a) and (b), the left two figures show that NaiveParallelClip has lower training loss than other methods when the number of clients is 8, but its testing accuracy is lower than that of EPISODE++. How can this phenomenon be explained?”** This can be explained by the fact that NaiveParallelClip with $S=8$ is essentially simulating large-batch SGD, since it eliminates local steps and computes the update by averaging the gradient across ALL clients. It has been shown [1] that large-batch SGD can overfit in deep learning, and this may cause the overfitting in the case $S=8$. Notice that when $S<8$, NaiveParallelClip is not simulating large-batch SGD because it is still missing information from unsampled clients. Also, we would like to reiterate that NaiveParallelClip is not a practical algorithm due to the significant communication cost. **Q2: “How does the performance of the method change when there are more clients?”** In our 1 page rebuttal PDF, we have included results for large-scale experiments in two settings: (i) SNLI with $N=128, S=16, s=30\\%$ and (ii) Sent140 with $N=128, S=16, s=10\\%$. All other configuration details are the same as in the main text. The results are shown in Figure 2 of the 1 page rebuttal PDF. In both settings, the relative performance of each algorithm is similar to that of the main text experiments (i.e. those with $N=8$). The proportion of participating clients is $S/N=1/8$ for the large-scale experiments and $S/N \geq 1/4$ for the main text experiments, so that the effect of client sampling may be stronger in the large-scale experiments. With $N=128$, EPISODE++ achieves a better training loss for both datasets than in any setting with $N=8$, while the testing accuracy of EPISODE++ is about the same as the highest testing accuracy of the $N=8$ settings. 
Further, EPISODE++ outperforms all other algorithms in the $N=128$ setting. **Q3: “Does it outperform classical methods like fedprox?”** We have included additional results comparing EPISODE++ and FedProx in the 1 page rebuttal PDF. In this experiment, we used both algorithms for the SNLI setting as described in the main text, with $N=8, S=2, s=10\\%$. For FedProx, we tuned the additional parameter $\mu$ over $\{0.01, 0.03, 0.1, 0.3, 1.0\}$, and the best tuned value according to test accuracy was $\mu = 0.03$. The results are included in Table 2 of the 1 page rebuttal PDF. Due to time constraints, we were not able to evaluate other baselines in this setting. As shown in Table 2, EPISODE++ achieves a lower training loss and higher training accuracy than FedProx. Similar to FedAvg, CELGC, and NaiveParallelClip, FedProx does not utilize gradient information from unsampled clients, which suggests that the performance of FedProx should degrade under data heterogeneity and client sampling. Since EPISODE++ utilizes information from ALL clients in the form of correction terms, EPISODE++ may be more resilient to data heterogeneity and client sampling, and indeed we observe this in Table 2. **Q4: “How were the hyperparameters (learning rate, training steps) chosen for each method?”** The learning rate $\eta$ and the clipping parameter $\gamma$ were tuned according to a grid search described in Appendix C.1, which is referenced on line 260 of the main text. The number of training steps was chosen to be long enough that the test accuracy stopped increasing. **Q5: “How does the proposed method perform when the client data are i.i.d.?”** We have included additional experimental results using i.i.d. data in the 1 page rebuttal PDF. We evaluated EPISODE++ and all baselines in two settings: (i) the SNLI dataset with $N=8, S=4, s=100\\%$ and (ii) the Sent140 dataset with $N=8, S=4, s=100\\%$. All other configuration details remain the same as the experiments from the main text. 
The results are shown in Figure 1 of the 1 page rebuttal PDF. For both datasets, EPISODE++ remains the best performing method by both training loss and testing accuracy. All algorithms (besides CELGC) perform similarly with homogeneous data as in the counterpart settings with heterogeneous data. **Q6: “The method proposed in this manuscript only slightly modifies the EPISODE method”** There are two important differences between EPISODE and EPISODE++, one practical and one theoretical. The first is that EPISODE++ does not perform double communication at each round, since the correction terms $\mathbf{G}_r^i$ are computed using information from previous rounds. In EPISODE, these correction terms depend on the new averaged model $\bar{\mathbf{x}}_r$. Computing corrections requires one communication operation to broadcast $\bar{\mathbf{x}}_r$ and then another to share the newly computed corrections. Note that this is not just a doubling of the communicated bits: since $\mathbf{G}_r^i$ must be computed after the end of the first communication operation and before the beginning of the second, the two communication operations cannot overlap, and there must be a doubling of the required communication time. The halving of the communication cost in EPISODE++ is a significant practical benefit over EPISODE. The second important difference is that the introduction of client sampling brings new challenges for the convergence analysis which cannot be handled by the approach used to analyze EPISODE. To handle these new challenges we introduced a nested recursive analysis of the update size (Lemmas 1, 10, and 11) and provided a high probability guarantee of convergence (as opposed to the expectation guarantee of EPISODE). Thank you for your time, and please let us know if we have addressed your concerns. [1] Keskar, Nitish Shirish, et al. "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima." International Conference on Learning Representations. 2016. 
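The per-round communication difference described in Q6 can be caricatured in code. The sketch below (hypothetical helper names of ours, not the authors' implementation) contrasts an EPISODE-style round, whose corrections must be computed from the freshly averaged model and therefore require two sequential communications, with an EPISODE++-style round, which aggregates corrections cached from the previous round in a single communication:

```python
import numpy as np

def episode_round(x_bar, client_grad_fns):
    """EPISODE-style round (sketch): corrections depend on the NEW average
    x_bar, so broadcasting x_bar and sharing the resulting corrections are
    two communications that cannot overlap."""
    # communication 1: broadcast x_bar; each client then evaluates its gradient
    G_i = [grad(x_bar) for grad in client_grad_fns]
    # communication 2: aggregate the freshly computed corrections
    return G_i, np.mean(G_i, axis=0)

def episodepp_round(cached_G_i):
    """EPISODE++-style round (sketch): corrections were already gathered in
    the previous round, so a single aggregation suffices."""
    return np.mean(cached_G_i, axis=0)
```

Because the second communication in `episode_round` can only begin after the first finishes, reusing `cached_G_i` roughly halves the per-round communication time, which is the practical benefit claimed above.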
--- Rebuttal Comment 1.1: Comment: Thanks for the response, I have changed the score to 6.
Summary: The authors propose a federated learning algorithm that can work under $(L_0, L_1)$-smooth functions. Different from the previous work, the authors consider the partial-participation setting, modifying the previous algorithm, showing the convergence of the new algorithm, and giving a lower bound on the communication iterations under this setting. The experiments show that the proposed algorithm performs well. Strengths: 1. The authors propose a new algorithm that overcomes the bias introduced by client heterogeneity and client sampling. 2. The authors prove the convergence of the proposed algorithm and a lower bound on communication iterations under this setting. 3. The experimental results show that the proposed algorithm performs much better than previous work. Weaknesses: 1. The key motivation is not clear to me. For me, it is hard to justify that with a uniform sampling strategy on clients, why the bias will occur? And it is hard to see how the proposed algorithm can fix the introduced bias. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and for providing helpful comments. Below we have addressed the concern you expressed in your review. **Q1: “The key motivation is not clear to me. For me, it is hard to justify that with a uniform sampling strategy on clients, why the bias will occur?”** To discuss the bias from a theoretical perspective, we would like to clarify the meaning of bias in this context versus heterogeneity. They are different concepts in federated learning. Denoting the $i$-th client objective as $F_i$, consider the error $\nabla F_i - \nabla F$, where $i$ is sampled uniformly over $\{1, \ldots, n\}$. The expectation over $i$ of this error (i.e., sampling bias) is zero, i.e., $\mathbb{E} [ \nabla F_i(x) - \nabla F(x) ] = 0$ for all $x$. However, if we take the expectation of the squared norm of this error (i.e., sampling variance), we get a dependence on the heterogeneity $\kappa$, i.e., $\mathbb{E} \left[ \lVert \nabla F_i(x) - \nabla F(x) \rVert^2 \right] \leq \kappa^2$. The bias in the first case is zero because the sampling is uniform and the errors from different clients cancel each other. The expected $\ell_2$ error in the second case is non-zero because we consider the norm of the error and no canceling may occur: the expected norm depends on the heterogeneity $\kappa$. If the algorithm only uses gradient information from sampled clients (such as FedAvg, CELGC, NaiveParallelClip), then the update direction is approximating $\frac{1}{S} \sum_{i \in \mathcal{S}} \nabla F_i(x)$, and the convergence will be slowed with a dependence on the heterogeneity $\kappa$ due to sampling variance, i.e., $\mathbb{E} \left[ \lVert \frac{1}{S} \sum_{i \in \mathcal{S}} \nabla F_i(x) - \nabla F(x) \rVert \right] \leq \kappa$. Therefore the heterogeneity error is introduced even under uniform sampling. The effect of client heterogeneity and client sampling can also be understood empirically. 
Several works have empirically demonstrated that the performance of FedAvg with uniform sampling and data heterogeneity decreases as the number of participating clients decreases (see Table 4 of [1] or Figure 1 of [2]), which demonstrates that this effect does occur even under uniform sampling. The motivation of this paper is designing computation- and communication-efficient algorithms in the relaxed smoothness setting with client subsampling. Our Theorem 1 shows that our computational complexity does not depend on the heterogeneity level $\kappa$. Thank you for your time, and please let us know if we have addressed your concern. [1] Karimireddy, Sai Praneeth, et al. "Scaffold: Stochastic controlled averaging for federated learning." International conference on machine learning. PMLR, 2020. [2] Li, Tian, et al. "Federated optimization in heterogeneous networks." Proceedings of Machine learning and systems 2 (2020): 429-450. --- Rebuttal Comment 1.1: Comment: Sorry, the explanation is still confusing. In the answer, you defined bias as heterogeneity $\kappa$, which I think is similar to $\kappa$ defined in the paper, because, for the special $\rho = 1$, $\kappa$ is something related to the variance. However, in Theorem 1, $R$ is of the order $\max(\Gamma_1, \Gamma_2)$, where $\Gamma_1 = O(\kappa)$ and $\Gamma_2 = O(\kappa^2)$. Thus, the $\kappa$ does affect the result in Theorem 1. Meanwhile, it seems that in the paper, the bias is introduced by clipping instead of heterogeneity; can you comment on lines 144-146? --- Reply to Comment 1.1.1: Comment: Actually, $RI$ (the iteration complexity) is independent of $\kappa$ for sufficiently small $\epsilon$. You can see in the statement of Corollary 1 that $\epsilon$ is required to be small enough that $\eta = \frac{S \epsilon^2}{216 AL_0 \sigma^2 \log \frac{1}{\delta}}$, i.e., the third term in the min of Equation 2 is the minimum. 
You may check the proof of Corollary 1 in the Appendix to see that the derivation of the iteration complexity $RI$ is correct, and the result is independent of $\kappa$ under this condition of sufficiently small $\epsilon$. It should be noted that the communication complexity $R$ has a dependence on $\kappa$. In SCAFFOLD, $R$ indeed does not depend on $\kappa$, but only under the condition of smoothness. Under relaxed smoothness, the only prior work EPISODE [1] also has $R$ which depends on $\kappa$, even in the case of full client participation. The experiments from [1] also show that SCAFFOLD cannot work in the relaxed smoothness case because SCAFFOLD does not use gradient clipping. Under relaxed smoothness, gradient clipping is necessary, and this clipping operator introduces the dependence of $R$ on $\kappa$. The word bias is used to refer to multiple sources of error, so there may be some confusion. Lines 144-146 are describing the same source of error that we described in our rebuttal. Lines 144-146 say that a naive extension of EPISODE to the subsampling case would set $\mathbf{G}_r^i = \nabla F_i(\bar{\mathbf{x}}_r; \tilde{\xi}_r^i)$ and $\mathbf{G}_r = \frac{1}{S} \sum_{i \in S_r} \mathbf{G}_r^i$, so that clipping would occur if $\lVert \mathbf{G}_r \rVert \geq \frac{\gamma}{\eta}$. $\mathbf{G}_r^i$ is an estimate of $\nabla F_i(\bar{\mathbf{x}}_r)$, so $\mathbf{G}_r$ is an estimate of $\frac{1}{S} \sum_{i \in S_r} \nabla F_i(\bar{\mathbf{x}}_r)$. However, in order to simulate updates according to the global objective, the algorithm should perform clipping according to the norm of $\nabla F(\bar{\mathbf{x}}_r)$, whose distance to the value used by naive EPISODE is $\kappa_S := \lVert \nabla F(\bar{\mathbf{x}}_r) - \frac{1}{S} \sum_{i \in S_r} \nabla F_i(\bar{\mathbf{x}}_r) \rVert$. In this context, $\kappa_S$ is causing a bias in the vector whose norm determines clipping. 
In our rebuttal, we discussed how $\kappa_S$ may slow down optimization with a dependence on $\kappa$, since the update direction of e.g. FedAvg or CELGC differs from the global update direction of $- \nabla F(\bar{\mathbf{x}}_r)$ by $\kappa_S$. Because $\kappa_S$ is introduced by subsampling and it affects optimization (even without clipping), the bias is not due to clipping itself. The motivation of our paper is to design an efficient algorithm for federated learning under heterogeneity, relaxed smoothness, and client sampling, and dealing with the errors introduced by $\kappa_S$ is a main challenge of this problem. Please let us know if we have answered your concerns. [1] Crawshaw, Michael, Yajie Bao, and Mingrui Liu. "EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data." The Eleventh International Conference on Learning Representations. 2022.
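The zero-bias/nonzero-variance distinction in this exchange is easy to reproduce numerically. The sketch below (synthetic client gradients and our own variable names, not the paper's code) enforces $\frac{1}{N}\sum_i \nabla F_i = \nabla F$ and then samples $S$ of $N$ clients uniformly: the error averaged over rounds is close to zero, while the expected norm of the per-round error $\kappa_S$ is clearly not.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, d, trials = 128, 16, 10, 2000

# heterogeneous client gradients grad_i = g_global + delta_i, with the
# deltas centered so that the full average equals the global gradient
g_global = rng.normal(size=d)
deltas = rng.normal(size=(N, d))
deltas -= deltas.mean(axis=0)
grads = g_global + deltas

errs = np.empty((trials, d))
for t in range(trials):
    idx = rng.choice(N, size=S, replace=False)    # uniform client sampling
    errs[t] = grads[idx].mean(axis=0) - g_global  # the kappa_S error vector

bias_norm = np.linalg.norm(errs.mean(axis=0))        # near 0: sampling is unbiased
mean_err_norm = np.linalg.norm(errs, axis=1).mean()  # clearly > 0: variance remains
print(bias_norm, mean_err_norm)
```

This mirrors the rebuttal's point: under uniform sampling the error cancels in expectation, but the norm that enters the update (and the clipping decision) does not.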
Summary: The paper presents a novel algorithm for non-convex federated learning that addresses the challenges of relaxed smoothness, client heterogeneity, and client subsampling. The authors begin by discussing the limitations of existing algorithms such as SCAFFOLD and EPISODE in handling these challenges simultaneously. They then introduce their algorithm, EPISODE++, which is designed to overcome these limitations. The algorithm is initialized with a set of parameters and then iteratively updated through a series of communication rounds. In each round, a subset of clients is selected randomly, and each client performs local updates based on its own data. The algorithm also includes a gradient clipping step to control the magnitude of the gradient updates. The authors provide a detailed theoretical analysis of the convergence properties of EPISODE++. They show that the algorithm can achieve a linear speedup over the standard federated averaging algorithm (FedAvg) under certain conditions. They also demonstrate that EPISODE++ can significantly outperform clipped minibatch SGD, another popular algorithm for non-convex optimization, in terms of the number of iterations required to find an epsilon-stationary point. The paper concludes with an extensive set of experiments that validate the theoretical findings. The authors show that EPISODE++ outperforms other state-of-the-art algorithms on a variety of benchmark datasets and neural network architectures, including LSTMs and Transformers. Strengths: 1. The authors provide a detailed theoretical analysis of the convergence properties of EPISODE++. They show that under certain conditions, the algorithm can achieve a linear speedup over the standard federated averaging algorithm (FedAvg), and significantly outperform clipped minibatch SGD in terms of the number of iterations required to find an epsilon-stationary point. This rigorous analysis strengthens the credibility of the proposed algorithm. 2. 
The paper includes an extensive set of experiments that validate the theoretical findings. The authors demonstrate that EPISODE++ outperforms other state-of-the-art algorithms on a variety of benchmark datasets and neural network architectures, including LSTMs and Transformers. There are many other FL papers that only conduct very toy experiments that are not convincing enough. This empirical evidence provides strong support for the effectiveness of the proposed algorithm. Weaknesses: While the paper presents a novel algorithm and provides a detailed theoretical analysis, the scope of the study appears to be limited to non-convex federated learning. It would be interesting to see how EPISODE++ performs in other contexts or problem domains. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The paper could provide more information about the implementation details of the EPISODE++ algorithm. For instance, how are the parameters of the algorithm chosen in practice? Are there any specific strategies or heuristics for setting these parameters? Providing such information could help other researchers to replicate the results and apply the algorithm to their own problems. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and give valuable feedback. See the list below, where we have individually responded to the questions and comments in your review. **Q1: “the scope of the study appears to be limited to non-convex federated learning.”** Our algorithm is indeed designed for the federated learning task, and the analysis focuses on the non-convex case, since it is more general than (and includes) the convex case. Because of this, our theory provides a guarantee on the gradient norm $\lVert \nabla F(x_t) \rVert$ along the trajectory with high probability. In the strongly convex case, this guarantee implies a guarantee on the objective gap $F(x_t) - F^*$ with high probability, where $F^*$ is the global minimum of the objective function $F$. Therefore, if we would like to restrict our attention from the general non-convex case to the special strongly convex case, our analysis immediately gives guarantees. Also, the problem of finding a point with small gradient is of independent interest [1, 2] outside of its implications for the “objective gap” even in the convex case. **Q2: “how are the parameters of the algorithm chosen in practice? Are there any specific strategies or heuristics for setting these parameters?”** The parameters of EPISODE++ are the learning rate $\eta$, the clipping parameter $\gamma$, and the communication interval $I$, which are all parameters of previously existing algorithms such as FedAvg ($\eta$ and $I$) and CELGC ($\eta$, $\gamma$, and $I$). These parameters may be chosen following standard conventions of federated learning and machine learning in general. To choose hyperparameters in the experiments of the paper, we tuned the learning rate $\eta$ and the clipping parameter $\gamma$ according to a grid search that is described in Appendix C.1, which is referenced on Line 260 of the main text. 
After tuning the parameters for CELGC [3], we found that the tuned set of parameters worked well for the remaining algorithms, so we re-used these parameter values for all baselines. Thank you for your time, and please let us know if we have addressed your concerns. [1] Yurii Nesterov. “How to make the gradients small.” Optima, 88:10–11, 2012. [2] Allen-Zhu, Zeyuan. "How to make the gradients small stochastically: Even faster convex and nonconvex sgd." Advances in Neural Information Processing Systems 31 (2018). [3] Mingrui Liu, Zhenxun Zhuang, Yunwen Lei, and Chunyang Liao. A communication-efficient distributed gradient clipping algorithm for training deep neural networks. Advances in Neural Information Processing Systems, 35:26204–26217, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your feedback. After reading the comments and rebuttals of other reviewers, I decided to keep my original score.
Summary: The paper investigates Federated Learning (FL) under client subsampling and data heterogeneity, focusing on functions with potentially unbounded smoothness, and introduces the proposed algorithm to address the problem, EPISODE++. EPISODE++ has demonstrated benefits including linear speedup with client numbers, reduced communication rounds, and resilience to data heterogeneity. The authors provide a theoretical convergence analysis, and experimental results validate the effectiveness of their method. Strengths: 1. The paper is well-written and easy to follow. 2. Provides novel techniques when proving both upper and lower bounds. 3. Theoretically demonstrates the benefit of the proposed method EPISODE++, and also shows the shortcomings of existing clipped minibatch SGD. 4. The proposed algorithm achieves significant improvement over existing methods. Weaknesses: 1. The experiments with N=8 may not fully reflect the actual performance with large-scale FL; for example, FedAvg uses 100 clients in their experiments, where more heterogeneity and client shift might result in different behavior. I would be happy to see whether there is more benefit from EPISODE++. 2. Error bars are missing, and the number of repeated runs in the experiments is not indicated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have mentioned limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer xXDR (7): Thank you for taking the time to review our paper. Below we have addressed the concerns you raised during your review. **Q1: “The experiments with N=8 may not fully reflect the actual performance with large-scale FL”** We agree that evaluating the proposed algorithm on large-scale experiments is important. In our 1 page rebuttal PDF, we have included results for large-scale experiments in two settings: (i) the SNLI dataset with $N = 128, S = 16, s = 30\\%$ and (ii) the Sent140 dataset with $N = 128, S = 16, s = 10\\%$, where $s$ stands for data similarity. Note that these experiments use the same dataset as those from the main text, and the dataset is split into a larger number of clients using the heterogeneity protocol described in the main text. We ran these experiments using 16 GPUs on a cluster of 4 nodes with 4 GPUs each. The results are shown in Figure 2 of the 1 page rebuttal PDF. In both settings, the relative performance of each algorithm is similar to that of the main text experiments (i.e., those with $N = 8$). Note that the proportion of participating clients $S/N=1/8$ for the large-scale experiments and $S/N \geq 1/4$ for the main text experiments, so that the effect of partial client participation may be stronger in the large-scale experiments. With $N=128$, EPISODE++ achieves a better training loss for both settings than in any experiment with $N = 8$, while the testing accuracy of EPISODE++ is about the same as the highest testing accuracy of the $N = 8$ experiments. Further, EPISODE++ outperforms all other algorithms in the $N = 128$ setting. **Q2: “Missing error bar and not indicates the number of repetition runs in experiments.”** Again, we agree that multiple repetition runs is important to properly evaluate our proposed algorithm against baselines. 
To ensure that our results are representative of the performance of each algorithm, we have included results with a total of three repetitions for two experimental settings: (i) the SNLI dataset with $N = 8, S = 4, s = 30\\%$ and (ii) the Sent140 dataset with $N = 8, S = 4, s = 10\\%$. All other hyperparameters and configuration details remain the same as the results reported in the main text. The average results and error bars are shown in Table 1 of the 1 page rebuttal PDF. Note that the size of each error bar is the distance from the average over three trials to the min/max over three trials. Across three trials, EPISODE++ remains the best performing algorithm in terms of training loss and testing error: the worst trial of EPISODE++ is better than the best trial of any other algorithm, across both metrics and both datasets. The ordering of the algorithms by performance in the trial-averaged results is the same as in the single-trial results of the main text, which is consistent with the experimental results of the main text. Thank you for your time, and please let us know if you have additional comments. --- Rebuttal Comment 1.1: Comment: After reading all reviews and responses, I will maintain the current score.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for taking time to review and critique our work. We have provided an individual response to each reviewer, and here we provide a general summary of our additional results included in the 1 page rebuttal PDF. **Large scale experiments** In the 1-page rebuttal PDF file, we have included large scale experimental results with a large number of clients $N = 128$. Results are shown in Figure 2 of the 1-page rebuttal PDF and are further described in the individual responses. The results with $N = 128$ are consistent with the experiments of the main text that use $N = 8$. EPISODE++ outperforms all other algorithms in terms of both training loss and testing accuracy, for the two datasets SNLI and Sentiment140. **Multiple trials** We have additionally included results for all algorithms averaged over three random seeds, in order to ensure that our experimental results are representative of the expected performance for each algorithm. In general, the results of the additional trials match that of the results reported in the main text: EPISODE++ outperforms all baselines, and the performance of each algorithm is stable across the three seeds. The results are shown in Table 1 of the 1-page rebuttal PDF file and are further described in the individual responses. **Homogeneous data** We also evaluated our algorithm and baselines in two settings with homogeneous data, and results are shown in Figure 1 of the 1-page rebuttal PDF. Every algorithm has similar or better performance compared with the heterogeneous setting, and EPISODE++ remains the best performing algorithm by a wide margin. **Comparison with FedProx** We compared EPISODE++ against the algorithm FedProx on the SNLI dataset. EPISODE++ achieves a lower training loss and a higher testing accuracy. This gap follows intuition, since FedProx does not use any explicit mechanism to combat client subsampling and data heterogeneity. 
Pdf: /pdf/6553bfe59c0ab3512b8044ed8eb7ecfa23fa91f0.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Pareto Frontiers in Deep Feature Learning: Data, Compute, Width, and Luck
Accept (spotlight)
Summary: This paper considers learning parity functions with neural networks and particularly studies the tradeoff between the size of the network and the sample size. The paper also explores a connection between the network size and the lottery ticket hypothesis. The neural network studied in this paper has a sparse, specific, and symmetric structure. One-step gradient arguments under the hinge loss have been used for the training. Finally, the paper proposes a promising direction for processing tabular datasets using neural networks. Strengths: - The paper shows that with sparse initialization, one can decide between a wide network with a smaller number of samples, or a narrower network with a higher number of samples, and basically move between the two regimes. - The connection of width to the lottery ticket hypothesis in the sparse initialization setting. - The experiments are quite extensive for the 2-layer MLP model with sparse initialization. - Observation of grokking and sample-wise double descent in the experiments is also interesting. This shows a potential avenue for future theoretical research on grokking. Weaknesses: - In the abstract, the lower bound is stated for the number of training samples. However, the lower bound with SQ only contains the gradient precision. (A large number of samples is sufficient for the condition on gradient precision, but not necessary.) So to be exact, there is no bound on the number of samples. Moreover, the SQ bound does not give information about the SGD algorithm. - It would have been great if there were experiments checking the results of this paper beyond that particular setting of theoretical results. For example, it would have been nice to have the same experiments on Transformers and mean-field networks. Also, the batch size is always 32 in the experiments; it would be nice to also try larger and smaller (maybe one sample at a time) batch sizes as well. Further, what happens if we do not have the initialization sparsity? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The setting considered in the paper is quite specific (e.g., the initialization and the training of the network). So is it possible to theoretically/empirically discuss the limitations of this study? For example, if we keep the parity function as the target, do we expect these results to generalize to other architectures with more standard training methods? Suggestion: The appendix part for the proof of the positive part is a bit hard to read. Particularly, it would be great if the exact training algorithm were explained once, and if there were an overview of the proof. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is a theoretical study and there is no negative societal impact. However, the potential limitations of the findings could be discussed more extensively. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback and suggestions! **(W1) Gap between SQ and SGD**: We agree with the reviewer that the SQ framework as stated only gives a lower bound dependent on total number of queries and the precision of the queries. This does not directly imply a lower bound on the sample complexity, and is a known gap between SQ and SGD. We note that there has been work extending the SQ model to the honest SQ model (HSQ) [Yang05] which evaluates each query by sampling $M$ samples independently and providing the empirical average as the answer to the query. In these settings, tolerance is replaced by the number of samples and there are results of the following style: SQ dimension $d$ implies a lower bound on the total number of samples used $\Omega(d/\log d)$. We will add a discussion about this in the paper. _[Yang05] Ke Yang. New lower bounds for statistical query learning. J. Comput. Syst. Sci., 2005._ **(W2, Q1) Other settings**: We thought the more appropriate direction of "beyond theory" was to go from our synthetic parity setting to real tabular data. For the parity problem in particular, the effect of choice of architectures has been previously explored in [BEGKMZ‘22], and we believe similar results may hold here as well. As for initializations beyond sparsity, our experiments show similar results to the sparse initialization setting. However it is technically significantly challenging to analyze since it boils down to showing anti-concentration of higher-order Fourier coefficients of random halfspaces, which we do not currently know how to do. (The experiments in Appendix C.1 of [BEGKMZ22] provide empirical evidence that these halfspaces’ Fourier spectra behave like those of majority with high probability.) _[BEGKMZ22] Boaz Barak, Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. "Hidden progress in deep learning: Sgd learns parities near the computational limit." 
NeurIPS 2022._ **(S1) Clarity of proofs in appendix**: We appreciate the feedback, and will add high-level intuitive sketches to the main paper and the appendix. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. Indeed adding more detail on the sample complexity bounds and potential limitations (e.g., batch size and deriving lower bounds) would be very nice. I will maintain my score.
Summary: In this paper, the authors study the tradeoff between various resources in feature learning---data, compute, width. The authors focus on the fundamental problem of learning parity functions using a two-layer MLP. The degree $k$ of a parity function of $n$ variables controls the hardness of the problem: generally, a bigger $k$ means a harder problem. Learning parity functions is a fundamental problem which has been used to study deep learning in the literature. The authors consider learning $(n,k)$-parities using gradient descent (GD) sparsely initialized at sparsity level $s$. The first main result shows that if $s > k$ (over-sparse initialization), then there is a tradeoff between network width $r$ and sample size $m$ required to learn an $(n,k)$-parity with high probability. This result interpolates existing feature-learning results as far as I know. The second main result shows that if $s<k$ (under-sparse initialization), there is a similar width vs. sample size tradeoff for one-step GD. Extensive simulations and experiments on natural tabular datasets are presented to complement the theory. Strengths: I find this paper impressive and enjoyable to read. Understanding the tradeoff of different resources (time, memory, data) is important for large-scale machine learning. Focusing on the fundamental problem of learning parity functions, the authors present solid theoretical results. I think both theoretical and empirical contributions are significant. The theory also provides perspectives for the lottery ticket hypothesis, by showing that a large width is beneficial for finding the right "lottery ticket". The writing is very clear. High-level ideas are well explained and figures are easy to understand. Weaknesses: I cannot evaluate the technical novelty of this paper as I do not claim expertise in learning parity functions, while some of the analytical strategies seem to be based on existing papers. 
One minor comment is that theory is established for shallow networks, and it cannot reflect "Deep Feature Learning" in the title. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I am curious (1) if the authors could comment on the role of depth in neural networks, and (2) how do the results connect to the staircase phenomenon? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
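As context for the $(n,k)$-parity task discussed in this review, here is a minimal, illustrative sketch (not from the paper) of the target function: the label of a $\pm 1$ input is the product of its coordinates on a hidden size-$k$ index set $S$. All variable names are hypothetical.

```python
import random

def parity_label(x, S):
    """(n, k)-parity target: the product of the +/-1 coordinates of x
    indexed by the hidden support S, where |S| = k."""
    p = 1
    for i in S:
        p *= x[i]
    return p

random.seed(0)
n, k, m = 10, 3, 8
S = random.sample(range(n), k)  # hidden support of the parity
X = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(m)]  # uniform +/-1 inputs
y = [parity_label(x, S) for x in X]  # labels in {-1, +1}
```

Flipping any single coordinate in $S$ flips the label, which is part of what makes the problem a canonical hard case for correlational (SQ-style) learners when $k$ grows.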
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. **(W1) “Deep feature learning” terminology**: Our intent for the shorthand “deep feature learning” in the title was to refer to gradient-based feature learning in neural networks, as opposed to the NTK regime. We understand the possible unintended interpretation of “deep” as “containing a many-layer hierarchy” (which we believe is currently clarified in the abstract and main body), and are considering minor adjustments to the title. **(Q1) Role of depth**: How depth $>2$ changes the picture sketched in this paper is non-obvious due to the entanglements with optimization, and is an excellent direction for future work, especially for tasks that (unlike the parity function) have hierarchical structure. For our paper, we focused on depth-2 because it is more tractable to analyze but still not well-understood, and the number of hyperparameter knobs we analyzed was expansive even without adding depth. **(Q2) Relation to staircase phenomenon**: Compared to the original staircase paper [ABBBN21], where each staircase step added a single variable, the sample/runtime is already polynomial in $n$, so this trade-off is not super interesting. In the more recent paper [ABM23], which quantifies the leap complexity, we speculate that the finite-sample tradeoffs we describe may apply to each leap in a staircase. This could be an interesting subject for future work. _[ABBBN21] Emmanuel Abbe, Enric Boix-Adsera, Matthew S. Brennan, Guy Bresler, and Dheeraj Nagaraj. The staircase property: How hierarchical structure can guide deep learning. NeurIPS 2021._ _[ABM23] Emmanuel Abbe, Enric Boix-Adserà, and Theodor Misiakiewicz. SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics. COLT 2023._
Summary: Disclaimer: My expertise in this domain is limited and my understanding is highly superficial. The manuscript presents a detailed take on addressing the tradeoffs on 4 axes, namely model size, dataset size, training epochs, and stochasticity. It offers a well-rounded analysis of the area and provides empirical and theoretical evidence for intuitive ideas, e.g., that increasing network width leads to better sample efficiency. They use sparse parity learning as a proxy for real-world feature learning problems which might be aligned on the above-mentioned 4 axes. They also provide curious insights, e.g., that network width improves the likelihood of “winning lottery ticket” neurons. The wide, sparsely initialized MLPs also outperform the very strong RFs. Strengths: - Section 4.1 and C.1: The statements on the interplay between width and dataset size, for example, are a good starting point for further focused research. - Section B: Detailed theoretical contributions in the supplementary section verify the experiments and intuition. - Highly exhaustive experiments, analyzing and investigating important questions. Weaknesses: - The paper, while having a good base, introduction and motivation, becomes hard to follow and the organization seems a bit unnatural. For example, in my opinion, moving the insight from Lines 823 to 853 to the main paper at the expense of the rather verbose Section 3.2, before the “actual” analysis starts in Line 223, might help pitch the paper better. - It’s not clear to me how this work is different from the rich vein of work in learning parities and providing an SQ learning bound. It might be worthwhile adding a section comparing against previous attempts and drawing focus to distinct contributions, or the work might seem incremental in several directions without making a significant difference in one direction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - See Weaknesses. 
- How is the work distinct from the previous works like [1], especially as theorem 4 seems like a trivial extension of informal theorems and analysis from [1]? A bit more formalization of the distinction would be helpful. - More comparisons on the nC2 combinations of data-width-time-luck would provide one-on-one insights on the interplay and could be a good addition to the work. Almost a matrix of sorts, highlighting the relationship between two axes and their effect overall on the network could summarize the findings well. [1] Barak, Boaz, et al. "Hidden progress in deep learning: Sgd learns parities near the computational limit." Advances in Neural Information Processing Systems 35 (2022): 21750-21764. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments and questions. **(W1) Paper organization**: Thanks for the feedback and suggestions. We will improve the presentation, and add intuitive overviews of the technical proofs. **(W2, Q2) Relation to other work on NN parity learning**: We have not seen the following contributions in prior work on SQ parity learning with NNs: (1) a “success frontier” -- a parametric family of algorithms which trade off the heterogeneous resources of data & computation; (2) experiments which corroborate these tradeoffs. We will add a more detailed quantitative comparison to Appendix A.1. The main points are below: - [BEGKMZ22] shows that one step of gradient descent on a single neuron is able to recover the indices corresponding to the parity with $n^{O(k)}$ samples/computation. The present work expands on this with the following: (1) extends consideration to the finite-sample setting, not just online learning, where the heterogeneous resource tradeoff frontier arises; (2) shows how increasing width improves sample efficiency; (3) introduces sparsity of initialization as a hyperparameter, which has interplay with width & sample-efficiency. - [ABM23] improves this bound to $O(n^{k-1})$ online SGD steps and generalizes the result to handle hierarchical staircases of parity functions, which requires a multi-step analysis. - [Telgarsky23] studies the problem of 2-sparse parities with two-layer neural networks trained with vanilla SGD (unlike our restricted two-step training algorithm) and studies the margins achieved post training. They use the margins to get optimal sample complexity $\tilde{O}(n^2/\epsilon)$ in the NTK regime. Going beyond NTK, they analyze gradient flow (with certain additional modifications) on an exponentially ($n^n$) wide 2-layer network (making it computationally inefficient) to get the improved sample complexity of $\tilde{O}(n/\epsilon)$. 
In contrast to these works, our theoretical contribution (Theorem 4) highlights the improvement in sample complexity while maintaining computational efficiency which we achieve using the sparse initialization. _[BEGKMZ22] Boaz Barak, Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. "Hidden progress in deep learning: Sgd learns parities near the computational limit." Advances in Neural Information Processing Systems 35 (2022): 21750-21764._ _[ABM23] Emmanuel Abbe, Enric Boix-Adserà, and Theodor Misiakiewicz. SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics. COLT 2023._ _[Telgarsky23] Matus Telgarsky. Feature selection and low test error in shallow low-rotation ReLU networks. ICLR 2023._ **(Q3) Additional pairwise tradeoffs**: We will attempt to provide further visualizations to elucidate the “${n \choose 2}$ grid” of pairwise resource tradeoffs in the revision. There are some methodological considerations (high sensitivity to the choice of accuracy threshold; effect of optimizer hyperparameters and initialization scales on the precise convergence time, …) which make some of these comparisons hard to make quantitatively.
Summary: In an attempt to explore the mechanisms behind generalization and training of neural networks and the different elements that play a role in them, this paper investigates the impact of four resources of training MLPs on the famous and well-studied (n, k)-parity learning problem. The authors conduct a massive grid of experiments to analyze the trade-offs between data, compute, width (model size) and luck (sparsity at initialization) when training MLPs on the mentioned problem. The experimental evaluation suggests the existence of "frontiers" of resource trade-offs such that decreasing any resource at the frontier would likely result in incomplete or non-existent generalization. They further theoretically analyze this frontier and prove that, under the necessary assumptions, this frontier can be recovered (at least in the over-sparse initialization case) theoretically. Based on the observed patterns, the authors show that under the correct combination of resources, neural networks are capable of achieving similar or, in some cases, superior performance to tree-based methods on small tabular tasks. Strengths: * The diverse and large-scale experiments show clear patterns of existence of the trade-offs proposed by the authors, and provide thorough evaluation of different combinations of the four mentioned resources. * The theoretical results are supportive of the patterns present in the experiments and agree with the results from recent work. * Although the presented results may not seem novel, and arguably have been speculated before, a study that extensively studies the concept of a "success frontier" was lacking. * This work addresses an important aspect of using neural networks in practice: tabular data, and suggests that further improvements can be made in applying neural networks to these kinds of problems. This is contrary to the popular belief that neural networks are inferior to tree-based models on these datasets. 
* The manuscript is well-presented, and to my knowledge, the authors have covered most related work and included the necessary discussion and comparison with related work, which helps in conveying the importance of this setup and the findings. Weaknesses: I believe that this work has the potential to be very influential, but there are still some aspects that should be improved and some points that could be better justified. ### Major concerns: * It's not clear if the study would apply to other problems or not. Hence, claiming that the results apply to "deep feature learning in general" sounds very exaggerated. It is definitely intuitive that the observed phenomena should extend to other problems to some extent, but we can't claim anything just based on intuition. For instance, it has been observed that width is not necessarily monotonically beneficial in all problems, and this could apply to other resources discussed in the paper as well. As a suggestion, a few ablation studies of smaller scale on other problems could be a better support for hypothesizing that the observations carry over to other settings as well. * Section 3.1: Correct me if I'm getting this wrong, but based on Proposition 3 it seems like there "exist" settings in which SGD can't find the correct solution. If I'm right, this doesn't mean "GD will fail" at all, and it just means that GD is not guaranteed to converge. * In Theorem 4, setting error close to zero would result in $s \to \infty$, and $r \to 0$. How is this possible? Isn't $r$ required to be at least as large as $s$? Could the authors clarify the setting of this theorem? * I'm not convinced about some references made to the lottery ticket hypothesis and "good neurons", and that width buys "Luck" in general. Although LTH could be a plausible explanation for the observations in the experiments, totally attributing the effect of width to LTH and "good neurons" would require thorough evidence, be it theoretical or empirical. 
### Minor issues: * The authors have mentioned that they use bootstrapping for neural nets to solve the low-data problem. Have the authors used bootstrapping for other models that don't inherently perform it, to have a fair comparison? * In Theorem 5 and Theorem 21 (the formal version), it is mentioned that the problem will be solved "approximately". Can the authors please clarify this approximation? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I have mentioned my questions in the "Weaknesses" section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. Here we address the main concerns raised by the reviewer. **(W1) “Deep feature learning” terminology**: (Copied from response to R3) Our intent for the shorthand “deep feature learning” in the title was to refer to gradient-based feature learning in neural networks, as opposed to the NTK regime. We understand the possible unintended interpretation of “deep” as “containing a many-layer hierarchy” (which we believe is currently clarified in the abstract and main body), and are considering minor adjustments to the title. **(W2) Quantifiers in Proposition 3**: Proposition 3 is phrased according to the standard convention in learning theory that learning a concept class requires probable success for all concepts in the class. The proof can easily be tweaked to show that the same result applies for a $1-\epsilon$ fraction of all $(n,k)$-parities, if $\frac{rT}{\tau^2 \delta} \leq \frac{\epsilon}{2}\binom{n}{k}$. **(W3) $\epsilon$ in Theorem 4**: The statement of Theorem 4 only makes sense when $\epsilon > c_1/\sqrt{n}$, since sparsity $s$ cannot be greater than $n$. **(W4) Lottery ticket hypothesis**: We agree that our results and experiments establish that width buys “luck” in our specific setting, not necessarily in general. We will make the connection to LTH clearer in the revision. **(W5) Clarification on “bootstrapping”**: By “bootstrapping” the classification benchmark, we mean randomly downsampling the training data, inducing a distribution over harder supervised learning tasks. These smaller-sample tasks are not present in [GOV22]; hence, their reported numbers are only comparable with the rightmost points of the “error vs. sample size” curves. We will adjust the wording to minimize confusion. _[GOV22] Grinsztajn, Léo, Edouard Oyallon, and Gaël Varoquaux. "Why do tree-based models still outperform deep learning on typical tabular data?." 
NeurIPS 2022._ **(W6) Clarification on “approximate solution”**: As is standard in learning theory, approximation refers to the learned solution being $\epsilon$-close in terms of error to the optimal solution.
NeurIPS_2023_submissions_huggingface
2023
Diffusion-SS3D: Diffusion Model for Semi-supervised 3D Object Detection
Accept (poster)
Summary: The paper addresses the problem of semi-supervised 3D object detection (3DOD) where the aim is to train a 3DOD model using only few labelled and a lot of unlabelled data. On top of previous approaches, the authors propose to use the diffusion mechanism to enhance the pseudo labels generated by a teacher model for the unlabeled scenes. The main idea is to denoise noisy initial bounding boxes and class label distributions via the diffusion mechanism and thereby obtain better pseudo labels, which are used to train a student model. Additionally, the authors use the diffusion-based denoising to enhance the initial output of the student model during inference. Experiments on ScanNet and SUN RGB-D show that the approach outperforms previous approaches. Strengths: 1. Abstract and introduction provide a clear motivation for the considered problem as well as for the proposed solution. The contributions over previous works are clearly stated and discussed. 2. The method description and the mathematical definitions are clear and easy to follow. Figures and the algorithm are nicely prepared and facilitate the method description’s clarity. 3. The proposed method clearly outperforms the considered baselines. 4. Various ablation studies show the effect of various method components and parameters. 5. Nice qualitative examples in the supplementary. It would be nice to refer to them in the main paper. Weaknesses: 1. For a large part of the paper it remained a bit unclear to me, if the author’s method is applied during training or during inference. In the abstract and introduction, it first sounds like the method is applied only during training, but later on it becomes clear that the diffusion mechanism is also applied during inference to refine the initial predictions. I think this should be clearly stated in abstract and introduction. 2. 
In the SOTA comparison it remains unclear to me, which part of the improvement is due to the enhanced pseudo labels during training and which part is due to the refinement of predictions during inference. It would help, if the baseline without the author’s method could be added to the Table. Additionally, it would help, if a variant of the method using only training-side improvement but not the prediction enhancement during inference would be added. Thereby, it becomes clear, how much improvement can be achieved without additional inference complexity and how much improvement can be achieved due to enhancement of the predictions at the cost of additional inference complexity. Although, there are a few weaknesses in this paper, I think this work could present a valuable contribution to NeurIPS, if the mentioned issues can be addressed in the rebuttal. Minor comments and suggestions: 3. In Figure 1, it would help the clarity if the abbreviations such as EMA were not abbreviated. 4. Section 3.1: the symbol l is used two times with different meaning: First, it is an index for labelled (line 116) and then as an index for length (line 121). 5. The second paragraph in Section 4.1 discusses results which I could not find in any of the tables. It would help to add the comparison to, e.g., Tables 1 and 2. 6. The baselines in the ablations in Tables 3 and 4 are exactly equal to the results of 3DIoUMatch. Did the authors resimulate those results? I think it would make sense to resimulate them for the ablation studies to avoid impact of differing hardware or hyperparameters on the ablation results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors have evaluated their method on two indoor datasets. However, many recent works, e.g., [a], [b], mainly evaluate on outdoor datasets for 3DOD. Have the authors tried, how their method performs on such datasets? 
Maybe it makes sense to mention as a limitation that the method’s efficacy has not yet been established for such outdoor datasets. 2. The authors claim that the pseudo labels are improved by the diffusion mechanism. Is there an evaluation that supports this claim, i.e., an evaluation of the pseudo labels of the author’s method vs. a baseline without diffusion? 3. As far as I understood, the authors make use of the diffusion mechanism in two ways: First, the pseudo labels are improved and second as a refinement technique of 3DOD prediction during inference. I was wondering, if the second aspect is limited to semi-supervised 3DOD? Could it also be used to enhance fully supervised 3DOD? [a] Park et al., “DetMatch: Two Teachers are Better than One for Joint 2D and 3D Semi-Supervised Object Detection,” ECCV 2022. [b] Lian et al., “Semi-supervised Monocular 3D Object Detection by Multi-view Consistency,” ECCV 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations of the method have been discussed to some degree. I think it would be interesting to additionally state if the method has limitations w.r.t. outdoor datasets and/or in terms of added complexity during training or inference. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback and we address each question below. **Q1: Diffusion applied during training or inference.** Our diffusion mechanism is applied during both the training and inference stages. However, like most diffusion models, our Diffusion-SS3D is applied to denoise randomly generated inputs during inference. To be specific, our diffusion model is first trained under the SSL setting. Then, during inference, the model would denoise from randomly generated object sizes and label distributions via DDIM sampling to generate final predictions. Although the diffusion model is designed to perform denoising during inference, it is still possible to directly use our learned decoder and output model predictions without the denoising step. We show results in the table below on ScanNet, where a DDIM step count of 0 indicates no denoising step during inference. Our model still performs competitively due to our diffusion model training stage. The performance is improved further using more DDIM steps. We will clarify this in the final version and add more results.

|DDIM steps in inference|5% mAP@0.25|5% mAP@0.5|
|:---:|:---:|:---:|
|0|42.8 ± 0.5|26.8 ± 0.6|
|1|43.1 ± 0.4|27.0 ± 0.7|
|2|43.5 ± 0.2|27.9 ± 0.3|

**Q2: Improvement due to the enhanced pseudo labels during training or inference, and its complexity.** As indicated in Q1 above, we report results of having the diffusion process in both training and inference or merely training on ScanNet, as well as the baseline without our method (i.e., 3DIoUMatch). Overall, with diffusion model training but without the denoising step during inference, the results are improved significantly from the 3DIoUMatch baseline while having the same runtime speed. With the denoising process during inference, the results are improved further. 
|Diffusion training|DDIM in inference|5% mAP@0.25|5% mAP@0.5|
|:---:|:---:|:---:|:---:|
| | |40.5 ± 1.2|22.8 ± 0.8|
|✓| |42.8 ± 0.5|26.8 ± 0.6|
|✓|✓|43.5 ± 0.2|27.9 ± 0.3|

Regarding the inference complexity, we show results in Table 4 of the supplementary material. For example, compared to 3DIoUMatch, our runtime speed can be decreased by 28.8% (from 65.54 FPS to 46.64 FPS) while the performance in mAP@0.5 relatively improves by 61.3% (from 8% to 12.9%) on SUN RGB-D (Ln 61-65 of the supplementary material). **Q3: Minor comments.** We will revise the paper as suggested, including EMA in Figure 1 and notations in Section 3.1. For the second paragraph of Section 4.1, we discuss the usage of applying existing augmentation methods. Due to limited space, we only present this discussion in the text and include Table 3 in the supplementary material. We will improve the presentation. For baselines in Tables 3 and 4, we re-run 3DIoUMatch with comparable implementations and hardware on ScanNet. We obtain very similar results to the original ones, shown in the table below.

|3DIoUMatch|5% mAP@0.25|5% mAP@0.5|
|:---:|:---:|:---:|
|Original|40.0 ± 0.9|22.5 ± 0.5|
|Re-simulated|40.5 ± 1.2|22.8 ± 0.8|

**Q4: Outdoor dataset.** Due to limited time to experiment with a new training framework (i.e., the outdoor dataset needs a different baseline than VoteNet used in the paper), we will include the results in the final version. Note that, by design of the proposed diffusion model in a general teacher-student framework, our method should not be limited to certain datasets. However, we do recognize that some changes may be needed, as some object properties (e.g., location, density) in outdoor scenes would differ from the indoor scenario. We will add this discussion to the final paper. In addition, thanks for providing the references [a, b], and we will discuss them in the final paper. 
Note that, [a] operates in a multi-view setting using stereo or video data, while [b] considers multi-modal data with both 2D and 3D labeled data, which are different from the setting of this work. **Q5: Pseudo-label quality.** To validate whether the quality of pseudo-labels is improved, we evaluate the metrics on unlabeled training data during model training, via the teacher model that generates pseudo-labels. In the table below, overall the pseudo-label quality of our Diffusion-SS3D is better than 3DIoUMatch by more than 8% improvement in mAP and recall rate. In addition, we observe that our diffusion model achieves a better quality in earlier epochs, and then becomes stable during the entire training process. Note that, we still observe the improvement of SSL performance during training, since the model needs to be trained longer and learn from both labeled and unlabeled data. We will add them to the final version.

|ScanNet 5%|Metric|Epoch 100|Epoch 200|Epoch 400|Epoch 800|Epoch 1000|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|3DIoUMatch|mAP@0.5|14.08|17.85|21.73|22.17|22.42|
| |Recall@0.5|27.86|31.49|35.24|36.61|35.25|
|Diffusion-SS3D|mAP@0.5|29.98|30.09|30.86|31.01|30.93|
| |Recall@0.5|43.73|44.14|45.06|44.72|44.17|

**Q6: Fully-supervised setting.** Our method is indeed not limited to only the SSL setting, and we can still train the model in the fully-supervised setting without using our components relevant to unlabeled data. Specifically, we train the diffusion model with 100% labeled data in the same way as the student model does (top of Figure 2 in the main paper). During inference, a random data distribution is then generated (Figure 3 of the main paper) and denoised via DDIM sampling to produce final predictions. We show the result in the table below on ScanNet, where our method performs better than the baseline without diffusion by more than 1%. 
Although the fully-supervised setting is not our main focus, this demonstrates the potential of introducing the diffusion process in more settings.

|Model|100% mAP@0.25|100% mAP@0.5|
|:---:|:---:|:---:|
|SESS|61.3|38.8|
|3DIoUMatch|62.9|42.1|
|Diffusion-SS3D|64.1|43.2|
|Gain (mAP)|+1.2|+1.1|

**Q7: Limitations.** We will add more discussions as in Q2 and Q4. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response and the provided clarifications as well as the additional results and ablations. I have no further questions at this point. --- Reply to Comment 1.1.1: Title: Thank you for your comments Comment: Thank you for the comments. We wonder whether you could consider raising the score as all issues have been addressed. We appreciate your help. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
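As background for the DDIM sampling referenced in the rebuttal above, the following is a generic, toy sketch of one deterministic DDIM update (η = 0), not the authors' implementation; `abar_t` and `abar_prev` stand for the cumulative noise-schedule products ᾱ_t and ᾱ_{t-1}, and `eps_hat` is the model's noise prediction (all names are assumptions for illustration).

```python
import math

def ddim_step(x_t, eps_hat, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0): recover the x0 estimate
    implied by the predicted noise, then re-project it to the previous
    noise level abar_prev."""
    x0_hat = [(x - math.sqrt(1.0 - abar_t) * e) / math.sqrt(abar_t)
              for x, e in zip(x_t, eps_hat)]
    return [math.sqrt(abar_prev) * x0 + math.sqrt(1.0 - abar_prev) * e
            for x0, e in zip(x0_hat, eps_hat)]

# Sanity check: with the exact noise, a single step to abar_prev = 1 recovers x0.
x0 = [0.5, -1.0]   # clean signal (e.g., a toy box-size parameterization)
eps = [0.3, 0.2]   # the noise that was actually added
abar_t = 0.5
x_t = [math.sqrt(abar_t) * a + math.sqrt(1.0 - abar_t) * b for a, b in zip(x0, eps)]
recovered = ddim_step(x_t, eps, abar_t, 1.0)
```

Running several such steps with a learned noise predictor, starting from pure noise, is the generic mechanism behind "denoising from randomly generated object sizes and label distributions" described in the rebuttal.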
Summary: This paper introduces a novel approach to enhance the accuracy of pseudo-labels and inference results in semi-supervised 3D object detection through the utilization of diffusion models. The authors propose the integration of diffusion models from two perspectives: 3D object sizes and class label distributions. The application of diffusion models to 3D object sizes aims to improve the recall rate of ground-truth objects, while the denoising process of class label distributions addresses the issue of low accuracy in category predictions for pseudo-labels. The paper includes comprehensive ablation studies and detailed analysis to validate the effectiveness of the proposed approach. Strengths: 1. The authors present a comprehensive and detailed explanation of the algorithm, providing readers with a clear understanding of the proposed method and its underlying principles. 2. The use of diffusion models to improve the quality of pseudo labels in semi-supervised 3D object detection is novel. Weaknesses: 1. The authors should include a comparison on the 20% labeled data, as it is a commonly used setting in the literature, such as in SESS and 3DIoUMatch. Additionally, it would be beneficial to include comparisons at larger labeled ratios, such as 50% and 100%, to provide a more comprehensive analysis of the proposed method's performance. 2. Although the use of representative points in the proposed diffusion model for 3D object sizes is mentioned as a distinction from the 2D object detection approach [9], it is a straightforward idea as point sub-sampling is commonly employed in 3D object detection to reduce the search space. Moreover, it would be valuable to investigate the effects of using different sub-sampling techniques, such as random sub-sampling, and compare them with FPS sampling. 3. 
The diffusion model for refining bounding boxes appears similar to the IoU Optimization technique proposed in 3DIoUMatch, which aims to optimize the center and size of bounding boxes. It would be informative to clarify whether the IoU Optimization technique has been applied to the pseudo labels when comparing them with the results from 3DIoUMatch. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Is there any existing research that demonstrates the effectiveness of denoising classification logits through the diffusion process on label distributions in other classification-based tasks, such as image classification? It would be beneficial to discuss in the related work section if there are any existing studies. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors did not discuss the limitations or potential negative social impact of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback; we address each question below. **Q1: Comparisons on 20% labeled data or larger labeled ratios.** For ScanNet, the results using 20% labeled data are reported in Table 1 of the main paper. For SUN RGB-D, we show in the table below that our Diffusion-SS3D performs better than 3DIoUMatch by more than 1.5% in both mAP@0.25 and mAP@0.5. Due to limited time and computational resources, we present the new results using one random data split and will include full results using more random splits in the final version.

| SUN RGB-D | 20% mAP@0.25 | 20% mAP@0.5 |
|:---:|:---:|:---:|
| VoteNet | 45.7 ± 0.6 | 22.5 ± 0.8 |
| SESS | 47.1 ± 0.7 | 24.5 ± 1.2 |
| 3DIoUMatch | 49.7 ± 0.4 | 30.9 ± 0.2 |
| Diffusion-SS3D | **51.3** | **32.7** |
| Gain (mAP) | +1.6 | +1.8 |

For higher ratios of labeled data or the fully-supervised setting (i.e., 100%), we did run the 100% setting on ScanNet, and the results are shown in the table below. We find that our Diffusion-SS3D has more than 1% improvement in both mAP metrics. In addition, we include the results of 20% labeled data for reference (also shown in Table 1 of the main paper). Note that, although the fully-supervised setting is not our main focus in this paper, these results demonstrate the potential of introducing the diffusion process in more settings.
| ScanNet | 20% mAP@0.25 | 20% mAP@0.5 | 100% mAP@0.25 | 100% mAP@0.5 |
|:---:|:---:|:---:|:---:|:---:|
| VoteNet | 46.9 ± 1.9 | 27.5 ± 1.2 | 57.8 | 36.0 |
| SESS | 49.6 ± 1.1 | 29.0 ± 1.0 | 61.3 | 38.8 |
| 3DIoUMatch | 52.8 ± 1.2 | 35.2 ± 1.1 | 62.9 | 42.1 |
| Diffusion-SS3D | **55.6** ± 1.7 | **36.9** ± 1.4 | **64.1** | **43.2** |
| Gain (mAP) | +2.8 | +1.7 | +1.2 | +1.1 |

**Q2: Effects of using different sub-sampling techniques.** Although we are aware of point sub-sampling as a common strategy in 3D object detection, we find that leveraging this scheme in our diffusion model can reduce the search space and decrease the degrees of freedom in generated noisy boxes, facilitating the diffusion learning process; the 2D case [9] does not face a similar issue. We show more results in the table below regarding point sampling strategies. With the suggested random sub-sampling, we find that results are similar to those using FPS (i.e., within 0.5% difference in mAP), for models both with and without our diffusion component. In practice, we follow the implementation of the VoteNet and 3DIoUMatch methods, which use FPS. We will add these results in the final version.

| ID | Diffusion | Sampling | ScanNet 5% mAP@0.25 | ScanNet 5% mAP@0.5 |
|:---:|:---:|:---:|:---:|:---:|
| (1) | | Random | 40.2 ± 1.5 | 22.1 ± 1.1 |
| (2) | | FPS | 40.0 ± 0.9 | 22.5 ± 0.5 |
| (3) | ✓ | Random | 43.1 ± 0.6 | 27.4 ± 0.6 |
| (4) | ✓ | FPS | 43.5 ± 0.2 | 27.9 ± 0.3 |

**Q3: IoU optimization technique from 3DIoUMatch.** We first note that the 3D IoU prediction in 3DIoUMatch is used as a filtering scheme based on model outputs to find high-quality pseudo-labels. In contrast, our diffusion model learns to denoise from random box sizes and labels to form pseudo-labels. Therefore, applying 3DIoUMatch's filtering scheme complements our method as a post-processing step.
In practice, since we consider 3DIoUMatch as our baseline, we do use the same filtering step in our framework after the diffusion steps. We will clarify this in the final version. **Q4: Existing works using the diffusion process on label distributions.** Thanks for the suggestion. Since using diffusion models for recognition tasks is relatively new, we have tried our best to include the work we are aware of in the Related Work section of the main paper. We do find two concurrent works [A, B] (both published on arXiv after the NeurIPS submission deadline) that involve the diffusion process for image classification. [A] uses the embedding of the image generation model for learning a classifier, while [B] tackles the learning process of noisy labeled data via the diffusion model. We will include and discuss both works in the final version. [A] Mukhopadhyay et al., Diffusion Models Beat GANs on Image Classification, arXiv:2307.08702, July 2023. [B] Chen et al., Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels, arXiv:2305.19518, May 2023. **Q5: Limitations and social impact.** Since the diffusion model requires more computational power for training and inference (runtime in frames per second is presented in Table 4 of the supplementary material), optimizing efficiency is crucial for real-time applications and large-scale deployments. In the meantime, the increased energy consumption may cause an environmental impact, so it is worth exploring more eco-friendly computing strategies to reduce the environmental footprint. We will include more discussions in the final version. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
--- Rebuttal Comment 2.1: Title: Post-rebuttal Comments Comment: Thank you for addressing my concerns and providing additional experiments in your responses. Most of my concerns have been resolved, thus I am inclined to raise my rating to borderline accept. However, I would like to suggest that you include more results, such as multiple runs of the experiments, results on different labeled data ratios (e.g., 50%) and other datasets (e.g., SUN RGB-D and the outdoor KITTI dataset, as pointed out by Reviewer TXhm), in the final version of the paper. Additionally, it would be beneficial to see the promised modifications incorporated as well. --- Reply to Comment 2.1.1: Title: Thank you Comment: Dear Reviewer, We appreciate your comments and help. We will incorporate more results and revise this paper based on your comments. Thank you!
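As background for the FPS-versus-random-sampling ablation in Q2 of the rebuttal above, farthest point sampling can be sketched in a few lines. This is an illustrative greedy implementation with our own function and variable names, not the code used in VoteNet or 3DIoUMatch:

```python
import numpy as np

def farthest_point_sampling(points, k, rng=None):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set."""
    rng = np.random.default_rng(0) if rng is None else rng
    chosen = [int(rng.integers(len(points)))]            # random seed point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                       # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.asarray(chosen)

# Sub-sample 8 candidate object centers from 200 random 3D points.
points = np.random.default_rng(1).uniform(size=(200, 3))
centers = farthest_point_sampling(points, 8)
```

Because each pick maximizes the distance to the already-chosen set, the sampled points spread out over the scene, which is why they serve well as candidate object centers.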
Summary: In this paper, the author argues that previous 3D semi-supervised detection methods relied solely on teacher models, which cannot generate sufficiently reliable pseudo-labels. Therefore, the author proposes a method called Diffusion-SS3D. This method enables diffusion learning to remove noise from corrupted 3D object size and class label distributions, thereby optimizing the pseudo-labels generated by the teacher model and obtaining more reliable pseudo-labels. The proposed method achieves state-of-the-art performance on the ScanNet and SUN RGB-D datasets. Strengths: Compared to previous works such as 3DIoUMatch, the approach of utilizing diffusion for denoising instead of relying solely on threshold filtering to generate better pseudo-labels is intriguing. Weaknesses: 1. In my opinion, the paper suggests that diffusion can generate more reliable pseudo-labels, but it does not clearly explain why the pseudo-labels generated by diffusion are more reliable than those generated by previous works. 2. In Table 4, the performance of using only Box Renewal is not provided. I believe that both DDIM and Box Renewal have certain denoising capabilities. If the paper considers DDIM to be effective, it should provide the results of using only Box Renewal to further demonstrate this viewpoint. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I am curious to know how much difference there would be in the performance if we trained diffusion and generated pseudo-labels using a sampled object size and class distribution based on the prior knowledge obtained from the dataset labels. 2. Since diffusion is just one module within the entire teacher-student framework, the metrics on the dataset may not directly indicate the reliability of diffusion as a source of pseudo-labels. Could the authors design an experiment that demonstrates the improved quality of pseudo-labels obtained through diffusion denoising? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. The authors did not provide a detailed discussion on the limitations of the method. They only briefly mentioned in the final part of the experiment section that diffusion does not perform well in denoising orientations. In this case, could further exploration be conducted to investigate the conditions under which the proposed method performs well in terms of ground truth and predictions? 2. From my perspective, the diffusion model has certain computational costs, and it may be worth discussing the practicality of the method from this angle. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback; we address each question below. **Q1: Why are the pseudo-labels generated by diffusion more reliable?** In Figure 1 of the main paper, we illustrate the fundamental difference between the conventional framework and our diffusion model in pseudo-label generation, which results in different pseudo-label qualities. First, prior works like 3DIoUMatch are designed to refine pseudo-labels purely based on model outputs, so objects may not be discovered if the model does not output sufficient predictions (i.e., a lower recall rate). On the other hand, our diffusion process starts with random box sizes and label distributions, which do not depend on model predictions, so there is a higher chance (i.e., a higher recall rate) that we can find more objects (please refer to the table provided in Q4). Then, through the denoising process in our diffusion model, pseudo-labels are refined to be more reliable. We will add this discussion to the paper. **Q2: Performance of box renewal.** We experiment with using only box renewal in our diffusion framework, as shown in the table below. We observe that both DDIM and box renewal have denoising capability. Note that the effectiveness of box renewal also benefits from the diffusion training process: box renewal requires the trained diffusion decoder (as shown in Algorithm 1 of the main paper), so that the decoder can take the updated bounding box features and label distributions from box renewal. Therefore, we would emphasize that the entire diffusion training framework makes the major improvement rather than a single DDIM or box renewal component, i.e., box renewal is not a standalone method without the diffusion training step. We will add this result with more discussions to the final version. 
| ID | DDIM | Box Renewal | ScanNet 5% mAP@0.25 | ScanNet 5% mAP@0.5 |
|:---:|:---:|:---:|:---:|:---:|
| (1) | | | 40.5 ± 1.2 | 22.8 ± 0.8 |
| (2) | ✓ | | 42.8 ± 0.5 | 26.6 ± 0.9 |
| (3) | | ✓ | 42.3 ± 0.4 | 26.8 ± 0.3 |
| (4) | ✓ | ✓ | **43.5** ± 0.2 | **27.9** ± 0.3 |

**Q3: Prior knowledge in diffusion training.** Thanks for the suggestion. Using prior knowledge in the diffusion process is interesting, but it may require more study. For instance, we may consider category-specific random sampling based on plausible object sizes as the prior. However, this is non-trivial, as we do not assume to know which category to sample for during inference. In practice, we find that randomly sampling the object size within 1/4 of the entire scene is more effective than using a larger object size. This supports that some prior knowledge should still be a helpful cue, and we will consider this as future work. **Q4: Pseudo-label quality.** To validate whether the quality of pseudo-labels is improved, we evaluate the metrics on unlabeled training data during model training, via the teacher model that generates pseudo-labels. In the table below, we show that overall the pseudo-label quality of our Diffusion-SS3D is better than 3DIoUMatch's, with more than 8% improvement in mAP and recall rate. In addition, we observe that our diffusion model achieves better quality in earlier epochs and then remains stable during the entire training process. Note that we still observe improvement in semi-supervised performance during training, since the model needs to be trained longer to learn from both labeled and unlabeled data. These results support our claim that the diffusion model can generate high-quality pseudo-labels, and we will add the results to the final version. In addition, we have included more visual comparisons of generated pseudo-labels in the rebuttal pdf file. 
| ScanNet 5% | Metric | Epoch 100 | Epoch 200 | Epoch 400 | Epoch 600 | Epoch 800 | Epoch 1000 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 3DIoUMatch | mAP@0.5 | 14.08 | 17.85 | 21.73 | 21.95 | 22.17 | 22.42 |
| | Recall@0.5 | 27.86 | 31.49 | 35.24 | 36.22 | 36.61 | 35.25 |
| Diffusion-SS3D | mAP@0.5 | 29.98 | 30.09 | 30.86 | 30.55 | 31.01 | 30.93 |
| | Recall@0.5 | 43.73 | 44.14 | 45.06 | 44.75 | 44.72 | 44.17 |

**Q5: Limitations.** Diffusion model training relies on ground-truth information to perform the denoising process. For the datasets we experiment on, we find that the orientation information may be unavailable or inaccurate. For example, ScanNet does not provide orientations, i.e., all objects are assigned an orientation of 0. For SUN RGB-D, orientations are provided inconsistently across different scenes, which makes training to denoise orientations difficult (Ln 308-310). We will consider exploring noisy data for diffusion models as future work. As for other limitations, since the diffusion model requires more computational power for training and inference (runtime in frames per second is presented in Table 4 of the supplementary material), optimizing efficiency is crucial for real-time applications and large-scale deployments. In the meantime, the increased energy consumption may cause an environmental impact, so it is worth exploring more eco-friendly computing strategies to reduce the environmental footprint. We will include more discussions in the final version. **Q6: Computational costs.** One of the limitations of the diffusion model is its computational cost, as described above (highlighted in Table 4 of the supplementary material). 
For example, compared to our baseline method without diffusion, i.e., 3DIoUMatch, our runtime speed is decreased by 28.8% (from 65.54 FPS to 46.64 FPS) while the performance in mAP@0.5 relatively improves by 61.3% (from 8% to 12.9%) on SUN RGB-D (Ln 61-65 of the supplementary material). We will add more discussions regarding this limitation, e.g., adjusting the diffusion sampling steps to achieve a trade-off between accuracy and efficiency as shown in Table 4 of the supplementary material. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you. --- Rebuttal Comment 2.1: Comment: Thanks for the rebuttal. With these additional results and explanations, my concerns have been addressed. I hope your final version can contribute to the development of the field. --- Reply to Comment 2.1.1: Title: Thank you for the comments Comment: We thank you for the comments and will include all the discussions in the revised manuscript and supplementary material. As all the issues have been addressed, we wonder whether you could consider raising the scores. Your help is gratefully appreciated.
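To make the interaction between DDIM sampling and box renewal discussed in Q2 above concrete, here is a simplified, hypothetical sketch. The `decoder` callable, the flat box parameterization, and the confidence threshold are placeholders of our own, not the paper's actual components:

```python
import numpy as np

def ddim_step(x_t, x0_pred, t, t_next, alphas_cumprod):
    """One deterministic DDIM update from timestep t down to t_next."""
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
    eps = (x_t - np.sqrt(a_t) * x0_pred) / np.sqrt(1.0 - a_t)
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

def sample_with_box_renewal(decoder, n_boxes, dim, steps, alphas_cumprod,
                            score_thresh=0.5, rng=None):
    """Denoise random boxes; low-confidence boxes are re-drawn from noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    T = len(alphas_cumprod) - 1
    ts = np.linspace(T, 0, steps + 1).astype(int)
    x = rng.standard_normal((n_boxes, dim))              # start from pure noise
    for t, t_next in zip(ts[:-1], ts[1:]):
        x0_pred, scores = decoder(x, t)                  # predicted clean boxes + confidence
        keep = scores >= score_thresh
        x = ddim_step(x, x0_pred, t, t_next, alphas_cumprod)
        x[~keep] = rng.standard_normal(((~keep).sum(), dim))  # box renewal
    return x

# Toy decoder standing in for the trained diffusion decoder: it always
# predicts the origin as the clean box, with full confidence.
toy_decoder = lambda x, t: (np.zeros_like(x), np.ones(len(x)))
alphas_cumprod = np.linspace(1.0, 0.01, 101)
boxes = sample_with_box_renewal(toy_decoder, n_boxes=8, dim=4, steps=2,
                                alphas_cumprod=alphas_cumprod)
```

Boxes whose predicted confidence falls below the threshold are replaced with fresh Gaussian noise, so the next DDIM step re-denoises them instead of propagating low-quality candidates.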
Summary: This paper proposes a semi-supervised 3D object detection framework, named Diffusion-SS3D. Diffusion-SS3D introduces a diffusion process to improve the quality of pseudo-labels. The authors perform experiments on the ScanNet and SUN RGB-D benchmarks to verify the effectiveness. Strengths: + The motivation of this paper makes sense. + The experimental results are promising. + The combination of semi-supervised 3D object detection and diffusion is interesting and novel. Weaknesses: - The statements in the method part are too long. The writing should be reorganized. Try to use a more detailed figure to introduce the method. - How is the quality of pseudo-labels evaluated? It would be better to provide statistical data or visualization to support the claims. - In Tables 5 and 6, it seems that the diffusion steps and scaling factors for SNR are sensitive to different datasets. What is the reason for this? Please provide some insights or analysis on this phenomenon (maybe the amount of labeled data is limited?). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The method proposed by the authors has achieved good results on indoor datasets, e.g., ScanNet and SUN RGB-D. What about the performance on outdoor datasets? And what about the performance of the proposed method in multi-camera 3D detection methods? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback; we address each question below. **Q1: Pseudo-label quality.** To validate whether the quality of pseudo-labels is improved, we evaluate the metrics on unlabeled training data during model training, via the teacher model that generates pseudo-labels. In the table below, we show that overall the pseudo-label quality of our Diffusion-SS3D is better than 3DIoUMatch's, with more than 8% improvement in mAP and recall rate. In addition, we observe that our diffusion model achieves better quality in earlier epochs and then remains stable during the entire training process. Note that we still observe improvement in semi-supervised performance during training, since the model needs to be trained longer to learn from both labeled and unlabeled data. These results support our claim that the diffusion model can generate high-quality pseudo-labels, and we will add them to the final version. In addition, we have included more visual comparisons of generated pseudo-labels in the rebuttal pdf file.

| ScanNet 5% | Metric | Epoch 100 | Epoch 200 | Epoch 400 | Epoch 600 | Epoch 800 | Epoch 1000 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 3DIoUMatch | mAP@0.5 | 14.08 | 17.85 | 21.73 | 21.95 | 22.17 | 22.42 |
| | Recall@0.5 | 27.86 | 31.49 | 35.24 | 36.22 | 36.61 | 35.25 |
| Diffusion-SS3D | mAP@0.5 | 29.98 | 30.09 | 30.86 | 30.55 | 31.01 | 30.93 |
| | Recall@0.5 | 43.73 | 44.14 | 45.06 | 44.75 | 44.72 | 44.17 |

**Q2: Sensitivity in Tables 5 and 6.** As discussed in Ln 291-297 of the main paper, one reason for the sensitivity of diffusion sampling steps (Table 5 of the main paper) to different datasets is indeed the limited labeled data, as the reviewer suggested. Since the diffusion training process is mainly guided by the limited labeled data, the diffusion sampling steps become more sensitive when inferring on the unlabeled data distribution. 
Similarly for Table 6 and Ln 298-304 of the main paper, since the scaling factor determines the difficulty of the noisy data (i.e., how difficult it is to denoise), having limited labeled data would also make the training process more challenging. We will clarify this in the final version. Nevertheless, in both Tables 5 and 6 of the main paper, despite slight deviations across different settings, we can conclude that a DDIM step of 2 and a scaling factor of 4.0 is a general rule of thumb that achieves better results than the baseline without diffusion. **Q3: Outdoor dataset.** Due to limited time to experiment with a new training framework (i.e., the outdoor dataset needs a different baseline than the VoteNet we use in the paper), we will include the results in the final version. Note that, by design of the proposed diffusion model in a general teacher-student framework, our method should not be limited to certain datasets. However, we do recognize that some changes may be required, as some object properties (e.g., location, density) outdoors would differ from the indoor scenario. For the multi-camera setting, it is an interesting direction as it may provide more information, e.g., pseudo-label consistency from multiple views in the diffusion training process, which we consider future work as it is beyond the scope of this paper. We will include these discussions in the final version. **Q4: Writings.** Thank you for the feedback. We will improve the writing and figures in the final version. --- Rebuttal Comment 1.1: Title: Official Comments by Reviewer TXhm Comment: Thanks for the rebuttal and response to my questions. Most of my concerns have been addressed, especially the “diffusion steps and scaling factors”. I keep my rating 7. Note that the writing and figures must be revised in the final version. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, We appreciate your feedback and will revise the paper based on your comments. Thank you! --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
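For readers wondering why the scaling factor discussed in Q2 above controls the difficulty of the noisy data: in DiffusionDet-style pipelines the clean signal is rescaled before being corrupted, so a larger factor raises the signal-to-noise ratio at every timestep. A minimal numerical illustration (the variable names are ours, not the paper's):

```python
import numpy as np

def corrupt(x0, t, alphas_cumprod, scale, rng):
    """Forward-diffuse a [0, 1]-normalized signal after SNR rescaling."""
    x0_scaled = (x0 * 2.0 - 1.0) * scale               # map to [-scale, scale]
    a_t = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(a_t) * x0_scaled + np.sqrt(1.0 - a_t) * noise
    snr = a_t * scale ** 2 / (1.0 - a_t)               # signal/noise variance ratio
    return x_t, snr

rng = np.random.default_rng(0)
alphas_cumprod = np.linspace(1.0, 0.02, 101)
sizes = rng.uniform(size=(16, 4))                      # toy normalized box parameters
_, snr_small = corrupt(sizes, 50, alphas_cumprod, scale=1.0, rng=rng)
x_t, snr_large = corrupt(sizes, 50, alphas_cumprod, scale=4.0, rng=rng)
```

With the same timestep, quadrupling the scale raises the SNR by a factor of sixteen, i.e. the denoising task becomes easier; with very limited labeled data, tuning this trade-off matters more.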
Rebuttal 1: Rebuttal: Thanks for your constructive feedback. In addition to addressing individual questions in each rebuttal, we include more example results of generated pseudo-labels in the pdf file, in comparisons with the 3DIoUMatch baseline that does not use the diffusion model like our method. If there are any further inquiries, please notify us by the end of Author-Reviewer Discussion Stage (Aug 16th). Pdf: /pdf/81e7a2bbe233f4d1df453c5e338b23924a06efaf.pdf
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: The paper proposes a novel algorithm utilizing the diffusion model in 3D object detection for generating pseudo-labels for semi-supervised learning. Technically, it adopts the previous teacher-student architecture, but extends it by introducing diffusion to generate pseudo-labels and a denoising technique for class labels and object size distributions. The model incorporates both labeled and unlabeled data through an asymmetric augmentation mechanism. Strengths: - The novel algorithm creates high-quality pseudo-labels for unlabeled data and highly increases the performance of current techniques in 3D object detection. - The qualitative and quantitative results of the proposed approach demonstrate good improvements over existing methods. This indicates the effectiveness and superiority of the proposed approach in achieving better performance and accuracy for 3D object detection. - Also, the source code is available. Weaknesses: - There needs to be an ablation study for farthest point sampling. I am curious whether this provides convincing results. I encourage the authors to visualize its results. - It would be helpful if there were an additional ablation study for a baseline that does not use any teacher network but trains the student network using the proposed method. I am curious why the teacher model is necessary. - Few qualitative results. I encourage the authors to add more qualitative results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please discuss the role of the teacher network. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I also encourage the authors to add the societal impact of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback; we address each question below. **Q1: Farthest point sampling.** We first note that we follow the common implementation in VoteNet and 3DIoUMatch to apply farthest point sampling (FPS) in our framework. To study the impact of FPS, we further conduct an experiment using the random point sub-sampling method. Results shown in the table below indicate that both FPS and random sampling provide competitive performance improvement compared to the 3DIoUMatch baseline. We will add this result to the final version.

| ID | Diffusion | Sampling | ScanNet 5% mAP@0.25 | ScanNet 5% mAP@0.5 |
|:---:|:---:|:---:|:---:|:---:|
| (1) | | Random | 40.2 ± 1.5 | 22.1 ± 1.1 |
| (2) | | FPS | 40.0 ± 0.9 | 22.5 ± 0.5 |
| (3) | ✓ | Random | 43.1 ± 0.6 | 27.4 ± 0.6 |
| (4) | ✓ | FPS | 43.5 ± 0.2 | 27.9 ± 0.3 |

To visualize the point sampling results via FPS, Figure 3 of the main paper illustrates one example. The red points at the top of the figure are sampled points, which we consider potential object centers for producing random object bounding boxes in the diffusion process. We will clarify this in the final version. **Q2: Why the teacher model is necessary.** In the semi-supervised setting, the teacher-student framework plays a key role in handling unlabeled data: the teacher model generates pseudo-labels from unlabeled data, serving as a supervisory signal for the learning process of the student model. In our approach, we utilize the common teacher-student framework and integrate the diffusion process, so that the diffusion sampling occurring in the teacher model generates pseudo-labels. 
While it may be possible to use only a single student model to generate pseudo-labels, this would not be effective in semi-supervised learning (e.g., the VoteNet baseline) due to the absence of a more stable teacher model (i.e., updated via an exponential moving average scheme from the student model) for pseudo-label generation. Note that all the main methods in the paper (e.g., SESS, 3DIoUMatch) use the same teacher-student framework for fair comparisons. To realize the suggested experiment without the teacher model, we conduct an experiment using 100% labeled data via the proposed diffusion model. In this way, there is no need to generate pseudo-labels from unlabeled data, so the teacher model is no longer critical. Specifically, we train our diffusion model with labeled data in the same way as the student model (top of Figure 2 in the main paper). During inference, a random data distribution is generated (see Figure 3 of the main paper) and denoised via DDIM sampling to produce final predictions. We show the results in the table below on ScanNet, where our method performs better than the baseline without diffusion by more than 1%. Although the fully-supervised setting is not our main focus in this paper, this demonstrates the potential of introducing the diffusion process in more settings.

| Model | 100% mAP@0.25 | 100% mAP@0.5 |
|:---:|:---:|:---:|
| VoteNet | 57.8 | 36.0 |
| SESS | 61.3 | 38.8 |
| 3DIoUMatch | 62.9 | 42.1 |
| Diffusion-SS3D | **64.1** | **43.2** |
| Gain (mAP) | +1.2 | +1.1 |

**Q3: Qualitative results.** In our supplementary material, Figures 1 and 2 demonstrate the effectiveness of our diffusion process, showcasing the progressive improvement from random noisy boxes to final predictions. In addition, we have included more visual comparisons of generated pseudo-labels in the rebuttal pdf file. We will show more qualitative results in the final version as suggested. 
**Q4: Societal impact.** Since the diffusion model requires more computational power for training and inference (runtime in frames per second is presented in Table 4 of the supplementary material), the increased energy consumption may cause an environmental impact. Therefore, it is worth exploring more eco-friendly computing strategies to reduce the environmental footprint. We will add more discussions to the final version. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
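The exponential-moving-average teacher update mentioned in Q2 above, which is standard in mean-teacher frameworks such as SESS and 3DIoUMatch, can be sketched as follows; the momentum value is illustrative, not taken from the paper:

```python
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights as an exponential moving average of student weights."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

# The teacher drifts slowly toward the student, giving a smoothed, more
# stable model for generating pseudo-labels than the student itself.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(3):
    teacher = ema_update(teacher, student, momentum=0.9)
```

This smoothing is why the teacher, rather than the rapidly-changing student, is used to generate pseudo-labels in these frameworks.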
GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph
Accept (poster)
Summary: This paper introduces GraphAdapter, a novel adapter-style tuning strategy for vision-language models. GraphAdapter can leverage task-specific structure knowledge by explicitly modeling the dual knowledge graph. The authors validate the proposed method on 11 popular benchmarks in the few-shot classification setting. Strengths: - GraphAdapter explicitly models the dual-modality structure knowledge with a dual knowledge graph, allowing it to leverage task-specific structure knowledge from both textual and visual modalities. - The authors conducted comprehensive evaluations on 11 datasets and demonstrated the performance under different numbers of shots. Weaknesses: - The authors claimed that previous methods overlook the explicit exploitation of structure knowledge, but the experimental results showed that the combination of text and visual adapters achieved limited gain (compared to the text-only adapter). To some extent, this makes the motivation less convincing. - The knowledge graph is constructed once at the very beginning; this can be inconvenient in settings where new data arrives sequentially over time, as the knowledge graph would then need to be updated every time. It's important to consider this because the reason we use adapters is that we want quick adaptation with minimal cost. - The performance improvements seem trivial. For example, in Figure 2 and Table 1, most of the improvement over the second-best model is around 1-2%, and sometimes less than 1% or worse. It seems less promising, especially considering the complicated design. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors also compare the training time/parameters with previous methods to show the efficiency? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** The authors claimed that previous methods overlook the explicit exploitation of structure knowledge, but the experimental results showed that the combination of text and visual adapters achieved limited gain (compared to the text-only adapter). To some extent, this makes the motivation less convincing. **A1:** Thanks for your comments. We will respond in two aspects. (i) **An adapter typically exhibits effectiveness on a specific branch.** Previous works such as CLIP-Adapter and TaskRes have shown that different types of adapters are effective on a particular branch but face performance degradation when extended to both branches. For example, CLIP-Adapter and TaskRes work on the visual and textual branches, respectively. Similarly, it is reasonable that our GraphAdapter performs best on the textual branch only. Moreover, it is worth noting that applying our method to both branches still brings slight gains, whereas other methods decrease in performance. (ii) **Our motivation**. As stated in lines 78-80, our GraphAdapter mainly captures structure knowledge via two sub-graphs, i.e., a textual sub-graph and a visual sub-graph, ***for adapting the text branch only***. To show the effectiveness of exploring structure knowledge, we should compare our GraphAdapter with methods that do not utilize such knowledge, instead of contrasting with variants of our own method. The significant performance gains over other methods substantiate our rationale for harnessing structural knowledge. **Q2:** The knowledge graph is constructed once at the very beginning; this can be inconvenient in settings where new data arrives sequentially over time, since the knowledge graph would then need to be updated every time. It's important to consider this because the reason we use adapters is that we want quick adaptation with minimal cost. **A2:** Thanks for your comments. 
We contend that the necessity of updating the knowledge graph hinges on the nature of the incoming data. If new data sharing the same label space is introduced sequentially, our graph can remain unaffected. In contrast, when new classes or tasks are introduced sequentially, creating a lifelong/incremental learning (IL) set-up, all adapter-style methods need re-optimization, including our GraphAdapter. For example, TaskRes necessitates configuring a new residual, while Tip-Adapter needs cache updates and weight relearning. Overall, none of these methods is designed for the IL set-up. However, we acknowledge that the IL set-up is practical and evaluate our method under a pseudo-IL setting where a model is trained on base classes and directly tested on unseen new classes. The results demonstrate the superiority of our method, which outperforms CoOp and CoCoOp by 2.08\% and 0.90\%, respectively. Note that our method is not specifically designed for this base-to-new set-up. Furthermore, our method could be reformulated for real IL: we can introduce a dynamic graph, which would facilitate the incremental expansion of nodes and limited updating of the edges correlated with the new nodes. **Q3:** The performance improvements seem trivial. For example, in Figure 2 and Table 1, most of the improvement over the second-best model is around 1-2\%, and sometimes less than 1\% or worse. It seems less promising, especially considering the complicated design. **A3:** Thanks for your thoughtful feedback. We acknowledge that the observed improvements may appear modest at first glance. However, we emphasize that while the percentage differences might seem small, they can still be meaningful for the task of tuning VLMs with few-shot samples, given the scarcity of training data and the limited number of learnable parameters. 
For example, the average accuracy over 11 diverse datasets has improved only gradually across methods, e.g., 73.42\% (CoOp, IJCV) $\rightarrow$ 74.44\% (CLIP-Adapter, Arxiv2021) $\rightarrow$ 75.65\% (Tip-Adapter, ECCV2022) $\rightarrow$ 75.10\% (TaskRes, CVPR2023). This underscores the challenge of enhancing performance on this task and, consequently, highlights that the 1-2\% gains are not trivial. Moreover, while the concept of constructing a dual knowledge graph to adapt VLMs might seem intricate, its implementation is remarkably straightforward and the introduced parameters are limited (only 4.145M). Consequently, we assert that our method is not overly complex. In summary, taking into account the ease of implementation and the modest parameter requirements that lead to 1-2\% performance gains on such a challenging task, our approach demonstrates its effectiveness and holds the potential to inspire future research. **Q4:** Can the authors also compare the training time/parameters with previous methods to show the efficiency? **A4:** Thanks for your great suggestions. In the table below, we compare our method with existing published ETL methods on ImageNet in the 16-shot case, in terms of tunable parameters, computational FLOPs, training time, and inference time. All results are measured with the officially released code from GitHub. We can observe that our tunable parameters are still fewer than those of Tip-Adapter-F. In terms of computational FLOPs, our GraphAdapter takes about 5.42 GFLOPs, almost the same as Tip-Adapter-F and TaskRes, and far lower than CLIP-Adapter. Moreover, our training and inference times are less than those of the adapter-style work CLIP-Adapter and the prompt tuning method CoOp. 
| | CoOp | CLIP-Adapter | Tip-Adapter-F | TaskRes | Ours |
|:-:|:-:|:-:|:-:|:-:|:-:|
| Tunable Parameters (M) | 0.008 | 0.524 | 16.384 | 1.024 | 4.145 |
| GFlops | 1943.12 | 1959.44 | 5.43 | 5.42 | 5.42 |
| Training time (one epoch) (s) | 40.91 | 45.71 | 12.36 | 13.64 | 23.29 |
| Inference time (s/100) | 119.64 | 275.22 | 51.03 | 4.89 | 4.91 |
| Performance | 62.95 | 63.59 | 65.44 | 64.75 | 65.70 |

--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: The authors have addressed most of my concerns. I'm raising my score to 5. --- Reply to Comment 1.1.1: Title: Appreciation for Your Feedback Comment: Thanks for your great efforts in our work. We sincerely appreciate your positive and constructive comments. We will incorporate these in our revision carefully.
Summary: This paper proposes a new prompt tuning strategy named GraphAdapter to fuse textual and visual structure knowledge for downstream tasks. It first constructs the dual knowledge graph by taking the textual/visual features of a specific class as nodes and the cosine similarities between these features as edges. After that, the textual feature $z_t$ is warped into the graph space and two GCNs are applied to excavate the knowledge from the textual and visual sub-graphs, respectively. Extensive experiments on 11 benchmark datasets show that GraphAdapter consistently outperforms previous works. Strengths: - The idea of introducing graph learning into prompt tuning methods is novel. - The average performance of GraphAdapter is better than previous methods on few-shot learning, including 1-/2-/4-/8-/16-shots on 11 benchmark datasets. - Ablations also show that GraphAdapter can achieve consistent gains when using different backbones. Weaknesses: - My major concern is about the **efficiency**. Although modeling all the classes as a knowledge graph is a novel idea, introducing GNNs into prompt tuning is generally time- and memory-consuming and may violate the original motivation of *efficient* transfer learning for vision-language foundation models (VLFMs). In fact, as is mentioned in the appendix (L40), the authors already decouple the original graph of ImageNet (1k nodes) into 4 sub-graphs (256 nodes) to alleviate the computational cost. So the *scalability* of this method may be limited. - Besides that, one of the most important potentials of VLFMs is transferring to *open-world* scenarios, where the classes are unknown and infinite, which may make the graph impossible to build in advance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the few-shot training setting, what is the detailed process to obtain the visual features for constructing the visual sub-graph? For example, what is the number of images used here? 
What kind of data augmentation is applied? - What is the overall cost of GraphAdapter compared to previous methods? For example: (a) the training FLOPs; (b) the training wall-time of one epoch; (c) the inference speed; (d) the number of parameters. - (Minor) The figure reference in L171 is incorrect. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are partly addressed. The authors may need to further discuss the limitation of **scalability** as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
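The knowledge-graph construction summarized in this review (per-class features as nodes, pairwise cosine similarities as edges) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function name and feature shapes are assumptions.

```python
import numpy as np

def build_subgraph(class_features: np.ndarray) -> np.ndarray:
    """Build one knowledge sub-graph: rows of `class_features` are the
    per-class node features; edges are their pairwise cosine similarities."""
    # L2-normalize each node so dot products become cosine similarities.
    norms = np.linalg.norm(class_features, axis=1, keepdims=True)
    nodes = class_features / np.clip(norms, 1e-8, None)
    return nodes @ nodes.T  # (num_classes, num_classes) adjacency matrix

# Toy example: 4 classes with 8-dimensional (CLIP-like) features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
adj = build_subgraph(feats)
assert adj.shape == (4, 4)
assert np.allclose(np.diag(adj), 1.0)  # each node is maximally similar to itself
```

In the paper's setting the same recipe would be applied twice — once to CLIP text embeddings and once to visual embeddings — yielding the textual and visual sub-graphs of the dual knowledge graph.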
Rebuttal 1: Rebuttal: We sincerely thank you for your great efforts and insightful questions. **Q1:** About the concern on efficiency and scalability. **A1:** Thanks for your valuable comments. We will answer the above questions from three perspectives. (i) Efficient transfer learning in VLMs involves two aspects: 1) parameter-efficient transfer learning, and 2) data-efficient transfer learning. Our GraphAdapter satisfies both principles, like existing ETL methods, since it only utilizes 4.145M learnable parameters and assists few-shot learning. (ii) To demonstrate that our scheme is efficient, we compare our method with existing published efficient transfer learning methods on ImageNet in the 16-shot setting, in terms of tunable parameters, computational FLOPs, training time, and inference time. All results are measured with the officially released code from GitHub. The experimental results are shown in the table below. We can observe that our tunable parameters are still fewer than those of Tip-Adapter-F. In terms of computational FLOPs, our GraphAdapter takes about 5.42 GFLOPs, almost the same as Tip-Adapter-F and TaskRes, and far lower than CLIP-Adapter. Moreover, our training and inference times are less than those of the adapter-based work CLIP-Adapter and the prompt tuning method CoOp. Therefore, our GraphAdapter satisfies the requirements of efficient transfer learning. Moreover, our GraphAdapter achieves the best performance. 
| | CoOp | CLIP-Adapter | Tip-Adapter-F | TaskRes | Ours |
|:-:|:-:|:-:|:-:|:-:|:-:|
| Tunable Parameters (M) | 0.008 | 0.524 | 16.384 | 1.024 | 4.145 |
| GFlops | 1943.12 | 1959.44 | 5.43 | 5.42 | 5.42 |
| Training time (one epoch) (s) | 40.91 | 45.71 | 12.36 | 13.64 | 23.29 |
| Inference time (s/100) | 119.64 | 275.22 | 51.03 | 4.89 | 4.91 |
| Memory Cost (Training) | 18.907 | 9.257 | 4.313 | 6.227 | 10.75 |
| Memory Cost (Inference) | 7.403 | 7.615 | 4.161 | 6.225 | 4.433 |
| Performance | 62.95 | 63.59 | 65.44 | 64.75 | 65.70 |

(iii) For adapter-style efficient transfer learning methods in VLMs, the computational complexity increases as the number of classes increases during training. For example, the complexity of the adapter process in CLIP-Adapter, Tip-Adapter-F, and TaskRes is $O(kn)$, where $k$ is constant. In our GraphAdapter, **the number of parameters of the GCN is invariant to the number of classes.** The increase in computational cost is primarily caused by edge computation. To reduce the complexity, we decompose the 1000 nodes in ImageNet into four sub-graphs of 250 nodes as a trade-off between performance and computational complexity. Here, we give the theoretical derivation: if we have $n$ classes, we can divide them into several sub-graphs with a fixed $m$ nodes each, which decreases the edge computation to $O(mn) \sim O(n)$ when $n$ is large, since $m$ is constant like $k$. We also conduct experiments for different $m$, as shown in the table below. We find that as $m$ decreases, the performance drops only slightly. Therefore, our method has almost the same scalability as existing adapter-style ETL methods, such as TaskRes, CLIP-Adapter, and Tip-Adapter-F.

| m | 20 | 50 | 100 | 250 |
|:-:|:-:|:-:|:-:|:-:|
| 1-shot | 61.23 | 61.45 | 61.47 | 61.50 |

**Q2:** One of the most important potentials of VLFMs is transferring to open-world scenarios, where the classes are unknown and infinite, which may make the graph impossible to build in advance. **A2:** Thanks for your insightful and valuable comments. 
(i) First, our GraphAdapter follows the CoOp setting, which existing adapter-style works also follow and which is devoted to improving transferability to the seen task, including seen classes or domain generalization scenarios. Therefore, it is not specifically optimized for open-world scenarios, the same as existing adapter-style ETL works and CoOp. (ii) However, we find that our GraphAdapter inherently possesses transferability to open-world scenarios, despite no special optimization or design. Notably, **in our GraphAdapter, once training is finished, the dual knowledge graph is stored in the model, so it does not need to be constructed in advance at inference time, even for open-world scenarios.** As shown in Fig. 2 of our manuscript, given novel classes, their textual features can warp knowledge from the existing dual knowledge graph constructed with few-shot training data, for the classification of the novel classes. CoCoOp is an excellent work on the trade-off between seen and unseen classes in an open-world scenario. We follow the base-to-new setting in CoCoOp, where the new classes are unseen during training. The experimental results are shown in the table below:

| Methods | Base | New | H |
|:-|:-:|:-:|:-:|
| CLIP | 72.43 | 68.14 | 70.22 |
| CoOp | 76.47 | 67.88 | 71.92 |
| CoCoOp | 75.98 | 70.43 | 73.10 |
| GraphAdapter (Ours) | 77.91 | 70.13 | 74.02 |

We find that our GraphAdapter achieves a better harmonic mean (a better generalization trade-off than CoCoOp). Moreover, our GraphAdapter achieves 70.13\% accuracy on new classes, outperforming CoOp by a large margin. **Q3:** What is the detailed process to obtain the visual features for constructing the visual sub-graph? **A3:** The visual sub-graph is constructed with the few-shot training samples. 
Before training, we exploit the visual encoder of pre-trained CLIP to extract the visual features of the few-shot samples from the same class and average them to form the node of that class. The number of images used for each class is determined by the number of shots; for example, for the 2-shot task, two images are used for each node. The data augmentation strategy consists only of random resized cropping and random flipping. We will add this description in our revision. **Q4:** What is the overall cost of GraphAdapter? **A4:** Please see A1. **Q5:** (Minor) The reference figure in L171 is incorrect. **A5:** Thanks for pointing this out. We will revise it in the revision carefully. --- Rebuttal Comment 1.1: Title: Sincerely expect your response Comment: Dear reviewer pwW3: Thank you for your great efforts in reviewing our paper and providing constructive suggestions/comments. If our rebuttal does not address your concerns, you are warmly welcome to raise further questions. Best Wishes! Authors
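The node-construction step described in A3/A4 above — averaging each class's few-shot CLIP visual features into a single node vector — can be sketched as below. This is an illustrative sketch; the function name and feature shapes are assumptions, and real features would come from CLIP's visual encoder.

```python
import numpy as np

def visual_nodes(features_per_class):
    """Average each class's few-shot visual features into one node vector
    per class; `features_per_class[i]` has shape (num_shots, feat_dim)."""
    return np.stack([np.asarray(f).mean(axis=0) for f in features_per_class])

# 2-shot toy example: 3 classes, 2 images per class, 8-dim features.
rng = np.random.default_rng(1)
shots = [rng.normal(size=(2, 8)) for _ in range(3)]
nodes = visual_nodes(shots)
assert nodes.shape == (3, 8)  # one node per class
```

Because each node is a simple mean, this step runs once before training, which matches the rebuttal's point that graph construction adds only a small one-time cost.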
Summary: The paper proposes to utilize graph learning for efficient transfer learning of large vision-language models. The graph learning consists of knowledge graphs of two different modalities: the first uses the textual features of prompts and the second uses the visual features of training samples from the downstream tasks. The feature of each prompt aims to align with the fused graph features during the optimization process. Experiments on 11 few-shot classification benchmarks, including a few on fine-grained classification tasks, show that this method improves transfer learning performance. Strengths: The idea of using structured knowledge is intuitive. Using both textual and visual modalities is interesting. The section on the different GraphAdapter variants (text, visual, and T-V) is a worthwhile addition. The ablation experiments on the different coefficients for the different modality graphs add value. The paper proposes an intuitive idea, explains it well, and is well written. Weaknesses: Not a weakness, but a general question: is there a way to utilize the semantics of visual relationships? The structured knowledge graph in its current form appears to be more of a co-occurrence statistic. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: This is a well written paper. I have one question though: Is there a way to utilize the semantics of visual relationships? The structured knowledge graph in its current form appears to be more of a co-occurrence statistic. e.g. How to distinguish between "person holding a cup" vs "person drinking from a cup"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Broader impacts and limitations are mentioned in the supplementary material, it's the common impact that large vision-language models need to be concerned about - the nature of the pretraining dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work. We have given careful consideration to your insightful question. **Q1:** Is there a way to utilize the semantics of visual relationships? The structured knowledge graph in its current form appears to be more of a co-occurrence statistic. e.g. How to distinguish between "person holding a cup" vs "person drinking from a cup"? **A1:** Thanks for your positive comments and insightful question. Our GraphAdapter, following the setting of CoOp, has been validated on 11 downstream classification benchmarks. We believe the semantics of the visual relationships you mention can further enhance classification accuracy. Intuitively, utilizing visual relationships should be particularly beneficial for fine-grained classification and for classifying multiple objects in one image. Prompted by your insightful question, we surveyed related works and found that two directions are closely correlated with visual relationships within a single image, such as distinguishing "person holding a cup" from "person drinking from a cup": human-object interaction [1, 2, 3] (HOI) and scene graph generation [4, 5] (SGG). Human-object interaction focuses on identifying the relationships between humans and nearby objects, while scene graph generation is designed to identify the relationships between different objects within a single image. Both tasks typically involve first detecting the objects/humans, followed by classifying their relationships. These directions inspire us to consider two potential approaches to exploiting visual relationships: i) utilizing paired visual images and annotated language descriptions of the relationships to guide both the CLIP textual classifier and the visual encoder, enabling them to be aware of the relationships; 
ii) detecting the objects within an image and modeling their relationships through graph learning, as in SGG works. Two potential challenges arise in the utilization of visual relationships: i) whether the annotated datasets for visual relationships are adequate for classification, and ii) how to design an efficient and effective scheme to exploit visual relationships for classification. We believe this is a very promising direction for future work. [1] Gkioxari, Georgia, Ross Girshick, Piotr Dollár, and Kaiming He. "Detecting and recognizing human-object interactions." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8359-8367. 2018. [2] Liao, Yue, Aixi Zhang, Miao Lu, Yongliang Wang, Xiaobo Li, and Si Liu. "Gen-vlkt: Simplify association and enhance interaction understanding for hoi detection." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20123-20132. 2022. [3] Park, Jeeseung, Jin-Woo Park, and Jong-Seok Lee. "ViPLO: Vision Transformer based Pose-Conditioned Self-Loop Graph for Human-Object Interaction Detection." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17152-17162. 2023. [4] Lin, Xin, Changxing Ding, Yibing Zhan, Zijian Li, and Dacheng Tao. "Hl-net: Heterophily learning network for scene graph generation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19476-19485. 2022. [5] Tang, Kaihua, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. "Unbiased scene graph generation from biased training." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3716-3725. 2020. --- Rebuttal Comment 1.1: Title: post-rebuttal comments Comment: I thank the authors for posting the rebuttals. 
I'll keep my original "accept" rating. --- Reply to Comment 1.1.1: Title: Thanks for your positive comments Comment: We greatly appreciate your support and great suggestions for our work. We believe these insightful comments point to a promising direction for the tuning of VLMs.
Summary: In this paper, the authors present an adapter-style tuning method, termed as GraphAdapter, that explicitly captures the dual-modality structure knowledge by utilizing a dual knowledge graph, leading to enhanced adapter-style transfer learning. Specifically, the authors identify two key challenges in existing adapter-style approaches for efficient transfer learning, including the deficiency in modeling task-specific knowledge from only a single modality, and the neglect of explicitly exploiting the structural knowledge in downstream tasks. Motivated by these two limitations, the authors propose a novel tuning method for visual-language models that incorporates task-specific knowledge for downstream tasks through the integration of textual and visual structure knowledge, based on graph learning. In particular, the proposed method establishes a dual knowledge graph consisting of a textual knowledge subgraph and a visual knowledge subgraph. Consequently, the feature adapter can effectively leverage the inner-modality and cross-modality structure knowledge for superior tuning performance. Experiments on 11 benchmarks convincingly demonstrate the effectiveness of the proposed GraphAdapter approach. Strengths: 1. The proposed GraphAdapter is very well-motivated. The authors have conducted a thorough analysis of the existing ETL approaches, identifying two key limitations, and recognizing the significance of incorporating dual-modality structure knowledge in ETL. Building upon such analysis, the authors develop GraphAdapter, which aims to effectively integrate fused textual and visual structure knowledge using GCN. 2. The experiments are thorough and convincing. The authors perform extensive experiments on 11 few-shot benchmarks, utilizing various backbones such as ResNet-50, ResNet-101, ViT-B/32, and ViT-B/16, as detailed in both the main paper and the supplementary material. 
In addition, the authors have specifically explored the generalization capability of GraphAdapter on four benchmarks. These experiments convincingly demonstrate the effectiveness of GraphAdapter. 3. The paper is clearly written and easy to follow. The authors extensively elaborate on the details of GraphAdapter, particularly the establishment of the dual knowledge graph. 4. The supplementary material includes extensive implementation details and visualization results. Notably, the authors demonstrate the seamless integration of the proposed GraphAdapter with existing methods such as CaFo and TaskRes*, resulting in consistently improved performance. Weaknesses: I’m almost satisfied with this paper, with only a few minor concerns as follows. 1. The authors leverage GCN to integrate dual-modality structural knowledge. However, I am interested in understanding the performance of more advanced GNN mechanisms, such as GAT and GraphSAGE, in this context. 2. The authors are suggested to include an analysis of the time complexity, particularly regarding the construction of the textual and visual knowledge subgraphs. 3. Some related works [1, 2, 3] on KG embedding can be included. [1] Wu, Han, et al. "Hierarchical Relational Learning for Few-Shot Knowledge Graph Completion." The Eleventh International Conference on Learning Representations. 2022. [2] Bordes, Antoine, et al. "Translating embeddings for modeling multi-relational data." Advances in neural information processing systems 26 (2013). [3] Xiong, Wenhan, et al. "One-shot relational learning for knowledge graphs." arXiv preprint arXiv:1808.09040 (2018). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can GCN in GraphAdapter be replaced with other GNNs like GAT and GraphSAGE? 2. What is the time complexity associated with GraphAdapter? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations have been thoroughly discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your positive comments on our work, along with your constructive suggestions for its improvement. **Q1:** The authors leverage GCN to integrate dual-modality structural knowledge. However, I am interested in understanding the performance of more advanced GNN mechanisms, such as GAT and GraphSAGE, in this context. **A1:** Thanks for your valuable comments. To delve further into this intriguing question, we replaced the GCN in our GraphAdapter with GAT and GraphSAGE and compared them on the Caltech dataset using a 4-shot setting. As shown in the table below, utilizing a more advanced graph neural network (GNN) mechanism (such as GAT or GraphSAGE) results in a slight improvement in performance, but it also incurs more resource cost. We will add this analysis in the revision.

| Methods | GAT | GraphSAGE | GCN |
|:-:|:-:|:-:|:-:|
| Caltech | 91.21 | 91.19 | 90.97 |

**Q2:** The authors are suggested to include an analysis of the time complexity, particularly regarding the construction of the textual and visual knowledge subgraphs. **A2:** Thank you for your insightful suggestions. In response, we have incorporated an analysis of the time cost of our GraphAdapter in the table below. This includes the training time for each epoch, the inference time for a batch of 100 images, and the costs of constructing the textual and visual graphs. All timings were measured on the ImageNet dataset in the 16-shot setting, using a single NVIDIA GeForce 3090. As shown in the table below, the time required for training and inference with our GraphAdapter is clearly less than that of the typical prompt-based ETL method CoOp and the adapter-based work CLIP-Adapter, which is efficient. Moreover, the textual and visual graphs need only be constructed once at the beginning of the training process, so the construction of these two graphs is also fast and the cost is minimal. 
| Methods | Training time (one epoch) | Inference time | Textual Graph | Visual Graph | Performance |
|:-|:-:|:-:|:-:|:-:|:-:|
| CoOp | 40.91s | 119.64s | - | - | 62.95 |
| CLIP-Adapter | 45.71s | 275.22s | - | - | 63.59 |
| Ours | 23.28s | 4.57s | 6.63s | 14.34s | 65.70 |

**Q3:** Some related works [1, 2, 3] on KG embedding can be included. **A3:** Thank you for your valuable suggestions. We will incorporate descriptions of these papers in the "Graph Learning" section of our related work. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The response addressed most of my concerns. Assuming the response will be incorporated into the final manuscript, I raise my vote further. I strongly support acceptance of this paper. --- Reply to Comment 1.1.1: Title: Appreciation for your response Comment: Thank you for your response and strong support for our paper. Following your valuable suggestions, we assure you that the points raised in the response will be incorporated into the final manuscript. We are grateful for your encouragement, and your constructive comments have greatly helped us to further improve our work.
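The GCN propagation discussed in Q1/A1 above can be sketched as a single layer over the similarity graph. This is an illustrative sketch (hypothetical names and shapes, a simple row-normalized adjacency rather than the authors' exact formulation); note that the only learnable parameters sit in the weight matrix, whose size is independent of the number of classes — the property the rebuttals invoke for scalability.

```python
import numpy as np

def gcn_layer(adj, nodes, weight):
    """One graph-convolution step: row-normalize the similarity adjacency,
    aggregate neighbor features, then apply a learned projection."""
    deg = adj.sum(axis=1, keepdims=True)
    agg = (adj / np.clip(deg, 1e-8, None)) @ nodes  # neighbor aggregation
    return np.tanh(agg @ weight)                    # projection + nonlinearity

rng = np.random.default_rng(2)
A = np.abs(rng.normal(size=(4, 4)))  # cosine-similarity-style adjacency
Z = rng.normal(size=(4, 8))          # per-class node features
W = rng.normal(size=(8, 8))          # the only learnable parameters
out = gcn_layer(A, Z, W)
assert out.shape == (4, 8)
```

Swapping this aggregation for attention-weighted neighbors (GAT) or sampled-neighbor averaging (GraphSAGE) is what the A1 comparison above measures.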
Rebuttal 1: Rebuttal: **We thank all reviewers and area chairs for their great efforts and insightful comments!** These suggestions and questions are significantly beneficial to our paper. We believe we have addressed all the reviewers' concerns in the rebuttal. **If you have any new questions or concerns, please let us know.** We will try our best to address them. Thanks.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
STEVE-1: A Generative Model for Text-to-Behavior in Minecraft
Accept (spotlight)
Summary: This paper proposes a generative pretraining method to learn an instruction-following agent in Minecraft by fine-tuning the VPT agent. It first trains an image-goal-conditioned policy and then leverages the foundation model MineCLIP as the bridge to map language instructions into the image-goal space. Using this method, the paper claims that it can solve any short-horizon open-ended text and visual task in Minecraft. Strengths: (1) This paper explores the feasibility of jointly using foundation models (such as the vision-language model MineCLIP and the pre-trained policy model VPT) for solving decision-making problems and draws a preliminary conclusion. (2) Using the unCLIP approach to solve instruction-following decision-making problems is interesting. (3) This paper proposes to improve instruction sensitivity with the classifier-free guidance technique, which is interesting and reasonable. (4) The paper is well-written and easy to follow. Weaknesses: (1) The paper overstates its performance by claiming that "STEVE-1 can follow nearly **any short-horizon** open-ended text and visual task in Minecraft." To verify this, I downloaded the provided code and ran the attached checkpoints on some basic short-horizon tasks like "kill sheep", "kill cow", and "kill pig". Before testing these tasks, I had already summoned 10 "pigs", "cows", and "sheep" nearby. While I found that the agent actually killed some animals, it did not seem aware of which particular animal it was targeting. The behavior was more like "kill cow if there is a cow" rather than following the specific instruction. I tried several other prompt variants such as "hunt cow" and "hunt cow in Minecraft", but they were unsuccessful. Additionally, I tested STEVE-1 on some short-horizon crafting tasks with sufficient materials in the initial inventory, like "craft stick", "craft oak plank", and "craft torch". The agent merely opened its inventory and then acted randomly. 
Given these observations, I do not believe that STEVE-1 has solved all short-horizon tasks, as it cannot even differentiate between "cows" and "sheep". This could lead others to misconstrue the extent of the research in the Minecraft environment. (2) This paper lacks generalization experiments on unseen text instructions. As the primary aim of this method is to support open-ended text instructions in STEVE-1, it is crucial to demonstrate its performance on unseen text instructions. The authors collected approximately 10,000 instruction-trajectory pairs, with 2,000 of them hand-labeled and an additional 8,000 instructions generated by GPT. It remains unclear whether the tasks ("dig dirt", "build a tower", "make wooden planks") in the experiments were already included in these instructions. If so, the experiments may not be particularly convincing, as neural networks can easily map the text instructions to corresponding visual embeddings in the training set. That is, pre-training STEVE-1 on visual instructions already achieves this goal and the later generative model training may not be important, which is not impressive. So, I strongly recommend including generalization experiments if the paper wants to show support for open-ended text instructions. (3) The evaluation metric is insufficient. The MineCLIP evaluation metric is not convincingly effective at measuring whether a trajectory corresponds to a given task. This is because the MineCLIP latent space cannot distinguish some trajectories well. For instance, we found that it was unable to differentiate between two trajectories where one involved approaching a tree and the other involved moving away from it (MineCLIP was not sensitive to distance). To address this issue, I recommend that the paper incorporate a success-rate-based confusion matrix, in which the values in Figure 3 (b) are replaced with the task success rate, conditioned on the given instruction. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: As stated in the weakness part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The main limitations include (1) prompt design being too sensitive and (2) chain-of-thought prompting not being automated. The authors have adequately stated the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the great questions and for engaging deeply with the work. We’re glad that you found the approach interesting and reasonable, and the paper easy to follow. We address your questions and comments below. > The paper overstates its performance by claiming that "STEVE-1 can follow nearly **any short-horizon** open-ended text and visual task in Minecraft." > We agree that this overstates STEVE-1’s capabilities. Shortly after submitting we actually changed this language ourselves to say “follow a wide range of short-horizon text and visual instructions in Minecraft”. We hope you agree that this language more accurately describes the performance of STEVE-1. Regarding the specific examples listed (hitting a cow, crafting a stick), the behavior you noted seems about right for the weights that we uploaded to OpenReview. The phenomenon of the agent performing a seemingly related task rather than the intended one is something we noticed too and is related to the concept of goal misgeneralization (Langosco et al. 2022). Generalization can be helpful when the task we assign the agent is impossible to achieve from the current state and the agent instead performs a closely related action, but harmful when the task is achievable. We note two things: first, the powerful generalization ability of STEVE-1 probably comes from the MineCLIP embeddings and it especially improves the ability of STEVE-1 to follow visual instructions when the exact items or blocks nearby are not available in the current environment, which is an extremely common scenario. Second, we notice that the tendency of the agent to misgeneralize decreases with scale. For example, with a model trained on fewer data we find that asking the agent to look up and punch a tree to get a log often resulted in the agent looking in the air and punching nothing; training the model on more data results in the agent first walking over to a nearby tree and looking up to get a log. 
We are including (via the AC) an updated codebase with the ability to run the agent in an interactive mode where new instructions can be presented to the agent in real-time. This updated code also includes a new snapshot with updated hyperparameters (see general response) that shows a modest improvement in performance. We will add a section on goal misgeneralization and its relationship with scaling to the appendix of our updated paper, and we hope that future research can further investigate these issues and how to improve misgeneralization. We stress that for a wide range of tasks, including but not limited to the tasks included in our evaluations, STEVE-1 shows strong performance on both the original weights and the updated weights. > The [MineCLIP] evaluation metric is insufficient. > Please see the general response section titled “Reviewer 7QfK raised a concern regarding the strength of the MineCLIP evaluation.” > This paper lacks generalization experiments on unseen text instructions. > Thanks for pointing this out; we strongly agree that generalization experiments would improve the paper. During the rebuttal period we have performed a set of simple generalization experiments which we hope helps to answer these important questions. **Instruction Training Set Contamination:** Please refer to Table A in our rebuttal PDF. Among our evaluation instructions, the bolded instructions show up in the instruction-trajectory dataset. While some instructions do show up, most of the instructions do not show up in our training set (verbatim). **Training Set Decontamination:** To measure the effect on performance of removing a concept from the training set, we ran an experiment where we removed every instruction with the words “dirt” or “dig” in them and retrained the VAE model. This corresponds to around 10% of the instructions. We found that even without training on the concept of dirt or digging at all, STEVE-1 can still be instructed to dig holes and get dirt. 
This demonstrates clearly that STEVE-1 can generalize to unseen text instructions (see Figure B in rebuttal PDF) — likely because most of the text-understanding comes from the pretrained MineCLIP model which was trained on a highly diverse dataset of YouTube videos and captions. The prior VAE only needs to learn a simple mapping between the text and visual MineCLIP embeddings. Note that there is a slight decrease in performance across all tasks likely due to the smaller VAE training set (~10% less). The instruction-following capability of STEVE-1 is shared between: the policy, which learns to follow instructions in the visual MineCLIP embedding space; the MineCLIP text-encoder, which is trained to align well with the visual embeddings and performs most of the text-understanding; and our prior VAE model, which learns a simple function to translate between text and visual embeddings. While this means that the VAE doesn’t hold the most important role in giving instruction-following capabilities, we disagree with the reviewer that this is unimpressive. Contrary to this viewpoint, it is precisely our modeling setup which lets us fully exploit pretrained models such as MineCLIP to gain impressive language understanding without relying on having our own large datasets or compute. We conclude by stressing that while STEVE-1 is not perfect, it represents a large step towards a general recipe for creating generalist agents by building on pretrained models. As future generative visual-language/action models get more powerful and general, we think that approaches like STEVE-1, which can exploit and steer their capabilities, will provide great value and enable exciting future research. Thanks again for the thorough review. We hope that some of the additional experiments and explanations we have conducted in response to the issues that you raised go a long way towards improving the paper and changing your opinion. 
Please don’t hesitate to ask if you have any additional questions or require clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. I appreciate the authors' honesty and the additional experiments addressing my concerns. I'm going to raise the final rating. I strongly recommend that the final revision of this paper include a more detailed limitations section (illustrating what STEVE-1 is not capable of and why).
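The training-set decontamination step described in this rebuttal — dropping every instruction containing "dirt" or "dig" before retraining the prior VAE — can be sketched as a simple filter. This is an illustrative reconstruction, not the authors' actual code; the function name and the word list are assumptions based on the rebuttal's description.

```python
# Illustrative sketch of the decontamination experiment: remove every
# instruction mentioning a banned concept word (case-insensitive, substring
# match, as the rebuttal describes). Not the authors' actual implementation.
def decontaminate(instructions, banned_words=("dirt", "dig")):
    """Keep only instructions that mention none of the banned words."""
    return [
        text for text in instructions
        if not any(word in text.lower() for word in banned_words)
    ]

sample = ["get dirt", "chop a tree", "dig a hole", "collect seeds"]
print(decontaminate(sample))  # ['chop a tree', 'collect seeds']
```

The rebuttal reports this removed roughly 10% of the instructions; the VAE was then retrained on the filtered set.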
Summary: This paper proposes an instruction-tuned model for Minecraft, which turns a previous RL model like VPT into a goal-conditioned model. The experiments show that the proposed method can follow nearly any short-horizon open-ended text and visual task in Minecraft. Strengths: 1. This paper proposes a novel technique for instruction-tuning a video-game RL agent, inspired by the previous unCLIP method from the text-to-image generation field. This idea is interesting, and the method can turn an RL agent into a goal-conditioned one, which shows great potential for designing a more general RL agent. 2. The experiment section shows that the tuned agent can follow short-horizon goal instructions very well. When given the appropriate textual instruction, the proposed method collects 66x more dirt, 4.5x more wood, 28x more seeds, and travels 3x further than the unconditional agent. Weaknesses: This method is not applicable to long-horizon tasks in the open world. For example, in Minecraft, if the goal is obtaining a diamond, the instruction may be too difficult for the proposed method since it is not trained on long-horizon trajectories. Given this, it is unclear whether this method can be applied to challenging open-world tasks that may involve extremely long planning sequences. In this paper, no complicated tasks like obtaining a diamond are studied. Thus I am concerned about the scalability of the proposed method to much more complicated tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am curious about how the prompts were designed in this paper. The details of the prompts should be explained in the main text of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed in the paper. There is no discussion of negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. We’re glad to hear that you found our work interesting and that you see it as a potential method for designing more general sequential decision-making agents. Please see below for responses to your comments and questions. > This method is not applicable for long-horizon tasks in the open world. For example, in Minecraft, if the goal is obtaining a diamond, the instruction may be too difficult for the proposed method since it is not trained for long-horizon trajectories. … In this paper, no complicated tasks like obtaining diamond are studied. Thus I am concerned the scalability of the proposed method to much complicated tasks. > We agree that the STEVE-1 agent is currently not capable of accomplishing long-horizon tasks like obtaining a diamond. This is a limitation of our work and something that we think is an interesting direction for future work, as solving long-horizon tasks while taking actions using low-level mouse/keyboard controls is a very challenging and exciting research direction. Based on our experience, there are a few things that we think could help improve long-horizon performance: **1) Scaling:** Our scaling results indicate that as we train the agent on more data, more tasks become achievable. So scaling up the amount of data could enable the agent to complete longer-horizon tasks. **2) Finetuning with RL:** We also think that finetuning the STEVE-1 pre-trained agent with RL is another very interesting avenue. For example, VPT finetuned their pretrained agent with RL to learn to mine diamonds. So we think it’s likely that RL finetuning on top of the pretrained STEVE-1 could enhance long-horizon task completion abilities. **3) Using LLMs or VLMs:** Using LLMs or Visual-Language Models (VLMs) to automatically provide prompt chains to the STEVE-1 agent could be an effective way to improve long-horizon performance. 
We started to investigate this possibility with our prompt chaining experiments and we think that this is a very exciting direction for follow-up work. > I am curious about how is the prompt designed in this paper? The detail of the prompt should be explained in the main text of the paper. > Thanks for asking this question. We answer both how the prompts were designed and what the exact prompts are. - **Exact prompts:** Table 3 in the appendix includes a list of the full text prompts used for each task, but we agree that the exact prompts should be made more clear in the main text of the paper. We have included a modified version of the table that hopefully provides more clarity in exactly which text prompt corresponds to which evaluated task (see below). We will include this figure in future versions of the paper. | Figures 3 (Right) & 11 Label | Figures 13 & 14 Label | Text Prompt | | --- | --- | --- | | dig | dig as far as possible | dig as far as possible | | dirt | get dirt | get dirt | | sky | look at the sky | look at the sky | | leaves | break leaves | break leaves | | wood | chop a tree | chop a tree | | seeds | collect seeds | collect seeds | | flower | break a flower | break a flower | | explore | go explore | go explore | | swim | go swimming | go swimming | | underwater | go underwater | go underwater | | inventory | open inventory | open inventory | | dirt... | get dirt ... | get dirt, dig hole, dig dirt, gather a ton of dirt, collect dirt | | wood... | chop down the tree ... | chop down the tree, gather wood, pick up wood, chop it down, break tree | | seeds... | break tall grass ... | break tall grass, break grass, collect seeds, punch the ground, run around in circles getting seeds from bushes | - **Prompt design:** In our experiments we used both short and longer prompts. The short prompts are either taken from previous literature (e.g., the language-conditioning experiment in the VPT appendix) or they were simply the first thing we tried. 
The longer prompts were created by taking inspiration from the prompt engineering methods used with text-to-image models such as Stable Diffusion [47]. To design these prompts, we simply strung together a lot of terms related to our task in order to increase the specificity of the prompts. We were excited to discover that this style of prompt design, inspired by the prompt engineering community, works well in STEVE-1. Thank you again for the positive review. We are excited about the simplicity, scalability, and strong performance of STEVE-1 and the many future research opportunities that it unlocks. If you have any additional questions or need clarification, please don’t hesitate to ask. --- Rebuttal Comment 1.1: Title: I will keep my rating. Comment: Thanks to the authors for their detailed feedback. After reading the rebuttal, I still have concerns about the long-horizon ability, but I do agree with some of the possible solutions proposed by the authors.
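The prompt chaining idea discussed in this rebuttal amounts to switching the active goal embedding after a fixed number of frames. Below is a minimal sketch of that control loop; `encode_prompt` and `agent_step` are hypothetical stand-ins for the real MineCLIP-based prior and the STEVE-1 policy, which are not reproduced here.

```python
# Minimal sketch of prompt chaining: run the agent with a sequence of text
# prompts, switching the conditioning goal after a fixed number of frames.
# `encode_prompt` and `agent_step` are placeholders, not the real models.
def run_prompt_chain(prompts, frames_per_prompt, encode_prompt, agent_step):
    history = []
    for prompt in prompts:
        goal = encode_prompt(prompt)          # text -> goal embedding
        for _ in range(frames_per_prompt):
            history.append(agent_step(goal))  # policy step conditioned on goal
    return history

# Toy usage with stub functions standing in for the real models:
trace = run_prompt_chain(
    ["chop a tree", "craft wooden planks"],
    frames_per_prompt=3,
    encode_prompt=lambda p: p.upper(),
    agent_step=lambda goal: goal,
)
print(trace)
```

In practice the switch points could also be chosen by an LLM or VLM planner, as the rebuttal suggests, rather than a fixed frame budget.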
Summary: The paper introduces STEVE-1, a sequential decision-making agent designed to follow textual instructions and accomplish goals in the Minecraft environment. The authors utilize two pre-trained models, VPT (Video PreTraining) and MineCLIP, to facilitate this process. VPT is a transformer model trained to predict action sequences from aligned video sequences, while MineCLIP aligns consecutive video timesteps with corresponding transcripts in Minecraft. To finetune VPT, the authors employ self-supervised behavioral cloning conditioned on latent visual goals. They generate goal-conditioned data by randomly selecting timesteps from episodes and using hindsight relabeling to set intermediate goals. MineCLIP is used to map textual goals to visual goal embeddings in the unCLIP-based approach. This mapping is achieved through a conditional Variational Autoencoder (VAE) with Gaussian prior and posterior, conditioned on MineCLIP text representations. The training dataset comprises Minecraft gameplay data, combined with the OpenAI contractor dataset, and additional data generated using VPT. The authors curate 2,000 instruction-labeled trajectory segments, each consisting of 16 frames, and augment this dataset by identifying similar gameplay segments. Additionally, 8,000 additional instructions are generated using GPT-3.5 Turbo. Classifier-free guidance is employed to balance the logits between unconditional and conditional behavior. The results demonstrate that STEVE-1 successfully solves various short-horizon open-ended text and visual tasks, with a training cost of only $60. The performance of the agent is evaluated using programmatic evaluation and MineCLIP evaluation, revealing significant improvements with goal conditioning, especially visual. The authors also observe that prompt chaining, as opposed to direct prompting, proves advantageous for complex tasks like building towers or creating planks. 
A thorough ablation study is conducted on the classifier-free guidance hyperparameter, pretrained VPT weights, and prompt engineering, providing valuable insights into the optimal settings for these components. Overall, the paper showcases the effectiveness of STEVE-1 in achieving goals based on textual instructions in the Minecraft environment. The combination of VPT and MineCLIP, along with the conditioning techniques and ablation studies, contribute to a comprehensive understanding of the agent's performance and its potential applications. Strengths: - This unique adaptation - STEVE-1 - of the unCLIP method demonstrates the versatility and effectiveness of the approach in the context of sequential decision making in Minecraft. - The experiments and analysis conducted in the paper are highly novel and insightful. The authors provide valuable findings, such as the benefits of prompt chaining, the potential for scaling to improve certain metrics, and the limitations observed in complex tasks. These insights enhance our understanding of the proposed approach and its implications. - The paper demonstrates that prompt chaining is effective in accomplishing complex tasks, such as building towers or making wooden planks. The results show that success rates and programmatic evaluation metrics plateau after a certain number of frames, highlighting the potential and limitations of the approach. Additionally, the comparison to direct prompting reveals the superiority of prompt chaining in achieving satisfactory performance. - The thorough ablation study conducted on various components, including the classifier-free guidance hyperparameter, pretrained VPT weights, and prompt engineering, provides valuable insights into the optimal settings for these elements. - The inclusion of additional ablations, such as goal chunk sizes, VAE variants, and text augmentation, in the appendix further enhances reproducibility and facilitates a deeper understanding of the approach. 
Weaknesses: - The paper would benefit from clearer explanations regarding certain aspects, such as the distinction between packed hindsight relabeling and hindsight relabeling. Additionally, providing a more detailed explanation of the interpretation of the heatmap, specifically the meaning of the 1/0 values, would enhance understanding. - Some questions (see section) during the review could be addressed in the main work or in an appendix to provide a more comprehensive understanding of the approach. - The proposed approach combines existing ideas from well-established approaches, such as CVAE, MineCLIP, and VPT, which may limit the novelty of the work. However, the extensive experiments and detailed study conducted in the paper contribute to its significance and overall value. - The scalability of the approach to other environments or datasets is not discussed or addressed in the paper. Considering the dependency on pretrained video-text aligned models and large-scale pretrained transformers like VPT, the applicability of the approach to environments without such models is unclear. Addressing this limitation and discussing potential scalability issues would enhance the practical relevance and broader applicability of the approach. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can you provide insights into why random timesteps from episodes are selected as goals? Are there alternative approaches that could be considered, and why is this method deemed the most effective? - Does randomly resetting the agent's memory and turning the agent to a random direction during data generation provide necessary benefits? What would be the implications if this step were omitted? - How does the utilization of pre-trained VPT contribute to the approach when the input is modified with conditional goal embeddings? Can you elaborate on the specific advantages and improvements gained from incorporating VPT in this manner? 
- Could you explain the process of generating additional text instructions based on GPT-3.5? How are these instructions generated and integrated into the training process? - It is not clear why the graph from Baker et al. is not included in the text-conditioning results. Can you provide clarification or discuss the reasoning behind its exclusion? - In Figure 4 (right), what is "relevant" and "irrelevant" in the graph? Clarifying the interpretation of these components would improve the understanding of the presented results. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - The authors extensively discuss the limitations of the approach, including challenges with multiple steps of reasoning, prompt engineering, and potential negative societal impacts. While further discussion on these aspects could be beneficial, there are no explicit omissions or significant limitations that require specific mention. The paper adequately addresses and acknowledges its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive questions and feedback, and for recognizing the versatility, novelty, and significance of our work on STEVE-1. Please see below for responses to your comments and questions: > Clearer explanation on the distinction between packed hindsight relabeling and hindsight relabeling. > Hindsight relabelling was introduced in the Hindsight Experience Replay (HER) work [4]. With this technique, trajectories are relabelled with imagined goals, and these goals can be chosen using different strategies. *Packed hindsight relabeling* is our specific implementation of hindsight relabeling (see Algorithm 1 in Appendix), which “**packs” multiple relabeled goal sequences into a single sequence**. Specifically, we split a trajectory into multiple chunks and pick the last timestep in each chunk as the relabelled goal for that chunk (see Figure 2). > A more detailed explanation of the interpretation of the heatmap, specifically the meaning of the 1/0 values > Thanks for raising this. The ideal performance with the MineCLIP heatmap would be where the minimum values of each row lie on the diagonal. That is, the agent performs a specific task best when it is asked to perform that specific task. The numbers (1/0) on the diagonal correspond to the ranking that the diagonal has in its row. A 1 represents that the diagonal is the second lowest value in the row and 0 means it is the lowest. > The scalability of the approach to other environments or datasets is not discussed or addressed in the paper. > We designed STEVE-1 for Minecraft due to the availability of two key ingredients: 1) a strong behavioral prior (VPT), and 2) a powerful visual-language model which maps text and video to a joint embedding space (MineCLIP). However, the method used to create STEVE-1 is not specific to the Minecraft domain. 
Given the rapid development of generative models, we expect that similar models to VPT and MineCLIP will become available in many other domains. As these models become available, future work could investigate the applicability of the STEVE-1 approach to these other domains. Thanks for raising this key point, we will include a discussion of this in the final version of our paper. > Can you provide insights into why random timesteps from episodes are selected as goals? > We chose to randomly select future timesteps from episodes as goals primarily due to the simplicity of the approach, but also to ensure a diverse and unbiased coverage of potential goals achievable within the short horizon. By avoiding a specific heuristic strategy, we aim to prevent any potential biases in goal selection that might influence the agent's training, thereby promoting a more generalized performance. Future works can investigate the effects of alternative approaches that filter for semantically interesting timesteps as goals. > Does randomly resetting the agent's memory and turning the agent to a random direction during data generation provide necessary benefits? > Thanks for asking this question. These implementation details of the VPT dataset were chosen as heuristics to increase the diversity of the generated trajectories. Unfortunately, it’s very compute-intensive to get concrete answers about the effects that these decisions have. We will consider running additional experiments to investigate this question in the future. > How does the utilization of pre-trained VPT contribute to the approach when the input is modified with conditional goal embeddings? Can you elaborate on the specific advantages and improvements gained from incorporating VPT in this manner? > Great question. Yes, due to how we modify the inputs to the transformer, the input distribution is different from what VPT expects. 
However, since we finetune the modified VPT architecture on our gameplay dataset with relabelled goal embeddings, the model learns to adapt to this new distribution. Empirically, in Figure 5 (left) we found that using the VPT pretrained weights improves performance as compared to finetuning from scratch. Note that this conditioning method via a bias to the transformer input was also explored in Baker et al.’s Appendix I. However, future work can study other mechanisms for conditioning. > Could you explain the process of generating additional text instructions based on GPT-3.5? How are these instructions generated and integrated into the training process? > We’ll update the appendix to include the GPT-3.5-turbo system prompt and user query prompt. Following the creation of approximately 8,000 additional examples from GPT, we further find potential matching clips from the gameplay dataset as described in Appendix D.2. Specifically, for each of our text labels, we find the top 5 closest (cosine similarity) timesteps in 2000 episodes from our gameplay dataset using MineCLIP’s similarity score. These 50,000 automatically-mined text-video pairs are added to the original 2,000 hand-labeled examples to form the final dataset used for training the prior. We will add an algorithm box to make this more precise in the final version. > It is not clear why the graph from Baker et al. is not included in the text-conditioning results. > We showed this comparison in Appendix E.2 (Figures 13 & 14) and we will add the results from Baker et al. to Figure 3 (left). The caveat with this comparison is that there are some differences in the experimental setup: (1) we only used half the episode length, and (2) it’s unclear exactly how Baker et al. defined their item count metric. Despite being evaluated for half the episode length, STEVE-1 achieves much stronger steerability performance. > In Figure 4 (right), what is "relevant" and "irrelevant" in the graph? 
> “Relevant” refers to the single relevant prompt for the task (”collect seeds” for task “Seeds Collected”, etc.) and “irrelevant” corresponds to the performance averaged over 10 other irrelevant prompts. We will clarify in the final version.
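The packed hindsight relabeling described in this rebuttal — split a trajectory into chunks and relabel each chunk with its final observation as the goal — can be sketched as follows. This is a simplified reconstruction, not the paper's Algorithm 1: chunk sizes here are fixed for clarity, and the real goals are MineCLIP visual embeddings rather than raw observations.

```python
# Simplified sketch of packed hindsight relabeling: every timestep in a chunk
# is paired with the chunk's last observation as its (relabelled) goal, and
# all chunks stay packed in one training sequence. Fixed-size chunks are an
# assumption here; the paper's algorithm samples chunk boundaries.
def packed_hindsight_relabel(observations, chunk_size):
    pairs = []
    for start in range(0, len(observations), chunk_size):
        chunk = observations[start:start + chunk_size]
        goal = chunk[-1]                      # goal = last obs of the chunk
        pairs.extend((obs, goal) for obs in chunk)
    return pairs

pairs = packed_hindsight_relabel(["o1", "o2", "o3", "o4", "o5"], chunk_size=2)
# pairs: [('o1', 'o2'), ('o2', 'o2'), ('o3', 'o4'), ('o4', 'o4'), ('o5', 'o5')]
```

Packing multiple relabeled chunks into one sequence is what distinguishes this from vanilla hindsight relabeling, which would emit each (trajectory, goal) pair as a separate training sequence.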
Summary: The paper presents a method to create instruction-following agents in Minecraft. It starts by collecting trajectories using OpenAI’s VPT Minecraft agents, which cannot be controlled through instructions. Then some intermediate visual observations are randomly selected as visual goals. These visual goals are then encoded by the visual encoder of MineCLIP, a pretrained text-video contrastive foundation model in the Minecraft domain. VPT is then fine-tuned to achieve these visual goals (specified as MineCLIP visual embeddings) using the same trajectories. Finally, a CVAE is trained to produce textual embeddings that match the visual goal embeddings, so the agents can be piloted by natural language instructions. Experiments highlight the performance of the presented method on some entry-level tasks including dirt, log, and seed collection. Additional results on prompt chaining, scaling, prompt engineering, and other hyperparameters confirm the effectiveness of the proposed approach. Strengths: +The paper is overall clear and well-written. The presentation should be very friendly to readers without an RL or LfD background who are interested in foundation models in general. The research problem here is relevant to the scope of NeurIPS, and its proximity to many emerging topics including open-endedness, generalist agents, and large models should be of interest to a large audience at this conference. +I find the approach presented here technically sound, with good results. Although the original VPT agent did offer some preliminary results on instruction following, not enough details were provided, so the method here seems to make a solid contribution toward building the first open-ended language-piloted agents in Minecraft. Some tricks like baseline subtraction seem helpful. Thanks for bringing this to the attention of the community. +The authors did a good job highlighting what does and does not work in their method. 
To list a few that I find most interesting: - MineCLIP evaluation, which helps explain why their method works better on short-horizon tasks. - Prompt chaining and metrics over time, which clearly demonstrate how the agent progresses under different prompt conditions. - Prompt engineering, which showcases how different instructions can have a significant impact on the agent. Weaknesses: At this point, I don’t have any major concerns, but here are a few suggestions: -Baselines: in the main paper, I couldn’t find any baselines or ablations other than the main "STEVE-1" model and VPT (without instruction-following). I agree that there might not be many proper counterparts available, but some single-task RL and imitation learning baselines, e.g., [10], could still give a better sense of the actual gain the proposed model has in Minecraft. Moreover, the baseline subtraction trick should also be ablated in the main paper with more evidence on other tasks; it seems to be a working technique, but it must be better validated. -Error bars: In Figure 6, I am not sure if the numbers in parentheses indicate the error range. If they do not, error bars are needed for these results. -Scaling: please elaborate more on “we see evidence of tasks that do not require much data for STEVE-1 to learn, tasks that steadily get more reliable as the agent is trained longer, and tasks where capability suddenly spikes after the agent reaches some threshold.” If not all tasks can benefit from scaling, why conclude “Put together, this suggests that further scaling would likely significantly improve the agent”? -After reading the paper, it’s still unclear to me how the $60 budget is allocated. Please provide a detailed explanation to justify this point. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See [Weaknesses] Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: One thing that could be missing in the limitations statement is how well the cVAE model performs at producing goal embeddings and whether it could be a bottleneck to overall performance on more challenging tasks like long-horizon planning, etc. The authors seem to blame MineCLIP for this, but since the original unCLIP paper uses diffusion instead of a cVAE, more comparisons might be necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and kind words. We are so glad that you found our work an approachable and valuable contribution to the community. > Baselines; in the main paper, I couldn’t find any baselines or ablations other than the main ”Steve-1” model and VPT(without instruction-following). I agree that there might not be many proper counterparts available but some single-task RL and imitation learning baseline, ex. [10] could still help with better sense on the actual gain the proposed model has in Minecraft. > The two most interesting baselines from our perspective are VPT [6] with text-conditioning (Appendix I in [6]) and the multi-task RL experiment done in the MineDojo paper [18]. As you mentioned, the VPT paper did not include many details and did not perform well. MineDojo evaluates on slightly different tasks and the authors appear to find very limited zero-shot generalization ability after training on 12 tasks. We’ll explicitly mention these two baselines in a new section of the paper and include a copy of their results. Note that Appendix E.2 does include a comparison to the VPT with text-conditioning results from Appendix I in [6], but we will include a more clear comparison in the main paper for future versions. Also, we have included a number of ablations in Section 4 and Appendix B that show the importance of our design choices, including experiments on the use of classifier-free guidance, pretraining, chunk size for packed hindsight relabeling, and VAE design choices. Thanks for pointing out [10]. We’ll explore it further. > Moreover, the baseline subtraction trick should also be ablated in the main paper with more evidences on other tasks, it seems to be a working technique, but it must be better validated. > We have updated the baseline subtraction trick figure (classifier-free guidance; Figure 5) to include our two other programmatic tasks of “travel distance” and “seeds collected”. 
We find that it does not make much of a difference for travel distance and improves the seed collection task. Please find the updated Figure C in our rebuttal PDF. Also, note that Appendix B.1 shows an ablation on the effect of using classifier-free guidance during finetuning at all, which is different from testing the effect of the conditional scale parameter at inference time (Figure C in the rebuttal PDF). > Error bar; In Figure 6, I am not sure if the numbers in parentheses indicate error range. If they are not, error bar is needed for these results. > These are 95% confidence intervals. We have updated the figure caption to make note of this. > Scaling: please elaborate more on “we see evidence of tasks that do not require much data for STEVE-1 to learn, tasks that steadily get more reliable as the agent is trained longer, and tasks where capability suddenly spikes after the agent reaches some threshold.” If not all tasks can benefit from scaling, why “Put together, this suggests that further scaling would likely significantly improve the agent” > Thanks for the question. At a high level, we mean to say that since we see non-decreasing performance with scale (travel distance and seeds stay roughly constant, dirt and logs increase), we posit that further scaling will help improve performance. We suspect that for tasks which didn’t benefit from scaling in our experiments, we are either close to optimal performance or we didn’t reach the critical amount of scale required to see strong performance. There is evidence of this type of sudden emergence in capability in both our dirt and logs tasks as well as in the literature [58]. We will improve the scaling section in the paper to clarify. > After reading the paper, it’s still unclear to me how the $60 budget is allocated. Please provide detailed explanation to justify this point. 
> The $60 cost we reported in the paper corresponds to the cost of renting an 8xA10G node using spot instances on AWS for 12 hours at the spot instance prices at the time. We are considering translating this into more standard numbers using on-demand pricing for A100s for the final paper release, since we realized that spot prices can fluctuate and be misleading. > One thing that could be missing in the limitations statement is how well the cVAE model performs at producing goal embeddings and whether it could be a bottleneck to the overall performance on more challenging tasks like long-horizon planning > Good question. We don’t believe that the cVAE is a bottleneck to long-term planning, since using STEVE-1 with visual instructions is equivalent to bypassing the cVAE entirely, and this doesn’t improve long-term planning abilities. However, a better cVAE model will likely improve text performance, and we will add a discussion about this to the limitations section. Thanks for the suggestion. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. Some of my concerns have been addressed. But after reading the other reviews as well, some additional questions popped up. Here are some follow-ups: -It would be much better if you could pull some results from [10] and compare them with yours in the current submission. It seems to be a very recent and close counterpart (goal-conditioned control in Minecraft), and these numbers will help readers understand how open-world control can be improved with your techniques, especially with modern neural network architectures. Some additional discussion might be needed as well. -It's interesting to learn that for long-term tasks with STEVE-1, we may have to use visual instructions instead. Why is this the case? I suppose specifying a long-term task with text should be simpler, no? Also, what do you think is the bottleneck of STEVE-1 in solving long-term tasks? 
I understand you've made it clear that the capacity of STEVE-1 is mostly short-term, but I'm sure some analysis & prospects about solving long-term goals (this could be relevant to the prompt-chaining experiments in the main paper as well) in the limitations section would be of interest to reviewers and potential future readers. -I've read the other reviews and I agree with Reviewer 7QfK that saying "STEVE-1 can follow nearly any short-horizon open-ended text and visual task in Minecraft" is improper given the current state of evaluation. I don't think "a wide range of tasks" is appropriate either. My suggestion is to make it clear what kinds of tasks STEVE-1 is able to robustly accomplish by adding a table to the main paper (even better, putting it here in the comment section for the reviewers to examine, as your code & model are also provided) and to change your statement to something like "STEVE-1 is able to robustly complete <number of tasks> tasks in Minecraft". Please avoid using vague and non-academic terms. --- Reply to Comment 1.1.1: Title: Response to Reviewer 3aRk [1/2] Comment: > It would be much better if you could pull some results from [10] and compare them with yours in the current submission. It seems to be a very recent and close counterpart (goal-conditioned control in Minecraft), and these numbers will help readers understand how open-world control can be improved with your techniques, especially with modern neural network architectures. Some additional discussion might be needed as well. > After reading through [10] carefully, we agree that it is an important related work and we thank you for bringing it to our attention. Both of our works focus on goal-conditioned control in Minecraft, with the major difference being that [10] trains on a fixed set of goals while STEVE-1 uses unCLIP, hindsight relabeling, and MineCLIP to learn goal-reaching behavior from a large dataset in a self-supervised way. 
We believe that many of the techniques in our work improve open-world control with modern neural network architectures, including our efficient packed hindsight relabeling implementation, classifier-free guidance, effectively using the knowledge in pretrained models with unCLIP, and our overall scalable recipe that learns to reach goals in a self-supervised way. We will be sure to include a discussion of this and the details of [10] (including its different goal-conditioning architecture) in the final version of the paper. Regarding comparing numbers, the results in [10] include the success rate for single-task training for “chopping trees” (50%), “combat cow” (58%), and “combat sheep” (60%). Since the task (both the world and the reward function), action space, observation space, and training data are very different, comparing to our own success rate numbers in Figure A of the rebuttal PDF is likely to mislead. Regardless, for most of the above-mentioned techniques, we include ablation studies that show the improvements they can bring to open-world control. > It's interesting to learn that for long-term tasks with STEVE-1, we may have to use visual instructions instead. Why is this the case? I suppose specifying a long-term task with text should be simpler, no? > We are afraid there may have been a misunderstanding, since we do not believe that there is a difference between visual and text instructions for long-term planning. We agree that specifying a long-term task with text should be simpler. > What do you think is the bottleneck of STEVE-1 in solving long-term tasks? I understand you've made it clear that the capacity of STEVE-1 is mostly short-term, but I'm sure some analysis & prospects about solving long-term goals (this could be relevant to the prompt-chaining experiments in the main paper as well) in the limitations section would be of interest to reviewers and potential future readers. 
> We think one bottleneck of solving long-term tasks is that during our packed hindsight relabeling, we limit the hindsight goals to at most 200 timesteps in the future (10 seconds). Due to this, tasks that require more than 200 timesteps to complete are technically out-of-distribution for STEVE-1. We experimented with using longer goal lengths (denoted ‘chunk size’) in Figure 10 in the appendix and found that the performance on all of our evaluation tasks tends to decrease if we increase this hyperparameter too much. We suspect that while an increased chunk size may be able to improve long-horizon performance, it also increases noise and comes at the cost of reducing performance on short-horizon goals. It is an important avenue for future work to investigate whether it is possible to achieve a better tradeoff. Improved performance on long-horizon tasks is one of the most important follow-up directions to improve upon STEVE-1. We point you to our response to reviewer vziy, where we enumerate a few other approaches that may improve long-horizon performance in future work: **1) Scaling:** Our scaling results indicate that as we train the agent on more data, more tasks become achievable. So scaling up the amount of data could enable the agent to complete longer-horizon tasks. **2) Finetuning with RL:** We also think that finetuning the STEVE-1 pretrained agent with RL is another very interesting avenue. For example, VPT finetuned their pretrained agent with RL to learn to mine diamonds. So we think it’s likely that RL finetuning on top of the pretrained STEVE-1 could enhance long-horizon task completion abilities. **3) Using LLMs or VLMs:** Using LLMs or Visual-Language Models (VLMs) to automatically provide prompt chains to the STEVE-1 agent could be an effective way to improve long-horizon performance. We started to investigate this possibility with our prompt chaining experiments and we think that this is a very exciting direction for follow-up work.
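The packed hindsight relabeling procedure discussed in the reply above (hindsight goals drawn from at most 200 future timesteps, with goals tiling the trajectory back to back) can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: `embed_fn` stands in for the MineCLIP visual encoder, and observations are treated as opaque items.

```python
import random

CHUNK_SIZE = 200  # max hindsight-goal horizon (10 seconds), per the rebuttal

def hindsight_relabel(trajectory, embed_fn, rng=random):
    """Pair each timestep with a goal embedding taken from a randomly
    chosen future observation at most CHUNK_SIZE steps ahead.

    `trajectory` is a list of observations; `embed_fn` is a placeholder
    for the visual goal encoder.  Returns (observation, goal) pairs.
    """
    pairs = []
    t = 0
    while t < len(trajectory) - 1:
        # Sample a hindsight goal from the near future of timestep t.
        horizon = min(CHUNK_SIZE, len(trajectory) - 1 - t)
        goal_t = t + rng.randint(1, horizon)
        goal = embed_fn(trajectory[goal_t])
        # Every step up to the goal is relabeled with that goal.
        for step in range(t, goal_t):
            pairs.append((trajectory[step], goal))
        t = goal_t  # "packed": the next goal segment starts where this one ends
    return pairs
```

With this layout, tasks needing more than `CHUNK_SIZE` steps are never seen as a single goal segment during training, which matches the rebuttal's point that such tasks are effectively out-of-distribution.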
Rebuttal 1: Rebuttal: Thanks to all the reviewers for your time and effort during the review process. We appreciate that you found our work well written, insightful, and novel, and we’re glad that there is excitement about our approach to creating an open-ended agent by building on pretrained models. We have responded to each reviewer individually, uploaded a rebuttal PDF, updated our code (shared with the AC), and collected the below response to general concerns. If you find our answers responsive to your concerns, we would be grateful if you considered increasing your score, and if you have additional questions, we’re happy to engage further. > Reviewer 7QfK raised a concern regarding the strength of the MineCLIP evaluation. > While we agree that MineCLIP is imperfect, our experimentation found that it performs impressively at understanding a wide variety of texts and relating them to Minecraft videos. Following the suggestion of reviewer 7Qfk, we have created a success-rate-based confusion matrix in order to demonstrate how MineCLIP evaluation aligns well with our human intuitions about task completion. This matrix is constructed using the exact same episodes used in the MineCLIP evaluation (Figure 3), but rather than use MineCLIP to evaluate task completion, we manually inspect each episode. We find that the overall pattern remains and STEVE-1 completes the instructed task the vast majority of the time (see the figure for the numbers). The success-rate-based matrix can be found in Figure A in the rebuttal PDF and we are considering replacing the MineCLIP evaluation matrix in the main paper with this new figure to increase clarity. Thanks for the suggestion! > Reviewer 7QfK requested to see generalization experiments to unseen instructions. 
> We have included a new table showing the degree to which evaluation prompts appear in the training set for the VAE and we have also conducted a decontamination experiment where all instructions containing the words “dirt” or “dig” were removed from the training dataset. We find that the new model can still achieve tasks like digging and getting dirt even with this new setup, suggesting that the generalization capability of our agent to new instructions is strong and that most of the language-understanding capability comes from the pretrained MineCLIP model. We will include a discussion of these results in the updated version of the paper. Please see the response to reviewer 7QfK for more details. > We have committed to making the following changes to enhance the paper in response to helpful comments by the reviewers. Many of these points are elaborated upon in responses to individual reviewers. > - Include in the main paper the VPT text-conditioning baseline and MineDojo multi-task RL baseline along with a discussion of limitations and differences. - Include more tasks showing the effect of classifier-free guidance (baseline subtraction trick) in the appendix. - Update the Figure 6 caption to indicate that the values in the parentheses are a 95% confidence interval. - Improve the scaling section of the paper to make it more clear why we expect further scaling will improve agent performance. - Add a section to the appendix indicating how we calculated the total cost of training STEVE-1. - Add a discussion to the limitations section about the difference in performance between text and visual instructions. - Add clarification regarding the differences between hindsight relabeling and packed hindsight relabeling. - Add a more detailed explanation of the MineCLIP evaluation matrix. - Add a section in the limitations section that summarizes what is needed to use our method in a new domain. 
- Add a discussion of selecting goals in training randomly versus with a more sophisticated strategy. - Add the prompts used to generate the GPT instructions for the prior training dataset. - Update Figure 3 (left) to include VPT text-conditioning results as a baseline. - Clarify “relevant” and “irrelevant” prompts in Figure 4 (right). - Add a discussion on how the performance of STEVE-1 could be improved on longer horizon tasks. - Add discussion about how the prompts were designed to the paper. - Update Table 3 in the appendix to clarify which prompts correspond to which tasks in the MineCLIP evaluation matrix. - Update “STEVE-1 can follow nearly any short-horizon open-ended text and visual task in Minecraft” to “STEVE-1 can follow a wide range of short-horizon text and visual instructions in Minecraft”. - Add a discussion of goal misgeneralization to the main text. - Add a human-labeled success-rate-based evaluation matrix to replace or in addition to the MineCLIP evaluation matrix in Figure 3 (right). - Add discussion and experiments on training set contamination and generalization to new instructions to the appendix. > Updated code and new model snapshots. > We have updated the code to fix a few bugs that we found and to facilitate running an interactive session with the agent. An interactive session makes it easy to test out and change prompts within an episode and to record videos. We also have included a slightly updated snapshot of STEVE-1 (new links in `download_weights.sh`). The main difference is that we trained the agent until the validation loss stopped going down (2 epochs → 3 epochs) and changed the VAE hyperparameters by increasing its size and tuning the $\beta$ hyperparameter. The performance is marginally better than the previous snapshot. This is the version of the agent and code that we will be releasing to the public as well as in the final NeurIPS supplemental materials. We’ve shared this anonymously with the AC. 
We again thank the reviewers for their engagement and we appreciate all the suggestions that we believe will make the paper significantly stronger! Pdf: /pdf/23dbbb52be3b0ed7db605dd6187524ec341ab844.pdf
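For readers following the discussion of the "baseline subtraction trick", a minimal sketch of classifier-free guidance on action logits is given below. This is an illustration of the general technique only, assuming plain Python lists of logits; the conditional scale behavior (0 = unconditional policy, 1 = plain conditional policy, >1 = amplified conditioning) matches the ablation discussed in the rebuttals.

```python
def cfg_logits(cond_logits, uncond_logits, scale):
    """Classifier-free guidance: treat the unconditional ("prior
    behavior") logits as a baseline and scale the goal-conditioned
    direction.  scale = 0 recovers the unconditional policy,
    scale = 1 the plain conditional policy, and scale > 1 pushes the
    action distribution further toward the goal."""
    return [u + scale * (c - u) for c, u in zip(cond_logits, uncond_logits)]
```

At inference time the agent would run two forward passes (with and without the goal embedding) and sample actions from the combined logits.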
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
Accept (poster)
Summary: 1. In the proposed work, the authors propose a. Neural Min-Sum (NMS) decoders b. an NMS decoder with a block-wise training schedule that locally trains a block of weights while retraining the preceding block c. assigning different weights to the unsatisfied check nodes during training. 2. The contributions of the proposed work are: a. Boosting learning using uncorrected codewords b. A block-wise training schedule with retraining c. A weight sharing technique with dynamic weight allocation 3. Sufficient results are presented and discussed. Strengths: The novelty of the work is good. Sufficient results are presented and discussed. Weaknesses: The experimental setup should be elaborated more, with the parameters selected for training. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How are the weights of the NMS decoders initialized? 2. Is it possible to design ASIC hardware for the NMS decoder? 3. What about the latency of the decoder? 4. Recent state-of-the-art neural decoder techniques and their advantages should be discussed in detail. 5. The experimental setup should be clearly described, including the parameters used for training. 6. How exactly the weights are updated in the training process for LDPC codes, as shown in Figure 1, and the impact of the weights on the training process should be clearly explained. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors mention that the proposed work can be implemented in a hardware architecture, but a clear architecture for the proposed method is not highlighted. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and constructive comments. ## Experimental setup As suggested, we’ve provided a more detailed account of the experimental setup in the revised version, including the channel type, the training Eb/N0 for the post decoder for each code, the number of hidden layers, and the number of nodes in the neural network. ## Initial values Initial values are set to 1. We’ve tried various values, but there wasn't a significant difference. Research on effective initialization is an interesting topic, and we plan to pursue it in follow-up studies. ## Implementation by ASIC The proposed method requires no additional modules; hence, once trained, the NMS decoder can be implemented in the same manner as the WMS decoder. There have been numerous studies [a]-[c] on the ASIC implementation of the WMS decoder. As a result, the proposed NMS decoder can also be fabricated using ASIC hardware. [a] W. Zhang, S. Chen, X. Bai, and D. Zhou, “A full layer parallel QCLDPC decoder for WiMAX and Wi-Fi,” in Proc. IEEE 11th Int. Conf. ASIC (ASICON), Nov. 2015, pp. 1–4. [b] T. H. Tran, Y. Nagao, H. Ochi, and M. Kurosaki, “ASIC design of 7.7 Gbps multi-mode LDPC decoder for IEEE 802.11ac,” in Proc. IEEE 14th Int. Symp. Commun. Inf. Technol. (ISCIT), 2014, pp. 259–263. [c] S. Shao et al, “Survey of Turbo, LDPC, and Polar Decoder ASIC Implementations”, IEEE Communications surveys & Tutorials, Vol. 21, no. 3, pp. 2309-2333, 2019 ## Comparison with recent works Conventional neural decoder works have primarily targeted improvements in waterfall performance, whereas our study achieved improvements in error floor performance. Nevertheless, we include a comparison with the latest neural decoder study [15] in the attached file’s Table R.1. Compared to [15], our method achieves similar performance with lower training/decoding complexity. 
A comparison in the error floor region cannot be conducted with [15], as verifying their error floor performance is infeasible and there are numerous differences in the experimental environments. The state-of-the-art (SOTA) result with the same experimental setting as our work is from [33], so we performed the comparison with that. The distinctions between our research and these prior works will be highlighted in the revised version. ## How to update the weights Figure 1(b) corresponds to the neural network for the LDPC code depicted in Figure 1(a). The connections between the nodes in the network vary depending on the code, and each edge has an assigned weight. Here, these weights do not have any special functions; they are just typical neural network weights. Therefore, the training of these weights is done using standard backpropagation, and in this paper, we employed the Adam optimizer. ## HW architecture It seems that some ambiguous expressions may have led to misunderstandings. What we meant was that the proposed method can be directly applied to the HW architecture used for WMS, not that we are proposing a new architecture. We will make corrections in the revised version to ensure clear communication of this point.
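To make the weight-update discussion above concrete for readers less familiar with min-sum decoding, here is a simplified sketch of the check-node update that the trained NMS weights plug into. This is a single-check-node view with hypothetical per-edge weights; the real decoder applies this update across the whole Tanner graph at every iteration, and a plain weighted min-sum (WMS) decoder would use one shared weight instead.

```python
def check_to_variable(incoming, weights):
    """One neural-min-sum check-node update.

    `incoming` holds the variable-to-check messages on the edges of a
    single check node; `weights` holds the corresponding trained NMS
    weights (one per edge).  Returns the check-to-variable message on
    each edge: sign product and minimum magnitude over the *other*
    edges, scaled by the learned weight.
    """
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        magnitude = min(abs(m) for m in others)
        out.append(weights[i] * sign * magnitude)
    return out
```

Since the update is differentiable in `weights`, training with backpropagation (Adam, as the rebuttal states) follows directly.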
Summary: The paper proposes a training framework for the NMS decoder of LDPC codes in order to enhance the error-floor performance of these codes. \ The NMS decoder iterations are split into two cascaded parts: the first, the so-called base decoder, is trained for decoding in the waterfall region, and the second (post decoder) is for decoding uncorrected codewords caused by the error floor.\ The paper suggests the use of a training schedule that reuses learned weights to tackle the vanishing gradient problem of NMS with many iterations.\ The proposed decoder using the trained weights showed improved error-floor performance for several standard LDPC codes and can be efficiently integrated into current NMS solutions. Strengths: The paper addresses an important issue present in BP/MS-based decoders on these codes. The solution is simple and does not require any architectural modification. The performance gain seems important and is obtained without adding any complexity compared to the original NMS decoder. Weaknesses: #### NMS I am not sure the comparison with NMS is fair, for multiple reasons. 1) The training of NMS can be performed in several ways that should not allow zero loss. For example, the observations in lines 224 and 225 can be easily solved by using large batches and/or using batches spanning many Eb/N0 values. 2) The vanishing gradient phenomenon has been solved in [5] using a multi-loss objective, which clearly prevents the gradients from collapsing at each layer. Also, in Figure 4b) NMS and the proposed method should be compared at an equal number of iterations. NMS's test FERs with l=40 and l=50 are not that far from the l=50 proposed method's FER. 3) NMS remains non-negligibly better in the waterfall region. 4) Many NMS algorithms have emerged since [5], and I am wondering if the comparison with the original NMS [5] (2018) is fair and conclusive in order to judge the paper. #### Clarity: Figure 2 is not clear. 
For example, the differentiation between the NMS and the neural network is not clear; the base and post NN should be linked by an arrow. The paper contains a very large amount of text which can become cumbersome. I believe adding clear Algorithm(s) would simplify the comprehension a lot. #### Misc: Typos: line 88: we a new \ line 121: protogrph Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Please address the NMS subsection from the Weaknesses section.\ It would be beneficial to have an ablation study of the final accuracy for NMS augmented with the different components of the proposed methods. Also, different training strategies of the NMS must be considered. 2) Please address the clarity and Misc from the Weaknesses section. I believe adding clarity, addressing the ablations, and adding more experiments would increase the quality, impact, and rating of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing insightful comments. ## Schemes for avoiding zero loss Thank you for your detailed feedback. Lines 224-225 are about Case 3. In Case 3, we use received words sampled at 4.5 dB (the error floor region of the base decoder) as training samples for the post decoder without any filtering. At 4.5 dB, the FER of the base decoder is at the level of $2\times 10^{-5}$, so even if the batch size is increased to 100, on average only 0.02 samples produce a non-zero loss. Furthermore, if we span the sampling Eb/N0 to include lower values (in the waterfall region), it doesn't foster post decoder training that's robust to the error floor. In other words, the core idea of our proposed method is to sample from the error floor region and use only uncorrected words by filtering, ensuring effective training for the post decoder. We will refine this section to eliminate any ambiguity and preclude potential misinterpretations. ## Multi-loss objective function As you suggested, we included a comparison with the multi-loss method (see the attached file, Figure R.1(d)). Like the iter-by-iter method, the multi-loss method also rapidly reduces the FER value in the initial iterations. However, by iteration 50, block-wise training outperforms the multi-loss method. This can be attributed to the fact that error patterns occurring in the error floor region can be better corrected when all iterations within a block collaborate closely, rather than optimizing each individual iteration. This is discussed in Section 4 of the supplementary material. While the graph in Figure R.1(d) does not seem to show a significant difference in the final test FER, the precise values are 0.179 for at-once, 0.156 for multi-loss, and 0.112 for block-wise, which means that the block-wise training method achieves reductions of 40% and 28%, respectively. 
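The boosting-learning data pipeline described in the rebuttal (sample received words at an error-floor Eb/N0, decode with the frozen base decoder, keep only the uncorrected words as training data for the post decoder) can be sketched as follows. `base_decode` and the sample source are placeholders, not the authors' code.

```python
def collect_boosting_samples(channel_samples, base_decode, n_needed):
    """Boosting learning: build the post decoder's training set from
    received words that the trained base decoder fails to correct,
    i.e. the words that populate the error floor.

    `base_decode` is a stand-in for the frozen base NMS decoder and
    should return (decoded_word, success_flag); `channel_samples`
    yields received words simulated at an error-floor Eb/N0
    (e.g. 4.5 dB for the code discussed above).
    """
    training_set = []
    for received in channel_samples:
        _, success = base_decode(received)
        if not success:                 # keep only uncorrected words
            training_set.append(received)
        if len(training_set) >= n_needed:
            break
    return training_set
```

The filtering step is what avoids the zero-loss problem described above: without it, almost every batch sampled in the error floor region would consist of already-correctable words that contribute no gradient.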
## NMS remains non-negligibly better in the waterfall region The proposed post decoder is trained for the error floor region, so even with the addition of the post decoder, the waterfall performance remains largely unchanged (compare the proposed method at iterations 20 and 50 in Figure 6(a)). Consequently, as you rightly pointed out, our proposed method in Figure 6(a) exhibits slightly degraded waterfall performance compared to other NMS methods. Compared to [5], our methodology incurs a minor reduction of 0.05 dB in the waterfall region (@FER $10^{-2}$). However, it can achieve an appreciable SNR gain of 1 dB in the error floor region (@FER $10^{-7}$). Such a trade-off will be beneficial, especially for the target applications of our work, e.g., extremely ultra-reliable communications for 6G and storage systems. ## Comparison with other works As you've pointed out, several follow-up studies have been conducted after [5]. Unlike our work, which focuses on error floor performance, the majority of these works have concentrated on improving waterfall performance. The state-of-the-art (SOTA) model-based approach in waterfall performance is the hypergraph network [15]. A comparison with [15] in the waterfall region is added in the attached file. Our proposed boosting learning is also applicable in the waterfall region. As seen in Table R.1, our method achieves comparable performance with [15] solely through the enhancement of the training technique, without any additional modules. What we would like to emphasize is that our proposed method has achieved the SOTA result in terms of error floor performance in a low-complexity setting. 
Specifically, our work assumes scenarios suitable for application areas like storage systems and XURLLC for 6G, encompassing: i) Error floor region (FER <=1e-7, BER<=1e-9) ii) Target metric: FER performance iii) MS algorithm iv) Without additional training/decoding cost v) Hardware-Feasible vi) Moderate and long code length (>=500) In contrast, [15] targets short-length BER performance in the waterfall region (BER~1e-7), incurs additional training/decoding costs, and is constrained to the BP algorithm. This makes it difficult to make a fair comparison. Technically, due to memory limitations in TensorFlow, it was impossible for us to train the [15] technique on the WiMax LDPC code of length 576. Moreover, the evaluation of the technique [15] in the waterfall region takes several days and it is infeasible to evaluate the error-floor performance. Our proposed method is the first in research on learning-based decoders to directly target the error floor performance. For reference, the SOTA work sharing the same environment as this paper is [33], and our proposed NMS showed better performance with about a third of the latency and without additional modules. In the revised paper, we will emphasize that the SOTA in the waterfall is the existing works [15], and will highlight that the goal of our study is the error floor. Thank you for clarifying the contribution of the research. ## Clarity and ablation study Thank you for the valuable suggestion. For the clarity of the paper, we've extensively revised the schematics as shown in Figure R.5 of the attached file and added an algorithm. Additionally, we included an ablation study in the attached file. The result shows that the boosting learning is the core technique to reduce the FER performance and the block-wise training also contributes the FER reduction, while the proposed sharing does not involve the performance degradation. --- Rebuttal Comment 1.1: Comment: Thank you for your very clear and thorough answer. 
1) Thank you for the experiment. What is the impact of the multi-loss approach applied with your method? Do you assume it won't help?
2) Comparison with other works: the comparison is not very pertinent, since much better neural decoders than [15] have recently been developed (e.g., [A, B] are related to the authors of [5] and [15]). However, I admit their capacities may not be straightforwardly equivalent (e.g., [C] does not perform as well but is a more lightweight model).
3) It seems that careful selection of the data alone has an impressive and surprising impact on the performance.
- Do you know what the real impact of training and fine-tuning the whole model (base+post) is? (maybe with different step sizes)
- Do you know if other data-selection policies during training have already been investigated? (related works focus on weight repartitions)

Thank you

[A] *Autoregressive Belief Propagation for Decoding Block Codes* [B] *Error Correction Code Transformer* [C] *Graph Neural Networks for Channel Decoding*

--- Reply to Comment 1.1.1: Title: Reply to the comments by jBEt Comment: 1. In the error floor scenario with the FER metric, the proposed block-wise method shows the best results, but the multi-loss method is the second most effective. Unlike the at-once method, the multi-loss approach has the advantage of enabling learning over a large number of iterations without suffering from the vanishing gradient problem. Also, in contrast to the iter-by-iter method, it incorporates multiple iterations in the loss function, which more effectively eliminates error patterns in the error floor. As a result, the multi-loss method achieves a lower FER compared to the at-once and iter-by-iter methods. We will mention these advantages in the revised version. 2. Thank you for the valuable comment. We will add a comparison with other works [A], [B].
(A comparison with [C] is not conducted, as its source code is not publicly accessible and an apples-to-apples comparison is therefore difficult.) As is known, in the waterfall region, [B] demonstrates state-of-the-art performance. Especially for high-density codes such as BCH and polar codes, there is a significant performance gap compared with the BP and MS research series (Hyper [15], AR [A], Boosting [Proposed]), whereas the gap is relatively minimal for LDPC codes. However, [B] uses an entirely different transformer-based architecture that is not based on the BP and MS algorithms, making its practicality somewhat limited as of now. Additionally, due to its intricate network structure, it is more complex than NMS decoders in terms of training time, memory requirements, and decoding complexity. In practice, the ECCT is difficult to train for the target codes of this paper, particularly codes of several hundred bits in length with very low error rates. In our GPU environment, training for the WiMax code (N=576, K=432) is feasible using only a shallow ECCT (N=2, d=32), and at this depth there is no performance improvement over the BP and MS series of works. On the other hand, our proposed boosting learning is trainable even for very deep networks, around 100 iterations, and long codes of around 1000 bits. Although our focus in this work was on improving the error floor performance of neural min-sum decoders, our approach could potentially be applied to other decoding architectures (e.g., the ECCT). Applying our boosting learning to the ECCT would be an enticing direction for future study. We conjecture that boosting learning, via smart selection and arrangement of training samples, can be effective for a large class of architectures. In summary, the contributions of our work are as follows: i) Primary contribution: achieving SOTA error floor performance in practical settings (the NMS decoder, standard LDPC codes).
ii) Secondary contribution: achieving competitive waterfall performance for short codes by modifying only the training method (to be discussed in the supplementary file). iii) Additional benefit: a flexible methodology applicable to different channels and decoder architectures. 3. As you mentioned, careful selection of data samples can lead to a significant improvement in decoding performance through the proposed boosting approach. The ablation study underscores this substantial performance boost (as seen when comparing the first and second rows of the ablation study: from $3.14 \times 10^{-6}$ to $3.31 \times 10^{-7}$). Most research has focused on proposing new decoder architectures or decoding algorithms to improve decoding performance. In contrast, our proposed boosting approach demonstrates that decoding performance can be significantly enhanced by training on carefully selected data samples. We believe that the proposed boosting approach offers a promising new research direction that can improve decoding performance while minimizing the additional decoding complexity. We've explored fine-tuning of some hyper-parameters (learning rate, initial values, batch size) but didn't observe significant effects. Instead, we will include experimental results showing valid effects, such as the usage of VN weights and the splitting of the number of iterations between the base and post decoders. Research on data selection has been conducted in “active learning” [16] and “BP-RNN diversity” [9]. [16] employs single-stage decoding and gathers data samples with a moderate number of errors, inspired by active learning. However, [16] differs significantly from our proposed boosting learning method, which uses a two-stage decoding process and collects the failed codeword samples from the first stage. BP-RNN diversity [9] requires enumerating trapping sets beforehand, and because of this it can only be applied to short codes.
In contrast, sampling uncorrected words in the proposed boosting learning has O(n) complexity, making it feasible for longer codes. More importantly, those works focused on waterfall performance. We will clarify this point in the revised manuscript. Thank you again for the valuable comments.
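For illustration, the O(n)-per-trial collection of uncorrected words described above can be sketched as follows. This is a minimal sketch, not the authors' code: `channel` and `base_decode` are hypothetical stand-ins for an AWGN channel simulator and the base NMS decoder.

```python
import numpy as np

# Hypothetical interfaces (not from the paper's code):
#   channel(codeword, snr_db)  -> received LLR vector
#   base_decode(rx)            -> (decoded_bits, success_flag)

def collect_uncorrected_words(codeword, channel, base_decode, snr_db,
                              num_samples, max_trials=10**7):
    """Gather received words that the base decoder fails to correct.

    Each trial simply runs the base decoder once and keeps the word if
    decoding fails, so the cost per sample is linear in the code length,
    unlike trapping-set enumeration.
    """
    failures = []
    for _ in range(max_trials):
        rx = channel(codeword, snr_db)   # simulate transmission
        _, success = base_decode(rx)
        if not success:                  # parity checks unsatisfied
            failures.append(rx)
            if len(failures) == num_samples:
                break
    return np.array(failures)            # training set for the post decoder
```

The returned batch of failed words then serves as the training data for the post decoder.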
Summary: The authors study the error floor problem of neural min-sum (NMS) decoders for LDPC codes, solely by changing the training method: (1) a boosting learning method, in which a first network performs the majority of the decoding to achieve waterfall performance and a second network deals with the small residual errors; (2) mitigating the vanishing-gradient problem of deep decoding iterations via block-wise training; (3) bundling weights in a dynamic manner to reduce the number of weights to be trained. With the above techniques, experiments on various LDPC settings show that the proposed NMS has better performance in the error floor region, with the same computational resources as NMS. Since NMS and MS have similar complexity, the proposed NMS could plausibly be adopted in future LDPC decoder implementations. Strengths: (1) The paper works directly on the training of the NMS algorithm, which requires minimal change to existing low-complexity NMS solutions. All three training methods are reasonable and are supported by experiments. The residual learning (termed boosting learning) empirically shows an advantage in the error floor region. The block-wise training schedule does improve performance with a large number of iterations. Finally, the weight sharing technique adds a structured sparsity constraint to the neural decoder, which uses only a small portion of the weights. (2) The evaluation is extensive; on multiple LDPC settings in Figure 6, we see that the proposed NMS method shows better error floor performance compared to existing NMS and canonical decoders. Weaknesses: Here are my concerns: (1) The major contribution in the machine learning sense is limited. The proposed boosting learning method is common, as residual learning in the ML sense; the proposed training schedule and weight sharing are also widely used ML methods. The core contribution is applying ML techniques in the context of the channel coding field. In some sense, the target audience is channel coding researchers rather than general ML researchers.
(2) The experiments are not conducted at long block lengths (length > 1000, which is common for QC-LDPC), which are widely used in 5G/WiFi systems. It might be interesting to check the performance at long block lengths, where the block-length coding gain is significant and NMS shows less advantage over canonical methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
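The block-wise training summarized in the review above (train a new block of iterations while retraining the preceding block, freezing earlier ones) might be scheduled roughly as in this sketch. The helper name and the assumption that the unrolled decoder exposes one weight group per iteration are illustrative, not taken from the paper.

```python
def blockwise_schedule(num_iters, block_size):
    """Yield (frozen, trainable) iteration-index lists for each stage.

    Stage b trains block b together with block b-1 (so the previous
    block's weights are retrained and can escape local minima), while
    all earlier blocks stay frozen. Keeping only two blocks trainable
    limits the unrolled depth seen by backpropagation.
    """
    blocks = [list(range(s, min(s + block_size, num_iters)))
              for s in range(0, num_iters, block_size)]
    for b, block in enumerate(blocks):
        prev = blocks[b - 1] if b > 0 else []
        frozen = [i for blk in blocks[:max(b - 1, 0)] for i in blk]
        yield frozen, prev + block
```

For example, with 6 iterations and blocks of 2, the last stage freezes iterations 0-1 and trains iterations 2-5.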
Rebuttal 1: Rebuttal: Thank you for your valuable comment. ## Contribution of this work We agree with your opinion that the contribution to machine learning techniques can be seen as limited. However, we believe this manuscript aligns well with the scope of NeurIPS under the category of “application of machine learning”. The primary emphasis of our work is on the novel application of ML techniques to the domain of coding theory. In a similar vein, previous works [a]-[d], presented at ML conferences, have primarily aimed at broadening the spectrum of ML applications. A unique aspect of our research is the adoption of boosting learning, a method yet to be explored in coding theory. Our boosting approach for LDPC codes achieves the SOTA performance in the error-floor region compared to other LDPC decoding strategies. [a] 2018 NeurIPS, “Deepcode Feedback Codes via Deep Learning” [b] 2019 NeurIPS, “Hyper-Graph-Network Decoders for Block Codes” [c] 2020 NeurIPS, “Learning to Decode Reinforcement Learning for Decoding of Sparse Graph-Based Channel Codes” [d] 2021 ICML, “Cyclically Equivariant Neural Decoders for Cyclic Codes” [e] 2022 NeurIPS, “Error Correction Code Transformer” ## Long length code As you suggested, we’ve added the results for the (1248, 1056) 5G LDPC code to the attached file's Figure R.1(c). This result demonstrates the proposed method is also effective for long codes. As you mentioned, there is little improvement in waterfall performance with the NMS technique at such long lengths. However, the performance improvement in the error floor region using our proposed method is clearly evident. --- Rebuttal Comment 1.1: Title: Keep my score the same Comment: After reading the response from the author, I decide to keep my rating.
Summary: The paper proposes training methods to optimize neural min-sum (NMS) decoders that are robust to the error-floor phenomenon of LDPC codes. The proposed methods include: (1) dividing the decoding network into two neural networks and training the post network to be specialized for uncorrected codewords that failed in the first network; (2) introducing a block-wise training schedule that locally trains a block of weights while retraining the preceding block; and (3) assigning different weights to unsatisfied check nodes. The proposed methods are applied to standard LDPC codes, and the results show that they achieve the best decoding performance in the error-floor region compared to other decoding methods in the literature. The proposed NMS decoder is designed only by modifying the training methodology, without adding any additional modules. Therefore, it can be seamlessly incorporated into the well-established architectures of LDPC decoders and can be immediately utilized in practical domains that demand exceptionally low error rates. Strengths: The paper introduces novel ideas: 1. Boosting learning using uncorrected codewords - The authors propose a boosting learning technique for LDPC code decoding. The technique divides the decoding network into two cascaded networks, where the first network focuses on waterfall performance and the second network focuses on the error-floor region. The second network is able to correct codewords that the first network fails to correct due to the error-floor phenomenon. This results in a significant performance improvement in the error-floor region. 2. Block-wise training schedule with retraining - The authors propose a new training schedule for NMS decoders that mitigates the vanishing gradient problem. The proposed schedule divides the entire set of decoding iterations into sub-blocks and trains the blocks sequentially.
Additionally, the weights trained in previous blocks are retrained to escape from local minima. This results in a significant performance improvement over the one-shot training method and the iter-by-iter schedule. 3. Weight sharing technique with dynamic weight allocation - The authors propose a new weight sharing technique for NMS decoders that improves performance in the error-floor region. The technique dynamically assigns different weights to unsatisfied check nodes (UCNs) and satisfied check nodes (SCNs) in the decoding process. This results in a significant performance improvement with a 2.6% reduction in the number of weights to be trained. Weaknesses: . Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can the proposed method also be beneficial for improving the results of neural decoders for other types of codes, such as BCH and polar? 2. How does the model behave with other types of noise (other than Gaussian)? 3. Can you tie the weights of the different networks to get an improvement in terms of model size? 4. What happens if you use more than two NNs? Does the performance improve? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
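The dynamic UCN/SCN weight allocation described in point 3 above could look roughly like the following sketch. It is simplified to a single outgoing check-node value rather than the per-edge extrinsic messages of a real min-sum decoder, and the function name and signature are illustrative rather than the authors' implementation.

```python
import numpy as np

def check_node_update(v2c_llrs, hard_bits, w_scn, w_ucn):
    """Weighted min-sum update for one check node with dynamic weights.

    v2c_llrs  : incoming variable-to-check LLRs at this check node
    hard_bits : current hard decisions of the neighbouring variable nodes
    w_scn     : trained weight used when the check node is satisfied
    w_ucn     : trained weight used when the check node is unsatisfied
    """
    unsatisfied = (int(np.sum(hard_bits)) % 2) == 1  # parity check fails
    w = w_ucn if unsatisfied else w_scn              # dynamic allocation
    sign = np.prod(np.sign(v2c_llrs))                # min-sum sign rule
    return w * sign * np.min(np.abs(v2c_llrs))       # min-sum magnitude rule
```

Because the UCN/SCN split is recomputed every iteration from the current hard decisions, only two weights per iteration (or per sharing group) need to be trained instead of one per edge.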
Rebuttal 1: Rebuttal: Thank you for your positive feedback and constructive comments. ## Other types of codes The proposed boosting learning method is more effective in the error floor region than in the waterfall region. Therefore, its impact is more prominent with LDPC codes than with polar or BCH codes, where the error floor doesn't manifest as prominently. However, boosting learning can be applied regardless of the code type and can also be applied targeting the waterfall region. To demonstrate this, we added simulation results of our method applied to polar and BCH codes in the waterfall region to Table R.1 of the attached file. When compared to the SOTA model-based approach [15] in terms of the *waterfall performance*, comparable performance is achieved. However, while [15] requires additional training/decoding complexity, our proposed method does not. In other words, our approach achieves “the *error floor SOTA* result” and exhibits performance comparable to “the *waterfall SOTA* result” [15] for other types of codes. ## Other types of noise The concept of training using uncorrected words can be applied consistently regardless of the channel type. In Figure R.2 of the attached file, we included results for the Rayleigh channel, showing that the effectiveness of our proposed method holds across different channel types. ## Tie the weights The key mechanism behind the performance improvement is that the base and post decoders obtain decoding diversity by using different weights. Therefore, sharing weights between them would likely be ineffective. Exploring new weight sharing (or tying) techniques to reduce the model size could be an attractive avenue for future research. ## Multiple decoders It's possible to extend to the case of more than two decoders in the same manner. For instance, in the WiMax LDPC code result (Fig. 6(a)), uncorrected words from the error floor region of the base + post decoder (i.e., Eb/N0 5dB) can be collected to train a third decoder.
In Figure R.1(a) of the attached file, we present numerical results demonstrating a further reduction of the error floor with the aid of this third decoder (labeled “Base+Post+Third”). It's worth mentioning that introducing this third decoder might increase decoding latency. Moreover, the process of collecting uncorrected words would be time-consuming, especially in the very low FER region. Addressing these challenges could be a promising direction for subsequent research. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. The answers are satisfactory.
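At inference time, the multiple-decoder extension discussed in the reply above amounts to a simple cascade: each later stage runs only on the words the earlier stages failed to correct, so the average latency grows little. A minimal sketch, with illustrative decoder callables of the form `decode(rx) -> (bits, success)`:

```python
def cascaded_decode(rx, decoders):
    """Try each decoder (base, post, third, ...) in turn.

    Stops at the first decoder whose parity checks are satisfied;
    returns the last attempt's output if all stages fail.
    """
    bits = None
    for decode in decoders:
        bits, success = decode(rx)
        if success:
            return bits, True
    return bits, False
```

With well-trained stages, later decoders fire only on rare failure events, which is why the worst-case (not average) latency grows with the number of stages.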
Rebuttal 1: Rebuttal: ### Dear Reviewers and Area Chair, We sincerely appreciate the time and effort you've taken to review our paper. Your insightful feedback has undoubtedly enhanced our work. We've carefully addressed each of your remarks and inquiries, providing detailed responses for each one. We hope that our responses have addressed any concerns. If you have further questions or need clarifications, please don't hesitate to ask. Pdf: /pdf/388a4c28958ec0d212b92d1425eec37043f450cc.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents novel training techniques for the NMS decoder of LDPC codes aimed at enhancing performance in the error floor region. The proposed decoding methods comprise two stages: a base decoder and a post decoder designed specifically for uncorrected codewords from the base decoder. In order to tackle the issue of vanishing gradients during training, a block-wise training schedule is introduced in this paper. By assigning distinct weights to unsatisfied check nodes, the error floor can be lowered with a small number of weights to be trained. Strengths: 1. The impact of the proposed decoding technique is substantial, as evidenced by the figures, which clearly demonstrate a significant improvement in code performance, particularly in the error floor region. 2. The analysis and interpretation of the results in Figure 3 were particularly valuable, providing a clear understanding of the distinctions between the methods employed for selecting training samples. Weaknesses: 1. The proposed decoding method in this paper builds upon the concept of two-stage decoding and incorporates a post decoder that complements the base decoder. It is important to acknowledge that this idea is not entirely new within the field of coding theory for LDPC codes, as there has been prior research exploring similar approaches. The following are some examples sharing a similar philosophy, and it would be valuable to mention them as relevant previous work in this direction. S. Yang, Y. Han, X. Wu, R. Wood and R. Galbraith, "A soft decodable concatenated LDPC code," 2015 IEEE International Magnetics Conference (INTERMAG), Beijing, China, 2015 J. Oh, J. Ha, H. Park and J. Moon, "RS-LDPC Concatenated Coding for the Modern Tape Storage Channel," in IEEE Transactions on Communications, vol. 64, no. 1, pp. 59-69, Jan. 2016 H. Park and J. Moon, "Improving SSD Read Latency via Coding," in IEEE Transactions on Computers, vol. 69, no. 12, pp. 1809-1822, 1 Dec. 2020 2.
The clarity of the presentation style could be enhanced. For example, the specific problem addressed by the paper, such as the type of channel considered, is not explicitly stated. The reader must infer this information primarily from the related work section. Providing a clear and explicit description of the problem, including the type of channel under consideration, would greatly improve the overall clarity of the paper. 3. Nitpicks - It would greatly enhance the presentation if the theoretical performance limit based on the code rate could be included in the plots, such as in Figures 3 and 6. - Typo: On line 121, page 3, "protogrph" should be corrected to "protograph". Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In order to harness the potential of boosting learning, this study introduces a division of the decoding network into two components, with the post decoding stage aiding the base decoding process. A natural extension of this approach is to explore the possibility of incorporating more than two decoders. It would be intriguing to learn from the authors whether they have considered this scenario and if they can provide insights into the expected outcomes. Including discussions and potential answers to this question would enhance the manuscript's appeal and stimulate further interest in the research. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A - This work does not appear to have any discernible negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments and the clarification. ## Related works sharing a similar philosophy Thank you for informing us about the related research. In those studies, the outer and inner decoders perform decoding subsequently or iteratively in a two-stage fashion. Our study bears similarities, as it also carries out two-stage decoding. However, unlike these concatenated codes, our method utilizes a single LDPC decoder. While the proposed decoder is conceptually divided into two stages (base/post) based on iterations, in practice we employ only a single decoder with distinct weight parameter sets. Consequently, our approach avoids the need for the additional (or separate) parity bits that are typically associated with concatenated codes. Moreover, our approach is novel in that the post decoder is trained depending on the results of the base decoder. Nonetheless, we agree with the comment that our work and the previous works share the philosophy of complementary multiple decoders. We believe they are worth mentioning as relevant previous works. ## Clarity of the presentation As you mentioned, there was no explicit introduction of the channel model in the original manuscript. In the revised version, we will specify that the underlying channel is the AWGN channel. Additionally, we will add information about the sampling points of the post decoder and the size of the neural network. ## Performance limit As you suggested, we have added a graph showing the finite-length capacity in Figure R.1(a) of the attached file. The finite-length performance limit is referenced from the paper [a]. [a]: Y. Polyanskiy, ”Channel coding rate in the finite block-length regime,” IEEE Trans. on IT, vol. 56, no. 5, 2010 ## Multiple decoders It's possible to extend to the case of more than two decoders in the same manner. For instance, in the WiMax LDPC code result (Fig.
6(a)), uncorrected words from the error floor region of the base + post decoder (i.e., Eb/N0 5dB) can be collected to train a third decoder. In Figure R.1(a) of the attached file, we present numerical results that demonstrate the error floor's further reduction with the aid of this third decoder (labeled as “Base+Post+Third”). It's worth mentioning that introducing this third decoder might lead to increased decoding latency. Moreover, the process of collecting uncorrected words would be time-consuming, especially in the very low FER region. Addressing these challenges could be a promising direction for subsequent research.
null
null
null
null
null
null
Evaluating Neuron Interpretation Methods of NLP Models
Accept (poster)
Summary: This work investigates interpretation methods in NLP that identify which neurons in a neural network are most related to particular concepts (e.g. a specific part of speech). The key idea is to compare how consistent one method's ranking of neurons is (w.r.t. a specific concept) with that of all other considered methods. The assumption is that the method that produces the most consistent rankings is the one that is closest to the true ranking of important neurons. The authors consider a number of interpretation methods and pretrained language models and show that across several settings, the method that is most consistent is Probeless (Antverg and Belinkov, 2021). Strengths: - The work considers the problem of interpreting the neuron-level representations of neural networks, which seems to be a recently emerging subarea of interpretability within NLP. I'm not fully convinced by its utility, but there are indeed several recent works in this area, so perhaps there would be interest in this work - The paper is mostly well written and motivated Weaknesses: The main weakness is the assumption that consensus with existing interpretability methods is desirable. This consensus of course depends on the other considered interpretability methods, and it's not clear that the number of total interpretability methods (6) is sufficient to lead to reliable results that are based on consensus. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: How do the insights about which interpretability method is the most consistent change according to the considered interpretability methods? For example, can the authors remove 1,..n-1 interpretability methods from the consensus calculation and compare the results with those from the full set of n interpretability methods? Minor questions / notes: L100: are the examples supposed to be country names? The provided examples are cities. There are a lot more "random" concepts in a corpus than target concepts.
Are the random concepts that are used for evaluation sampled such that they are equal in number to the target concepts? How consistent is this evaluation over different random samples? L185: why does a probe using 100 random neurons performing better than one using other neurons mean that the probe is memorizing? As the authors point out earlier in the manuscript, the concept knowledge is distributed, so it makes sense that as one increases the number of random neurons being considered, the prediction performance would improve. If the authors were looking at randomly initialized models rather than pretrained models, then I would agree with their conclusion, but in the current setup, there seem to be alternate explanations for their results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: See under Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: This consensus of course depends on the other considered interpretability methods, and it's not clear that the number of total interpretability methods (6) is sufficient to lead to reliable results…** A: We acknowledge the concern. We discussed the limitation of our approach in detail in Section 5. We would also like to emphasize the significance and critical nature of the problem we are addressing. Currently, there is no standardized metric available for comparing and evaluating neuron interpretation methods. This lack of standardization has led to scattered efforts in proposing new interpretation methods. Our work aims to address this gap by establishing a foundation for evaluating and comparing neuron interpretation methods, which will be continuously updated to include new interpretation methods. Regarding the choice of six interpretability methods: we did not explicitly reject any interpretation method from inclusion in our framework. The choice of six interpretability methods is based on the recently published survey on neuron interpretation (link: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00519/113852/Neuron-level-Interpretation-of-Deep-NLP-Models-A). Additionally, the six methods utilized in our paper were chosen for their theoretical diversity, resulting in a diverse array of discovered neurons. We welcome any discussion on alternative methods that the reviewer believes could help address the limitations of evaluating neuron interpretation techniques. We are open to considering and incorporating such suggestions to further enhance the robustness of our approach. **R: For example, can the authors remove 1,..n-1 interpretability methods from the consensus calculation and compare the results with those from the full set of n interpretability methods?** A: Thank you for suggesting an insightful experiment.
In the paper, we considered a leave-one-out strategy to calculate the compatibility of each method: for a set of N methods, we considered one method as the test method and used the remaining N-1 methods to serve as the database of sets of discovered neurons. What you have suggested is the opposite of this. We tried it for the rebuttal. We considered combinations of one, two, three, and four methods to serve as the database of sets of neurons and calculated the compatibility scores of the other methods. The trends in Tables 1, 2, and 3 are consistent with the results reported in the paper. We observe that only the inclusion of Lasso in the database, when there are only one or two methods, provides an edge to LCA during evaluation. However, this effect is minimized with the inclusion of more methods (greater than 2) in the consensus database. Beyond that, we did not find any correlation between the presence of a method in the database and a high compatibility score of a test method. **R: L100: are the examples supposed to be country names? The provided examples are cities.** A: Thank you for pointing out the typo. We will fix it to be city names. **R: Are the random concepts that are used for evaluation sampled such that they are equal in number to the target concepts?** A: Yes, we randomly selected “random” concepts equal in number to the target concept. We ran three random selection runs and found the results to be consistent. **R: L185: why does the performance of a probe using 100 random neurons being higher than other neurons mean that the probe is memorizing?** A: Thank you for pointing out the error. It is indeed due to the distributedness of the knowledge of the concept. Durrani et al. (2020) presented empirical evidence of this using the controlled task. We have corrected it in the paper.
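The leave-one-out compatibility computation described above can be sketched as follows. This is one plausible formulation for illustration only (scoring a held-out method's top-k neurons against the union of the other methods' top-k sets); the paper's exact metric may differ.

```python
def compatibility(test_top_k, consensus_sets):
    """Fraction of the test method's top-k neurons that also appear in
    the union of the consensus methods' discovered-neuron sets."""
    database = set().union(*consensus_sets)
    return len(set(test_top_k) & database) / len(test_top_k)

def leave_one_out(rankings, k=10):
    """rankings: dict mapping method name -> ranked list of neuron ids.

    Each method in turn serves as the test method and is scored against
    the top-k sets of all remaining methods.
    """
    scores = {}
    for m, rank in rankings.items():
        others = [set(r[:k]) for name, r in rankings.items() if name != m]
        scores[m] = compatibility(rank[:k], others)
    return scores
```

The same helpers also cover the reviewer's suggested variant: passing a restricted subset of methods as the consensus database reproduces the one-, two-, and three-method combinations reported in Tables 1-3.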
--- Rebuttal Comment 1.1: Comment: Thanks for the response and the added experiment that aims to examine the robustness of the proposed consensus evaluation metrics to differences in the included approaches. Based on Tables 1, 2, and 3 in the rebuttal PDF, it looks like there are different results when evaluating the same method (e.g. Probeless) against the same set of methods (e.g. Gaussian + LCA, or Gaussian + Ridge). I'm assuming this is a mistake. Please provide the corrected numbers. Abstracting away from the exact numbers, it looks like 3 of the evaluated methods -- Probeless, LCA, and Lasso -- agree more with each other than on average across all datasets. This is important because including two of these in the consensus methods and the 3rd as the method to be evaluated will bias the results towards this 3rd method over the remaining methods that are less similar (Ridge, Gaussian, IoU). Perhaps this is a feature of these methods recovering something closer to the "true" set of important neurons, but that is only speculation. This needs to be discussed. --- Reply to Comment 1.1.1: Comment: Thank you for your response. **re: error in table** We sincerely apologize for the error. Table 2 is accurate. Some rows in Table 1 are mislabelled (but the numbers remain the same), and in Table 3, the Probeless numbers for "LCA, Ridge" and "LCA, Ridge, IoU" were flipped.
The final Table 1 is as follows:

| Consensus Methods | Probeless | IoU |
|-----------------------------|:---------:|:-----:|
| Gaussian | 0.110 | 0.085 |
| Lasso | 0.449 | 0.195 |
| LCA | 0.524 | 0.216 |
| Ridge | 0.270 | 0.160 |
| Gaussian, Lasso | 0.272 | 0.155 |
| Gaussian, LCA | 0.291 | 0.158 |
| Gaussian, Ridge | 0.202 | 0.136 |
| Lasso, Ridge | 0.404 | 0.206 |
| Lasso, LCA | 0.498 | 0.209 |
| Ridge, LCA | 0.424 | 0.217 |
| Gaussian, Ridge, Lasso | 0.337 | 0.194 |
| Gaussian, LCA, Ridge | 0.356 | 0.200 |
| Ridge, Lasso, LCA | 0.475 | 0.224 |
| Lasso, Gaussian, LCA | 0.434 | 0.206 |
| Gaussian, Lasso, Ridge, LCA | 0.426 | 0.221 |

**Specifically:** In the single-method consensus, LCA and Lasso need to be interchanged, so row 2 -> Lasso, row 3 -> LCA.

In the two-method consensus, the order of combinations is:

Gaussian, Lasso
Gaussian, LCA
Gaussian, Ridge
Lasso, Ridge
Lasso, LCA
Ridge, LCA

Lastly, in the three-method consensus, the order of combinations is:

Gaussian, Ridge, Lasso
Gaussian, LCA, Ridge
Ridge, Lasso, LCA
Lasso, Gaussian, LCA

The trend did not change due to the mislabelling. As you have observed, Probeless, LCA and Lasso have the highest overlap in terms of the discovered neurons. ## **re: Bias in consensus methods** Your observation is correct that Probeless, Lasso and LCA discover the most similar top neurons, and this trend is also visible in the pairwise comparison provided in Figure 1 in the paper. We have discussed the point of potential bias in consensus in the limitations section. However, in this particular case, the similarity of the discovered neurons is less attributable to biases in the underlying methods, because they belong to two distinct theoretical classes of methods (classifier- vs. corpus-based).
In other words, the high overlap among their discovered neurons is not due to methodological similarities, and hints that these methods may be "recovering something closer to the 'true' set of important neurons." However, this does not mean that these overlapping discovered neurons form a superset of the "true" neurons with respect to the concept: there can be other true neurons that are not part of the overlapping set. One argument supporting this point is the percentage of overlap between these methods (see Figure 1 in the paper): despite the high overlap, 30-40% of neurons differ among LCA, Probeless and Lasso. In any case, this is indeed a very insightful point for comparing neuron interpretation methods, and we will add a discussion of this spectrum of bias vs. the "true set" of neurons to the paper.
Summary: This work evaluates six different interpretation methods from a unified perspective. The authors focus on two challenges in this field: the absence of standard metrics and the lack of benchmarks. They propose two voting-based metrics to evaluate the compatibility among these six methods. Probeless consistently achieves the highest compatibility across all models based on their evaluation methodology. Further analysis experiments provide insights to the research community. Strengths: 1. The findings on the interpretation methods mentioned in this paper may foster research on the topic. 2. The idea of this paper is slightly novel. Weaknesses: 1. The six selected methods need to be more novel. More recent advancements may exist in this field rather than relying on L1 & L2 regularization. 2. The validation of the superiority of Probeless needs to be more comprehensive. 3. The mentioned problems (metrics and benchmarks) were not effectively addressed, making this method difficult to reproduce in future applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are there additional performance comparison experiments to practically demonstrate the effectiveness of this method? For example, try to deactivate the most important 10% of neurons identified by each interpretation method, and then observe whether Probeless exhibits the most significant performance decline. 2. To assess the efficacy of an interpretation method across various fields, is it essential to replicate multiple interpretation methods within those fields before applying the voting metrics? If so, this process can be quite challenging and should be acknowledged as a limitation of this work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: The selected six methods need to be more novel. More recent advancements in this field may exist rather than relying on L1 & L2 regularization.** A: We selected widely used and established neuron interpretation methods for NLP models, drawing from the existing literature. Please see the recently published survey on neuron interpretation (https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00519/113852/Neuron-level-Interpretation-of-Deep-NLP-Models-A); our selection of neuron interpretation methods is based on it. However, kindly note that the crux of our contribution lies not in the selection of these interpretation methods but rather in the introduction of a novel evaluation framework. This framework is specifically designed to streamline the assessment of new interpretation methods and facilitate meaningful comparisons of their results. By offering this evaluation tool, we aim to foster advancements in the field of neuron interpretation of NLP models and encourage further research in this important area. **R: The validation of the superiority of Probeless needs to be more comprehensive.** A: We conducted a large set of experiments using three pre-trained models, across all layers, and using 50 concepts with diverse linguistic annotations such as part-of-speech tagging, chunking and semantic tagging. This makes approximately 2000 settings in total. All of our results showed Probeless to be the most consistent method. LCA is another method competitive with Probeless, with the exception of last-layer representations. Appendix Tables 6 and 7 present the results using the semantic tagging and chunking tasks. We included the set of experiments that you proposed, and we would be happy to consider and discuss any suggestions for further experiments. **R: The mentioned problems (metrics and benchmarks) were not effectively addressed** A: We kindly seek clarification on the matter.
We have integrated all methods into a single codebase, shared the evaluation metric code, and provided the discovered neurons associated with various concepts to ensure the reproducibility of the results. We are confident that replicating our findings for future applications will be a straightforward process. **R: Q1: …additional experiments… For example, try to deactivate the most important 10% of neurons calculated by each interpretation method, and then observe whether Probeless exhibits the most significant performance decline.** A: Thanks for proposing an interesting experiment. As suggested, we compared the compatibility score by iteratively removing the top N neurons from each method's ranking and calculating the compatibility score for the next top neurons. Figure 1 in the PDF file shows the average results across layers 1, 6 and 12. Probeless maintains the top compatibility score, or one competitive with the top, for all three models, BERT, RoBERTa and XLMR. This shows that the top neurons discovered by Probeless are consistently better. **R: Q2 To assess the efficacy of an interpretation method across various fields, is it essential….** A: This is correct. The compatibility metric relies on the availability of several methods targeting a common goal. The metric is of significant value when gold-standard annotations are not available and are harder to produce. We have added the following text to the limitations section to acknowledge this limitation. ``` While the proposed framework is agnostic to methods used to produce ranking, in order to adapt it to other fields, it requires the presence of various methods targeting an identical goal. ``` --- Rebuttal Comment 1.1: Comment: Dear reviewer, thank you for your time and valuable comments. We would be happy to discuss the rebuttal and address other questions/points of confusion you may have.
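As a rough illustration of the iterative top-N-removal analysis described in the rebuttal above, the following hedged sketch (our reconstruction, not the paper's code) scores each successive block of top-N neurons once the preceding blocks have been removed from both rankings; the block size, toy rankings, and overlap-based score are illustrative assumptions only.

```python
# Hedged sketch: compatibility of successive top-N blocks after earlier
# blocks are removed. All rankings and parameters below are illustrative.

def block_overlap(rank_a, rank_b, start, n):
    """Overlap of the n-neuron blocks starting at `start` in two rankings."""
    return len(set(rank_a[start:start + n]) & set(rank_b[start:start + n])) / n

def removal_curve(rank_a, rank_b, n=2, steps=3):
    """Score each successive top-n block, simulating removal of earlier blocks."""
    return [block_overlap(rank_a, rank_b, i * n, n) for i in range(steps)]

# Toy rankings for two methods (most important neuron first).
rank_a = [3, 7, 1, 9, 4, 6]
rank_b = [7, 3, 9, 1, 5, 4]
curve = removal_curve(rank_a, rank_b, n=2, steps=3)
```

A method whose curve stays high as blocks are removed keeps agreeing with the others beyond its very top neurons, which is the property the experiment checks for Probeless.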
Summary: This paper provides a comparative analysis of six neuron interpretation methods utilizing diverse concepts across three distinct pre-trained models and introduces an evaluation framework predicated on voting theory. Importantly, it offers the first comprehensive examination of multiple neuron interpretation methods and strives to mitigate the challenges in this field. The authors note similarities among the most proficient techniques within layers of neurons, with these resemblances remaining regardless of methodological deviations. It is thereby suggested that existing neuron interpretation methodologies might have focused on a common group of top-performing neurons. The paper also addresses the lack of recognised evaluative metrics along with the lack of gold annotations by suggesting an evaluation strategy that consists of two compatibility metrics with a pairwise comparison. This approach could facilitate the creation of a means to evaluate new neuron interpretation methods. Strengths: The major strength of the paper lies in its originality; it is the first work to create a ranking metric using voting theory for comparing the various neuron interpretation methods available. This method should allow for automatic comparison between existing and new neuron interpretation methods. Furthermore, it embarked on a comprehensive comparative analysis via this new evaluation framework. Weaknesses: The primary weakness of the paper lies in its disregard for significant, influential studies previously published in the domain, thus lacking a comprehensive deliberation on an extensive range of neuron activation methods studied by the mechanistic interpretability community. In addition, the paper grievously overlooks the complexities associated with polysemantic, superposition and neuroplastic neuron behaviors encountered in neuron activation.
Work worth considering:

1. Elhage et al. - https://transformer-circuits.pub/2021/framework/index.html
2. Olsson et al. - https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
3. Elhage et al. - https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
4. Henighan et al. - https://transformer-circuits.pub/2023/toy-double-descent/index.html
5. Foote et al. - https://arxiv.org/pdf/2305.19911.pdf
6. Bills et al. - https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
7. Wang et al. - arXiv preprint arXiv:2211.00593
8. Chan et al. - https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing
9. Tenney et al. - https://arxiv.org/pdf/2008.05122.pdf

Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Do I understand it right that if the methods under study are sub-optimal, the voting results can also be sub-optimal? In other words, the voting method does not provide any indication of how well a method may perform; rather, it simply allows for a comparison between them. Considering that we are in the early phases of understanding the black-box nature of ML models, is it appropriate to start discussing benchmarks now? I have my reservations due to our lack of comprehensive hypotheses regarding their inner mechanisms. Hence, it presents a dilemma because it's as if we might not be in the position to start comparing or benchmarking. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors provide some insightful limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: re: missing references** A: Thank you for pointing out the missing references. We are certainly open to including a broader view of the interpretation field in the paper to enhance its scope. We acknowledge that some relevant references on neuron interpretation, such as Foote et al and Bills et al, appeared on arxiv in May 2023, and unfortunately, we couldn't include them in the paper at the time of submission. We have added the related work in the paper that provides a broad overview of the interpretation field and defines the scope of our work (included in the overall rebuttal). **R: …the paper grievously overlooks the complexities associated with polysemantic, superposition and neuroplastic neuron behaviors encountered in neuron activation.** A: Thank you for raising this point. It is indeed worth noting that the neuron interpretation studies in NLP, that come under the scope of this paper, have this limitation that they do not adequately consider polysemantic, superposition and neuroplastic neuron behaviors. Among the current methods, only ElasticNet regularization holds the theoretical expectation of identifying polysemantic neurons, and a few works have provided empirical evidence to support this claim. However, a detailed analysis in this particular line of work is currently lacking. We plan to address this in the limitations section to provide a comprehensive perspective on the research landscape. Thank you again for the invaluable feedback in refining the paper's clarity and purpose. Following is the text added to the paper to bring attention to the limitation of current neuron interpretation methods. ``` A limitation of current neuron interpretation methods is that they do not explicitly target the discovery of neurons of diverse nature such as polysemantic, and superposition. Theoretically, ElasticNet regularization is capable of discovering neurons learning a singular function and multiple functions. 
The other methods such as Probeless are incapable of discovering multifunction neurons. An explicit modeling of neurons of different nature in a neuron interpretation method may result in discovering novel sets of neurons. ``` **R: Do I understand it right that if the methods under study are sub-optimal, the voting results can also be sub-optimal?** A: Given the lack of objective evaluation criteria or gold annotations, we believe that the next best thing is a consensus-based method like the one we have proposed. If all the methods employed are sub-optimal, the overall voting outcome will also be sub-optimal. However, we have carefully selected a range of well-established and methodologically diverse neuron interpretation methods, as well as concepts, for our study, to have a variety of discovered neurons for comparison and to provide the best evaluation currently possible. Given a new interpretation method, our setup provides its evaluation from the perspective of the neurons discovered by several other methods. This is the case with the Gaussian method presented in the paper, whose ranking score is quite low but better than a random selection of neurons, which encourages its inventors to perform further evaluation of the method. The conclusion and limitations section (Section 5) discusses this limitation in detail. **R: …early phases of understanding the black-box nature of ML models, is it appropriate to start discussing benchmarks now…** A: We respect the opinion of the reviewer. Interpretability is a very wide field with various subfields, and every subfield is at a different level of maturity. For example, there are hundreds of papers within the last five years on representation analysis, while the work on mechanistic interpretability is in its early stages. Neuron interpretation (which our work is aiming at) has seen a number of methods in the last five years, but any comparison between these has been cursory because of the lack of a benchmark.
Without any comparison, it is difficult to assess the progress of neuron analysis methods, understand their limitations, and guide further research endeavors. We believe that a benchmark like this is a first step towards a standardized yardstick for moving forward as a subfield. --- Rebuttal Comment 1.1: Title: Official comment by Reviewer QcFR Comment: I thank the authors for their responses. While I agree that Bills et al. did not appear until May 2023, Foote et al. was available on OpenReview at an ICLR workshop and on arXiv in April 2023. The suggested missing references do not encapsulate a broader view of interpretability but rather those closest to the approach you considered in your paper. I would encourage the authors to examine recent developments in neuron interpretation methods that address the issue of superposition. For example: Sharkey et al.: https://www.alignmentforum.org/posts/z6QQJbtpkEAX3Aojj/interim-research-report-taking-features-out-of-superposition and Sharkey, Lee: https://arxiv.org/pdf/2305.03452.pdf To strengthen the limitations section, I believe a more thorough discussion of the challenge of superposition is needed. I am unsatisfied with the answer in the last response regarding "...appropriate time to start discussing benchmarks...". I believe the works considered in this paper lie under the subfield of mechanistic interpretability and not in the broader interpretability field. If the authors believe otherwise, I would be interested in seeing a paragraph in the paper discussing how the methods considered in this paper do not lie under mechanistic interpretability. Without such a discussion, I am less confident in recommending acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your response. ## **re: Foote et al** We are sorry for missing Foote et al. and we have added these references to the paper. In our general response to the reviewers, we provided a draft of the related work with the suggested references.
We will improve it further for the final version of the paper. ## **re: superposition of neurons** Thank you for pointing out exciting work on analyzing superposition neurons. Indeed, most neuron interpretation with respect to a concept lacks an explicit analysis of superposition neurons. We will add a discussion on neurons of different nature and the possible challenges in discovering and evaluating them. ## **re: Mechanistic interpretability** Representation and neuron analysis (the scope of this paper) primarily involves examining the learned representations or embeddings within a neural network, aiming to unveil patterns and relationships within the data that the network has learned to capture [1,2], while mechanistic interpretability takes a different route to model understanding and tries to reverse engineer the model itself [3]. While we make this distinction, we also understand that the area of interpretability is fairly new, it has seen major growth in recent years, and the lines between various subfields are blurry and evolving with time. Moreover, perhaps neuron analysis itself can be divided into separate subfields, one targeting what knowledge is learned (the methods in this paper), and one focusing more on the inner workings and how neurons interact with each other (closer to the mechanistic work). In this paper, our intention was to focus on neuron interpretation w.r.t. knowledge learned specifically, and to stick to the recent papers and survey on neuron interpretation to define our scope. We will expand this into a discussion in the paper to shed light on these subtleties and clarify the scope further.
[1] Intrinsic Probing through Dimension Selection https://aclanthology.org/2020.emnlp-main.15.pdf [2] Neuron-level Interpretation of Deep NLP Models: A Survey https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00519/113852/Neuron-level-Interpretation-of-Deep-NLP-Models-A [3] https://transformer-circuits.pub/2022/mech-interp-essay/index.html
Summary: This paper proposes a standardized evaluation metric and benchmark for comparing various neuron interpretation methods, based on ideas from majority voting. The benchmark is based on the hypothesis that "neurons that are commonly discovered by different interpretation methods are more informative than others", and uses this hypothesis to rank & score methods by their alignment with the majority (in terms of the neurons they assign behaviors to). I discuss some thoughts/limitations of this hypothesis below (and the authors discuss it as well in their limitations section). The authors demonstrate empirically why the current commonly-used evaluation metric (fitting a classifier to the neurons to predict the concept) favors certain methods over others and suffers from the same common issues as probing classifiers in general. They then propose two "compatibility"-based metrics, one set-based and one rank-based, for comparing the top-k neurons deemed informative for explaining a concept by each method, and provide in-depth analysis using the metrics to compare 6 different existing neuron interpretation methods (3 corpus-based, and 3 classifier-based). Overall, I believe this paper is sound, well-executed, thorough, and clear, and will serve a useful role in the neural model interpretability sub-community. Based on the authors' response to my below questions, I may be inclined to raise my score further. Edit: I read the authors' rebuttal and it addressed my concerns. I did not change my score as I originally insinuated above, in part due to finding validity in the other reviewers' critiques. I still think the paper should be accepted. Strengths: Originality & Significance: - The paper serves an important role in the mechanistic interpretability sub-community, and is likely to be of good value to this community. - I am not aware of any other attempts to organize methods in this manner, or propose a standardized benchmark.
Quality & Clarity: - The experimental section is sound. The lack of ground truth makes evaluating the "faithfulness" of an explanation evaluation metric difficult, but the authors have nonetheless provided a convincing set of results. - The paper is well-organized and well-written. The paper uses proper mathematical notation throughout. - The authors have promised to release the code upon publication. Weaknesses: Agreement with hypothesis underlying the benchmark: - I am not sure I am totally convinced by the claim that the most informative neurons will be discovered by the most methods, given that neurons that are easily discoverable can often be those which are perhaps more simplistic in what they encode (e.g., only serve a singular function rather than multiple, concept is encoded only in that singular neuron rather than spread over multiple, etc.) However, the granularity and assumptions underlying neuron interpretation methods is a nascent research question in the interpretability community overall, so I will not fault the authors for their hypothesis (i.e., I think the paper is appropriately scoped and the hypothesis is reasonable in the context of existing/popular neuron interpretation methods). It would be nice for the authors to discuss limitations of the hypothesis (or neuron interpretation methods in general) in the limitations section. More minor: - The paper is lacking a related works section. While the methods studied in the paper are described in detail in Section 2, it would be nice to have some sort of summary paragraph of the field of neuron-level interpretation as a whole after the introduction to situate the work (and potentially mention other methods that are not tested in the paper). - There is some minor conflation, it seems, between the decision to treat each neuron interpretation method as producing a ranked list vs. performing binary set membership classification.
This is particularly confusing with the classifier methods, which are directly trained to do the latter (4.1) but tested on the former via the NeuronVote method. However, the inclusion of the set-based metric, AvgOverlap, mitigates this issue to some extent. I think it would be good for the authors to clarify early in the paper that each method *can* be viewed as producing a ranking, even though this is not always how the methods have been proposed or used in practice, and that the inclusion/comparison of *both* evaluation metrics is designed to provide a view that does not unnecessarily favor one method over the other, by considering both the set membership and the rank views.

Technical Quality: 4 excellent Clarity: 4 excellent

Questions for Authors:
- L12: colon instead of semicolon?
- L20: ungrammatical sentence
- Would be good to introduce the term "probe" somewhere in l.37-43 of the introduction (or earlier). You mention the "Probeless" classifier multiple times in the intro/abstract, but don't define this term anywhere.
- L102: represents --> represent
- L150: extra "i"
- L205-206 describe AvgOverlap as set-based (not taking ranking into account); this contradicts lines 222-223, where I think the word "ranking" is meant as "set". Can you clarify?
- I don't understand the role of Section 3.3 and its associated equation. Why wouldn't you just use Eqns. 7 and 8 with $|\mathcal{M}| = 2$?
- L255: the choice of hyperparameter values here seems like it could have a big impact on the results; did you do any search?
- Table 2 is never referred to in the text

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes - see one suggestion above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: ….It would be nice for the authors to discuss limitations of the hypothesis…** A: We appreciate your comment. Given the variety of methods we considered, we anticipate discovering neurons with diverse properties, including polysemous characteristics. For instance, the ElasticNet regularization (Section 2.2.3) is theoretically capable of identifying both singular and multiple function neurons. We also found in the pairwise analysis (Figure 1) that neuron interpretation methods discover partly different neurons. In other words, the discoverability of a neuron varies depending on the neuron interpretation method used. We acknowledge the value of discussing these findings as it would offer insights into the potential limitations of each neuron interpretation method discussed in the paper and will provide future research directions. We have added the following limitations of interpretation methods in the paper. ``` A limitation of current neuron interpretation methods is that they do not explicitly target the discovery of neurons of diverse nature such as polysemantic, and superposition. Theoretically, ElasticNet regularization is capable of discovering neurons learning a singular function and multiple functions. The other methods such as Probeless are incapable of discovering multifunction neurons. An explicit modeling of neurons of different nature in a neuron interpretation method may result in discovering novel sets of neurons. ``` **R: The paper is lacking a related works section….** A: We agree that having a related work section will be useful to understand the work in the context of the other work on interpretability and neuron interpretation. We have added a related work section that provides a broad view of the field, clearly describes the scope of our paper and mentions other works on neuron interpretation. We have provided the related work section in the overall rebuttal. 
**R: I think it would be good … each method can be viewed as producing a ranking …** A: That’s a great suggestion. We will clarify this in the paper and explicitly mention that the evaluation metrics are designed in a way that won’t result in biased evaluation with respect to set-based and ranked-based methods. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the response. I appreciate the addition of the related works section. I will keep my score.
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments, questions and suggestions. We have incorporated their suggestions, and answered specific concerns below each review. At a high level, we have significantly overhauled the related work (*included below for reviewers z745 and QcFR*), and defined the scope of our work within the field. We have run additional experiments to further confirm the results (*pdf attached*), as well as revamped the limitations section to incorporate the thoughtful suggestions by the reviewers. Following is the related work section that we have added to the paper. ``` Related Work The area of interpreting deep learning models constitutes a broad expanse of research. This section provides a synthesized overview of diverse interpretability subareas within deep learning models applied to Natural Language Processing (NLP), while also outlining the scope of our study. Attribution Methods Feature importance and attribution methods endeavor to identify the contribution of input features to predictions. These methodologies predominantly rely on the gradient of the output concerning the input feature and determine input feature importance by evaluating the magnitude of gradient values (Denil et al., 2014; Sundararajan et al., 2017). Please see Danilevsky et al. (2020) for a comprehensive survey. Counterfactual Intervention revolves around an intricate analysis of the interplay between input features and predictions. This approach involves manipulating inputs and quantifying resulting output alterations. Diverse intervention strategies, including erasing input words, removing multiple input words, and substituting input words with different meanings, have been scrutinized (Li et al., 2016b; Ribeiro et al., 2018). Attention Weights Numerous investigations have been directed towards interpreting components of deep learning models at varying levels of granularity. 
For instance, attention weights have emerged as a viable metric to gauge the interrelation between input instances and model outputs (Martins & Astudillo, 2016; Vig, 2019). Along these lines, Geva et al. (2021) delved into the analysis of feedforward neural network components within the transformer model, revealing their functionality as key-value memories. Additionally, Voita et al. (2019) demonstrated that pruning many attention heads has minimal impact on performance.

Mechanistic Interpretability focuses on the reverse engineering of network weights to comprehend their behavior. Building upon the Distill Circuits thread, Elhage et al. (2021) investigated two-layered transformer models with attention blocks, identifying attention heads contributing to in-context learning. This understanding was further extended to larger transformer-based language models by Olsson et al. (2022). To enhance neuron interpretability, Elhage et al. (2022) introduced a Softmax Linear unit as an activation function replacement. Wang et al. (2022) attempted to bridge mechanistic interpretability findings in small networks to large ones, particularly GPT-2 small. Their approach involved iteratively tracing influential model components from predictions using causal intervention. They showcased the potential of mechanistic interpretability in understanding extensive models, while also highlighting associated challenges.

Representation Analysis involves probing network representations concerning predefined concepts, particularly linguistic ones, to quantify the extent of knowledge captured in these representations (Belinkov et al., 2017; Conneau et al., 2018; Liu et al., 2019a; Tenney et al., 2019). This is often realized through training diagnostic classifiers for specific concepts, wherein classifier accuracy serves as an indicator of concept knowledge within representations. See Belinkov & Glass (2019) for a comprehensive survey.
Neuron Interpretation A more intricate form of representation analysis, termed neuron interpretation, delves into how knowledge is organized within the network (Sajjad et al., 2022b). This approach establishes connections between neurons and predefined concepts, offering insights into where and how specific concept knowledge is assimilated. Work done on neuron analysis can be broadly classified into three groups: Neuron Visualization involves manual identification of patterns across a set of sentences (Li et al., 2016a; Karpathy et al., 2015). More recently, Foote et al. (2023) proposed an automated approach to enhance interpretability of Large Language Models (LLMs) by extracting and visualizing individual neuron behaviors as interpretable graphs. Corpus-based Methods explore the role of a neuron through techniques such as ranking sentences in a corpus (Kádár et al., 2017b), generating synthetic sentences that maximize its activation (Poerner et al., 2018), or computing neuron-level statistics over a corpus (Mu & Andreas, 2020; Suau et al., 2020; Antverg & Belinkov, 2022). Bills et al. (2023) recently proposed an algorithm to generate neuron explanations, simulating activations using a simulator model (an LLM), and scoring the results. Probing Methods identify salient neurons for a concept by training a classifier using neuron activations as features (Radford et al., 2019; Lakretz et al., 2019; Dalvi et al., 2019) or fitting a multivariate Gaussian over all neurons and then extracting individual probes for single neurons (Torroba Hennigen et al., 2020). In this paper, we focus on the neuron interpretation methods that take a concept as input and find neurons with respect to the concept. We considered all methods mentioned in the recent survey on neuron interpretation (Sajjad et al., 2022b) in our study. We propose an evaluation framework to formalize the evaluation and comparison of results across methods. 
Moreover, we propose a novel method, MeanSelect, and present a case study using the evaluation framework. ``` Pdf: /pdf/37f753e398f771d3461fb2cf704d46e39cdf9bc2.pdf
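To make the probing approach mentioned in the related work above concrete, here is a minimal sketch: train a logistic-regression probe on neuron activations and rank neurons by absolute probe weight. The synthetic activation model, training loop, and ranking rule are illustrative assumptions, not the method of any specific surveyed paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": 50 neurons, only neurons 3 and 7 encode the concept.
n, n_neurons = 2000, 50
acts = rng.standard_normal((n, n_neurons))
concept = (acts[:, 3] + acts[:, 7] > 0).astype(float)

# Logistic-regression probe trained by plain gradient descent.
w = np.zeros(n_neurons)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(acts @ w + b)))
    grad_w = acts.T @ (p - concept) / n
    grad_b = np.mean(p - concept)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Rank neurons by absolute probe weight: the concept-carrying neurons surface first.
ranking = np.argsort(-np.abs(w))
```

In this toy setup the two concept-carrying neurons receive the largest probe weights, which is the signal that probing methods read off when attributing a concept to specific neurons.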
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
Accept (poster)
Summary: The paper considers the problem of learning $d$-dimensional halfspaces $h(x) = \mathrm{sign}(w \cdot x + t)$ over Gaussian marginals under random classification noise (where with probability $\eta$, a sample is given the incorrect label). While the homogeneous case (where $t = 0$ and the bias $p = \frac12$) has been resolved by prior works, the paper focuses on the in-homogeneous case where $t$ is unknown. The results give a nearly tight characterization of this regime: * Theorem 1.3 shows that there exists a polynomial-time algorithm for learning in-homogeneous halfspaces that uses $N = \tilde{O}(d / ((1 - \eta)\epsilon) + d / \max(p(1 - 2\eta), \epsilon)^2) \log \frac1\delta$ samples. The algorithm (given and analyzed in Section 2 and Appendix B) works as follows: * The Initialization procedure chooses a weight vector $w_0$ and predicts the bias $\hat{p}$ by iteratively averaging together signed samples until the norm of their average is sufficiently large. * The Optimization procedure locally updates $w_0$ by running Riemannian subgradient descent on a band of inputs on the unit sphere of width $\hat{t}$, for each $\hat{t}$ belonging to an $\epsilon$-net over all thresholds $t$. * The Testing procedure identifies which threshold gives the lowest error on a new sample and outputs such a classifier. * Theorem 1.5 shows the near-optimality of the upper bound by showing that SQ-learning $p$-biased halfspaces under constant RCN requires either an exponential number of queries or queries to an oracle of accuracy $d^{1/2 - c} / p^2$, whose dependence on $p$ matches the upper bounds. Strengths: The paper is well-written, the bounds are mathematically interesting and have clear proofs, and the matching dependence on $1 / p^2$ in the upper and lower bounds is a strong argument that these bounds are nearly optimal. While I did not review all of the details of the proofs in the appendices, I didn't find any technical red flags. Theorems and lemmas are well-stated and easy to follow. 
Weaknesses: A minor note: The pseudocode for Algorithm 1 could be written in a way that makes it more easily digestible for readers. Some helpful details (like the sample complexity for Initialization and Testing) could be included, while terms like $M$ could probably be given asymptotically for simplicity. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The upper and lower bounds have a substantial gap in $d$. Do the authors believe that this gap could be closed by future work? And does a similar gap exist in the literature for learning homogeneous halfspaces? Likewise, do you expect it to be possible to prove a lower bound with a more sensitive dependence on the classification noise $\eta$? It would be interesting to see if the problem gets much harder as the noise level approaches $\frac12$. I don't exactly see why the Optimization procedure is subgradient descent on a leaky ReLU rather than just a scaled ReLU; if it were leaky, shouldn't there be another additive term in the gradient computation multiplied by $\eta$, rather than just one term in the band multiplied by $(1 - \eta)$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: All limitations are well-documented in the assumptions underlying the theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort and their positive assessment. Below we reply to the questions in detail. 1. (**Question 1**) We would like to remark that the optimal lower bound should have a linear dependence on $d$, since even for the simpler realizable case the sample complexity of PAC learning a halfspace has a linear dependence on $d$ (actually, the sample complexity is $\tilde{O}(d/\epsilon))$. However, note that a lower bound of $\Omega(d/\epsilon + \sqrt{d}/\mathrm{max}(\epsilon,p)^2)$ follows from our results, where the first linear term $d/\epsilon$ comes from the information-theoretic bound. Therefore it is the second term that should be potentially strengthened from $\sqrt{d}$ to $d$. Improving our lower bound requires new techniques and it is an interesting future direction. For learning homogeneous halfspaces (with RCN under Gaussian) there is no statistical-computational tradeoff. There is an efficient algorithm using the information-theoretically optimal sample size (within $\log$ factors). This also follows from our upper bound when $p=1/2$. The conceptually interesting contribution of our work is that an information-computation gap appears for non-homogeneous halfspaces. 2. (**Question 2**) Regarding question 2 about a lower bound that is more sensitive with $\eta$: it is known that (even for computationally inefficient algorithms) there exists a sample complexity lower bound of order $\Omega(d/((1-2\eta)\epsilon))$ for RCN (see line 36). 3. (**Question 3**) Regarding question 3, we thank the reviewer for pointing this out. There is a typo in line 5 in the optimization subroutine, which we have fixed in the final version. The correct equation should contain $(1-2\eta)$ instead of $1-\eta$. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and offering to edit the paper for clarity. I continue to believe that the paper is a useful contribution, and my score continues to reflect that.
Summary: This paper studies the problem of learning Gaussian half-spaces with random classification noise. Given labeled data $(x,y)\sim D$ where $x$ follows the standard Gaussian distribution, a halfspace function $f$ such that $y=f(x)$ with probability $1-\eta$ and $y=-f(x)$ with probability $\eta$, and a precision parameter $\epsilon$, the goal is to construct a halfspace function $h$ such that for $(x,y)\sim D$, $h(x)=y$ with probability at least $1-\eta-\epsilon$. This paper provides lower and upper bounds on the sample/time complexity in the general case where $f$ can be biased: $p=\min(\Pr[f(x)=1], \Pr[f(x)=-1])\neq 1/2$. An efficient algorithm is proposed with a sample complexity of $\tilde{O}(d/\epsilon + d/\max^2(p,\epsilon))$. A lower bound on the sample complexity of $\Omega(\sqrt{d}/\max^2(p,\epsilon))$ is proved for efficient SQ algorithms. Strengths: Generalizing the problem of learning Gaussian halfspaces with RCN to not necessarily homogeneous halfspaces is a good contribution. Moreover, the bad term of the sample complexity, $\tilde{O}(d/\max^2(p,\epsilon))$, could not be improved in terms of the dependency on $\max(p,\epsilon)$ thanks to the lower bound proved. The algorithm is not a trivial generalization of the one for homogeneous halfspaces with RCN. Weaknesses: One weakness is that the lower bound does not match the upper bound in terms of the dependency on $d$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Do you think your results could be generalized to (some) distributions $D$ with non-Gaussian marginal $D_x$? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
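As a side note on the quantities in the review above: for unit-norm $w$ and standard Gaussian $x$, the projection $w \cdot x$ is itself a standard Gaussian, so the bias $p$ of a halfspace with threshold $t$ is simply $\Phi(-|t|)$, where $\Phi$ is the standard Gaussian CDF. A quick numerical check of this relation (illustrative, not taken from the paper):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bias(t):
    """p = min(Pr[f(x)=1], Pr[f(x)=-1]) for f(x) = sign(w.x + t), ||w|| = 1."""
    return min(Phi(t), 1.0 - Phi(t))

# Monte Carlo confirmation in d dimensions: w.x ~ N(0, 1), so
# Pr[sign(w.x + t) = 1] = Phi(t) regardless of d.
rng = np.random.default_rng(1)
d, t, n = 8, 1.2, 200_000
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
X = rng.standard_normal((n, d))
frac_pos = np.mean(X @ w + t > 0)
```

This is why the sample-complexity bounds can be stated interchangeably in terms of the bias $p$ or the threshold $t$: the two are in one-to-one correspondence under the Gaussian marginal.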
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. We address specific questions/comments by the reviewer below. 1. (**Weakness 1**) Gap between upper and lower bound (as a function of $d$): We refer the reviewer to bullet 4 in the response to reviewer m9mP for a detailed discussion. Our work showed an SQ lower bound of $\Omega(d^{1/2}/\mathrm{max}(p,\epsilon)^2)$ for the testing version of the halfspace learning problem; we also provided a matching upper bound (see Appendix C.6). We conjecture that our learning algorithm attains the optimal sample complexity as a function of $d$ as well (within the class of polynomial time algorithms). It remains an interesting question to develop a lower bound with the correct dependence on $d$ for the learning problem. 2. (**Question 1**) Regarding the reviewer’s question about an extension to non-Gaussian marginals: we believe that our algorithmic approach can be generalized to some log-concave distributions; this is left as an interesting direction for future work. We note that such an extension would require non-trivial extensions to our current analysis.
Summary: The authors provide an algorithm to learn $d$-dimensional halfspaces under random classification noise up to error $\epsilon$. The algorithm has time complexity $O(dN/\epsilon^2)$, where $N$ is the number of samples, and sample complexity $\tilde{O}(d/\epsilon + d/\epsilon^2)$. They also prove a lower bound, in the statistical query model, that requires $\Omega(\sqrt{d}/\epsilon^2)$ samples, unless the model makes high-accuracy queries. Strengths: - The problem of learning half spaces under noise is an important and fundamental problem in machine learning. - The results are tight in the statistical query model up to a factor $\sqrt{d}$. - Clear and lucid technical overview. Weaknesses: No significant weaknesses. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some of the writing in the proofs can be simplified; constants like 0.000098 can be jarring. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in providing feedback. We will make sure to polish the final version of the paper and simplify the statements where possible, as suggested.
Summary: This paper studies the PAC learning complexity of halfspaces (or linear threshold functions) when the labels are flipped randomly with some probability $\eta$. For realizable hypothesis classes, it is known that $O(d/\epsilon)$ samples are enough to learn a halfspace with $\epsilon$ 0-1 error. Under random classification noise, the problem becomes much more challenging. This paper makes two contributions to this problem: - First, it presents an efficient algorithm with comparable scaling to $\tilde O_{\eta}(d/\epsilon + d / (\max(p, \epsilon))^2 )$. - Second, it proves statistical query lower bounds that match the above upper bound under certain regimes of $\eta$. The upper bound involves three subroutines, including a warm start initialization that returns a vector close enough to the target. Second, it issues queries near the threshold, akin to learning a leaky ReLU loss. Lastly, it uses a simple hypothesis testing procedure, which draws a fresh sample and selects a hypothesis with the lowest test error. The SQ lower bound establishes the existence of a large set of distributions whose pairwise correlations are small. The construction builds on prior work of Diakonikolas, Kane and Stewart (FOCS'17). Strengths: S1) The paper makes two contributions to learning halfspaces with random classification noise--this would be a nice contribution to this literature. The results seem solid and technically challenging. S2) The authors make an effort to explain their results at a high level, which is helpful. Weaknesses: W1) My main concern with this paper is the presentation of their technical results; in particular, there is a large number of notations, many of which are not clearly explained or are difficult to follow. This would limit the potential audience of their work within the broader NeurIPS community. W2) There are some inconsistencies between the statements, in particular, line 10 and line 46, which should be fixed or better explained. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Is the assumption of $p$-biasedness necessary for showing the sample complexity guarantee? - The comparison with a concurrent work of Diakonikolas et al. (2023) needs to be stated more clearly. - How critical are the results inherent to Gaussian inputs? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have not discussed the limitations of their work in the main text. The paper is of a highly technical nature, so potential negative societal impacts of their work would be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in providing feedback. Below, we provide a response to the comments and questions raised by the reviewer. 1. (**Weakness 1**) Regarding the reviewer’s comment: ‘main concern of this paper is the presentation of their technical result…’: We would like to argue that the definitions and relevant notation are already given clearly in the text and the algorithm. For example, the definitions of $p$ (bias) and $t$ (threshold) are given in the introduction. In the main algorithm, $M$ is simply a parameter that denotes the total number of grid points we construct in $t$, and $\gamma_m$ denotes the width of the band on which we conditioned the gradients, as displayed in line 4 of the optimization subroutine. We believe these notations and parameters are necessary for a clear and succinct description of our algorithm. 2. (**Weakness 2**) Regarding the reviewer’s comment: ‘there are some inconsistencies between the statements, in particular, line 10 and line 46…’, we are afraid that the reviewer might have misunderstood our statements. In line 10, we presented our lower bound for learning general halfspaces under Gaussian with RCN, which is $\Omega(d^{1/2}/\mathrm{max}(p,\epsilon)^2)$; whereas in line 46 we provided the sample complexity of our algorithm, which is $\tilde{O}(d/\epsilon + d/\mathrm{max}(p,\epsilon)^2)$, i.e., an upper bound for this problem. There is a gap in the dependence of $d$ between our upper bound and lower bound, and we believe it is an interesting question to obtain a matching lower bound as a function of $d$ as well. 3. (**Question 1**) Regarding the reviewer’s first question (assumption on bias $p$): we emphasize that we do not make any assumptions on the bias $p$. The bias $p$ is an unknown parameter of the target halfspace (a number between $0$ and $1$). Our algorithm works for all possible values of $p$ and outputs a halfspace achieving error $\eta + \epsilon$. 
It turns out that the sample complexity of our algorithm depends on $p$. Importantly, our algorithm is adaptive in the sense that it does not need to know $p$ in advance. Finally, we reiterate that the sample complexity of our algorithm is near-optimal as a function of $p$ and $\epsilon$ within the class of efficient SQ algorithms. 4. (**Question 2**) Regarding the reviewer’s second question: A detailed comparison to the prior work of Diakonikolas et al. (2023) appears in Appendix A. The key points are that neither their algorithm nor their lower bound have any implications to our Gaussian setting. We will move this comparison to the main body in the revised version of the paper. 5. (**Question 3**) Regarding the reviewer’s third question (on Gaussian distribution): We start by pointing out that the Gaussian assumption makes our SQ lower bound stronger, as it holds even for the basic case that the feature vectors are Gaussian distributed. The analysis of our algorithm is currently specific to the Gaussian distribution, as this was the goal of our work (understanding sample-time tradeoffs for learning general halfspaces with RCN in the Gaussian setting). That said, we believe that our algorithmic approach can be modified to give near-optimal algorithms for more general marginal distributions (such as log-concave distributions). We leave this as an interesting extension for future work.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort in reading and reviewing our paper. In particular, we are encouraged by the positive feedback and that our paper is appreciated by the reviewers in the following aspects: (i) **clear presentation and lucidity** (m9mp, BT2g, QgYm, 2pgi, Imug) (ii) **technical solidness** (m9mp, QgYm), (iii) **significance of solving a fundamental problem in machine learning** (m9mp, BT2g, 2pgi, qpWY).
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper gives new positive and negative results for the problem of learning general (nonhomogenous) Gaussian halfspaces under the random classification noise (RCN) model. The motivating question is whether there exists a polynomial time algorithm nearly achieving the (known) minimal sample complexity required for solving the problem. Positive result -- The authors give an efficient algorithm that solves this problem; this algorithm works with a sample complexity that is only off of the known optimal sample complexity by logarithmic factors. Specifically, the runtime of the algorithm is roughly $O(dN/\varepsilon^2)$, where $N$ (sample complexity) is roughly $O(d/\varepsilon^2 \cdot \log(1/\delta))$ and $\varepsilon$ is the target suboptimality. (note that I have omitted the dependences on the noise rate $\eta$ and the bias $p$ of the target function -- both are probably optimal) To obtain the positive result, there are three main algorithmic contributions. The first is an initialization step, which returns a vector whose correlation with the true weight vector is pretty high. The second is an optimization procedure that takes as input a guess for the threshold of the target halfspace and returns a halfspace of low error with that fixed threshold. The third is a procedure that tests several hypotheses and selects the one with the lowest error. Negative result -- The authors give a Statistical Query (SQ) model lower bound on the sample complexity of any algorithm solving the above problem -- either the statistical queries have to be very accurate, or there need to be an exponential number of queries made in order to recover a linear separator with nontrivial suboptimality. The authors also use a recent result almost equating SQ algorithms and low-degree tests to give a lower bound in the low-degree testing model. The exact statement is a bit technical, so I'll omit it here. It can easily be found in Theorem 1.5. 
EDIT 2023-09-01 -- As mentioned in my response, my review stands. Thank you for answering the questions! Strengths: The paper almost (up to pesky logarithmic factors) resolves the motivating question of whether the optimal sample complexity can be achieved by an efficient algorithm for this problem. The algorithmic primitives are natural and easy to understand. The problem is pretty fundamental. Finally, the paper is clearly presented. Weaknesses: There are minor typographic issues that can be cleaned up (e.g. spelling mistakes, long math strings, etc.) -- of course this is very minor. Although the optimality of the sample complexity is discussed, there appears to not be as much emphasis on the optimality of the runtime (in particular, the worst case dependence on $\varepsilon$ in the runtime is $\varepsilon^{-4}$ -- is this unavoidable?) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What do you think are interesting future directions? e.g. do you think it is possible to obtain an even faster algorithm for this problem? If so, do you think extensions of the ideas you presented are likely to work, or do you think a totally different algorithmic approach is necessary? I probably missed this in the paper, but what is known about this problem when the input distribution is uniform over the vertices of the hypercube (i.e., $\mathsf{Unif}(\{\pm 1\}^d)$)? Do the techniques you give in this work readily transfer to this setting? It feels to me as though the Boolean setting is a more basic variant of the problem you study, but I would believe that the two are morally very similar. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort and the positive assessment. We address specific comments and questions by the reviewer below. 1. (**Weakness 2**) Regarding the reviewer’s comment ‘...not be as much emphasis on the optimality of the runtime…’: The focus of our work was to develop the first polynomial-time algorithm with near-optimal sample complexity (within the class of computationally efficient algorithms). We believe that a near-linear time algorithm (i.e., an algorithm with runtime $\tilde{O}(Nd)$) exists, and it is an interesting direction for future work. The bottleneck to achieve this with our current algorithm is estimating the unknown threshold to the desired accuracy. 2. (**Question 1**) Future directions: A number of interesting future directions remain. An immediate open question is to strengthen the SQ lower bound to match our upper bound as a function of $d$ as well. We believe that our upper bound is tight up to log factors. Another direction concerns generalizing our algorithm to succeed for more general marginal distributions (e.g., isotropic log-concave) with the right sample complexity and understanding information-computation tradeoffs under such more general distributions. 3. (**Question 2**) Uniform Distribution on Hypercube: We start by pointing out that we do not think our methods can be modified to work for the uniform distribution over the hypercube. The main reason is that the anti-concentration property, which played a vital role in our analysis, no longer holds under the uniform distribution on the hypercube. Ergo, we cannot compute the probability mass of a $\gamma_m$-width band (as we did in the Gaussian case), hence we are unable to utilize the gradient conditioned on the $\gamma_m$-width bands. To the best of our knowledge, the only known algorithm for learning halfspaces with RCN over the hypercube involves using a distribution-free (RCN tolerant) PAC learner. 
As a result, the sample and computational complexity is polynomial, but the degree of the polynomial is potentially not optimal. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions! Seems like the hypercube case could be a nice problem... In any case, my assessment stands :)
Summary: This paper studies the problem of learning non-homogeneous halfspaces in the presence of Random Classification Noise, where the marginal distribution is standard Gaussian in $d$ dimensions. On the upper bound side, this work provides an efficient algorithm that achieves learning an $\epsilon$-optimal halfspace with sample complexity $\tilde{O} (\frac{d}{\epsilon} + \frac{d}{\max (p,\epsilon)^2})$ where $p$ is the bias of the target halfspace. The algorithmic idea involves Initialization, Optimization, and final Testing subroutines. The algorithm runs for $O(1/\epsilon^2)$ guesses of the threshold $t$ and selects the best output halfspace. On the lower bound side, this work establishes a nearly matching lower bound that no efficient SQ algorithm can learn a Gaussian halfspace with $\eta = 1/3$ with fewer than $\Omega( \frac{\sqrt d}{\max (p,\epsilon)^2} )$ samples. Strengths: 1. This work studies learning halfspaces with RCN, which is a fundamental problem in machine learning theory. Although it has been known that halfspaces are efficiently PAC learnable with RCN, this work refines the sample complexity of the previous works in the Gaussian marginal distribution setting. Moreover, it provides formal evidence that the quadratic dependence on $\frac{1}{\max(p,\epsilon)}$ is the best one can hope for from an efficient algorithm. 2. This paper is technically strong. 3. Along with the formal technical statements and proofs, this work exhibits some valuable informal explanations and implications of the results, which makes it easier for the readers to understand and appreciate the significance of the results in this paper. 4. The main algorithm consists of several subroutines, each coming with a nice explanation and theoretical performance guarantees. Weaknesses: 1. It could be nice to have a more comprehensive review of the related works. 
E.g., how is the sample complexity in this work related to the other papers on learning halfspaces with RCN with comparable settings? 2. It could be good to include in the main algorithm the case where the threshold $t$ is significant. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the main algorithm be modified a bit so that it could directly tell if the true threshold $t$ is significant enough and a constant hypothesis suffices? 2. In Algorithm 1, the Optimization procedure is invoked for $M = O(1/\epsilon^2)$ times, and $N_2 = \tilde{O} (\frac{d}{\epsilon} )$ samples are drawn in each iteration. Could you please explain a bit more about why the total sample complexity (the dependence on $\epsilon$) is less than $ \tilde{O} (\frac{1}{\epsilon^3} )$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no obvious limitations to be addressed in my opinion. One question that might be interesting to study is if it is possible to attain matching bounds in $d$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort and the positive feedback. Below we respond to the reviewer’s questions in detail. 1. (**Weakness 1**) Regarding the reviewer’s comment on the related works, we have added more detailed comparisons with prior works in the related work section and Appendix A in the final version. Importantly, we remark that there is no other work that provides an efficient algorithm with the optimal sample complexity under Gaussian marginals for **general** halfspaces in the presence of RCN. Most prior works (e.g., [1], [2]) mainly aim to find computationally efficient algorithms for **homogeneous** halfspaces with RCN. In Appendix A, we also compared our work to a very recent paper [3], where the authors studied the setting of RCN corrupted halfspaces with margin assumptions. 2. (**Weakness 2 & Question 1**) Regarding the reviewer’s comment ‘could be good to include … the case where the threshold t is significant’ and the question about determining the case that a constant hypothesis suffices: In the realizable setting (or when $\eta=O(\epsilon)$), it is possible to distinguish between these cases by drawing ~$1/\epsilon$ samples and checking if there are different labels. In the case where $\eta=\Omega(1)$, a constant fraction of the samples will have positive and negative labels, therefore this naive approach will not work. In our case for determining whether a constant hypothesis suffices, we point out that this can be done by looking at the value of $p$, instead of $t$. Indeed, as mentioned in lines 204-205, when $p=\Theta(\epsilon)$, a constant hypothesis would suffice. 3. 
(**Question 2**) Regarding the reviewer’s question about why we only need $\tilde{O}(d/\epsilon)$ samples for the optimization subroutine rather than $\tilde{O}(d/\epsilon^3)$ samples, we would like to clarify that this is because we only draw one batch of samples before we start the optimization subroutine (line 4 in the main algorithm), rather than drawing fresh samples at each iteration. This is because our analysis leverages the uniform convergence of the empirical gradient estimation to the population gradient (under Gaussian), as manifested in Lemma B.6. The total sample complexity consists of the samples needed for the initialization subroutine, which is $\tilde{O}(d/\mathrm{max}(p,\epsilon)^2)$, plus the samples needed for the optimization subroutine, which in total is of the order $\tilde{O}(d/\epsilon + d/\mathrm{max}(p,\epsilon)^2)$. 4. (**Limitation**) Regarding the dependence on $d$: Our SQ lower bound applies to a natural testing (decision) version of our learning (search) problem (see Definition 3.1), which reduces to learning but is not necessarily equivalent to it. We show that any efficient SQ algorithm for this testing problem requires at least $\Omega(\sqrt{d}/\mathrm{max}(p,\epsilon)^2)$ sample complexity. Interestingly, we also show a matching upper bound for the testing problem (see Appendix C.6). It remains an interesting open question to develop a stronger lower bound technique that gives the correct dependence on $d$ for the learning problem. We conjecture that the sample complexity of our algorithm is optimal as a function of $d$ as well (within the class of all polynomial-time algorithms). References: [1] C. Zhang and Y. Li. Improved algorithms for efficient active learning halfspaces with massart and tsybakov noise. In Proceedings of The 34th Conference on Learning Theory, COLT, 2021. [2] C. Zhang, J. Shen, and P. Awasthi. Efficient active learning of sparse halfspaces with arbitrary bounded noise. 
In Advances in Neural Information Processing Systems, NeurIPS, 2020. [3] I. Diakonikolas, J. Diakonikolas, D. M. Kane, P. Wang, and N. Zarifis. Information-computation tradeoffs for learning margin halfspaces with random classification noise. CoRR, abs/2306.16352, 2023. Conference version in COLT’23. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I am still a bit confused about Question 1, as $p$ is not required to be known ahead of time. My other questions are addressed. I will keep my rating.
null
null
null
null
Anytime Model Selection in Linear Bandits
Accept (poster)
Summary: This paper introduces AlExp, a novel algorithm to address the problem of online model selection in the context of bandit optimization. The algorithm interacts with the environment by randomly choosing a learner (linear bandit regret minimizer) each round, then the chosen learner tries to propose the optimal action. The exploration-exploitation trade-off is dealt with both by randomizing the choice of the learner and by the learner's policy itself. The authors provide theoretical guarantees and empirical validation and compare this approach with the existing literature. Strengths: + The paper presents an extensive literature review and a detailed comparison between the proposed approach and the existing ones. + The algorithm's routine is easy to understand, and its logic is well-commented. + The algorithm effectively improves in some dependencies (e.g., the number of learners) w.r.t. the existing approaches, both theoretically and empirically. + The analysis presents technical novelty. Weaknesses: - At some points, the presentation is not easy to follow, and it would be nice to have a more intuitive grasp of the theoretical quantities considered. - It would be nice to have more discussion on the computational aspects of the proposed approach: numerically computing some of the quantities seems to be very expensive and prohibitive when the number of learners scales. This contrasts with the main strength of the approach, i.e., the better regret dependency on the number of learners. - The algorithm requires the knowledge of theoretical quantities (e.g., expected values). In practice, this can be bypassed by a Monte Carlo sampling procedure.
However, when performing finite sampling, there's the need to explain how quickly the estimator converges to the theoretical quantity and explain the computational consequences (see previous point) or eventual corrections to the algorithm to deal with the estimation error (e.g., adding an upper confidence bound on the quantities) if due to computational reasons only a small number of samples can be generated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Can you please provide an examination of the computational limitations of this approach? E.g., providing the solving complexity of the optimizations involved. + Due to the strong dependence (on both algorithm and analysis) on quantity C, can you please provide a more intuitive grasp on the magnitude of this quantity w.r.t. other problem-dependent quantities? Is there an available lower/upper bound on it? I feel that the quality of your theoretical results strongly depends on this quantity, and a characterization (at least qualitative) of its magnitude should be discussed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - I feel that the main limitation of this work relies on its application in practice. However, the authors performed a sufficient experimental campaign, validating their approach. Unfortunately, the code for the experimental campaign has not been released, so reproducibility is another limitation of this work (even if they provide details to reproduce the experiments). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Our response to your questions and concerns follows. **"The algorithm has a strong dependency on quantity $C$, can you provide intuition on it?"** Could you please clarify which $C$ you are referring to? In the text we have $C_{\mathrm{min}}$ and $C(M, d, \delta)$. We respond to both below. We have also updated the paper to give intuition on how $C_{\mathrm{min}}$ affects the algorithm. - **On $C_{\mathrm{min}}$:** If the action domain is *explorable* then $C_{\mathrm{min}}$ is large, and in this case our bounds improve. In many scenarios, this quantity can be treated as an absolute constant, e.g. 1, since we normalize the actions and the feature maps. In the revised text, *we have added a new appendix section* which presents *lower bounds on this quantity* under two scenarios. We prove that if the action domain is a convex body, or if the feature maps are orthonormal, this quantity is an absolute constant. *We have also added a corollary* to the main theorem, which bounds the regret for orthogonal feature maps, **independent** of $C_{\mathrm{min}}$, and with the same rate as the main theorem. - **On $C(M, d, \delta)$:** This quantity has no intrinsic meaning and is defined only for a compact presentation of the theorem. The performance of the algorithm is not tied to it; rather, it depends on the values of the parameters $M, n, d, \delta$ and $C_{\mathrm{min}}$. Regarding its value, as defined in the paper, $$C(M, \delta, d) = C_1\sigma\sqrt{1 + \log(M/\delta) + (\log\log d)_+ +\sqrt{\log (M/\delta)+(\log\log d)}}$$ where $C_1$ is an absolute constant larger than $160\sqrt{10}$. **"The Algorithm has to calculate theoretical quantities, e.g. expected values, which are costly."** Many BO agents (e.g. UCB or Greedy) take actions deterministically, therefore the probability distribution $p_{t,j}$ is a single Dirac delta at some point $\boldsymbol x_{t,j}$, which for instance maximizes the UCB.
Taking the expectation w.r.t. $p_{t,j}$ then boils down to simply evaluating the function $\hat{\boldsymbol\theta}^\top_t\boldsymbol \phi(\cdot)$ at the point $\boldsymbol x_{t,j}$, and will not need sampling techniques. As you mentioned, upon using complex randomized agents, the expectations need to be approximated. **On computational complexity of ALEXP.** The computational complexity of the algorithm scales linearly with $M$, since all agents/models have to be updated. However, updating the models can be done fully in parallel (using tools such as Ray), in which case increasing the number of models will not affect the runtime of the algorithm. Even without parallelization, the algorithm is light and one complete run of ALEXP with $n=100$, $d=2$, and $M=45$ takes $2.9\pm0.2$ minutes on a single CPU core. Figure 1 in the rebuttal pdf supplement shows how the run-time of our algorithm and other baselines scale with $M$. In general, we suspect that without further assumptions (e.g. a relation between the $\boldsymbol \phi_j$) linear computational dependency on $M$ may not be avoidable. While algorithms such as CORRAL update only one of the models at every step $t$, their statistical complexity scales polynomially with $M$ (since they require more steps to converge), which results in an overall polynomial computational complexity with $M$. In the revised paper, we have added Figure 1, and a discussion on computational complexity to Appendix F on “Experiment Details”. This being said, we would like to highlight that the contributions of this paper are primarily theoretical. We provide experiments mainly to give additional insights. **"Reproducibility is limited due to lack of code."** We will publicly release the code upon acceptance. In general, the implementations are fairly standard and straightforward (cf. pages 31-34 for details). --- Rebuttal Comment 1.1: Comment: Thank you for your response.
The authors addressed my concerns, except for the ones on computational complexity. I will keep my score for now. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. Is it possible that you have perhaps missed the paragraph titled "On Computational complexity of ALEXP" in our rebuttal? In your review you had asked for "an examination of the computational limitations of this approach". As a response, in our rebuttal we provided runtime curves of our algorithm and the oracle. Moreover, we conjectured that model selection algorithms will require $O(M)$ many operations to converge. We believe this cannot be lifted, unless certain correlation/structure is assumed between models. This being said, we would like to highlight that our work is primarily of theoretical nature, and is the first to show that sample-efficient model selection is possible on generic action domains in the linear setting.
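To illustrate the rebuttal's point about computing expected values: for a deterministic agent the distribution $p_{t,j}$ is a Dirac delta, so the expectation is a single function evaluation, while for a randomized agent it can be approximated by Monte Carlo. A minimal sketch, assuming a toy quadratic feature map and linear reward model (`phi`, `expected_reward`, and the parameter values are illustrative, not from the paper):

```python
import random

def phi(x):
    # Hypothetical 2-D feature map, for illustration only.
    return [x, x * x]

def expected_reward(theta, policy_samples):
    """Monte Carlo estimate of E_{x ~ p}[theta^T phi(x)].

    For a deterministic (e.g. UCB) agent, policy_samples is a single
    point (a Dirac delta), and the estimate is exact: one evaluation.
    """
    vals = [sum(t * f for t, f in zip(theta, phi(x))) for x in policy_samples]
    return sum(vals) / len(vals)

theta = [1.0, 0.5]

# Deterministic agent: Dirac at x = 2.0 -> exact value 1*2 + 0.5*4 = 4.0
assert expected_reward(theta, [2.0]) == 4.0

# Randomized agent: average over draws from its action distribution;
# the estimate approaches E[x] + 0.5*E[x^2] = 0.5 + 1/6 for Uniform(0, 1).
random.seed(0)
draws = [random.uniform(0.0, 1.0) for _ in range(10_000)]
est = expected_reward(theta, draws)
```

The Dirac case is why the rebuttal argues no sampling is needed for UCB or greedy agents: the sum over the "distribution" collapses to one term.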
Summary: Based on a time-uniform analysis of the Lasso, the paper develops an anytime exponential weighting algorithm built on Lasso reward estimates, with anytime regret guarantees for model selection in linear bandits. The result neither requires knowledge of the horizon $n$, nor relies on an initial purely exploratory stage. Strengths: the anytime exponential weighting algorithm based on Lasso reward estimates is horizon-independent and explores adaptively without requiring an initial exploration stage. Weaknesses: 1. The dimensionality of the proposed model can be ultra-high, and the algorithm's runtime depends on the efficiency of the sparse regression solver. 2. The results of the algorithm still belong to the class of Multiplicative Weights Update algorithms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the relationship between the number of models and their final performance? 2. During the entire learning process, consider two cases: the number of models is fixed, and the number of models changes. Can you consider these situations and discuss them? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Our Contributions.** We would like to highlight the contributions of this paper as they seem to have escaped the reviewer's attention. We address the open problem of Agarwal et al. 2017, and are the *first* to show the feasibility of the conjectured $\log M$ rate for model selection on infinite action domains, when the reward is linearly parameterizable. This is a theoretically challenging problem, and our work is the first to show that such rates are attainable. Our algorithm ALEXP demonstrates how one can perform adaptive model selection while simultaneously optimizing for an objective, at a $\log M$ rate. Our experiments show that it performs on par with an oracle solver, which has knowledge of the true model. Crucially, this work presents a *novel time-uniform analysis of the Lasso* and establishes an important connection between online learning and high-dimensional statistics, as pointed out by reviewer Vj2P. **Questions.** 1. What is the relationship between the number of models and their performance? There is no relationship. The only assumption is that there exists one model that is able to solve the problem. 2. During the entire learning process, for two cases, i.e., the number of models is fixed, and the number of models is changed. Can you consider these situations and discuss them? Following prior work on model selection, we assume that the number of models is fixed during the learning process. A problem setting where new models are introduced during learning could be an interesting future avenue of research. **Weaknesses.** 1. "The dimensionality of the proposed algorithm model will be ultra-high." Could you please clarify what you mean by the dimensionality of the algorithm? ALEXP can be computationally expensive, as it simultaneously updates all $M$ agents. However, this step can be fully parallelized with tools such as Ray. This way, the runtime of the algorithm will remain constant as $M$ grows. 2.
“The algorithm belongs to the class of multiplicative weights algorithms.” Indeed exponential weights is a variant of the multiplicative weights algorithm. Could you please clarify why you find this to be a weakness? Having addressed the points you raised in your review, we kindly ask you to reconsider your assessment of our paper. We would appreciate it if you further express your questions and concerns, particularly given that you found the contributions of this work to be limited. We would be happy to answer them. --- Rebuttal Comment 1.1: Comment: Since the authors are too polite to put this plainly, I will do so: this reviewer put down a confidence level of 5, yet their review is nonsensical. I would urge the AC to disregard this review completely.
Summary: The paper tackles model selection for linear bandits with $M$ models. In particular, rewards are estimated from the $M$ models using Lasso and then EXP4 is run on top of these estimated rewards to update individual model probabilities. The use of Lasso over ridge regression reduces variance, leading to a $\log M$ dependence rather than the usual $\mathrm{poly}\, M$. The paper extends the usual martingale mixture analysis from ridge to Lasso; this is something I thought would be quite tricky. I look forward to reading it in more detail in the future. Strengths: Model selection is a very important problem. Problem is explained and motivated very well. Paper is very well written and polished to the standard of a camera-ready. Weaknesses: I'm not an expert on model selection in linear bandits or full information learning. As such, I cannot identify any weaknesses in this work. It took me a couple of minutes to see why (1) is indeed (group) Lasso---I was expecting to see a 1-norm. I see that you have a sentence or two explaining this after, but somehow this didn't do the trick for me. Maybe some rewording or an extra comment here could be useful, relating it to the more usual notion of Lasso. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations clearly discussed in text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Our response follows. **Add more insight on the group Lasso loss.** Thank you for pointing this out. In the revised version, we have included an explanation of the loss, focusing on how the $(2,1)$-norm induces sparsity at the group level. **Extending online Ridge analysis to Lasso is a tricky problem.** This is indeed a precise observation! We present time-uniform confidence sets, which shrink at *the Lasso fast rate*. This is the key technical contribution which allows us to solve the model selection problem. --- Rebuttal Comment 1.1: Comment: It seemed to me that this paper would be a done deal in terms of acceptance, and hence I have not bothered to write a lengthy review. This appears to have been a mistake. I will try to find the time to provide a more in-depth review shortly.
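To unpack the $(2,1)$-norm point from the rebuttal above: the group Lasso penalty sums the Euclidean norms of per-group coefficient blocks, which zeroes out entire blocks (here, entire models) rather than individual coordinates, as a plain 1-norm would. A hedged pure-Python sketch of the penalty itself (illustrative, not the paper's estimator):

```python
import math

def group_lasso_penalty(theta_groups):
    """(2,1)-norm: the sum over groups of each group's Euclidean norm.

    The penalty is non-smooth exactly where a whole group is zero, so
    penalized optimization tends to discard entire coefficient blocks
    (whole models) at once -- sparsity at the group level.
    """
    return sum(math.sqrt(sum(c * c for c in g)) for g in theta_groups)

# Two models' coefficient blocks; the second block is entirely zero.
theta = [[3.0, 4.0], [0.0, 0.0]]
assert group_lasso_penalty(theta) == 5.0  # ||(3,4)||_2 + ||(0,0)||_2

# Contrast with the plain 1-norm, which treats coordinates independently
# and therefore induces coordinate-wise, not group-wise, sparsity.
l1 = sum(abs(c) for g in theta for c in g)
assert l1 == 7.0
```

With a single coordinate per group, the $(2,1)$-norm reduces to the ordinary 1-norm, which is the "more usual notion of Lasso" the reviewer was expecting.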
Summary: The paper considers the problem of model selection in (lifted) linear bandits. There are M hypothesis models, each of which has a different feature map, and one of these is the true model; it is unknown to the optimizer which of the M models is the correct one. At each timestep, the optimizer chooses one model and obtains instantaneous regret in accordance with the chosen model's selection. The goal is to minimize overall regret across n timesteps. If feedback were obtained for each model's chosen query at each timestep, then the multiplicative weights update could be used to update the probability of any model being the true oracle model. And the guarantees from the literature would apply. However, we have limited information and only observe the reward for the chosen model's query. To get around this obstacle, the authors consider an aggregated model with all M models' features combined together. And they train this aggregated model on all the data using a sparse LASSO estimator. Then, they use this aggregated model to fill in the missing regrets for the unchosen models. And these imputed values are then used for the MW updates. The analysis consists of proving the recovery result on the LASSO, and adjusting the original proof to use the imputed data instead of the original. The authors verify their findings on synthetic data. Strengths: Clarity: The problem is well presented, and the key difficulties of the problem are clearly identified - namely the missing regret data for unchosen models. The exposition of the solution is geared towards addressing the difficulty by imputing the missing values. The structure of the proof is well-outlined in the main text. I also appreciated the intuitive explanation around introducing bias to reduce overall error as the reasoning behind the success of the algorithm. Quality: The arguments appear correct and the correct tools are used to obtain the desired results.
Significance and Originality: As far as I know, this problem has not been considered before, and the improvement in the scaling with M is helpful. Weaknesses: My main concern is Q1 below. Other weaknesses are mostly minor. 1. It would be useful for the authors to highlight more real-world examples where bandits with multiple models would be most applicable. 2. In line with the above, real-data experiments would be useful to verify the applicability of the algorithm. However, the theoretical contributions are already sufficient so I think this can be deferred to future work. 3. Figure 3 is missing a label on the vertical axis and is difficult to interpret. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. If the LASSO performs well already, then what is the need to use the LASSO to perform model selection? Why can we not just use the LASSO directly? 2. While the rewards for the query point of the unchosen model are not available, the predictions on the chosen $x_t$ are available. So, why can we not use the prediction error on these points for all the models to choose the weights? Instead of using the imputed data. 3. What is special about the LASSO that makes it suitable for the choice of the aggregate model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We have updated the Introduction and Experiments section, incorporating your feedback. Regarding your questions, see below. **What is special about Lasso that makes it a suitable choice?** To obtain $\log M$ rates in online model selection, we require a reward estimator/hallucinator whose bias and variance *both* scale with $\mathcal{O}(\log M)$. Lasso happens to hit this balance, while OLS, Ridge, or Importance Weighted estimates have a variance which grows with $\sqrt{M}$. **When updating the models, why not hallucinate the reward for the chosen actions?** We actually considered this, but did not look into it too far, as it seemed not to affect the rate of the regret bound. We expect this approach to also work, and it would be interesting to see how it changes the overall dynamics. **Why not just use LASSO directly?** Our randomized algorithm is more robust, particularly when features are correlated. We maintain a probability distribution based on Lasso estimates that encourages exploration on the model selection level. In contrast, Lasso will deterministically discard some of the models, and in some cases (e.g. orthogonal feature maps) will *never* sample them again. In practice, Lasso often does not perform well for variable selection. The variable selection property of the Lasso relies heavily on the orthogonality of the feature maps, and on the choice of the regularization parameter. In our experiments, we introduce the ETS algorithm (cf. Algorithm 5), which uses Lasso for variable selection to select the model, and we see how it fails when there are many models, or the models are correlated (cf. Figure 1 in the paper). On the technical side, our analysis allows for bounding the model selection regret, i.e. directly comparing the reward obtained by ALEXP with the oracle agent. It is not clear to us if/how this type of guarantee would be possible when performing Lasso variable selection to select the model.
--- Rebuttal Comment 1.1: Title: Thanks for response Comment: I thank the authors for their response. I will keep my score.
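The overall loop this review and rebuttal describe, exponential weights over models driven by imputed (hallucinated) rewards, can be sketched as follows. This is an illustrative toy under stated assumptions: in the paper the hallucinated rewards come from the group-Lasso fit, whereas here they are simulated draws favoring a "true" model; none of the names are the authors' ALEXP implementation:

```python
import math
import random

def exp_weights_step(log_weights, hallucinated_rewards, eta):
    """One exponential-weights update over M models.

    log_weights[j] accumulates eta * (imputed reward of model j);
    normalizing in log-space (max-subtraction for stability) gives the
    sampling distribution over models for the next round.
    """
    new_lw = [lw + eta * r for lw, r in zip(log_weights, hallucinated_rewards)]
    m = max(new_lw)
    z = sum(math.exp(lw - m) for lw in new_lw)
    probs = [math.exp(lw - m) / z for lw in new_lw]
    return new_lw, probs

random.seed(1)
M, eta = 4, 0.5
log_weights = [0.0] * M
for _ in range(50):
    # Stand-in for the Lasso-imputed rewards: model 2 plays the role of
    # the true model, so its imputed reward is highest on average.
    rewards = [random.gauss(1.0 if j == 2 else 0.0, 0.1) for j in range(M)]
    log_weights, probs = exp_weights_step(log_weights, rewards, eta)

# The distribution concentrates on the true model.
assert max(range(M), key=lambda j: probs[j]) == 2
```

Because every model receives an imputed reward each round, all M weights are updated simultaneously, which is the mechanism behind the linear-in-$M$ per-round cost discussed elsewhere in the rebuttals.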
Rebuttal 1: Rebuttal: We have responded to our reviewers individually. Attached is the pdf supplement, which is referred to in our responses to some of the reviewers. Pdf: /pdf/d5897bc8b01b24854039eed68a0b5b0453f3bb13.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers the linear bandit problem given $M$ models, or sets of feature mappings. In this problem, it is necessary to select the appropriate action as well as the model based on the bandit feedback. This paper provides an algorithm with an anytime regret bound of $O(n^{3/4} \sqrt{\log M} + \sqrt{n \log^3 M} + n^{5/8}\sqrt{d \log n})$, where $n$ and $d$ represent the time horizon and the dimensionality of each model. Strengths: - The motivation for the research is well explained. - Experimental results support the effectiveness of the proposed method. Weaknesses: - The obtained regret bound includes an $O(n^{3/4})$-term, which means that the bound is suboptimal if the number of rounds is large. - The proposed algorithm requires $O(M)$-time computation in each round. This can be a major computational bottleneck. In fact, in the application to sparse linear bandits, $M=\binom{p}{s}$, which is exponentially large w.r.t. $s$. Further, we need $M = \Omega(\exp(n^{1/4}))$ to ensure the regret is of $O(\sqrt{n \log^3 M})$ (i.e., to ensure $n^{3/4} \sqrt{\log M}< \sqrt{n \log^3 M}$). This means that the proposed algorithm is effective only when $M$ is exponentially large w.r.t. $n$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you add a description of the known regret lower bound and how it compares to the main result? - Line 290: $\{1, ..., p \}$ <- $\{ 0, 1, ..., p \}$ ? - Line 274-275: "the rate conjectured by Agarwal et al. [2017]" Can you tell me where I can find the corresponding description in Agarwal et al. [2017]? - The condition of $\eta_t = O(..)$ and $\gamma_t = O(..)$ in Theorem 1: Do the authors mean $\Theta(..)$? I guess that too small $\eta_t$ and $\gamma_t$ do not work well. - It is difficult to see what figures 1 and 2 refer to.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no concerns about the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and your feedback on the notation. We have updated the text, fixing it on the instances that you mentioned. Our response follows. **On the difficulty of model selection, lower bounds, and minimax optimality.** Thank you for this comment. We have added a discussion on lower bounds to the Conclusion section. An overview follows. Online model selection for bandits is generally perceived to be a hard task with many unresolved problems. There are environments in which model selection may not be possible and the regret is $\Omega(n)$ (cf. Theorem 2, Agarwal et al. 2017). There also exist environments in which the model selection algorithm will perform strictly worse than the algorithm with oracle knowledge, in terms of the dependence of the regret on $n$ (cf. Theorem 6.2, Pacchiano et al. 2020). Therefore, the focus of prior work so far has been on feasibility, rather than recovering minimax optimal rates. To the best of our knowledge, minimax lower bounds for model selection in linear bandits are *an open problem*. Suppose $B(d,n)$ is the lower bound of the oracle bandit algorithm (for example $B(d, n) = \Theta(d\sqrt{n})$ when the action domain equals the unit ball). It is *not known* if there exists an algorithm that achieves a $\mathcal{O}(B(d,n)\log M)$ model selection regret. Therefore, we do not know if, without further assumptions, the $n^{3/4}$ dependency of our regret bound can be improved or not. Our work is the *first* to show the feasibility of a $\log M$ upper bound for model selection in linear bandits, on a general infinite action domain. We are not aware of any prior work which attains a $\mathcal{O}(n^{\alpha}\log M)$ with $\alpha<1$ dependency on general action domains, let alone $\mathcal{O}(\log M\sqrt{n})$. To put our contribution in perspective, we mention the results of Foster et al.
(2019), which considers model selection for linear bandits over a *finite* action domain of size $K$ and requires knowledge of the horizon $n$. In this setting, they obtain a regret of $$\mathcal{O}\left( \min \Big\{(Mn)^{2/3} (Kd)^{1/3},\ K^{1/4}(Mn)^{3/4}+\sqrt{KdMn} \Big\} \right)$$ which scales polynomially with $M$ and has a potentially suboptimal dependency on $n$. **On computational complexity of ALEXP.** The computational complexity of the algorithm scales linearly with $M$, since all agents/models have to be updated. However, updating the models can be done fully in parallel (using tools such as Ray), in which case, increasing the number of models will not affect the runtime of the algorithm. Even without parallelization, the algorithm is light and one complete run of ALEXP with $n=100$, $d=2$, and $M=45$ takes $2.9\pm0.2$ minutes on a single CPU core. Figure 1 in the rebuttal pdf supplement shows how the run-time of our algorithm and other baselines scale with $M$. In general, we suspect that without further assumptions (e.g. relation between $\boldsymbol \phi_j$) linear computational dependency on $M$ may not be avoidable. While algorithms such as CORRAL update only one of the models at every step $t$, their statistical complexity scales polynomially with $M$ (since they require more steps to converge), which results in an overall polynomial computational complexity with $M$. In the revised paper, we have added Figure 1, and a discussion on computational complexity to Appendix F on “Experiment Details”. **"Algorithm is effective only when M is exponentially large."** In our experiments, we establish that ALEXP achieves a performance competitive with the oracle, when $M>n$, and $M$ is of the same order of magnitude as $n$. Our regret bound demonstrates an upper bound on the worst-case performance of the algorithm. Indeed $n^{3/4}$ is a worse rate than $\sqrt{n}$. However, there are many effective algorithms (e.g.
kernelized bandits) whose (minimax optimal) regret has a worse dependency on the horizon than $\sqrt{n}$. For instance, the commonly used GP-UCB with the $\nu$-Matern kernel satisfies a $\mathcal{O}(n^{\frac{\nu+2d}{2\nu+2d}})$ regret [Whitehouse et al. 2023], which is strictly worse than $\sqrt{n}$. **"Where is the conjectured rate mentioned in Agarwal et al. 2017?"** This can be found in Section 6 of their paper, titled "Conclusion and Open Problems". Having addressed your concern about rate optimality of the bound, and given the contributions of this paper to the bandit literature, we kindly ask you to reconsider the assessment of our paper. We would be happy to answer any remaining questions or concerns. --- ### References Agarwal, Alekh, Haipeng Luo, Behnam Neyshabur, and Robert E. Schapire. "Corralling a band of bandit algorithms." In Conference on Learning Theory, pp. 12-38. PMLR, 2017. Foster, Dylan J., Akshay Krishnamurthy, and Haipeng Luo. "Model selection for contextual bandits." Advances in Neural Information Processing Systems 32 (2019). Pacchiano, Aldo, My Phan, Yasin Abbasi Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, and Csaba Szepesvari. "Model selection in contextual stochastic bandit problems." Advances in Neural Information Processing Systems 33 (2020): 10328-10337. Whitehouse, Justin, Zhiwei Steven Wu, and Aaditya Ramdas. "Improved Self-Normalized Concentration in Hilbert Spaces: Sublinear Regret for GP-UCB." arXiv preprint arXiv:2307.07539 (2023). --- Rebuttal Comment 1.1: Title: Rebuttal Follow-up Comment: We hope that our rebuttal has answered your main question about optimality of the bound, in particular your concern about the $n^{3/4}$ growth rate being sub-optimal. We emphasize that the **minimax optimal dependency on the horizon is unknown** in this problem setting, and in fact, may not be $\sqrt{n}$.
Hao et al. 2020 prove a dimension-independent lower bound of $\Omega(n^{2/3})$ for *sparse linear bandits*, which hints that **going below the $n^{2/3}$ rate might not be possible** for model selection either. We updated the paper and added a short discussion on minimax optimality to Section 5.2. Further, we mentioned the unsolved lower bounds as future work in Section 7. We hope this update lifts the key concern of your review. It would be much appreciated if you could please reconsider your assessment, or respond with questions/suggestions so that we can improve the paper in this regard. --- Hao, Botao, Tor Lattimore, and Mengdi Wang. "High-dimensional sparse linear bandits." Advances in Neural Information Processing Systems 33 (2020): 10753-10763.
null
null
null
null
null
null
Autodecoding Latent 3D Diffusion Models
Accept (poster)
Summary: In this manuscript, a novel method for unconditional and text-conditional generative models of 3D shape and texture representations is proposed. More specifically, the authors propose to train a 3D diffusion model of radiance and RGB fields that can be trained from 2D image and object mask supervision. The method comprises two stages: in the first stage, an auto-decoder is trained, while in the second, a 3D UNet modeling a denoising diffusion process is trained on the latent features. At test time, random latent representations can be sampled and denoised to generate 3D radiance volumes that can be rendered from arbitrary viewpoints. Strengths: - The authors propose a valid method for training 3D diffusion models from posed 2D images. The key idea to first train an auto-decoder, and then in a second stage a 3D UNet describing a 3D denoising diffusion process, is interesting and, with the success of recent latent 2D diffusion models in mind, an important field of study. - The authors perform an extensive evaluation on 5 different datasets. To be able to run experiments on these different datasets, multiple automated masking and/or filtering steps needed to be performed (compare e.g. L. 252). - The manuscript is well written, the technical explanations are sound, and the use of mathematical terms and symbols is correct. - The manuscript contains helpful and well-organised figures, such as the overview Figure 1. Weaknesses: - Experimental Evaluation in Table 1: Why does pi-GAN achieve such a high FID value of 52.71? Could the authors share some intuition on why the score is so bad? This does not seem consistent with comparable prior works; for example, GRAF [47] and GIRAFFE [43] report FIDs of 34 and 20 on this dataset. Note that not improving over GAN-based approaches in FID on these single-object datasets is OK I believe, but it is important that the numbers are accurately reported. 
- Qualitative Comparison: I believe the manuscript would benefit from a qualitative baseline comparison, e.g. for the methods reported in Table 1. It would further be interesting to discuss to a greater extent the differences in the results from the different generative model types, i.e. to compare GAN-based results to the proposed diffusion model. - The results are still quite limited, both qualitatively and quantitatively. I do appreciate that the authors tackle this hard task of "feed-forward" unconditional/text-conditional 3D generation, but e.g. the textures of the samples in Fig. 5 are very limited. I believe at least a text-based comparison and discussion with respect to test-time-optimization-based methods such as Dreamfusion is relevant here, as they lead to significantly better results. Potentially the proposed method could also be extended with an additional test-time-optimization step in future work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the authors explain why the FID metric is so bad for the pi-GAN method? - Can the authors explain what are the main reasons for the limited quality of the texture predictions in Figure 5? - How important is it that the datasets contain multiple views (as opposed to only one) of the same object? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors have discussed limitations as well as potential negative societal impact. As stated above, I believe the manuscript could benefit from a comparison/discussion against test-time-optimization-based methods due to the limited quality of the shown results. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank R.nmDk for their detailed response. We appreciate their highlighting of our work's strengths. Namely, R.nmDk (a) finds our proposed 3D diffusion model an interesting idea in an important field of study; (b) commends our extensive evaluation on different datasets, while appreciating the intricacy of their preparation; (c) appreciates our manuscript's writing, technical analysis and formulas, and well-organized figures. Below, we try to address R.nmDk's concerns and questions: **W1 and Q1 (pi-GAN FID):** We agree with R.nmDk on the importance of accurately reporting the numbers! For Table 1, we borrow all the metrics from DiffRF [a] (we will add this information to the table), as we closely follow their dataset preparation and evaluation pipeline, which is different from GRAF's. In particular, DiffRF renders "...15,576 chairs using Blender Cycles [13] from 200 views on an Archimedean spiral." [a]. In contrast, GRAF renders "...150k Chairs from Photoshapes [49] following the rendering protocol of [46]" [b]. The dataset differences can vastly affect the output metric. **W2 (Qualitative comparison with baselines):** As R.nmDk points out, we do not include a visualization or discussion of diffusion-based and GAN-based models. We found that these qualitative results and analyses have been covered by DiffRF [a], and we focused on the differences in how we tackle the 3D diffusion modeling problem compared to [a]. Similarly to DiffRF, we observe that diffusion-based methods can achieve overall better quality and more meaningful structure and do not have view-dependent artifacts. For our qualitative comparison with DiffRF, we would like to refer R.nmDk to Figures 7 and 8 in our supplement; for the discussion, please see L51-55 and L108-113 in the main text and L11-30 of the supplementary text. We appreciate R.nmDk's suggestion and believe that including these comparisons will make our work more complete and comprehensive for the reader. 
We will add them in the final version of the manuscript. Summarizing these points, DiffRF learns a single voxel grid for each object and trains a diffusion model on a dataset of learned voxel grids. This approach has several limitations: a radiance field must be saved for each item in the dataset, a time- and space-consuming process, especially for larger voxel grids. Additionally, 3D UNet training is very costly and thus prohibitive for larger voxel grids, limiting the practical grid size. Our approach alleviates both of these problems. Our AutoDecoder represents the whole dataset as a collection of per-object embeddings plus the weights of a single decoder network. We run diffusion on an 8x8x8 grid, which is significantly faster and independent of the final output resolution. These changes permit our method to tackle large-scale datasets. **W3 (Comparison with Dreamfusion):** We discuss optimization-based methods in our related work section (L116-121). While we agree that Dreamfusion shows high-quality results, we disagree that it is an apt comparison for our approach. They aim to generate a single object of high quality based on a text prompt; we aspire to enable large-scale 3D diffusion models. The main difference is that Dreamfusion requires 1.5 hours on 4 TPU chips per object, while our method needs less than a few seconds to generate an object. In summary, we find Dreamfusion is tackling a different problem from ours. Nevertheless, we find R.nmDk's suggestion to combine the two approaches an excellent idea and would definitely explore it in future work. We believe score distillation from a pretrained 2D model could not only help improve the texture quality, but also help distill knowledge beyond what is available in our 3D datasets. **Q2 (Texture prediction):** Fig. 5 contains objects generated from the model trained on images from the Objaverse dataset. 
We render these objects with uniform lighting, to exclude potential lighting inconsistencies. Because many objects in this dataset have just a uniform texture, the model learned to use a uniform color as the texture for many objects (see a sample from this dataset in Fig. C of the rebuttal pdf). Please also refer to the supplementary material, where we show a more representative sample of the generated objects, many of which contain more complicated textures. **Q3 (Single view training):** As we mentioned above, the AutoDecoder offers a compressed representation of the dataset; it encapsulates prior knowledge. In contrast to NeRF, our method can work with single views for single-category datasets. Our method can roughly learn the shape of the objects from multiple instances, but it struggles with details such as chair legs, so multiple views are still beneficial for precise reconstruction. We show reconstruction results in Figure D of the rebuttal PDF. We are now running the diffusion stage and will add the results to the next version of the paper. [a] Diffrf: Rendering-guided 3d radiance field diffusion. CVPR 2023. [b] Graf: Generative radiance fields for 3d-aware image synthesis. NeurIPS 2020. [c] HoloDiffusion: Training a {3D} Diffusion Model using {2D} Images. CVPR 2023 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the extensive and very informative rebuttal. The comparison w/ Dreamfusion was rather imagined as a discussion in the paper, i.e. pros and cons, and, as outlined, potential combinations; I agree that a side-by-side comparison of results is not required. I have no further questions at this point. Thanks!
Summary: This paper presents a 3D generation framework that generalizes to large-scale 3D datasets and articulated objects. The method comprises two parts. The first part is a 3D auto-decoder, reconstructing 3D objects from multi-view images or monocular videos. The second part is a latent diffusion model for unconditional or text-conditioned 3D generation. Extensive experiments are performed on 5 datasets, including the largest 3D dataset, Objaverse. Strengths: - I am impressed by the workload of this work. Not only are diverse static 3D objects supported, but also articulated objects like 3D human heads. - Experiments are solid. A total of 5 datasets are used, including Objaverse, which is the largest and the most challenging 3D dataset. Qualitative results on tables and chairs are reported and compared with 3D-aware GAN and 3D diffusion methods. - Using the normalized IQR and median to normalize the latent features is an interesting trick. It is crucial for latent diffusion methods to deal with latent normalization. Weaknesses: - The method itself is not novel, which I think is a minor issue since the main contribution of this work, I think, is the scaling up and the generalization to articulated objects. - The quality of generated objects is inferior. I think this is limited by the reconstruction part. The resolution is limited to 64^3 due to the computational complexity of 3D volumes. I am wondering whether a CNN refinement/super-resolution module would help (e.g. EG3D, DiffRF)? - The text-conditioned generation can only correctly generate the overall color and object category. Most detailed descriptions are ignored. Other than the caption quality, is there any other possible reason for that, like the conditioning method design? Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
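For readers unfamiliar with the trick praised above, the median/IQR normalization of latent features can be sketched as follows (a hypothetical NumPy re-implementation for illustration only; the tensor shape, per-channel statistics, and `eps` stabilizer are assumptions, not the authors' code):

```python
import numpy as np

def robust_normalize(latents, eps=1e-8):
    """Normalize latent features with per-channel median and IQR
    (interquartile range) instead of mean/std, so that rare outlier
    activations do not dominate the scale. Assumed shape: (N, C, D, H, W)."""
    axes = (0, 2, 3, 4)  # statistics over batch and spatial dims, per channel
    med = np.median(latents, axis=axes, keepdims=True)
    q75 = np.percentile(latents, 75, axis=axes, keepdims=True)
    q25 = np.percentile(latents, 25, axis=axes, keepdims=True)
    iqr = q75 - q25
    return (latents - med) / (iqr + eps), med, iqr

def robust_denormalize(normed, med, iqr, eps=1e-8):
    """Invert robust_normalize before decoding/rendering."""
    return normed * (iqr + eps) + med
```

The round trip is exact up to floating point, and the per-channel median of the normalized features is zero by construction, which keeps the diffusion model's input scale stable even with heavy-tailed latents.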
Rebuttal 1: Rebuttal: We were delighted to read R.iNSR's review! They appreciate the workload needed to achieve large and diverse static 3D object generation as well as synthesizing articulated human heads. Moreover, they commend our proposed Robust Normalization and De-Normalization scheme and its importance for latent diffusion modeling. In the following text, we aim to clarify the points they raised: **W2 (CNN refinement):** Indeed, a CNN refiner could help to improve the visual fidelity of the results, although possibly at the expense of 3D consistency. We tried adding a CNN refinement module to the autodecoding stage for the ABO Tables dataset; we observed an improvement in visual fidelity, but the 3D geometry became noisier. We provide visual results in Fig. F of the rebuttal pdf. We will also add this experiment to the supplementary material. **W3 (Text conditioning):** We believe that the main problem is the quality of the captions: most of the captions include spurious details not related to the depicted content or do not describe details at all (we provide some examples in Fig. C of the rebuttal pdf), thus the model learns to completely ignore these details. Since we use exactly the same conditioning mechanism as Stable Diffusion, we think the conditioning design itself should not be a problem. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing thorough rebuttals. My concerns are addressed. Therefore, I would keep my rating.
Summary: This paper proposes a diffusion model that learns to generate 3D objects, using only multi-view images or videos for training. It first trains a 3D convolutional autodecoder to embed the dataset; this maps latent vectors via a 3D feature space to voxelised scenes, and is trained for reconstruction of volumetrically-rendered images. Then, it trains a diffusion model over the feature space of this autodecoder, to enable a-priori generation; there are experiments on which layer of 3D features are best to use. The method is demonstrated on a variety of datasets, both synthetic and real, including rigid objects such as chairs, and non-rigid objects such as human faces. Strengths: The proposed pipeline is novel, its components are clearly described, and most design decisions are well motivated/justified (and appear sound). There are insightful discussions on how the autodecoder architecture affects performance. The evaluation is rather comprehensive, covering five datasets of somewhat different character. Some of these are synthetic and use ground-truth camera poses, while others use imperfect poses from SfM. Generation results on synthetic images (chairs & tables) are quantitatively better than baselines – both FID and KID are lower than pi-GAN, EG3D, and DiffRF. Qualitative results here also look good. Generation results on real images are found to be quantitatively of higher quality (lower FID/KID) than those generated by naively sampling in the latent space of the autodecoder (without any diffusion process). There is an ablation study covering various important design decisions in the autodecoder (mainly architectural choices), and an additional set of experiments investigating which feature layer of the autodecoder the diffusion is performed over, and how many diffusion steps are used. 
Weaknesses: While it may be accurate that "no prior work demonstrates the ability to generalize to such large-scale datasets" (L282), I feel there should be some attempt at a quantitative comparison here – e.g. retraining the best of the prior works on these datasets to see how well or badly they perform. In particular, Objaverse, CelebV-Text and MVImgNet are all extremely recent, so it may simply be that the prior works have not been tested on those datasets (as they were not yet available), but would still work to some degree. This is particularly important to put the (rather high!) FID scores of tab. 3 in context. An alternative that would go some way to mitigating this problem would be to use one of the more-constrained but still photographic datasets used in pi-GAN or EG3D (e.g. human/cat faces or cars), and evaluate how well the proposed method performs on this. Qualitative results on MVImgNet (only given in the supplementary) are not very impressive – it is often impossible for this reviewer to determine the class of the generated objects. Similarly, the results on face generation (with a target) look significantly lower fidelity than other recent methods like EG3D. The qualitative text-conditioned generation results in fig. 4 & 5 are not particularly impressive. Moreover, the 'ground-truth' text labels are from a pretrained captioning model, and thus noisy. It would be valuable to include an experiment with high-quality captions, so the impact of caption quality vs model power can be understood. The technical contribution is a little small for NeurIPS – both latent diffusion and autodecoding of 3D shapes are pre-existing techniques, and their composition does not seem to be particularly complicated. This is somewhat mitigated by the detailed experiments and extensive discussion of design choices. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Why is an autodecoder approach chosen, instead of (for example) VQ-GAN or autoencoding? 
See above under weaknesses regarding performance comparison with baselines on real-world data – this is my largest concern. How are the 'driven' face animations in fig. 2 generated? I didn't see this described anywhere in the text. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: There is adequate discussion of both limitations and broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank R.xDwW for the informative review. We are delighted that they found our pipeline novel and our evaluation comprehensive. R.xDwW also highlights the comparison with respect to the baselines and the ablation study. In the following we address R.xDwW's points: **W1 and Q2 (Prior methods on large-scale datasets):** We agree with R.xDwW that a comparison of prior GAN-based methods on a large-scale dataset such as Objaverse would be useful. To this end, we launched a training run of EG3D on the Objaverse dataset. Since the dataset is very large, training is only at about 50%; the best FID was 105.15 (reached at 20% of the training), while the FID for our method is 40.49. The visual results are provided in Fig. A of the rebuttal pdf. We will provide final results in the next version of the paper; however, we observe that the FID stops improving at some point and now oscillates at around 120, so we believe it will not become much better after this point. We attribute this behavior to the notorious difficulty of training GAN-based models on diverse categories without additional supervision such as class labels. We believe this experiment already demonstrates the scalability issues of GAN-based models. We would like to highlight that running EG3D on MVImgNet would be even more problematic, since the camera distribution in this dataset is unknown and depends on the considered object. For example, some clothing items may only be shown from a frontal view, while some toys may only be shown from the top. CelebV, on the other hand, has articulated objects, which EG3D does not support. **W2 (Qualitative results):** We agree with R.xDwW that it may sometimes be hard to determine the class of the object from the generated results on MVImgNet; however, we believe that this is partially a dataset issue. This dataset features a lot of fruits and vegetables and their pieces, which can be hard to recognize. We provide a sample from this dataset in Fig. 
B of the rebuttal supplement. Regarding CelebV-Text, while the visual fidelity is indeed lower than EG3D's, we would like to highlight that the considered setting is very different: we target articulated objects while EG3D targets static ones. Moreover, unlike EG3D, we did not use ground-truth poses, so a fair comparison would be EG3D w/o cameras, where the face geometry is completely flat (see Fig. 4 in the EG3D supplementary material). **W3 (High-quality captions):** We fully agree that the method would hugely benefit from better captions; however, at the time of the submission, no captions for MVImgNet and Objaverse were available, thus we resorted to an off-the-shelf captioning system. After the submission, one of the concurrent works [a] proposed a method for annotating the Objaverse dataset; we plan to utilize these captions in future work. **Q1 (Why autodecoder):** The problem with both VQ-GAN and autoencoders is the encoder part. The encoder assumes that the output of the autoencoder is already known in advance, which does not work for 3D generation, since for most real-world datasets we only have images. There are several methods [b, c] that work with images as input, but the reconstruction quality of these works is significantly lower compared to optimization approaches. Another disadvantage of these approaches is that they can only be trained with a small number of views, thus it is hard to utilize datasets with a large number of views, such as PhotoShape Chairs and ABO Tables (about 100 views) and CelebV-Text (up to 1000 frames for each video). **Q3 (Driven animation):** For the driving animation, we generate an animatable asset from our diffusion model and then animate it with poses extracted by our pose predictor from the driving video shown at the left. We will explain this in more detail in the supplementary material; for more technical details on how articulated generation is performed, please refer to **Sec. B1** of the supplementary material. 
[a] Scalable 3D Captioning with Pretrained Models - Arxiv 2023 [b] pixelNeRF: Neural Radiance Fields from One or Few Images - CVPR 2021 [c] ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers - ECCV 2022 --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: Thanks for the detailed rebuttal. This resolves some of my concerns, and overall I remain positive about this paper.
Summary: The paper proposes a 3D autoencoder to learn a latent volumetric space on the training dataset, which can be decoded into a radiance volumetric representation for novel-view synthesis, and then learn a 3D diffusion model on the latent volumetric representation. The latent volumetric space is acquired by training NeRF on multiple views (or frames). Strengths: - Diverse datasets, including synthetic and real-world, rigid and articulated objects, are used for evaluation. Weaknesses: 1. Some claims in the paper are not verified by the evidence. See *Questions* for examples. 2. Only novel-view synthesis is demonstrated. Since the authors use a volumetric representation, the method is restricted by the voxel resolution. The visualized images look small and coarse. It will be better if the authors can show some 3D results (e.g., extracting meshes from radiance fields). Thus, compared to prior and concurrent works (see *Questions*), this method does not seem to be more scalable or extendable. Especially, the voxel resolution is a bottleneck. 3. Camera poses still seem to be necessary in this work, either automatically being annotated in synthetic data or estimated for in-the-wild data. It is a little tricky to say that this work does not need "3D supervision". The authors can do such an experiment to convince the readers that the proposed method is robust to camera poses: comparing a model trained with estimated poses on synthetic data and one trained with GT poses. Minor typo: - L212: One of our key observation -> One of our key observations? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In L41-L43, the authors claim that "our approach is thus designed to be robust to" different sources of poses. However, I do not find relevant explanations in the method or experiment section. Can the authors elaborate on it? 2. 
In L150-L151, the authors claim that "intermediate representations such as feature volumes or tri-planes, as it is more efficient to render and ensures consistency across multiple views". However, several prior or concurrent works (e.g., 3D Neural Field Generation using Triplane Diffusion, 3DGen: Triplane Latent Diffusion for Textured Mesh Generation, Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction) show that feature volumes or tri-planes can work well. Can the authors provide any ablation study to support the claim? 3. In L62, "To identify the best scale" is unclear, as the word "scale" is not mentioned before. 4. It is unclear which datasets are used for training and evaluation. In L71-76, it is unclear whether the authors train 3 models on 3 datasets separately or they train a model on 3 datasets progressively. And in Sec 4.1, it is unclear whether all datasets are just for evaluation, or some of them are used for training. The authors should clarify their training and evaluation protocols. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank R.2ZWE for their thought-provoking review. We are glad they appreciate that our method works on diverse datasets, both real-world and synthetic, as well as rigid and articulated objects. Let us address R.2ZWE's points: **Q4 (Training and Evaluation Protocol):** Regarding evaluation and training, we follow the evaluation protocol of DiffRF: we train on one dataset and then compute FID and KID on the same dataset. In more detail, we train 5 different models, one for each of the five datasets considered in this paper: PhotoShape Chairs, ABO Tables, CelebV-Text, MVImgNet and Objaverse. These experiments are fully separated and do not interact in any way. **W3 and Q1 (Camera poses):** Regarding the camera poses, we use different sources of camera poses for different datasets. Predicting camera poses is an extremely challenging task, and we are not claiming to provide a solution for it in this work; it is especially challenging for arbitrary object categories. We claim that our method can work with different sources of camera poses: (1) with ground-truth camera poses that are available in the synthetic datasets PhotoShape Chairs, ABO Tables and Objaverse; (2) with datasets of rigid objects where COLMAP can provide reliable camera estimates, such as MVImgNet; (3) with non-rigid objects, where we train a camera prediction model (which in this case also acts as a pose prediction model) together with the autodecoder, without any additional supervision or COLMAP predictions, see CelebV. Hence, when we say "our approach is thus designed to be robust to" different sources of poses, we mean "our approach is flexible enough to work with different sources of poses"; we did not claim that it is "robust to pose estimation errors from COLMAP". We are sorry for the confusion and will try to make it clearer in the final version. 
**W2 and Q2 (Concurrent triplane works):** We would like to bring to R.2ZWE's attention the fact that all the works R.2ZWE mentioned [a, b, c] were not published at the time of the submission, and are thus concurrent works. Because of this, we did not include comparisons with them in our original manuscript. We provide extracted meshes in Fig. H (rebuttal pdf). As R.2ZWE correctly mentioned, all these works use triplanes as the intermediate representation, and we agree that, in principle, triplanes can be used for datasets where many views are available for supervision, such as Objaverse. When multi-view supervision is scarce and ground-truth camera information is not available, such as in video datasets like CelebV, triplanes tend to degrade to predicting flat objects, which was observed in the prior work [d]. Another disadvantage of triplanes and feature volumes is the requirement of an additional MLP; this MLP significantly increases the time for each individual forward pass. We provide a timing comparison below:

| Representation | Time for 1 iteration (s) |
| :----: | :----: |
| Voxel Grid $4\times64^3$ | 0.22 |
| Triplane $96\times64^2$ | 0.33 |
| Triplane $96\times128^2$ | 0.33 |
| Triplane $96\times256^2$ | 0.38 |
| Triplane $96\times512^2$ | 0.46 |

Here we render at 128x128 resolution with 128 points per ray; for the triplane we use a 2-layer MLP with 32 hidden neurons, and the triplane generator uses exactly the same architecture as our voxel generator. While the generation of the triplane is relatively lightweight, the MLP is very heavy. Coming back to the concurrent works mentioned by R.2ZWE, only [b] was shown to be trained on a large-scale multi-category dataset. This is achieved by training a latent diffusion model with an auto**encoder** whose input is a dense point cloud; the autoencoder needs significantly fewer iterations to converge, thus it is feasible to train with triplanes as the intermediate representation. 
However, the requirement of dense point clouds implies ground-truth object meshes, which are not available for datasets like MVImgNet or CelebV. Thus our method covers significantly more possible scenarios than [b]. **Q3 (best scale):** Thank you for pointing this out; here it should be "To identify the best intermediate representation ...". **Minor typo (observations):** Thank you, we will fix the typo. [a] 3D Neural Field Generation using Triplane Diffusion. CVPR 2023. [b] 3DGen: Triplane Latent Diffusion for Textured Mesh Generation. Arxiv 2023. (Probably ICCV 2023) [c] Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction. ICCV 2023. [d] Unsupervised Volumetric Animation. CVPR 2023. --- Rebuttal Comment 1.1: Comment: Thank the authors for their reply. My concerns about the training and evaluation protocol as well as camera poses have been resolved, which enhances the strength of this paper, as it can handle different datasets with different sources of poses and multi-view images. - For related works using the tri-plane representation, I am not sure whether it is reasonable to claim that [a] (CVPR 2023) is a concurrent work. But it will not affect my rating. - "When the multi-view supervision is scarce and ground truth camera information is not available, such as in video datasets like CelebV, Triplanes tend to degrade to prediction of the flat objects, which was observed in the prior work [d]." I agree with this claim. However, I wonder whether and why the voxel representation trained using rendering supervision can handle this. - I actually appreciate the authors' efforts to add visualizations of extracted meshes. However, can the authors also extract and show colors from the NeRF representation as well? --- Reply to Comment 1.1.1: Title: Reply to Reviewer 2ZWE Comment: We are happy that R.2ZWE's concerns about the training and evaluation protocol as well as camera poses have been resolved. 
We will try to address the rest of the questions: ***Regarding Triplane's flat geometry solution:*** This is an observation made in [d]. Our intuition is that tri-planes have a design bias towards flat surfaces due to their plane representation. Voxels, however, do not have this bias, due to their true 3D structure, and are thus more robust against flat-surface local minima. ***Regarding the colored meshes:*** We thank R.2ZWE for the suggestion; we will add more visualizations in the supplementary material. Please find some unconditional diffusion results at this anonymous image link: https://imgur.com/a/wwMORpv [d] Unsupervised Volumetric Animation, CVPR - 2023
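To make the voxel-vs-triplane timing discussion above more concrete: rendering from a voxel grid only needs a trilinear lookup per sample point, with no per-sample MLP. Below is a minimal NumPy sketch of such a lookup (illustrative only; the shapes, the (z, y, x) index-space convention, and the clamping behavior are this sketch's assumptions, not the paper's implementation):

```python
import numpy as np

def trilinear_sample(grid, pts):
    """Trilinearly interpolate a feature voxel grid at continuous points.
    grid: (C, D, H, W) feature volume; pts: (N, 3) coordinates in voxel
    index space, ordered (z, y, x). Returns (N, C) interpolated features."""
    C, D, H, W = grid.shape
    upper = np.array([D - 1, H - 1, W - 1], dtype=float)
    p = np.clip(pts, 0.0, upper)           # clamp points to the grid
    lo = np.floor(p).astype(int)           # lower corner indices
    hi = np.minimum(lo + 1, upper.astype(int))
    f = p - lo                             # fractional offsets in [0, 1)
    zi = (lo[:, 0], hi[:, 0])
    yi = (lo[:, 1], hi[:, 1])
    xi = (lo[:, 2], hi[:, 2])
    wz = (1 - f[:, 0], f[:, 0])
    wy = (1 - f[:, 1], f[:, 1])
    wx = (1 - f[:, 2], f[:, 2])
    out = np.zeros((p.shape[0], C))
    for a in (0, 1):                       # blend the 8 surrounding voxels
        for b in (0, 1):
            for c in (0, 1):
                w = (wz[a] * wy[b] * wx[c])[:, None]
                out += w * grid[:, zi[a], yi[b], xi[c]].T
    return out
```

For example, sampling at z = 1.5 in a grid whose features equal the z index returns 1.5: the lookup interpolates linearly along each axis, and no network evaluation is involved.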
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their thoughtful and detailed reviews. We are delighted to see that they found our method broadly applicable (R.Nn3Q), our modification effective (R.Nn3Q) and evaluation extensive (R.2ZWE, R.xDwW, R.iNSR, R.nmDk). It is also nice to see that reviewers appreciate the importance of our proposed robust normalization technique (R.Nn3Q, R.iNSR). We also would like to thank R.nmDk for appreciating the quality of writing and figures, and both R.nmDk and R.iNSR for highlighting the workload needed to handle experiments on such a diverse set of large scale datasets. We provide the response to each individual reviewer in the corresponding section. Pdf: /pdf/a182d097d18564bb99e333fc9573c3f3f91e3eca.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose an approach to learning a latent diffusion model for 3D assets from image (or video) supervision. The core is that they use an autodecoder architecture, which learns embeddings in the latent space by decoding them into a volumetric representation for rendering. Then, the authors identify the appropriate intermediate volumetric latent space and introduce a normalization technique to learn a latent diffusion model. In addition, this method can use either existing camera information or no camera information during training. The experiments demonstrate the proposed method is able to generate both static and articulated 3D objects. Strengths: - The proposed method has a broader application domain and is extendable to large-scale datasets. It can be applied to various datasets, including rigid static and articulated moving objects. - The proposed modifications are effective in improving the 3D autodecoder. - The authors propose a robust normalization trick to train latent diffusion models on every intermediate volumetric latent space. Weaknesses: - The authors miss a proper discussion with an important related work, GAUDI [a]. GAUDI first learns a 3D autodecoder from images and then trains a diffusion model on the latent space as a prior for generation. There are several similar points between this work and GAUDI. However, GAUDI is not even cited. - The learned 3D voxel grids are treated as "Canonical Representation." However, how to properly define a canonical view of large-scale 3D datasets? For example, in Objaverse, the object poses are not well-aligned. Even for a single category (e.g., chair), the objects are randomly placed. Will this case influence the performance of the proposed method? Do you have any observations? - Considering the autoencoder-based method needs to regularize the bottleneck, the autodecoder can also regularize their bottleneck as done in [c, d]. 
Have you tried to train a 1D diffusion model (see [e]) in the latent space of the autodecoder to test the generation ability? - Missing comparison with some important baselines, such as GET3D [f]. I also suggest the authors train GET3D and DiffRF for Table 3 for a fair comparison. - I have some questions or suggestions regarding the writing: - In Line 1, the authors assume diffusion models are all latent diffusion models. I suggest the authors polish their writing to avoid this misuse. - In Line 57, the authors mention, "First, our autodecoders do not have a clear "bottleneck." Is the bottleneck here a latent space? If that is the case, the embedding space is the latent space of the autodecoder. - In Line 147, it is suggested to add some credit to prior work on trilinear interpolation for volumetric rendering, such as Plenoxels [b]. - Can you elaborate more on the direct latent sample? - It is suggested to move the "Hash Embedding" part to the main paper. Otherwise, the vanilla autodecoder approach cannot be scaled to large-scale datasets due to the large size of the embedding. - It is suggested to add [a-e] to the references. - In LDM [46], they use a normalization technique on the latent space. The proposed robust normalization seems to be an extended version of this. [a] GAUDI: A Neural Architect for Immersive 3D Scene Generation. NeurIPS 2022. [b] Plenoxels: Radiance Fields without Neural Networks. CVPR 2022. [c] Demystifying Inter-Class Disentanglement. ICLR 2020. [d] Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement. ICCV 2021. [e] Latent Point Diffusion Models for 3D Shape Generation. NeurIPS 2022. [f] GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images. NeurIPS 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weakness section. I will adjust my rating according to the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation and broader impact part look adequate to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank R.Nn3Q for their review. We appreciate their highlighting the broader applicability of the method, the effectiveness of our 3D autodecoder modifications, and the application of robust normalization. Next, we address the weaknesses and questions: **W1 (GAUDI Discussion):** Thank you for pointing that out; GAUDI [a] is indeed a relevant work, and we will add a citation and discussion to the final version of the paper. We would like to highlight a couple of differences between our work and [a]. GAUDI is used for a fairly narrow application: indoor scene fly-through generation. The datasets in this application are relatively small, from 18 to at most 1.8k scenes, while in our work we tackle datasets two orders of magnitude larger. That work also does not model articulated objects. Moreover, the diffusion process is trained in the z space of the autodecoder, which we later show has significantly lower performance compared to a space with spatial arrangement, e.g. an $8^3$ voxel grid, in our setting. **W2 (Canonical Object Orientation):** The canonical object orientation is indeed important for autodecoder fitting. The better the alignment between objects, the easier it is to fit an autodecoder. To demonstrate this, we ran an experiment on the PhotoChairs dataset, where we randomly flip some of the objects upside down. The fitting error in terms of perceptual loss is 0.76, versus 0.52 for the original dataset. Thus, significant effort was made towards pre-processing the large-scale datasets. For the Objaverse dataset, we observe that most of the objects are axis-aligned, so we just center at (0,0) and rescale according to the largest axis. In MVImgNet, there is no canonical orientation, so we instead use the partial sparse point clouds provided with the dataset to center, and then perform PCA to orient. On the other hand, canonicalization for CelebV is obtained automatically, because of the joint camera estimation and object reconstruction.
We believe that automatic canonicalization for a more general dataset, such as MVImgNet, could be interesting future work. **W3 (1D Diffusion model):** We find that because the autodecoder compresses a large-scale dataset into a relatively small latent space, the early layers act more like a dictionary than an upsampler; because of this, it may be hard for the diffusion model to operate in the early layers. We show this phenomenon for the $4^3$ representation in Fig. 3 of the main paper. Additionally, we ran an experiment with a 1D representation; the visual samples are provided in Fig. G of the rebuttal pdf. We observe that at this stage the model behavior is very similar to direct latent sampling, as the method fails to produce correct object geometry. **W4 (GET3D comparison):** Since we already compare with two GAN-based baselines, we did not include GET3D. However, we scheduled a training run on Objaverse; the preliminary result is an FID of 259.18 (see Fig. F of the supplementary pdf for visual results), which is significantly worse than our model's 40.49. We believe this is because GAN-based methods struggle with fitting diverse categories; without additional supervision such as class labels, GET3D will not produce good results. We still plan to finish the training on this dataset and add this experiment to the supplementary material, but we do not expect it to perform significantly better than this. We also would like to point out that DiffRF requires fitting a NeRF for 300k objects as a preprocessing stage, which is infeasible with our current resources. Running DiffRF on MVImgNet will not work, since in this dataset most of the objects have only partial visibility. For example, some clothing items may only be shown from a frontal view, while some toys may only be shown from the top. This issue also complicates the application of GET3D, since it is not trivial to devise an appropriate camera distribution.
CelebV, on the other hand, has articulated objects, which neither GET3D nor DiffRF supports. **W5 (All Diffusion models are latent):** We fixed this in the new version of the paper. **W5 (Autodecoder bottleneck):** Sorry for the confusion; we used the term “bottleneck” for “the latent representation on which diffusion operates”, since for other latent diffusion models they are the same, but we agree that in our case it makes sense to use a different term. We will change this in the future version of the paper. **W5 (Plenoxels):** Thank you for the suggestion; we will add the proposed work. **W5 (Direct Latent Sampling):** Direct latent sampling is basic multivariate Gaussian fitting on the 1D embeddings of the entire dataset. This method was used in prior work [1]. In the case of hash embeddings, we follow [a] and randomly sample the indices of each hashtable. **W5 (Hash Embedding):** Following your suggestion, we will briefly describe hash embeddings in the main paper and keep the details for the supplement. **W5 (Citations):** We will add all the proposed methods to the related work. **W6 (LDM normalization vs robust normalization):** Similarly to LDM, we confirm that normalization of the latent space is a crucial part of diffusion training. However, we want to clarify that we did not claim this as our finding. We do disagree, however, that the proposed robust normalization is an extension of LDM normalization. LDM normalization is based on KL minimization during training; it is used as an additional loss, which may hurt the reconstruction. Instead, our robust normalization can be applied after training, and thus does not affect the reconstruction in any way. Another benefit of the proposed normalization is the ability to select the representation on which to operate after the autodecoder training. [a] StyleGenes: Discrete and Efficient Latent Distributions for GANs. ArXiv 2023 --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal.
I decide to raise my rate to WA. Please remember to add the promised content if accepted.
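The rebuttal stresses that the robust normalization is applied post hoc, after autodecoder training, rather than via a training-time KL loss. The paper's exact formulation is not given in this thread, so the following is only a minimal sketch of what such a post-hoc scheme could look like; the per-dimension median/IQR statistics and the function names `robust_normalize`/`robust_denormalize` are illustrative assumptions:

```python
import numpy as np

def robust_normalize(latents, eps=1e-6):
    """Post-hoc robust normalization of a dataset of latent codes.

    latents: array of shape (N, ...) with one latent code per object.
    Uses the per-dimension median and inter-quartile range (IQR)
    instead of mean/std, so a few outlier codes cannot dominate the
    statistics. Because it runs after training, it leaves the
    autodecoder's reconstructions untouched; the diffusion model is
    simply trained on the normalized codes.
    """
    flat = latents.reshape(latents.shape[0], -1)
    med = np.median(flat, axis=0)
    q1, q3 = np.percentile(flat, [25, 75], axis=0)
    scale = np.maximum(q3 - q1, eps)  # guard against near-constant dims
    normed = (flat - med) / scale
    return normed.reshape(latents.shape), (med, scale)

def robust_denormalize(normed, stats):
    """Invert robust_normalize before decoding a diffusion sample."""
    med, scale = stats
    flat = normed.reshape(normed.shape[0], -1)
    return (flat * scale + med).reshape(normed.shape)
```

Because the statistics are computed after the fact, the same machinery could in principle be re-run on any intermediate volumetric representation (e.g. an $8^3$ voxel grid) to choose the diffusion space post hoc, matching the flexibility described in W6.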
Bandit Social Learning under Myopic Behavior
Accept (poster)
Summary: The paper considers social learning in a two-armed Bernoulli bandit scenario, where agents sequentially arrive and pull an arm with the highest index, where the index is arbitrarily chosen to be within some confidence bound of the empirical mean of the arm, parametrized by $\eta$. This behavior subsumes greedy behavior ($\eta = 0$) and regret-optimal policies such as UCB1. The main contribution is a tight characterization of the probability of learning failure, i.e., most agents will pull the suboptimal arm as a function of the reward gap and $\eta$. They extend this result to the case where agents are Bayesian and use truncated priors to inform indices. Strengths: - As the paper suggests, while the fact that greedy behavior causes learning failure with constant probability in standard classes of K armed bandit problems is folklore, there is no formal study of the boundaries of regimes where greedy behavior starts failing. Thus the motivation of the paper is solid. - The techniques for proving the lower bounds on the probabilities of failure are novel relative to standard bandit literature. Weaknesses: - The paper's conceptual takeaways are not surprising, and the technical contribution adds little beyond tightly characterizing the probabilities of failure as a function of $\eta$, the value of which is unclear for such a specific model of social learning. - The examples where the greedy behavior suffices in prior literature are contextual bandit environments where context diversity is the driver of exploration. So it seems that understanding the boundaries of greedy behavior must work with some interpolation between contextual and independent armed environments. Instead, the present paper focuses on a two-armed bandit environment where the confidence bounds are parametrized, the value of which is unclear since one anticipates learning failure for any fixed parameter value. 
- The interpretation of the results for the non-Bayesian setting is obfuscated by the dependency on $N_0$, the initial number of samples. It seems that a lot of technical maneuvering (e.g., assumption 3.2) arises because, when $N_0$ is small, one cannot eliminate the possibility of avoiding learning failure due to the confidence bounds being truncated at the boundary of $[0,1]$. This seems orthogonal to the central issue of focus (in this regard, Theorem 3.9 certainly appears to be cleaner). This makes the results appear too technical without adding anything substantial to the dialogue on the sufficiency of greedy algorithms for bandit learning. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could you comment on the qualitative difference between Theorem 3.9 and Theorem 3.1, and whether focusing on Theorem 3.9 with the very reasonable assumptions in P1-P2 would achieve the goals of the paper? --------------------------- Post rebuttal: Thanks for the clarifications. While my original concerns about the practicality of regimes and the surprise-factor remain, I see the conceptual value of the results and believe they deserve to be published. I have raised my score accordingly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The results pertain to a very specific two-armed bandit model and so the paper is explicit about its limitations and applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review, which raises several rather subtle issues. Let us address the stated weaknesses point-by-point. **[W1]** What might count as “surprising” is that we managed to *prove* any/all these things, let alone in such generality. Given that such results remained unproved for so long, despite their obvious foundational value, we believe that success was far from clear a priori. (Note that proving “expected” results is often valuable – as a super-extreme example, consider proving $P \neq NP$.) One “conceptual takeaway” that may actually be surprising per se is that we provide a lower bound on the failure probability that does not degrade with the “strength of initial beliefs”. This point is most clearly stated in the “fully Bayesian” result (Thm 6.1), where the failure probability is lower-bounded by the Bayesian-expected gap, independently of the Bayesian prior. For the frequentist results in Sec. 3, the situation is a bit more subtle: “strength of beliefs” is driven by $N_0$, and the non-trivial regime is $\Delta < O(1/\sqrt{N_0})$ (otherwise the initial data alone essentially resolves the best arm). Then we get a lower bound of $\Delta \cdot e^{O_c(\eta)}$. This should be contrasted with the “naive” failure mode in which all samples from the good arm fail, which only happens with probability exponentially small in $N_0$ (see Remark 3.7). For good measure, let us recap our other "conceptual takeaways": - The greedy algorithm fails, essentially for any two-armed bandit instance. - In fact, any myopic behavior fails, with "severity" driven by $\eta$. (More on this under "generality" in the general rebuttal.) - Optimism is essentially optimal for a given $\eta$, while pessimism does much worse. - A small fraction of optimists goes a long way. - The failure results admit both "frequentist" and "Bayesian" framings.
We believe the significance of $\eta$ is very clear: it defines the range of permissible behaviors, has a clear interpretation in terms of confidence intervals, and drives the “severity” of failures throughout various twists and turns of our technical story. **[W2]** We completely agree that a good path forward is to interpolate between a few complex environments where the greedy algorithm is known to work and the simple structure(s) in which it is known to fail. In fact, this direction is/was very much on our radar! However, we believe our current paper is a necessary precursor – and a badly missing one! Please also see para 1 and “greedy” in the general rebuttal. A minor point: what drives the positive results on Greedy in prior work is not just the diversity of contexts but also some structural assumption that enables aggregation (linearity in some papers, separability in some others). A few other positive results are driven by a (very) large number of arms (under some additional assumptions). **[W3 and Q]** The meaning of $N_0$ is that it controls the “strength of initial beliefs”, in the natural frequentist interpretation thereof as the amount of initial data. So, in a sense, it is more than just an annoying technicality that we need to account for. In a narrower technical sense, we need to consider larger $N_0$ in order to avoid a trivial failure mode when *all* initial samples of the good arm have reward 0 (see Remark 3.7). Then, indeed, we need some “technical maneuvering” around the case of very small $N_0$. To answer your direct question, please see “not merging Thm 3.1 and Thm 3.9” in the general rebuttal. We emphasize, however, that there’s much more to our technical story than the distinction between Thm 3.1 and Thm 3.9, e.g., see “technical story” in the general rebuttal.
Summary: The paper posits a bandit social learning (BSL) problem, which consists of a multi-armed bandit (MAB) problem where at each round an arm is pulled by a newly arrived agent, as a function of the history. This is motivated by reviews on online platforms, where agents make decisions sequentially based on past reviews. Compared to standard MAB, where a centralized algorithm is run to minimize regret, in BSL each agent acts myopically and can be, e.g., greedy, optimistic, or pessimistic w.r.t. confidence intervals constructed around the reward estimate for each arm. The authors analyze the two-armed setup and provide several learning failure results in identifying the optimal arm. Notably, the learning "fails" when agents are greedy or pessimistic, while it achieves optimal regret when a small fraction of the agents are optimistic. This was a general belief in standard MAB, but to the best of the authors' knowledge, their results are the first ones to assess it theoretically. Similar learning failures are also established for Bayesian agents who act according to their posterior. Strengths: - The paper reads really well and the results are sound. Moreover, the authors did a good job of positioning it within the existing related literature. - The introduced BSL problem is simple yet very interesting, and of practical relevance, e.g., in review systems. - Although the flavor of the results is quite specific to the BSL problem (where agents have different myopic behavior), the negative results apply to the more general MAB, constituting a relevant contribution to the broader bandits community. Weaknesses: - The authors consider 2-armed bandits for the sake of their analysis and negative results. However, it is not clear how the picture would change in the presence of more arms. - No experiments are performed. Although the paper is of a theoretical nature, it would be nice to demonstrate the proven failure probabilities and how the injected optimism facilitates learning.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would like to know the authors' view on extending such results to more than 2 arms. In particular, is a bigger set of arms always more detrimental in terms of learning/exploration? Should the initial number of samples $N_0$ intuitively scale with the number of arms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful and largely positive review! **[W1 and Q]** Please see “K>2 arms” in the general rebuttal. In particular, adding more arms could affect the failure probability positively, negatively, or not at all, depending on the problem instance. Re the semantics of $N_0$: indeed, it would make sense to have $N_0$ samples of each arm. (Note, however, that we think of $N_0$ as an exogenous parameter.) **[W2]** We’ve included some experiments as requested; please see the general rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I keep my score.
Summary: This paper studies the bandit social learning problem with two arms. Instead of aiming to design an efficient algorithm with theoretical guarantees, the authors demonstrate negative results regarding myopic behaviors of agents. The main contribution of this paper is proving the regret lower bounds for $\eta$-confidence agents, together with nearly matching upper bounds, which explains why greedy algorithms (i.e., always exploit) are not efficient and why the UCB1 algorithm requires extreme optimism. Strengths: * This paper is well written and easy to follow. The authors do a great job of explaining complex concepts in a clear and concise way. * The proofs appear solid and complete to me; both UCB-style and Bayesian agents are taken into consideration in this paper. Weaknesses: * I believe this is a good paper because it provides solid theoretical insights into why agents that behave less optimistically than UCB-type algorithms perform worse, and how this degradation varies with the degree of optimism. However, I feel that the proofs do not introduce any new techniques, which lowers my overall score. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It might be better to add experimental simulations to validate the theory. *Typo* Line 39: the number of agents $T$...: should this be the number of time steps? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful and largely positive review! **[W]** Re techniques, please see “techniques” in the general rebuttal. We would also like to re-emphasize the generality of allowed behaviors; please see “generality” in the general rebuttal. **[Q]** We’ve included some experiments as requested; please see the general rebuttal. Re Line 39, “the number of agents $T$”: in our setup a new agent shows up in each round, so agents and rounds are essentially the same. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thanks to the authors for their detailed response. I think my concerns are resolved and I will keep my initial score. Good luck!
Summary: The paper proposes a model of social learning under myopic behavior, where a 2-armed bandit problem is considered with agents that behave myopically. Upper and lower bounds are derived on the probability that all but at most $n$ agents choose the bad arm. Strengths: - The paper is well written and easy to follow. - The considered model is novel. - The bounds on the failure probability are tight. Weaknesses: - The results are limited to the case of 2 arms. - Assumption 3.2 is strong and seems to be unnecessary. For example, if $\mu_1$ is very small, then a lot of initial samples $N_0$ are required even if $\mu_2$ is close to 1, which is an easy-to-solve case that does not require a lot of samples. - The considered myopic strategies are limited to a small number of strategies (confident, unbiased, optimistic, pessimistic, Bayesian). - The results are not surprising and can be obtained using standard concentration inequalities from the bandit literature, e.g., see [1]. - As the paper is only concerned with failure probabilities, without the need to decide on a strategy, I am not convinced of the importance of the results. [1] Lattimore, Tor, and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful and explicit review. Let us respond point-by-point to the stated weaknesses, in the same order. **[W1]** Please see “K>2 arms” in the general rebuttal. **[W2]** Assumption (3.2) is not that strong in the theoretical sense: it merely requires $N_0$ to be larger than a constant. Essentially, it ensures that the confidence intervals are proper subintervals of [0,1] – which matters even if $\mu_1$ is close to 0 and $\mu_2$ is close to 1. This assumption is needed for our analysis of Thm 3.1. However, we do remove it later on – which is the whole point of Thm 3.11 – but at the cost of minor-but-technical assumptions on the behaviors and some substantial complications in the analysis. We’d be thrilled to get rid of this assumption in any simpler way, suggestions welcome! **[W3]** On the contrary, $\eta$-confidence in Thm 3.1 (and Thm 3.9) allows for a range of behaviors, incl. unbiased and pessimism/optimism as special cases. Please see “generality” in the general rebuttal. **[W4]** Our negative results critically rely on anti-concentration and martingale tools, which are **not** standard in bandit lower bounds. In particular, it is really unclear how to get them while only using concentration tools, let alone standard ones such as Azuma-Hoeffding. Our positive results require *non-standard* tools to handle concentration, and (more importantly) a rather delicate way to define “clean events” and argue about them. Please also see “techniques” in the general rebuttal. **[W5]** There are thousands of papers on designing bandit algorithms, but ours is the first one on learning failures. Thus, we believe we fill an important gap in the literature. Further, learning failures are a very common theme in the vast literature on social learning, and usually are considered a main result (and often *the* main result). 
Learning failures arise in different technical settings and due to different reasons; we discuss this literature in Appendix A. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we were wondering whether our response has addressed your concerns. We'd be more than happy to provide any additional clarifications. Regarding the "surprise" factor, please also see our response to Rev Nim8 ("[W1]" therein).
Rebuttal 1: Rebuttal: Thanks for the thoughtful reviews. Many of our points are relevant to several reviews at once. **[SIMULATIONS: NEW]** As requested, we provide simulations to illustrate our main findings. We focus on the fundamental regime when agents are homogeneously all $\eta$-optimistic (resp., all $\eta$-pessimistic) for some fixed $\eta\geq 0$. Mirroring our negative results, we investigate the probability of a learning failure. Consider the event $F_t$ that the bad arm is chosen in all rounds between $t$ and the time horizon $T$. We re-run the simulation 1000 times, and plot the fraction of runs for which $F_t$ happens, as a curve over time $t$. We plot such curves for several representative values of $\eta$, ranging from LCB to greedy to UCB. Qualitatively, we find significant failures which (predictably) get worse as $\eta$ decreases (treating LCBs as negative $\eta$). We consider mean rewards $0.5 \pm \epsilon$, with $\epsilon=0.05$ ("large gap", top) and $\epsilon=0.01$ ("small gap", middle). We also investigate UCBs with larger $\eta$, and find similar failures (with smaller probs) at a larger $T$. (We check that including "weaker" failure modes would not change this plot by much.) We'll be happy to include [even] more refined simulations in the final paper, if requested. \ \ **[K>2 arms]** We emphasize that 2 arms is the fundamental case for negative results, which are the main theme in our paper. (The purpose of our positive results is to better characterize the failures.) Besides, many papers on social learning concentrate on two arms as a fundamental case, even for positive results. So, we strive to fully understand the 2-armed case, which – as we find – allows for elegant guarantees, yet requires a rather complex “technical story” (see below). That said, our negative results trivially extend to some instances with K>2 arms: e.g., just add K-2 arms with 0 reward.
For some other instances, failure probabilities get much smaller (e.g., with many “best” arms, all of them need to experience a “bad” random event), or much larger (e.g., one good arm and many equally bad arms). We can spell out such extensions in the revision. However, a general characterization of failure probability for K>2 arms is likely to be much more cumbersome, as it would now depend on several arms. While it may be within reach given our techniques, the “technical story” in such a characterization is likely to be quite complex (e.g., as complex as the story in our paper, and possibly more so). We feel strongly that we are already near the limit of what one could put into a single paper, both conceptually and because of the page limit. \ \ **[Generality]** While focusing on the basic learning problem, we allow considerable generality on the “behavioral side” (Lines 32-38, 183-189). We allow any behaviors consistent with the confidence intervals, possibly randomized and/or correlated across arms. In addition to greedy/unbiased behavior and varying levels of optimism/pessimism, this includes, e.g., versions of “active arms elimination” and “Thompson Sampling” (a.k.a. “probability matching” in behavioral economics). These behaviors can be arbitrarily different across agents, and also across arms (e.g., optimism on one arm, pessimism on the other). We also accommodate a form of “recency bias”, whereby one is more optimistic about an arm if more recent observations are better than the older ones. All these behaviors are well-documented in the literature on behavioral economics. We will include a more detailed discussion in the revision. \ \ **[Greedy]** We believe the failure analysis of the greedy algorithm should stand on its own, as a badly missing foundational piece for bandit theory. While consistent with our expectations, it was not clear in advance which assumptions would be needed and what would be the “strength” and generality of the learning failures.
\ \ **[Techniques]** Our lower-bound analysis relies on anti-concentration and martingale tools, which are not very standard per se (see Lines 280-285), and very non-standard for bandit lower bounds. However, the main technical complexity is in _applying_ these tools, i.e., setting up the right events and arguing what happens when these events hold (especially so for Thm 3.9). The positive results reuse the standard UCB machinery, but require a much more delicate setup and analysis of the “clean events” (particularly so in Thm 4.5). Unfortunately, we could only hint at all this complexity in the body of the paper! (Lines 85-103). Our techniques and tricks provide a foundation for subsequent work on bandit social learning in more complex learning problems. \ \ **[Technical story]** The technical story in our paper is quite intricate, illustrating the complexity of the problem even for 2 arms, and necessitating a particular order of presentation. The story proceeds from the main failure result (Thm 3.1) to handling a somewhat trickier case of small $N_0$ (Thm 3.9), to proving a much stronger failure for pessimistic agents (Thm 3.10). Bayesian beliefs are handled as a special case (Thm 5.1). The upper bounds proceed from uniform optimism (Thm 4.1) to upwards-varying optimism (Thm 4.4) to a small fraction of optimists (Thm 4.5). Finally, there’s a much stronger result in a “fully Bayesian” setup, with a different proof. \ \ **[Not merging theorems Thm 3.1 and Thm 3.9]** Merging them would lose the appealing generality of behaviors in Thm 3.1, as well as its relative simplicity compared to Thm 3.9 (both in the statement and in the analysis). Further, the symmetry assumption (P1) in Thm 3.9 may actually break if an agent gives more “benefit of a doubt” to one of the arms when both arms look very bad. (E.g., what if when both Chinese and Italian restaurants look bad, one chooses Chinese.)
This is why we chose to separate these two results, specialize Thm 3.9 to the case of very small $N_0$, and in fact use Thm 3.1 to *motivate* Thm 3.9. Pdf: /pdf/c268d4ceb56515aa8e3e53652589d95123643daf.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper considers a social learning problem motivated by reviews on online platforms, where (myopic) users make (purchase) decisions based on historical reviews and generate new reviews in an online fashion. It considers several different user behavioral types, such as confidence-based optimistic, pessimistic, and neutral users. In addition, the users could have a Bayesian belief on the mean reward and act according to the posterior update based on the history. The paper characterizes several cases where learning failure occurs, i.e., when all but a few agents choose the bad arm. Strengths: 1. This paper introduces and analyzes an interesting setup of social learning. I can see how this setup may be applied to many real-world applications, such as online recommendation systems (especially in the presence of purchase decisions). I also expect many research projects to follow up on this fundamental social learning model. 2. The technical tools used in this paper are somewhat different from the standard bandit literature. 3. The positive results and the implications regarding the optimistic agents are very interesting. Weaknesses: 1. The presentation and structure of this paper require some improvements. The current version has many results scattered around the paper, making it hard for readers to switch context from one section to another. Section 3 is named "learning failure", but it actually considers several different setups (Theorems 3.1, 3.9, 3.10). Why not merge Theorems 3.1 and 3.9? It might also be a good idea to use a table to summarize the results under different setups, e.g., when learning fails and when it doesn't. 2. Overall, I find the problem setting interesting, but the technical results are not that surprising. For example, Theorem 3.1 requires a fixed \eta for every agent, which more or less violates its motivation of social learning --- the agents tend to have heterogeneous behavior types. 
I also wish the paper could connect its results and setups to real-world applications (instead of imagining purely idealized scenarios). 3. The entire paper chooses to focus only on the two-arm case. I expect the authors to point out the exact technical challenges (or practical motivations) that prevent the analysis from being extended to the general case. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see the points listed in the weakness part. I really liked the problem setup, and I am willing to raise my score if the authors could convince me of the technical/conceptual significance of their existing results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. Let us respond point-by-point to the stated weaknesses. **[W1]** The structure of the paper is driven (and necessitated) by the somewhat intricate collection of results, see “technical story” in the general rebuttal. Given the commonality / connections both in story and in techniques, we believe it makes sense to group all negative results in one section and discuss them jointly; likewise, all positive results and all “Bayesian beliefs” results. In particular, negative and positive results match “globally” but not “point-by-point”: instead each “side” has its own “sub-story”. Re “not merging Thm 3.1 and Thm 3.9”: please see the general rebuttal. Adding a table is a great idea, we’ll do it! **[W2]** Please note that our negative results allow heterogeneous behaviors. In Thm 3.1, “$\eta$-confidence” with fixed $\eta$ defines not a particular behavior but a wide range of allowed behaviors, including heterogeneity, see “generality” in the general rebuttal. Likewise Thm 3.9 (under minor restrictions of symmetry and monotonicity). Even Thm 3.10, while focusing on pessimists, allows for varying levels of pessimism. With this generality, we plausibly capture what myopic real-world agents might do when faced with a simple learning problem. Moreover, the particular behaviors that we capture are well-documented in the literature on behavioral economics and/or cognitive psychology (e.g., optimism/pessimism, probability matching, recency bias). We note, however, that results for homogeneous behavioral types are quite common in the literature, as they tend to make the intended points in a most concrete and elegant way. For example, Thm 4.1 obtains a clean upper bound that matches our negative results. (Meanwhile, Thm 4.4 and Thm 4.5 drill deeper to investigate heterogeneity.) **[W3]** Please see “K>2 arms” in the general rebuttal. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. 
After reading the rebuttal and other reviews, I have decided to maintain my initial score. --- Reply to Comment 1.1.1: Comment: Thanks for your response! We would appreciate it if you could point out whether we've addressed some of your concerns, and what the remaining ones are. Regarding what might count as "surprising", please also see our response to Rev Nim8 ("[W1]"). Many thanks, The authors
null
null
null
null
null
null
Your representations are in the network: composable and parallel adaptation for large scale models
Accept (poster)
Summary: The paper presents a study on the benefits of training a small cross-attention based adapter instead of performing full fine-tuning of a large VIT model. The authors propose a cross-attention layer they dub InCA that has trainable queries to cross attend to intermediate layers in the large pretrained VIT model and then extract the information by average pooling, layer-norm and Linear. They claim a single query is enough in most cases but also propose a multiple query version they call OpenInCA. Strengths: * There are strong benefits of having a methodical study of adaptation and I think this paper fits the bill. * The method is simple, well explained and a priori easy to reproduce and expand. * The experiments are broad enough to be interesting to the community. * There are clear computational benefits from the two stage training. Weaknesses: * The claim in L177 is a bit of a red flag for me. If only one query is enough, it may be that the way this is achieved is that the query is very far from the "latent" data and thus the attention weights become flat, i.e., performing average pooling. The nice thing is that it would be an easy and, I think, useful experiment to check this. Redoing figure 3 with avg pooling but with all the extra bits like the layer-norm that is in the model. I presume the LP doesn't have it? * I haven't found any experiments that validate the class-incremental learning benefits of the Open InCA architecture claimed in line 203. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Have you tried multiple self-attention layers instead of just one? This would be equivalent to running a small perceiver model [30] and then averaging the resulting latent tokens. * The average pooling has to be done to make sure that the cross attention IS the right architecture here and not a simpler one. It would also add a bit of depth to the paper and answer an obvious question. 
I think this is required to recommend acceptance. * Why do you include both tables 1 and 3 ? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I think there is no obvious negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and thoughtful review of our work and the insightful questions. We are pleased that the reviewer appreciates the methodological experimentation presented in the paper, the generality of the method, and the discussion of two-stage training. Below we address key questions raised by the reviewer and run the experiments suggested by the reviewer. >If only one query is enough it may be that the way this is achieved is when the query is very far from the "latent" data and thus the attention weights become flat ie performing average pooling. [An] Experiment to check this... [is] redoing LP with avg pooling but with all the extra bits. Thank you for the detailed and insightful question regarding the number of queries. The subtlety that can clarify this point is the presence of **multi-head** cross-attention in InCA. With multi-head cross attention, a single query is projected differently for each head. In our case a single query corresponds to 16 different query projections, thereby allowing a single query to surface multiple task-specific representations from the activation map. **Experiment with average pooling** Following the reviewer's suggestion, we repeat the adapter experiment using average pooling. Specifically, as you suggest, we keep all the extra bits of InCA (e.g. LayerNorm, projection layer of cross attention) and simply replace the learnable cross attention layer with average pooling (but still keep the projection layer for additional capacity). We will refer to this as InLPX, for intermediate linear probing "extended", which now has a layer norm, average pooling, projection layer, and a classifier head. We use InLPX applied to the same activations as InCA in the exact same settings of Table 1 ViT L/16. We report the average test error on the 11 datasets presented in Tab. 1 of the paper in the table below. 
In short, as the experiment shows, InCA outperforms average pooling, which given the experimental settings can be attributed to the relative expressivity of the multi-head cross-attention of the InCA adapter.

**Top-1 Test error over the 11 datasets presented in Tab. 1**

| Method | Full fine-tuning | InCA | Intermediate linear probing | InLPX (*New*) |
| -------- | -------- | -------- | -------- | -------- |
| Average | 10.0 | 9.8 | 17.2 | 14.1 |
| Maximum gap to full FT | 0 | 2.3 | 35.7 | 26.4 |

As you suggest, we observe that adding the additional bits of InCA to intermediate linear probing does lead to an improvement, namely from 17.2 → 14.1 average test error. **However**, even after repeating the entire procedure aside from cross attention as in InCA, InLPX still has a maximum gap of 26.4 points, compared with InCA's 2.3 point gap (>10x larger maximum gap), and has 44% worse test error relative to InCA. Given the experimental design of making InLPX match InCA in all aspects except cross attention, the gain can be directly attributed to the cross attention layer. **Detailed explanation on expressivity** We point out 2 key factors when comparing cross attention vs. average pooling: 1. **Multi-head**: The InCA adapter uses a multi-head cross attention module which has 16 heads by default (following the number of heads in the pre-trained model). In this example, the single query is equivalent to 16 distinct smaller-dimensional queries applied on the activation map, enabling diverse querying of task-relevant representations. 2. **Instance-based**: Another aspect we believe is important in the cross attention layer is that even if the queries are fixed, the aggregation of the activation map will not be constant between images; this is because the computed keys of the activation map depend on the particular input instance, which results in dynamic aggregation. 
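The two expressivity points above can be illustrated with a minimal numpy sketch. This is a hedged illustration, not the paper's implementation: the identity key/value projections and all names here are our own simplifications. It shows how a single d-dimensional query, reshaped into h head-specific queries, performs h distinct attention reads over the activation map.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def single_query_multihead_cross_attn(query, tokens, h):
    """One d-dim query attends over T tokens with h heads.

    query:  (d,)   the single learnable query
    tokens: (T, d) cached activation map of the frozen backbone
    Returns the (d,) pooled feature and the (h, T) attention weights.
    A real adapter would also have learned key/value/output projections,
    omitted here for brevity.
    """
    T, d = tokens.shape
    dh = d // h
    q = query.reshape(h, dh)         # h distinct head-specific queries
    k = tokens.reshape(T, h, dh)     # keys per head (identity projection)
    scores = np.einsum('hd,thd->ht', q, k) / np.sqrt(dh)
    attn = softmax(scores, axis=-1)  # one pooling distribution per head
    out = np.einsum('ht,thd->hd', attn, k)
    return out.reshape(d), attn

rng = np.random.default_rng(0)
tokens = rng.standard_normal((197, 64))  # e.g. one ViT activation map
feat, attn = single_query_multihead_cross_attn(rng.standard_normal(64), tokens, h=16)
assert attn.shape == (16, 197)  # 16 pooling distributions, not one flat average
```

Note that when all scores are equal the weights collapse to uniform average pooling, i.e. the InLPX baseline; the learned query is what lets each head deviate from that flat average, and because the keys come from the input tokens, the pooling also varies per instance.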
Lastly, note that Appendix D is concerned with analytically describing the benefits of cross attention and can be relevant to this discussion. >I haven't found any experiments to validate that the class-incremental learning benefits of the Open InCA architecture which is claimed in line 203. The experiments for Open-InCA are presented in Appendix A. We provide a quick summary of the results below. Recall that in Open-InCA, we learn a query specific to each class separately, as well as class-specific heads (diag-head). This completely disentangles the learning of additional classes from existing classes, which enables extremely flexible class incremental learning and class forgetting. Out of the box, Open-InCA achieves competitive results on the Split-CIFAR100 Class Incremental Learning benchmark without the task-specific architectural changes (e.g. a split routing classifier) used by state-of-the-art methods. >have you tried multiple self-attention layers instead of just one? This would be equivalent to running a small perceiver model [30] and then either average the resulting latent tokens. We have not trained InCA adapters with access to multiple activation maps; this is an interesting direction for future research. However, we observe that a *single adapter* utilizing a *single relevant activation map* is already capable of outperforming all other efficient adaptation methods, achieving the same accuracy on average as full fine-tuning for ViT-L/16. We also point the reviewer to Appendix E2, which presents a preliminary ensembling study of InCA adapters. >Why do you include both tables 1 and 3? Table 1 and Table 3 of ViT and SWIN architectures are included to demonstrate the generality and diversity of InCA on non-vanilla transformer architectures. **Conclusion** We thank the reviewer again for the thoughtful review of this work. 
We hope that by addressing the main concerns raised, specifically 1) the additional details on Open-InCA experiments (presented in Appendix A) and 2) clearing up the query question and repeating the experiment with average pooling, the reviewer considers raising their recommendation. --- Rebuttal Comment 1.1: Title: comment Comment: I thank the authors for their additional work and comments. I think this makes the paper easier to justify in my mind. I will thus change my rating to borderline accept. --- Reply to Comment 1.1.1: Title: Response to reviewer Comment: We thank the reviewer for taking the time to consider our rebuttal response and change their rating accordingly.
Summary: - This paper proposes a new way to adapt a pretrained deep neural network for downstream tasks called InCA. InCA does not modify the intermediary representations of the pretrained network and thus doesn't require backpropagating through it, which makes it memory- and compute-efficient. To use InCA, one first heuristically identifies a few layers, the activations of which are processed through cross-attention with trainable query tokens. - The cross-attention setup is more expressive than linear maps, which is supported by empirical and theoretical arguments. It also allows the addition of new classes through the injection of new query tokens. - Finally, because the forward pass of the original network is not at all modified, one can cache the relevant activations of the entire dataset once and apply InCA without calling the original model at all. It is also possible to adapt to many tasks in parallel for the same reason. - Experiments on many image datasets show that InCA performs competitively with finetuning and other efficient adaptation methods. Strengths: - This work includes a broad range of baselines in their experiments. - The proposal is simple yet appears to be effective. The stated benefits are technically sound. Weaknesses: - The paper "[presents] a framework for transfer learning that efficiently adapts a large base model by learning lightweight cross-attention modules attached to its intermediate activations." However, the experiments are limited to mostly computer vision classification tasks. Some of the baseline approaches, such as LoRA and BitFit, are extensively used for language understanding, language generation, and image diffusion. The results of this paper would be much more convincing if experiments in these domains were included, especially if InCA can outperform existing baselines. - A significant portion of the benefit of InCA comes from using a subset of the layer activations. 
However, this important choice (as described in B.2) is done heuristically according to Section 3. The paper can benefit from more clarity on how to choose such layers. For example, how are these layers chosen for the experiments? - The clarity of the writing can be improved. For example, it was not clear during my first pass what the dimension of z is and what T is. The description for diag-head is confusing: it appears to be a simple matrix multiplication between [a_1, …, a_c] and W. Section 3 overall can be better organized by heuristics, methodology, and benefits. - The “signature” phenomenon is interesting, but the given analysis doesn’t provide much insight. It would be great to hear more about what we can learn from these “signatures,” especially given that they are presented as one of the key advantages of InCA in the conclusion. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How competitive is InCA in other domains where many of the compared baselines, e.g., LoRA and BitFit, dominate? E.g., language understanding, language generation, diffusion, etc. - How is the subset of layers chosen for the experiments? Could a similar selection process benefit other baselines? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review of our work and appreciate that they found InCA a simple and comprehensive approach with a broad range of baselines and sound benefits. Below we address the main points. >A significant portion of the benefit of InCA comes from using a subset of the layer activations... how are these layers chosen for the experiments? We would like to clarify our layer selection process — we do not heuristically select the layers for each experiment; rather, we use the parallel training property of InCA (Lines 148-157) to **exhaustively train all layers in parallel**. We test the performance of each adapter using cross validation and finally report the best performing layer (retrained on all the training set). This is one of the key motivations of InCA: we wanted to find a way to exhaustively analyze each layer. Done sequentially, this is very expensive; instead, InCA's parallel training becomes an effective solution since we are capable of training 40+ adapters in parallel on a single GPU. In Sec. 3, we note that when running the experiments *we patently* observe that the best performing layers are from activations with residual connections and tend to be from the latter half of the network. Therefore we use that simple criterion for the adapter set; however, we can, and have, trained InCA attached to all of the network's blocks. > Could a similar selection process benefit other baselines? Yes, in Tab. 1 and 2 we report the In-LP and In-MLP-3 adaptation results (with "In" referring to intermediate). This corresponds to applying InCA's parallel training procedure with intermediate representations for linear probing and MLP networks. For other baselines that modify the internal representations of the network (e.g. LoRA, VPT), InCA's procedure cannot be applied since the adaptation modifies the backbone's execution. > The clarity of the writing can be improved. 
For example, it was not clear during my first pass what the dimension of z is and what T is. The description for diag-head is confusing... Thank you for the feedback; we updated the notation for T (# of tokens) from line 171 onward to be consistent and remove this ambiguity. To clarify regarding diag-linear: it is not exactly a matrix product; rather, it computes the diagonal of a matrix product, i.e. diag-linear([a_1, ... a_c], [w_1, ... w_c]) → [<a_1, w_1>, ... <a_c, w_c>]. >The "signature" phenomenon is interesting, but the given analysis doesn't provide much insight. It would be great to hear more about what we can learn from these "signatures," especially given that they are presented as one of the key advantages of InCA in the conclusion. We provide an extensive analysis with insights from the signatures produced by InCA in App. B. These results were relegated to the appendix due to the page limit. Below we provide a self-contained summary of InCA's signatures and some of the results presented in the appendix. We encourage the reviewer to check out App. B if they would like to find more details about the intermediate representation signatures. + **Signatures**: Training InCA produces a set of adapters attached to different network activations. Evaluating each adapter (in parallel) gives the performance of each particular activation map on a task. We observe unique patterns in the signatures on different tasks, indicating which intermediate representations are most helpful. + **InCA and partial tuning have matching curves (App. B.1 and Fig. 4)** + In App. B.1 we compare the InCA and partial tuning signatures. In partial tuning, we repeat fine-tuning experiments where we fine-tune all layers up to a "freezing point", e.g. training the last K layers of the network for varying values of K=1, ... L. 
+ In the same fashion as InCA, we produce a partial tuning signature by running a set of experiments with different partial tunings and recording their evaluated performance. By running this for each freezing point we get a signature (albeit at a much larger cost than a single InCA training run). + For partial tuning, as you tune more layers the tuning is more expressive and the test accuracy increases. Interestingly, the partial tuning plots follow an "elbow behavior": after a certain number of layers are used for tuning, the performance improves dramatically and roughly saturates afterwards (Fig. 4). + Even more exciting, the top InCA adapter matches the location of the "elbow", meaning the point of the elbow is at the same location where the representation found by InCA can be harnessed by the partially tuned network. This systematically shows the behavior of fine-tuning as "surfacing" existing representations. That is, when a particular activation layer is unfrozen it can be leveraged for downstream representation via tuning; alternatively, InCA efficiently finds those representations directly. + **Layer affinities (App. B.2, Fig. 5)** + In App. B.2 we review the InCA signature of the same task applied to different backbones and architectures (Fig. 5). + What we observe is that for many datasets, the intermediate representation signature is highly consistent between different architectures with the same type of pre-training, to the level of exact layer matches. This intriguing property shows how much of the representations of the network are independent of the architecture and are mostly a function of the pre-training task, even for very different architectures such as ViTs vs. CNNs. >Applying InCA for language generation, and image diffusion Applying InCA to generative tasks is an interesting avenue for future work where we think the modularity and efficiency of InCA can be leveraged. 
However, in the current work we focus on core discriminative tasks. We thank the reviewer for the detailed review and hope that, by addressing the main concerns regarding layer selection and "signatures", the reviewer considers raising their recommendation. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: My concerns are addressed. Please include the clarification in the revised manuscript. I've raised the score accordingly. --- Reply to Comment 1.1.1: Title: Response to reviewer Comment: We are glad the rebuttal addressed the main concerns raised by the reviewer. We would like to thank the reviewer for taking the time to review the contents of our rebuttal.
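As a concrete illustration of the diag-linear head clarified in this thread, here is a minimal numpy sketch (variable names are ours, for illustration only, not the paper's code): only the diagonal of the full matrix product is kept, so each class score depends solely on that class's own query output.

```python
import numpy as np

def diag_linear(A, W):
    """diag-linear([a_1..a_c], [w_1..w_c]) -> [<a_1,w_1>, ..., <a_c,w_c>].

    A, W: (c, d) arrays. Unlike a full linear head `A @ W.T`, only the
    diagonal is kept, so class i's score never mixes with class j's query
    output -- this is what disentangles classes for incremental learning.
    """
    return np.einsum('cd,cd->c', A, W)

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # per-class query outputs
W = np.array([[1.0, 0.0], [0.0, 1.0]])  # per-class weight vectors
print(diag_linear(A, W))  # [1. 4.] == np.diag(A @ W.T)
```

Under this sketch, adding a new class amounts to appending one row to A (a new query's output) and one row to W, without touching the existing rows.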
Summary: This paper proposes an efficient fine-tuning method that works in parallel to the pre-trained network. Based on cross-attention between intermediate activations, InCA can generalize to various classification tasks from different domains. Additionally, the framework inherently supports class incremental learning, multi-task learning and continual learning objectives. The overall objective is cross-layer feature merging using cross-attention between them while freezing the base model. Strengths: 1. There are various parameter-efficient fine-tuning methods in the literature; the major strength of this work is described in the last part of Sec. 3 - using cached activations to train the model in mere seconds. 2. The overall formulation of cross-attention is simpler and elegantly extends to multi-task and continual learning paradigms. 3. Experiments are shown on various vision classification datasets and one multi-task learning benchmark. Weaknesses: 1. The formulation of Open-InCA is not completely clear and can be better presented; however, I understood the complete idea, though I had to re-read it a couple of times for deeper understanding. 2. The field of efficient fine-tuning/transfer learning is rapidly moving, and using standard benchmarks helps in understanding the performance gains. The authors can give results on VTAB-1k, few-shot learning experiments, etc., as shown in SSF [1], VPT [2], FacT [3]. 3. Also, I found the comparison with existing methods a major weakness, as only LoRA, BitFit and AdaLN are shown. [1] Scaling & shifting your features: A new baseline for efficient model tuning. [2] Visual Prompt Tuning [3] FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness sections. 1. Standard benchmarks and a thorough comparison with existing works will make this work a lot stronger 2. 
A major emphasis on peak training memory, inference latency and parameter overhead needs to be provided. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The formulation of Open-InCA is not completely clear and can be better presented; However, I understood the complete idea, and I had to re-read it a couple of times for deeper understanding. We thank the reviewer for the engaging review of this work and value their appreciation of the elegance of InCA for modular multi-task and CIL settings. Regarding the unclear presentation of Open-InCA in Sec. 3, we first note that we provide a more extensive description and presentation of Open-InCA and class incremental learning experiments in Appendix A. As the reviewer suggests, we re-wrote and combined the presentation of Open-InCA in Sec. 3, which was previously split into 2 separate parts (L178 - 184) and continued in (L194 - 212), to give a complete picture of Open-InCA that fills in all the details directly in the main manuscript. The revised writing includes the discussion on "query only training" and highlights the disentangling of class representations for class incremental learning and forgetting. > A major emphasis on peak training memory, inference latency and parameter overhead needs to be provided. Regarding peak training memory, we refer the reviewer to Fig. 1 (Right), which reports **peak** training GPU memory for InCA as compared with other adaptation approaches; the GPU memory is measured for the training process with a batch size of 32. Regarding inference latency, as presented in Tab. 4, the inference cost of the adapter in InCA is dwarfed by the inference of the pre-trained model, with 2.6% additional overhead for an InCA adapter on ViT L/16. This is because the InCA adapter is a small shallow network made out of a few cheap sequential operations, with cross attention scaling linearly unlike self-attention. In the case of multi-task learning, the shared backbone inference computation leads to a 4.4x speedup on the ImageNet-to-Sketch benchmark. 
For regular, single task inference, as noted above, the overhead of the InCA adapter is < 3%. Further, if the top adapter appears at a particular intermediate layer, one can discard subsequent layers and use a truncated pre-trained model. As an example, taking ViT-L/16 with Stanf. cars (best block, 16) adaptation, we ran an inference benchmark for the purpose of this review using InCA with a truncated backbone. From this experiment we observe that InCA with a truncated backbone leads to 31% faster inference as compared with a fine-tuned network, since InCA benefits from removing the unnecessary later layers. Lastly, regarding parameter efficiency, InCA is parameter efficient with 1.3% of the parameters of a ViT L/16. Nonetheless, as we discuss in the paper (L115-119), we believe that parameter efficiency is only a piece of the picture for efficient adaptation, which should also enable efficient training. --- Rebuttal Comment 1.1: Comment: The revised writing will make things much clearer. However, a major concern is the limited comparison with existing works, which is still not presented in the rebuttal. This makes the work rather weak. The authors need to address this issue during the discussion period. --- Reply to Comment 1.1.1: Title: Response to reviewer Part 1 Comment: We disagree with the premise that we do not compare with existing work. In fact, we already extensively compare with the suggested work of VPT that the reviewer asks that we compare with (already in the manuscript, e.g. see Tab. 1, Tab. 7). Below we summarize the results of VPT on ViT L/16 as compared with InCA.

| Method | Full fine-tuning | InCA | VPT Deep |
|------------------------|------------------|------|----------|
| Average | 10 | 9.8 | 12.3 |
| Maximum gap to full FT | 0 | 2.3 | 6.8 |

We observe that InCA has a smaller maximum gap to full fine-tuning, 2.3 vs 6.8 for VPT. 
In addition, in Appendix C we provide a detailed comparison of the **computational efficiency** of VPT and InCA applied to larger models. We observe that per run, InCA is 290% faster than VPT on ViT-L/16 (with VPT's efficiency issues exacerbated for bigger models, cf. line 692, Fig. 1 Right). The 290% speed-up is for a single training run, and the gap widens if we compare *per dataset* due to the extensive hyper-parameter search required for good results with VPT (see the original VPT paper, Tab. 6). For the other works, SSF and FacT, we have added citations to both as additional interesting work in the related work section on parameter-efficient approaches. Note that each of SSF, FacT and VPT requires back-propagation through the entire model; as we discuss (lines 75-80), this makes them expensive for fine-tuning of large-scale models, akin to the computational costs of running full fine-tuning. On the other hand, InCA does not require back-propagation through the pre-trained model, which makes the method much more scalable to larger models (see Fig. 1, Right for a comparison of model scaling for InCA, VPT, and Full-FT). Further, as we discuss (lines 55 - 58), the modification of the backbone execution limits each of these approaches (SSF, FacT, VPT) in more flexible learning and inference scenarios such as multi-task and class incremental learning, whereas InCA's modular adaptation has a "one to many" property allowing for sharing of the backbone computation. In "Response to reviewer Part 2" below we discuss FacT and SSF in detail and compare them with InCA. As we note above, both FacT and SSF are parameter efficient but lack some of the other key benefits of InCA.
Summary: The paper presents a method termed InCA (Introspective Cross-Attention) to learn compact adapter modules for large vision models that can be used for various downstream tasks (image classification domains). The proposed approach has an advantage over entire-model finetuning due to its parameter efficiency, resulting in a smaller GPU memory footprint required for training. In addition, the proposed introspective cross-attention module is architecture-agnostic, enabling simple implementation for different intermediate network representations and network types. Strengths: The proposed approach (InCA) demonstrates stable performance when applied across different model types (ViT, SWIN, CNN). Unlike previous architecture-specific adaptors, the method can be implemented without modifications with different base model architectures. The experiments demonstrate that the method can reach performance comparable to entire-model finetuning. At the same time, the number of trainable parameters constitutes only a tiny fraction (1-2%) of the backbone model weights. Because the training process doesn’t require backpropagation through the backbone, it is GPU memory efficient, making the large pre-trained transformer-based backbones reusable for different downstream tasks. Compared to linear probing, the proposed cross-attention architecture of the adaptor module has a significantly larger “extraction capacity” that leads to an improved classification accuracy of the adapted model. When applied in parallel to different activation maps, the method produces a network “signature” w.r.t. the downstream tasks. All this highlights the fact that the internal representations of the pre-trained large models have sufficient representation power for many downstream domains. Weaknesses: The paper builds on the large body of literature on efficient transfer learning.
While exploring the utility of cross-attention as a choice for adaptor architecture, the authors follow the steps of many prior works, such as Perceiver [30] or [A]. On the other hand, the current study combines empirical results for different backbone architectures (e.g., both CNNs and Transformers) and suggests applying cross-attention modules in parallel to different activation maps to identify the most relevant features for the downstream tasks systematically. Thus, despite being a comprehensive study, the novelty is limited. As discussed in the Appendix, InCA, as an efficient and modular model adaptation framework, can be helpful for domain practitioners (e.g., medical imaging domain) to bridge the gap between cutting-edge research in visual representation learning and real-world applications. [A] Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation, in EMNLP 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you highlight the most interesting (maybe counterintuitive?) findings? “The second best adaptation approach is LoRa … at additional training costs” invites the comparison along axes other than the error rate. Could you compare the training costs directly? Table 2: “\dagger indicates full FT was avoided due to prohibitive computational costs” – could you elaborate on what costs were considered prohibitive and why? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review of our method, noting the parameter/training efficiency, the signatures produced by InCA, and the compactness and modularity enabled by the method, as well as that InCA "can be helpful for domain practitioners". We address the points of the reviewer below. > follow the steps of many prior works, such as Perceiver [30] or [A] The reviewer is correct that we are not the first to utilize cross-attention for its extraction capacity, and indeed in the related work section we cover different uses of cross-attention in the literature. We also thank the reviewer for pointing out reference [A], which we have added to the related work section for the expressivity of cross-attention. *However*, regarding [A], we note that the way cross-attention is used differs both in the task considered (natural language translation) and in two major ways in how the cross-attention is actually applied. In the suggested work, an encoder-decoder transformer is used for the machine translation task. In that setting, the base architecture already crucially utilizes cross-attention in the decoder layers, and the cross-attention happens between the encoded tokens (as keys) and the previously generated tokens (as queries). In that work, the authors show that optimizing just the cross-attention layers provides the expressivity needed for fine-tuning on the task. In contrast, InCA introduces randomly initialized adapters that did not previously exist in the architecture. Further, the queries in the cross-attention layer used by InCA are not computed from the activations but are themselves learned, optimizable parameters.
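The two distinguishing properties described in this reply (a newly initialized adapter, and queries that are free learned parameters rather than functions of the activations) can be sketched as follows. The dimensions, weight shapes, and the plain linear output head are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens, n_queries, n_classes = 16, 10, 4, 5

# Learned, free query parameters: NOT computed from the activations,
# unlike the encoder-decoder cross-attention of [A].
Q = rng.standard_normal((n_queries, d))
W_k = rng.standard_normal((d, d)) * 0.1
W_v = rng.standard_normal((d, d)) * 0.1
W_out = rng.standard_normal((n_queries * d, n_classes)) * 0.1

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def inca_adapter(tokens):
    """Cross-attend the learned queries over frozen backbone tokens."""
    K, V = tokens @ W_k, tokens @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_queries, n_tokens)
    pooled = attn @ V                     # (n_queries, d)
    return pooled.reshape(-1) @ W_out     # class logits

tokens = rng.standard_normal((n_tokens, d))  # an intermediate activation map
logits = inca_adapter(tokens)
```

Only `Q`, `W_k`, `W_v`, and `W_out` would be trained; the `tokens` come from the frozen backbone.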
Regarding the novelty of InCA, we note that while the architecture of the InCA adapter is simple, the parallel and exhaustive training of InCA does not appear in Perceiver (or any other work) and is key for the efficiency and for the search over relevant activations existing in the network. The lightweight and parallel approach of InCA enables adapting large-scale pre-trained models at extremely low cost (e.g., adapting ViT-G/14 on 1 GPU). The parallel training of InCA is also directly responsible for its ability to produce intermediate-representation signatures in a feasible manner, via computation sharing among the different trained adapters, all of which are newly introduced. Further, we would like to point out that we extend InCA to Open-InCA (presented in Sec. 3 and additionally in Appendix A with results), which introduces a novel adaptation approach where the learning of each class is done in a disentangled and separate fashion. This makes InCA a highly modular and flexible adaptation approach for class-incremental learning and unlearning, further illustrating the novelty and flexibility of the proposed approach. > “The second best adaptation approach is LoRa … at additional training costs” invites the comparison along axes other than the error rate. Could you compare the training costs directly? In our experiments we observe that InCA outperforms LoRA in all architectures tested, with LoRA being the second-best approach. With regard to training costs, InCA scales to larger models much more efficiently. In Figure 1 (right) we observe that since InCA does not back-propagate through the pre-trained backbone, training even architectures as large as ViT-Gigantic/14 is feasible under fairly modest computation and GPU memory constraints. We ran an experiment comparing InCA with LoRA on a ViT-H/14 architecture, where for InCA we use 20 adapters trained in parallel.
We observe that even with 20 adapters, InCA has a **65.3% lower GPU memory footprint** than training one LoRA adaptation on ViT-H/14. In addition, InCA and Open-InCA enable computation sharing for multi-task learning and continual learning, which is not feasible with LoRA (since LoRA modifies the backbone execution whenever it is applied to a model). Lastly, InCA is architecture agnostic and can be applied to CNNs, whereas LoRA requires self-attention layers in the architecture. > “\dagger indicates full FT was avoided due to prohibitive computational costs” – could you elaborate on what costs were considered prohibitive and why? For each entry in Tab. 2 we run multiple training runs with different learning rates on each of the 11 datasets used in Tab. 1 to report the average accuracies on those tasks. For ViT-Gigantic/14, which has 1.8B trainable parameters, running full fine-tuning on each of the listed datasets would have incurred prohibitively large costs for the given timeframe. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response.
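The disentangled per-class learning of Open-InCA discussed in this thread can be illustrated with a toy sketch. The per-class linear heads below are a simplification we assume for illustration, not the actual Open-InCA module; the point is only that adding or unlearning a class touches no other class's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
class_heads = {}  # one independent head per class (Open-InCA-style toy model)

def add_class(name):
    """Class-incremental learning: introduce a fresh head for one class."""
    class_heads[name] = rng.standard_normal(d) * 0.1

def unlearn_class(name):
    """Forgetting: drop one class; every other head is untouched."""
    del class_heads[name]

def scores(feature):
    return {c: float(feature @ w) for c, w in class_heads.items()}

for c in ["cat", "dog"]:
    add_class(c)
dog_before = class_heads["dog"].copy()
add_class("bird")      # incremental learning: a new head is appended
unlearn_class("cat")   # unlearning: only the "cat" head is removed
```

Because the classes never share trainable parameters in this scheme, `scores` over the remaining classes is identical before and after the edits for those classes.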
NeurIPS_2023_submissions_huggingface
2023
Summary: This work presents a method called Introspective Cross Attention (InCA), which aims to identify high-performing adapter models for handling downstream tasks using large-scale pre-trained models. InCA achieves competitive performance compared to the well-established baseline of full fine-tuning, while also enabling parallel multi-task inference. The experimental results effectively demonstrate the effectiveness of this method in terms of both performance and efficiency. Strengths: 1. The paper is well-written and easily comprehensible. The Introduction section, in particular, effectively establishes the context for the entire work. 2. The exploration of introducing adapters for large-scale models in the context of vision transformers is still relatively less explored. 3. The promising results on well-known transfer learning datasets at a smaller scale indicate the potential of the proposed method. Weaknesses: 1. The datasets employed in this study may not be the most appropriate test-bed for evaluating large-scale pre-trained models. Previous transfer learning works [1, 2, 3, 4] have achieved comparable accuracies using lightweight CNN models that demand significantly fewer FLOPs and parameters. To gain a comprehensive understanding, it would be valuable to compare the FLOPs and parameter counts. 2. While I acknowledge that the authors have utilized commonly used datasets for transfer learning tasks, it is worth noting that these datasets may not provide sufficient challenges when employing models such as ViT-G/14 and other large variants. References 1. [Co-Tuning for Transfer Learning](https://proceedings.neurips.cc/paper/2020/hash/c8067ad1937f728f51288b3eb986afaa-Abstract.html) 2. [Stochastic Normalization](https://proceedings.neurips.cc/paper/2020/hash/bc573864331a9e42e4511de6f678aa83-Abstract.html) 3. [Bi-tuning of Pre-trained Representations](https://arxiv.org/pdf/2011.06182) 4. 
[$\Delta$-Networks for Efficient Model Patching](https://arxiv.org/pdf/2303.14772) -- Table-7 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Could you please provide (or comment) a comparison in terms of FLOPs? 2. In terms of memory and compute how does CNN based transfer learning approaches compare with ViT based approaches? 3. This approach might suit very well for tasks like model patching, see [PAINT](https://model-patching.github.io/) and [Delta-Networks](https://arxiv.org/abs/2303.14772) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Discussed in the appendix Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review of this work and are encouraged that they find the presentation to have effective context and the method to be presented comprehensively, including in the realm of large-scale modern architectures. We address the points raised by the reviewer below. > This approach might suit very well for tasks like model patching, see PAINT and Delta-Networks We thank the reviewer for suggesting model patching. This is a very suitable testbed for the modular design of the InCA and Open-InCA adaptation, and we add and cite this relevant application in our multi-task discussion in the manuscript. By not modifying the pre-trained backbone architecture and by sharing the majority of the computation with the pre-trained backbone, InCA adapters can be used for patching an existing base model. Similar to model patching are the flexible learning frameworks we consider in Sec. 3 (multi-task learning) and Appendix A (class-incremental learning and forgetting), which do not exactly follow the model-patching framework yet are highly adjacent. Namely, through Tab. 4 we observe that InCA is the top-performing method on the ImageNet-to-Sketch multi-task benchmark and enables efficient multi-task inference, which is crucial for efficiently patching networks. Further, in Appendix A we show how, even at the level of a single adapter and a single class, one can add additional classes or unlearn a class flexibly. The corroborated results of InCA on multi-task learning and continual learning make it a great fit for model patching, and we have added this context in the paper. > Previous transfer learning works [1, 2, 3, 4] have achieved comparable accuracies We thank the reviewer for the suggested relevant references; we have incorporated [1, 2] in the related work section as methods that focus on the learning objective, and references [3, 4] as additional interesting approaches for adaptation (e.g., improving normalization).
We agree with the reviewer that these works can be complementary to the adaptation approach presented in InCA. However, we note that when discussing similar accuracies it is important to keep the discussion in the context of the learning task, among other details. For example, the suggested work [2] changes the *learning objective* by introducing additional unsupervised losses; as mentioned, these approaches are complementary with InCA yet operate orthogonally. In general, final test accuracies may also be influenced by factors such as different data augmentations, modified image resolutions, and many other boosting methodologies that improve results. We refrain from using such boosting methods: our study simply uses the 224 input resolution with traditional random-crop augmentation, for fair benchmarking and to avoid conflating multiple aspects of the learning task. We cannot find these details in some of the suggested works, which also makes direct comparison more challenging. > In terms of memory and compute how does CNN based transfer learning approaches compare with ViT based approaches? The memory and compute of most transfer learning methods directly correlate with the size of the pre-trained network (especially for methods that require back-propagating through the pre-trained backbone); this is mostly independent of the architectural family, ViT or CNN. The majority of proposed “efficient” adaptation methods are parameter efficient but often not compute efficient (e.g., VPT, see Tab. 7 of the Appendix), which makes them challenging to use with large-scale models. On the other hand, InCA is compute efficient and does not back-propagate through the pre-trained backbone, making it highly scalable to massive architectures. > Could you please provide (or comment) a comparison in terms of FLOPs? InCA is computationally efficient with regard to FLOPs as compared to other methods.
There is no back-propagation through the pre-trained network, which eliminates the FLOPs associated with computing gradients through each intermediate activation in the model's operation graph. The only backward operations propagated are those on the small adapter parameters themselves. During parallel training, multiple adapters share the FLOPs of the backbone's forward pass, which amortizes the computation between the adapters. For a comparison in terms of training costs (measured in wall-clock time) between InCA and the state-of-the-art transformer adaptation method VPT, see Appendix C, Tab. 7. --- Rebuttal Comment 1.1: Comment: I have carefully reviewed the initial submission, the authors' response, and the feedback from other reviewers. I appreciate the effort that has been invested in addressing the concerns raised, and the responses have addressed my concerns to a reasonable extent; however, I remain aligned with my initial assessment.
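The amortization argument in this thread (one shared forward pass, gradients computed only for the adapter weights) can be sketched as follows. The toy backbone, dimensions, and plain squared loss are illustrative assumptions, not the authors' setup; the structural point is that `Ws` is never updated and `backbone_acts` runs once per step for all adapters.

```python
import numpy as np

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]  # frozen backbone weights

def backbone_acts(x):
    """One shared forward pass; collect the activation after every layer."""
    acts = []
    for W in Ws:
        x = np.tanh(x @ W)
        acts.append(x)
    return acts

adapters = [np.zeros((8, 3)) for _ in Ws]  # one small linear adapter per layer

def train_step(x, y, lr=0.1):
    acts = backbone_acts(x)  # computed once, amortized across all adapters
    for i, (a, Wa) in enumerate(zip(acts, adapters)):
        grad = a.T @ (a @ Wa - y) / len(x)  # gradient w.r.t. the adapter only;
        adapters[i] = Wa - lr * grad        # nothing propagates back into Ws

x = rng.standard_normal((4, 8))
y = np.eye(3)[[0, 1, 2, 0]]
loss_before = np.mean((backbone_acts(x)[0] @ adapters[0] - y) ** 2)
for _ in range(50):
    train_step(x, y)
loss_after = np.mean((backbone_acts(x)[0] @ adapters[0] - y) ** 2)
```

No gradient ever flows through a backbone layer, which is why memory and backward FLOPs scale with the adapter size rather than with the depth of the pre-trained model.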
Summary: Firstly, I would like to kindly point out that this paper proposes an "adaptation" method; however, I believe there may be some serious concerns in multiple aspects. First, the paper appears to lack innovation in its methodology. Second, the experimental design also seems to have some significant shortcomings, such as lacking adequacy and detailed experimental evidence. Additionally, I would like to express my concerns about the writing of the paper, where some details seem to be missing and some over-claims are made. Strengths: (Positive) Although this paper possesses some flaws, including the lack of innovation and rigorous experimental design, I must acknowledge that the approach in this paper appears reasonable to some extent. Additionally, it is commendable that the authors have provided a significant amount of supplementary material. However, I believe these aspects may not fully compensate for the shortcomings observed in other areas of the paper. I genuinely hope that in future research, the authors will earnestly consider these critical points and endeavor to enhance the quality of their work. Weaknesses: (Negative) Allow me to further inquire about some of the claims made in the paper. In the first sentence of the article, the authors claim that in natural language tasks, the data and hypothesis space are shared, which appears quite astonishing. Considering the diversity of tasks in natural language, can this assumption be readily applicable to every task? I believe this general statement may lack sufficient basis and in-depth investigation. (Negative) Furthermore, the authors assert that their model can enhance "robustness," but it is important to note that "robustness" is an extremely specialized concept. Therefore, I hope they can provide more detailed experimental evidence to support this claim. A simple assertion may not be sufficient to substantiate such an important proposition.
(Negative) I must emphasize that the approach presented in this paper does not seem to demonstrate significant differences compared to existing methods, including but not limited to LoRA. This apparent lack of innovation in the technical aspects is somewhat disappointing. (Negative) Moreover, it appears that the authors' method has not undergone sufficiently large-scale model validation to demonstrate its feasibility in terms of large model transferability. Additionally, the model they have chosen may not entirely qualify as a so-called "foundation model." This lack of rigor in the experimental design raises doubts about the viability of this method. (Negative) Additionally, the authors seem to have not thoroughly validated the capabilities of their method on various diverse tasks. The scope of their experiments appears to be rather narrow, omitting coverage of multiple tasks such as generative models and discriminative models simultaneously. For instance, in generative models, the training cost of the so-called "stable diffusion model" may be high, but its adaptation is crucial. Unfortunately, the authors' in-depth research and contributions in this area seem to be limited. (Negative) What is also concerning is that the authors of this paper appear to have a somewhat limited perspective, primarily focusing on a few tasks they studied, while neglecting the introduction of task details. Moreover, they may hold the belief that ImageNet pretraining is the default choice worldwide, a viewpoint that is challenging to comprehend. In reality, a foundation model should not be confined to ImageNet pretraining but should consider larger benchmarks to enhance its generalization ability. I would like to strongly urge the authors to be mindful of this and clearly indicate in the paper where their model was actually pretrained. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See *Weaknesses Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No. See *Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reviewing this work. We address each of the reviewer’s points below: >authors hold the belief that ImageNet pre-training is the default choice worldwide, a viewpoint that is challenging to comprehend While we consider ImageNet pre-training for some of our experiments, we do not believe it to be the default choice. Indeed, we consider diverse pre-training data. In our experiments we use pre-training from different large-scale datasets **including LAION, OpenAI 400M**, and **Instagram 3.5B-17K**. Tab. 1 uses ImageNet-22K pre-training, and we note that ImageNet still serves as a strong pre-training in computer vision [1,2,3,4]. >this paper does not seem to demonstrate significant differences compared to existing methods, including but not limited to LoRA. This apparent lack of innovation in the technical aspects is somewhat disappointing. As we note in lines 87-90, InCA has key differences with respect to LoRA and other common adaptation methods, which enable new use cases (continual and multi-task learning). Moreover, unlike LoRA, InCA enables massive parallelization and an efficiency boost both at inference and training time (lines 55-57), with a 65.3% lower GPU memory footprint than LoRA on ViT-H/14. At the same time, InCA achieves 47.7% lower relative test error than LoRA (Tab. 1). We provide the details on the differences below: * **Architecture agnostic**: InCA can be automatically extended to any backbone architecture, whereas LoRA cannot. InCA is simply applied on top of any architecture, including CNNs, without any change (as we show in Sec. 3, Tab. 2). On the other hand, LoRA relies on self-attention layers, limiting its use to transformers, and requires manual and cumbersome changes, replacing self-attention with new custom layers. * **Different learning scenarios** * **Continual learning**: Unlike LoRA, the modular Open-InCA (Sec. 3) disentangles the learning of each class.
This allows both class-incremental learning (CIL) and unlearning (forgetting) of a class without affecting other classes. In App. A we show that Open-InCA achieves competitive results on the Split-CIFAR100 CIL benchmark. * **Multi-task learning**: Unlike methods such as LoRA, VPT, etc., InCA does not alter the pre-trained model execution, therefore allowing computation sharing for multi-task inference. In Sec. 3, Tab. 4, InCA achieves the top result in terms of accuracy and performs well in inference time due to computation sharing. * **Computational efficiency**: InCA adapters share almost all forward computations with the backbone, yet training does not require back-propagation through its layers. For example, on a ViT-H/14 architecture, InCA with 20 learned adapters has a **65.3% lower GPU memory footprint** than training one LoRA adaptation on ViT-H/14. >the authors' method has not undergone sufficiently large-scale model validation to demonstrate its feasibility... Additionally, the model they have chosen may not entirely qualify as a so-called foundation model. * Our study of InCA includes 9 separate pre-trained models, each evaluated on 11 vision datasets as well as CIL and multi-task benchmarks. Our experiments include foundation models with diverse pre-trainings that have large architectures and use massive amounts of pre-training data. We provide the specific details below: * **Diverse pre-trainings:** In the paper, we experiment with models pre-trained on diverse datasets, such as * **3.5B Instagram images** based pre-training (ResNext101) [5] * **400M image-text pairs** for the OpenAI CLIP pre-training (ViT-L/14) [6] * **LAION 2B+ image-text pairs** used for the OpenCLIP pre-training (ViT-H/14, ViT-Gig/14) [7] * **Large architectures**: In Tab. 2 we use the ViT-Gigantic/14 foundation model of OpenCLIP [7].
With over 1.8B parameters, this is the ***largest public pre-trained vision backbone*** available today (among `timm` and `Huggingface` vision backbones). >"robustness" is an extremely specialized concept. Therefore, I hope they can provide more detailed experimental evidence to support this claim. To clarify, the paper focuses on developing a modular and efficient method generalizing to a variety of visual classification tasks. In this context, robustness of the method refers to the ability of InCA to be applied to a diverse and challenging set of classification tasks *across a variety of domains*. We added a statement in the revised paper to avoid confusion with different notions of “robustness” as defined, for example, in the adversarial-attack or optimization communities. In Tables 1, 2, and 3 we observe that InCA achieves uniformly better transfer than other methods on average over the 11 different tasks and different architectures. >Allow me to further inquire about some of the claims made in the paper. In the first sentence of the article, the authors claim that in natural language tasks, “the data and hypothesis space are shared”, which appears quite astonishing. To clarify the statement: the field of NLP has seen great progress in recent times due to LLMs. Part of this success is due to the fact that many different NLP tasks can be recast as a sequence-to-sequence generation problem (where both the input and output sequence, i.e., data and hypothesis, live in the same shared space) [8]. For example, traditional NLP tasks such as entailment and NER are cast as seq-to-seq tasks via prompts. We note that the paper does not aim to explore NLP foundation models; this statement should be taken as introductory to the narrative, which then directs the reader towards the vision domain. It is **not** a main claim to be established by this work.
>Using InCA in generative models / the so-called "stable diffusion model" Our paper is concerned with fine-grained visual classification, a discriminative task. We do not make claims for InCA regarding generative tasks in this manuscript, including stable-diffusion adaptation. --- Rebuttal Comment 1.1: Title: response to authors Comment: -- ImageNet-pretrained models are not foundation models. In this paper, only Table 2 reports a few results regarding CLIP, ViT-H/G, and ResNext-101. The primary focus of this paper is on ImageNet pretraining. Could this potentially be seen as an overclaim? -- If the authors are not acquainted with NLP, kindly avoid employing unscientific overclaims. -- If the authors are not familiar with certain scientific terms, such as "robustness," please refrain from using them without providing a precise definition. In summary, this paper seems to include some overclaiming, which might diminish its scientific quality. It would be advisable to consider toning down the language.
Learning Trajectories are Generalization Indicators
Accept (poster)
Summary: The paper introduces a novel generalization bound that incorporates trajectory information, aimed at providing deeper insights than existing methods on generalization at different points during training. The key idea is to analyze the increase in generalization error at each point in training by linearizing the network. Experimental results confirm the effectiveness of the proposed approach in capturing generalization error throughout the training process, even with adjustments to learning rates and label noise levels. Strengths: **Originality** As far as I am aware, this is the first generalization bound of its kind; I'm not aware of any other that uses the approach used in this paper. A major strength of the paper is that the derivation of the bound and the bound itself are simple to understand, but still seem relatively powerful. **Quality** The theory in this paper appears solid. Experiments are also conducted well; testing on VGG13 trained on CIFAR-10 is a good choice. The authors experimentally validate the theoretical assumptions, which also is a major plus. I also view the fact that the bounds predict empirical generalization performance in Figure 3 as a major strength. **Clarity** Generally speaking, the paper is understandable. **Significance** The paper seems to take a relatively different approach than other works deriving generalization bounds for neural networks. Thus, I view the main significance of the paper as providing a new set of theoretical techniques. This paper is likely to be of significance to the field of developing practical generalization bounds for neural networks. Weaknesses: In my mind, the main drawback of the paper is that it's hard to establish the significance of the proven results without comparing the numerical generalization bounds implied by Theorem 3.6 with those proved by prior work. Table 2 is helpful, but it would also be good to establish the relative tightness of these bounds numerically if possible.
Also, as the authors point out, the small learning rate assumption is restrictive, but also in line with some prior work. Also, here are several issues with the writing throughout the paper. For instance, - "Even though, the function space of DNN is large," in the introduction - "studying the generalization of DNNs by exploring property of SGD," in related work - " the value of concerning" in section 3 - "restricttion" in section 3 - "popular gradient" in section 3 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there some way to at least estimate $\gamma'$ and $\mathbb{V}_m$? Relatedly, how does the tightness of the bounds produced by this method compare to other practical generalization bounds? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are adequately addressed; the authors devote an entire section of the paper to limitations. No negative potential societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments. In the overall response, we devise a toy dataset to compare our results with previous works. We will check and fix the typos in the paper.
Summary: This paper studies the generalization of a general function class under the (S)GD algorithm with minimal assumptions. Specifically, it gives a generalization (upper) bound based on several characteristics of the training trajectory: the variance of the gradients, the gradient norm, the training loss values, and the learning rate. Numerical experiments are conducted to verify the assumptions and the generalization bound. Strengths: * The paper is written clearly and the ideas are presented in the proper order. * This paper studies generalization for a very general function class, not only neural network models, which can provide more insight in other, non-neural-network-based scenarios. Weaknesses: The major weakness is that this paper does not go into detail about the asymptotic order of the generalization bound in Theorem 3.6. To show the superiority of the newly proposed method, it is very important to provide (at least) an asymptotic analysis of the generalization bound with respect to the sample size $n$. For example, in which cases does the model exhibit benign overfitting (the generalization gap goes to zero as $n \to \infty$)? Can this new generalization bound cover (or outperform) previous bounds in some specific settings, e.g., linear regression, kernel regression, or overparameterized neural networks? In the current version of this paper, even though it provides a general framework for generalization analysis, I cannot see the potential of this generalization bound. --- Another weakness concerns Assumption 3.4. In fact, this assumption trivially holds for a finite set $\mathrm{w}$, because we can always take the supremum $\sup_{w \in \mathrm{w}} \|\nabla F_{\mu}(w)\|$ and the infimum $\inf_{w \in \mathrm{w}} \|\nabla F_{S}(w)\|$ and then obtain $\gamma$.
However, once the set $\mathrm{w}$ contains a point that is a stationary point of the empirical risk, i.e., $\|\nabla F_{S}(w^*)\| = 0$ for some $w^* \in \mathrm{w}$, Assumption 3.4 implies that $w^*$ is also a stationary point of the population risk $F_{\mu}(w)$, which is a very restrictive property since the data distribution matters. It is also shown in Figure 1 that as the training loss goes to zero (the epoch increases), $\tilde{\gamma}_t$ may diverge. That is to say, the relaxed Assumption A.7 is more reasonable. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: For Theorem 3.6, the optimal generalization bound one can get is $\mathcal{O}(\eta_m)$, where $\eta_m = \max_t \eta_t$. This means that to get a non-vacuous bound, $\eta_m$ should decay to zero with respect to the sample size $n$, but a sample-size-dependent learning rate is not common in practical settings, so why is there such a term? Compared with the uniform-stability-based generalization bound $\mathcal{O}(\frac{\sum_{t=1}^T \eta_t}{n})$ in *Hardt et al.*, we can see that as long as $T = o(n)$, constant step-size SGD provably has a non-vacuous bound, so what is the gap here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This paper proposes a framework for generalization analysis for a wide class of functions, but more work is still needed to show the superiority of this new method. As mentioned in the weaknesses part, I think this paper can benefit from the following two aspects: * more asymptotic analysis of the generalization bound in common machine learning settings, such as linear regression, kernel regression, and overparameterized neural networks.
* since the bound in this paper does not require that $f$ is a neural network model, I think it would be more convincing if the authors could add some examples showing the generalization of other, non-neural-network models, such as gradient boosting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time. The two points you raised (1. analysis in machine learning settings and 2. analysis of non-neural-network models) are indeed interesting and beneficial for a deeper understanding of our proposed method. ## Assumption: The primary objective of our paper is to examine the generalization behavior of overparameterized neural networks trained using GD/SGD, which is a challenging issue. In the case of overparameterized neural networks, reaching a stationary point is a rare occurrence, as shown in reference [41] and validated in Figure 1. For general machine learning cases, this is a strong assumption. Consequently, we introduce the relaxed Assumption A.7 and **its corresponding bound (Theorem A.8)** in the Appendix. ## Question: 1. For the bound in Hardt et al., the batch size used in their proof is 1, which means that $T$ is usually large in their setting. If we use a fixed learning rate $\eta$ and train for $e$ epochs, then $\mathcal{O}(\frac{\sum_{t=1}^T \eta_t}{n})=\mathcal{O}(\frac{e \times n \times \eta}{n})=\mathcal{O}(e \eta)$. Actually, we have $\lim \limits_{n\to \infty} \mathcal{O}(\eta_m)=0$ (Proposition A.1 in the Appendix). However, we cannot derive a concrete form like $\mathcal{O}(\frac{1}{n^c})$. This is a limitation of our current work. 2. The analysis of $n$: we first analyze the dependence of $\mathbb{V}$ on $n$. The $\mathbb{V}$ is calculated as $\mathbb{V}(\mathbf{w})=\frac{\Vert \nabla F_S(\mathbf{w})\Vert}{\mathbb{E} _ {U \subset S} \Vert \frac{|U|}{n} \nabla F_U(\mathbf{w})-\frac{n-|U|}{n}\nabla F_{S/U}(\mathbf{w}) \Vert}$. Obviously, the gradient of an individual sample is unrelated to the sample size $n$, and $\vert U \vert \sim n$. Therefore, $\mathbb{V}=\mathcal{O}(1)$. Similarly, we have $\mathbb{E} \int_t \frac{d F_S(\mathbf{J_t})}{\sqrt{n}} \sqrt{1+\frac{\operatorname{Tr}(\Sigma(\mathbf{J_t}))}{\| \nabla F_S(\mathbf{J_t}) \|_2^2}}=\mathcal{O}(\frac{1}{\sqrt{n}})$.
As for the $\mathcal{O}(\eta_m)$ term in Theorem 3.6, we have $\lim \limits _ {n \to \infty} \mathcal{O}(\eta_m) =0$ according to Proposition A.1. We simply assume that $\mathcal{O}(\eta_m)=\mathcal{O}(\frac{1}{n^c})$. Therefore, our bound is $\mathcal{O}(\frac{1}{n^{\text{min}\lbrace 0.5,c \rbrace}})$. ## Two suggestions: #### 1 Linear Regression. We observe that our method in linear regression tends to degenerate to a form resembling the traditional Rademacher complexity method. This is understandable, as our approach originates from Rademacher complexity and is more adept at analyzing complex neural networks. We denote the data as $z_i \triangleq \lbrace x_i,y_i \rbrace$ and $S=\lbrace z_i \rbrace_{i=1}^n$, where $x_i \in \mathbb{R}^{in}$ is the data and $y_i \in \mathbb{R}$ is the corresponding label. The matrices of all data and labels are denoted as $\mathbf{x}\in \mathbb{R}^{in \times n}$ and $\mathbf{y}\in \mathbb{R}^n$. The function $f(\cdot)$ is defined as $f(\mathbf{w},z_i)=\frac{1}{2}( y_i -\mathbf{w}^{\mathrm{T}} x_i ) ^2$. Therefore, we have $\nabla f(\mathbf{w},z_i)=(\mathbf{w}^{\mathrm{T}}x_i-y_i)x_i$. The weights after the $t$-th update are denoted $\mathbf{w} _ t$. The generalization error of our method has the form $\mathcal{O}(\int _ t \frac{\Vert \mathrm{d} \mathbf{w} _ t \Vert}{\sqrt{n}} \sqrt{F _ S(\mathbf{w} _ t) \max _ i(\Vert x _ i\Vert)})$. For the Rademacher complexity, we have $\mathcal{O}(\frac{\Vert \mathbf{w}_T \Vert}{\sqrt{n}} \sqrt{\max_t F_S(\mathbf{w}_t)\operatorname{Tr}(\frac{1}{n}\mathbf{x}^{\mathrm{T}} \mathbf{x})})$. Therefore, for linear regression, our bound is similar to that of Rademacher complexity. #### 2 Gradient boosting Gradient boosting is **beyond the scope of our paper**, as it focuses on function space while our method requires updates in weight space. Nonetheless, we offer a preliminary study using our framework. $F^{(j)}: \mathbb{R}^d \to \mathbb{R}$ is the ensemble of $j$ models.
$l: \mathbb{R} \times \mathbb{R} \to \mathbb{R} _ {+}$ is the loss function. We consider a distance measure $d(\cdot,\cdot)$. We choose the function $f^{(j)}:\mathbb{R}^d \to \mathbb{R}$ from the function space $\mathcal{F}^{(j)}$. We have $F^{(j)}=\sum _ {k=0}^{k=j-1} \delta _ k f^{(k)}$. We denote $S=\lbrace(x _ i,y _ i)\rbrace _ {i=1}^n$ as our training data and $S' = \lbrace( x' _ i,y' _ i)\rbrace _ {i=1}^n$ as the test data. We simplify the notation so that $F^{(j)} _ i=F^{(j)}(x _ i)$, $F'^{(j)} _ i=F'^{(j)}(x' _ i)$, $f^{(j)} _ i=f^{(j)}(x _ i)$, and $f'^{(j)} _ i=f'^{(j)}(x' _ i)$. In gradient boosting, we choose $f^{(j)}$ such that $\sum _ i \frac{1}{n} d(f^{(j)} _ i,-\nabla l(F^{(j)} _ i,y _ i))$ is small. We define $\mathcal{L}=\sum _ {i,j} \frac{1}{n} \delta _ j d(f^{(j)} _ i,-\nabla l(F^{(j)} _ i,y _ i))$ and $\mathcal{L'}=\sum _ {i,j} \frac{1}{n} \delta _ j d(f'^{(j)} _ i,-\nabla l(F'^{(j)} _ i,y' _ i))$. Here we focus on analyzing the difference between $\mathcal{L}$ and $\mathcal{L}'$. We denote $g_i(f^{(j)})=g_i^{+1}(f^{(j)})=d(f^{(j)}_i,-\nabla l(F^{(j)}_i,y_i))$ and $g'_i(f^{(j)})=g_i^{-1}(f^{(j)})=d(f'^{(j)}_i,-\nabla l(F'^{(j)}_i,y'_i))$. We define two complexity measures of the function space $\mathcal{F}$ based on the distance measure: $R^a _ S(\mathcal{F})=\mathbb{E} _ {\sigma} \sup _ {f \in \mathcal{F}} \frac{1}{n} \sum _ {i=1}^n\sigma _ i g _ i$ and $R^b _ S(\mathcal{F})=\mathbb{E} _ {S,k} \inf _ {f \in \mathcal{F}} \frac{1}{n} \sum _ {i,\sigma_i=-1} d(f _ i,f _ k)$. If the samples of $S$ and $S'$ follow the same distribution, we have the following conclusion: $$\mathbb{E}[\mathcal{L'}-\mathcal{L}] < 2 \sum_j \delta_j( R_S^a(\mathcal{F}^{(j)})+R_S^b(\mathcal{F}^{(j)})) + 2\sum_{j} \delta_j \mathbb{E} d(\nabla l(F_i^{(j)},y_i),\nabla l(F'^{(j)}_i,y'_i)) + \epsilon $$ The first term is the complexity of the function space, and the second term measures how $F^{(j)}$ depends on the set $S$.
The dependence exists because the choice of $f^{(j)}$ is related to $S$. --- Rebuttal Comment 1.1: Title: Rebuttal Update Comment: I thank the authors for the rebuttal. I have decided to increase my score.
Summary: This paper presents a new generalization bound under moderately realistic assumptions that incorporates new information from the gradients and the learning trajectory. Strengths: 1 - The paper is well written and easy to follow. 2 - The paper does a great job comparing its results with the previous literature on this topic. 3 - The experiments show promising results. Weaknesses: 1 - The experiments are limited. This paper could definitely benefit from more experiments; however, due to the page limit, that is too much to ask. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper addresses its limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review the paper. We have included additional experiments in the overall response, featuring a toy dataset to compare tightness, as well as experiments with ResNet18 on CIFAR-10 and a Transformer on WikiText-2.
Summary: This work proposes a novel generalisation error bound which takes the learning trajectory of neural networks into consideration. Instead of focusing on the post-trained neural networks, the proposed bound is based on the parameter updates during the learning of the neural networks. The core proof of the bound relies on Assumption 3.4, as well as on decomposing the difference between the updates on the true generative distribution and the training instances, i.e. $F\_{\mu}(\mathbf{J}\_{T}) - F\_{S}(\mathbf{J}\_{T})$, into a linear part $\mbox{gen}^{lin}(\mathbf{J}\_{T})$ and a non-linear part $\mbox{gen}^{nl}(\mathbf{J}\_{T})$. With reasonable assumptions, the authors bound the non-linear part with $\mathcal{O}(\eta\_{m})$. The upper bound of the linear part relies on the other assumptions introduced in Section 3.2. Strengths: 1. The main result, i.e. a generalisation bound that takes the learning trajectory information into account, is novel and interesting. Practical exploration also demonstrates that the learning trajectory information helps neural networks obtain better generalisation performance. For example, knowledge distillation [1] can be considered a method to update the original labels to incorporate the predictions from the teacher model. There also exist works that directly modify the labels with learning trajectory information, e.g. [2], and they have shown that the learning trajectory information is indeed helpful for generalisation. So, a theoretical work that can bound the generalisation error with learning trajectory information is interesting to the community. 2. The proof is well sketched, and thus relatively straightforward to follow: not only the sketch from line 227 to line 238, but also the formal proof in Appendix A.1. 3. The authors have also provided empirical evidence showing that the core Assumption 3.4 holds, as well as how the generalisation error varies during the learning of neural networks. 4.
The overall structure of the paper is clear and well-organised (excluding Section 1; my reasons are given below). [1] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015). [2] Ren, Yi, Shangmin Guo, and Danica J. Sutherland. "Better supervisory signals by observing learning paths." arXiv preprint arXiv:2203.02485 (2022). Weaknesses: ### Major 1. **Displaying the main result at the beginning**: This is not a concern about the technical side of this work, but rather a comprehensibility issue. Without reading Section 3, the meanings of the notations in Equation 1 are mysterious, and I cannot interpret it. From my perspective, Section 1.1 does hinder the comprehensibility of this work. 2. **Lack of necessary details for the experiments in Section 4**: After reading the supplementary materials, there are still some necessary details of the experiments in Section 4 that I didn't find. For example, in line 284, the authors specify $S'$ to be another data set. However, since the data distribution of CIFAR is unknown, I can only assume that both $S$ and $S'$ are subsets of CIFAR, whereas I'm not certain about that. To fully reproduce these experiments, more details are necessary. 3. **Experiment in line 271 on Gaussian data**: The authors omit $\gamma'$ due to the lack of the true data distribution in line 271. This could be relatively easily solved by sampling training instances from a Gaussian distribution. ### Minor 1. The first $w$ in line 148 is not bold. 2. The font size of all figures is too small. (I understand that the authors may want to save some space, but the text is really too small to read.) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My questions are pretty much the same as the weaknesses I listed above. The only suggestion I want to raise is to move Section 1.1 to after Section 3.2.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have explicitly and clearly pointed out the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. Based on your suggestions, we propose the following improvements to the paper: Major 1: Indeed, Section 1.1.1 may hinder the paper's understandability. We plan to move Section 1.1.1 to a location after Theorem 3.6 and Remark 3.7. Major 2: We will provide additional details about the experiments. $S$ and $S'$ represent the training set and the test set of the dataset, respectively. We will include this information to enhance the paper's readability. Major 3: We appreciate your advice. Conducting experiments on a Gaussian dataset is an excellent suggestion. We designed the experiments and discovered that $\gamma$ is linearly related to the generalization error, which aligns with our experimental findings. Additionally, we compared the tightness of our bound with previous stability-based bounds. The results are presented in the overall response. Minor 1: We will make the necessary revisions. Minor 2: Thank you for pointing this out. Indeed, the font is too small. We will modify all images in the figures to resemble the format used in Figures B and C in the PDF of the overall response. --- Rebuttal Comment 1.1: Comment: I confirm that I've read through the rebuttals from the authors. The updates look good! Good luck!
Rebuttal 1: Rebuttal: We thank the ACs, SACs, PCs, and reviewers for the efforts and time spent handling our paper. Figures in the PDF: A: results on the toy dataset for Question 2 below; B: results of ResNet18 on CIFAR-10; C: results of a Transformer on WikiText-2. The training config of ResNet18 on CIFAR-10 is the same as in the VGG experiment. For WikiText-2, the Transformer has 2 layers, 2 heads, and an embedding size of 200; the model is trained using SGD with learning rate 0.001 and batch size 20. Question 1: Tightness of the proposed bound. In a toy dataset setting, we compare our generalization bound with stability-based methods. We choose a toy dataset because calculating $\beta$ (under the $\beta$-smoothness assumption) and $L$ (under the $L$-Lipschitz assumption) in stability-based work, as well as the values of $\mathcal{V}$ and $\gamma$ in our proposed bound, is challenging. Additionally, stability-based methods require a batch size of 1. In the following, we discuss the construction of the toy dataset used to compare the tightness of the generalization bounds. The training data is $X_{tr}=\lbrace x_i \rbrace_{i=1}^n$. Each data point $x_i$ is sampled from the Gaussian distribution $\mathcal{N}(0,\mathbf{I}_d)$. Sampling $\tilde{\mathbf{w}} \sim \mathcal{N}(0,\mathbf{I}_d)$, the ground truth is generated by $y_i=1 \ \ \text{if} \ \ \tilde{\mathbf{w}}^{\mathrm{T}} x_i>0 \ \ \text{else} \ \ 0$. The weights to be learned are denoted as $\mathbf{w}$. The prediction $\tilde{y}_i$ is calculated as $\tilde{y}_i =\mathbf{w}^{\mathrm{T}}x_i$. The loss for a single data point is $l_i=\left\Vert y_i- \mathbf{w}^{\mathrm{T}}x_i \right\Vert_2$. The training loss is $\mathcal{L}=\sum_{i=1}^n l_i$. The test data is $X_{te}=\lbrace x'_i \rbrace$, where $x'_i= \tilde{x}'_i$ and $\tilde{x}'_i \sim \mathcal{N}(0,\mathbf{I}_d)$. We evaluate the tightness of our bound by comparing our results with those in references [11] and [42] from the original paper.
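The toy dataset construction described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: the dimension `d`, `beta = 1`, and the subgradient step for the scalar absolute-value loss are our assumptions, while the 100/1,000 train/eval split, the 200 epochs, and the $\eta_t = \frac{1}{\beta t}$ schedule follow the description in this response.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_dataset(n, d, w_tilde, kappa=1.0):
    # x_i ~ N(0, kappa * I_d); ground truth y_i = 1 if w_tilde^T x_i > 0 else 0
    X = np.sqrt(kappa) * rng.standard_normal((n, d))
    y = (X @ w_tilde > 0).astype(float)
    return X, y

def sgd_train(X, y, beta=1.0, epochs=200):
    # SGD on l_i = ||y_i - w^T x_i||_2 with step size eta_t = 1 / (beta * t);
    # for the scalar residual r_i we use the subgradient -sign(r_i) * x_i
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            w = w + (1.0 / (beta * t)) * np.sign(y[i] - w @ X[i]) * X[i]
    return w

d = 5
w_tilde = rng.standard_normal(d)
X_tr, y_tr = make_toy_dataset(100, d, w_tilde)   # training set
X_te, y_te = make_toy_dataset(1000, d, w_tilde)  # evaluation set
w = sgd_train(X_tr, y_tr)
gen_gap = np.mean(np.abs(y_te - X_te @ w)) - np.mean(np.abs(y_tr - X_tr @ w))
```

Passing `kappa != 1` when generating the test set reproduces the distribution-shift setting used for Question 2 below to vary $\gamma$.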
We set the learning rate as $\eta_t=\frac{1}{\beta t}$. Our reasons for comparing with these two papers are: 1. [11] is a representative study; 2. both papers have theorems using a learning rate schedule $\eta_t=\mathcal{O}(\frac{1}{t})$, which aligns with Corollary 3.8 in our paper; and 3. they do not assume convexity. We use 100 samples for training and 1,000 samples for evaluation. The model is trained using SGD for 200 epochs. The generalization bounds we compare are Corollary 3.8 from our paper, Theorem 3.12 from [11], and Theorem 5 from [42]. Our results are:

| Gen Error | Ours | [11] | [42] |
|-----------|------|------|------|
| 1.49 | 3.62 | 4.04 | 4417.00 |

Our bound is tighter under this setting. The value for [42] is large because our bound and [11] depend on $\frac{L^2}{\beta}$, while [42] depends on $L^2$; $L$ and $\beta$ are usually large numbers. (Note that in Corollary 3.8 of our work, we can replace $M_2$ and $M_4$ with $L$ because $M_2\leq L$ and $M_4 \leq L$, and $c$ in Theorem 3.12 of [11] equals $\frac{1}{\beta}$ in our setting.) Question 2: Effect of $\gamma$. Under the aforementioned settings, we modify the variance of the test data samples so that $\tilde{x}'_i \sim \mathcal{N}(0,\kappa \mathbf{I}_d)$. By using different values of $\kappa$, we obtain various values of $\gamma$ and discover a linear correlation between $\gamma$ and the generalization error (Figure A in the rebuttal PDF). This finding is consistent with our theorem. [11] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, pages 1225–1234. PMLR, 2016. [41] J. Zhang, H. Li, S. Sra, and A. Jadbabaie. Neural network weights do not converge to stationary points: An invariant measure perspective. In International Conference on Machine Learning, pages 26330–26346. PMLR, 2022. [42] Y. Zhang, W. Zhang, S. Bald, V. Pingali, C. Chen, and M. Goswami.
Stability of SGD: Tightness analysis and improved bounds. In Uncertainty in Artificial Intelligence, pages 2364–2373. PMLR, 2022. Pdf: /pdf/fc2099d52820314b8563e6df59e328885668e63e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the connection between the learning trajectories of DNNs and their generalization when optimized using SGD. Its main contribution is that it provides a good perspective for generalization error analysis by studying the contribution of the learning trajectory. Based on this analysis of the learning trajectory, a new generalization bound is provided for DNNs. Strengths: On the whole, the theoretical analysis perspective and ideas of this paper have value for theoretical research on deep neural networks. The main strengths are: 1. It provides a learning-trajectory perspective for the theoretical analysis of deep neural networks. 2. The proposed generalization error bound tracks changes in the learning rate and noise level. Weaknesses: 1. It is unknown how tight the generalization error bounds given in the paper are. Furthermore, the impact of commonly used learning rate schedulers (e.g., exponential decay schedulers) on the generalization error bounds has not been adequately analyzed. 2. The description of the insights behind the theorems is not detailed enough. For example, the insight that Theorem 3.6 can bring to the reader is not presented in detail. In addition, what insights can Corollary 3.8 bring to readers? 3. The description of the experiments in the paper is not detailed enough. For example, the neural network models used for CIFAR-100 and SVHN are not indicated. 4. The experiments were not extensive enough. I'm curious to know whether the proposed theorem is also applicable to other commonly used neural network structures like ResNet, not just VGG. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors give an analysis and description of how tight the proposed generalization error bound is? Under what conditions does the inequality in Theorem 3.6 hold with equality? Besides, how does the learning rate scheduler affect the generalization error? 2.
Equation (11) in Corollary 3.8 is confusing. In this formula, both an integral over the variable $t$ and a sum over $t$ appear, which makes it unclear whether the authors regard the learning trajectory as a continuous variable (perhaps because the learning rate is assumed to be sufficiently small?) or as a discrete variable. 3. Can the authors describe in detail the insights Theorem 3.6 and Corollary 3.8 bring to the reader? What insights can these theorems bring to researchers about the optimization of neural network algorithms? 4. Can the authors describe their experiments more specifically? For example, a more specific description of the neural network model used for the different datasets. 5. Why didn't the authors show the test accuracy of the trained neural network models on the different datasets? Can the test accuracy be put in an appendix to show that the experimentally obtained accuracy is reasonable or acceptable? 6. Is the proposed theorem (generalization error bound) also suitable for NLP tasks? The supporting material provided by the authors contains code related to WikiText-2, but the relevant results do not seem to be shown in the paper. Can the authors show experimental results on the WikiText-2 dataset? 7. The authors need to proofread the full text. In some sentences, there is a missing space between the content of the paper and the index of the reference, such as "experiments[9, 13] have" in line 40 of the paper. Besides, Equation 38 is incomplete. 8. Why can $F_{S'}(\mathbf{J_t}) - F_S(\mathbf{J_t})$ (line 293 of the paper) be used to approximate the generalization error? It is different from the generalization error in formula (10). What is the authors' reason for using it instead of the generalization error in Equation (10) in the experiments? 9. The notation for the $L_2$ norm in the paper is not unified; some formulas have punctuation while others do not. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors describe their limitation as their method requiring small learning rates. How to eliminate this assumption is a direction worth studying in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Question 1 (Weakness 1): We provide a comparison on a toy dataset due to the challenges in calculating the bound, and the results are displayed in the overall response. Although we cannot currently determine the exact conditions under which equality holds, we will continue to investigate. Identifying such conditions is a difficult problem, even for widely accepted stability-based generalization bounds. In Appendix A.6, we analyze the influence of the learning rate on the generalization bound. A larger learning rate benefits generalization by pushing $\mathbf{J_t}$ into a position where $\operatorname{Tr}(\Sigma(\mathbf{J_t}))$ is small. In this regard, exponential decay schedulers may improve generalization compared to using a small learning rate throughout the training process. An initially larger learning rate can result in a relatively small $\operatorname{Tr}(\Sigma(\mathbf{J_t}))$. In the later phase, even if $\operatorname{Tr}(\Sigma(\mathbf{J_t}))$ becomes large due to the small learning rate, the term $d F_S(\mathbf{J_t})$ is small at this stage. As a result, it will reduce the term $- 2 \gamma' \mathbb{V}_m \mathbb{E}\int _t \frac{d F_S(\mathbf{J_t})}{\sqrt{n}} \sqrt{1+\frac{\operatorname{Tr}(\Sigma(\mathbf{J_t}))}{\Vert \nabla F_S(\mathbf{J_t}) \Vert _ 2^2}}$ in our bound. Question 2: The trajectory is a discrete variable due to the nature of the update method (SGD). The integral is merely notational shorthand, as we note in line 136. Indeed, it is confusing to use these two types of symbols.
To unify them in the paper, we will rewrite $\gamma' \mathbb{V}_m M_4^2 \sqrt{ \mathbb{E} \sum_t \frac{1}{n \beta^2 (t+1)^4} \left(1+\frac{\operatorname{Tr}(\Sigma(\mathbf{J_t}))}{\| \nabla F_S(\mathbf{J_t}) \|_2^2}\right)} $ in Corollary 3.8 as $\gamma' \mathbb{V}_m M_4^2 \sqrt{ \mathbb{E} \int \frac{1}{n \beta^2 (t+1)^4} \left(1+\frac{\operatorname{Tr}(\Sigma(\mathbf{J_t}))}{\| \nabla F_S(\mathbf{J_t}) \|_2^2}\right) \mathrm{d}t}$. Question 3 (Weakness 2): The primary insights of Theorem 3.6 are described in Section 1.1.1. Our generalization bound reveals the relationship between the "Bias of Training Set," "Diversity of Training Set," "Complexity of Learning Trajectory," and the generalization error. The "complexity of the learning trajectory" is associated with the gradient norm, gradient covariance, and training loss along the learning trajectory. The placement of Section 1.1.1 is not ideal, so we plan to move its content to a position closer to Theorem 3.6 to make it more accessible to readers. Regarding Corollary 3.8, we aim to provide a comparison between our proposed method and the stability-based method by presenting the bound under the learning rate schedule used by the stability-based method. We will add the comparison result from Question 1 in the overall response near this corollary. Question 4: The architecture used for CIFAR-10, CIFAR-100, and SVHN is VGG13. We will include these descriptions in Appendix A.5. Question 5 (Weakness 3): The test accuracies for CIFAR-10, CIFAR-100, and SVHN are 87.64%, 55.08%, and 92.80%, respectively. Thank you for pointing this out. We will add these details to the Appendix. Question 6 (Weakness 4): The results on WikiText-2 are provided in Figure C of the PDF of the overall response. Additionally, we have conducted experiments with ResNet-18 on CIFAR-10, which can be found in Figure B of the overall response. We will include these results in the Appendix.
The training config of ResNet18 on CIFAR-10 is the same as in the VGG experiment. For WikiText-2, the Transformer has 2 layers, 2 heads, and an embedding size of 200; the model is trained using SGD with learning rate 0.001 and batch size 20. Question 7: Thank you for pointing this out. We will fix it in the new version of the paper. Question 8: Apologies for the unclear statement in the paper. Our focus is on analyzing the value of $F_{\mu}(\mathbf{J_T}) - F_S(\mathbf{J_T})$. However, we cannot calculate $F_{\mu}(\mathbf{J_T})$ due to the unknown distribution $\mu$. One approach to address this issue is to design an unbiased estimate in the experiment. Since $\mathbf{J_t}$ is dependent on $S$, we can sample a new dataset $S'$, making $\mathbf{J_t}$ independent of $S'$. Based on this, we have $\mathbb{E}F_{S'}(\mathbf{J_T}) = F_{\mu}(\mathbf{J_T})$, which implies that $F_{S'}(\mathbf{J_T})$ is an unbiased estimate of $F_{\mu}(\mathbf{J_T})$. This aligns with the practical wisdom of using the gap between the performance on the test set and the performance on the training set as a measure of generalization behavior. Question 9: Thank you for bringing this to our attention. Indeed, there are some issues with the symbols in the paper. We will address and fix these errors. --- Rebuttal Comment 1.1: Comment: I have read through the author's response, and my concerns are addressed appropriately.
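The unbiasedness argument in the answer to Question 8 above is easy to illustrate numerically: for a fixed $\mathbf{J_T}$, averaging $F_{S'}(\mathbf{J_T})$ over fresh datasets $S'$ drawn from $\mu$ recovers $F_{\mu}(\mathbf{J_T})$. The sketch below uses a simple quadratic loss as a hypothetical stand-in, chosen only because $F_{\mu}$ then has the closed form $\Vert \mathbf{w} \Vert^2$; the loss, dimensions, and sample counts are our assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 3
w_T = rng.standard_normal(d)  # stand-in for the weights J_T obtained by training on S

def empirical_risk(X, w):
    # F_S(w) for the illustrative loss l(x; w) = (w^T x)^2
    return np.mean((X @ w) ** 2)

# mu = N(0, I_d); for this loss, F_mu(w) = E[(w^T x)^2] = ||w||^2 exactly
F_mu = float(w_T @ w_T)

# Each fresh S' ~ mu is independent of w_T, so F_{S'}(w_T) is an unbiased
# estimate of F_mu(w_T); averaging over many S' concentrates around F_mu
estimates = [empirical_risk(rng.standard_normal((100, d)), w_T) for _ in range(2000)]
F_mu_hat = float(np.mean(estimates))
```

Because each $S'$ is independent of the trained weights, the empirical gap $F_{S'}(\mathbf{J_T}) - F_S(\mathbf{J_T})$ is a valid proxy for the generalization error used in the experiments.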
Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding
Accept (poster)
Summary: This paper proposes a method to do future motion prediction for autonomous driving agents with the focus of having a computational complexity that is suited for real-time deployment. To do so, they propose a new attention mechanism, called KNARPE, and a hierarchical transformer architecture, called HPTR. By leveraging these components, they achieve SOTA performance (found in agent-centric models) while being close to maintaining the efficiency of scene-centric models. They empirically show this by comparing it to relevant models on both the Waymo and Argoverse 2 datasets. Strengths: S1) The paper is well-written, has good notation, and is easy to understand. Especially compared to many of the prior works. S2) Agent-centric approaches may be infeasible for real-time systems because they need one forward-pass per agent. The proposed approach mitigates this issue and reaches the good performance of agent-centric approaches while having a lower computational cost. S3) The computational cost is rather thoroughly measured, as it looks at GPU memory consumption, offline inference time, and online inference time. S4) The proposed approach obtains good performance on two standard benchmarks. S5) There is a theoretical comparison to WayFormer (besides the empirical results). As a "side-note strength", the authors state an intention to release the code publicly, which should be valuable to the research community. Weaknesses: W1) The proposed approach is claimed to reach SotA performance at a computational cost that scales in a nice way with the number of agents for which predictions are to be made. There is a thorough comparison to WayFormer, but not to other approaches. For instance, GoRela seems to (very slightly) outperform the proposed approach while also scaling gracefully with additional agents. W2) The performance of WayFormer in table 2 seems low. Is this some variant of it? 
In the WayFormer paper, no results are provided on the valid-set, which is what is used for table 2. However, the valid-set seems to be a bit easier than the test-set, so one would expect WayFormer to land a bit above 0.4335 soft mAP, instead of the provided 0.397. Why is this the case? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) What is Figure 1 contributing to? In my opinion, it is just confusing and lacks proper motivation. Q2) At l139, should not $t=0$ be included in the history? Q3) The training takes around 10 days. How is the model convergence, in terms of losses and final KPIs? Q4) Do other approaches train as long as the proposed approach? Q5) During experimentation, it is key to revise the method swiftly. If training takes 10 days, did the development of this approach use a shorter schedule for experimentation? Q6) Based on the development of this approach, do the results of a shorter training schedule seem to correlate well with the results of a longer training schedule? Q7) How computationally efficient is the proposed approach compared to GoRela? Also see W1. Q8) See W2. Q9) The attention to all (all2all) should be a superset of the proposed HPTR architecture. However, as indicated in Table 2, the all2all model is not as good as the HPTR model. Why is this the case? Is there some intuition on this? Q10) When caching the map features during online inference, these are cached for one timestep, right? I.e., the next timestep the map features are recomputed? Or, are these cached temporally in some way as well? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is a discussion on limitations that provides some additional clarity and insight. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your helpful comments and suggestions! We kindly ask you to read our global response, which discusses the comparison with GNN-based pairwise-relative methods and the long training time of our models. Now in this post we answer your questions as follows. --- >**Q1**: What is Figure 1 contributing to? In my opinion, it is just confusing and lacks proper motivation. **A1**: We use Figure 1 to demonstrate the limitations of agent-centric approaches and to introduce the problem of online inference, both of which are addressed by the approach proposed in our paper. At the moment, this figure seems to aid the other reviewers' understanding. We will modify it in the camera-ready if other reviewers share this concern. --- >**Q2**: At l139, should not t=0 be included in the history? **A2**: Thanks for pointing this out! We will fix this in the camera-ready. --- >**Q3**: The training takes around 10 days. How is the model convergence, in terms of losses and final KPIs? **A3**: The training takes 10 days because we have limited GPUs (4 RTX 2080Ti). The training time can be reduced given more computational resources. In Fig. 2 of the global response PDF, we provide the curves of the validation metrics. As shown in this figure, the model has largely converged after 5 days of training. Therefore, only the models for the final leaderboard submission are trained for 10 days, whereas models for development and ablation are trained for 5 days. --- >**Q4**: Do other approaches train as long as the proposed approach? **A4**: Please refer to the global response for this question. --- >**Q5**: During experimentation, it is key to revise the method swiftly. If training takes 10 days, did the development of this approach use a shorter schedule for experimentation? **A5**: Only the models for the final submission are trained for 10 days. Other models are trained for 5 days. In practice, 5 days is still too long for development.
But for our cluster, a 4-GPU 5-day job is far easier to schedule than an 8-GPU 2-day job. Therefore we use this 4-GPU 5-day setup for experimentation. --- >**Q6**: Based on the development of this approach, do the results of a shorter training schedule seem to correlate well with the results of a longer training schedule? **A6**: Fortunately yes. In our case, training for 5 days is enough to tell the performance of a model, as shown in Fig. 2 of the global response PDF. --- >**Q7**: How computationally efficient is the proposed approach compared to GoRela? Also see W1. **A7**: Please refer to the global response for this question. --- >**Q8**: See W2. The performance of WayFormer in Table 2 seems low. Is this some variant of it? In the WayFormer paper, no results are provided on the valid-set, which is what is used for Table 2. However, the valid-set seems to be a bit easier than the test-set, so one would expect WayFormer to land a bit above 0.4335 soft mAP, instead of the provided 0.397. Why is this the case? **A8**: There are multiple potential reasons. Firstly, our model does not apply ensembling, which affects the soft mAP significantly. Secondly, Wayformer is not open-sourced, so it could be that our reimplementation is not perfect. In fact, our reimplementation has a smaller model size due to our limited computational resources (we use 4 RTX 2080Ti, Waymo uses 16 TPUs). We will open-source our Wayformer reimplementation so that the community can improve it. Thirdly, all models in Table 2 are trained for fewer epochs because they are meant for ablation studies, not for the final submission. And finally, in the Wayformer paper we can still find the performance of their ablation models on the validation split, not in the tables or in the text but in the figures (on the y-axis of Fig. 4, 5 and 6 in the Wayformer paper). We can see their minADE is never below 0.9, which is far larger than their test-split submission.
--- >**Q9**: The attention to all (all2all) should be a superset of the proposed HPTR architecture. However, as indicated in Table 2, the all2all model is not as good as the HPTR model. Why is this the case? Is there some intuition on this? **A9**: This is because, for a fair comparison, the number of layers of the models in Table 2 is selected such that their total numbers of learnable parameters are roughly the same (15M in our case). As a consequence, the all2all model in Table 2 is not as deep as the other models. Given more layers and a longer training time, the all2all model can reach the same performance as our HPTR. It is intuitive that the all2all model is more difficult to train because, without any inductive bias, it allows all possible attentions. Our point here is to show that the hierarchical architecture we propose can achieve the same performance with fewer parameters and less training time. --- >**Q10**: When caching the map features during online inference, these are cached for one timestep, right? I.e., the next timestep the map features are recomputed? Or, are these cached temporally in some way as well? **A10**: During online inference, the encoded map features are computed only once at the beginning $t=0$ and cached. At $t=0$ we predict the future trajectories at $t=\{1,2,\dots,T\}$. At the next time step $t=1$ we reuse the cached map features and predict the future trajectories at $t=\{2,3,\dots,T+1\}$. The cached map features can be reused because we assume the map is static, i.e. it does not change from $t=0$ to $t=1$. In our online inference experiments (right plot of Fig. 4), the map features cached at $t=0$ are reused 100 times (i.e. 10 seconds) and we compute the average latency over these 100 inferences. In this case we assume the map does not change within 10 seconds. In the real world we can reuse the cached map features for a longer period of time, say days or maybe weeks.
Recomputing the map features is necessary when the HD maps are changed, which does not happen very frequently. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough answers. I have read the other reviews as well together with their rebuttals. In my opinion, this paper would be a valuable addition to the machine learning community.
Summary: This paper proposes a motion prediction framework, HPTR. As the agent-centric representation usually has a high computational cost and poor scalability, the paper uses a transformer to encode the pairwise-relative representation with K-nearest-neighbor attention and relative pose encoding. It proposes a hierarchical transformer-based framework that efficiently encodes intra-class and inter-class information and allows asynchronous updates for better online inference efficiency. Experiments on the Waymo Open Motion Dataset and the Argoverse 2 motion dataset show its superior performance and good efficiency. Strengths: 1. The proposed hierarchical transformer-based framework is efficient and enables asynchronous token updates, which are usually ignored in other motion prediction works. 2. The paper adopts relative pose encoding to better unleash the expressiveness of the pairwise-relative representation. 3. Experiment results have shown that the proposed method achieves a good balance between performance and efficiency. 4. The overall writing is clear and easy to follow. Weaknesses: 1. It is claimed in the paper that this paper uses transformer and pairwise-relative representation which is less computationally expensive than GNNs. However, GNNs and transformers can be viewed as equivalent if attention is used to aggregate and update information among the nodes, as in HDGT. A more in-depth analysis should be provided to better clarify the difference and advantages of the proposed method over GNN-based methods with pairwise-relative representation like HDGT. 2. ProphNet also achieves very good results in terms of both performance and efficiency. And in ProphNet's paper, the single model (without ensembling) has achieved 1.89 brier-min FDE6 on the AV2 dataset. The authors should include this result and compare the proposed method with it in the efficiency part as well. And the analysis of the comparison is also important. 3.
As previous pairwise-relative representation methods do not use pairwise-relative representation, the ablation experiment on RPE should be included to analyze how much performance gain is due to RPE. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As the proposed method is built upon transformer blocks and the transformer is well-recognized for its good scalability, have the authors tried to scale up the model to see its performance improvements? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations well in Sec. 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your helpful comments and suggestions! We kindly ask you to read our global response, which discusses the comparison with GNN-based pairwise-relative methods and the long training time of our models. Now in this post we answer your questions as follows. --- >**Q1**: It is claimed in the paper that this paper uses transformer and pairwise-relative representation which is less computationally expensive than GNNs. However... **A1**: We agree that the Transformer can be formulated as a special case of GNN and that, theoretically, the attention mechanism is not more computationally efficient than message passing. However, in practice, Transformers are most of the time more efficiently implemented on GPUs than GNNs. As shown in Fig. 1 of the global response PDF, KNARPE is implemented with the most basic matrix operations (matrix indexing, summation and element-wise multiplication). Based on KNARPE, our HPTR uses only these basic matrix operations, which are easier to deploy efficiently than most GNNs. In terms of performance, our method outperforms HDGT by a large margin (cf. Table 2 and the WOMD leaderboard). In terms of efficiency, our method is faster than HDGT by an order of magnitude (cf. Fig. 3 of the global response PDF). Please refer to the global response for the detailed discussion. --- >**Q2**: ProphNet also achieves very good results in terms of both performance and efficiency. And in ProphNet's paper, the single model... **A2**: We agree that ProphNet (CVPR 2023), as a concurrent work, achieves excellent performance and efficiency on the AV1 and AV2 datasets. According to Table 3 of the ProphNet paper, it achieves 1.89 brier-min FDE6 on the AV2 dataset with a single model. According to Sec. 4.3 of the ProphNet paper, that single model "uses 3 heads and each with 6 output trajectories". According to Sec. 3.6 of the ProphNet paper, ProphNet "generates more proposals than the required number of output modality".
Therefore, we believe it is fair to claim that ProphNet predicts more futures than required, whereas it is controversial to claim that ProphNet uses ensembling, because the single model ensembles only part of its network, i.e. the hydra heads. The performance of ProphNet in Table 1 of our paper was obtained from the AV2 leaderboard, accessed early this year when we wrote our paper. That submission entry has since been removed from the AV2 leaderboard, but the performance can still be found in the appendix of the ProphNet paper (Table 7). According to appendix A of the ProphNet paper, they "train three different models for ensembling". So the ProphNet in Table 1 of our submission actually used ensembling. In the camera-ready we will update Table 1 and replace the old ProphNet with ensembling by the new single-model ProphNet. We will also remove the dagger in front of ProphNet, as the single-model ProphNet is not exactly an ensemble. Since ProphNet is a concurrent work and it is not open-sourced, we did not have enough time to reproduce it and provide a detailed efficiency comparison in our submission. However, since ProphNet is still agent-centric, the scalability problem is inevitable. We hope the comparison with Wayformer (ICRA 2023), another SOTA agent-centric method, should be enough to demonstrate the advantage of our method over general agent-centric methods. Moreover, according to Table 4 and Sec. 4.5 of the ProphNet paper, they have 28ms latency per agent (64 agents in total), which means the latency per episode is 1792ms. This is more than an order of magnitude slower than our approach, which has 60ms latency during offline inference with 64 agents. --- >**Q3**: As previous pairwise-relative representation methods do not use pairwise-relative representation, the ablation experiment on RPE should be included to analyze how much performance gain is due to RPE.
**A3**: We think there is a typo in this question, it should be "As previous pairwise-relative representation methods do not use ~pairwise-relative representation~ RPE, the ablation experiment on RPE should be included to analyze how much performance gain is due to RPE.". Our answer is as follows. RPE can be ablated from different aspects. Firstly, we can ablate the "relative" aspect, i.e. instead of the pairwise-relative representation we use the agent-centric or scene-centric representation. This ablation has been done in Table 2 of our paper. Now given the pairwise-relative representation, there are different ways to use it. Our specific methods RPE and KNARPE are defined respectively in Eq. 1-3 and Eq. 4-5 of our paper. Eq. 1-3 are the standard positional encoding which is applied to almost all Transformer-based methods. Since it is a common practice to use positional encoding to pre-process the inputs to Transformer blocks, we omit its ablation in our paper. In Table 1 of our appendix we have provided detailed ablations on Eq. 4-5, i.e. different attention mechanisms for RPE. We show that our KNARPE achieves the best performance and efficiency. Back to the question "previous pairwise-relative representation methods do not use RPE", this is because all prior works are based on GNNs and our RPE is specifically designed for Transformers. Combining GNNs with RPE (Eq. 1-3) is an interesting idea but it is out of the scope of our paper which focuses on Transformers. --- >**Q4**: As the proposed method is built upon transformer blocks and the transformer is well-recognized for its good scalability, have the authors tried to scale up the model to see its performance improvements? **A4**: We do observe a proportional relationship between the performance and the model size. 
However, due to the limited computational resources (4 RTX 2080Ti), we cannot experiment with a larger model size without further reducing the batch size, which is already rather small (B=12) at the moment. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the detailed responses. My main concerns have been resolved and I would like to raise my rating to weak accept.
Summary: This work proposes a novel method for motion forecasting which uses an efficient attention mechanism with pairwise relative representation and asynchronous updates for the static & dynamic parts of the scene. Extensive experiments on Waymo and Argoverse datasets show competitive performance to existing methods while being more efficient than agent-centric methods. Strengths: - This work incorporates a pairwise relative representation in a k-nearest neighborhood for attention mechanism which is more efficient than quadratic attention. - The proposed method minimizes redundancies in computation by sharing context among agents and using asynchronous updates for static and dynamic tokens. This makes it as efficient as scene-centric methods. - The proposed method is competitive to existing approaches (Table 1) on Waymo and Argoverse datasets which do not use ensembles, while more efficient in terms of memory consumption and inference latency. - The ablations (Table 2) are helpful in understanding the benefits of different components in the proposed method. Also, mean & standard deviation over 3 runs are reported to account for the randomness in training. Weaknesses: - For pairwise relative poses, it'd be useful to compare with a simpler alternative of using relative distances directly (instead of sinusoidal encoding), as done in Interaction Transformer (eq 4. in [1]). - How does pairwise-relative representation retain the good scalability of scene-centric representation? Doesn't the scalability come from k-nearest neighbors which reduces the number of agents considered for attention? - L165 states that the performance deteriorates if the input is scene-centric. Why is this the case? The pairwise relative representation should help with standard self-attention as well. - What is the difference between the middle and right plots in Fig. 4? - It'd be useful to have efficiency comparison against other approaches, eg. 
GoRela since it also uses pairwise relative representation and achieves similar performance. - MTR-e2e and GoRela have similar performance to the proposed approach in Table 1. Without efficiency comparisons with these methods, the performance benefits are not clear. - Does the WF baseline in Table 2 use ensembles? [1] Li et al. End-to-end Contextual Perception and Prediction with Interaction Transformer. IROS 2020 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The main concern is that the performance benefits of the proposed approach are not clear. MTR-e2e and GoRela have similar performance to the proposed approach and efficiency comparisons with these methods are not provided. Other clarifications required are mentioned in the weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed. --- I have read the rebuttal, other reviews, and discussion. I appreciate the additional ablations and clarifications provided by the authors. While I agree that efficiency is an important consideration, I am not convinced about the effectiveness of the proposed approach. Since efficiency is the central claim, I would expect to see clear gains over baselines that use some combination of vectorized inputs, pairwise relative information, and/or a transformer in their architecture. Looking at the results in the Wayformer & HiVT papers, they report latency in the range of 30-60ms for different variants, which is in a similar range to HPTR. I think the results should be shown in the form of performance vs latency vs capacity plots (similar to Fig. 4, 5, 6 in the Wayformer paper) while comparing to different baselines to show the benefits of HPTR.
So, I am retaining my rating of `Borderline reject`. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your helpful comments and suggestions! We kindly ask you to read our global response, which discusses the comparison with GNN-based pairwise-relative methods and the long training time of our models. Now in this post we answer your questions as follows. --- > **Q1**: For pairwise relative poses, it'd be useful to compare with a simpler alternative... **A1**: Unfortunately we cannot provide additional experimental results during the rebuttal phase because the cluster of our institution has been undergoing maintenance. However, we can discuss this idea from a theoretical perspective. In contrast to the relative pose, the relative distance does not contain the necessary information for making driving decisions. It only tells us how far away an object is, but not in which direction it is located with respect to the agent of interest. Therefore, replacing relative poses with relative distances would not work in our case. Given distances and orientations, it is common practice to use sinusoidal positional encoding (such as Eq. 1-3 in our paper) to pre-process the inputs to Transformer blocks. Nowadays, almost all SOTA Transformers have positional encoding in their design. Nevertheless, we think this is an interesting idea and we will add this discussion to the camera-ready. --- > **Q2**: How does pairwise-relative representation retain the good scalability... **A2**: Let us take a lane segment as an example. Say we have N agents to be predicted. Agent-centric methods will transform this single lane segment to the coordinate frame of each of the N agents. Effectively, we will have N copies of the same lane segment and we will encode the same segment N times. Scene-centric methods represent the lane segment in the global coordinate frame, hence it is encoded only once. Pairwise-relative methods decompose the lane segment into a high-dimensional local attribute ($dim\gg3$), and a low-dimensional global pose ($dim=3$).
The high-dimensional local attribute is encoded only once and shared by all agents, hence the good scalability. The 3D global poses are used to compute the 3D relative poses between the lane segment and the N agents. To conclude, both the KNN and the pairwise-relative representation contribute to the scalability. Considering $N$ tokens and hidden dimension $D$, KNN reduces the complexity from $\mathcal{O}(N^2D)$ to $\mathcal{O}(NKD)$ by restricting the attention of each token to its K nearest neighbors. The pairwise-relative representation reduces the complexity from $\mathcal{O}(N^2D)$ to $\mathcal{O}(N^2\cdot 3+ND)=\mathcal{O}(N^2\cdot 3)$ by sharing the high-dimensional local attribute. --- > **Q3**: L165 states that the performance deteriorates if... **A3**: This question includes two parts. Firstly, why does the performance deteriorate if the input is scene-centric? The representation $(p_i,u_i)$ in L165 is scene-centric because the global pose $p_i$ is in the global coordinate frame. Using $(p_i,u_i)$ directly as input to the standard self-attention is investigated in SceneTransformer, which is outperformed by other methods by a large margin because the scene-centric representation is not rotation and translation invariant. Secondly, shouldn't the pairwise-relative representation help with standard self-attention as well? In Eq. 1-5 we propose a new attention mechanism to process $(r_{ij}, u_i)$. In Sec. C of our appendix we have ablated other variations of the attention mechanism for the pairwise-relative representation. Since the standard self-attention cannot process the pairwise-relative representation, we experimented with some variations which are very similar to the standard self-attention. The results show that using the pairwise-relative representation with (a moderately modified version of) standard self-attention outperforms scene-centric methods, but it is not as good as the KNARPE we proposed.
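As a minimal illustration of the pairwise-relative representation discussed above, the relative pose between two tokens can be computed from their 3-dimensional global poses (x, y, yaw) as a generic SE(2) transform. This is a sketch for intuition only, not the paper's exact formulation of $r_{ij}$:

```python
import numpy as np

def relative_pose(pose_i, pose_j):
    """Pose of token j expressed in the local frame of token i.
    Each pose is (x, y, yaw) in the global frame. Generic SE(2)
    construction for illustration, not the paper's exact equations."""
    xi, yi, ti = pose_i
    xj, yj, tj = pose_j
    dx, dy = xj - xi, yj - yi
    # Rotate the global offset by -yaw_i into frame i.
    c, s = np.cos(-ti), np.sin(-ti)
    rel_x = c * dx - s * dy
    rel_y = s * dx + c * dy
    # Wrap the heading difference to (-pi, pi].
    rel_yaw = (tj - ti + np.pi) % (2 * np.pi) - np.pi
    return np.array([rel_x, rel_y, rel_yaw])
```

With $N$ tokens this yields the $N^2$ relative poses of dimension 3 from the complexity argument above, while each token's high-dimensional local attribute is encoded only once.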
--- > **Q4**: What is the difference between the middle and right plots in Fig. 4? **A4**: The middle plot is offline inference and the right plot is online inference. Offline means doing inference on datasets, whereas online means doing inference on streaming inputs, as if on a real car. During offline inference, for each episode we run inference only once, at $t=0$. During online inference, for each episode we run inference consecutively at $t=\{0,1,\dots,T\}$, where $T=99$ in Fig. 4. Our HPTR allows the encoded static map features to be reused across these time steps during online inference, hence the latency of our models in the right plot is lower than in the middle plot. We will try to improve the clarity of the caption of Fig. 4 in the camera-ready. --- > **Q5**: It'd be useful to have efficiency comparison against other approaches, eg. GoRela... **A5**: Please refer to the global response. --- > **Q6**: MTR-e2e and GoRela have similar performance to the proposed approach in Table 1... **A6**: As shown in Table 1 (WOMD valid), our method outperforms MTR-e2e substantially in mAP (0.415 vs. 0.3245), which is the major metric considered by the WOMD leaderboard. On the AV2 dataset, our method is on a par with GoRela. However, GoRela focuses on AV2 and does not provide any results on WOMD, whereas we focus on WOMD and tune all hyperparameters for WOMD. We believe our performance on the AV2 leaderboard could be further improved given sufficient tuning on the AV2 dataset. In terms of efficiency, Wayformer is one of the most efficient agent-centric methods. Since MTR-e2e is also agent-centric and is slower than Wayformer (cf. the appendix of MTR and the results section of Wayformer), which we clearly outperform in terms of efficiency, we believe an efficiency comparison with MTR-e2e is unnecessary. For the efficiency comparison with GNN-based pairwise-relative methods such as GoRela, please refer to the global response.
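The asynchronous caching behind the online-inference setting described in A4 above can be sketched schematically. The names `encode_map`, `encode_agents` and `decode` below are hypothetical stand-ins for the model's transformer stages, not the actual API:

```python
class OnlinePredictor:
    """Schematic sketch of online inference with cached static-map
    features. The three callables are hypothetical model stages."""

    def __init__(self, encode_map, encode_agents, decode):
        self.encode_map = encode_map
        self.encode_agents = encode_agents
        self.decode = decode
        self._map_cache = None

    def predict(self, map_polylines, agent_tracks):
        # Static map tokens are encoded once and reused at every step.
        if self._map_cache is None:
            self._map_cache = self.encode_map(map_polylines)
        # Dynamic agent tokens are re-encoded at every time step.
        agent_tokens = self.encode_agents(agent_tracks)
        return self.decode(self._map_cache, agent_tokens)

    def invalidate_map(self):
        # Call only when the HD map actually changes.
        self._map_cache = None
```

Under this scheme, only the agent encoder and the decoder run at every time step, which is why the online latencies in the right plot of Fig. 4 are lower than the offline ones.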
--- > **Q7**: Does the WF baseline in Table 2 use ensembles? **A7**: No, none of the models in Table 2 uses ensembles. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I have read the rebuttal and other reviews. I appreciate the additional comparisons and clarifications provided by the authors, which helped me get a better understanding of the paper. I also went through the related work again to identify the differences between the proposed work and existing methods. From my understanding, the central claim of the paper is efficiency. To achieve this, 2 components are proposed - KNARPE and HPTR. I need some more clarification regarding these: **KNARPE**: It has 2 main parts - K-nearest neighbor attention and pairwise relative pose encoding. - K-nearest neighbor attention: This is computed based on L2 distance (L169) and the value of K is different for different transformers (L248-252), going up to K=360 for the AC-to-all transformer. Since L2 distance is used, this can also be considered as applying attention in a neighborhood of a certain radius. This has been done in previous works, eg. HiVT [70]. In this regard, can K-nearest neighbor attention be interpreted as attention over a local region (as in HiVT)? Or are there any significant differences between the two? - Pairwise relative pose encoding: This takes into account the relative translation and orientation between different entities in the scene. This has been considered in prior work, eg. HiVT in agent-agent, agent-lane, and global interaction modules, which also uses attention & transformers in the architecture. Are there any significant differences between the pairwise relative encodings in the proposed work and HiVT? **HPTR**: It involves a polyline transformer architecture and asynchronous updates of heterogeneous tokens (map features are cached). - The architecture consists of a transformer applied to vectorized inputs (in the form of polylines) with relative pairwise encodings.
This is similar to the HiVT architecture. - The main difference is the asynchronous token updates. While the map features are cached, the main bottleneck in computation would come from the AC-to-all transformer since it contains the most tokens (K=360). Is this correct? **Results** Since the central claim is efficiency, the most important experiment is the efficiency analysis. In Sec 4.4, it is stated that HPTR can make predictions in 37 ms, which can be reduced to 25 ms (40 fps) with better implementation. From the inference speed results in HiVT (Sec 4.3 and Table 5), it seems like HiVT can also run in real time with similar latency to the proposed approach. Since the experiment settings are different in the 2 papers, it might be hard to compare the two inference speeds directly. It'd be helpful if the authors could provide more insights into the runtime & performance comparison between HPTR and HiVT (since it seems to be the most relevant baseline). --- Reply to Comment 1.1.1: Title: Answers to the questions regarding HiVT Comment: Dear Reviewer Ccz6, Thanks for your comments. We are glad that our rebuttal has addressed your previous concerns. In the following we answer your questions regarding HiVT. >Q1: Can KNN attention be interpreted as attention over a local region (as in HiVT)? Yes, the KNN attention can be interpreted like that. However, HiVT uses a distance threshold for selecting neighbors, whereas we use a threshold directly on the number of neighbors. This leads to significant differences in the implementation in practice, because the distance threshold does not enforce an upper bound on the number of neighbors. As a consequence, the Transformers of HiVT are implemented with message passing and GNN libraries, which are less efficient than our HPTR implemented with basic matrix operations. This highlights the importance of the KNN design in our KNARPE module.
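The fixed upper bound K is what allows an implementation with dense tensor operations only. The following is a minimal single-head NumPy sketch of KNN-restricted attention (key and value shared for brevity); it illustrates the idea but omits the relative pose encoding of the actual KNARPE module:

```python
import numpy as np

def knn_attention(q, kv, pos_q, pos_kv, k):
    """KNN-restricted attention: each query attends only to its k
    nearest key/value tokens by L2 distance between positions.
    Single-head illustrative sketch, not the KNARPE module itself."""
    n_q, d = q.shape
    # Pairwise L2 distances between query and key positions: (n_q, n_kv).
    dist = np.linalg.norm(pos_q[:, None, :] - pos_kv[None, :, :], axis=-1)
    # Indices of the k nearest neighbors for each query: (n_q, k).
    nbr = np.argsort(dist, axis=1)[:, :k]
    # Gather neighbor tokens with plain integer indexing: (n_q, k, d).
    kv_nbr = kv[nbr]
    # Scaled dot-product attention restricted to the neighborhood.
    logits = np.einsum('nd,nkd->nk', q, kv_nbr) / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('nk,nkd->nd', w, kv_nbr)
```

Because every query has exactly k neighbors, the gathered tensors have a fixed dense shape, so no GNN-style scatter operations or variable-length neighborhoods are needed.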
>Q2: Are there any significant differences between the pairwise relative encodings in the proposed work and HiVT? As mentioned in L93 of our paper, the most fundamental difference between HiVT and our method is that HiVT considers vectors whereas our HPTR considers polylines. The pairwise-relative polyline representation boils down to an agent-centric representation if polylines are singletons, i.e. vectors. As a result, the local encoders of HiVT are actually agent-centric. It is a bit confusing because HiVT formulates its inputs in a pairwise-relative way, but essentially it is agent-centric. To verify this, we can observe HiVT does not share information among agents or across time steps, similarly to other agent-centric methods. Given a new vector, the local encoders of HiVT transform the vector to the local coordinate of each agent. Nevertheless, HiVT is still closely related to our method because its global interaction module follows the concept of pairwise-relative representation. HiVT can be seen as an agent-centric method augmented with a pairwise-relative module (the global interaction) during the decoding phase in order to realize multi-agent prediction. As shown in Table 1 of the HiVT paper, without the global interaction module, the pure agent-centric HiVT could still achieve reasonable performance. >Q3: The architecture consists of a transformer applied to vectorized inputs with relative pairwise encodings. This is similar to HiVT architecture. HiVT and our HPTR differ fundamentally in terms of how to apply Transformer to vectorized inputs with relative poses. As stated in L93 in our paper, HiVT uses the standard Transformer, whereas we proposed our own attention mechanism. One of our main contributions is the attention mechanism defined in Eq. 1-5 of our paper. In contrast to our method, HiVT does not compute the RPE as we did in Eq. 1-3; it rather concatenates the relative poses directly with other attributes. 
After that, HiVT uses the concatenated tensors as the input to the standard attention; it does not propose a new attention mechanism as we have done in Eq. 4-5. In our appendix we have ablated HPTR using the standard attention, i.e. we do not apply Eq. 4-5, similar to HiVT. As shown in Table 1 of our appendix, this does not improve the performance but significantly increases the demand for computational resources. >Q4: While the map features are cached, the main bottleneck in computation would come from AC-to-all transformer... Yes, the main bottleneck comes from the Transformer block that contains the most layers and the largest attention matrix. >Q5: Why is HiVT not considered in the experiments? For two reasons. Firstly, HiVT reports performance only on AV1, which is outdated and has been replaced by AV2. Secondly, HiVT has been outperformed by many publications by a large margin on AV1. The rankings on the AV1 leaderboard are 35th for HiVT, 17th for MultiPath++, and 6th for Wayformer. According to Table 1 of our paper, our HPTR is on a par with MultiPath++ on WOMD. Since we have compared with the more recent SOTA methods on the most recent and challenging datasets, we think it is redundant to consider HiVT in our experiments. In terms of run-time, HiVT does not focus on efficiency and scalability. The latency of HiVT is close to that of our method, but HiVT considers a simpler dataset (AV1 vs. WOMD) and it uses fewer parameters (2.5M vs. 15M) and a small receptive field (50m) in its implementation, all of which reduce latency but not the algorithmic complexity. Therefore, as the reviewer has pointed out, it is hard to compare the inference speeds directly. However, since HiVT is essentially agent-centric, it still suffers fundamentally from poor scalability due to its higher complexity. --- To conclude, our contributions are still valid and novel, as they are not presented in the HiVT paper.
Compared to GoRela and HDGT, which we have thoroughly examined in our initial rebuttal, HiVT is a less relevant baseline for our method. Nevertheless, we think this discussion about HiVT is very intriguing and we will add it to our appendix.
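As a side note, the pairwise-relative representation discussed in this thread can be illustrated with a toy 2D pose transform. This is a minimal sketch, not code from HPTR or HiVT; the `(x, y, yaw)` tuple format and the function name are illustrative assumptions:

```python
import math

def relative_pose(pose_a, pose_b):
    """Express pose_b = (x, y, yaw) in the local frame of pose_a.

    Toy illustration only: a pairwise-relative method feeds such relative
    poses into a shared encoder, whereas an agent-centric method must
    re-encode every input token in every agent's local frame.
    """
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    dx, dy = xb - xa, yb - ya
    # Rotate the world-frame offset by -yaw_a into a's frame.
    c, s = math.cos(ta), math.sin(ta)
    x_loc = c * dx + s * dy
    y_loc = -s * dx + c * dy
    # Wrap the relative heading to [-pi, pi).
    dyaw = (tb - ta + math.pi) % (2 * math.pi) - math.pi
    return x_loc, y_loc, dyaw
```

For example, an agent 1 m ahead of another agent facing +y maps to `(1, 0, 0)` in that agent's frame, regardless of where the pair sits in the global frame; this invariance is what lets pairwise-relative methods reuse cached encodings.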
Summary: This paper introduces several ideas to boost the efficiency of marginal motion prediction: (1) represent all input entities as polylines without global pose attributes, (2) use transformer architectures but limit attention to K nearest neighbors, (3) directly use relative pose in transformer computations, (4) apply full self-attention only to map tokens, which can be cached during online inference, (5) obtain traffic light and agent features hierarchically, with cross-attention, and (6) use a final cross-attention block for all agent-anchor pairs to directly decode trajectories without any clustering or ensembling. All of these ideas are intuitive and an ablation study discusses some of their individual contributions. The final model obtains reasonable performance on WOMD and Argoverse 2 while scaling to dense traffic much more feasibly than one of the existing SoTA methods, Wayformer. Strengths: The key contribution of this work lies in clearly highlighting some problematic practices which are still commonly used in most research on motion forecasting in autonomous driving (heavy emphasis on the offline setting), and bringing efficiency for online inference to the forefront. The ideas presented to improve efficiency are not all new, but in combination they are interesting and well-motivated. Despite the large number of complex technical concepts covered in the draft, the presentation is clear and it is possible to follow and understand all components. Weaknesses: While all the proposed ideas are simple and intuitive, putting them all together yields a complex architecture with a large space of design choices and hyper-parameters. This model trains for 10 days, despite the efficient vectorized input and hierarchical architecture focused on efficiency.
The runtime analysis only presents a comparison to an agent-centric baseline, Wayformer, which fails to provide evidence for whether the proposed model is efficient among relative-pose-based forecasting methods (e.g., no evidence for the claim made in L092 that GNNs are more demanding than transformers in the online inference setting). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the training time of HPTR similar to Wayformer for the same number of training epochs? 2. How is the KNARPE operation implemented in practice? Do you still compute and mask a dense attention matrix, or implement custom kernels to only compute attention where needed? 3. How important is the post-processing described in L254-257? Are these techniques commonly applied by methods on these leaderboards? 4. Could you please elaborate on L261-262, what does sampling 25% and 50% mean in this context? 5. Would it be possible to compare the inference time (Fig. 4) to a GNN method with relative pose encodings? Minor: 1. Have you tried adding the blue dots from Fig. 1b to Fig. 1a as well? This could make it clearer to understand which agents are being used in Fig. 1b. 2. Could Fig. 5 be simplified, in particular by removing the striking colors for the map elements? An alternative option would be to add a legend describing all colors. Update: Thank you for the detailed responses to all questions. The rebuttal addresses all of my concerns, and I would like to maintain my positive rating. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your helpful comments and suggestions! We kindly ask you to read our global response which discusses the comparison with GNN-based pairwise-relative methods and the long training time of our models. Now in this post we answer your questions as follows. --- > **Q1**: Is the training time of HPTR similar to Wayformer for the same number of training epochs? **A1**: No. Each training epoch of HPTR takes longer than Wayformer's. Given 5 days of training time, we can train our reimplementation of Wayformer for 110 epochs, whereas for HPTR it is 60 epochs. However, HPTR is more sample efficient. As shown in Table 2 of our submission, the performance of Wayformer at epoch 100 is roughly the same as HPTR at epoch 60. As a result, HPTR and Wayformer converge at roughly the same speed if measured in wall-clock time. The training of HPTR could be further accelerated by pre-computing and saving the relative poses during the dataset pre-processing. --- > **Q2**: How is the KNARPE operation implemented in practice? Do you still compute and mask a dense attention matrix, or implement custom kernels to only compute attention where needed? **A2**: We implement our custom multi-head attention with matrix indexing, summation and element-wise multiplication. Let $src\in \mathbb{R}^{B\times M\times D}$ and $tgt\in \mathbb{R}^{B\times N\times D}$, where $B$ is the batch size, $M$ is the length of $src$, $N$ is the length of $tgt$ and $D$ is the hidden dimension. The first step is to get the K-nearest-neighbor $tgt$ for each $src$, i.e. get $\text{tgt}_{knn} \in \mathbb{R}^{B\times M \times K \times D}$ by indexing $tgt$ based on the L2 distances $dist \in \mathbb{R}^{B\times M\times N}$ between $src$ and $tgt$. After that, we use element-wise multiplication instead of matrix multiplication to compute the attention matrix $A \in \mathbb{R}^{B\times M\times K}$.
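For illustration, the indexing-based attention described above can be sketched as follows. This is a simplified sketch, not our actual implementation: it assumes NumPy, drops the batch dimension, multi-head projections, masking, and the RPE of Eq. 1-3, and scores tokens with a plain dot product:

```python
import numpy as np

def knn_attention(src, tgt, K):
    """Each src token attends only to its K nearest tgt tokens.

    Simplified sketch: batch size 1, single head, no masking, no relative
    positional encoding; L2 distances are taken on the feature vectors as
    a stand-in for 2D positions.
    """
    M, D = src.shape
    # (M, N) pairwise L2 distances, then indices of the K nearest tgt per src.
    dist = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
    knn_idx = np.argsort(dist, axis=1)[:, :K]        # (M, K)
    tgt_knn = tgt[knn_idx]                           # (M, K, D) via indexing
    # Attention logits via element-wise multiply + sum instead of a dense
    # (M, N) matmul: scores[m, k] = <src[m], tgt_knn[m, k]> / sqrt(D).
    scores = (src[:, None, :] * tgt_knn).sum(-1) / np.sqrt(D)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over the K neighbors
    return (attn[:, :, None] * tgt_knn).sum(axis=1)  # (M, D) weighted values
```

Because the attention matrix is only of size M x K instead of M x N, memory and compute scale with the number of neighbors rather than with the full token set.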
All tensors have fixed shape in our implementation and we use masking to address missing tokens. Figure 1 in the PDF of the global response illustrates in detail how KNARPE is implemented in practice. We will add this figure and the implementation details of KNARPE to our appendix. --- > **Q3**: How important is the post-processing described in L254-257? Are these techniques commonly applied by methods on these leaderboards? **A3**: The non-maximum suppression post-processing can improve the mAP and soft mAP significantly. It is used by almost all methods submitted to the WOMD leaderboard. Our specific implementation follows MPA [1], which is one of the winners of the WOMD challenge 2022. --- > **Q4**: Could you please elaborate on L261-262, what does sampling 25\% and 50\% mean in this context? **A4**: We apologize for the confusion. This sampling is just because we want to perform validation, metrics logging and checkpoint saving more frequently. By sampling 50\% of the training dataset at each epoch, we effectively reduce the duration of each training epoch by 50\% but still ensure that all data from the training split is used for training. This is not a necessary step. Training for 60 epochs while sampling 50\% of the training dataset at each epoch is equivalent to training for 30 epochs while using the complete training split at each epoch. The difference is negligible in our case because our training runs for many epochs and all samples from the dataset are alike (they are from the same domain). The only effective difference is that the former logs the validation metrics twice as often as the latter, and the parameters of the learning rate scheduler should be changed accordingly if it schedules based on epoch numbers. --- > **Q5**: Would it be possible to compare the inference time (Fig. 4) to a GNN method with relative pose encodings? **A5**: Please refer to the global response for this question. --- **Others**: Thanks for the suggestions. 
We will add the blue dots from Fig. 1b to Fig. 1a in the camera-ready. We will also try to make Fig. 5 more readable in the camera-ready. Due to the limited space in the main paper, we will add the explanation of the visualization of Fig. 5 to the appendix. --- [1] Stepan Konev. Mpa: Multipath++ based architecture for motion prediction. arXiv preprint arXiv:2206.10041, 2022 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses to all questions. The rebuttal addresses all of my concerns, and I would like to maintain my positive rating.
Rebuttal 1: Rebuttal: ## Global Response We thank all reviewers for their helpful feedback. We are glad that the reviewers appreciate our technical contributions, specifically the KNARPE attention mechanism, the asynchronous token update and the efficiency comparison. Moreover, we are happy to see that the reviewers agree with us on the importance of identifying and solving the efficiency problems of motion prediction in autonomous driving. In this global response, we will address two concerns raised by multiple reviewers. --- ### 1. The efficiency comparison with other methods based on GNN and pairwise-relative representation. To the best of our knowledge, there are currently only two such methods, GoRela and HDGT. GoRela was published at ICRA 2023 and is not open-sourced, so we would need to reimplement it. However, the hyperparameters of its network, such as the hidden dimension and the number of layers of each component, are not disclosed in the GoRela paper. Without this information, we cannot reproduce its performance and conduct a fair efficiency comparison. After receiving the NeurIPS reviews, we immediately sent an email to the authors of GoRela asking for these implementation details, but we haven't received any response yet. As such, we cannot provide an efficiency comparison with GoRela during this rebuttal phase. In terms of performance, our method is on par with GoRela on the AV2 dataset. However, GoRela focuses on AV2 and does not provide any results on WOMD, whereas we focus on WOMD and tune all our hyperparameters on WOMD. We believe our performance on the AV2 leaderboard could be further improved given sufficient tuning on the AV2 dataset. Fortunately, HDGT was open-sourced two weeks ago, on 2023-07-20. HDGT was published at CoRL 2022 and in TPAMI 2023.
It does not perform as well as our method and GoRela, but since it is based on GNN and pairwise-relative representation, we believe the efficiency comparison with HDGT should address the reviewers' concern effectively. As shown in the left plot of Fig. 3 in the global response PDF, HDGT demonstrates good scalability in terms of GPU memory consumption because it uses the pairwise-relative representation. As shown in the middle plot of Fig. 3, in terms of offline inference speed, HDGT is slower than our HPTR, and it is actually even slower than our agent-centric Wayformer baseline. To confirm that HDGT runs correctly on our setup, in the right plot of Fig. 3 we reproduce the inference time of HDGT on the complete WOMD validation split with different validation batch sizes and compare the reproduced numbers with those reported in the TPAMI paper. The slow inference speed of GNN-based methods such as HDGT is mainly because the GNN libraries cannot utilize the GPU as efficiently as the basic matrix operations do. As shown in Fig. 1 of the global response PDF, our KNARPE is implemented with the most basic matrix operations (matrix indexing, summation and element-wise multiplication), hence it is better suited for real-time and on-board applications. --- ### 2. The long training time of our models. Our final models for the leaderboard submission are trained for 10 days and models for ablation and development are trained for 5 days. This long training time is because, on the one hand, WOMD is a very large-scale dataset, and on the other hand, we only use 4 RTX 2080Ti GPUs for the training. When comparing the wall-clock duration of training, the computational resources should be taken into consideration. As a reference, HDGT uses 8 V100 and trains for 4-5 days, GoRela uses 16 GPUs (model not specified but most likely A100/V100), MTR uses 8 RTX 8000, Wayformer uses 16 TPU v3 cores and ProphNet uses 16 V100.
All of these methods use far more, and more powerful, GPUs than we do. If we had their computational resources, the training time of our method could be reduced to 1-2 days. In terms of sample efficiency, our method is on par with other methods. As shown in Fig. 2 of the global response PDF, our HPTR converges after 15 epochs (5 days) and our final model is trained for 30 epochs (10 days) on WOMD. As a reference, HDGT is trained for 30 epochs, MTR is trained for 30 epochs and ProphNet is trained for 60 epochs on WOMD. Pdf: /pdf/c63789d2e1e7d42bb64a3b1e9155c25b794a0d55.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: -- Strengths: -- Weaknesses: -- Technical Quality: 3 good Clarity: 2 fair Questions for Authors: -- Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: -- Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Since this review does not provide any detailed comments, we will omit the rebuttal in this case.
Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms
Accept (poster)
Summary: This work focuses on simultaneously achieving regret minimization and best arm identification in multi-armed bandits, which is called Regret-Optimal Best Arm Identification (ROBAI). That is, the goal is to identify with high probability and as fast as possible / within a certain time frame the optimal arm, and play until round T. In practice, authors propose three algorithmic contributions, which have guarantees on the cumulative regret, the time before commitment to exploiting a single arm (called stopping time in the paper) and the error on the identification of the optimal arm. The first two algorithms are asymptotically regret-optimal for Gaussian bandits, and respectively tackle the pre-determined stopping time case and the adaptive stopping time case. They consist in exploring arms using the UCB principle, and committing to the arm which maximizes the LCB. The last one is a variant of the previous algorithms which achieves asymptotic regret optimality for subgaussian distributions, by leveraging KL confidence intervals. Strengths: - Originality: This paper tackles an interesting problem which has been seldom considered. Although the algorithms reuse well-known principles (UCB / LCB), their analysis allows proving very good results (regret optimality, interesting bound for adaptive “stopping” time). Table 1 is clear and shows that the work is well-grounded in prior literature. - Quality: The results seem technically sound, although I did not check the appendix in detail. - Clarity: The submission and the proof sketches are clearly written. - Significance: This paper provides and substantiates an interesting insight into the behavior of UCB algorithms with respect to over-exploration Weaknesses: - Significance: I am not 100% convinced about the real-life applications of this framework.
If the “stopping” time is fixed because of budget limits (predetermined setting), I don’t see what ROBAI brings more than classical regret minimization (i.e., why playing pessimistically at the end of the sampling phase is better), especially since it is not feasible in practice, as the related algorithm requires the knowledge of Delta_min to be optimal. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - What does ROBAI bring more than classical regret minimization (i.e., why playing pessimistically at the end of the sampling phase is better) in the predetermined stopping time setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: This paper deals with theoretical work, and does not raise significant concerns about negative impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the precious time they spent on reviewing our paper. We discuss the points raised by the reviewer below. ### Clarification on Pre-determined Stopping Time and Fixed-Budget ### We thank the reviewer for raising this question, and we would like to clarify how the pre-determined stopping time setting differs from the fixed-budget setting usually considered in the literature. The stopping time $T_c$ in our setting is chosen based on the parameters of the problem instead of being enforced by the environment. Generally, our algorithm is not designed for the fixed-budget setting because EOCP has to decide the stopping time based on the problem parameters. The goal of the fixed-budget setting is to find the best arm as accurately as possible with a given budget which does not adapt to the problem parameters. It is not clear whether our algorithm makes optimal decisions when the budget is different from the stopping time we need. For example, if the budget $T_c$ is smaller than our stopping time, then the algorithm may have to explore more aggressively than EOCP since the commitment needs to be made much earlier. However, if $T_c$ (the budget) is larger than the stopping time, then our algorithm can be used to stop earlier without using the full budget to achieve $T^{-1}$ confidence. It is an interesting question whether we should use the remaining budget for exploration or commit without further exploration. As we can see from the numerical example, over-exploration hurts the system's performance. This somewhat counter-intuitive observation could be an interesting implication for the fixed-budget setting. ### Comparison to Regret Minimization in the Pre-determined Stopping Time Setting ### We would also like to clarify how ROBAI differs from classic regret minimization.
In ROBAI, we care about the regret performance for the total horizon $[1,T]$, instead of just the regret before the pre-determined stopping time $[1:T_c]$. If there is no commitment requirement, the problem is the same as regret minimization with horizon $T$. However, since ROBAI requires the algorithm to commit as soon as possible while maintaining asymptotically optimal regret, and since EOCP uses $T_c = \mathcal{O}(\log T)$, which is much smaller, the agent must identify the optimal action with sufficiently high confidence ($T^{-1}$) to have good regret in commitment. Playing pessimistic actions after the sampling phase ensures selecting the best arm for commitment with this high probability. This guarantee cannot be achieved by simply playing a UCB action in commitment (optimistically selecting the arm which has the highest UCB). We conducted an empirical experiment to compare the accuracy of different commitment strategies:
```
| Algorithm              | Number of committing wrong arm | Error rate |
| ---------------------- | ------------------------------ | ---------- |
| EOCP                   | 11                             | 0.000011   |
| UCB Explore + UCB Comm | 17293                          | 0.0173     |
```
It can be witnessed that LCB provides better accuracy than simple UCB commitment, which in turn ensures low regret. ROBAI with time horizon $T$ is also different from regret minimization with horizon $T_c$ if $T_c$ is fixed due to budget limits. For regret minimization problems with horizon $T_c$, the theoretical regret limit is $\mathcal{O}(\log T_c) = \mathcal{O}(\log \log (T))$, which is much smaller than for ROBAI with time horizon $T$. To achieve this limit, the UCB algorithm should choose a different optimistic bonus (a bonus $b_t(a) = \sqrt{\frac{2l}{N_t(a)}}$ with an exploration function $l$ in the order of $\log T_c$ instead of $\log T$).
With this much smaller bonus, even though the regret of this algorithm will match $\mathcal{O}(\log \log (T))$, it will not be possible to identify the best arm with $T^{-1}$ confidence at time-step $T_c$ (even with LCB commitment), since the sub-optimal arms have not been explored enough and the empirical reward estimation is not accurate enough. So, ROBAI requires more aggressive exploration compared to regret minimization with horizon $T_c$, but it is less aggressive than regret minimization with horizon $T$, as there is an interesting over-exploration phenomenon shown in our Fig.1. We hope that our response addresses the reviewer's concerns about the contributions of ROBAI, and we are happy to answer any additional questions or concerns. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I thank the authors for their detailed answers. My apologies, I had misunderstood the pre-determined stopping time setting and the differences with regret minimization. However, the answer to the former question raises a concern (already mentioned by Reviewer T4df in Weakness #2 and left unaddressed in the rebuttal) of practicality due to the necessary knowledge of the minimal gap for that specific setting. As such, I will keep for now the score as it is. --- Reply to Comment 1.1.1: Comment: We are glad that our response clarifies the difference between the two settings. For the concern the reviewer raised (and the weakness #2 by reviewer T4df), we would like to comment that our algorithm only needs a value that is no larger than the minimum gap $\Delta_{\min}$ instead of the exact value. While it is not completely "model-free", this is much easier and more practical than knowing the exact value of $\Delta_{\min}$. For example, we don't need to use binary search routines to estimate the exact value of $\Delta_{\min}$ as suggested by Reviewer T4df. In practice, the agent can choose a small number $\epsilon$ to replace $\Delta_{\min}$ in the algorithm.
For any problem where $\Delta_{\min}$ is larger than $\epsilon$, the algorithm will be regret optimal and commits in $\mathcal{O} (\log T)$ rounds. In order to have an algorithm that works for all $\Delta_{\min},$ we proposed the adaptive stopping time setting and Algorithm 2, which however has a higher sample complexity. We thank the reviewer again for the positive review and quick response!
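To make the discussion in this thread concrete, the UCB-explore/LCB-commit scheme can be sketched in a few lines. This is a minimal illustration, not our exact EOCP algorithm: the stopping time constant, the bonus form $\sqrt{2\log T / N_t(a)}$, and the unit Gaussian noise are illustrative assumptions:

```python
import math
import random

def eocp_sketch(means, T, sigma=1.0, seed=0):
    """Explore with UCB until a pre-determined stopping time T_c = O(log T),
    then commit to the arm maximizing the LCB for the rest of the horizon.
    Returns (committed arm, cumulative pseudo-regret over [1, T])."""
    rng = random.Random(seed)
    K = len(means)
    Tc = min(T, int(8 * K * math.log(T)))  # assumed form of the stopping time
    n, s = [0] * K, [0.0] * K
    bonus = lambda i: sigma * math.sqrt(2.0 * math.log(T) / n[i])
    best = max(means)
    regret = 0.0
    for t in range(Tc):
        # Play each arm once, then follow the UCB index.
        a = t if t < K else max(range(K), key=lambda i: s[i] / n[i] + bonus(i))
        n[a] += 1
        s[a] += means[a] + rng.gauss(0.0, sigma)
        regret += best - means[a]
    # Pessimistic (LCB) commitment guards against the inflated indices of
    # under-explored arms.
    commit = max(range(K), key=lambda i: s[i] / n[i] - bonus(i))
    regret += (T - Tc) * (best - means[commit])
    return commit, regret
```

Since $T_c$ grows only as $\log T$, almost all of the horizon is spent in commitment, which is why the commitment rule must be correct with probability at least $1 - T^{-1}$.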
Summary: This paper studies the classical multi-armed bandits problem with the goal of designing algorithms that simultaneously achieve a tight regret bound and tight sample complexity for identifying the best arm. To this end, three algorithms are proposed based on upper confidence exploration and lower confidence commitment for three settings: Gaussian bandits with known and unknown suboptimality gap $\Delta_{\min}$, and general bandits when the suboptimality gap (in terms of KL divergence) is known. It is proved that all three algorithms enjoy a tight regret bound up to the constant in front of the dominating term ($\log T$), and they also have polylog sample complexity scaling linearly with $1 / \Delta_{\min}^2$. There is also a lower bound on the sample complexity of algorithms whose regret has a tight constant in front of the $\log T$ term. The lower bound states that the rate of sample complexity is tight when $\Delta_{\min}$ is given, and can also be tight when $\Delta_{\min}$ is unknown as long as the order of the remaining terms in the regret is smaller than $\log T$. Strengths: 1. The paper addresses the classical settings of regret minimization and best arm identification problems in bandit theory. The investigation of their combination opens up new avenues for fundamental methods in this field. The paper stands out for its clean and clear presentation, making it accessible to readers and facilitating understanding of the proposed techniques. 2. The paper introduces several novel techniques that enable the simultaneous bounding of regret and sample complexity. The incorporation of lower confidence bound (LCB) commitment following upper confidence bound (UCB) exploration is a particularly interesting departure from the traditional approach of UCB commitment. This novel technique offers fresh insights and potential improvements in bandit algorithms.
Additionally, the stopping criterion employed in the unknown gap setting is highly appreciated in theory, as it leverages the deep connections between the number of pulls across different arms and the total pulls of suboptimal arms. 3. The analysis of lower bounds in the paper sheds light on the fundamental limits and trade-offs involved in identifying the best arm. These findings provide valuable insights into the design of algorithms, emphasizing the need to balance regret and sample complexity. The paper's techniques offer the potential for achieving a near-perfect balance between these two factors in practical applications of multi-armed bandits. Weaknesses: 1. The unified achievement of both optimal regret and optimal sample complexity is an important problem in theory and application. However, I am not fully convinced of the motivation of this paper to study a constant-level tight regret bound but only a rate-level tight sample complexity. It somehow looks like a beam search over different settings in multi-armed bandits that are not fully addressed, since algorithms with both rate-optimal regret and rate-optimal sample complexity have been extensively studied. The vanilla multi-armed bandit is a fundamental model in theory, and the algorithms of multi-armed bandits provide prominent insights in many other problems. However, the UCB and LCB algorithms, which are conceptual in nature, require significant domain-dependent modifications to be applicable in real-world scenarios. This lack of naturalness in the setting detracts from the paper, despite the appreciation for the introduction of new techniques. 2. Two of the three proposed algorithms require the minimal gap to be known, which I believe is not very realistic in applications (except for synthetic tasks). There are two reasons for this.
Consider some tasks that aim to identify the user's favorite items: the expected reward is 1 if the item is liked by the user, and 0 otherwise, with some noise to model random attributes of the user. In this case, the gap is known and large, which greatly simplifies the problem. In general, it takes a very short period of time to find the favorite items even in the presence of noise, so it is totally acceptable that the regret is rate-optimal instead of constant-optimal. When the gap is not determined manually and is small, it is not likely that one can obtain the exact value of the gap. People often guess this gap by some binary search routines. However, Algorithm 2 suffers a $\log^2 T$ sample complexity, where $\log T$ is the dominating term. This means the overall sample complexity of this algorithm is not even rate-optimal, though the regret is constant-optimal. It would be better in this case to use other methods such as elimination to keep both regret and sample complexity rate-optimal. 3. The experimental results presented in the paper have limitations in their scope. Firstly, comparing and plotting cumulative rewards instead of regret would provide a more convincing demonstration, as the sum of rewards and regret is a linear function rather than a constant. Additionally, the experiments should be conducted in more diverse environments, including scenarios with a higher number of arms. The current results do not suggest a significant improvement over the vanilla UCB algorithm, raising questions about the practical advantage of the proposed EOCP and its variants (see the questions below). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why do you only consider the setting where the regret should be constant-optimal while the sample complexity can be rate-optimal? From the perspective of theory, is it possible to obtain an algorithm that is constant-optimal in both, or one with constant-optimal sample complexity and rate-optimal regret? 2.
How is the identification accuracy of the vanilla UCB algorithm at the step where it attains the same regret as EOCP? Is this accuracy comparable to the commitment accuracy of EOCP? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the precious time they spent on reviewing our paper. We discuss the points raised by the reviewer below. ### Motivation of Constant-level Tight Regret but Rate-level Tight Sample Complexity And Contributions ### Whether a constant-level tight regret bound is necessary, or whether a rate-optimal regret bound suffices, depends on the application. In some applications it is better to use elimination-based algorithms to maintain rate optimality for both measures. However, the applications that motivate our research are those where reward performance is more important than commitment time, even though commitment time is also important. One conceptual example would be investment strategies, where the reward is the profit of a strategy and the commitment time is when the company decides on the strategy. In this case, a commitment needs to be made as early as possible but the profit is more important, so it makes sense to take more time to make a better decision. The same also applies to people's occupation choices, where the reward is the potential career achievement and the commitment time is when one figures out which career he/she wants to pursue. It makes sense that achieving life goals is more important than choosing one's major early. We believe in these cases, rate optimality in regret is not satisfactory enough. The trade-off between constant-level regret optimality and commitment time we discovered in our paper sheds light on these applications. On the other hand, even though the upper bound of Algorithm 2 is $\log^2 T$, the empirical stopping time is quite close to Algorithm 1 (Fig.2). In general, we don't necessarily disagree with the comment raised in Weakness 2, but again, which regime is the best model to study is application-dependent.
Building upon the fundamental MAB model, one major contribution of our paper is to consider three fundamental questions of online learning in a single model: how to explore? when to commit? which arm to choose? Through our stylized model, we demonstrate that these three questions are fundamentally related. In the current literature, the first question is studied under online MAB, the third question is considered under offline MAB, and the stopping time has not been well studied directly. We hope our model motivates research interest in looking at these three questions together instead of individually. ### Both Constant Optimality in regret and sample complexity? ### It is possible to achieve constant-optimal sample complexity with rate-optimal regret. Take the Gaussian 2-armed bandit as an example: this can be done by an explore-then-commit algorithm which uniformly samples the two arms and starts commitment once enough statistical separation can be witnessed from the past samples with confidence $T^{-1}$, i.e., BAI-ETC [20]. Its sample complexity is asymptotically $8\log(T)/\Delta$, which is constant-optimal (compared to Theorem 1 [34]), and its regret is order-optimal (Theorem 5 [20]). It is not possible to achieve constant optimality for both, as they require different trade-offs between exploration and exploitation. According to (Theorem 1 [34]), any algorithm that achieves constant-optimal sample complexity in Gaussian 2-armed bandits should pull the two arms equally often in exploration in order to stop early, and this aggressive strategy completely ignores the exploration-exploitation trade-off needed to maintain low regret. Therefore, it incurs regret asymptotically larger than UCB algorithms (rigorously shown in [20]). Our paper is dual to [20], in the sense that [20] contributes to the regret performance of algorithms with constant-level optimality in sample complexity, while we contribute to the sample complexity of algorithms with constant-level optimality in regret.
### Identification Accuracy of the Vanilla UCB Algorithm ### The vanilla UCB algorithm does not output a best arm for us to commit to, and our proposed algorithm's exploration strategy is in fact the UCB algorithm. One could modify the vanilla UCB algorithm to stop at a certain time and then make a decision on the best arm, just as we do in our paper. The accuracy in this case will depend on the design of the stopping time and the decision rule. We remark that our major algorithmic contribution is to design such a stopping rule and an action identification rule using LCB. We illustrate the improvement of LCB by comparing EOCP to an algorithm which uses the UCB exploration strategy and the same stopping time as our algorithm (which results in the same regret during exploration), but uses UCB to identify the best arm instead of LCB. The error rates are compared below: ``` | Algorithm | Number of wrong commitments | Error rate | | ------------------------ | --------------------------- | ---------- | | EOCP | 11 | 0.000011 | | UCB Explore + UCB Comm | 17293 | 0.0173 | ``` The time horizon is $10^3$ and the experiment is run over $10^6$ iterations. LCB commitment is markedly more accurate than simple UCB commitment, and UCB commitment does not satisfy the required $T^{-1} = 10^{-3}$ error probability. ### Numerical Experiments with More Arms ### We conducted an experiment in a $4$-armed bandit model to compare our algorithms to existing algorithms in the literature, including an Action Elimination algorithm. The results are presented in our submitted rebuttal PDF file. In general, the observations from the 2-armed bandit models can still be witnessed in this $4$-armed bandit model, which demonstrates that the regret improvement of our algorithm and the over-exploration phenomenon are not unique to the 2-armed case. We hope that our response addresses the reviewer's concerns and are happy to answer any additional questions or concerns. 
We would also be grateful if the reviewer could consider reevaluating the review and rating based on our response. --- Rebuttal 2: Comment: Dear Reviewer T4df: We want to follow up to see whether our response addresses your concerns. Please don't hesitate to let us know if you have any other questions/comments. Thanks! We also want to comment on Weakness #2 raised by the Reviewer. ### Requiring a Known Minimum Gap ### Regarding Weakness #2, we would like to point out that our algorithm only needs a value that is no larger than the minimum gap $\Delta_{\min}$, not the exact value. While it is not completely "model-free", this is much easier and more practical than knowing the exact value of $\Delta_{\min}$. For example, we don't need binary search routines to estimate the exact value of $\Delta_{\min}$ as suggested by the Reviewer. In practice, the agent can choose a small number $\epsilon$ to replace $\Delta_{\min}$ in the algorithm. For any problem where $\Delta_{\min}$ is larger than $\epsilon$, the algorithm will be regret optimal and will commit in $\mathcal{O}(\log T)$ rounds. In order to have an algorithm that works for all $\Delta_{\min}$, we proposed the adaptive stopping time setting and Algorithm 2, which however has a higher sample complexity. On the other hand, according to our Theorem 3 in the adaptive stopping time setting, it is not possible to maintain constant-optimal regret and commit within $\mathcal{O}(\log T)$ rounds at the same time. --- Rebuttal Comment 2.1: Comment: I thank the authors for your efforts to address my concerns. I really appreciate the additional experimental results that illustrate the effectiveness of the LCB commitment strategy in a UCB algorithm. I do agree this is a technically solid theory paper studying an important fundamental problem in bandits, so I decided to slightly raise my score. However, I am still concerned with the motivation and setting of the paper. 
The UCB (or LCB) is a conceptual algorithm useful in theory, but industry uses a much more complicated algorithm pipeline based on the principle of UCB (or LCB). Given the motivating example of the paper, I am not sure why requiring the regret to be constant-level optimal is so important **in a theory paper**. Moreover, the reply to Weakness #2 seems to contradict the purpose of the paper. Say you have an $\epsilon$ in your algorithm that equals $\Delta_{min}/2$; then the regret would not be constant-level optimal anymore, since now you have a dependency like $2\log(T)/\epsilon = 4\log(T)/\Delta_{min}$. I hope the authors can give some intuition on how to choose $\epsilon$ so as to keep the regret constant-level optimal. --- Reply to Comment 2.1.1: Comment: We are glad that our additional experimental results address the concerns. We thank the reviewer for the additional comments and for raising the score. We would like to clarify that substituting $\Delta_{\min}$ with $\epsilon$ will not affect the regret performance in Theorem 1. Namely, with $\epsilon$ smaller than $\Delta_{\min}$ in the algorithm, the regret performance remains $2\log(T)/\Delta_{\min}$. However, the sample complexity result (Corollary 1) will be affected: instead of an $\mathcal{O}(\log(T)/\Delta_{\min}^2)$ commitment time, it becomes $\mathcal{O}(\log(T)/\epsilon^2)$. The regret is still constant-level optimal.
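The substitution of $\Delta_{\min}$ by a smaller $\epsilon$ can be sketched in a few lines of Python (a toy illustration, not the authors' implementation; the constant 32 in $T_c$ and the $\sqrt{2\log T/N}$ confidence radii are our assumptions). Shrinking $\epsilon$ only inflates the pre-determined commitment time $T_c = O(\log T/\epsilon^2)$; the exploration rule and the committed arm are unchanged:

```python
import math
import random

def eocp_predetermined(means, T, eps, seed=0):
    """Toy EOCP sketch: optimistic (UCB) exploration up to a
    pre-determined stopping time T_c = O(log T / eps^2), then
    pessimistic (LCB) commitment; eps is any lower bound on the gap."""
    rng = random.Random(seed)
    K = len(means)
    counts, sums = [0] * K, [0.0] * K

    def pull(a):
        counts[a] += 1
        sums[a] += rng.gauss(means[a], 1.0)

    def bonus(a):
        return math.sqrt(2 * math.log(T) / counts[a])

    T_c = min(T, math.ceil(32 * math.log(T) / eps ** 2))  # pre-determined
    for a in range(K):            # initialize: pull each arm once
        pull(a)
    for _ in range(K, T_c):       # optimistic UCB exploration
        pull(max(range(K), key=lambda a: sums[a] / counts[a] + bonus(a)))
    # commit pessimistically via LCB at the pre-determined time
    best = max(range(K), key=lambda a: sums[a] / counts[a] - bonus(a))
    return best, T_c

# eps = 0.25 under-estimates the true gap 0.5: commitment comes later
# (T_c grows as 1/eps^2) but the committed arm is unchanged
arm, T_c = eocp_predetermined([1.0, 0.5], T=10_000, eps=0.25)
```

Running the sketch with `eps=0.5` (the true gap) versus `eps=0.25` commits at a smaller `T_c` in the first case, while both settings commit to the same arm.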
Summary: The work studies how to design an algorithm with an asymptotically optimal regret rate such that it will also commit to the best arm with high probability after a stopping time (e.g., O(log T)). The paper proposes two algorithms in the Gaussian bandits setting: one with a pre-determined stopping time (EOCP), which requires knowledge of the minimum reward gap, and another with an adaptive stopping time (EOCP-UG). The authors prove both algorithms are asymptotically optimal and will commit to the best arm with confidence O(1/T) after O(log(T)) and O(log^2(T)) rounds, respectively. The paper further shows the corresponding lower bounds for commitment times (expected stopping times) and finds EOCP is sample optimal and EOCP-UG is nearly sample optimal. In addition, the authors extend EOCP to KL-EOCP for general bandits, whose commitment time also matches the lower bound. Strengths: 1) In general, the studied question is interesting and the paper is well-written and easy to follow. 2) The proposed algorithms are novel and based on well-designed stopping rules. 3) The theoretical results and the proofs are clear. The numerical results also show the advantages of the proposed algorithms compared to the benchmark algorithms, including UCB and BAI algorithms in the literature. Weaknesses: 1) The problem setting is the fixed-confidence BAI setting (note that for EOCP, the stopping time is pre-determined by the algorithm instead of by the environment). It would be great if the authors could give some discussion of the fixed-budget setting (where the stopping time is given), such as the challenges of extending the proposed algorithms. 2) In the general bandits setting, the authors only extend EOCP to KL-EOCP, which needs the KL_min of arms. The authors claim that if KL_min is not known, an extension of EOCP-UG can deal with this setting, but there is no pseudo-code or provable result for it. More details or discussion would be helpful. 
Minor comments: In Table 1, it might be helpful to add another column ‘Setting’, since ‘Optimality’ is a bit confusing for explaining ‘Gaussian’ or ’General’. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the precious time they spent reviewing our paper. We discuss the points raised by the reviewer below. ### Fixed-budget Setting ### We thank the reviewer for the great question! Generally, our algorithm is not designed for the fixed-budget setting because EOCP has to decide the stopping time based on the problem parameters. In the fixed-budget setting, the goal is to find the best arm as accurately as possible within the given budget. The budget does not adapt to the problem parameters, but the decision and learning process do. The challenge of extending our algorithm to the fixed-budget setting is that it is not clear whether our algorithm makes optimal decisions when the budget differs from the stopping time we need. For example, in a two-armed Gaussian bandit, if the budget $T_c$ is much smaller than our stopping time, then the algorithm may have to explore more aggressively than EOCP since the commitment needs to be made much earlier. However, if $T_c$ (the budget) is larger than the stopping time, then our algorithm can be used to stop early without using the full budget. It is an interesting question whether we should use the remaining budget for exploration or commit without further exploration. As we can see from the numerical example in the paper, over-exploration hurts the system's performance. This somewhat counter-intuitive observation could be an interesting contribution (or at least an implication) for the fixed-budget setting. ### KL-EOCP with Unknown Gap ### We provide short pseudo-code and theoretical guarantees below: *** > ### Algorithm: KL-EOCP-UG ### > 1: Initialize by pulling each arm once. 
> 2: **While** $\max_a \min_{a'} N_{t-1}(a) - l N_{t-1}(a')\leq 1$ **do**
> 3: &nbsp;&nbsp;&nbsp;&nbsp; take action $A_{t+1} = \arg\max_a\mathsf{UCB}_{t-1}(a)$
> 4: **end while**
> 5: Let $T_c = t-1$, $\hat{a} = \arg\max_{a} \mathsf{LCB}_{T_c}(a)$
> 6: Commit to $\hat{a}$ for the rest of the time horizon.
*** where $\mathsf{UCB}_t(a)$ and $\mathsf{LCB}_t(a)$ are defined according to Lines 4 and 7 in Algorithm 3. We can also derive the following theoretical guarantee for this algorithm, which requires additional assumptions on the $b(\theta)$ function defining the KL-divergence (Line 270): *** > ### Asymptotic Optimality of KL-EOCP-UG ### > If we choose $l = \log(T) + 4\sqrt{2\log(T)}$ and suppose $b(\theta)$ is strongly convex and smooth, the expected regret of the KL-EOCP-UG algorithm is asymptotically upper bounded by: > $\limsup_{T\to\infty} \frac{Reg(T)}{\log T} \leq \sum_{a:\Delta_a >0} \frac{\Delta_a}{\mathsf{KL}(\mu_a,\mu_1)}$ *** ### Table 1 ### We will add another column to Table 1 to make sure there is no confusion. We thank the reviewer for pointing it out. We hope that our response addresses the reviewer's questions regarding the fixed-budget setting and the KL-EOCP algorithm with unknown gaps, and we are happy to answer any additional questions or concerns. We would also be grateful if the reviewer could re-evaluate the rating and review based on our response. --- Rebuttal 2: Comment: Dear Reviewer WGiE, We want to follow up to see whether our response addresses your concerns on fixed-budget bandits and the algorithm in general bandit settings. Please don't hesitate to let us know if you have any other questions/comments. Thanks! --- Rebuttal 3: Comment: Dear Reviewer WGiE, We want to follow up to see whether our response addresses your concerns. We are happy to answer any other questions/comments. Thanks!
Summary: The paper delves into the study of the 'Explore Then Commit' (ETC) policy, where the algorithm is divided into two stages: exploration and commitment. During exploration, the algorithm is permitted to switch actions, while the commitment phase restricts the algorithm to pulling only the commit arm. The primary objective of the algorithm is to reduce the expected commit time, denoted as $T_c$, and ensure that at time $T_c$, $\hat a \neq a^*$ holds true with a probability of $O(1/T)$. Three variations of the ETC policy are introduced in this paper: EOCP, EOCP-UG, and KL-EOCP. For EOCP, it's structured for a known gap setting, achieving regret in the order of $2\log T/\Delta_i$. The authors assert that this outcome is asymptotically optimal. However, this claim appears to be incorrect. As per reference [On Explore-then-Commit Strategy], the asymptotic optimal regret for a known gap should be $\log T/(2\Delta_i)$. Furthermore, the authors have not discussed this setting's reference in a comprehensible manner. For instance, DETC with a known gap achieves exact asymptotic optimality. For EOCP-UG, the authors show that it achieves asymptotic optimality with a commit time of $\log^2 T$. As for KL-EOCP, it necessitates knowledge of certain parameters for the design of the pre-determined stopping time. In summary, the claim that the EOCP and KL-EOCP algorithms are asymptotically optimal appears to be overstated. The upper bound for EOCP-UG and its associated lower bound is the same as DETC. As for the lower bound, the assumption seems excessively strong. The authors did not clearly indicate whether algorithms must adhere to the equation given in Line 241. Primarily, this paper targets two-armed bandit problems, focusing on asymptotic regret. However, I am intrigued by the finite-time bound. I would appreciate an algorithm that is not only asymptotically optimal but also demonstrates a robust finite-time bound. 
In addition, the paper only considers a fixed horizon $T$ setting. What about the case with an unknown $T$? From my understanding, DETC and its variant [Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits] are also suitable for an unknown $T$ setting. Furthermore, the necessity of the assumption in Line 141 is not clearly articulated and requires clarification. Regarding experimental results, the performance of ETC and DETC presented in this paper is inconsistent with previous papers. This discrepancy might be due to parameter adjustments in these algorithms. I would appreciate seeing empirical results with parameter tuning for both ETC and DETC. ---- # At the end of discussion phase The authors fail to address my primary concerns. Below, I elaborate on these issues: 1. Line 174 of the manuscript asserts that in the pre-determined setting (where the gap $\Delta$ is known), $\frac{2\log T}{\Delta}$ is asymptotically optimal. This claim is misleading. Algorithm 4 in reference [21] shows that in the pre-determined setting with $T_c = \log^2 T$, there exists an algorithm with a regret of $\frac{\log T}{2\Delta}$. Moreover, the known lower bound (Theorem 6 in [20]) for the known gap setting is $\frac{\log T}{2\Delta}$. Therefore, stating $\frac{2\log T}{\Delta}$ as asymptotically optimal is an overstatement. In their response, the authors attempt to equate the lower bound of the pre-determined setting with the lower bound of the known gap setting in the ETC strategy [20]. This comparison is problematic and has the potential to mislead other readers. Specifically, Table 1 in the authors' response erroneously claims that $4\log T/\Delta$ is the lower bound for the known gap setting, whereas this is actually the lower bound proven for the **unknown gap** setting in Theorem 4 of [20]. 
Additionally, they assert that $2\log T/\Delta$ is the lower bound for the known gap and pre-determined settings, which contradicts Algorithm 4 in [21], which achieves regret $\frac{\log T}{2\Delta}$. 2. As I pointed out in my initial review, DETC [21] (Algorithms 5 and 2) can be directly applied to the unknown gap setting. The results for DETC in the unknown gap setting are identical to those presented in this paper. Given the paper's significant overstatements and its limited contributions, I am revising my score from 4 to 3. Strengths: In this paper, the authors propose an algorithm called Explore Optimistically then Commit Pessimistically (EOCP) to solve the Regret Optimal Best Arm Identification (ROBAI) problem. It first uses an optimistically modified UCB algorithm to explore actions with a slightly larger exploration function, and then commits to an action according to a pessimistic LCB rule when the exploration ends. The main contributions include designing new stopping rules with both a pre-determined stopping time (vanilla EOCP) and an adaptive stopping time (the EOCP-UG variant), which provably balance the trade-off between regret minimization and optimal action identification. Weaknesses: Please refer to the Summary. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the Summary. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: My main concern is that the proposed algorithm considers asymptotic bounds rather than finite $T$ (probably unknown). This limits the contribution of the paper heavily. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the comments/suggestions and include a detailed response below. We also would like to point out a couple of misunderstandings in the preliminary review (see 1 and 2 below). ### Limitation of Targeting Two-armed Bandits ### The main theorems (Theorems 1, 2, and 4) of our paper are for general MAB with more than 2 arms. Even though our Theorem 3 focuses on the 2-armed case, it can be generalized to models with more than two arms. ### Limitation of Asymptotic Regret and Lack of Finite-time Bounds ### Our asymptotic regret guarantees (Theorems 1, 2 and 4) are derived from finite-time bounds, specifically, the bounds in Theorems 5, 6 and 7 in the complete version (included as the supplementary material of the original submission). We chose to keep the asymptotic results in our main body due to their simplicity, but we are happy to add the finite-time bounds to the main body as well. ### Unknown $T$ ### Any algorithm for ROBAI should satisfy two properties: (1) the termination time $T_c$ of exploration (the commitment time) is a stopping time (either pre-determined or dependent on the samples before $T_c$); (2) the confidence of the best arm is $T^{-1}$, in order to guarantee regret optimality. Theoretically, it is impossible to design an ETC algorithm satisfying the above two properties without knowing $T$, because no matter when the agent stops exploration and what confidence $\delta$ it holds, an adversary can always choose $T$ large enough such that $\delta > T^{-1/2}$, which violates property (2). The variant of DETC works for an unknown $T$ because its exploration termination time is not a valid stopping time. In their algorithm, they guess $T$ in epochs. In the $r$-th epoch, they assume $T=2^r$ and perform an exploration period followed by a commitment period. 
If after $t=2^r$ the interaction continues, they re-guess $T=2^{r+1}$ and consider the commitment period in the $r$-th epoch as part of the exploration. In other words, the "commitment" continues to change in their algorithm. For our problem, once a commitment is made, the agent cannot change the decision. However, if we don't require $T_c$ to be a stopping time and allow the agent to change the commitment as in DETC, it is straightforward to adapt our algorithms to the unknown $T$ setting using the exact same idea. The theoretical guarantees can also be straightforwardly extended. ### Overstating Asymptotic Optimality in the Pre-determined Setting ### We respectfully disagree with the reviewer on this because the pre-determined setting is more difficult than the known gap setting studied in [20,21], and thus $\log T/2\Delta$ is not a tight lower bound. (1) Algorithms in the known gap setting, e.g., SPRT-BAI [20] and DETC-KG [21], use the exact value of $\Delta$ to design all sampling, stopping, and action decision rules so that the regret performance can be lower than $2\log T/\Delta$. Changing $\Delta$ to any other value (even a slight mismatch) would harm the performance. This is also implied by the proof of Theorem 6 from [20]: one needs the exact value of $\Delta$ to achieve the $\log T / 2\Delta$ lower bound. Our algorithms in the pre-determined setting only require a value smaller than $\Delta$ and only use it in the stopping time. The theoretical regret performance would still be the same if we under-estimate $\Delta$. (2) In the known gap setting, the algorithm can use reward samples from exploration to design the stopping time, but in the pre-determined setting, $T_c$ must be pre-specified before the exploration starts. Based on the two comparisons above, we believe $\log T/2\Delta$ is not achievable in the pre-determined setting. In fact, this setting is more comparable to the fixed-design setting [20], whose regret lower bound is $4\log T/\Delta$. 
But it requires all actions to be pre-specified before exploration, which is harder. Overall, we believe $2\log T/ \Delta$ is a more valid lower bound since the algorithms do not entirely depend on $\Delta$, i.e., any hyper-parameter lower than $\Delta$ would suffice. We compare all scenarios as follows: ``` | Setting | Lower Bound | Knowledge of $\Delta$ (Where it is used) | Stopping Rule | | ----------------------- | ----------------- | -------------------------------------------- | ------------- | | Fixed-Design | 4 \log T / \Delta | Yes (Stopping Time) | Pre-determined| | Known Gap | \log T/ 2\Delta | Yes (Sampling, Stopping, Arm Identification) | Adaptive | | Unknown Gap | 2\log T/\Delta | No | Adaptive | | Pre-determined Stopping | 2\log T/\Delta | Lower bound of $\Delta$ (Stopping Time) | Pre-determined| | Adaptive Stopping | 2\log T/\Delta | No | Adaptive | ``` ### Lower Bound Condition ### We require all ROBAI algorithms to satisfy the condition in line 241. It is not excessively strong because ROBAI focuses on asymptotically regret-optimal algorithms, i.e., algorithms whose regret-dominating term is $2\log T/\Delta$. These algorithms naturally satisfy the condition in line 241 with a $c<1$. ### Empirical Results with Parameter Tuning ### We tuned the parameters for BAI-ETC and DETC, and the results are presented in our uploaded PDF file. After tuning $T_1$ and the confidence bounds, the performance of DETC beats BAI-ETC and almost matches UCB, but not our algorithms. Given that we indeed provide general results for bandits with more than two arms and finite-time bounds for all algorithms in the original paper, which address the main limitations the reviewer is concerned about, we would appreciate it if the reviewer could re-evaluate the rating and review based on our response. We are happy to address any additional questions and concerns. 
--- Rebuttal Comment 1.1: Comment: I'm skeptical about the claim of asymptotic optimality in the pre-determined setting as presented in Line 174. In Algorithm 1, the gap $\Delta$ is utilized to define the parameter $T_c$. This leads me to question the assertion that EOCP is asymptotically optimal. A more fitting comparison would be with the known gap lower bound rather than the unknown gap lower bound. In their response, the authors draw parallels between this setting and the fixed-design setting referenced in [20], where the regret lower bound is given as $4\log T/\Delta$. I find this comparison misleading for several reasons: 1. The ETC strategy in [20] ensures that both arms are pulled an equal number of times up until the point of commitment. In contrast, the algorithms examined in this paper allow varying numbers of pulls for each arm. As a result, the lower bound outlined in [20] for ETC isn't applicable to the present study. Invoking the result $4\log T/\Delta$ in this context is not justifiable. 2. Algorithm 1 inherently caters to the known gap setting since it mandates the specification of $T_{c}$ based on the known gap $\Delta$. In contrast, $4\log T/\Delta$ is recognized as the lower bound for the ETC strategy with an unknown gap. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response, and we want to point out the following misunderstanding regarding our rebuttal and our paper. ### Utilizing $\Delta$ in Designing Parameter $T_c$ ### We want to point out that our algorithm **only requires a lower bound of $\Delta$** to define $T_c$, not the exact value. While it is not completely "model-free", this is much easier and more practical than knowing the exact value of $\Delta$. This lower bound does not provide all the information that $\Delta$ carries. In practice, the agent can choose a small number to replace $\Delta$ in the algorithm. 
For any problem where $\Delta$ is larger than $\epsilon$, the algorithm will have exactly the same performance as in Theorem 1 and Corollary 1. On the other hand, the ETC strategy in [20] in the known gap setting requires **the exact value of $\Delta$**, and replacing it with any other value would either harm the best arm identification accuracy or harm the sample complexity and regret. Based on this essential difference and the fact that our algorithm does not require the exact value of $\Delta$, we do not consider it fair to compare our Algorithm 1 to the lower bound $\log T/ 2\Delta$. ### Comparison to the Fixed-Design Setting ### It is true that in the fixed-design setting, all arms need to be pulled uniformly and the commitment time is pre-determined, which results in the $4\log T/ \Delta$ regret. The reason why our Algorithm 1 can achieve lower regret than $4\log T/ \Delta$ is the adaptive arm sampling rule. However, our design of the commitment time is still pre-determined. We said that our pre-determined setting is more similar to the fixed-design setting than to the known gap setting. By the reviewer's own argument, the known gap setting lower bound $\log T/2\Delta$ is just as inapplicable to the present study as the fixed-design lower bound $4\log T/\Delta$. The full comparison of all settings is presented in the original rebuttal. Given that we indeed provide general results for bandits with more than two arms and finite-time bounds for all algorithms in the original paper, which address the main limitations the reviewer is concerned about, we would appreciate it if the reviewer could re-evaluate the rating and review based on our response. --- Rebuttal 2: Comment: Dear Reviewer tQKP: We want to follow up to see whether our response addresses your concerns. Please don't hesitate to let us know if you have any other questions/comments. Thanks! 
--- Rebuttal 3: Comment: Dear Reviewer tQKP: We want to follow up to see whether our response addresses your concerns and we are happy to answer any additional questions/comments. Thanks! --- Rebuttal 4: Comment: Dear Reviewer tQKP: We want to follow up to see whether our response addresses your concerns. We are happy to answer any additional questions/comments. Thanks!
Rebuttal 1: Rebuttal: We thank the reviewers for their precious time spent reviewing our paper. To address the questions and concerns raised in the preliminary reviews, we present additional numerical results in the uploaded PDF file. Due to the limited time, we can only provide results for Gaussian bandits; the results for Bernoulli bandits will be added in the revision. In the PDF file, Fig. 1 compares EOCP with existing algorithms in the literature under bandit models with more than 2 arms, and Fig. 2 compares EOCP with the tuned versions of DETC and BAI-ETC. Numerical results show that EOCP and its variants still outperform existing algorithms, with a similar trend to that shown in Fig. 1 of our original paper. The extended results demonstrate EOCP's ability to generalize to more complex settings. We hope that our response addresses the reviewers' questions regarding the numerical results of EOCP, and we are happy to answer additional questions and concerns. Pdf: /pdf/011cf66f79f4a053c3b7067b87c0b17761a3cf80.pdf
NeurIPS_2023_submissions_huggingface
2023
BiMatting: Efficient Video Matting via Binarization
Accept (poster)
Summary: The authors propose the first binarized video matting network, namely BiMatting. They first analyze the bottlenecks of directly binarizing video matting models and propose an accurate and efficient binarization method. Comparing with full-precision neural networks and other binarization methods, the authors confirm the effectiveness and great potential of BiMatting. It is worth noting that this paper constructs, for the first time, a binary backbone that achieves acceleration similar to binarized MobileNetV3, a super-lightweight architecture, and realizes a practical application in video matting. The proposal and application of this binary network mean that the practicality of binarization has been significantly improved in general. Strengths: The authors propose the first binary neural network for video matting, reducing computational cost significantly while retaining practical accuracy. Since video matting tasks usually run on resource-constrained devices, this research is practically significant. In terms of method, the authors successfully design an ultra-lightweight video matting network through binarization and give it significantly improved accuracy. In general, the authors' method tightly combines the video matting task and the architecture. 1. The binary backbone with SBB is interesting, and I think it is the major contribution of this paper. Compared with existing binarization schemes, the authors create a backbone network with reasonable accuracy and efficiency comparable to binarized MobileNetV3 through careful design of the architecture. This is very important for the practical application of binary neural networks. 
Moreover, the authors’ motivation (or paradigm) for designing the binary backbone network is a general contribution to the binarization community, namely that "the crucial paradigm of an accurate binarized encoder is the computation-dense form of binarized block". This motivation can explain the success of binary networks such as Bi-Real and IR-Net, and may lead to more lightweight binary networks with higher accuracy. 2. Using representations of different scales, SAB successfully uses sparse masks to further reduce the computational load of the binary decoder without significant performance degradation. This demonstrates that the representational ability of the binarized computational unit allows the model to exploit less informative representations. The experimental and video results are solid. The accuracy of BiMatting not only far exceeds the binarized versions of existing models but also exceeds full-precision matting models such as BGMv2 while achieving significant storage savings. The visualization results also show that BiMatting has improved significantly in detail. Weaknesses: 1. For the backbone with SBB, the authors should provide more detailed information. First of all, I suggest the authors give a more detailed ablation of efficiency, including the FLOPs and the number of parameters of the directly full-precision/binarized MobileNetV3 and the SBB backbone. In addition, since the authors add the pre-training process on the ImageNet dataset as stage 0, they should also provide pre-training ablation results on this dataset, which would clarify the performance improvement of the binarized backbone alone. 2. For SAB, a strange phenomenon is that the results in Table 1 show that using SAB in the directly binarized matting model causes a crash. Although SAB does not directly target the accuracy bottleneck, it even makes the model worse than direct binarization. Can the authors explain this phenomenon? 3. 
In Table 2, ReCU and BNN present the same collapse; however, as reported in their original papers, the former usually performs significantly better than the latter on ImageNet, even better than methods such as DoReFa-Net. I suggest the authors discuss this phenomenon and report the ImageNet pre-training accuracy of these binary backbones. 4. I suggest the authors discuss the possibility of BiMatting's actual deployment, including how to deploy it on edge devices and the achievable hardware inference performance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see weaknesses --After rebuttal-- Thanks for the detailed response, which has well addressed my concerns. I also read the other reviewers' comments and the authors' responses. I am satisfied with the rebuttal and increase the rating score. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discuss the limitations of their work in section 4.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: For the backbone with SBB, the authors should provide more detailed information. First, I suggest the authors give a more detailed ablation on efficiency, including the FLOPs and the number of parameters of the full-precision/binarized MobileNetV3 and SBB backbones. In addition, since the authors add the pre-training process on the ImageNet dataset as stage 0, they should also provide pre-training ablation results on this dataset, which would clarify the performance improvement of the binarized backbone alone.

**A1**: We thank you for your attention and provide detailed information on BiMatting following your suggestions. We first compare the FLOPs and number of parameters of the full-precision/binarized MobileNetV3 and SBB backbones, as well as their pre-training accuracy on ImageNet, in Table A1. The results show severe performance degradation under direct binarization of MobileNetV3, which directly causes the performance degradation of binarized video matting models. Our SBB backbone in BiMatting achieves significantly improved pre-training performance (56.1% vs. MBV3-BNN's 24.62%) with an 8.9x speedup and a 21.4x compression ratio.

Table A1 (*Table 4 of the attached PDF*): Pre-training results comparison on ImageNet.

| Backbone | \#Bit | \#FLOPs(G) | \#Param(M) | \#Accuracy@1 |
|-------------|-------|------------|------------|--------------|
| MBV3 | 32 | 1.07 | 11.34 | 63.00 |
| MBV3-BNN | 1 | 0.07 | 0.46 | 24.62 |
| MBV3-DoReFa | 1 | 0.07 | 0.46 | 22.38 |
| MBV3-ReCU | 1 | 0.08 | 0.53 | 32.76 |
| SBB | 1 | 0.12 | 0.53 | 56.09 |

> **Q2**: For SAB, a strange phenomenon is that the results in Table 1 show that using SAB in the directly binarized matting model causes a collapse. Although SAB does not directly target the accuracy bottleneck, it even makes the model worse than direct binarization. Can the authors explain this phenomenon? 
**A2**: We hereby explain this phenomenon. As mentioned in Sec 4.1, L279 of our paper, the reason for the poor results of RVM + SAB is that SAB is designed to break the efficiency bottleneck rather than the accuracy one, so pairing it with the original encoder can even result in poorer accuracy. Our SAB reduces the computation of the decoder by masking the repeated, intensive computation of continuous regions, which means that the remaining (unmasked) regions of the feature must provide enough effective information, and these features are provided by the encoder. Fig. 3 shows that when using the original encoder (directly binarized MobileNetV3), even if all other parts are restored to their full-precision counterparts, the performance still drops significantly. Therefore, the collapse is expected when we use the original binarized encoder together with our efficient and lightweight (binarized) SAB decoder.

> **Q3**: In Table 2, ReCU and BNN present the same collapse; however, as reported in their original papers, the former usually performs significantly better than the latter on ImageNet, even better than methods such as DoReFa-Net. I suggest the authors discuss this phenomenon and report the ImageNet pre-training accuracy of these binary backbones.

**A3**: We further show the ImageNet pre-training results in Table A1, following the reviewer's suggestion. From these results, we can see that the backbone network of RVM-ReCU is significantly better than that of RVM-BNN. These results imply that the ReCU binarization method fails in the video matting task, although it is among the SOTA methods in the ImageNet evaluation. Existing work [1] also shows that transferring SOTA binarization methods across tasks is not straightforward: for example, ReCU also leads to model collapse on 3D ShapeNet and GLUE benchmark tasks. 
This further verifies our motivation, *i.e.*, that accurate and efficient binarization models should be tailored to the video matting task.

[1] Qin H, et al. BiBench: Benchmarking and Analyzing Network Binarization. ICML 2023.

> **Q4**: I suggest the authors discuss the possibility of BiMatting's actual deployment, including how to deploy it on edge devices and the achievable hardware inference performance.

**A4**: The deployment of 1-bit BiMatting is supported by open-source libraries such as Larq [2] and daBNN [3] for ARM devices. Referring to the daBNN library and the performance of the binarized operators it implements on the Raspberry Pi 4B, we believe that the binarized BiMatting can achieve the speedup and compression claimed in our paper on such hardware.

[2] Geiger L, Team P. Larq: An open-source library for training binarized neural networks. JOSS, 2020.

[3] Zhang J, et al. daBNN: A super fast inference framework for binary neural networks on ARM devices. ACM MM, 2019.

--- Rebuttal Comment 1.1: Title: The response well addresses my concerns Comment: I thank the authors for their detailed response; the authors put a lot of effort into answering the reviewers' questions. Considering all reviewers' comments and the authors' responses, I can confirm that BiMatting's contributions, especially its proposed lightweight binarized backbone, may have a broad impact on the field of binarization research in the future. The reproducibility of this work is also great, and I look forward to the authors releasing their complete training code and pre-trained models in the final version. So I would like to raise my score.
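To make the mechanism described in A2 concrete — running the expensive decoder branch only on unmasked (informative) regions and reusing a cheap path elsewhere — here is a minimal NumPy sketch. The variance-based mask criterion and the `heavy`/`cheap` stand-in branches are illustrative assumptions, not the paper's exact SAB operators.

```python
import numpy as np

def sparse_decode(feat, mask, heavy_fn, cheap_fn):
    """Apply heavy_fn only where mask is True; use cheap_fn elsewhere.

    feat: (C, N) flattened feature map; mask: (N,) boolean.
    Illustrative stand-in for sparsity-assisted decoding: the dense
    branch is skipped on low-information regions to save FLOPs.
    """
    out = cheap_fn(feat).copy()
    out[:, mask] = heavy_fn(feat[:, mask])
    return out

rng = np.random.default_rng(0)
feat = rng.random((8, 64)).astype(np.float32)
var = feat.var(axis=0)
mask = var > np.median(var)      # keep roughly the more informative half

heavy = lambda x: 2.0 * x        # placeholder for the full binarized branch
cheap = lambda x: x              # placeholder for the cheap/skipped path
out = sparse_decode(feat, mask, heavy, cheap)
```

Only the masked positions go through `heavy`, mirroring how SAB trades a small amount of low-information computation for decoder efficiency.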
Summary: This paper proposes an efficient solution that utilizes binarization to achieve real-time video matting for devices constrained by computational resources. The proposed BiMatting constructs shrinkable and dense topologies of the binarized encoder block to enhance the extracted representation, while sparsifying the binarized units to reduce low-information decoding computation. Extensive experiments show that it outperforms SOTA binarized video matting methods by a large margin. Strengths: This work claims to be the first binarization solution for video matting tasks, which may provide a new effective solution to the real-time matting community. The work is based on the reasonable analysis and observations shown in Section 3.1, and the proposed method effectively addresses the identified challenge. Moreover, although the performance is lower than that of full-precision models, the proposed method achieves satisfactory results compared with binarized video matting models. Weaknesses: 1. The authors need to check the equations and make sure that all notations are explained. For example, the ⊗ in Eq. (2) is not mentioned in the passage, and |W| also remains unexplained. 2. Some key ideas are not fully verified by experiments. For example, as the authors claim that dense connections are very important to recover performance, this assertion is somewhat unwarranted without a comparison of the proposed SBB against SBB without dense skip connections for each BiConv. 3. Since the matting training phase has 4 stages, each with its own independent training settings, the reviewer doubts whether the matting training phase can work well without an extensive ablation study of different training settings. It would be better to provide experiments demonstrating the effectiveness and robustness of the proposed method under different training settings. Otherwise, the robustness of the proposed method remains in question. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: The authors need to check the equations and make sure that all the notations are explained. For example, the ⊗ in Eq. (2) is not mentioned in the passage, and the |W| also remains unexplained.

**A1**: $|{\mathbf{W}}|$ means taking the absolute value of the weight ${\mathbf{W}}$, and $\otimes$ denotes the inner product of two binary vectors computed with bitwise XNOR and Bitcount operations. We will clarify our notation in the paper.

> **Q2**: Some key ideas are not fully verified by experiments. For example, as the author claims that dense connections are very important to recover the performance, this assertion is somewhat unwarranted without a comparison of the proposed SBB and SBB w/o dense skip connections for each BiConv.

**A2**: Thanks for pointing this out; we followed your suggestion and included more ablation experiments showing that the dense connections of SBB are important for restoring performance. We compare our BiMatting with a binarized model that removes the connections in SBB (BiMatting-NoConn) in *Table 2 of the attached PDF*. We find that after removing the connections, the performance of the binarized video matting model drops significantly, which means that the connections in SBB are indispensable. We also ablate the connection in SBB against 1x1 binarized convolutions (BiMatting-BiConv) and simple summation (BiMatting-SumConn). The results in *Table 2 of the attached PDF* show that neither brings significant improvement, while the binarized convolutions incur additional computation. The results further demonstrate the strengths of our proposed SBB, and we will add these results and the discussion in our final version.

> **Q3**: Since the matting training phase has 4 stages, and each stage has its own independent training settings, the reviewer doubts whether the matting training phase can work well without an extensive ablation study of different training settings. 
It would be better to provide experiments to demonstrate the effectiveness and robustness of the proposed method under different training settings. Otherwise, the robustness of the proposed method remains in question.

**A3**: Please note that our training pipeline completely follows that of the baseline RVM, using the code from their public GitHub repository. We do not add additional training stages or other complications, and we adopt the same stopping conditions as RVM for a fair comparison. We also provide the detailed training pipeline of BiMatting in our General Response and will release our training code in the final version. In addition, we provide the accuracy of the checkpoints at the end of every training stage in Table A3. The results show that the accuracy of BiMatting steadily increases across stages, and it already shows obvious advantages over existing binarized video matting models (RVM-BNN, RVM-DoReFa, RVM-ReCU, and RVM-ReAct) in the first few stages. This indicates that the results of our BiMatting are robust.

Table A3 (*Table 3 of the attached PDF*): Low-resolution comparison on the VM, D646, and AIM datasets for each stage. 
| Dataset | Method | Stage | \#Bit | MAD (Alpha) | MSE (Alpha) | Grad (Alpha) | Conn (Alpha) | dtSSD (Alpha) | MSE (FG) |
|---------|--------|-------|-------|-------------|-------------|--------------|--------------|---------------|----------|
| VM 512x288 | BiMatting (Ours) | 1 | 1 | 15.06 | 8.75 | 2.83 | 1.76 | 2.70 | - |
| VM 512x288 | BiMatting (Ours) | 2 | 1 | 13.50 | 7.02 | 3.32 | 1.52 | 2.69 | - |
| VM 512x288 | BiMatting (Ours) | 3 | 1 | 12.75 | 7.03 | 2.78 | 1.41 | 2.64 | - |
| VM 512x288 | BiMatting (Ours) | 4 | 1 | 12.82 | 6.65 | 2.97 | 1.42 | 2.69 | - |
| D646 512x512 | BiMatting (Ours) | 1 | 1 | 61.52 | 52.10 | 11.70 | 16.21 | 2.59 | 22.30 |
| D646 512x512 | BiMatting (Ours) | 2 | 1 | 82.81 | 73.84 | 12.36 | 21.80 | 2.40 | 24.75 |
| D646 512x512 | BiMatting (Ours) | 3 | 1 | 66.98 | 59.05 | 12.06 | 17.61 | 2.52 | 23.59 |
| D646 512x512 | BiMatting (Ours) | 4 | 1 | 32.74 | 24.48 | 9.34 | 8.62 | 2.21 | 5.86 |
| AIM 512x512 | BiMatting (Ours) | 1 | 1 | 54.26 | 44.24 | 13.31 | 14.30 | 2.41 | 23.40 |
| AIM 512x512 | BiMatting (Ours) | 2 | 1 | 61.19 | 51.44 | 13.99 | 16.11 | 2.21 | 24.12 |
| AIM 512x512 | BiMatting (Ours) | 3 | 1 | 63.19 | 53.88 | 13.69 | 16.59 | 2.30 | 20.23 |
| AIM 512x512 | BiMatting (Ours) | 4 | 1 | 35.17 | 26.53 | 9.42 | 9.24 | 1.82 | 7.00 |

--- Rebuttal Comment 1.1: Comment: I have read the responses and the other reviewers' comments. Although some of my concerns are well addressed, the evidence for the effectiveness of the proposed SBB and SAB is still weak in the original version, and the authors will add more comparison results and explanations. So I tend to accept this paper and keep my original rating score.
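To make the notation explained in A1 concrete — $\otimes$ as an XNOR-plus-Bitcount inner product over binary vectors, scaled by the mean of $|{\mathbf{W}}|$ — here is a minimal NumPy sketch. The ±1 encoding and the mean-$|W|$ scaling follow common BNN practice (e.g., XNOR-Net) and are assumptions, not the paper's exact implementation.

```python
import numpy as np

def binarize(x):
    # sign(x) with the common convention sign(0) = +1 (an assumption here)
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_dot(bw, ba):
    """Inner product of two {-1, +1} vectors via XNOR + Bitcount.

    Mapping -1 -> 0 and +1 -> 1, XNOR counts agreeing positions,
    and the dot product equals 2 * popcount(XNOR) - n.
    """
    agree = int(np.sum(~((bw > 0) ^ (ba > 0))))
    return 2 * agree - len(bw)

w = np.array([0.7, -1.2, 0.3, -0.4])   # toy full-precision weights
a = np.array([1.5, 2.0, -0.2, -0.9])   # toy full-precision activations
bw, ba = binarize(w), binarize(a)

# XNOR-Bitcount reproduces the ordinary dot product of the binarized
# vectors; the mean of |W| rescales it toward the full-precision range.
assert xnor_dot(bw, ba) == int(bw.astype(int) @ ba.astype(int))
alpha = np.abs(w).mean()
approx = alpha * xnor_dot(bw, ba)
```

This is why binarized inference is cheap on hardware: the multiply-accumulate collapses into bitwise XNOR plus a population count, as exploited by libraries such as daBNN mentioned in A4.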
Summary: The paper proposes a new video matting method called BiMatting. It is based on binary neural networks (BNNs), a more compact form of network, to reduce the computational and storage requirements of video matting. Specifically, the authors address the accuracy bottleneck of BNNs by re-designing the encoder and decoder architecture. Performance is evaluated on several video benchmarks and compared with SOTA methods. Strengths: This is the first time BNNs have been used for a video matting application. The problem is well-motivated, and several challenges are addressed by careful architecture design and new training procedures. The authors provide a comprehensive comparison with other SOTA methods and demonstrate that this method offers a good tradeoff between performance and storage. BiMatting is computationally efficient, reducing FLOPs by 11 times and storage by 21 times. Weaknesses: 1. Although this method outperforms existing binarized video matting models, it is not yet on par with its full-precision counterpart in visual quality. 2. Another potential weakness is the complexity of the training pipeline of BiMatting, which contains a pre-training phase and a matting training phase with four stages. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The authors claimed that binarizing the encoder causes a significant drop in performance. Why is that the case? Do the authors mean binarizing the activations of the encoder? 2. Is there a reason why the paper does not have a related work section? I would be curious to see recent works and other applications of BNNs to image segmentation and other relevant fields. It would help me evaluate the novelty and impact of current methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The author addressed the limitation that the method is not yet on par with 32-bit RVM. Quality-wise, it tends to get blurred and prefers simpler backgrounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: Although this method outperforms existing binarized video matting models, it is not yet on par with its full-precision counterpart in visual quality.

**A1**: As we present in our limitations paragraph and figures, BiMatting is not as accurate as full-precision models in certain highly dynamic scenes, due to the representation loss caused by the extreme bit-width compression. Nonetheless, our BiMatting significantly reduces the accuracy gap between binarized video matting models and full-precision ones. Considering that BiMatting is the first binarization model in the video matting domain and has attained impressive gains in acceleration (12.4x) and compression (21.6x), we believe that binarized video matting models hold substantial potential for further accuracy improvements.

> **Q2**: Another potential weakness is the complexity of the training pipeline of BiMatting, which contains a pre-training phase and a matting training phase with four stages.

**A2**: Please note that our training pipeline completely follows that of the baseline RVM for a fair comparison, using the code from their public GitHub repository. We do not add additional training stages or other complications, and we adopt the same stopping conditions as RVM. We also provide the detailed training pipeline of BiMatting in our General Response and will release our training code in the final version.

> **Q3**: The authors claimed that binarizing the encoder causes a significant drop in performance. Why is that the case? Do the authors mean binarizing the activations of the encoder?

**A3**: We clarify here the main reason for the significant performance drop when binarizing the encoder: it is the loss of representation capability induced by the coarse (1-bit) discretization of **both the model weights and activations**, especially when binarizing already lightweight architectures like MobileNetV3. 
In the binarization process, 32-bit weights and activations are compressed to 1-bit. The representation capability and accuracy are therefore greatly reduced (from 2^32 to 2 possible states per weight or activation), which is the direct cause of the performance degradation of the binarized backbone. See Sec 2.1 and 3.1 for further discussion.

> **Q4**: Is there a reason why the paper does not have a related work section? I would be curious to see recent works and other applications of BNNs to image segmentation and other relevant fields. It would help me evaluate the novelty and impact of current methods.

**A4**: We discuss related work in Sec 2, including related work on binarization (Sec 2.1) and video matting (Sec 2.2). We will follow the reviewer's suggestion and add a "Related Work" section title to make the manuscript clearer. We will also add a discussion of more related work on BNNs for image segmentation, as suggested by the reviewer: Group-Net [1] demonstrates successful application to the semantic segmentation task on PASCAL VOC. Frickenstein et al. introduce Binary DAD-Net [2], the first BNN-based semantic segmentation network for drivable area detection in the autonomous driving field. Zhou et al. present CBNN [3], which incorporates multiple subnets with learnable global lateral paths and evaluates its performance on a segmentation dataset. However, the full-precision counterparts of these binarized networks are mostly classical ResNet-18 architectures, so these methods are not applicable to ultra-lightweight architectures such as MobileNetV3, which are susceptible to accuracy collapse when binarized. Moreover, while image segmentation methods ultimately predict a discrete class, alpha matting requires dense, accurate prediction of a continuous alpha value; segmentation methods are therefore not easily transferable to the video matting task. 
Please note that including those related works will not affect the main claims and findings of this paper.

[1] Zhuang B, et al. Structured binary neural networks for accurate image classification and semantic segmentation. CVPR, 2019.

[2] Frickenstein A, et al. Binary DAD-Net: Binarized drivable area detection network for autonomous driving. ICRA, 2020.

[3] Zhou X, et al. Cellular Binary Neural Network for Accurate Image Classification and Semantic Segmentation. IEEE TMM, 2022.
Summary: The authors analyzed the operations inside deep video matting networks and proposed an efficient binarization method that greatly reduces the computational cost. Specifically, they re-designed the encoder and also sparsified the decoding process. The proposed methods are shown to outperform the existing baselines and achieve reasonable visual quality. Strengths: - Section 3 shows that binarization of the encoder brings the most harmful degradation of accuracy, while the current decoder consumes the most computational resources. The analysis and preliminary experiments motivate the proposed methods, and the logic and story-telling of the paper make a lot of sense. - The reviewer likes the reasoning about the encoder (lack of short connections within the blocks) and the decoder (much redundancy in computation), and the connections between the analysis and the proposed designs. - The design of SAB makes sense for many video applications. Weaknesses: - The training pipeline is too complicated. - Table 1 is confusing. Is BiMatting (Ours) the same as RVM (SBB+SAB)? It would be better to state the difference between RVM and BiMatting and show it clearly in the table. Why does RVM + SAB have much worse results than the one without either SBB or SAB? Is it possible that the bad performance is due to bad training, or that a better parameter set than the proposed one is needed? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Section 3.2.2, Eq. (5), the authors propose to use the mean to compute the values for the 'short connection'. How did the authors come up with that solution? Why not use additional convolution layers to reduce the channel number, or take the addition? Have the authors done some comparison? - Any plans to extend the work to other video applications? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The SAB designs may fail when the video is dynamic or has more foreground object movement. - The complicated training procedures and vague stopping conditions for each stage make the results in the paper hard to reproduce. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: The training pipeline is too complicated.

**A1**: Please note that our training pipeline follows that of the baseline RVM for a fair comparison, using the code from their public GitHub repository. We do not add additional training stages or other complications, and we adopt the same stopping conditions as RVM. In addition, we provide the detailed training pipeline of BiMatting in our General Response and will release our training code in the final version. Please see the general response above for more details.

> **Q2a**: Table 1 is confusing. Is BiMatting (Ours) the same as RVM (SBB+SAB)? It would be better to state the difference between RVM and BiMatting and show it clearly in the table.

**A2a**: Yes, BiMatting (ours) is the same as "RVM (SBB+SAB)" in Table 1 of the original manuscript. To make the table clearer, we revise its notation, as shown in Table A2a. In the revised table, the checkmark and crossmark in the SBB column represent applying the proposed SBB architecture and the binarized MobileNetV3, respectively, and the checkmark and crossmark in the SAB column represent applying the SAB technique and direct binarization in the decoder, respectively. We will add this clarification in the next paper revision.

Table A2a (*Table 1 of the attached PDF*): Ablation results of BiMatting on the VM dataset.

| SBB | SAB | \#Bit | \#FLOPs(G) | \#Param(MB) | MAD | MSE | Grad | Conn | dtSSD |
|-----|-----|-------|------------|-------------|-----|-----|------|------|-------|
| - | - | 32 | 4.57 | 14.5 | 6.08 | 1.47 | 0.88 | 0.41 | 1.36 |
| ✕ | ✕ | 1 | 0.55 | 0.64 | 28.49 | 18.16 | 6.80 | 3.74 | 3.64 |
| ✓ | ✕ | 1 | 0.57 | 0.67 | 14.81 | 7.63 | 3.16 | 1.70 | 2.70 |
| ✕ | ✓ | 1 | 0.35 | 0.67 | 189.13 | 184.33 | 15.01 | 27.39 | 3.65 |
| ✓ | ✓ | 1 | 0.37 | 0.67 | 12.82 | 6.65 | 2.97 | 1.44 | 2.69 |

> **Q2b**: Why does RVM + SAB have much worse results than the one without either SBB or SAB? 
Is it possible that the bad performance is due to bad training, or that a better parameter set is needed?

**A2b**: Our SAB reduces the computation of the decoder by masking the repeated, intensive computation of continuous regions, which means that the remaining (unmasked) regions of the feature must provide enough effective information, and these features are provided by the encoder. Fig. 3 shows that when using the original encoder (directly binarized MobileNetV3), even if all other parts are restored to their full-precision counterparts, the performance still drops significantly. Therefore, the collapse is expected when we use the original binarized encoder together with our efficient and lightweight (binarized) SAB decoder. We will clarify this point by revising Sec 4.1. In terms of settings, for fairness, we used exactly the same training settings and pipelines in all ablation experiments, completely consistent with the official code of the RVM baseline and without any adjustments. This helps us reveal the real performance of our proposed techniques under equal settings.

> **Q3**: In Section 3.2.2, Eq. (5), the authors propose to use the mean to compute the values for the 'short connection'. How did the authors come up with that solution? Why not use additional convolution layers to reduce the channel number, or take the addition? Have the authors done some comparison?

**A3**: As concluded from the analysis in Sec 3.2.1, it is important to create a shortcut connection for each binarized convolution. We use the current form of connection for the following accuracy and efficiency reasons: (1) it effectively recovers the representation, and (2) it is constructed from parameter-free operations. 
Regarding accuracy, when we introduce an additional binarized convolution in the connection to change the channel number, this introduced convolution also suffers the representation degradation caused by binarization, which makes it hard to resolve the degradation of the original binarized unit. The results in *Table 2 of the attached PDF* (BiMatting-BiConv) also show that this variant does not bring significant improvement over the no-connection variant (BiMatting-NoConn). We also compared the direct addition method (BiMatting-SumConn) mentioned by the reviewer. Although it avoids the information loss caused by binarization, directly summing the features loses fine-grained representation, making it still inferior to BiMatting in accuracy. Regarding efficiency, even though it is binarized, the convolution introduced in the connection still brings additional computation and storage burden (see *Table 2 of the attached PDF*).

> **Q4**: Any plans to extend the work to other video applications?

**A4**: In this paper, we focus on the video matting task, which is challenging for binarization because the model must predict a continuous alpha value instead of a binary segmentation mask, and which has a wide range of real-time applications on mobile devices. In future work, we plan to apply our proposed methods to speed up other dense tasks such as video segmentation and depth estimation.

> **Q5**: Limitation: The SAB designs may fail when the video is dynamic or has more foreground object movement.

**A5**: As we mentioned in our limitations paragraph, BiMatting does not perform as well as full-precision models in some highly dynamic scenes, due to the representation loss caused by the extreme bit-width compression. 
However, considering BiMatting is the first binarization model in the video matting field and achieves 12.4x acceleration and 21.6x compression gains, we believe the binarized video matting model has significant potential for improving accuracy in future work.
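The parameter-free shortcut defended in A3 — matching channel counts on the skip path by averaging groups of input channels rather than adding a 1x1 binarized convolution — can be sketched as follows. The grouping scheme here is an illustrative assumption, not the exact form of Eq. (5).

```python
import numpy as np

def mean_channel_shortcut(x, out_channels):
    """Parameter-free skip path: average groups of input channels so the
    shortcut matches the block output's channel count, adding no weights
    and no extra binarized convolution (illustrative sketch).

    x: (C_in, H, W) feature map with C_in divisible by out_channels.
    """
    c_in, h, w = x.shape
    assert c_in % out_channels == 0
    group = c_in // out_channels
    return x.reshape(out_channels, group, h, w).mean(axis=1)

x = np.arange(8 * 3 * 3, dtype=np.float32).reshape(8, 3, 3)
y = mean_channel_shortcut(x, 4)   # 8 -> 4 channels with zero parameters
```

Unlike a 1x1 binarized convolution on the skip path, this mean operation itself is never binarized, so it introduces neither extra FLOPs from learned weights nor additional binarization-induced information loss.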
Rebuttal 1: Rebuttal: We deeply appreciate all reviewers for the positive reviews and constructive feedback. All reviewers agree that our BiMatting is highly efficient and contributes significantly to both the video matting and binarization fields. Your expertise and insightful comments greatly help us to further improve our paper.

**Training details**: Here, we first clarify the training details of our BiMatting, as asked by three reviewers (Reviewers PTWg, wDXB, and busB). We want to emphasize that our training pipeline strictly follows the baseline Robust Video Matting (RVM) for a fair comparison, using the code provided in their public GitHub repository. In particular, the training configurations and commands for BiMatting are identical to those outlined in train.py#L8-L75 of the repository, without any modifications. Furthermore, we have made the complete network definition of BiMatting available in the BiMatting_code/model folder in the supplementary materials, making it easy for others to train and reproduce our results. Additionally, since the RVM baseline uses MobileNetV3 pre-trained on the ImageNet dataset by default (from PyTorch official), our SBB backbone of the BiMatting network is equivalently pre-trained on ImageNet. Thus, we do not add any additional training stages or other complications. In the final version, we will release our complete training code and saved checkpoints, further facilitating the training and reproduction of BiMatting. In the responses to each reviewer below, we provide detailed answers to all the questions raised. Pdf: /pdf/4311295743b70bbfa23f422cc1e554fc0d062aca.pdf
NeurIPS_2023_submissions_huggingface
2023
Toolformer: Language Models Can Teach Themselves to Use Tools
Accept (oral)
Summary: The paper explores an interesting area: extending large language models (LLMs) with external tools. The authors show that LLMs can teach themselves to better utilize tools. They test GPT-J on several tools (calculator, QA system, search engine, translator, and calendar). The experimental results well support the claim, and the model even surpasses GPT-3 despite having far fewer parameters. Strengths: + The paper is well-written and easy to follow. + The idea is novel and the supporting experimental results are extensive. + The authors study a very interesting topic which I believe will be impactful in the LLM era. Weaknesses: The experiments are only conducted on GPT-J (a non-instruction-tuned model), which I believe is not enough, considering the existence of more powerful open-source LLMs such as LLaMA and Vicuna. I actually tested that Vicuna, ChatGPT, and GPT-4 already have excellent capabilities in utilizing the tools mentioned in the paper (almost perfect). These models can skillfully manipulate tools based on very simple prompting, achieving far better performance than the numbers reported in this paper (I'm not sure why this paper only includes GPT-3 as the baseline, which apparently performs worse than most current LLMs). Hence I doubt whether the proposed method could still benefit well-tuned SOTA LLMs. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The experiments are only conducted on GPT-J (a non-instruction-tuned model), which I believe is not enough, considering the existence of more powerful open-source LLMs such as LLaMA and Vicuna. We completely agree with the reviewer, but at the time this work was conducted, neither model was available. We are actively working on the experiments with the LLaMA family of models and intend to include those results. > I actually tested that Vicuna, ChatGPT, and GPT-4 already have excellent capabilities in utilizing the tools mentioned in the paper (almost perfect). These models can skillfully manipulate tools based on very simple prompting, achieving far better performance than the numbers reported in this paper (I'm not sure why this paper only includes GPT-3 as the baseline, which apparently performs worse than most current LLMs). Hence I doubt whether the proposed method could still benefit well-tuned SOTA LLMs. It is challenging to respond fully to this question without details about the specific experiment that the reviewer conducted, but it is plausible that models like GPT-4 are capable of utilizing tools when the specific tool description is included in the prompt (and given that the details of this model are not released, it is not implausible that ChatGPT and GPT-4 have already seen tool usage in the pre-training or alignment stage). However, the very act of mentioning the tool potentially hints to the model that it should use a tool, thereby inadvertently simplifying the problem of tool usage. Our approach aims to enable Toolformer to automatically know when to leverage these tools and how. We conjecture that simple prompting is not sufficient when more, possibly redundant, tools become available: verifying this conjecture is part of our future research plans.
Summary: This paper proposes an approach to augment language models with the ability to call "tools" during decoding, such as a calculator, retrieval system, or machine translation system. This requires only a few human-written examples, and then uses the LM to generate a larger fine-tuning dataset constructed from raw text. When fine-tuned on this dataset, and augmented with the ability to execute external tools, performance of the LM is improved for a range of downstream tasks, across various model scales. Strengths: * The paper proposes a relatively elegant way to integrate tools with language models in a way that requires only a limited amount of human-written examples of API calls per tool. The proposed method to synthetically construct the fine-tuning dataset appears to work well in practice. * By showcasing a variety of tools and their impact across a collection of tasks, this paper showcases the potential impact of integrating such tools and their ability to address some common limitations of LMs. The paper seems likely to influence future work. Weaknesses: * I did not find any significant weaknesses in the proposed approach, execution of the experiments, or technical descriptions in the paper. * My only gripe is in the wording of the title claim that LMs can "teach themselves to use tools". I can see what the authors mean, as an LM is used to generate the fine-tuning data, but I don't find this to be a helpful description of the method and I think the paper would read better without this bit of hype. Additionally, for new tools, the approach still requires a prompt, a handful of examples, and heuristics for selecting relevant subsets of a corpus. Anyways, this gripe shouldn't be blocking for publication, and I don't expect the authors to change their selected title. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * The use of a threshold based on the likelihood assigned by the LM with and without the tool use is clever, but I also wonder whether this could be misleading in some cases. For instance, the LM may have been trained (?) on some of the CCNet data, so this may lead to an overly optimistic likelihood without tool usage relative to the optimal tool usage at test time, especially for, e.g., temporally-sensitive facts. Were any such limitations related to the filtering process observed in practice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The use of a threshold based on the likelihood assigned by the LM with and without the tool use is clever, but I also wonder whether this could be misleading in some cases. For instance, the LM may have been trained (?) on some of the CCNet data, so this may lead to an overly optimistic likelihood without tool usage relative to the optimal tool usage at test time, especially for, e.g., temporally-sensitive facts. Were any such limitations related to the filtering process observed in practice? We agree that when the LM has been trained on some data, this may lead to an overly optimistic likelihood without tool usage. We actually don't see this necessarily as a limitation: for some data/knowledge already in the model's weights, it is positive to not call a tool but rely on its weights. Conversely, when the model "knows it doesn't know", which is the case with well-calibrated probabilities, we want it to call the tool. This is quite analogous to humans who would rely on, for example, a calculator for complex calculations. If the training data covers a sufficiently long time span, temporally-sensitive questions will tend to have more uncertain answers and would fall into this category. We agree that more quantitative analysis would be interesting to measure such behavior. --- Rebuttal Comment 1.1: Comment: Thanks for your response! I have read the response and the other reviews and confirm my original rating.
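As a rough illustration of the likelihood-based filter under discussion, here is a minimal sketch. The names `sequence_loss` and `keep_api_call` and the toy probabilities are hypothetical stand-ins, and the actual Toolformer criterion uses a position-weighted loss over the continuation that is simplified away here.

```python
import math

def sequence_loss(lm_probs, tokens):
    """Negative log-likelihood of the continuation tokens under a toy
    conditional model; lm_probs maps token -> probability and stands in
    for a real LM conditioned on a given prefix."""
    return -sum(math.log(lm_probs.get(t, 1e-6)) for t in tokens)

def keep_api_call(loss_without_call, loss_with_call, tau):
    # Keep the sampled API call only if conditioning on the call and its
    # result reduces the loss on the continuation by at least tau.
    return (loss_without_call - loss_with_call) >= tau

# Continuation the LM must predict after a text like "3 * 7 = ".
continuation = ["21"]

# Toy stand-ins: with the calculator result "21" in the prefix, the
# model assigns the correct answer much higher probability.
p_without_call = {"21": 0.05}
p_with_call = {"21": 0.90}

l_no = sequence_loss(p_without_call, continuation)
l_yes = sequence_loss(p_with_call, continuation)
print(keep_api_call(l_no, l_yes, tau=1.0))  # True: this call is kept
```

The reviewer's point maps directly onto `loss_without_call`: for memorized (or stale) facts that value is already low, so the call is filtered out even when the tool would be more reliable at test time.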
Summary: This paper proposes an innovative method for enabling Language Models (LMs) to utilize tools. The authors prompt the LM to generate API calls based on human demonstrations, which are then executed by the corresponding tools. Any non-contributing API calls are filtered out. A dataset is then augmented with these API calls, and used to fine-tune the LM. The Toolformer surpasses larger models in many tasks, offering a significant contribution to the field. Strengths: This paper outlines a remarkably simple yet effective strategy for curating a dataset that empowers LMs to utilize tools. The method is well-explained and detailed, boasting universal applicability across multiple datasets and tools. The authors have carried out extensive, well-designed experiments that showcase the performance boost facilitated by their method. The comparison experiment involving Toolformer, a disabled Toolformer, and GPT-J+CC is particularly commendable, as it eliminates the possibility that the additional fine-tuning data alone accounts for the performance improvement. This research addresses a practical and intriguing topic that is likely to attract considerable interest from both the research and industrial communities. The potential to integrate more sophisticated tools and utilize larger LMs holds promise for advancing LM capabilities. Weaknesses: The proposed method has some limitations. First, there's a dependency on fine-tuning when adapting the LM to new tools, which could impede broad usage and necessitate additional work. Secondly, the use of square brackets for the "<API>" token, without any special escaping mechanism, might present issues when square brackets form part of the original text. Lastly, the MLQA experiment raises a few questions. The performance of OPT(66B) and GPT-3(175B) suffers due to their inability to provide answers in English, suggesting a potentially inappropriate evaluation setting. 
Toolformer also underperforms GPT-J in certain languages, seemingly due to the impact of fine-tuning on CCNet. However, it's unclear why Toolformer lags behind GPT-J+CC in German and Arabic. The MLQA experiment fails to convincingly support the paper's main claims. Additionally, the paper lacks an analysis of cases where the LM fails to use tools effectively during inference. For instance, the reasons behind the LM's failure when using a calculator during inference are not investigated. Is it due to the inability to generate the <API> token or the candidate? Or does it fail to provide the correct answers even when the API call is successful? An examination of these failed cases and the issues causing them would be enlightening. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: To summarize the inquiries raised in the Weaknesses section: Does the use of square brackets as API tokens interfere with the standard usage of square brackets in the text? Why does Toolformer underperform in comparison to GPT-J+CC in German and Arabic in the MLQA experiment? Under what circumstances does Toolformer fail to utilize tools effectively? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The proposed method has some limitations. First, there's a dependency on fine-tuning when adapting the LM to new tools, which could impede broad usage and necessitate additional work. At the time this work was conducted there was no evidence that effective tool use could be achieved purely through in-context learning. Indeed, in the absence of reproducible descriptions of how models such as ChatGPT and GPT-4, which do exhibit tool use capabilities, were trained, one has to entertain the possibility that they too were fine-tuned towards tool use in some way. Our work introduces a simple, effective, and reproducible way to endow a model that had no tool training with the capability to use tools, something that we believe is relevant to anyone wishing to train their own model. > Secondly, the use of square brackets for the "<API>" token, without any special escaping mechanism, might present issues when square brackets form part of the original text. Does the use of square brackets as API tokens interfere with the standard usage of square brackets in the text? This is an understandable concern, however, an API call requires not only the square bracket but also that the specific tool name follows the square bracket (e.g., <API> Calendar() </API>). Consequently, it would be highly unlikely for parts of the original text to be confused with a true API call, but use cases where usage of the square bracket is common would have to consider this issue, which we have added to the Limitations section. > Why does Toolformer underperform in comparison to GPT-J+CC in German and Arabic in the MLQA experiment? Our hypothesis is that the GPT-J+CC model already has some knowledge of other languages so often the model doesn’t need to translate the question into English in order to answer it correctly. In fact, performing the API call can sometimes confuse the model due to the special characters, so the model’s answers may be worse in such cases. 
However, it is not clear why GPT-J+CC performs better for some languages but not others. From our analysis, the model calls the correct API (MT in this case) with appropriate arguments (either the entire question or part of it) most of the time. The relatively poor performance on the MLQA benchmark comes from the inability to answer these questions, even after they've been translated into English. However, this is orthogonal to the main goal of the paper, which is learning when and how to call a certain API, which we believe our model does to a reasonable extent. > Under what circumstances does Toolformer fail to utilize tools effectively? Please see the general response for more details. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. I have no other concerns at this moment.
Summary: This paper proposes a method to finetune pretrained autoregressive language models such that they learn when and how to use external tools to achieve good performance in downstream tasks. Following an in-context learning scheme, humans provide a few examples of inserting API calls at appropriate locations in the natural language sentence, which is used by the trained language model to automatically generate an API-augmented variation of a natural language corpus. These API-augmented sentences are filtered via a threshold over a criterion measuring whether adding the API calls and their result as a prefix improves the perplexity of the natural language sentence under the LM. This filtered API-augmented dataset is used for further finetuning of the pretrained LM so that it learns to insert API calls at appropriate places. This approach, called Toolformer, is compared against using the LM alone, finetuning the LM on the natural language corpus, using Toolformer but artificially suppressing its ability to call an API, and larger general-purpose language models. 5 different APIs are considered in this setting. The model is evaluated on several downstream tasks for which access to the APIs might be beneficial. Strengths: -- The paper is very well motivated. This capability of querying external APIs while generating text is a natural solution to pathologies like hallucination that language models exhibit today. This approach is a step toward endowing a language model with such capabilities reliably. -- The experimental setup is well designed and the choices of APIs, baselines, and downstream tasks to evaluate on lead to informative analysis. -- This approach outperforms baselines convincingly on the downstream tasks while not drastically affecting the language modeling capabilities as measured by perplexity on the held-out set. 
Weaknesses: -- From the writeup, this approach doesn't seem to generate multiple API calls in a sentence and also doesn't perform well with nested API calls. More discussion on this would be useful. -- While it is discussed in the limitation section and Table 2, a more thorough analysis of sample efficiency of this approach would be helpful. How many sentences are enough to learn tool-use functionality? Is it possible to collect enough high-quality API augmented sentences easily? -- Ablation study: performance as a function of filtering threshold that controls the quality of the API-augmented sentences would give more insight into the learnability of tool-use and sensitivity to the "correctness" of the API-augmented dataset. -- A thorough error analysis of failure modes would improve the understanding of the limitations of the proposed approach more clearly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > From the writeup, this approach doesn't seem to generate multiple API calls in a sentence and also doesn't perform well with nested API calls. More discussion on this would be useful. This is currently touched upon in the Limitations section: “API calls for each tool are generated independently; as a consequence, there are no examples of chained tool use in the finetuning dataset.” We will make clearer that “chained tool use” encompasses multiple API calls. > While it is discussed in the limitation section and Table 2, a more thorough analysis of sample efficiency of this approach would be helpful. How many sentences are enough to learn tool-use functionality? Is it possible to collect enough high-quality API augmented sentences easily? Ablation study: performance as a function of filtering threshold that controls the quality of the API-augmented sentences would give more insight into the learnability of tool-use and sensitivity to the "correctness" of the API-augmented dataset. While we agree with the reviewer that it would be valuable to quantify how many examples are necessary for tool-use, this undoubtedly varies according to the specific tool. For some tools like a search engine, this may not need that many samples, but the same is not necessarily true for more complex APIs like scheduling a meeting. Furthermore, finding good opportunities to generate complex examples in a corpus like CCNet poses yet another challenge. The proposed ablation experiments would also require significant compute (at least ten model training ablations for the five tools) and the results may not generalize. > A thorough error analysis of failure modes would improve the understanding of the limitations of the proposed approach more clearly. While we did not conduct an extensive and quantitative error analysis, we do have evidence that suggests failure modes are prevalent with some combinations of task and tool. 
For example, for the DateSet dataset, the Calendar tool is necessary for every example, but we observe that the Toolformer tends to call it less frequently than needed (92.9% of the time). Note however that our evaluation datasets are skewed towards requiring immediate tool use, and that a quantitative error analysis on those would not necessarily shed light on failure patterns in less biased conditions. Additionally, the task of attributing each incorrect answer across all evaluations to a specific failure mode (failing to call the tool, calling the wrong tool, calling the tool incorrectly, receiving a useless or wrong result from the tool, or failing to come to the right conclusion using the tool results) is a major challenge that requires expert human annotation and ideally a broader set of tools. We leave a general study of failure modes across data sources for future work, and instead include in the camera-ready version a discussion of the different types of failure modes and relevant statistics that we are able to report. Please see the general response for more details. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I am keeping my initial score.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and efforts to review, discuss and improve the paper. We have written responses to each reviewer in turn. Two of the reviewers asked for more details on the types of errors we have seen in the evaluation. While a detailed and quantitative classification of each failure for each task and tool would require expert human annotation, we do notice broad trends. Namely, we observed that failure can arise from the following: Failing to call the tool: - Our evaluation and finetuning datasets are distributionally different since the former are all ultra-short-form QA-style tasks while the latter consists of CCNet paragraphs, with only a few tool calls per paragraph. This difference in distribution likely leads to under-use of the tool in our desired setting. We correct for this by triggering a tool use when a start token is in the top 10 tokens for the Toolformer, but this is clearly an example of ‘failing to call the tool’. We note that tools are not called for every question, especially where the answer is strongly in-weights. Calling the wrong tool: - We find Toolformer very often calls an appropriate tool and can often judge the context correctly. In the maths section, we do observe occasional calls to Question Answering and WikiSearch likely because examples using these tools are much more frequent in the fine-tuning dataset than those using the calculator tool. Calling the tool incorrectly: - Many of the tools we use cannot be called ‘incorrectly’, taking either no arguments or strings as inputs - all of which are valid as API calls. In the case of the Calculator tool, at data-augmentation time, we see many incorrect/invalid generations (even with constrained decoding to arithmetic tokens), but since the final dataset contains useful, correct API calls, the fine-tuned model often generates valid calculations. However, these calls are often very simple (often two numbers combined with +, -, /, or *), and have low complexity. 
Receiving the wrong result from the tool or failing to come to the right conclusion - In some cases we see that a tool response is either useless or incorrect - most often this can be seen with the WikiSearch tool which uses a naive BM25 information retrieval algorithm on Wikipedia. - The model’s response to this varies and sometimes ignores the result of the tool, while at other times incorporates it. More investigation is needed to understand _when_ or _why_ a tool is ignored. Anecdotally, we observe that the natural continuation requires a highly specific answer, but the tool has not returned it - for instance “Harry Styles was born in [WikiSearch(Harry styles) -> Harry Styles is an British singer and actor, who has starred in My Policeman] Worcestershire.”
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods
Reject
Summary: The paper presents a convergence (rate) analysis for a variant of policy-gradient learning of tabular-softmax policy models under a finite-horizon MDP setting with discount factor $\gamma=1$. In this variant, the policy parameters are updated in an epoch-by-epoch manner, from the last decision epoch at the horizon back to the first decision epoch. For each epoch, the learning is a vanilla (stochastic) gradient ascent process, with the objective function being the value obtained assuming fixed return in future epochs. It seems to me that most results in the paper are obtained following ideas similar to those in previous theoretical studies of policy gradient under discounted MDP settings (e.g. Agarwal et al., 2021, Mei et al., 2020). Strengths: I appreciate the general motivation of this work which aims at analyzing RL algorithms under the undiscounted setting. In my opinion (and as the authors also pointed out in the paper), many performance bounds of RL algorithms derived under the discounted MDP setting crucially depend on the factor $\frac{1}{1-\gamma}$, thus fail to accurately characterize the performance of RL algorithms under the undiscounted setting. On the other hand, many real-world problems in practice are indeed undiscounted MDP problems. Therefore, directly analyzing RL algorithms under undiscounted MDP settings, or deriving results that do not degenerate when $\gamma$ approaches 1, is a relevant and important research direction. Weaknesses: However, several major concerns make it hard for me to appreciate this particular work. Specifically, 1. The so-called policy gradient algorithm as analyzed in this paper is not really the standard policy gradient algorithm as generally perceived and applied -- the latter updates the parameters of states for all decision epochs simultaneously in each gradient step while the algorithm variant targeted in this paper updates parameters for only one epoch at a time. 
I think the standard co-updating strategy is much more practically relevant. For example, in RL training of language models, the problem is indeed an undiscounted MDP problem, and standard policy gradient methods (with PPO-style updates) are indeed widely used; see the InstructGPT paper (https://arxiv.org/abs/2203.02155) as an example. I thus have concerns about the relevance and significance of the algorithm (not the problem setting) studied in this paper. 2. The epoch-wise updating strategy as assumed in this paper makes most of the analysis a straightforward adaptation of the standard analysis method used in the literature. Moreover, it seems to me that the epoch-wise learning problem analyzed in this paper is just a contextual bandit problem -- the paper assumes the policy $\tilde{\pi}$ for all future epochs is fixed, as well as assuming that the distribution $\mu_h$ for states encountered at the epoch is also fixed, but in this case we are essentially just maximizing an "augmented immediate reward (with future returns counted, which is fixed)" over actions conditioned on a context distribution. Since the contextual bandit problem is a special case of the discounted MDP (with arbitrary $\gamma$), I wonder if most results presented in this paper trivially hold given known results about discounted MDPs. 3. I also have concerns about the authors' interpretation of their mathematical results. For example, the paper claims (as a main contribution) that Theorem 3.8 derives a performance bound linear in the horizon H. However, the bound contains the factor c_h, and it's not clear if c_h itself can be a function of H (or of H-h). c_h can be considered a "constant" only with respect to n, not to H. Moreover, Theorem 3.8 only gives a per-epoch performance bound while previous results give the overall performance. 
My understanding is that when we consider the convergence of epoch 0 (which is what we ultimately care about), we would need to count the time spent on epochs 1 to H-1 too, which would bring another factor of H into the bound? 4. Similarly, throughout the paper it's assumed that the state distribution $\mu_h$ is given for all epochs h, which is not a reasonable assumption in my opinion because except for $\mu_0$, the state distributions for all other epochs actually depend on the algorithm. This is in sharp contrast to the previous papers which only assume that the *initial* state distribution $\mu_0$ is given (which is indeed given). Again, this $\mu_h$ is considered a "constant" when interpreting the bound derived in Theorem 5.2, although $\mu_h$ (for h>0) may heavily depend on structures both in the problem and in the algorithm, in a highly non-trivial manner. 5. The majority of the paper assumes that the reward is nonnegative, which is a bit of a limitation, given that the sign of the rewards can greatly affect the performance of policy gradient. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (A) You argued that "*reducing the problem to stationary policies is inadequate for finite-time MDPs, and a new policy must be trained recursively at each time step*" (Line 50). I am not sure that this assertion is true. The so-called non-stationary policy $\pi$ (defined at Line 104) is exactly a stationary policy over the entire state space (note that you have defined the state space $S$ as the union of the disjoint epoch-wise state spaces; see Line 100). Since the policy $\pi$ at Line 104 is just an ordinary stationary policy over $S$, the standard policy gradient algorithm can be applied, and we don't have to train it recursively. Since this disagreement directly leads to my major concern 1 as stated above, I am open to hear your rebuttal if I am misunderstanding something here. 
(B) Line 154: It seems to me that $J_h$ depends not only on $\theta$ but also on $\mu_h$ and $\tilde{\pi}$. However, the notation here is hiding this fact. This in turn hides some subtle issues in the analysis later. For example, explicitly writing $J_h(\theta, \mu_h, \tilde{\pi})$ would make it clear that in Algorithm 1 the $\mu_h$ is left undetermined. (C) Line 263: What is $C_h$? I originally thought this was a typo for $c_h$, until I found that in Theorem 5.2 both $C_h$ and $c_h$ show up in the definition of $K_h$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: The paper well discussed the limitations of this work in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you very much for your careful and constructive review which we appreciate a lot! **Your practical concern** There is an increasing gap between (extremely) successful practical applications of RL and solid foundations. While the approach of training all policies at once is used in practice (and works), there is limited theoretical understanding. In contrast, our approach seems unnatural but allows a very clean analysis. Following your criticism we tried to fill the gap, and believe we now understand what is happening. The reason we introduce our approach is dynamic programming (=backwards induction for finite MDPs). There is another way to see why the approach makes a lot of sense. The policy gradient theorem (Thm 2.2) shows that estimation errors are traced backwards by the Q-function. If the policy at later epochs is poorly estimated, the policy gradient at earlier epochs is estimated poorly. Already from the gradient representation it is perfectly reasonable to first concentrate on later epochs, then on earlier epochs. We see our approach and the practical approach as two extreme algorithms. Now we argue that, in fact, their convergence properties are very similar. In the main rebuttal we added graphs for both extremal approaches in the very simple MDP problem of optimally stopping when throwing a die 5 times. There both extremal approaches perform almost equally well. Our approach is slightly better if a given accuracy is to be achieved. Why do we think our analysis is important also for the simultaneous algorithm? From our point of view it becomes quite clear how the simultaneous PG approach should be analyzed. Ignoring the training of earlier epochs, later epochs can be handled as we did. At steps suggested by our estimates the focus shifts towards an earlier epoch, and the currently estimated parameter is taken as the new $\theta_h^{(0)}$. 
We have a clear view of how to proceed to reduce simultaneous PG to our backwards PG with those $\theta_h^{(0)}$ but will not be able to finish the analysis in time for this article. Our current analysis shows how many steps are needed in order to achieve a given accuracy: improving the accuracy by one order of magnitude requires increasing the number of time steps by a factor of 10. In fact, the simulation example shows that the estimates are pretty tight. Such an insight extended to the simultaneous algorithm would be very interesting in applications. From a practical point of view we believe that our study can be very beneficial. Algorithms should use replay buffers in a way that first focuses more strongly on later policies instead of using all epochs equally. We also believe that there will be combinations of both extremal algorithms that perform better. For instance, first passing backwards with our algorithm to obtain a reasonably good approximation with relatively few samples, followed by training all policies at once. **Questions** (A) There is indeed a misunderstanding here. We defined the state space as the union of (possibly) time-dependent state spaces but want to point out that they do not need to be disjoint (and typically are equal). Given this setting a non-stationary policy is required such that, given the same state in different epochs, we are allowed to take different optimal actions. For example, in the simple optimal-stopping problem for the die it is clear that the stopping action strongly depends on time. To be more precise, having 5 tries, at time 4 the optimal policy stops if and only if the die shows 4, 5, or 6, whereas at time 3 the optimal policy stops at 5 or 6 only. (B) You are right, thank you. We will change the notation. For your concern about $\mu_h$ please see the later comment. (C) $C_h$ is defined in Lem C.1 and is the bound on the variance of the stochastic estimator of the gradient. We will change the notation of that constant. 
**About $\mu_h$** In cases where starting at any time of the MDP is impossible you can always choose a uniform distribution over the action space to ensure that every possible state in the MDP is reached with positive probability. Hence, choosing a uniform policy until epoch $h$ and just assuming $\mu_0$ strictly positive results in a "start distribution" $\mu_h$ which is strictly positive in every reachable state at $h$. Using this, Thm 5.1 and 5.2 can be straightforwardly generalized to only assuming $\mu_0$ strictly positive. **About contextual bandits** In principle, all finite-time MDP problems could be interpreted as a nested sequence of contextual bandit problems, as dynamic programming suggests the backwards iteration we exploit. Since the dependencies of model parameters in policy gradient estimates (such as Mei2020) are not sufficiently explicit, we believe this point of view probably leads nowhere. Moreover, we emphasize that our main contribution, the analysis of **stochastic** policy gradient, is completely new and also not considered in previous work. **About interpretation of $c_h$** Thank you for pointing out that $c_h$ generally depends on $H$. This is true and, in general, we should not speak of a linear dependence of the error bound. In fact, also in Mei2020 a similar constant depends on $\gamma$ without the dependence being mentioned. Your concern should be investigated in more detail, also in the articles that appeared for discounted MDPs. For the second concern we refer you to Sec 5, where we analyzed the error over time. We indeed observe that an additional dependence on $H$ occurs when considering the whole algorithm. You can find the second $H$ in the rates in Thm 5.1 and 5.2. This leads to an overall dependence of $H^2$ (plus the $c_h$, of course). **About non-negative rewards** Positive rewards are considered w.l.o.g. and are typical in the analysis of policy gradient algorithms.
Using the standard baseline trick for policy gradient theorems, the results can be extended to bounded rewards. We will add a comment in our manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. > the approach of training all policies at once is used in practice (and works) there is limited theoretical understanding The "simultaneous" version of policy gradient is, in my opinion, really the algorithm that most people refer to when talking about "policy gradient", *not only in practice, but also in most theory works*. The algorithm is perhaps not analyzed in the undiscounted setting yet, but this is exactly why we *should* analyze it under this setting, right? I don't see why the fact that "the standard algorithm is not analyzed" motivates us to turn to analyze another algorithm. > our approach seems unnatural but allows a very clean analysis As I mentioned in Concern 2, I suspect that the epoch-by-epoch setting you assume has made the analysis a bit too easy. Given fixed future payoffs, the problem of finding the optimal action *for a single epoch* is (just) a contextual bandit problem for which the performance of policy gradient can be easily derived (because a contextual bandit is a special case of a discounted MDP). So, while I agree that your algorithm setting allows easier analysis, I think this makes your paper weak, actually. > There is indeed a misunderstanding here You are right that I shouldn't say "S_h are *disjoint* subsets", by which I actually mean they are "*separate* subsets", but this correction doesn't touch the main point. My point here is that the simultaneous-update version of policy gradient is also perfectly applicable in your problem setting because the so-called non-stationary policy that you are analyzing is only non-stationary with respect to the epoch-wise state spaces, but *is* a stationary policy with respect to the state space $\mathcal{S}$. > $c_h$ generally depends on $H$ I feel $c_h$ might also crucially depend on $\mu_h$.
A positive $\mu_h$ can only guarantee convergence in the limit, but it feels unlikely that the speed of the convergence is agnostic to the specific form of $\mu_h$. > From our point of view it becomes quite clear how the simultaneous PG approach should be analyzed It is great to hear that you got inspired about analyzing the standard policy gradient. I agree that it would be quite interesting if a paper can show that the epoch-by-epoch variant is better than standard PG, either through systematic experiments or through mathematical analysis. It's interesting especially because I personally and intuitively don't feel it true :) In any case, I think the existing evidence you presented, including the rebuttal PDF, has not really established the above. I encourage you to keep working in this direction, though! --- Reply to Comment 1.1.1: Comment: Thanks for the good discussion! Though it might go a bit far, we carried out the proofs you asked for to assure you that we did not try to take a theory shortcut by not studying the (simultaneous) PG algorithm. > ...interesting if a paper can show that the epoch-by-epoch variant is better than standard PG...especially because I ... don't feel it true :) Challenge accepted. We did the Maths. In short: the provable bounds for backwards PG are indeed more intriguing than those for simultaneous PG. It scales better in $H$ and, even better, the disturbing (and possibly huge) model-based constants $c_h$ can be made to disappear for backwards PG if properly initialized (not for simultaneous PG). This confirms our simple simulation example. **Two algorithms for non-stationary finite-time MDP** a) First, our algorithm. You are right; we now understand your thoughts. It can be interpreted as a concatenation of contextual bandits; this is our Sec 3. Sec 3 is similar to Mei2020 but still needs some extra analysis, e.g.
the crucial dependence on the time horizon $H-h$ (obtained in the smoothness) would not be visible in Mei2020 by seeing a contextual bandit as an MDP with $\gamma=0$. (But please keep in mind, our major contribution is the stochastic case of the later sections!) b) Second, what you call the PG algorithm: artificially use a stationary policy by adding the time coordinate to the state space, then consider an undiscounted stationary MDP with finite time horizon. This leads to a stationary softmax policy with $H |\mathcal{S}| |\mathcal{A}|$ many parameters. We guess this is what you meant; the state space is artificially made disjoint. **The Maths** We did the analysis for simultaneous PG to finally convince you that our viewpoint is indeed worth looking at and that your personal feelings might not be completely true :) The analysis of "standard PG" can be proved with methods similar to Mei2020 (or other recent articles). Calculating the smoothness constant for the artificial undiscounted MDP, then adapting the PL-inequality and putting both together, gives the number of gradient iterations $$N = \frac{2 H^5 R^\ast (2-\frac{1}{|\mathcal{A}|})\,|\mathcal{S}|}{c^2 \epsilon} \Big\lVert \frac{d_\mu^{\pi^\ast}}{\mu}\Big\rVert_\infty^2$$ to achieve an error $V^\ast(\mu) - V^{\hat{\pi}^\ast}(\mu)\leq \epsilon$. The $H^5$ is not surprising; you can compare to known results for discounted MDPs and then keep in mind that $H$ should replace $1/(1-\gamma)$. In fact, a discounted MDP can be seen as an undiscounted one terminated at a $Geo(\gamma)$-distributed time, which has mean $1/(1-\gamma)$. Thus, stopping at $H$ should give $H$ instead of $1/(1-\gamma)$.
In comparison, the total number of training steps from our Thm 5.1 to achieve the same error is $$N= \sum_{h=0}^{H-1} N_h = \sum_{h=0}^{H-1} \frac{4(H-h)H R^*|\mathcal{A}|}{ c_h^2 \epsilon } \Big\lVert \frac{1}{\mu_h}\Big\rVert_\infty.$$ Note that we train $|\mathcal{S}||\mathcal{A}|$ many parameters in each of the $N_h$ steps, such that $N |\mathcal{S}||\mathcal{A}|$ many partial derivatives are considered. The second viewpoint involves $H|\mathcal{S}||\mathcal{A}|$ many parameters in every training step and needs to evaluate $N H |\mathcal{S}||\mathcal{A}|$ many partial derivatives in total. **Comparison** a) the model-dependent constants: Thinking about our backward inductive algorithm as a concatenation of contextual bandits, we now realised that the model-based constant $c_h$ simplifies to $\frac{1}{|\mathcal{A}|}$ **if we initialise the softmax uniformly**. Mei2020 showed this already for bandits (in Prop 2) and this can also be transferred to contextual bandits. This is a bandit feature, i.e. using contextual bandits might not be that stupid after all. For the simultaneous training one loses this advantage and cannot get rid of the unknown $c$. b) The (non)dependence on $c$ is a clear advantage. A disadvantage is, as you already spotted, $\mu_h$. Our approach was motivated partially by optimal stopping approaches such as Becker et al. "Deep optimal stopping", JMLR 20 (2019), where backwards induction appears. In that case (see our simulation) one would start with $\mu_h$ uniform, which yields $$N= \frac{4H^3 R^*|\mathcal{A}|^3 |\mathcal{S}|}{\epsilon }$$ gradient steps, which is pretty nice and a much better guarantee than we have for simultaneous PG. Of course, if one cannot start at later times, then the constant matters just as the constant $c$ matters for simultaneous PG. Perhaps now you believe that there is an important point to the backwards version. We think it is nice (hopefully important) to see clearly how to train different epochs differently.
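To make the comparison concrete, one can plug numbers into the two bounds above (a purely illustrative calculation of our own; $H$, $|\mathcal{S}|$, $|\mathcal{A}|$, $R^*$ and $\epsilon$ are arbitrary, and the unknown constant $c$ and the mismatch factor $\lVert d_\mu^{\pi^\ast}/\mu\rVert_\infty$ are set optimistically to $1/|\mathcal{A}|$ and $|\mathcal{S}|$, respectively):

```python
# Illustrative comparison of the two iteration bounds discussed above.
# All concrete numbers are assumptions for the sake of the example.
H, S, A, R, eps = 5, 6, 2, 6.0, 0.1
c = 1.0 / A           # optimistic value for the unknown constant c
mismatch = S          # optimistic distribution-mismatch factor

# simultaneous PG: N = 2 H^5 R (2 - 1/|A|) |S| / (c^2 eps) * mismatch^2
N_sim = 2 * H**5 * R * (2 - 1 / A) * S / (c**2 * eps) * mismatch**2
# backward PG with uniform init and uniform mu_h: N = 4 H^3 R |A|^3 |S| / eps
N_back = 4 * H**3 * R * A**3 * S / eps

# total number of partial derivatives evaluated in either scheme
evals_sim = N_sim * H * S * A
evals_back = N_back * S * A
print(f"backward: {evals_back:.2e}, simultaneous: {evals_sim:.2e}")
```

With these choices the backward scheme needs far fewer partial-derivative evaluations; of course, the actual gap depends entirely on the unknown constants.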
Analysing simultaneous PG was not the purpose of this article; we focused on extending PG to SPG in the spirit of the usual extension of GD to SGD. We feel a conference contribution with the analysis of a new PG scheme, standard PG, and REINFORCE might be a bit heavy, but we are happy to add the proofs for simultaneous PG to the appendix.
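For illustration, the backward, epoch-by-epoch scheme discussed in this thread can be sketched in a few lines on the die-stopping MDP with exact gradients (our own toy code; the step size and iteration counts are arbitrary choices, and the last epoch is forced to stop):

```python
import math

# Exact backward (epoch-by-epoch) policy gradient for the die-stopping MDP
# with H = 5 throws (our own illustrative sketch, not the paper's code).
# State at epoch h: the face currently shown; actions: 0 = stop, 1 = continue.
H, faces = 5, range(1, 7)
theta = {h: {s: [0.0, 0.0] for s in faces} for h in range(H)}

def pi(h, s):
    e = [math.exp(t) for t in theta[h][s]]
    return [x / sum(e) for x in e]

V = {H - 1: {s: float(s) for s in faces}}     # last epoch: must stop
for h in range(H - 2, -1, -1):                # train epochs backwards
    EV = sum(V[h + 1].values()) / 6           # value of continuing
    for _ in range(3000):                     # exact-gradient ascent steps
        for s in faces:
            p = pi(h, s)
            Q = [float(s), EV]
            J = p[0] * Q[0] + p[1] * Q[1]
            for a in (0, 1):                  # softmax gradient: p_a (Q_a - J)
                theta[h][s][a] += 0.1 * p[a] * (Q[a] - J)
    V[h] = {s: pi(h, s)[0] * s + pi(h, s)[1] * EV for s in faces}

stop = {h: {s for s in faces if pi(h, s)[0] > 0.5} for h in range(H - 1)}
print(stop[3])   # {4, 5, 6}: stop on high faces when one throw remains
```

The learned argmax decisions recover the optimal stopping sets, and the trained value of the second-to-last epoch comes out close to the optimal 4.25.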
Summary: This paper proves asymptotic convergence and a convergence rate of (stochastic) policy gradient descent to the global optimum for undiscounted finite-time Markov decision processes (MDPs). For the deterministic version, at each decision time, they show the error bound depends linearly on the remaining time steps. For the stochastic version, they derived probability bounds. They extend their analysis to REINFORCE on discounted MDPs. Strengths: The presented results are sound and complete. Both convergence and convergence rate of policy gradient methods with softmax policy are established for finite-time MDPs. Weaknesses: The main argument seems to be based on Mei et al. (2020) with some modifications for the undiscounted finite-time MDP case. I did not go over the details of the proofs in the appendix. It is hard to evaluate the significance and the novelty of the contributions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you very much for taking the time to review our article and the overall positive feedback! In order to address your concerns about significance and novelty, in the following we explain in more detail the two main contributions of this article. One contribution, as you observed correctly, is in the spirit of Mei2020. However, let us emphasize that the main contribution is a novel analysis of the stochastic policy gradient method. **1. Contributions in the deterministic setting** The first contribution is a dynamic programming inspired version of policy gradient descent for finite-time MDPs. In fact, in practice PG is sometimes used differently, tuning all policies at once without taking into account that training errors of later times propagate backwards according to the policy gradient formula for finite-time MDPs (see Thm 2.2). It seems that our analysis is the first to give a rigorous analysis in this direction. The analysis of algorithms more common in practice will most likely be a reduction to our results; we give further discussion in the main rebuttal. You are right that our analysis in the deterministic case is based on Mei2020. Please note again that the analysis in Mei2020 crucially depends on the factor $\frac{1}{1-\gamma}$ and fails to transfer straightforwardly to undiscounted settings by choosing $\gamma = 1$, or to cases where $\gamma$ approaches $1$. **2. Contributions in the stochastic setting** The second main contribution is completely novel. We show how to use techniques from stochastic approximation theory, in particular from the analysis of stochastic gradient descent methods, to extend the deterministic case to the sample-based stochastic case.
In contrast to the deterministic policy gradient analyzed by Mei2020 and others (under the strong assumption of exact knowledge of gradients), the stochastic policy gradient is very different from stochastic gradient descent schemes. The reason is that the samples are not iid, and the distributions of the samples change in every step of the iteration. Proving that stochastic PG works (in both undiscounted and discounted settings) is the main contribution of this article. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarification. I will keep my score.
Summary: This paper studies the convergence properties of stochastic policy gradient methods for finite-state MDPs in finite-horizon problems with undiscounted optimality criteria. The convergence relies on the development of a weak PL condition. In the second part of the paper, the authors then extend their convergence analysis to the setting where the policy gradient is not available exactly and only a stochastic version of it can be used. Strengths: The paper is well written and easy to follow. In particular, I find Section 4, where the convergence analysis is generalized to the stochastic setting, strong and practically useful. Proofs are derived very clearly and in detail, which I appreciate. Weaknesses: 1) Numerical example missing: In my opinion, it would be interesting to include a numerical example in the paper. In particular it would be interesting to see how tight the complexity bounds of Theorem 4.4 are on a concrete example. 2) Finite state/action spaces are often restrictive: What can you say about the results being generalised to continuous state and action spaces? Maybe for a linear system to start with. 3) How relevant are undiscounted problems? Could you add a detailed motivation for why these problems are practically relevant? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) Could you clarify better which results are new and which are existing results. For example, I don't understand if Theorem 2.2 is an existing result or if it is new. 2) In Equation (5), I don't understand the dimension of $\theta$. I think there is something wrong here. 3) In Lemma 2.1, why is the superscript of the expectation operator $\pi_{(h)}$ and not $\pi$ ? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Without Section 4, the work would have been limited as the gradients typically are not known exactly. But thanks to the results in Section 4, I think the paper is rather complete and I do not see major limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you very much for taking the time to review our article and the overall positive feedback! **1. Numerical examples** In the main rebuttal we added graphs for the deterministic analysis in the very simple MDP problem of optimally stopping when throwing a die 5 times. In this very (!) simple example we can see the $\frac{1}{n}$ convergence of Theorem 3.8. We will not be able to carry out a full analysis for the stochastic setting in time. Unfortunately, we also do not expect the complexity analysis in Theorem 4.4 to be tight. This is due to the fixed step sizes and the resulting sufficiently large sample sizes needed to guarantee $c_h>0$. Let us mention that general stochastic gradient methods, under the imposed assumption of (weak) PL-type, are not well understood yet. Improving this result is ongoing work. **2. Finite state/action spaces** The analysis presented in this paper relies on the tabular softmax parametrization. This tabular structure is exploited to obtain the PL-inequality which ensures the gradient domination property. Therefore, we cannot expect a straightforward extension to continuous state/action spaces. Nevertheless, there are recent works on specific policy gradient algorithms for the stationary discounted MDP problem (see for example *"Stochastic Policy Gradient Methods: Improved Sample Complexity for Fisher-non-degenerate Policies", I. Fatkhullin et al. (2023)*), where a different policy parametrisation fulfills similar properties on continuous sets. Transferring these to the non-stationary case would be an interesting future research direction. **3. Undiscounted problems** Many real-world problems in practice are indeed undiscounted MDPs. A few specific examples are given in the following: 1. Training of language models is typically done using policy gradient. This is also an undiscounted MDP problem. 2. Project Scheduling: An agent schedules tasks to complete a project in finite time.
The goal is to minimize the total cost of the project while ensuring all tasks are completed on time. 3. Agricultural Resource Allocation: An agent decides how to allocate resources (e.g., water, fertilizer, labor) to different crops over a growing season to maximize overall yield while minimizing resource usage. 4. Online Advertisement: An agent allocates a fixed budget to different campaigns over a specific period. The goal is to maximize the total number of clicks or conversions within the budget constraint. We will add a brief discussion in the introduction of the manuscript. **4. Clarification of new results** Due to the specific non-stationary policy setting, many well-known results from general MDPs appear in a different version than usually known for stationary policies. For example, in Theorem 2.2 we show the policy gradient theorem for the specific non-stationary setting. To the best of our knowledge this version is stated nowhere else, even though the proof idea is similar to the stationary policy version in finite time horizons and therefore the result is no surprise. Still, all results stated in this paper are novel and whenever there exists a similar version in the stationary setting, we pointed that out in the text before or after the results. **5. Dimension of $\theta$** As in almost all recent articles on the subject, we consider the tabular softmax parametrization such that one parameter is used for every state-action pair. Due to the non-stationary policy, we train a different set of parameters in every decision epoch of the MDP. Thus, the dimension of $\theta$ for time-point $h$ equals the number of state-action pairs in this epoch. As we allow the states to differ between epochs (which does not need to be the case) and the number of actions to depend on the current state, the number of state-action pairs for epoch $h$ is given by $\sum_{s\in\mathcal{S}_h} |\mathcal{A}_s|$.
In settings where $\mathcal{A}_s = \mathcal{A}$ for all states and $\mathcal{S}_h =\mathcal{S}$ for all epochs, this simplifies to a dimension of $d_h = d = |\mathcal{S}|\cdot|\mathcal{A}|$ for all epochs. **6. Notation Lemma 2.1** The notation $\pi_{(h)}$ stands for a non-stationary policy from time-point $h$ to $H-1$ and is defined in line 105. Especially in Lemma 2.1, where we start in epoch $h$ with $S_h = s$, we used this notation to point out that there is no dependency on earlier policies. Still, as we dropped the subscript $(h)$ in the policy of the value and advantage function, you are right that the notation in this line is inconsistent. We will go over these cases throughout the manuscript. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I would like to thank the authors for the detailed answers. I will keep my score.
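As a small illustration of the dimension count in point 5 above (our own sketch; the state and action names and sizes are made up), the tabular non-stationary softmax policy carries one parameter per (epoch, state, action) triple:

```python
import math

# Tabular softmax parametrization of a non-stationary policy:
# one parameter per (epoch, state, action) triple, as described above.
H = 3
states = ["s0", "s1"]                                      # here S_h = S for all epochs
actions = {"s0": ["a0", "a1"], "s1": ["a0", "a1", "a2"]}   # A_s may vary per state

theta = {h: {s: {a: 0.0 for a in actions[s]} for s in states} for h in range(H)}

def pi(h, s):
    """Softmax policy at epoch h in state s."""
    tot = sum(math.exp(v) for v in theta[h][s].values())
    return {a: math.exp(theta[h][s][a]) / tot for a in theta[h][s]}

d_h = sum(len(actions[s]) for s in states)   # parameters per epoch
print(d_h)        # 5 = |A_{s0}| + |A_{s1}|
print(H * d_h)    # 15 parameters in total for the non-stationary policy
```

With all parameters initialized to zero, every `pi(h, s)` is uniform, which is exactly the uniform softmax initialization discussed elsewhere in the rebuttal.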
Summary: The paper analyzes the convergence of non-stationary softmax policy gradient methods where the policy is parameterized by different parameters at each decision epoch. Convergence results for the REINFORCE algorithm are provided in the undiscounted finite-time and the infinite-horizon case. Strengths: - The paper obtains new global convergence results for the softmax policy under undiscounted finite-time and infinite-horizon MDPs. The assumptions used are standard in the policy gradient literature. - The analyses in the paper also cover policies parameterized by deep neural networks in deep reinforcement learning. - The paper is well-written and easy to follow. Weaknesses: - My major concern is about the factor $c_h$ in Theorem 3.8. The authors use it as a constant but I argue it is not. Since $c_h$ is the infimum over all iterations of the minimum output of the softmax policy, it is not known and cannot be computed beforehand. In fact, it depends on the choice of the number of iterations $n$ and the current parameter $\theta_n$. More discussion is needed in this case. - The results in Theorem 3.8 and Theorem 4.4 are valid for the exact policy gradient, which is rather restrictive and impractical. I believe having results in the stochastic case using the stochastic policy gradient would strengthen the contribution of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The factor $c_h$ can be very small (if the policy is well-trained). Can we impose certain conditions to overcome this? 2. I wonder if the results can be extended to other classes of policy gradient methods, such as ones with different policy gradient estimators. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I do not find a clear discussion of the limitations of the work, but I find the use of exact gradients to be the main limitation for the key results in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you very much for taking the time to review our article! **1. Limitation to exact gradients: not true** It is correct that access to exact gradients is a restrictive assumption for practical applications. However, the article has two main contributions. Firstly, we extend the analysis for policy gradient methods (with exact gradients) from discounted MDPs (Mei2020) to the dynamic programming inspired finite-time policy gradient method for undiscounted MDPs. Secondly, we show how to overcome the limitation of knowledge of exact gradients, which you have pointed out. To be more precise, we show, in both the finite-time and the discounted infinite-time case, how to analyse the stochastic version when the exact gradients are replaced by samples (the REINFORCE algorithm). You can find the results on removing the limitation of exact gradients in Section 4, Theorem 5.2 and Section 6. We will present a more detailed discussion of our main contributions in the introduction of our manuscript. **2. Constant $c_h$** As you observed correctly, the factor $c_h$ is the infimum over all iterations of a minimum over all states, but of the probability of taking the **best action** $a^\ast(s)$ in these states. Hence, training a perfect policy is not a problem concerning this factor, as the probability of choosing the best action would then be very high. We also emphasize that $c_h$ is indeed a positive constant with respect to the number of iterations $n$. This result is formulated in Lemma 3.7. We will make this more precise in the manuscript. **3. Extension to other classes of policy gradient methods** Using different policy gradient methods like natural gradient or regularized policy gradient requires a new derivation of the PL-inequality and is not within the scope of our article.
There are some results regarding these methods for discounted MDPs and, as mentioned in the conclusion, considering these algorithms in the context of finite-time MDPs is an interesting future research direction. On the other hand, considering different types of stochastic policy gradient estimators to approximate the exact gradient (different from the one we introduced in (8)) would indeed be possible, as long as they are unbiased with bounded variance and the variance can be driven to zero by increasing the batch size. In particular, it would be interesting to consider variance-reduced gradient estimators in the spirit of stochastic gradient descent methods for finite sums, such as stochastic average gradient (SAG), stochastic average gradient amélioré (SAGA) or stochastic variance reduced gradient (SVRG) methods.
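The unbiasedness requirement just mentioned can be illustrated on a toy single-state example (our own sketch of a score-function/REINFORCE-style estimator, not the estimator (8) from the paper): averaged over a large batch, the sampled gradient matches the exact softmax policy gradient:

```python
import math, random

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    s = sum(e)
    return [x / s for x in e]

def exact_grad(theta, r):
    """Exact gradient of J(theta) = sum_a pi(a) r(a): component a is pi(a)(r(a) - J)."""
    p = softmax(theta)
    J = sum(pi * ri for pi, ri in zip(p, r))
    return [pi * (ri - J) for pi, ri in zip(p, r)]

def reinforce_grad(theta, r, batch, rng):
    """Score-function estimate: average r(a) * (1{i = a} - pi(i)) over sampled actions a."""
    p = softmax(theta)
    g = [0.0] * len(theta)
    for _ in range(batch):
        a = rng.choices(range(len(p)), weights=p)[0]   # sample action from pi
        for i in range(len(theta)):
            g[i] += r[a] * ((1.0 if i == a else 0.0) - p[i]) / batch
    return g

rng = random.Random(0)
theta, r = [0.0, 0.5], [1.0, 3.0]
print(exact_grad(theta, r))
print(reinforce_grad(theta, r, 20000, rng))   # close to exact for a large batch
```

Any estimator of this kind with bounded variance would fit the argument; the batch size controls how closely the empirical average tracks the exact gradient.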
Rebuttal 1: Rebuttal: Dear reviewers, thank you all very much for your reviews which we appreciate a lot! **1. Stochastic vs. deterministic** Please note that there are two contributions of the article. We will improve the abstract to make this more clear. We extend Mei2020 to the finite case and, most importantly, show that the **stochastic policy gradient** (sample-based) approach is meaningful. This also covers the famous REINFORCE algorithm for discounted MDPs with softmax parametrization. The second part is a non-trivial extension of the fact that, under a PL inequality in weak form, GD (resp. PG) can be extended to SGD (resp. SPG) as long as sufficient sample sizes are used. This extension is in itself non-trivial and, to the best of our knowledge, not yet known in the literature on SGD. Moreover, since, in contrast to SGD, the samples in stochastic policy gradient are not iid, it is somewhat surprising that the arguments can be extended. **2. Numerical examples** As some of the referees asked for numerical validations, we enclosed a numerical toy example of a very simple MDP problem of optimally stopping when throwing a die 5 times. The example was chosen as this is one of the only non-trivial examples that we know of for which exact policy gradients can be computed. The simulations show that the theoretical results (in the exact gradient setup) are sharp up to constants. Figure 1 in the attached pdf shows a log-log plot to visualize the $\frac{1}{n}$ convergence rate for the deterministic time-dependent policy gradient algorithm proven in Theorem 3.8. Here, the magenta dotted line is a plot of the upper bound $\frac{4(H-h)R^* |\mathcal{A}|}{c_h^2 n}$ as a function of $n$, and the red line visualizes the difference $J_h^\ast - J_h(\theta_n)$, also as a function of $n$. The constant gap between the two lines shows that our rate is sharp up to constants. In Figure 2 (b) we visualize a similar log-log plot for Thm 5.1.
A detailed description of the plot is given in the caption and we will analyze the figure in the following section. **3. Comparison to a different algorithm** One of you suggested that a comparison to the policy gradient algorithm where the parameters of different epochs are updated simultaneously (instead of backwards one by one) would be interesting. That algorithm is typically used in practice, without theoretical backup. Indeed, it turned out that this is a very interesting question; it seems that our estimates can be used to analyse the simultaneous update algorithm as well. In Figure 2 (a) and (b) we plotted different versions of our algorithm, with different target accuracies $\varepsilon$, and compare to the simultaneous update algorithm. We refer you to the captions in order to understand the following take-aways. What do we learn from these simulations? The funny curves in Fig 2 (a) reflect the epoch-by-epoch training scheme of our algorithm. The curves always move up fast when the training of a new policy is started. The simultaneously trained algorithm does not have that feature; it improves fast at the beginning but takes more time to get close to the optimal value. 1. The blue lines from our backward induction algorithm perform slightly better than the magenta line from simultaneous training (and come with theoretical error guarantees). For a given accuracy $\varepsilon$ the backwards trained algorithm is always a few gradient steps faster than the simultaneously trained algorithm up to the same accuracy. This can especially be seen in Figure 2 (b), as the dotted blue line is slightly below the dashed magenta line. 2. Given a fixed number of overall gradient steps (summed over all epochs), the backward inductive approach is more accurate than the simultaneous approach if $\varepsilon$ is chosen accordingly. 3.
The dot-dashed green line with the same number of updates in each epoch performs much worse than the epoch-dependent updates suggested by our theoretical analysis. Epochs should not be treated equally but according to the choice in Thm 5.1! The policy gradient theorem (Thm 2.2) tells us that estimation errors are pushed backwards through the estimated reward-to-go (Q-function). Thus, errors at later epochs imply errors at earlier epochs. What we see is not surprising. Our algorithm first optimizes the late policies (reducing errors that are pushed backwards) but does not optimize earlier epochs at the beginning. Thus, the algorithm must be poor at the beginning but has the chance to be accurate once all epochs are trained. The simultaneous algorithm first improves all policies faster but then becomes weaker, as there is less accuracy for late policies and the errors are automatically pushed towards all other epochs via the policy gradient theorem. We believe that there is even a simple take-away for practitioners. If you can, first train the later epochs more accurately, then focus on the early epochs. **4. Resulting interesting future research direction** Training backwards vs. simultaneously are two extremes. We are quite certain that combinations of both will be the way to go in the stochastic setting. Of course, it will depend on the actual situation and how roll-outs must/can be sampled. For instance, in offline training with much data available this might result in a more efficient use of computational power. We are convinced that a theoretical analysis of the simultaneous and mixed approaches is possible, reducing to the arguments of this paper. After all, the reason for convergence is the backwards training; early training of other epochs only helps. Pdf: /pdf/f73a82521e122a8ba581ef5902d9b114adf23447.pdf
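The $\frac{1}{n}$ behaviour visualized in Figure 1 can also be reproduced in an even smaller setting (our own sketch: exact softmax policy gradient on a two-armed bandit, in the spirit of Mei2020, instead of the die MDP): the optimality gap shrinks roughly tenfold when the number of gradient steps grows tenfold:

```python
import math

# Exact softmax policy gradient on a two-armed bandit with rewards r = (1, 0).
# The optimality gap max(r) - J(theta_n) decays like 1/n (illustrative sketch;
# the step size eta = 0.4 and the checkpoints are arbitrary choices).
r = (1.0, 0.0)
theta = [0.0, 0.0]
eta = 0.4
gaps = {}
for n in range(1, 20001):
    e = [math.exp(t) for t in theta]
    p = [x / sum(e) for x in e]
    J = p[0] * r[0] + p[1] * r[1]
    for a in range(2):                      # exact gradient: p_a (r_a - J)
        theta[a] += eta * p[a] * (r[a] - J)
    if n in (2000, 20000):
        gaps[n] = max(r) - J
print(gaps[2000], gaps[20000])   # the second gap is roughly ten times smaller
```

On a log-log plot of gap versus $n$, this produces the straight line of slope $-1$ that the figure in the rebuttal PDF shows for the die MDP.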
NeurIPS_2023_submissions_huggingface
2023
Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities
Accept (poster)
Summary: The paper focuses on decoding visual stimuli from recorded fMRI activity. The proposed method initially pre-trains an fMRI feature learner (and essentially a signal denoiser) on unlabeled data using a contrastive training scheme resembling masked autoencoders. Subsequently, this feature encoder is finetuned using guidance from an image auto-encoder to attend to signal patterns that are informative for reconstruction. The output of this fMRI feature learner is then used to condition a latent diffusion model to visually reconstruct the stimuli in the image domain at high resolution. Strengths: The paper tackles a very interesting problem with nice, visually illustrated results. The proposed idea of performing neural signal guidance in the latent space such that a generative latent diffusion model can be used for high-resolution image synthesis is quite a neat approach to solving this problem. Weaknesses: There is no higher-level justification or discussion present in the paper on why one would/should use fMRI for this problem (as opposed to other alternatives) towards building a non-invasive BMI. Moreover, from a technical perspective, the paper only puts together several existing methods, and some technical details appear to be mixed up in the descriptions. These should be corrected and some clarifications are still necessary in terms of the presentation of results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Motivation of the study from a non-invasive brain-machine interface perspective is somewhat disputable, since the neurophysiological source signal considered here is fMRI. Can the authors extensively discuss and justify to the reader why their choice of fMRI (with low temporal resolution and longer temporal scales of responses, noisy nature, and hard-to-integrate experimental recording/design) would be a better candidate than other alternatives (e.g., EEG, or (partly-)invasive methods) to tackle this problem?
- Figure 2 with the "VQGAN" does not make sense. Starting from the last paragraph of page 5 and then in Figure 2, the paper talks about a VQGAN, whereas the LDM that the authors use should certainly be a pre-trained VQ-VAE with an encoder-decoder pair? Also, based on the notation in Eq 9 and line 193, I presume the VQ-VAE encoder should be E_g rather than E_G on line 191? - The description of Phase 1 has some ambiguities. The beginning of Sec 3.2 was understandable. Then line 140 starts to talk about an unmasked original "image" at some point, whereas the fMRI inputs had not been denoted as "images" so far. The narrative can be made more consistent. Perhaps in Section 4.1 or 4.2, the authors should also briefly describe the image-like fMRI input data representation scheme in a few words. - The presentation of quantitative results is also ambiguous. Figure 3 is not discussed in the text, and the achieved accuracies are not readable from the figure. The ablation studies from Section 5.2 are completely separated from the main quantitative result of the paper that is written in the Abstract: 39.34% in 50-way-top-1 semantic classification accuracy. In Section 5.1 or 5.2, it is not clear where this number really comes from. In general, the paper should revise its presentation of quantitative evaluations. - Methodological comparisons to DC-LDM should be elaborated a bit more. Does this method only differ in the use of a contrastive pre-training phase, or are the encoder network architectures of the 24-layer FRL also different? Can the authors clarify the differences a bit more in depth on how significant their contributions are? - The authors do not explicitly state this, but is the general idea of contrastively using a masked autoencoder scheme for denoising fMRI data a novel contribution of the paper? If not, can the authors refer to similar works and discuss that it is a powerful approach that is adopted for pre-training in their model?
Minor comments: - It appears that on no occasion does an fMRI-conditioned reconstruction come out directly identical to the ground truth, and hence it would probably be better/more realistic in Figure 2c to use a different airplane image as the generated one. - What is the "CMA" block indicating in Figure 1? Is it the cross-attention module? - In Figure 1, the dotted background pattern prevents the equation symbols/text from being easily read. Perhaps this figure text/equations can be re-generated in a more visible way. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for all the constructive feedback. We answer your questions and comments as follows. (Due to the rebuttal length limit, we summarize your questions.) Q1: Why is fMRI a better choice over alternatives such as EEG for this problem? A1: Primarily, fMRI offers significantly higher spatial resolution than EEG. This allows us to capture detailed activations within the visual cortex and related brain areas, a beneficial factor when reconstructing visual images from brain activity. Moreover, the presented stimuli in the dataset are all static images, making fMRI's limitation in temporal resolution less concerning. When working with static images, the prolonged temporal scales of fMRI responses provide ample time for the brain to process the stimuli and for the hemodynamic response to unfold. Therefore, considering that both fMRI and EEG have inherent noise, fMRI's strength in spatial resolution and the nature of our stimuli make it a more fitting choice than EEG in this work. fMRI is also used in a lot of related work published in top-tier journals, such as Nature [1] and Science [2]. (All the following works have been cited in our paper.) [1] K. N. Kay, et al., “Identifying natural images from human brain activity,” Nature, vol. 452, pp. 352–355, 2008. [2] T. Horikawa et al., “Neural decoding of visual imagery during sleep,” Science, vol. 340, pp. 639–642, 2013. Q2: Why does Figure 2 reference a "VQGAN" when the LDM used a pre-trained VQ-VAE? Based on Eq 9, should the notation for the VQ-VAE encoder be E_g instead of E_G? A2: The VQGAN, at its core, can indeed be perceived as a VQ-VAE optimized with a GAN loss. However, as corroborated by the LDM's original paper [3], the model we adopted specifically leverages a VQGAN. In section 3.2 of the LDM's original paper, it's explicitly stated in the last paragraph, “This model can be interpreted as a VQGAN...”.
The open-sourced code of the LDM provides further evidence of the discriminator loss implementation. As for the notations, thanks for pointing this out, E_g should be used to denote the VQGAN encoder. We will correct the notation in line 193 in our revised manuscript. [3] R. Rombach, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on CVPR. 2022. (ref. 24 in our paper) Q3: About making consistent references to fMRI inputs and adding a brief description of the fMRI representation scheme. A3: Thanks for the suggestions. As a kind of neuro-imaging, one sample of fMRI recordings can be called an image. But we acknowledge that it might be a little confusing to differentiate it from the images of visual stimuli. We will follow your suggestions and unify the naming of fMRI data as fMRI samples in the revision. As for the image-like fMRI representation scheme, we divide the vectorized voxels into patches, which are subsequently transformed into embeddings using a 1D convolutional layer with a stride equal to the patch size (16). We will add these descriptions in Section 4.1. Q4: Is Figure 3 discussed in the text? How do the ablation studies from Section 5.2 relate to the main quantitative result mentioned in the Abstract? How is the 39.34% accuracy in the Abstract calculated? A4: Thanks for the questions. 1. We've indeed referenced Figure 3 within the text. Specifically, Figure 3[a,b,c] is elaborated upon in lines 244-248 and 250-252 of the Results Section 5.1. 2. Our ablation studies in Section 5.2 align with the main quantitative outcomes. Specifically, the concluding line of our ablation table (Table 2) indicates the hyperparameter setting yielding the 25.080 performance as illustrated in Figure 3[a]. 3. The 39.34% mentioned in the abstract denotes a relative improvement, not an absolute accuracy.
This is derived from the formula: (25.080−17.999)/17.999×100%=39.34%, comparing our model's accuracy (25.080) against DC-LDM's accuracy (17.999). We will directly annotate Figure 3[a] with our accuracy value in the revision for better transparency. Q5: Does the method's distinction lie solely in the contrastive pre-training phase, or are there other differences in the FRL? How significant are the authors' contributions in relation to these differences? A5: Thanks for the questions. First, our model's Phase 1 indeed differs from the simple masked auto-encoder (MAE) utilized by DC-LDM by employing a novel double-contrastive MAE. Second, exclusive to our approach is Phase 2, characterized by our Cross Modality Guidance. As elaborated in section 3.2 and illustrated in Figure 1, this phase directs the fMRI encoder towards features crucial for image reconstruction—a facet absent in DC-LDM. Our ablation study, specifically in section 5.2.1, delves into the contributions of these mechanisms to the final performance. We'll ensure these distinctions are more pronounced in the revision to underscore the novelty of our approach. Q6: Is the general idea of contrastively using a masked autoencoder scheme for denoising fMRI data a novel contribution of the paper? If not, can the authors refer to similar works and discuss that it is a powerful approach that is adopted for pre-training in their model? A6: Indeed, our adoption of the contrastive masked autoencoder (MAE) scheme specifically for fMRI data denoising represents a novel contribution to neuroimaging. While the synergy of contrastive learning and MAEs has been explored in other domains, its application to fMRI is distinct, given the domain's inherent challenges like high dimensionality and pronounced noise levels. We've substantiated the efficacy of this approach in sections 5.1 (results) and 5.2 (ablation study). In light of your feedback, we'll emphasize this innovation more prominently in our revised manuscript.
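The relative-improvement figure quoted in A4 can be reproduced in a couple of lines (the accuracy values are taken directly from the rebuttal; the variable names are illustrative):

```python
# Relative improvement of the proposed model's accuracy (25.080) over
# DC-LDM's (17.999), reproducing the 39.34% quoted in A4.
ours, baseline = 25.080, 17.999
rel_improvement = (ours - baseline) / baseline * 100
print(f"{rel_improvement:.2f}%")  # → 39.34%
```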
Response to minor comments: Yes, we will use a different plane image in Fig. 2. And the CMA block indeed refers to the cross-attention module. We will remove the dotted background in Fig. 1 in the revision. --- Rebuttal Comment 1.1: Title: response to the rebuttal Comment: Thanks to the authors for their responses. I have examined all reviews and responses carefully. Several appreciated clarifications are made, and I'm more comfortable with this submission now. The justification for choosing fMRI over alternatives is fair enough [A1]. But I still think that the motivating point-of-view should not be directly from the perspective of "building a non-invasive BMI", since I somehow cannot see this application yielding a realistic BMI with an fMRI in the loop. All of the mentioned clarifications should be included in the revised manuscript. Some of the points that will be fixed in the revisions are quite important for the reader to follow the paper (e.g., sudden changes in terminology preferences). Figure 3a should be revised such that these bars are readable quantitatively. Numerical conclusions referring to residuals should be explicitly stated somewhere in the paper too [A4]. Overall, I'd be happy to increase my score to the accept region, assuming the presence of these revisions for higher clarity. --- Reply to Comment 1.1.1: Comment: Dear reviewer, First and foremost, we would like to express our deep gratitude for your time and thoughtful feedback to improve this submission. We genuinely appreciate your insights and recognize the importance of the concerns you've raised. We understand your concerns regarding the potential application of fMRI-based decoders in non-invasive BMIs and will ensure that our revised manuscript offers a more balanced perspective. Your suggestions about the presentation, such as terminology consistency, Figure 3a's readability, and the explicit statement of numerical conclusions, are well-taken.
We commit to addressing all of these in our revisions to provide clearer and more consistent content. In closing, your guidance has been invaluable in refining our work. We will diligently implement all your advice in the revised manuscript. Once again, thank you for taking the time and effort to provide constructive feedback on our manuscript and for considering our submission favorably. Warm regards, Authors of Submission6209
Summary: The authors aim to decode the visual stimuli from neural responses by reverse-mapping the signals from functional MRI (fMRI) to the images the participants see while being scanned. The authors claimed to achieve this through a two-phase framework. In the first phase, they pre-train an fMRI feature learner inspired by the masked auto-encoder (MAE) from unlabeled fMRI data. This phase tries to discern the common patterns and features in fMRI across participants responding to the same stimuli. In the second phase, they further tune the fMRI feature learner with cross-modality correspondence from the visual stimuli, i.e., the images. This process is similar to the cross-modality guidance commonly used in vision language models. Finally, after the two phases of training, the fMRI feature learner is used to generate signals that serve as conditions for a latent diffusion model (LDM) for image generation. The authors demonstrated that the final model is able to produce images from the participants' fMRI signals such that the generated images correspond well, in category, to what the participants see. On the quantitative benchmarks, the proposed method outperforms the baselines by a significant margin. As much as I appreciate this work, I would like to demystify it a bit. In my opinion, it's effectively performing two tasks sequentially: (1) fMRI signal classification into 50 or 100 classes, and (2) class-conditional image generation. The holistic framework and storytelling might have made it more impressive than it actually is, though admittedly it is still impressive. Strengths: 1. "Mind reading" is a very bold topic. Generating the image in a participant's mind from the fMRI recordings is arguably a reasonable way to approach it. 2. The design with three stages (contrastive pre-training, cross-modality fine-tuning, and the latent diffusion model) is very reasonable and well conveyed. 3. The ablation studies are presented neatly. Weaknesses: 1.
The experimental results, though impressive, do not seem very comprehensive. For example, in Figure 3, qualitative results are shown for two datasets while bar plots are only displayed for one. 2. Also, I would recommend showing some intermediate results — for example, directly evaluating the fMRI representation accuracy by training a simple model to classify the learned fMRI representations after phase 2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do you think it is reasonable to present the intermediate fMRI representation accuracy as described in Weaknesses 2? If I understand it correctly, you have $image_{real} \rightarrow (human) \rightarrow fMRI \rightarrow representation \rightarrow image_{gen}$, and you are showing $acc(classifier(image_{gen}), label(image_{real}))$. I wonder if you can also show $acc(classifier'(representation), label(image_{real}))$ and even $acc(classifier'(representation), classifier(image_{gen}))$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Nothing came to my mind. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the valuable feedback and for appreciating our work. Our responses to the weaknesses and questions are as follows. Weakness 1: In Figure 3, qualitative results are shown for two datasets while bar plots are only displayed for one. Answer 1: Thanks very much for the advice on the presentation of the results. We put the bar plot of BOLD5000 in the appendix due to the space limit. We will definitely put it in the main text during revision. Weakness 2 and Question 1: The reviewer would recommend showing some intermediate results — for example, directly evaluating the fMRI representation accuracy by training a simple model to classify the learned fMRI representations after phase 2. Do you think it is reasonable to present the intermediate fMRI representation accuracy? Answer 2: Yes, it is very reasonable to evaluate the intermediate fMRI representation, and we thank you for the insightful suggestion. While assessing these representations is valuable, directly classifying them using our datasets is challenging. In BOLD5000, most image classes contain just one image. For GOD, with its 1200 images, test and training sets differ totally in class composition. Given fMRI's inherent noise and lessons from related works, training a precise classifier on these intermediate representations is very complex. However, prompted by your feedback, we've explored an alternative: cross-modality reconstruction in Phase 2, as illustrated in Figure 1. We believe that evaluating the masked auto-encoding results for images in Phase 2 can produce a fitting metric to assess intermediate representations. Specifically, for one image-fMRI sample pair, we mask 50% of the image. The masked image is input into the image encoder, while the fMRI sample is directed to the fMRI encoder. The outputs from both encoders are combined and subsequently fed into the image decoder for reconstruction.
Pearson’s correlation between the original and reconstructed images can also measure the quality of the fMRI encoder's representations. With average correlations of 0.8971 for GOD subjects and 0.8703 for BOLD5000 subjects, we demonstrate the robustness of our intermediate fMRI representations. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I have read the rebuttal by the authors. They well explained the challenge in training a separate classifier for classifying the intermediate representations. I do not have any further concerns, and decide to maintain my rating at 7.
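The Pearson-correlation check described in Answer 2 of this thread could be sketched as follows; this is a minimal illustration on synthetic arrays (the shapes, the noise level, and all variable names are hypothetical stand-ins for an original image and its Phase-2 reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((64, 64))                                  # "ground-truth" image
reconstruction = original + 0.1 * rng.standard_normal((64, 64))  # noisy "reconstruction"

# Flatten both images to 1D and compute Pearson's r between them.
r = np.corrcoef(original.ravel(), reconstruction.ravel())[0, 1]
print(round(r, 4))  # close to 1.0 for a faithful reconstruction
```

A perfect reconstruction would give r = 1.0; the averages reported above (0.8971 and 0.8703) would sit on the same 0-to-1 scale.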
Summary: This paper proposed to decode visual stimuli from neural responses recorded by fMRI. First, it pretrains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations. Second, it tunes the feature learner to attend to the neural activation patterns most informative for visual reconstruction, with guidance from an image auto-encoder. Strengths: (1) The method is straightforward and easy to understand. (2) The related work discussion seems comprehensive. Weaknesses: (1) The selection of baseline methods for comparison appears to be insufficient, thus hindering the ability to effectively demonstrate the effectiveness of the proposed method. To enhance the evaluation, it would be beneficial for the authors to include a comparison with the methods mentioned in the related work section. (2) Another crucial factor that warrants consideration is the quality of the fMRI data, as it can significantly impact the performance of the model. It would be valuable if the authors could provide some ablation studies pertaining to this aspect as well. (3) The sole evaluation metric employed is n-way top-k accuracy, which could be supplemented with additional evaluation metrics to provide a more comprehensive assessment. Metrics such as FID (Fréchet Inception Distance), SSIM (Structural Similarity Index), MSE (Mean Squared Error), among others, would be advantageous for a more quantitative evaluation. (4) Since human data is used in the study, the authors can discuss some potential ethical concerns. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the comments above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is discussed in Section 6, however, since human data is used in the study, the authors can discuss some potential ethical concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for all the constructive feedback. We answer your questions as follows. (Due to the rebuttal length limit, we might summarize some of your questions.) Q1: About the selection of baselines and consideration of other models cited in related work as baselines. A1: Thanks for your advice on baseline selection. We have three points to clarify. First, we follow the previous work DC-LDM's selection of baselines to ensure a fair evaluation. DC-LDM is the previous SOTA model, which has been peer-reviewed and accepted by a top-tier conference (CVPR 2023). Compared against DC-LDM, our model shows a substantial relative improvement of 39.34%, as depicted in the Results Section 5.1. Second, we described GAN-based, diffusion-based, and VAE-based methods in the related work. All three categories have a corresponding model in the baselines: IC-GAN for GANs, DC-LDM for diffusion, and SS-AE for VAEs. Third, we did not include traditional regression-based methods, since their performance is not comparable to that of the latest deep learning-based methods, as we discuss in the related work Section 2.1. Q2: About the impact of fMRI data quality and ablation studies pertaining to this aspect. A2: Thank you for emphasizing the significance of fMRI data quality. We have three points to explain regarding your questions. First, our method is explicitly tailored to fMRI's noisy nature. We employed a contrastive masked autoencoder to derive denoised fMRI representations. The effectiveness of this approach is evident in our superior task performance. Moreover, we indeed have some related ablation studies with masking on fMRI, as reported in Table 1 and Table 2. In masking, we randomly set part of the fMRI representation to zero, which is also a form of quality impairment. Our optimal task performance is achieved by an fMRI encoder trained with 75% masked fMRI inputs, proving our model's robustness to fMRI quality.
Last, it's pertinent to note that non-invasive neuroimaging, including fMRI, inevitably comes with inherent noise. Despite this, the BOLD5000 and GOD datasets, utilized in our study, have been widely acknowledged in brain decoding tasks, underscoring their quality. For example, BOLD5000 was evaluated using MRIQC [1], yielding a signal-to-noise ratio of 5.157, attesting to its reliability. [1] Chang, Nadine, et al. "BOLD5000, a public fMRI dataset while viewing 5000 visual images." Scientific Data 6.1 (2019): 49. (ref. 46 in our paper) Q3: About supplementing additional evaluation metrics to provide a more comprehensive assessment. A3: Thanks for your suggestions. We follow the previous SOTA DC-LDM for the evaluation setting. We further evaluated our model using MSE and SSIM based on your suggestions. We report the comparisons of our model with DC-LDM. After averaging across all subjects, our model achieves an SSIM of 52.48 and MSE of 49.59, significantly outperforming DC-LDM's 51.85 SSIM and 51.38 MSE on the BOLD5000 dataset (p<0.01 with paired t-test for all the significant results). We will definitely supplement these metrics in the revised manuscript. Q4: Since human data is used in the study, the authors can discuss some potential ethical concerns. A4: Thanks very much for the suggestions. We used preprocessed data from publicly available datasets. The fMRI data that we train with have been processed and do not contain any data that can be directly linked to the participants' identities. The collection procedure of the fMRI underwent strict ethical review, as stated in their original papers. We will definitely add a section with an ethical statement at the end of the paper. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thanks the authors for the rebuttal. My concerns have been addressed and I have changed my score to weak accept.
Summary: The paper proposes an approach for decoding visual stimuli from neural responses (fMRI images). The rationale behind the proposed approach lies in the difficulty of learning the complex relationship between a stimulus and the neural responses to it, and the noisy nature of fMRI images. The authors propose a multi-phase approach comprising a contrastive-learning-based method for learning denoised representations of fMRI brain activities (a new DC-MAE method is proposed), a feature learner that combines the previous representation space with the image representation through cross-attention, and a diffusion model that reconstructs image stimuli from brain activities, conditioned on the output of the feature learner. Strengths: The approach looks like a smart aggregation of state-of-the-art components that allows reaching high top-k accuracy (the chosen metric). The paper is easy to read and the methodological choices are well motivated. The method is validated by top-k accuracy and achieves significantly better results than the baselines it is compared to. A thorough ablation study shows the impact of the various components and of the hyperparameters of the method. Weaknesses: A few aspects of the method are not detailed enough (see questions) Technical Quality: 3 good Clarity: 3 good Questions for Authors: To which space does v_i belong? Is it a time series? I don't understand the use of the 1D convolutional models that map v_i^m_1 and v_i^m_2 into embeddings. In Phase 2, are all models (E_F, E_I, D_F, D_I) retrained, or are only the cross-attention layers trained? The way the latent diffusion model is conditioned seems new. How does this compare to previous attempts at designing conditional latent diffusion models? Concerning metrics, it is said in Section 4.3, "We employ the pre-trained ImageNet-1K classifier [50] as a semantic correctness evaluator." What does this mean?
Concerning the top-k accuracy experiments, did the authors use a statistical test to attest to the significance of the results?
 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: limitations are addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for all the constructive feedback and for appreciating our work. We answer your questions as follows. Q1: To which space does v_i belong, is it a time series? Why is a 1D convolutional model used to map v_i^m_1 and v_i^m_2 into embeddings? A1: Thank you for your question regarding the nature of v_i and the usage of 1D convolutional models in our study. In our work, v_i denotes the fMRI signal when the subject is viewing a picture. Importantly, it is a 1D vector, not a time series, as we have averaged the data across the time dimension. This results in a spatial pattern of fMRI signal over the visual cortex for each picture viewed by the subject. We then employ a 1D convolutional model to transform this 1D spatial pattern of the fMRI signal, v_i, into an embedding. We will add these details during revision. Q2: In Phase 2 are all models (E_F, E_I, D_F, D_I) retrained, or are only the cross attention layers trained? A2: All models (E_F, E_I, D_F, D_I) are retrained, as described in line 164. Q3: The way the latent diffusion model is conditioned seems new. How does this compare to previous attempts at designing conditional latent diffusion models? A3: Thanks for your question. We apply this type of conditioning on the LDM given the noisy nature of fMRI data. We adopt both cross-attention conditioning and time-step conditioning. Time-step conditioning strengthens the overall conditioning effect, and cross-attention conditioning helps to create a stronger condition. This double conditioning is inspired by the conditioning methods in [1] and [2]. [1] R. Rombach, et al. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on CVPR, pages 10684–10695. (ref. 24 in our paper) [2] P. Dhariwal, et al., “Diffusion models beat GANs on image synthesis,” Advances in NeurIPS, vol. 34, pp. 8780–8794, 2021. (ref.
44 in our paper) Q4: What does this “We employ the pre-trained ImageNet-1K classifier [50] as a semantic correctness evaluator” mean? A4: Thanks for your question. We follow the previous work in using n-way top-k accuracy to evaluate our model. A detailed explanation of the computation for this metric can be found in Algorithm 1 provided in the appendix. Regarding the statement, “We employ the pre-trained ImageNet-1K classifier as semantic correctness evaluator”, we mean that we use the pre-trained ImageNet-1K classifier to classify both ground-truth images and the generated images. These predicted classes are then utilized in computing the final metrics as explained in Algorithm 1. Q5: Concerning the top-k accuracy experiments, did the authors use a statistical test to attest to the significance of the results? A5: Yes, the improvement of our model over the baselines is significant, including over the previous state-of-the-art DC-LDM. All the significant results have p-value < 0.01 with a paired t-test. We will further clarify them in the revision. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks to the authors for their responses. I read all reviews and the answers carefully. Many clarifications and additional results have been obtained that strengthen the paper. Yet I am not fully satisfied with the answer about the nature of v_i, and more generally I still feel the description of the method lacks details. I still don’t understand why a 1D convolutional layer is used, why the data are averaged across the time dimension, and if so what is the axis along which a 1D convolutional layer is used? Also, what do you mean by 1D spatial pattern? I am not an expert in neuroscience and I likely lack some background knowledge to understand the processing you describe, which is maybe standard in the neuroscience field.
Yet I believe providing more details would help any machine learning reader to understand the paper better even if he/she is a non-expert in the field of neuroscience. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you very much for taking the time to provide thoughtful feedback on our manuscript and read the rebuttals. We elaborate here on some standard processing pipelines in fMRI for your convenience. To answer your questions, we first need to briefly introduce some background on fMRI. Thanks for your patience. At a basic level, fMRI can be thought of as capturing a series of 3D images of the brain at fixed intervals. When a participant is presented with an image for $t_p$ seconds and the repetition time is $t_r$ seconds, we obtain a sequence of $t_p / t_r$ fMRI samples after each presentation. Given an fMRI sample size of [$x, y, z$], the resulting sequence is of shape [$t_p / t_r, x, y, z$]. The first axis (with shape $t_p / t_r$) represents the temporal dimension or the time dimension. The other three axes refer to the spatial dimensions of the acquired data in 3D space. (1) Why are the data averaged across the time dimension? fMRI measures the hemodynamic response, which is the change in blood flow and blood oxygenation in the brain that corresponds to neural activity. Given that the hemodynamic response is slow and can span several seconds, individual fMRI volumes (taken every $t_r$ seconds) might capture only parts of this response. By averaging, we obtain a more stable and comprehensive representation of the neural activity elicited by the stimulus, reducing the effects of transient fluctuations and noise. It is a common operation in fMRI preprocessing. (2) Why is a 1D convolutional layer used and along what axis? After obtaining the averaged representation, we flatten the 3D tensor ([$x, y, z$]) into a 1D tensor, leading to the shape $x \times y \times z$, which is a common operation in fMRI processing [1].
With the flattened tensor, we then apply a 1D convolution. Given the spatial redundancy inherent in fMRI data, adjacent voxels are often found to display similar magnitudes as we describe in Section 3.1 line 109. The convolution operation is particularly suited for our data as it helps in aggregating localized information. (3) What is the meaning of spatial pattern? Simplistically, a neural activation pattern reveals how different brain regions activate in response to a stimulus. The term "spatial pattern" refers to the spatial distribution of these activation patterns, as also used in other work [2,3]. Given the spatial redundancy in fMRI, we apply a convolution layer to effectively aggregate local information and learn the spatial pattern. We genuinely appreciate your insights and guidance. We'll ensure to weave these explanations into the revised manuscript, catering to readers from diverse backgrounds. Should you have any further questions or require additional clarifications, please do not hesitate to propose them. Warm regards, Authors of Submission6209 [1] Jang, Hojin, et al. "Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks." NeuroImage 145 (2017): 314-328. [2] Hsieh, P-J., Ed Vul, and Nancy Kanwisher. "Recognition alters the *spatial pattern* of fMRI activation in early retinotopic cortex." Journal of Neurophysiology 103.3 (2010): 1501-1507. [3] Williams, Mark A., Sabin Dang, and Nancy G. Kanwisher. "Only some *spatial patterns* of fMRI response are read out in task performance." Nature Neuroscience 10.6 (2007): 685-686.
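The pipeline described in this reply (time-averaging, flattening, patch embedding via a stride-16 1D convolution) could be sketched as below. This is a minimal numpy illustration: the tensor shapes are hypothetical, and a random projection matrix stands in for the learned convolution weights (a 1D convolution with kernel size and stride both equal to the patch size is equivalent to splitting the vector into non-overlapping patches and applying one linear map to each):

```python
import numpy as np

t, x, y, z = 4, 6, 6, 6          # hypothetical time steps and spatial dims
patch_size, embed_dim = 16, 8    # patch size 16, as in the rebuttal; embed_dim is made up

fmri = np.random.default_rng(0).random((t, x, y, z))

# (1) Average across the time dimension -> one stable 3D activation map.
volume = fmri.mean(axis=0)                      # shape (x, y, z)

# (2) Flatten the 3D volume into a 1D voxel vector.
voxels = volume.ravel()                         # shape (x*y*z,) = (216,)

# (3) Patchify and embed: split into length-16 patches, project each to embed_dim.
n_patches = voxels.size // patch_size
patches = voxels[: n_patches * patch_size].reshape(n_patches, patch_size)
W = np.random.default_rng(1).standard_normal((patch_size, embed_dim))
embeddings = patches @ W                        # shape (n_patches, embed_dim)

print(embeddings.shape)  # (13, 8)
```

The resulting sequence of patch embeddings is what a transformer-style encoder would consume.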
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a novel two-phase fMRI representation learning method to decode visual stimuli from neural responses. This is a significantly challenging task due to noisy fMRI signals and the complex, intricate patterns of the brain's visual representation. The proposed two-phase method can reconstruct image stimuli from brain activities. Strengths: 1. The overall framework is novel and interesting. The use of a double-contrastive masked auto-encoder and an image-guided auto-encoder is novel. 2. The introduction is clear. 3. Superior experimental results are obtained. Weaknesses: 1. The motivation for the two phases is confusing. Why is two-phase representation learning needed? And what is the difference between the two phases? 2. How does the contrastive MAE work? It is not clear. 3. The model may be too big. How efficient is it? The experiments are weak. How is the mask set? What is the model's size? More quantification is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: the proposed method is too complex to follow. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for all the constructive feedback. We answer your questions as follows. Q1: Why is two-phase representation learning needed? What is the difference between the two phases? A1: The two-phase design stems from the unique characteristics and challenges posed by fMRI data in the context of visual reconstruction, as stated in Section 3.1, lines 104-122. For your convenience, we elucidate the motivation for and difference between the two phases as follows. First, fMRI data is spatially redundant. Adjacent voxels tend to display similar magnitudes. The mask-and-predict methodology in Phase 1 is designed to help the fMRI encoder learn the underlying structure of the input brain data, crucial for understanding brain dynamics. Second, fMRI data is noisy and subject to individual biological variances. Masked autoencoding in Phase 1 helps suppress noise. Optimization of the contrastive loss further discerns common patterns of brain activities over individual variances. Third, the process by which a visual stimulus arouses a neural response involves multiple stages of neural processing. The resulting fMRI signal is a highly convolved representation of these distinct stages. So we design Phase 2 with cross-modality guidance, which aims to instruct the fMRI encoder to capture the most informative signals from the convolved fMRI data for visual reconstruction. In essence, our two phases work synergistically: Phase 1 focuses on data denoising and structure understanding, while Phase 2 emphasizes extracting information crucial for visual reconstruction. Collectively, they ensure precise differentiation between cognitive states, which is paramount for brain decoding. Q2: How does the contrastive MAE work? A2: Thank you for seeking clarity on the workings of the contrastive MAE. You can refer to Figure 1 and Section 3.2 for details of how the contrastive MAE works. 
To elucidate step by step: 1) For each fMRI input sample, we generate two distinct masked versions. 2) These masked versions are then processed through the fMRI autoencoder, resulting in two reconstructed samples. 3) These two reconstructed samples are treated as a pair of positive samples. 4) Furthermore, the association between each reconstructed sample and the original unmasked sample also forms a pair of positive samples. 5) Reconstructed samples from other fMRI inputs in the same batch serve as negative samples. 6) With these established positive and negative pairs, we then optimize the contrastive loss as detailed in equations (1-3). Q3: What is the size and efficiency of the model? How to set the mask? A3: Size and Efficiency of Model: We take training on the GOD dataset as an example. For the fMRI representation learning model, we train Phase 1 for 150 epochs and Phase 2 for 60 epochs on one Nvidia A100 GPU. The two phases in total take about 12 hours. After the two phases, we only save the checkpoint of the fMRI encoder, which has 15.16M parameters. For the diffusion model, we use the pre-trained label-to-image latent diffusion model. The model has 401.32M parameters, but during finetuning, we only tune the weights in the cross-attention layers and the fMRI encoder, which have 17.4M parameters in total. We use one Nvidia V100 GPU to finetune the model for 500 epochs, which takes around 20 hours. Setting of Mask: We have conducted experiments to study the effects of the masking ratio on reconstruction performance; the results are detailed in ablation Tables 1 and 2. We also include the mask setting that achieves the best reconstruction performance in the last line of Table 2, where the fMRI mask ratio is 0.75 and the image mask ratio is 0.5. We will definitely further clarify these quantifications in the revision. 
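Steps 1-6 above amount to a batch contrastive objective in which matched views are positives and other batch items are negatives. The following is a minimal numpy sketch of such an InfoNCE-style loss (illustrative tensors and temperature, not the authors' implementation of equations (1-3)):

```python
import numpy as np

def info_nce(z1, z2, temp=0.1):
    """InfoNCE-style loss: row i of z1 and row i of z2 form a positive
    pair; all other rows in the batch act as negatives."""
    # L2-normalize embeddings so similarities are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                        # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (the positive pair) as the target.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
B, d = 8, 16
z1 = rng.normal(size=(B, d))                 # embeddings of masked view 1
z2 = z1 + 0.01 * rng.normal(size=(B, d))     # near-identical view 2
z_rand = rng.normal(size=(B, d))             # unrelated embeddings

# Aligned positive pairs give a much lower loss than random pairings.
assert info_nce(z1, z2) < info_nce(z1, z_rand)
```

Minimizing such a loss pulls the two reconstructions of the same fMRI input together while pushing away reconstructions of other inputs in the batch.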
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Dear Authors, Thank you for answering the points that were raised, your response will be taken into account. Best, Your Area Chair
null
null
null
null
null
null
GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference
Accept (poster)
Summary: The paper introduces new data reconstruction attacks (DRAs) on split inference (SI) called GLASS and GLASS++. The task of DRAs in SI is to reproduce the image of a certain user based on the intermediate output of the first part of a trained network, which is located on the user's device. GLASS uses StyleGAN as prior knowledge to constrain the generation of images based on given intermediate activations. The paper shows that GLASS is more robust to multiple defense methods against DRAs and that the generated images are of higher quality than those from previous DRA approaches. Strengths: - the paper presents a novel DRA attack - the paper presents a good idea to use StyleGAN as prior knowledge for DRAs - the paper demonstrates that existing defenses are not effective against DRAs based on generative models. Weaknesses: - It is quite hard to follow the paper. For example, it is not clarified what “perturbation” is during the explanation of the approach (paragraph about “Z space search”). - Setting the learning rate and the number of steps used for the attack to be the same for GLASS, rMLE and LM is not fair in my opinion. Different attacks might need a different number of steps or a different learning rate based on the optimization objective. The comparison would be much fairer if the optimal hyperparameters were taken for each of these attacks. - There are too many references and experimental results in the supplementary material. It is hard to read the paper without reading the supplementary material. For example, in the abstract, the paper claims that the approach is evaluated on 7 defense methods. However, not all results for the evaluations are shown in the main paper; some are instead only present in the appendix. 
Misc: Line 249: Table 4 -> should probably be Table 1 Line 264: Table 5 -> should be Figure 5 Line 203: sigma() is not properly defined Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1: What does “searching in Z space for the reason that the entanglement of Z space increases the amplitude of positive perturbation in representation space” mean? Could you elaborate on what the “amplitude of positive perturbation” is? Q2: Related to the previous question, can you elaborate on what exactly you mean by disturbance in line 142? Q3: Why is the total variation loss required? Shouldn't StyleGAN create realistic images based on the latent variable, making the total variation loss unnecessary? Q4: In Figure 3, the paper claims that GLASS++ finds the global optimum. As far as I am aware, it is not possible to prove that the found solution is, in fact, the global optimum. Can you elaborate on how you come to the conclusion that the result is the global optimum instead of a local one? Q5: It would be interesting to see not only the mean of the metrics, but also the standard deviation. This would give an impression of how consistent the attack results are. Could you state the standard deviation for the experiments in Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: For the experiments on CelebA the paper reserves roughly 40% of the dataset for performing the evaluation, while the StyleGAN was trained on 60% of the dataset. The assumption that the attacker has 60% of the data from the exact same distribution as the input he is trying to recreate is quite restrictive. 
The effect of a distribution shift between the image to be reconstructed and the GAN is addressed in the supplementary material. However, the distribution shift is only tested on an undefended model, which is why it is not possible to make a claim about the influence of the shift regarding defended models. Showing results on the defended models with a distribution shift would strengthen the findings of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the invaluable and perceptive feedback offered by the reviewer. We have considered all the concerns mentioned and responded appropriately to each one. **Answer for Weakness1, Question1, Question2, Question4:** Before proceeding, we kindly ask you to consult the detailed information available in the global response, specifically response.pdf/Figure-4. The answers provided below are grounded in this context. Regarding **Weakness1**, the "perturbation" referenced in the paper pertains to modifications within the representation space, involving alterations in corresponding facial features resulting from latent code optimization. The term "positive perturbation" denotes changes that diminish reconstruction loss and are favorable for an attack. Regarding **Question1**, the observation of response.pdf/Figure-4(a) reveals that even though the Z space search's trajectory variation amplitude is small, it induces significant facial feature changes, ultimately converging to the target features. This mirrors the greater trajectory variation amplitude in the W space search, with the resultant mapped latent code converging to the target, as depicted in response.pdf/Figure-4(b). This signifies that Z space search enhances the magnitude of facial feature changes and broadens the scope of feature exploration. Furthermore, this adjustment brings the reconstructed outcomes closer to the target image, signifying an "increase in the amplitude of positive perturbation in the representation space." Regarding **Question2**, we first elaborate on the merits of Z and W+ space searches. W+ space search introduces more nuanced feature alterations, enhancing reconstruction precision. However, its restricted range of feature changes increases the risk of getting stuck in local optima during optimization. 
The Z space search remedies this limitation by offering larger feature perturbations (controlled disturbances) that serve as advantageous starting points, efficiently sidestepping the W+ space search's potential proximity to local optima. Regarding **Question4**, for the same reasons detailed above, we favor W space over W+ space for more insightful analysis. As portrayed in response.pdf/Figure-4(c), distinct initializations w1, w2, w3, and w4 lead their respective W space searches to converge on the target latent code. Enhanced initialization and W/W+ space searches in GLASS++ render this process even smoother. This elucidates our conclusion of "the global optimum." Nevertheless, we recognize the need to revise this wording due to inaccuracy. When the intermediate feature information is exceedingly scarce (such as when the split point is set at Block 5/6), achieving the target latent code becomes unfeasible. We will alter "the global optimum" to "an optimum attainable by the attacker with existing knowledge." We apologize for the imprecise phrasing, which will be rectified in our revised version. **Answer for Weakness2:** Thank you for your feedback. In our view, optimization-based data reconstruction attacks tend to yield improved attack effectiveness as the number of optimization iterations increases. However, as the optimization process advances, the gains in reconstruction enhancement tend to diminish. To address this, we've set a sufficiently large number of iterations (20,000), ensuring that various attacks converge stably to their attainable optima. As depicted in response.pdf/Figure-5, the feature loss for the three optimization-based attacks exhibits complete convergence during the reconstruction process. To validate this, we adjusted the learning rate from a uniform 1e-2 to 1e-1. As evident in the reconstruction outcomes, the three results underwent minimal alteration, further affirming the robust convergence of the attack outcomes. 
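The convergence behavior discussed above can be illustrated with a toy version of the feature-matching optimization at the core of such attacks (a hypothetical linear stand-in for the generator and the client-side split encoder; the actual attacks optimize StyleGAN latent codes against a split DNN):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a linear "generator" G(z) = A z and a linear
# client-side "encoder" E(x) = B x. Purely illustrative shapes.
d_z, d_x, d_f = 4, 16, 8
A = rng.normal(size=(d_x, d_z))
B = rng.normal(size=(d_f, d_x))
M = B @ A                                   # composed map z -> features

z_true = rng.normal(size=d_z)
f_target = M @ z_true                       # intercepted intermediate features

# Gradient descent on z for loss(z) = ||M z - f_target||^2.
z = rng.normal(size=d_z)                    # random initialization
lr = 1.0 / (2 * np.linalg.norm(M, 2) ** 2)  # step size for stable descent
for _ in range(50_000):
    z -= lr * 2 * M.T @ (M @ z - f_target)  # analytic gradient step

x_rec = A @ z                               # reconstructed private input
assert np.linalg.norm(M @ z - f_target) < 1e-2
```

In this convex toy problem the feature loss converges to its optimum from any initialization; with a real StyleGAN generator the landscape is non-convex, which is exactly why initialization quality (Z space search, or the encoder network in GLASS++) matters.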
**Answer for Weakness3:** We appreciate your valuable suggestions. We will thoroughly revise our paper to ensure that crucial results and analyses are appropriately incorporated into the main body. **Answer for Weakness4:** We apologize for the errors that have occurred. We will conduct a comprehensive review of the entire paper and rectify these issues in the revised version. **Answer for Question3:** The impact of total variation loss on the attack's effectiveness is generally limited, owing to StyleGAN's proficiency in generating realistic images. Nonetheless, W+ space offers extensive image editing capabilities, potentially causing the latent code to deviate from the sampling distribution of W+ space during optimization. Illustrated in response.pdf/Figure-6, under the Noise Mask defense, GLASS's unrestrained reconstruction exhibits unnatural artifacts. The inclusion of the total variation loss term rectifies this by producing smooth and natural reconstructed images. **Answer for Question5:** We show the standard deviations of GLASS and GLASS++ in response.pdf/Table-1. **Answer for Limitation:** In real-world split inference systems, the common practice involves the server training the model with its own data and then sharing it with clients for collaborative inference through APIs. Our setup aligns well with this scenario, as the server grants API access based on specific criteria for private inference data. To explore a narrower attack scope, a simple solution could be fine-tuning other pre-trained StyleGAN models with a small CelebA image subset. We'll conduct and include these experiments in the revised version. We present results pertaining to defended models encountering distribution shifts in response.pdf/Figure-7. Although our reconstruction results experienced a moderate decline, the privacy feature disclosure capability remains superior to that of Inverse-Network. 
While time constraints limited us to exploring only GLASS, we anticipate that the more stable GLASS++ will yield enhanced reconstruction outcomes, which we will also incorporate in the revised version. --- Rebuttal Comment 1.1: Title: Addressing Rebuttal #1 Comment: Thank you for the detailed rebuttal! My concerns have been appropriately addressed, which is why I am raising my score from 3 --> 6 to `weak accept`. --- Reply to Comment 1.1.1: Comment: We genuinely appreciate your prompt response to our rebuttal.
Summary: This paper proposes GAN-based latent space search attack (GLASS) that leverages a pre-trained StyleGAN for reconstructing private data from shared representations in split inference via a two-step search in the Z space and the W+ space. Additionally, GLASS++ is proposed to learn a mapping model to produce better initial points for subsequent optimizations. The effectiveness of the proposed method is evaluated on the CelebA and FFHQ datasets against several defenses. Strengths: 1. The paper is well-structured and easy to follow. 2. It is interesting to consider combining optimization-based attacks with learning-based attacks. 3. The evaluation considered several attack baselines and defense mechanisms. Weaknesses: 1. Some closely related prior works [R1, R2] were not discussed/compared. For instance, the idea of utilizing GAN inversion to improve data reconstruction has been introduced in [R1], which utilizes a pre-trained GAN to invert latent vectors produced by DNN to the corresponding input images through optimization. The usage of a StyleGAN with the latent space search and learned mapping is new but it is unclear if there are other technical novelties. 2. In the evaluation, it is not quite clear which part of the data is used to train the encoder network, which makes it a bit hard to verify the benefits of GLASS++ as claimed in Figure 3. If there’s a distribution shift in the data used for training the encoder, could it still produce an initial point that is better than random initialization? 3. The experiments only considered CelebA as the private dataset and 40 images for evaluating the attacks. As the optimization process is stochastic and sensitive to initialization, it would be better to consider more data samples to validate the effectiveness of the proposed attacks. [R1] Zhang, Yuheng, et al. "The secret revealer: Generative model-inversion attacks against deep neural networks." CVPR 2020. [R2] Dong, Xin, et al. 
"Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks." BMVC 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Which portion of data is used to train the encoder network used in GLASS++? Would it still perform well if both the StyleGAN and the encoder are trained on a different data distribution? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The negative societal impact may be stated more explicitly, e.g., via adding a discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the invaluable and perceptive feedback offered by the reviewer. We have considered all the concerns mentioned and responded appropriately to each one. **Answer for Weaknesses:** 1. We agree that GAN inversion techniques are used in some of the study cases of Model Inversion Attacks (MIAs) and Gradient Inversion Attacks (GIAs) as well. However, our contribution differs significantly from these studies. The innovativeness of our work does not lie in the use of GAN inversion, but rather in exploiting the latent space search characteristic and advantageous disentangled representation of StyleGAN to develop novel Data Reconstruction Attacks (DRAs) specifically targeting Split Inference systems. The MIAs represented by [R1] aim at obtaining sensitive image features of individuals in the original training data based on the coupled feature information contained in the confidence scores of the face ID classification model, while DRAs focus on reconstructing the client private inference data by utilizing intermediate feature representations outputted from the splitting layer of DNN models with different functionalities. Regarding [R2], this work is methodologically similar to the baseline Inverse Network (IN) that we have compared. We will provide additional comparative experiments or include discussions of these closely related prior works in our manuscript. 2. We thank the reviewer for the comment. We agree that our description of the training data for the encoder network was not very clear. In all GLASS++ experiments, the training data of the encoder network is the same as the data used to train the StyleGAN model used by the attacker. Compared with GLASS, GLASS++ only increases the cost of the attacker's computing resources. 
In the experiments presented in Appendix D.3, we substituted the StyleGAN model used in the attack with one trained on the FFHQ dataset, instead of CelebA, and utilized FFHQ data to train the encoder network. The results demonstrate that even with a shift of data distribution, GLASS++ still mounts an effective reconstruction attack. That is, even if there’s a distribution shift in the data used for training the encoder, it can still produce a better initial point. We will provide a detailed description and an extended ablation study in our revision. 3. Thanks for the comment. We would like to clarify that we have extended our method to heterogeneous data (CINIC-10 as the private dataset), which is mentioned in Lines 308-314. The experiments in Appendix D.2 demonstrate the adaptability of our method. We believe that the current number of samples validates the effectiveness of our method, but we agree that more samples would help improve the overall quality of our work. We will provide more attack results as well as standard deviation data for relevant experiments in the revised version to demonstrate the stability of our attack effects. **Answer for Questions:** Please refer to our answer for **Weaknesses-2**. **Answer for Limitations:** We thank the reviewer for this suggestion. We will provide additional discussions with respect to the ethical implications of enabling more effective DRAs in our manuscript. The potential negative societal impact of GAN-based DRAs mainly stems from the disclosure of private data. Once the attacker reconstructs the original data fed into the DL model at the edge side, it can lead to privacy invasion and generate malicious false information. We hope that our proposed attacks will draw attention to the privacy protection of split inference systems and promote the development of more effective defense mechanisms. *References:* [R1] Zhang, Yuheng, et al. "The secret revealer: Generative model-inversion attacks against deep neural networks." 
CVPR 2020. [R2] Dong, Xin, et al. "Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks." BMVC 2022. --- Rebuttal Comment 1.1: Title: Thank you Comment: Dear authors, Thank you for your prompt response, which the reviewer greatly appreciates. The suggestion is to consider integrating the relevant discussion and experiment details into the revised version to improve clarity. --- Reply to Comment 1.1.1: Comment: Your feedback on our rebuttal is highly valued. We will enhance our paper by incorporating your valuable suggestions.
Summary: The paper proposes GLASS and GLASS++, which utilize StyleGAN to launch data reconstruction attacks against Split Inference. This is the first GAN-based reconstruction attack against split inference, and it shows consistently better results compared with previous methods, against 7 defense schemes. Strengths: 1. As stated by the authors, this is the first GAN-based reconstruction attack against SI. I am not sure if similar ideas (searching the GAN latent space in privacy attacks) have been proposed for similar topics (for example, other privacy attacks like gradient inversion or model inversion), so I would like to discuss with other reviewers about the novelty or originality of the method. 2. The work has achieved a high level of completion, and the experiments were conducted comprehensively. The results are reported to be state-of-the-art for nearly every setting. Weaknesses: The results rely heavily on (1) the similarity between the auxiliary distribution and the private distribution and (2) the complexity of the distribution modeled. Although the authors discussed the performance under distribution shift, they conducted experiments on FFHQ and CelebA, which are both face image datasets, well aligned and structured. In practice, the server may not know much about the data distribution from end users, so the auxiliary dataset and private dataset could have significantly different distributions. Additionally, in practice, the distribution could be highly complex; for example, for a facial recognition system, photos from end users are not likely to be cropped and well-aligned. They may have various backgrounds, poses, and lighting conditions. Although the paper provides SOTA methods over their settings, it is worth discussing how the method will perform under more challenging settings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
Could you provide some visualizations on the intermediate results of searching the optimal W+, to make readers understand more about the optimization progress and how your model helps the optimization towards optimal reconstruction? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Same as weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply value the priceless and insightful feedback provided by the reviewer. We have taken into account all of the mentioned concerns and addressed them accordingly. **Answer for Weaknesses:** We thank the reviewer for the comment and would like to further clarify the adversary's knowledge about the data distribution. In real-world protocols for Split Inference, it is relatively easy for a server-side adversary to know the tasks of the target model and the information related to the data distribution, since the service provider usually obtains the complete target model and deploys it in a split. This knowledge allows the adversary to choose auxiliary data with a distribution similar to the client-side private inference data. In addition, existing privacy attack and defense studies in machine learning [1][2][3][4][5] typically use aligned/structured face images as assumptions for private data and auxiliary data, and our work follows this setup as well. Our method is also expanded to heterogeneous data, as mentioned in Lines 308-314. In Appendix D.2, we illustrate that our approach can be adapted to heterogeneous image data with a lower level of structural similarity. Our experiments involve the utilization of a released StyleGAN-XL model [7], pre-trained on CIFAR-10 [8], to reconstruct private inference data from CINIC-10 [9]. Although there is a shift in data distribution between CIFAR-10 and CINIC-10, our attack demonstrates effectiveness in this setting, as depicted in Figure 12. In Lines 652-656, an analysis is presented explaining the better performance of our method compared to other DRAs. We are sorry for placing this important result in the appendix and assure you that we will make adjustments to the paper's content and structure to include relevant results in the main body. We concur that a more realistic implementation of DRA on uncropped/unaligned private inference data will enhance the quality of our research. 
Following [6], we improve StyleGAN by modifying its first-layer feature from a constant to a variable. Furthermore, we combine it with the latent code of W+ space and carry out joint optimization during the second stage of our method. The evaluation of GLASS shows our method's effectiveness even when different transformations are applied to private inference data, as illustrated in response.pdf/Figure-1. Furthermore, we implement GLASS in a more realistic setting. As shown in response.pdf/Figure-2, we photographed a volunteer and obtained multiple natural images as private inference data. The attack results demonstrate the robustness of our method. It is essential to clarify that in the above experiments, the StyleGAN generator used in our attack was still trained on CelebA, which is cropped/aligned. We believe incorporating data augmentation in StyleGAN training could further improve the attack's effectiveness. A dedicated section will be included in our revised version to comprehensively discuss and present these experiments. **Answer for Questions:** We thank the reviewer for this suggestion. As shown in response.pdf/Figure-3, based on a good initialization (brought by the Z space search or the encoder network), the W+ space search carries out a fine-grained search for sensitive features. Visually, as the number of iterations increases, the reconstructed image gradually resembles the target image. We will likewise provide clearer and more intuitive explanations in our manuscript. **Answer for Limitations:** Please refer to our answer for **Weaknesses**. *References:* Please refer to the global response.
Summary: The paper titled "GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference" investigates and proposes new methods of data reconstruction attacks (DRAs) against split inference (SI), a deep learning paradigm that addresses computational constraints on edge devices while preserving data privacy. The authors present GLASS and GLASS++, which are the first DRA methods that use Generative Adversarial Networks (GANs), specifically leveraging StyleGAN, for the purpose of data reconstruction in SI. These methods are evaluated against seven advanced defense mechanisms in the SI paradigm and are found to be effective even in their presence. The authors claim their proposed methods significantly outperform existing DRAs. Strengths: Originality: The paper introduces the novel application of GANs in data reconstruction attacks, marking a significant shift in the approach to DRAs in SI. Quality: The paper is technically sound and contains thorough experimental evaluations. It systematically evaluates the proposed methods across different split points, multiple defense mechanisms, and various adversarial settings, providing a well-rounded view of their performance. Clarity: The paper is well-structured and coherent, with clear descriptions of the problem, proposed methods, and the results obtained. It does an excellent job of explaining the limitations of existing DRAs and how the proposed methods overcome these. Significance: The work is of high significance as it highlights the existing vulnerabilities of SI, suggesting that even advanced defense mechanisms may not provide sufficient protection against data privacy attacks. This research could lead to more robust defense strategies in SI systems. Weaknesses: For the paper weaknesses. I think the biggest concern that I have is that the method relies on StyleGAN which is trained on cropped/aligned face data images. 
One big assumption made here is that the private inference data resembles these cropped/aligned face images. To me, this is a pretty restrictive setting. In order to generalize to other types of image data, the StyleGAN discussed in the paper would not be sufficient. The whole pipeline needs to be redesigned because the pipeline is specially designed with face-StyleGAN in place and certain modules are designed to extract desired styles w+. The impracticality of generalizing and the relatively restrictive setting of the proposed pipeline are the major weaknesses in my opinion. In addition, the performance of GLASS and GLASS++ against defense mechanisms is analyzed, but the paper lacks a clear discussion on possible countermeasures or ways to further improve these defenses in light of the presented attacks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please provide responses to my concerns in the weakness section above. Also, please elaborate more on the potential countermeasures that could be implemented to prevent or mitigate the effects of attacks such as GLASS and GLASS++. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: While the paper showcases impressive results, it seems to lack a comprehensive discussion about the potential ethical implications of enabling more effective data reconstruction attacks, which could be used maliciously. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the invaluable and perceptive feedback offered by the reviewer. We have considered all the concerns mentioned and responded appropriately to each one. **Answer for Weaknesses:** - Data Types We appreciate the reviewer's comment. We'd like to clarify that our attack pipeline isn't solely tailored for face-StyleGAN; rather, we extend our method to diverse data types, as mentioned in Lines 308-314 of the paper. In Appendix D.2, we showcase how our approach adapts to heterogeneous image data with varying structural similarity. Our experiments employed the StyleGAN-XL model [7], pretrained on CIFAR-10 [8], to target private inference data from CINIC-10 [9]. Despite a shift in data distribution between CIFAR-10 and CINIC-10, our attack remains effective, as illustrated in Figure 12. In Lines 652-656, we delve into the reasons for our method's superiority over other DRA methods. We apologize for placing this vital result in the Appendix and assure you that we'll revise the paper's content and structure to incorporate these pertinent findings into the main body. We acknowledge the challenge associated with generating out-of-range images using StyleGAN. Nevertheless, our research primarily concentrates on harnessing the unique characteristics of StyleGAN's latent spaces to amplify the attack performance of DRA. Our focus lies in refining the accuracy of searching for sensitive privacy features. In the existing landscape of privacy attacks and defenses in machine learning [1][2][3][4][5], it's a prevailing convention to assume the utilization of cropped/aligned face images as private inference and auxiliary data. Adhering to the adversarial settings established in previous work [5], our goal is to unveil greater private information from inference data when compared to baseline methods. 
With regard to the limitation posed by StyleGAN's reliance on the cropped/aligned faces it's pretrained on, we believe this concern extends to other domains within computer vision. Recent research [6], conducted by Yang et al., has caught our attention, as it briefly investigates the fixed-crop constraint of StyleGAN2 – the primary generative model employed in our experiments. The proposed approach effectively expands the generative scope beyond cropped/aligned faces. While addressing the aforementioned limitation is not our primary research focus, we concur that a more practical implementation of DRA on uncropped/unaligned private inference data would enhance the quality of our work. Following [6], we enhance StyleGAN by transitioning its constant first-layer feature to a variable one. Furthermore, we integrate this with the latent code of W+ space and undertake joint optimization during the second stage of our methodology. As demonstrated in response.pdf/Figure-1, the evaluation of GLASS underscores our method's effectiveness even when diverse transformations are applied to private inference data. Moreover, we've implemented GLASS within a more authentic context. As illustrated in response.pdf/Figure-2, we captured images of a volunteer and gathered multiple natural images as private inference data. The resulting attack outcomes affirm the robustness of our approach. It's crucial to clarify that, in the aforementioned experiments, the StyleGAN generator utilized for our attack was still trained on CelebA, featuring cropped/aligned images. We believe the integration of data augmentation into StyleGAN training could further bolster the attack's effectiveness. In our revised version, we will incorporate a dedicated section to comprehensively discuss and present these experiments. - Possible Countermeasures Current defense mechanisms focus on safeguarding entire images from reconstruction.
In contrast, our proposal suggests a targeted approach, concentrating on specific sensitive attributes to enhance defense efficacy. For instance, by masking the mouth region of the input image, we can thwart the attacker's ability to reconstruct this portion, albeit introducing a new trade-off in utility. Furthermore, incorporating adversarial samples against StyleGAN, like generating optimized noise perturbations for each intermediate feature, could potentially yield a more potent defense strategy due to its one-to-one correspondence. We will delve into these aspects in greater detail in the revised version. We sincerely appreciate the reviewer's invaluable feedback, and we are fully committed to enhancing our work based on constructive suggestions. **Answer for Questions:** Please refer to our answer for **Weaknesses**. **Answer for Limitations:** We appreciate the reviewer's suggestion and will incorporate further discussions regarding the ethical implications of enhancing the effectiveness of DRAs in our manuscript. The potential adverse societal consequences of GAN-based DRAs largely revolve around the exposure of private data. When the attacker successfully reconstructs the initial data supplied to the edge-side DL model, it can result in privacy breaches and the propagation of harmful false information. We anticipate that our proposed attacks will spotlight the significance of safeguarding privacy in split inference systems and encourage the advancement of more robust defense mechanisms. *References:* Kindly refer to the comprehensive global response provided earlier. --- Rebuttal Comment 1.1: Comment: If you have any further concerns or questions, please feel free to contact us. We greatly value your feedback. --- Rebuttal Comment 1.2: Comment: Thanks for the detailed response. I would like to raise my rating. The manuscript can be further improved by incorporating various suggestions by all the reviewers. 
--- Reply to Comment 1.2.1: Comment: We genuinely appreciate your feedback on our rebuttal. We will improve our paper by integrating the suggestions from all the reviewers.
Rebuttal 1: Rebuttal: The **response.pdf** contains our supplementary experiments. Here is our detailed explanation of the experiment in the pdf: **Detailed explanation of response.pdf/Figure-4**: The Z space is entangled, signifying that even a small change in the latent code within the Z space can yield a large change within the representation space. In other words, a continuous change in the latent code results in an abrupt change in facial features. Conversely, within the relatively disentangled W/W+ space, continuous changes in the latent code lead to continuous changes in facial features. The characteristics of different latent spaces provide them with distinct advantages. Take the face reconstruction attack as an example: in the optimization process of the Z space search, gradient descent makes the facial features corresponding to the latent code closer to the target features; that is, it makes the image generated by StyleGAN closer to the private inference image. Specifically, we use PCA to reduce the dimensionality of the latent codes within latent spaces and visualize them. We use W space instead of W+ space in this example because the latent code of W space has a lower dimensionality (1\*512 compared to 10\*512 of W+ space), which makes dimensionality reduction easier. As shown in response.pdf/Figure-4(a), we initialize z1 and z2 from a normal distribution and perform Z space search to reconstruct the target image. The grey data points correspond to other samples obtained from the normal distribution after dimensionality reduction, whereas the red star symbolizes the reconstructed target image. It should be noted that the target image in this example is selected to be an image generated by StyleGAN, because this ensures that the target latent code is accurate. It can be seen that after the Z space search, the reconstruction results starting with z1 and z2 closely resemble the target image.
However, after dimensionality reduction, they appear in three different regions. This is due to the entanglement of the Z space: the same combination of facial features may correspond to multiple different latent codes. We record the latent codes during the Z space search, map them to W space through StyleGAN's mapping network, reduce the dimensionality again, and visualize the results. As shown in response.pdf/Figure-4(b), the optimized trajectories within W space corresponding to z1 and z2 finally converge to the target latent code. This convergence can be attributed to the disentanglement of the W space, ensuring that the same combination of facial features corresponds to the same latent code. As shown in response.pdf/Figure-4(c), for distinct initializations w1, w2, w3, and w4, their respective W space searches converge together to the target latent code. *References*: [1] Pasquini, Dario, Giuseppe Ateniese, and Massimo Bernaschi. "Unleashing the tiger: Inference attacks on split learning." Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 2021. [2] Singh, Abhishek, et al. "Disco: Dynamic and invariant sensitive channel obfuscation for deep neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [3] Vepakomma, Praneeth, et al. "NoPeek: Information leakage reduction to share activations in distributed deep learning." 2020 International Conference on Data Mining Workshops (ICDMW). IEEE, 2020. [4] Kahla, Mostafa, et al. "Label-only model inversion attacks via boundary repulsion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [5] Chen, Si, et al. "Knowledge-enriched distributional model inversion attacks." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [6] Shuai Yang, Liming Jiang, Ziwei Liu, and Chen Change Loy. "StyleGANEX: StyleGAN-based manipulation beyond cropped aligned faces." In ICCV, 2023.
[7] Sauer, Axel, Katja Schwarz, and Andreas Geiger. "Stylegan-xl: Scaling stylegan to large diverse datasets." ACM SIGGRAPH 2022 conference proceedings. 2022. [8] https://www.cs.toronto.edu/~kriz/cifar.html [9] Luke Nicholas Darlow, Elliot J. Crowley, Antreas Antoniou, and Amos J. Storkey. "CINIC-10 is not ImageNet or CIFAR-10." CoRR abs/1810.03505, 2018. Pdf: /pdf/4cfab92715d99cc11718467f79a73e5fc753dd1c.pdf
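The PCA step described in the explanation of response.pdf/Figure-4 — reducing recorded latent codes to 2D before plotting — can be sketched roughly as follows. This is a minimal illustration, not the authors' code: `mapping_network` is a hypothetical random stand-in for StyleGAN's pretrained Z-to-W mapping MLP, and the trajectory is a random walk standing in for the recorded gradient-descent iterates.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the top-2 principal components (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)

# Hypothetical stand-in for StyleGAN's Z -> W mapping network
# (in practice a pretrained 8-layer MLP; here a fixed random nonlinearity).
A = rng.normal(size=(512, 512)) / np.sqrt(512)
def mapping_network(z):
    return np.tanh(z @ A)

# Latent codes recorded along a Z-space search trajectory
# (a random walk standing in for the optimization iterates).
z_start = rng.normal(size=512)
z_traj = z_start + 0.05 * rng.normal(size=(50, 512)).cumsum(axis=0)

# Reduce both the Z-space trajectory and its W-space image to 2D for plotting.
z_2d = pca_2d(z_traj)
w_2d = pca_2d(mapping_network(z_traj))
print(z_2d.shape, w_2d.shape)  # (50, 2) (50, 2)
```

In the rebuttal's actual figure, the latent codes from multiple searches (z1, z2, ...) would be stacked and projected jointly, so that trajectories landing in different regions of Z space but converging in W space become visible.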
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Structured Prediction with Stronger Consistency Guarantees
Accept (poster)
Summary: This work studies $\mathcal{H}$-consistency of surrogate losses for structured prediction. The authors show that several classic surrogate losses are not Bayes-consistent and thus admit no non-trivial $\mathcal{H}$-consistency bounds. They propose two families of $\mathcal{H}$-consistent losses as extensions of existing losses, together with algorithms for two special cases of loss. Strengths: The manuscript is well-written with few typos. The theoretical results are sound and novel. Practical optimization methods are provided to show that the proposed losses are not only consistent but also computationally feasible. Weaknesses: I don't see obvious weaknesses in the manuscript. A possible drawback might be the lack of empirical results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have a few minor concerns as follows. 1. The inconsistency results for each individual classic loss in Section 3 are not new but the authors seem to give a novel unified view of four losses. Does this include Fenchel-Young losses? 2. The consistency bounds in Theorem 6 and 8 depend on a quantity defined by the minimizability gap, which is difficult or impossible to minimize or estimate. I understand that a non-asymptotic bound is desirable but isn't it implicit since we may not obtain an estimated upper bound for a special case unlike what we can do with a generalization bound based on Rademacher complexities? 3. Can you discuss and compare $\mathcal{H}$-consistency with Fisher consistency since there is a requirement for Fisher consistency in terms of a comparison inequality? (Nowak et al., 2020) (Blondel, 2019) 4. Section E presents proofs for each of the four structured comp-sum losses individually but Theorem 6 is stated for general structured comp-sum losses. Does the conclusion hold or did I miss something? 5. Line 270, $\bar{\ell}_i$ instead of $\ell_i$. 6. I am confused with the definitions about Markovian features in lines 287-294. Specifically, what's the definition of $p$?
Based on the context, I can infer that $\mathbf{\Psi}$, $\mathbf{\psi}$, $\mathbf{\Psi}_k$, $\mathbf{\psi}_k$, $\tilde{\psi}$ all refer to a vector in $\mathbb{R}^d$. But what is the number of padding zeros in the definition of $\tilde{\mathbf{\Psi}}_k$ in line 290? 7. Due to the theoretical nature of this work, experiments may not be necessary. But is it possible to compare the proposed consistent losses with other consistent losses empirically (Nowak et al., 2020) (Blondel, 2019)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are mentioned together with introduction of methods. Potential negative societal impacts are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Questions:** **1. The inconsistency results for each individual classic loss in Section 3 are not new but the authors seem to give a novel unified view of four losses. Does this include Fenchel-Young losses?** **Response:** The formulation presented in Section 3 differs from a general formulation of Fenchel-Young losses (or Bregman divergence losses), even though they both encompass specific instances. For example, both include the hinge loss of support vector machines. **2. The consistency bounds in Theorem 6 and 8 depend on a quantity defined by the minimizability gap, which is difficult or impossible to minimize or estimate. I understand that a non-asymptotic bound is desirable but isn't it implicit since we may not obtain an estimated upper bound for a special case unlike what we can do with a generalization bound based on Rademacher complexities?** **Response:** That’s a natural question. Let us first mention that the minimizability gap can be crudely upper bounded by the approximation error. But, more significantly, although we have not detailed it in this paper, the minimizability gap can in fact be upper bounded in terms of useful terms depending on the magnitude of the parameter space. Our $H$-consistency bounds can be used to derive finite sample learning bounds for a hypothesis set $H$ expressed in terms of the Rademacher complexity of the hypothesis set and the loss function and an upper bound on the minimizability gap for the surrogate loss. We will elaborate on that in the final version. **3. Can you discuss and compare $H$-consistency with Fisher consistency since there is a requirement for Fisher consistency in terms of a comparison inequality? 
(Nowak et al., 2020) (Blondel, 2019)** **Response:** With Nowak et al.'s (2020) definition, Fisher consistency coincides with the specific case of $H$-consistency, where $H$ is the family of all measurable functions. However, when dealing with a constrained hypothesis set $H$, the comparison inequality does not yield an $H$-consistency bound that relates the surrogate estimation loss to the target estimation loss in terms of minimizability gaps. **4. Section E presents proofs for each of the four structured comp-sum losses individually but Theorem 6 is stated for general structured comp-sum losses. Does the conclusion hold or did I miss something?** **Response:** Theorem 6 represents a consolidated result for the four structured comp-sum losses, with the proofs for each being presented separately in Section E. We will clarify this distinction in the final version. **5. Line 270, $\bar \ell_i$ instead of $\ell_i$.** **Response:** Thanks, we will correct that. **6. I am confused with the definitions about Markovian features in lines 287-294. Specifically, what's the definition of $p$? Based on the context, I can infer that $\Psi$, $\psi$, $\Psi_k$, $\psi_k$, $\tilde{\psi}$ all refer to a vector in $\mathbb{R}^d$. But what is the number of padding zeros in the definition of $\tilde{\Psi}_k$ in line 290?** **Response:** Each $\Psi_k$ corresponds to a Markovian feature vector based only on $k$-grams, $p$ is the largest $k$. We will fully clarify the notation in the final version. **7. Weaknesses: I don't see obvious weaknesses in the manuscript. A possible drawback might be the lack of empirical results.** **Due to the theoretical nature of this work, experiments may not be necessary. But is it possible to compare the proposed consistent losses with other consistent losses empirically (Nowak et al., 2020) (Blondel, 2019)?** **Response:** Thank you for the suggestion. 
We will seek to add such experiments, empirically comparing with the consistent losses of previous work, in the final version. But, as you have mentioned, our paper mainly focuses on the theoretically principled surrogate losses for structured prediction based on $H$-consistency bounds. While we have demonstrated that the minimization of several proposed loss functions, such as the structured logistic loss, benefits from efficient algorithms, we recognize the importance of further exploration. As such, we intend to dedicate future work to an extensive empirical analysis and the development of more universally applicable algorithmic solutions to encompass a broader family of surrogate loss functions as well as a diverse range of target losses. --- Rebuttal Comment 1.1: Comment: Thanks for your response, which has addressed all my concerns. Please make sure Questions 2 and 4 are addressed in your revision, which I believe should be helpful to readers. I will maintain my score and vote for acceptance.
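For readers following this exchange, the generic shape of the $H$-consistency bounds under discussion can be sketched as follows. This is the standard formulation from the $H$-consistency literature, with notation assumed here rather than copied from the paper:

```latex
% Sketch (standard form; notation assumed): \mathcal{R}_L(h) is the
% generalization error of h for loss L, \mathcal{R}_L^*(\mathcal{H}) its
% infimum over \mathcal{H}, and
%   \mathcal{M}_L(\mathcal{H})
%     = \mathcal{R}_L^*(\mathcal{H})
%       - \mathbb{E}_x\bigl[\inf_{h \in \mathcal{H}}
%           \mathbb{E}_{y \mid x}[L(h, x, y)]\bigr]
% the minimizability gap. For a non-decreasing f with f(t) \to 0 as
% t \to 0^+, an H-consistency bound states, for all h \in \mathcal{H}:
\[
  \mathcal{R}_{L}(h) - \mathcal{R}_{L}^{*}(\mathcal{H}) + \mathcal{M}_{L}(\mathcal{H})
  \;\le\;
  f\!\left( \mathcal{R}_{L_{\mathrm{sur}}}(h)
            - \mathcal{R}_{L_{\mathrm{sur}}}^{*}(\mathcal{H})
            + \mathcal{M}_{L_{\mathrm{sur}}}(\mathcal{H}) \right).
\]
```

When $\mathcal{H}$ is the family of all measurable functions, both minimizability gaps vanish and such a bound implies Bayes-consistency, which is why the rebuttal can describe $H$-consistency as the stronger guarantee.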
Summary: This work extensively studies surrogate losses for structured predictions supported by H-consistency bounds. It first shows several negative results for some widely used surrogate losses in structured predictions: no non-trivial H-consistency bound can be derived. Then it provides two new families of surrogate losses that are supported by H-consistency bounds (which imply Bayes consistency): structured comp-sum losses and structured constrained losses. Finally, efficient algorithms are proposed for some of these new surrogate losses. Strengths: This work presents a solid theoretical study in the field of structured predictions. First, the paper shows that structured max losses (which include loss functions associated with several prominent structured prediction algorithms in the literature) are not Bayes consistent, which implies they cannot be supported by H-consistency bounds either (Theorem 4). Then, it shows that voted Conditional Random Field losses (which have been presented in some works of structured predictions) are not Bayes consistent either (Theorem 5). Moving to positive results, the paper provides two new families of surrogate losses that are supported by H-consistency bounds (which imply Bayes consistency): structured comp-sum losses and structured constrained losses (Theorem 6, 8, Corollary 7, 9). Finally, it presents efficient algorithms for minimizing several of the proposed surrogate losses. The level of originality exhibited in the research was noticeable, demonstrating a study (H-consistency bounds in structured predictions) that has not been extensively explored before. The quality of the work is good, with meticulous comparisons and contrasts with previous works (including some prominent ones) in structured predictions (Line 134-145, 157-162) and detailed analysis, supporting the authors' arguments convincingly. The paper is well-written at large (some suggestions for improvements below). 
The work is significant; it offers valuable insights into an under-researched topic (H-consistency bounds in structured predictions), and the implications of the findings could stimulate new directions for future research in structured predictions. Weaknesses: 1. The current manuscript does not have a conclusion section. Given that the results are already impressive, the authors should have shortened Section 6 and added a conclusion section to improve the readability further. 2. Because the authors showed the gradient of the structured logistic loss can be computed efficiently and claimed practical use, the authors should consider including some experiments for demonstration. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. When you say "no non-trivial H-consistency bounds can be derived", what are the "trivial H-consistency bounds"? 2. How are the results of this work related to those in several works by Ciliberto et al., 2016, 2019, and 2020? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I could not find the location where the limitations of the work were explicitly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weaknesses:** **1. The current manuscript does not have a conclusion section. Given that the results are already impressive, the authors should have shortened Section 6 and added a conclusion section to improve the readability further.** **Response:** Thank you for the suggestion; we will definitely add a conclusion section. **2. Because the authors showed the gradient of the structured logistic loss can be computed efficiently and claimed practical use, the authors should consider including some experiments for demonstration.** **Response:** We will take into account your suggestions when preparing the final version. Our paper mainly focuses on the theoretically principled surrogate losses for structured prediction based on $H$-consistency bounds. While we have demonstrated that the minimization of several proposed loss functions, such as the structured logistic loss, benefits from efficient algorithms, we recognize the importance of further exploration. As such, we intend to dedicate future work to an extensive empirical analysis and the development of more universally applicable algorithmic solutions to encompass a broader family of surrogate loss functions as well as a diverse range of target losses. **Questions:** **1. When you say "no non-trivial H-consistency bounds can be derived", what are the "trivial H-consistency bounds"?** **Response:** We refer to a bound as in (4) where $f(t)$ does not tend to zero as $t$ approaches zero, for example because it is lower bounded by a constant. In such cases, the bound becomes uninformative about the left-hand side even when the argument of $f$ is small, and it does not even guarantee Bayes-consistency when $H$ represents the family of all measurable functions. We will further clarify this matter in the final version. **2.
How are the results of this work related to those in several works by Ciliberto et al., 2016, 2019, and 2020?** **Response:** Ciliberto et al. [2016] focused on a least squares surrogate loss function and corresponding framework. In this framework, the structured prediction problem is cast as a regression problem. They derived a regularization approach to structured prediction from the least squares surrogate loss and proved the Bayes-consistency of that approach. Ciliberto et al. [2019] focused on a local structure-adapted framework for structured prediction. They proposed a novel structured prediction algorithm that adaptively leverages locality in the learning problem. Ciliberto et al. [2020] developed a general framework for structured prediction based on implicit embedding. Their methods lead to polyhedral-type surrogate losses that benefit from Bayes-consistency. On the other hand, our work presents an extensive study of surrogate losses for structured prediction supported by $H$-consistency bounds. Different from the surrogate losses studied in previous work, the formulations of our proposed surrogate losses, including structured comp-sum losses and structured constrained losses, are completely novel and do not cast structured prediction problems as a regression problem. Furthermore, we prove stronger consistency guarantees that imply Bayes-consistency for these newly proposed families of surrogate losses. We will further clarify and detail these comparisons in the final version. **Limitations:** **I could not find the location where the limitations of the work were explicitly discussed.** **Response:** Thank you for pointing it out. We will add a separate discussion on potential limitations in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I have also read the other reviews. I stand by my initial rating.
Summary: In this paper, the authors study surrogate losses for structured prediction problems. They show that surrogate losses proposed in previous work are not Bayes-consistent, i.e. a sequence of hypotheses which minimises the surrogate loss may not minimise the target loss. They then introduce two families of surrogate losses, namely structured comp-sum losses and structured constrained loss functions, which generalise two corresponding existing families to the structured prediction setting, and show that these admit $\mathcal{H}$-consistency bounds, a stronger form of consistency that also implies Bayes-consistency. Last, they derive practically efficient algorithms for minimising the surrogate losses they have introduced, under certain settings. Strengths: __Paper highlights shortcomings of existing theory:__ The authors prove that existing surrogate loss functions, namely the structured max loss and the structured voted conditional random field loss, are not Bayes-consistent. Therefore, minimising one of these surrogates is not guaranteed to also minimise the corresponding target losses. This result is likely of interest to the wider community. __Introduction of new structured losses supported by theory:__ This work introduces two classes of surrogate losses, namely the structured comp-sum and the structured constrained loss, which they show to be $\mathcal{H}$-consistent. This implies Bayes consistency, a guarantee that was missing from existing structured surrogate losses. In fact, $\mathcal{H}$-consistency is a stronger guarantee than Bayes-consistency, since, as the authors point out, it is not asymptotic and accounts for the hypothesis set $\mathcal{H}$ in question.
__Theoretically sound and technically precise:__ Although I was not able to check the entirety of the derivations in the Appendix (I did not verify the proofs of Theorem 6 and Theorem 8 closely enough to be fully confident about these), other parts which I did check closely looked sound and technically precise to me. Also, generally, I found the paper to be carefully written and I also found that the notation was precise and clear (though I thought Section 6 could have been better organised to improve readability, and that the paper could have benefited from a conclusion section). Weaknesses: __The exposition of the paper could be improved:__ While I found that the notation and definitions in the paper are clear and precise, I found that from Section 6 onwards, the paper was tougher to follow, and the exposition was far denser and less clear than in the preceding sections. I think that the paper could benefit by organising the key results in lemmas and propositions, and deferring some of the details from this section to the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see a list of questions and suggestions for improvement I have about the paper, organised roughly in order of appearance: - __Line 83:__ The authors say “naturally $\ell$ is symmetric”. Does it have to be symmetric, or are they assuming it to be symmetric? - __Lines 95-97:__ The phrasing here could perhaps be clarified to “which guarantees that minimizing the generalization error for a surrogate loss $L_{\text{sur}}$ over $\mathcal{H}_{\text{all}}$ also leads to the minimization of generalization error for the target loss $L$.” - __Definition 1:__ This appears to have typos. First $f_n$ appears in eq. 3 but it is not defined. Second, $h$ appears in the definition but not in eq. 3. Do the authors mean $h_n$ in the definition and $h_n$ instead of $f_n$ in eq. 3? - __Definition 2:__ How is $\mathcal{H}$ defined (in line 107)?
Is this a subset of the full hypothesis set? If so, the authors could clarify this by saying “Given a subset of the hypothesis class $\mathcal{H} \subseteq \mathcal{H}_{\text{all}}$, a surrogate loss…”. - __Equation below line 116:__ Should the expression on the right hand side have the conditional distribution $p(y | x)$ rather than the joint $p(x, y)$? - __Comment on notation:__ In line 117 the authors could use the notation $\mathcal{C}_{L, \mathcal{H}}^*(x)$ to make this consistent with the notation under eq. 79. - __Line 122:__ Typo, “hypothesis set” not “hypothesis sets”. - __Line 129:__ Typo, the “possible predictions” instead of “possible prediction”. - __Lemma 3:__ Similarly to the above comment for line 116, should the lemma involve the conditional $p(y | x)$ rather than the joint $p(x, y)$? Furthermore, the authors don’t seem to be using Lemma 3 (or referring to it) in the rest of the main text. In this case, I would suggest removing it and deferring it to the appendix. - __Theorem 4:__ I have two questions regarding Theorem 4: - First, in line 650, Appendix C, you refer to $h^*$ as “the Bayes classifier” of the structured max loss. I think a more accurate statement would be that $h^*$ is “__a__ Bayes classifier”, because there exist many choices of $h$ which minimise the generalisation error. For example, consider $h^*(x, 1) = h^*(x, 2) = 1$ and $h^*(x, y) = 0$ for all other $y > 2$. This also minimises the generalisation error, and coincides with the Bayes classifier of the target loss. - Second, the proof of this result seems to highlight a way in which the problematic classifiers (i.e. the classifiers which are optimal for the surrogate loss, but not for the target loss) are far fewer than those which are optimal for the target loss.
In particular, is it correct to say that the set of classifiers which optimise the surrogate loss but not the target loss are those for which $h(x, y_1) = h(x, y_2) = \dots = h(x, y_n)$, and this set is much smaller than the set of classifiers which optimise both the surrogate as well as the target loss (that is, the set of classifiers for which $h(x, y_i) > h(x, y_3), \dots, h(x, y_n)$ for either $i = 1$ or $i = 2$). Can the authors comment on this point? For example, could one learn $h$ with an algorithm that involves a randomisation step (e.g. randomised initialisation), that results in convergence to the Bayes-optimal predictor of the target loss with high probability? - __Line 663:__ In Appendix D, in the proof for Theorem 5, the authors introduce the quantity $\Phi_y$. Is this simply a function from $\mathcal{Y}$ to $\mathbb{R}$? - __Theorem 5:__ The proof of Theorem 5 relies on an argument (line 663) where the authors consider a loss $\ell(y’, y)$ that decouples (i.e. factorises) into a term that depends solely on $y$ and another term which depends solely on $y’$. From this, they show that the VCRF loss is not Bayes-consistent. Some common losses, such as the zero-one loss $\ell_{0-1}$, do not decompose in this way. When constrained to such losses, is the VCRF loss Bayes-consistent? How critical is the factorisation requirement in the proof? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In my assessment, I do not see any substantial limitations of this work which have not been addressed by the authors. However, I would appreciate the authors’ clarification on the questions I have raised above regarding the argument in the proofs of Theorems 4 and 5. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and suggestions on improving the readability. We will take them all into account when preparing the final version. Below please find responses to specific questions. **Weaknesses:** **The exposition of the paper could be improved: While I found that the notation and definitions in the paper are clear and precise, I found that from Section 6 onwards, the paper was tougher to follow, and the exposition was far denser and less clear than in the preceding sections. I think that the paper could benefit from organising the key results in lemmas and propositions, and deferring some of the details from this section to the appendix.** **Response:** Thank you for your suggestions. We recognize that Section 6 may seem dense due to the necessity of introducing new notation and technical tools for efficient gradient computation and inference in structured prediction. We will follow your suggestion regarding the organization and will simplify our presentation to make it more accessible to readers. The addition of an extra page in the final version will also enable us to include more discussions in the main body, further enhancing the clarity and depth of our work. **Questions:** **1. Line 83: The authors say “naturally $\ell$ is symmetric”. Does it have to be symmetric, or are they assuming it to be symmetric?** **Response:** Yes, $\ell$ is assumed to be symmetric in our analysis. We will clarify that. We meant that this is a natural assumption since all instances of $\ell$ that we are familiar with in structured prediction admit this property. **2. Lines 95-97: The phrasing here could perhaps be clarified to “which guarantees that minimizing the generalization error for a surrogate loss $L_{\mathrm{sur}}$ over $H_{\mathrm{all}}$ also leads to the minimization of generalization error for the target loss $L$.”** **Response:** Thank you for the suggestion. We will clarify that in the final version. **3. 
Definition 1: This appears to have typos. First $f_n$ appears in eq. 3 but it is not defined. Second, $h$ appears in the definition but not in eq. 3. Do the authors mean $h_n$ in the definition and $h_n$ instead of $f_n$ in eq. 3?** **Response:** Thank you for pointing that out. You are indeed correct: $h$ and $f_n$ should be corrected to be $h_n$ in the definition. We will fix that in the final version. **4. Definition 2: How is $H$ defined (in line 107)? Is this a subset of the full hypothesis set? If so, the authors could clarify this by saying …** **Response:** Yes, $H$ is a subset of the family of all measurable functions. We will clarify that following your suggestion. **5. Equation below line 116: Should the expression on the right hand side have the conditional distribution $p(y | x)$ rather than the joint $p(x,y)$?** **Response:** Sorry for the confusion. We use the notation $p(x, y)$ to denote the conditional distribution as mentioned in line 115. We will make it more clear in the final version. **6. Comment on notation: In line 117 the authors could use the notation $\mathcal{C}^{*}_{L,H}(x)$ to make this consistent with the notation under eq. 79.** **Response:** Thanks, we will take your suggestions into account. **7. Line 122: Typo, “hypothesis set” not “hypothesis sets”.** **8. Line 129: Typo, the “possible predictions” instead of “possible prediction”.** **Response:** Thank you, we will correct these typos. **9. Lemma 3: Similarly to the above comment for line 116, should the lemma involve the conditional $p(y | x)$ rather than the joint $p(x,y)$? Furthermore, the authors don’t seem to be using Lemma 3 (or referring to it) in the rest of the main text. In this case, I would suggest removing it and deferring it to the appendix.** **Response:** Sorry for the confusion. We use the notation $p(x, y)$ to denote the conditional distribution as mentioned in line 115. We will make it more clear in the final version. 
We will consider moving Lemma 3 into the appendix following your suggestion. **10. Two questions regarding Theorem 4.** **Response:** With regard to your first question, you are indeed correct; the use of "a Bayes classifier" is more fitting in this context. As for your second question, it is definitely an intriguing one! You are right that in the given example, the set of classifiers that optimize the surrogate loss without optimizing the target loss is much smaller than the set that optimizes both. However, the applicability of this observation to general problems remains unclear. The idea of randomization in this context is indeed natural and potentially fruitful. We have explored a similar randomization idea in a different context without success, but it is certainly a valuable avenue for further research. **11. Line 663: In Appendix D, in the proof for Theorem 5, the authors introduce the quantity $\Phi_y$. Is this simply a function from $\mathcal{Y}$ to $\mathbb{R}$?** **Response:** Yes, that’s right. We will further clarify that. **12. The proof of Theorem 5.** **Response:** That's a great question. In Theorem 5, we examine Bayes consistency within the context of structured prediction. This refers to the consistency property that must be maintained across any target loss function, as described in Definition 1. In our current proofs, we use the decoupling property of the target loss as a convenient technical assumption, facilitating the demonstration that the surrogate loss and target loss lead to different Bayes classifiers. We believe that our proof can be extended beyond this assumption to encompass a more extensive family of target loss functions. However, it is definitely intriguing to investigate the consistency question further, especially when restricted to specific target loss functions. --- Rebuttal 2: Title: Thank you for your response Comment: I would like to thank the authors for their response to my review. 
I have read through this and was pleased to see that the authors found several of my suggestions useful and will incorporate them in their paper. In addition, I appreciated their clarification on my more technical questions regarding Theorems 4 and 5. Currently, I maintain a positive view on this work, recommending it for acceptance. However, I would refrain from increasing my score, as I lack extensive knowledge of this area, and also in light of certain consistency results presented in this work having appeared in previous work (as pointed out by the other reviewers). I therefore maintain my original score.
Summary: * The paper studies (Fisher, or “Bayes”) consistency in structured prediction. In particular, the focus is on non-asymptotic, quantitative bounds for common and not-so-common (i.e., new) surrogate losses, which require different proof techniques and an approach based on “H-consistency”. * Thm 4, Thm 5 provide a few of the main results in the paper, showing that commonly used losses in structured prediction are not Bayes-consistent * Thm 6, Thm 8 (and companion corollaries) provide the other main results in the paper, showing that a few new-ish but less tractable (though still convex) losses are H-consistent and Bayes-consistent — the losses in question here are the so-called “comp-sum” and “structured constrained” losses * Finally, lem 10 shows that the latter (comp-sum) losses may be computed in polynomial time — revealing, however, something of a natural statistical/computational trade-off, i.e., the consistent losses evidently require more compute than the non-consistent ones Strengths: * The paper gives a quite detailed study on the statistical + computational aspects of H-consistency, for commonly (and not-so-commonly) used losses in structured prediction * In particular, the paper highlights losses that are Bayes-consistent (though coming at the price of computability) Weaknesses: I have just a couple questions / comments: * It seems the content of thms 4, 5 — on the lack of consistency — is also present in previous works (e.g., those by Ciliberto et al.). Can you please elaborate on why these results in your paper are novel? * Usually Fisher consistency is defined relative to a target loss function. What is the target loss function here? * I think the primary avenue the paper could be (significantly) strengthened is via an empirical analysis — the authors could illustrate the consistency (and computational costs) of the comp-sum, structured constrained losses through real-world examples. That would really tie together the whole message of the paper. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No they have not, but I’m not sure that’s necessary here. This is an abstract / general theory paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **1. It seems the content of thms 4, 5 — on the lack of consistency — is also present in previous works (e.g., those by Ciliberto et al.). Can you please elaborate on why these results in your paper are novel?** **Response:** Theorem 4 provides negative results for a broad and generalized family of loss functions, collectively referred to as the structured max loss. This extends the scope of existing research, as previous works had only addressed the inconsistency of specific instances within the structured max loss category, such as Max-Margin Markov Networks (M3N) (e.g., studies by Osokin et al., Ciliberto et al. and Nowak et al.). Theorem 5 further elaborates on the negative results for the voted conditional random field, a family of loss functions that integrates the target loss $\ell(y’, y)$ within its formulation. To the best of our knowledge, no prior studies in the literature have explored the consistency of this specific formulation. The most closely related discussions center around a specialized instance of the multi-class logistic loss (also referred to as the conditional random field in that context), in which $\ell(y’, y)$ disappears within the framework of the voted conditional random field. The previous works by Osokin et al., Ciliberto et al. and Nowak et al. point out that the multi-class logistic loss cannot be consistent in structured prediction due to the absence of the target loss function within its formulation. Instead, our result shows that, even when integrating the target loss $\ell(y’, y)$ within its formulation, the voted conditional random field cannot be consistent. **2. Usually Fisher consistency is defined relative to a target loss function. 
What is the target loss function here?** **Response:** The target loss function is the one described in lines 82 - 90, where $\ell$ is a symmetric loss defined over $\mathcal Y \times \mathcal Y$. For example, for sequences, $\ell$ may be the Hamming loss or some other rational loss. The surrogate losses typically adopted in structured prediction are expressed in terms of $\ell$. For example, for StructSVM, the surrogate loss is defined by $\mathsf L^{\text{StructSVM}} (h, x, y) = \max_{y' \neq y} \max \bigg\{ 0, \ell(y', y) - (h(x, y) - h(x, y')) \bigg\}$. Our study of $H$-consistency bounds is general: we make no other assumption about $\ell$ beyond symmetry. **3. I think the primary avenue the paper could be (significantly) strengthened is via an empirical analysis — the authors could illustrate the consistency (and computational costs) of the comp-sum, structured constrained losses through real-world examples. That would really tie together the whole message of the paper.** **Response:** Thank you for the suggestion. Our paper mainly focuses on theoretically principled surrogate losses for structured prediction based on $H$-consistency bounds. While we have demonstrated that the minimization of several proposed loss functions, such as the structured logistic loss, benefits from efficient algorithms, we recognize the importance of further exploration. As such, we intend to dedicate future work to an extensive empirical analysis and the development of more universally applicable algorithmic solutions to encompass a broader family of surrogate loss functions as well as a diverse range of target losses. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you very much for the response. I've gone through it and will maintain my score.
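For concreteness, the quoted StructSVM surrogate can be evaluated numerically. Below is a small illustrative sketch (our own, not the authors' code; all names are hypothetical), instantiating the target loss $\ell$ with the Hamming loss over short binary sequences:

```python
def hamming(y1, y2):
    # Symmetric target loss: fraction of mismatched positions.
    return sum(a != b for a, b in zip(y1, y2)) / len(y1)

def structsvm_loss(scores, y, target_loss=hamming):
    # max over y' != y of max(0, ell(y', y) - (h(x, y) - h(x, y'))),
    # where `scores` maps each candidate output y' to h(x, y').
    return max(
        max(0.0, target_loss(y_prime, y) - (scores[y] - scores[y_prime]))
        for y_prime in scores
        if y_prime != y
    )

# Scores h(x, .) for three candidate outputs; true label y = (0, 0).
# The margin to (0, 1) is smaller than its Hamming loss, so the loss is positive.
scores = {(0, 0): 2.0, (0, 1): 1.8, (1, 1): 0.5}
loss = structsvm_loss(scores, (0, 0))
```

Here the candidate `(0, 1)` has Hamming loss 0.5 but a margin of only 0.2, so the surrogate loss is about 0.3, even though the true label is the top-scoring output.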
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Accept (poster)
Summary: The paper provides a connection of the convex program for gated ReLU networks to a multiple kernel learning model with a weighted data masking feature map. Additionally, the paper provides a theoretical analysis of the predictive error of the proposed kernel algorithm. Strengths: The paper provides a new framework that employs multiple kernel learning, which is a convex reformulation of the neural network training process; the new framework is better able to explain the remarkable performance of neural networks than the neural tangent kernel perspective, which requires the infinite width limit assumption. Weaknesses: As I am not familiar with multiple kernel learning and group lasso, I feel that I am not able to fully identify the weaknesses in this paper. I will give a borderline rating with a low confidence score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
null
Summary: This work considers a convex formulation of a finite-width regularised two layer ReLU network and interpretations as multiple kernel learning. This then is related to the neural tangent kernel. 
The convex formulation considers cones of parameters with fixed activation, along with a bound of $O(n^d)$ on the number of such cones. The gated ReLU further decouples the activation region from the linear function. A main conclusion is that while the NTK of the gated ReLU does not take labels into account, optimal Multiple Kernel Learning can. Strengths: * The work deals with a relevant problem of understanding the optimization problem of neural networks beyond the infinite width setting. * The work gives an overview of different perspectives about ReLU networks, particularly previous work on convex programs, group lasso, multiple kernel learning, and NTK, which are interesting as they could facilitate further perspectives. * The work discusses convex formulations of gated ReLU networks from a perspective of multiple kernel learning. Weaknesses: * The article indicates that previous works have considered training dynamics with infinitesimally small learning rates on networks in the infinite width limit. It would be appropriate to comment on works covering small width [1,2], or works that discuss NTK more independently of the width [3], or works that investigate the dynamics of finite-width networks using NTK [4]. 

 * [1] E et al. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. 
 * [2] Su and Yang. On learning over-parameterized neural networks: A functional approximation perspective.
 * [3] Liu et al. On the linearity of large non-linear models: when and why the tangent kernel is constant.
 * [4] Bowman and Montufar. Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime.
* I missed a more explicit discussion of how the presented work is significantly different from, or significantly advances on, prior works on convex formulations of ReLU networks, particularly the sequence of works [1,2,…].
 * [1] Ergen and Pilanci. Convex geometry and duality of over-parametrized neural networks.
 * [2] Pilanci and Ergen. Neural networks are convex regularizers: exact polynomial time convex optimization formulations for two layer networks.
* A more explicit connection between the presented analysis and the result of training a neural network would have made this a stronger contribution. Particularly, the conversion of gated ReLU to ReLU should be discussed more explicitly in the main part of the document. The theoretical discussion of fixing the kernel or improving the weights seems to focus on the training error, with a more explicit discussion of generalization error missing. Figure 1 and the first experiments seem to be on toy examples. The experiments on UCI data compare NTK vs iteratively reweighted least squares but seem to be missing a comparison with the trained network, as well as reported training accuracy. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Please clarify the intended notation for w^{(+)} and w^{(-)} or why (11) should have the same expressive power as (1). Concretely, (11) is a signed sum of 2m ReLUs, whereas (1) is a sum of just m ReLUs. For d=2 and m=1, (11) can represent functions which have 4 linear regions, whereas any function represented by (1) has at most 2 linear regions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: * The work seems to consider minimal complete gate sets. * The work does not appear to include observations into overfitting or generalisation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. We have addressed the issues related to generalization in the global response. Please see our responses to your other specific queries below. We greatly appreciate the reviewer pointing out the references that study the NTK in the finite but large or small width regimes. We will include a discussion of this related work in the final revision of our work. However, we would like to point out an important distinction between our kernel characterization and the works which study the NTK in various width regimes. Specifically, based on our understanding, the works you reference still analyze the NTK in the regime where the weights do not move much from initialization (the lazy regime), and the kernel is not learned from the data. On the other hand, our MKL kernel characterization does not require the weights to stay close to initialization, and the optimal kernel is learned from the data. The significant contribution in relation to prior works on convex formulations is to connect these reformulations to the MKL perspective, which further reveals a connection to the NTK theory, allowing us to show that the convex reformulation leads to a new kernel characterization of gated ReLU networks that applies beyond the lazy regime (which is required for the NTK theory to hold). $\textbf{Training accuracy results}$ In the paper, we present experimental results that showcase the $\textit{generalization}$ performance of the gated ReLU model compared to the NTK. For completeness, we also present here the $\textit{training}$ accuracies for the UCI datasets, which we will add to the final revision of the paper. 
Note that, as a consequence of Theorems 4.3 and 5.1, we expect that our kernel always outperforms the NTK on the training set, which is exactly what we observe in the table below: **We note that our aim is to minimize the regularized training objective, which includes not only the training loss but also the regularization term. During validation, we performed a grid search over the regularization coefficient $\beta$ and included the results with the best test accuracies (generalization performance) for both models. Therefore, it is normal to have some training accuracies below 100%.**

**Training accuracies by training on standard $\ell_2$ regularized loss**

| Dataset Name | NTK | **Ours** |
|-----------------------------------|---------|---------|
| acute-inflammation | 1.000 | 1.000 |
| acute-nephritis | 1.000 | 1.000 |
| balloons | 1.000 | 1.000 |
| blood | 0.841 | 0.957 |
| breast-cancer | 0.799 | 0.986 |
| breast-cancer-wisc-prog | 0.975 | 1.000 |
| breast-cancer-wisc-diag | 1.000 | 1.000 |
| breast-cancer-wisc-prog | 0.973 | 1.000 |
| congressional-voting | 0.752 | 0.791 |
| conn-bench-sonar-mines-rocks | 1.000 | 1.000 |
| credit-approval | 0.868 | 1.000 |
| cylinder-bands | 0.766 | 1.000 |
| echocardiogram | 0.888 | 1.000 |
| fertility | 0.960 | 0.920 |
| haberman-survival | 0.821 | 0.991 |
| heart-hungarian | 0.850 | 1.000 |
| hepatitis | 0.931 | 1.000 |
| ilpd-indian-liver | 0.808 | 1.000 |
| ionosphere | 0.939 | 1.000 |
| mammographic | 0.845 | 0.950 |
| molec-biol-promoter | 1.000 | 1.000 |
| musk-1 | 1.000 | 1.000 |
| oocytes_trisopterus_nucleus_2f | 0.830 | 1.000 |
| parkinsons | 0.884 | 1.000 |
| pima | 0.837 | 1.000 |
| pittsburg-bridges-T-OR-D | 0.908 | 0.987 |
| planning | 0.809 | 1.000 |
| statlog-australian-credit | 0.571 | 1.000 |
| statlog-german-credit | 0.828 | 1.000 |
| statlog-heart | 0.881 | 1.000 |
| tic-tac-toe | 0.978 | 1.000 |
| trains | 1.000 | 1.000 |
| vertebral-column-2clases | 0.892 | 1.000 |

Our kernel achieves a 
higher train accuracy on 32/33 datasets, which is to be expected. The discrepancy in one of the datasets can be attributed to the approximation of subsampling a set of hyperplane arrangements. $\textbf{Clarification about reparameterized ReLU network}$ We thank the reviewer for bringing up this point. Indeed, you are correct that the model (11) can be more expressive than (1), so we will modify the wording to "... can represent the same or more functions as (1) ...". Note that this does not affect the significance of our results, since the motivation behind introducing this reparameterization is to study an equally sized model (with $O(m)$ neurons) that can be obtained from a ReLU network (1) with $m$ neurons and that has the same NTK as the gated ReLU network. $\textbf{Minimal gate sets}$ Indeed, for the theoretical results to go through, we require that the gate set be complete. However, in practice we observe that sampling gates to obtain a subset of all possible hyperplane arrangements performs very well, so the approach is computationally feasible. Additionally, as discussed in the global response, the number of arrangements $p$ can be significantly reduced by subsampling or using a CNN architecture. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I appreciate the authors' responses, which clarify some of the concerns in my initial review and propose suggestions to improve the text. I have adjusted my score accordingly. --- Rebuttal 2: Title: Look forward to your feedback Comment: Dear Reviewer nJAh, We believe that we have addressed your concerns in our responses. Since the deadline is approaching, we would like to hear your feedback so that we can respond to that before the discussion period ends. Please feel free to raise questions if you have other concerns. Thank you very much for your support, we really appreciate that! Best regards, Authors
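The gate subsampling described in the response can be sketched as follows (our own illustrative code, not the authors'; names are hypothetical): each random gate vector induces a 0/1 activation pattern on the training data, and the distinct patterns approximate the set of hyperplane arrangements indexing the candidate masking kernels.

```python
import numpy as np

def sample_arrangements(X, n_gates=500, seed=0):
    # Each random gate vector g induces an activation pattern 1[X g >= 0]
    # on the data matrix X; distinct patterns approximate the full set of
    # hyperplane arrangements used by the convex program.
    rng = np.random.default_rng(seed)
    gates = rng.standard_normal((X.shape[1], n_gates))
    patterns = (X @ gates >= 0).astype(int)   # shape (n, n_gates)
    return np.unique(patterns, axis=1)        # keep distinct columns only

# Three points in general position in 2D: at most 2^3 = 8 patterns,
# and fewer in fact, since the three data hyperplanes cut weight space
# into only a handful of regions.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
D = sample_arrangements(X)
```

With many more sampled gates than distinct regions, the sampled set typically covers all realizable patterns for small problems, which is the regime the rebuttal's experiments rely on.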
Summary: The paper studies gated ReLU networks with L2 regularization. The authors show that this model is equivalent to Multiple Kernel Learning with group lasso, which is a convex optimization problem. Thus L2 regularized gated ReLU networks are equivalent to learning the NTK according to a Lasso objective and then fitting the data with the resulting learned NTK. In contrast, in the NTK limit the model fits the data with the initial NTK, which is in general not optimal w.r.t. the optimization procedure defined previously. Strengths: The article is well written and studies the important question of whether DNNs can be interpreted as a kernel learning model. The theoretical results appear to be correct, and the numerical experiments illustrate the theory well. Weaknesses: The paper studies gated ReLU networks as an approximation of ReLU networks, but a lot of either very similar or stronger results to the ones proven here are already proven for actual ReLU networks. In particular, [Francis Bach, Breaking the curse of dimensionality with convex neural networks, 2017] proves a similar convex reformulation, and though not explicitly stated, it can also be interpreted as a kernel learning objective (arguably this optimization over kernels is more obvious in [A. Jacot, E. Golikov, C. Hongler, F. Gabriel, Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity, NeurIPS 2022], which generalizes Francis Bach's results to the deep case). Francis Bach also proves generalization bounds, which are missing here. The only possible advantage of the gated ReLU setting is that the optimization might not be NP-hard, as is argued in the paper (we know the NP-hardness of the equivalent reformulation of ReLU networks), though this relies on the fact that the number of required gates $p$ grows polynomially, which is not proven. Actually I think it is quite likely that the optimization proposed in this paper is NP-hard as well. 
Another related problem is that the gated ReLU optimization is arguably `less unique' than the traditional ReLU, since the gates $g_i$ can always be changed without changing the activations on the training data, thus changing the learned function and kernel outside the training set. It appears that the solution proposed here is to arbitrarily choose a gate direction $g_i$ for each activation pattern on the training set, which is not ideal. Also this problem is not really addressed in the paper. Note that the same problem appears in ReLU networks, though it seems less severe, since there can be cases where you obtain uniqueness of the solution; in contrast, it appears that there is never uniqueness with gated ReLU. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I also do not like how the authors say in the abstract and introduction that the NTK at initialization is suboptimal compared to the learned NTK, without clarifying w.r.t. which cost. This could suggest that they prove that the learned NTK is better in terms of generalization, which is not proven. Rather, the learned NTK is optimal w.r.t. the MKL optimization with group Lasso, which is a very specific optimization introduced in this paper (and previous papers) as a reformulation of the L2 regularization loss. So in some sense the statement that the learned NTK is optimal in contrast to the initial NTK is a tautology, since it is w.r.t. a cost that was designed to reflect the effect of learning. If we are instead talking about generalization, then we cannot say that the learned kernel is always better than the initial kernel, as there could be some form of `kernel overfitting'. This is the `no free lunch' theorem: there is no statistical model that is strictly better than another; each model is better under certain assumptions on the task. 
For example, we already know that learning with a constant NTK is optimal if the true function $f^*$ is a Gaussian process with covariance equal to the limiting NTK; in such a setting, NTK learning would be detrimental. For these reasons, I ask the authors to be more specific when they say that the learned kernel is optimal and the initial NTK is suboptimal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As already mentioned, the authors should be clearer about the two following possible issues: - That the reformulation may not be that much faster, because the number of required kernels $d$ one has to optimize over could be very large. It is therefore not clear that gated ReLU offers a significant optimization advantage over ReLU networks. - That the optimization is not unique outside the training set. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
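To make the cost under discussion concrete, here is a minimal sketch (our own illustrative code, not from the paper; all names hypothetical) of a group-lasso objective of the kind the MKL reformulation minimizes: a squared-error fit plus an unsquared $\ell_2$ penalty per coefficient block, one block per candidate kernel's feature map.

```python
import numpy as np

def group_lasso_objective(blocks, y, beta):
    # blocks: list of (Phi_i, w_i) pairs -- feature map and coefficient
    # block for the i-th candidate kernel.
    # Cost: 0.5 * ||sum_i Phi_i w_i - y||^2 + beta * sum_i ||w_i||_2.
    # The unsquared per-block norms induce sparsity across blocks, i.e.
    # kernel selection, which is the MKL view described in the review.
    residual = sum(Phi @ w for Phi, w in blocks) - y
    penalty = sum(np.linalg.norm(w) for _, w in blocks)
    return 0.5 * float(residual @ residual) + beta * penalty

y = np.array([1.0, -1.0])
Phi1 = np.eye(2)
Phi2 = np.array([[1.0, 1.0], [1.0, 1.0]])
w1, w2 = np.array([1.0, -1.0]), np.zeros(2)
# First block fits y exactly, second block is switched off:
# objective reduces to beta * ||w1||_2.
obj = group_lasso_objective([(Phi1, w1), (Phi2, w2)], y, beta=0.1)
```

The learned nonnegative kernel weights in the equivalent MKL problem correspond to the per-block norms at the optimum, which is what makes the objective a kernel-learning one rather than a fixed-kernel fit.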
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We have addressed the issues related to complexity in the global response. Please see our responses to your other specific queries below. $\textbf{Regarding the objective}$ We would like to clarify that one of the main results of our work is that the MKL optimization problem with group lasso is indeed equivalent to the standard non-convex training with L2 regularization loss. Thus, this cost was not designed from the perspective of the NTK, but instead was obtained by deriving an equivalent convex reformulation, which we also show captures the loss function from performing kernel ridge regression with the NTK. We thank the reviewer for raising this point, and we will revise our wording in the final version to be clearer when describing the suboptimality of the NTK. --- Rebuttal Comment 1.1: Comment: Optimizing over a polynomial-size random subset of hyperplane arrangements is an interesting idea and does yield a polynomial approximation to the full optimization, but it does not answer the question of whether the convex reformulation of gated ReLU networks is computationally more efficient than the convex reformulation of ReLU networks that I mentioned (since one could use a similar polynomial approximation of this second reformulation). In general I think it would be important to compare the method proposed here not only to the fixed NTK limit, but also to the other convex reformulations that other reviewers and I have mentioned; since gated ReLU networks are much less standard than ReLU networks, their use as a replacement for ReLU nets should be motivated. --- Reply to Comment 1.1.1: Comment: Thank you for your response. 
The main benefit of considering the gated ReLU network compared to the standard ReLU network is that the convex reformulation of the gated ReLU network leads to an _unconstrained_ optimization problem which is easier to solve than the standard ReLU network which leads to a cone constrained convex problem.
Summary: This work presents the insight that the convex formulation of training a gated ReLU network is an instance of Multiple Kernel Learning (MKL) techniques. This contrasts with the NTK limit, which becomes single kernel learning in the infinite width limit. The main thesis is that the finite-width ReLU network is necessarily superior to the NTK limit, since MKL optimizes over all linear combinations of the multiple kernels, whereas the NTK limit corresponds to a single fixed linear combination of these kernels. Strengths: The presentation and the contribution of this paper are very straightforward: (convexified gated ReLU) = (MKL). This main insight is, in my view, the key and only non-trivial insight of this work. In my view, all other parts of this work are relatively straightforward consequences of this observation, and the experiments are minimal and compact. Therefore, in my view, the decision comes down to whether this single insight is a sufficient contribution to warrant a NeurIPS publication. In my view, it definitely is. How the hidden convex structure of the NN approach compares with the NTK or the mean-field analysis was an important problem to tackle, and I find the insight of this work to be a very good start. (Although this paper doesn't yet deal with the mean-field analysis.) I recommend this paper be accepted. Weaknesses: . Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: . Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: .
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive feedback and accurate comments, and greatly appreciate the time taken by them to review our work. We are encouraged to know that you believe that our main insights warrant an acceptance.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers and the AC for taking the time to review and assess our work. We are encouraged to know that the reviewers believe that our findings are valuable and our main insights are sufficiently novel. We also appreciate that the reviewers found the paper well written and lucid, as clarity of presentation is important to us. In this global response, we address some of the common queries and concerns brought up by the reviewers. $\textbf{Regarding Generalization:}$ We would like to emphasize that the focus of this paper is on optimization of neural networks with a standard regularized training objective. Indeed, theoretically studying the generalization properties of the MKL kernel and convex reformulations is a very interesting direction for future work, but the focus of our theoretical results in this work is on the training performance. **With regards to generalization, we present empirical results that indicate that our convex MKL formulation also achieves better test performance than the NTK formulation on multiple datasets (Table 1 for UCI datasets, and the learned functions in Figure 1 for toy datasets).** $\textbf{Complexity arising from number of hyperplane arrangements/kernels:}$ There are multiple ways to avoid high computational complexity as detailed below. * First, one can use a sampling based approach where one can randomly sample a tiny subset of all possible hyperplane arrangements and then solve the convex program with this subset. Thus, although the resulting approach isn't exact, **the training complexity won't be exponential in any of the problem parameters anymore**. The experimental results in Section 7 show that this approximation in fact works extremely well, specifically resulting in models that outperform the NTK in 26/33 UCI datasets. * Second, we can change the architecture. Particularly, we can replace fully connected networks with convolutional networks. 
Then, since CNNs operate on the patch matrices $\\{\mathbf{X}_b\\}\_{b=1}^B$ instead of the full data matrix $\mathbf{X}$, where $\mathbf{X}\_b \in \mathbb{R}^{n \times h}$ and $h$ denotes the filter size, even when the data matrix is full rank, i.e., $r=\min (n,d)$, the number of hyperplane arrangements $p$ is upper bounded as $p \leq \mathcal{O}(n^{r_c})$, where $r_c:=\max_b \mathrm{rank}(\mathbf{X}_b)\leq h \ll \min(n,d)$. For instance, for a CNN with $3 \times 3$ filters, $r_c \leq 9$ independent of $n,d$. As a consequence, the weight-sharing structure in CNNs dramatically limits the number of possible hyperplane arrangements and avoids exponential complexity. This also supports the observed efficiency and remarkable generalization performance of CNNs in practice.
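As an illustration of the sampling-based approach described in the first bullet, here is a minimal NumPy sketch (illustrative only, not the authors' implementation) of drawing a subset of hyperplane arrangement patterns from random gate vectors:

```python
import numpy as np

def sample_arrangements(X, num_gates, seed=0):
    """Sample a subset of hyperplane arrangement patterns for data X.

    Each random gate vector g induces the activation pattern
    1[X g >= 0] (the diagonal of one masking matrix D_i); deduplicating
    the sampled patterns yields a small subset of all arrangements.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    G = rng.standard_normal((d, num_gates))   # one column per random gate
    patterns = (X @ G >= 0).astype(int)       # n x num_gates binary patterns
    return np.unique(patterns, axis=1)        # keep distinct arrangements only
```

The convex program is then solved over only the sampled arrangements, so the number of kernels grows with `num_gates` rather than with the (potentially exponential) number of all arrangements.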
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper establishes the equivalence between shallow neural networks activated by ReLU and multiple kernel learning (MKL) through the convex reformulation of shallow neural networks into gated ReLU networks. By interpreting neural networks in this way, it becomes apparent that the network may not be able to attain the optimal MKL kernel on the training set. To address this issue, the authors propose an iterative reweighted method to solve the corresponding MKL problem, which has the potential to obtain the optimal MKL kernel due to its convex nature. Moreover, the authors conduct an analysis of the in-sample prediction error for the gated ReLU networks. Strengths: Previously, a series of recent studies have demonstrated that (shallow) ReLU networks can be reformulated as a convex problem. This paper builds upon the previous work by establishing the equivalence between this network formulation and the multiple kernel learning (MKL) model. This finding has the potential to offer valuable insights into comprehending the performance of neural networks during both training and testing stages. The paper is well-written, providing an ample and lucid introduction to various topics such as the convex formulation of neural networks, multiple kernel learning, NTK, and other related subjects. Weaknesses: The main contribution of this paper is to establish the equivalence between neural networks and multiple kernel learning (MKL) models. However, there are certain assumptions and setups in the paper that may limit the scope of their findings: 1. The convex formulation of neural networks appears to be applicable only to ReLU (or its variants) activation functions, as the cone constraints rely on the non-negativity of ReLU. Consequently, the results might be specific to ReLU and may not generalize to other activation functions. 2.
Since the output layer of the neural network is fully connected, it can be expressed in a feature map format as $f(x) = \phi(x)^\top v$, which holds true even for deep neural networks. Thus, merely establishing equivalence to a kernel method may not sufficiently uncover the intricacies of neural networks. 3. In lines 164-167 of the proof for Lemma 3.1, the authors fail to explain why the simplex constraint can be eliminated through a change of variables. This omission could lead to confusion for readers. 4. Theorem 4.3 demonstrates that problem (7) can be reformulated in the form of MKL, but the original problem (7) does not include a simplex constraint. However, the authors do not address this discrepancy in the main theorem or the supplementary proof. 5. The results presented in section 6 do not offer further insights or explain the benefits of formulating the problem as convex optimization or considering it as MKL. This is because the authors solely analyze in-sample prediction, while previous research has shown that shallow ReLU neural networks can generalize well to various underlying distributions (e.g., see https://arxiv.org/abs/1901.08584). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What is the motivation behind using a regularized formulation? Do the main results (Theorem 4.3, Theorem 5.1, Theorem 6.1) hold for the non-regularized version with lambda set to zero? 2. Is it possible to extend the convex formulation to activation functions other than ReLU? If so, what would be the considerations and challenges involved in adapting the formulation to non-ReLU activations? 3. Given that the reformulation is convex, why do you suggest using the Iteratively Reweighted Least Squares (IRLS) algorithm instead of a commonly used convex problem solver? Are there specific benefits to using IRLS in this context? Furthermore, can you provide insights into the properties of the solution that IRLS converges to?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the feedback and comments. We hope that you would consider increasing your score if your concerns are adequately addressed. We have addressed the issues related to generalization in the global response. Please see our responses to your other specific queries below. $\textbf{Activation functions:}$ We note that the convex reformulations can be extended to other activation functions. Particularly, for all piecewise linear activations (including ReLU, leaky ReLU, absolute value or binary activations), the same analysis holds except that the structure of the gates (diagonal matrices $\mathbf{D}_i$) changes. We also want to emphasize that the same analysis can also be applied to smooth activations such as sigmoid or tanh, but in those cases the computational complexity can be slightly higher due to the increase in the required number of gates/diagonal matrices. $\textbf{Kernel methods}$ We are not sure exactly what you mean by the statement that even deep neural networks can be expressed in a feature map format. Previous works (apart from the NTK) characterizing deep neural networks establish equivalence to a kernel method by freezing all the weights except for the last layer, effectively resulting in a random features model. The NTK is another kernel characterization, but requires the neural network weights to be in the lazy regime with infinite or large width (where the weights do not change much from initialization). The kernel characterization we present in this work differs from these previous approaches by studying the finite width regime where the weights need not be close to initialization, and the kernel is learnt from the data. $\textbf{Simplex constraint:}$ Note that the change of variables argument merely establishes equivalence of problem (12) and the problem described in lines 163-164. We would like to point out that both problems contain the simplex constraint.
This constraint, and the weight variable $\eta$ can be eliminated by applying the variational formulation of the squared group $\ell_1$ norm, which is presented in lines 161-162. This allows us to convert the standard $\ell_2$ regularization in the problem between lines 163-164 to the squared group lasso regularization without the weights $\eta$ in problem (14). This is the main result of Lemma 3.1. A further result is the equivalence between $\textit{squared}$ group lasso and the standard group lasso, which is presented in problem (15). Theorem 4.3 is the result of a chain of equivalences between various optimization problems (proof in the supplement). We present them here again as a summary. (7) is the standard non-convex learning problem for the gated ReLU network. Prior work showed that this is equivalent to the group lasso problem (15). This is also equivalent to the squared group lasso problem (14). Lemma 3.1 then connects (14) to the MKL problem (12). The crux of the argument is as follows: the MKL problem optimizes over parameters $\mathbf{w}$ as well as weights $\eta$ over the kernels, with $\eta$ constrained to be in the simplex, and the parameters $\mathbf{w}$ having standard $\ell_2$ regularization (this is problem (12)). Using the variational argument in the proof of Lemma 3.1, we can completely eliminate the kernel weight variable $\eta$ (along with its simplex constraint), by replacing the standard $\ell_2$ regularization with a squared $\textit{group lasso}$ regularization, which is exactly problem (14). $\textbf{Regarding Regularization}$ Yes, the results extend to non-regularized version by taking the limit $\lambda \to 0$ and following the regularization path of the solution. Additionally, we use the regularized objective to avoid issues related to overfitting, leading to better generalization performance in practice. 
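For concreteness, the variational formulation invoked above is the standard identity (a consequence of the Cauchy–Schwarz inequality):

$$\Big(\sum_{i=1}^p \|\mathbf{w}_i\|_2\Big)^2 = \min_{\eta \in \Delta} \sum_{i=1}^p \frac{\|\mathbf{w}_i\|_2^2}{\eta_i}, \qquad \Delta = \Big\{\eta : \eta_i \ge 0,\ \sum_{i=1}^p \eta_i = 1\Big\},$$

attained at $\eta_i^{\ast} = \|\mathbf{w}_i\|_2 / \sum_{j} \|\mathbf{w}_j\|_2$. Plugging the minimizer back in eliminates $\eta$ together with its simplex constraint, turning the $\eta$-weighted $\ell_2$ penalty into the squared group lasso penalty.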
$\textbf{Why IRLS?}$ The motivation behind introducing IRLS is that the solution obtained by the NTK can be expressed as one instance of an iterate in the IRLS algorithm, allowing us to directly obtain the optimal MKL solution by initializing with the NTK solution. Computationally, each iteration of the IRLS method involves a simple least squares regression problem that has a closed form solution which can be computed using very efficient direct and iterative methods. Additionally, state-of-the-art iterative methods like preconditioned conjugate gradient -- specifically LSQR and LSMR [1, 2], which we use for solving these least squares problems -- can be efficiently parallelized and implemented with GPU acceleration [3]. [1] - Paige, Christopher C., and Michael A. Saunders. "LSQR: An algorithm for sparse linear equations and sparse least squares." ACM Transactions on Mathematical Software (TOMS) 8.1 (1982): 43-71. [2] - Fong, David Chin-Lung, and Michael Saunders. "LSMR: An iterative algorithm for sparse least-squares problems." SIAM Journal on Scientific Computing 33.5 (2011): 2950-2971. [3] - Huang, He, et al. "An MPI-CUDA implementation and optimization for parallel sparse equations and least squares (LSQR)." Procedia Computer Science 9 (2012): 76-85. --- Rebuttal Comment 1.1: Title: Further Review Comments Based on Author's Rebuttal. Comment: Thank you for the author's response. Having reviewed the rebuttal and comments from other reviewers, I have chosen to maintain my original score due to concerns regarding the applications and implications of this work. I remain unconvinced that the convex reformulation presented in this paper can be readily extended to activations beyond ReLU. As ReLU is also a piecewise linear function, I find it lacking that the authors have not provided references or a brief explanation detailing the applicability of such analysis to smoother activations like sigmoid or tanh.
Furthermore, neural networks can be considered as a kernel method whenever the output layer is linear. Thus, it is worth considering whether the reformulation and results still hold when the output layer is non-linear, rather than linear. My reservations extend to the empirical results as well. Essentially, [1] reveals that the behavior of neural networks under gradient descent aligns with the kernel gradient with respect to NTK. In the infinite-width scenario, NTK converges to the so-called limiting NTK [1, Theorem 1]. To ensure a fair comparison, numerical experiments should juxtapose trained neural networks (across varying widths) with MLK using the optimal MKL kernel obtained in this paper. Unfortunately, based on the provided code, it appears the authors treat MLK with the limiting NTK as the performance of NTK. I hesitate to deem this a fair comparison, as the limiting NTK can be employed in alternative machine learning models like SVM rather than being solely tailored to MLK, but NTK itself really depends on the training. Given my reservations arising from the experimental outcomes, the insights offered by the results in section 6 do not sufficiently underscore the benefits of formulating the problem via convex optimization or viewing it as MKL. Earlier research [2] has indicated that shallow ReLU neural networks exhibit commendable generalization to diverse underlying distributions. Consequently, my decision remains unaltered regarding the review score. [1] Jacot, Arthur, Franck Gabriel, and Clément Hongler. "Neural tangent kernel: Convergence and generalization in neural networks." NeurIPS 2018 [2] Arora, Sanjeev, et al. "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks." ICML 2019 --- Reply to Comment 1.1.1: Comment: We thank you for your detailed response.
Regarding the activation functions beyond ReLU, we refer the reviewer to [1] for polynomial activation functions (which can sufficiently approximate most other activation functions as well). **Regarding Experiments with the NTK** We are not exactly sure what you mean when you say that the experiments are not a fair comparison. Specifically, our experiments do indeed include comparisons with finite width neural networks trained by gradient descent. In Figure 1, the learned functions with the black dashed lines are the result of training a finite width neural network via gradient descent. We observe that the convex MKL formulation better matches this result than the limiting NTK. Could you please clarify what exactly you mean when you refer to "MLK with the limiting NTK as the performance of the NTK"? Additionally, the MKL approach can also be extended to SVMs and other applications where standard kernel methods can be applied [2]. [1] - Bartan, Burak, and Mert Pilanci. "Neural spectrahedra and semidefinite lifts: Global convex optimization of polynomial activation neural networks in fully polynomial-time." arXiv preprint arXiv:2101.02429 (2021). [2] - Bach, Francis R., Gert RG Lanckriet, and Michael I. Jordan. "Multiple kernel learning, conic duality, and the SMO algorithm." Proceedings of the twenty-first international conference on Machine learning. 2004. --- Rebuttal 2: Comment: Dear Reviewer 98L1, We believe that we have addressed your concerns in our responses. Since the deadline is approaching, we would like to hear your feedback so that we can respond to that before the discussion period ends. Please feel free to raise questions if you have other concerns. Thank you very much for your support, we really appreciate that! Best regards, Authors --- Rebuttal Comment 2.1: Comment: After reviewing the code in the 'UCI' directory, it is evident that Table 1 is generated using this codebase.
However, it's important to note that the reported test accuracy for the NTK does not originate from a conventionally trained neural network. Instead, it seems to be derived from MLK with the so-called limiting NTK. It is reasonable to expect that MLK with a restricted limiting NTK might exhibit inferior performance compared to MLK with the optimal kernel. That is why I feel the experiments are not so fair. To obtain a more comprehensive assessment of test (and train) performance, I recommend the authors consider conducting experiments involving neural networks with varying widths, as used in [1]. By comparing the accuracies across different network configurations, we can draw more meaningful conclusions. Specifically, if most of these test accuracies fall below the performance achieved with MLK using the optimal kernel, it would support the assertion that MLK outperforms NTK in terms of test performance. [1] Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., & Sohl-Dickstein, J. (2018, February). Deep Neural Networks as Gaussian Processes. In International Conference on Learning Representations. --- Reply to Comment 2.1.1: Comment: We thank the reviewer for additional comments. We first would like to emphasize that our main objective in this work is to study the optimization of regularized neural networks through a novel kernel perspective. Thus, we don't provide any theory or an extensive set of experiments regarding the generalization properties of the proposed approach, which itself requires a comprehensive analysis that can be the focus of a single paper. Additionally, we would like to clarify that the NTK is itself a kernel method which aims to approximate the conventional neural network training procedure, using a kernel derived from the infinite width limit of the neural network.
This is in contrast to our novel kernel characterization, the convex MKL approach, which is also a kernel method, but with an optimal kernel that is learnt from the data. Note that we do not require the infinite width or large width assumption for this kernel characterization to hold. Our theory shows that this MKL approach is equivalent to the conventional neural network on the training set. In addition to this theory, we present empirical results (in Table 1 and Fig 1) which compare the test performances of the two kernel characterizations, and show that our convex MKL formulation achieves better test performance than the NTK formulation on multiple datasets. Since we are empirically comparing two different kernel methods, we believe that this is a fair comparison. Specifically, we perform standard kernel ridge regression (KRR) with the NTK kernel to obtain the test accuracies for the NTK in Table 1. Note that we do not perform "MLK" for this column (nor is it clear what this exactly means). Similarly, to obtain the test accuracies for our novel kernel characterization in Table 1, we solve the MKL problem with masking kernels that we derived from the convex reformulation of the gated ReLU network. We are not entirely sure how running conventional neural network training would show that the NTK is suboptimal, since the NTK is not an accurate approximation of conventional neural network training when the width is not large or close to infinity. Could you please clarify the details of the experimental setup you are proposing that would lead to a more conclusive comparison between the two kernel characterizations?
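As an aside on the IRLS scheme discussed earlier in this thread, here is a toy NumPy sketch of iteratively reweighted least squares for the squared group lasso objective. The details are assumptions for illustration: a plain dense solver stands in for the LSQR/LSMR solvers mentioned in the rebuttal, and generic feature blocks stand in for the masked kernel features.

```python
import numpy as np

def irls_group_lasso(Phi_blocks, y, lam=0.1, iters=50, eps=1e-8):
    """IRLS sketch for min_w ||Phi w - y||_2^2 + lam * (sum_i ||w_i||_2)^2.

    Each iteration solves a block-weighted ridge regression in closed
    form, then updates the kernel weights eta_i in proportion to the
    block norms (the minimizer of the variational bound).
    """
    Phi = np.hstack(Phi_blocks)
    sizes = [B.shape[1] for B in Phi_blocks]
    idx = np.cumsum([0] + sizes)
    eta = np.full(len(sizes), 1.0 / len(sizes))  # start from uniform weights
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        # ridge penalty 1/eta_i on each block of coordinates
        pen = np.concatenate([np.full(s, 1.0 / max(e, eps))
                              for s, e in zip(sizes, eta)])
        w = np.linalg.solve(Phi.T @ Phi + lam * np.diag(pen), Phi.T @ y)
        norms = np.array([np.linalg.norm(w[idx[i]:idx[i + 1]])
                          for i in range(len(sizes))])
        eta = norms / max(norms.sum(), eps)
    return w, eta
```

Each iterate is a closed-form ridge solve, which is what makes initializing from (and comparing against) a fixed-kernel solution such as the NTK straightforward.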
null
null
null
null
null
null
Predicting a Protein's Stability under a Million Mutations
Accept (poster)
Summary: The paper proposes a new method, the Mutate Everything Method, to model a protein's thermodynamic stability based on mutations in a protein's sequence. A key distinguishing feature of the method is the ability to perform a large number of parallel evaluations, which significantly improves computational efficiency. The authors first introduce the general challenge in designing stable proteins through mutations, describe thermal stability as a physical metric, and introduce the general outline of their method. Next, the authors describe related work in protein engineering, protein structure prediction and protein stability modeling, followed by a more precise, mathematical description of the problem setup for the proposed method. Essentially, the Mutate Everything Method relies on taking protein sequence embeddings from pretrained models (AlphaFold, ESM) and then predicting the thermal stability of mutations with lightweight MLP heads, with one head corresponding to each mutation. In the case of multiple mutations, the authors aggregate the outputs from each individual mutation head. In their experiments, the authors study their method on a variety of sequence mutation datasets, including both single and higher-order mutations, with the general results indicating better modeling and compute performance of the Mutate Everything Method. The authors also perform an ablation of representations from different embedding encoders, including AlphaFold, ESM2 and MSA-Transformer. Strengths: The paper has the following strengths: * Originality: The paper proposes a new, pragmatic method for a relevant problem in modeling how mutations affect a protein's thermodynamic stability. * Quality: The paper performs detailed evaluations of the method on multiple types of tasks, including multiple datasets, and compares to various other methods along both modeling performance and compute efficiency.
* Clarity: The purpose, goals and details of the method and results are generally well presented. * Significance: The paper tackles a relevant problem and shows notable performance improvements in multiple settings. Weaknesses: The paper could be improved by: * Providing more clarity into the parallel evaluation pipeline. Figure 3 is not very clear in how exactly multiple mutation evaluations are processed and aggregated. I suggest adding labels that show how a mutation changes a particular part in the sequence similar to Figure 2. [Clarity] * The analysis on homology in Section 5.3 is interesting and adds an important dimension on how to properly design data splits for these kinds of tasks. It would be nice to see a deeper discussion on this; potentially in the appendix due to space constraints. [Quality, Significance] Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Can you provide more detail on how you ensure the mapping of the mutation in the embedding space is aligned with the mapping of the mutation in the original sequence space? * Can you describe in more detail how your parallel evaluation ensures proper mapping of mutations to the correct protein sequence? I am trying to get a better sense of whether the parallel evaluations are essentially operating by creating a larger "meta-sequence" or if there is any type of multi-processing happening. * It seems like the method requires a frozen embedding encoder as it is implemented right now. How robust would the method be to finetuning the encoder? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors include a section on limitations mainly focused on modeling shortcomings.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Figure 3 is not very clear in how exactly multiple mutation evaluations are processed and aggregated. I suggest adding labels that show how a mutation changes a particular part in the sequence similar to Figure 2. [Clarity] Figure 3 showcases how we can apply Mutate Everything to decode ΔΔG values in parallel. How the multiple mutation head processes and aggregates is identical to that shown in Figure 2c. We will clarify the head in Figure 3. > The analysis on homology in Section 5.3 is interesting and adds an important dimension on how to properly design data splits for these kinds of tasks. It would be nice to see a deeper discussion on this; potentially in the appendix due to space constraints. [Quality, Significance] We agree that homology plays an essential role in designing data splits. There are many works in the literature that recognize this problem [1] and design more informed dataset splits [2,3,4]. We will elaborate on this discussion in the additional page provided in the final version. > Can you provide more detail on how you ensure the mapping of the mutation in the embedding space is aligned with the mapping of the mutation in the original sequence space? Our training objective aligns the embeddings with the input sequence. We have an embedding F[L,t] for each position L and “to” amino acid t. The loss aligns a mutation at position L to amino acid t (and its ΔΔG) with the corresponding embedding F[L,t]. > Can you describe in more detail how your parallel evaluation ensure proper mapping of mutations to the correct protein sequence? I am trying to get a better sense of whether the parallel evaluation are essentially operating by creating a larger "meta-sequence" or if there is any type of multi-processing happening. Single mutations are modeled exhaustively. The decoder outputs a set of Lx20 single mutation representations from which all single mutation ΔΔGs are decoded. 
For example, F[10,A] decodes the change in stability when position 10 is mutated to Alanine. During training, the mutation representation is aligned to the experimental ground truth ΔΔG. All higher-order mutations are made up of the same set of Lx20 single mutations. In our model, these single mutation representations are indexed and aggregated to represent higher-order mutations. For example, a mutation at position 10 to Alanine and 15 to Cysteine is represented as F[10,A] + F[15,C]. The loss will ensure alignment to experimental ΔΔG. Computationally, we perform one forward pass through the backbone network. The backbone is the computationally most expensive portion of the network. Computation does not depend on the number of higher order mutations considered. ΔΔGs of higher order mutations are computed as inner sums between embeddings at certain positions, and are thus computationally very fast. They do however scale exponentially with the order of mutation considered (there is a linear number of single point mutations, a quadratic number of two point mutations, a cubic number of three point mutations, …). The structure of our model enables us to compute millions of ΔΔG values for one protein with only a single forward of the backbone. > It seems like the method requires a frozen embedding encoder as it is implemented right now. How robust would the method be to finetuning the encoder? We find that fine-tuning the AlphaFold2 backbone improves performance. The current method fine-tunes the backbone. | | Spearman | AUC | MCC | RMSE | |-----------|----------|------|------|------| | Freeze | 0.48 | 0.72 | 0.20 | 1.46 | | Fine-tune | 0.56 | 0.76 | 0.37 | 1.36 | [1] Montanucci L, Savojardo C, Martelli PL, Casadio R, Fariselli P. On the biases in predictions of protein stability changes upon variations: the INPS test case. 
[2] Li, B., Yang, Y.T., Capra, J.A., Gerstein, M.B.: Predicting changes in protein thermodynamic stability upon point mutation with deep 3d convolutional neural networks. [3] Pancotti, C., Benevenuta, S., Birolo, G., Alberini, V., Repetto, V., Sanavia, T., Capriotti, T. and Fariselli, P. Predicting protein stability changes upon single-point mutation: a thorough comparison of the available tools on a new dataset. [4] Diaz, D.J., Gong, C., Ouyang-Zhang, J., Loy, J.M., Wells, J.T., Yang, D., Ellington, A.J., Dimakis, A., Klivans, A.R.: Stability oracle: A structure-based graph-transformer for identifying stabilizing mutations. --- Rebuttal Comment 1.1: Title: Thank you for additional details Comment: Thank you for the additional details. Most of the questions and concerns have been addressed.
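The indexing-and-summation scheme described in the rebuttal above can be sketched as follows. All dimensions are toy values, and a linear read-out stands in for the paper's lightweight MLP head; with a linear head the multi-mutation prediction is exactly the sum of the single-mutation predictions, which the learned residual in the actual model corrects.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids
L, d = 30, 8                            # toy sequence length and embedding dim
rng = np.random.default_rng(0)
F = rng.standard_normal((L, 20, d))     # one embedding per (position, target aa)
head_w = rng.standard_normal(d)         # linear stand-in for the MLP head

def ddG(mutations):
    """Aggregate single-mutation embeddings by summation, then decode.

    `mutations` is a list of (position, to_amino_acid) pairs, e.g.
    [(10, "A"), (15, "C")] for the double mutant discussed above.
    """
    z = sum(F[pos, AMINO.index(aa)] for pos, aa in mutations)
    return float(head_w @ z)
```

All L x 20 embeddings come from one forward pass of the backbone, so evaluating any number of (higher-order) mutants afterwards costs only cheap indexing and sums.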
Summary: The authors present the "Mutate Everything Method", a simple method that builds on top of protein representations obtained through existing models to predict the effect of single and higher-order mutations. They apply the method to representations from ESM2 and AlphaFold, and show the performance on several downstream tasks. Strengths: The proposed method is based on a very simple but effective idea, and the paper is written in a clear and mostly comprehensive manner. This seems like a relatively low-resource and fast method to get more use out of existing representations. Additionally, a substantial amount of results is presented over a wide range of benchmarks and metrics. Weaknesses: Even though I enjoyed reading this paper and I appreciate the idea that a simple method can make a difference for real downstream tasks, I am not entirely convinced that this method, which essentially consists of training a lot of MLPs on top of existing representations, fits the NeurIPS venue well. It doesn't seem like a substantial contribution to the field of machine learning, and this paper might therefore be better suited for a different venue where it would have more impact. Moreover, the reported Spearman correlations to experimental results are very low in some places, and around 0.5 at most, signifying a very weak correlation at best. This raises the question of whether this is an unfortunate metric or if there is a deeper issue causing the correlation to experimental results to be weak. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The way the method is presented, it seems like the authors claim that it will be possible to reliably predict the effect of single mutations as well as higher-order effects. Even though some evidence is presented, it is not conclusive, especially not for the higher-order case. I understand that this depends on e.g. the availability of data sets, but perhaps the presentation can be a bit more nuanced. 2.
Even though the method is presented as being simple, there seem to be a lot of MLPs to train! Can you give more details about these MLPs (i.e. how many layers, size of the layers) and the overall memory requirements? 3. The paper goes straight from results to "Limitations", without any conclusions or discussion. As a reader, this section is clearly missing. 4. Table 4, 5, and 6 show no standard deviations. Is there a good reason for that? Otherwise, it would be great if those could be added. 5. The part where you predict the residual between single mutation $\Delta\Delta G$s and the combined effect is a bit vague to me. Why was this beneficial? 6. Related to the previous point, when you say in line 174-175 that you learn the residual to the sum of experimental $\Delta\Delta G$s for the constituent single mutations, what does this mean? Are you giving the model the experimental values of single mutations? Does this mean you always need to have experimental data for single effects if you want to predict combined effects? Or am I misunderstanding the sentence? The wording is a bit confusing. 7. For clarity, it would be great if you could incorporate $x$, $\mu$, and $z$ (and perhaps $f$ and $h$) in Figure 2. 8. What is the dimensionality of $d$? 9. Some of the baselines only show up in tables without really being explained in the main text or the appendix. 10. There seem to be some references missing in the first part of the introduction (lines 25-30), about the use of machine learning for finding stabilizing mutations. And perhaps more citations can be added to Section 2.3 (lines 104-111) as well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors describe the limitations of their work briefly but accurately. The only thing missing could be a discussion about the low spearman correlation values (in general). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Reported Spearman correlations to experimental results are very low in some places, and around 0.5 at most, signifying a very weak correlation at best. This begs the question whether this is an unfortunate metric or if there's a deeper issue causing the correlation to experimental results to be weak. We agree that the Spearman correlation coefficient can be a suboptimal metric. Many datasets contain predominantly destabilizing mutations. Spearman overwhelmingly measures the model’s ability to rank destabilizing mutations [2,3], which does not directly translate to identifying the most stabilizing mutations. Prior works have proposed using metrics more in line with finding stabilizing mutations. Area under precision-recall curve (AUC), Matthews correlation coefficient (MCC) [1], and normalized Discounted Cumulative Gain (nDCG) [4] assess a model’s ability to identify stabilizing mutations. We report all metrics including Spearman for completeness. We will include a discussion of the metrics in the final version. > The way the method is presented, it seems like the authors claim that it will be possible to reliably predict the effect of single mutations as well as high-order effects. Even though some evidence is presented, it is not conclusive, especially not for the higher-order case. I understand that this depends on e.g. the availability of data sets, but perhaps the presentation can be a bit more nuanced. Rigorously studying higher-order mutations is hard. Public data is scarce and expensive to collect. The data that does exist supports our conclusions. We express our concerns in the Limitations section in line 285. We are more than happy to add nuance to the presentation where needed. Which specific parts would the reviewer like us to change? > MLP details and memory requirements An adapter maps the backbone-specific hidden dimensionality to D=128 and all subsequent layers operate at D=128. 
Each amino acid projection $f^t$ in the amino acid expansion is a linear layer. The single mutation decoder $g^1$ is a linear layer. The higher-order mutation decoder transforms the previous embedding with a 2-layer MLP. These representations are aggregated and are fed into a 3-layer MLP to predict ddg. The MLPs use LayerNorm and ReLU. We train on single and double mutations from small proteins with at most 72 amino acids. This keeps the memory requirement small for the higher-order decoder. Thank you for the feedback, we will report these thoroughly in the paper and release code upon acceptance. > Missing Conclusion We omitted a conclusion due to space constraints. We will include one in the additional page allotted for the final version. > Table 4, 5, and 6 show no standard deviations. Is there a good reason for that? Otherwise, it would be great if those could be added. Thank you for the suggestion. We will add standard errors for all of our experiments in the final version. > The part where you predict the residual between single mutation ΔΔGs and the combined effect is a bit vague to me. Why was this beneficial? We tried predicting the residual and directly predicting the combined ΔΔG for higher order mutations. We found predicting the residual to be easier to train. We will mention this in the exposition. > Related to the previous point, when you say in line 174-175 that you learn the residual to the sum of experimental ΔΔGs for the constituent single mutations, what does this mean? Are you giving the model the experimental values of single mutations? Does this mean you always need to have experimental data for single effects if you want to predict combined effects? We do not provide the model with any experimental ΔΔG values. For higher order mutations, our model first predicts the ΔΔG for each constituent single mutation (computationally), then predicts a residual to the sum. 
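To make the shapes concrete, here is a rough numpy sketch of the decoder stack and residual scheme described above. The 384-dimensional backbone width, the random initializations, and the mocked residual term are illustrative assumptions, not the authors' (unreleased) implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # all decoder layers operate at this width, per the answer above

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def mlp(x, weights):
    # Linear layers with LayerNorm and ReLU between them, as described
    for i, W in enumerate(weights):
        x = x @ W
        if i < len(weights) - 1:
            x = np.maximum(layer_norm(x), 0.0)
    return x

# Hypothetical shapes: an adapter from an assumed 384-dim backbone width
# down to D, then a 3-layer head mapping D -> D -> D -> 1 (the ddG value).
adapter = [rng.normal(size=(384, D)) * 0.05]
head = [rng.normal(size=(D, D)) * 0.05,
        rng.normal(size=(D, D)) * 0.05,
        rng.normal(size=(D, 1)) * 0.05]

def predict_ddg(backbone_repr):
    return mlp(mlp(backbone_repr, adapter), head).item()

# Higher-order prediction: sum of the constituent single-mutation
# predictions plus a (here mocked) residual term capturing epistasis.
def predict_multi(reprs, residual):
    return sum(predict_ddg(r) for r in reprs) + residual

print(predict_multi([rng.normal(size=384) for _ in range(2)], residual=-0.2))
```

The key design point from the rebuttal is visible in `predict_multi`: no experimental values enter the computation, only the model's own single-mutation predictions and a learned residual.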
> For clarity, it would be great if you could incorporate x, μ, and z (and perhaps f and h) in Figure 2. Great idea, thank you. We will add the symbols to the figure. > What is the dimensionality of d? We map a backbone-specific hidden dimension to D=128. > Some of the baselines only show up in tables without really being explained in the main text or the appendix. We explain our baselines in Supplement Section 1.3 and provide citations for existing works. We will try to work this into the final version given the extra content page. > There seem to be some references missing in the first part of the introduction (lines 25-30), about the use of machine learning for finding stabilizing mutations. And perhaps more citations can be added to Section 2.3 (lines 104-111) as well. Thank you. Will do. One citation for the explosion in biological data is “Mega-scale experimental analysis of protein folding stability in biology and protein design” by Kotaro Tsuboyama, Justas Dauparas, Jonathan Chen, Niall M. Mangan, Sergey Ovchinnikov, Gabriel J. Rocklin. [1] Broom, A., Trainor, K., Jacobi, Z., Meiering, E.M. Computational Modeling of Protein Stability: Quantitative Analysis Reveals Solutions to Pervasive Problems. [2] Benevenuta, S., Birolo, G., Sanavia, T., Capriotti, E., Fariselli, P. Challenges in predicting stabilizing variations: An exploration. [3] Pucci, F., Schwersensky, M., Rooman, M. Artificial intelligence challenges for predicting the impact of mutations on protein stability. [4] Qiu, Y., Wei, G.W. Persistent spectral theory-guided protein engineering. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed reply. Here's my response: * **Venue)** I noticed you did not respond to whether or not your paper suits the NeurIPS venue. Just to clarify, this comment was really not meant to attack your paper, as I would gladly be convinced that this paper matches the venue well. Can you comment on this? * **Metrics)** Okay, fair enough. 
I still think these Spearman correlations are so low that they’re barely worth reporting, but I understand that you’re following existing literature for this metric. Thank you for clarifying. * **Higher-order mutations)** Examples could be line 9-10 of the Abstract, or line 37-42 of the Introduction. Again, I fully appreciate how hard (or perhaps currently impossible) it is to rigorously benchmark performance for high-order mutations. I am merely suggesting to provide a bit of a disclaimer for this specific part of your method. It is of great added value that your method can handle higher-order mutations, but no conclusive evidence can be given for its reliability in this setting. * **Model and memory details)** Thank you for the detailed answer. * **Missing conclusion)** Okay. * **Standard deviations)** What was the reason behind the decision to not include standard deviations, neither in the original manuscript nor in the rebuttal period where a pdf with one extra page of results could be provided? Is this only due to time constraints or is there some other reason? * **Predicting the residual)** Can you give any intuition on why predicting the residual is helpful, or is it purely an empirical observation? * **Line 174-175)** Ah I see, so “experimental” here actually means predicted by the model? If so, then I would recommend rewording this sentence a bit for clarity. * **Figure 2)** Great, thanks. * **Dimensionality of $d$ )** Thank you, please include this in the paper/appendix if it’s indeed not there yet. * **Baseline explanations)** *Mean*, *MSA*, and *ESM* are indeed described in the appendix, but for the general reader, baseline methods like DeepSequence and EVE might require a brief explanation/motivation (so for example “VAE-based method with a Bayesian decoder” or something similar). * **References)** Thank you for adding more references. 
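For readers unfamiliar with the ranking metric discussed in this thread, a generic nDCG computation looks like the following (a textbook sketch; the paper's exact relevance definition may differ):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain over a ranked list of relevance scores
    return sum(r / math.log2(i + 2) for i, r in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance 1 for truly stabilizing mutations, 0 otherwise, listed in the
# model's predicted order: a stabilizing hit near the top scores well.
print(round(ndcg([1, 0, 0, 1, 0]), 3))  # 0.877
```

Unlike Spearman over a mostly-destabilizing dataset, this rewards placing the few stabilizing mutations at the top of the ranking, which matches the protein-engineering use case.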
--- Reply to Comment 1.1.1: Title: paper suits the NeurIPS venue Comment: > [Reviewer] Even though I enjoyed reading this paper and I appreciate the idea that a simple method can make a difference for real downstream tasks, I am not entirely convinced that this method, which essentially consists of training a lot of MLPs on top of existing representations, fits the NeurIPS venue well. It doesn't seem like a substantial contribution to the field of machine learning, and this paper might therefore be better suited for a different venue where it would have more impact. >> [Reviewer] Venue) I noticed you did not respond to whether or not your paper suits the NeurIPS venue. Just to clarify, this comment was really not meant to attack your paper, as I would gladly be convinced that this paper matches the venue well. Can you comment on this? According to the call for papers, NeurIPS welcomes research in machine learning for science. The ML community introduced some of the most exciting benchmarks relevant to our method, and we have rigorously evaluated our models against them [1]. This highlights the community’s interest in the problems we address. We initially did not respond to this comment because we believe the scope of the conference is best defined by the ACs, SACs, and PCs; if scope is a genuine concern, it may be worth escalating this discussion to them, as they are the ultimate authority on scope. We obviously would argue for including more scientific (and biologically grounded) research. NeurIPS has always been a very inclusive venue, which many consider its core strength, and it has repeatedly undergone tremendous change, absorbing new fields as they emerged. In the 90s, old-school computational neuroscience made up a large part of NeurIPS. In the 2000s, classical machine learning found a home at NeurIPS, while neural networks were shunned. In the early 2010s, NeurIPS was among the first venues to welcome deep learning back. 
As for the exact research topic at hand, as far back as the second NIPS in 1989, research on protein sequences appeared at this venue [2]. Bengio et al. [2] adapted their own speech recognition system [3] to detect homologies in proteins. In fact, NeurIPS has featured a paper studying protein structures in more years than not. [1] Notin, P., Dias, M., Frazer, J., Hurtado, J.M., Gomez, A.N., Marks, D., Gal, Y.: Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval. ICML 2022. [2] Bengio, Y., Bengio, S., Pouliot, T., Agin, P.: A Neural Network to Detect Homologies in Proteins. NIPS 1989. [3] Bengio, Y., Cardin, R., De Mori, R., Merlo, E.M.: Programmable execution of multi-layered networks for automatic speech recognition. Communications of the ACM 1989.
Summary: The authors are concerned with the task of predicting the effects of single- and double-residue mutations on the thermodynamic stability of a protein. They propose a simple method that involves passing combinations of embeddings from a pretrained model (AlphaFold2 or ESM) to MLPs. Compared to existing approaches, theirs is computationally efficient and conceptually simple. It also achieves good results on various benchmarks. Strengths: The method is original, efficient, simple to understand, and works well. It addresses an important question in structural biology that is currently far from a solution. It has several favorable properties; for example, one can hot-swap in better embeddings as new models become available. Weaknesses: Some of the evaluations lack some important details, and a few claims seem questionable to me. See the "questions" section below. Overall, I think the evaluation for high-order mutations is a little light. In particular, while the model is evaluated on a mixture of double- and triple-residue mutations, individual results for each group are not provided. Given that the model wasn't trained on triple-residue mutations, and a purported strength of this method is its ability to cleanly generalize to high-order mutations, this seems like an important omission. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - "We compute z(μ) = ft(xp) + ht..." If h^t depends solely on t, why separate f^t and h^t? - “Sequence features from the Evoformer and Structure Module are aggregated as input to the decoder.” Which sequence features exactly? How are they aggregated? - “To evaluate generalization on unseen proteins, we train on cDNA proteins with low similarity to those in all evaluation benchmarks.” How exactly? 
- “We train on the cDNA display proteolysis dataset [61], which leverages a high throughput selection assay to extract noisy ∆∆G values for 100 mini-proteins totaling over 100,000 single and double mutations.” Would be good to see some stats in the main paper. Protein length, MSA depth, etc. - “Our model demonstrates exceptional performance in prioritizing stabilizing double mutations over destabilizing ones, achieving a significantly higher normalized discounted cumulative gain of 0.43 compared to 0.25, as well as a superior detection precision of 0.16 compared to 0.10. Our model additionally improves classification metrics MCC and AUC by 0.02 and 0.03 respectively.” If I understand this correctly, the baseline referenced here is just a version of your model where the heads at the end are replaced with simple addition? Why is that a meaningful comparison? - “While other methods also handle multiple mutations, they adopt a train and test split where unique mutations in the test set are not included in the training process. This inadvertently leads to training and testing on the same set of proteins [12, 28, 55]. PTMul proteins have at most 35% homology to the proteins used in these methods’ training. To fairly evaluate generalization to new proteins, we exclude these inflated comparisons from our study.” Is this widely known? If not, this’ll require more specific justification, either in the main paper or the supplement. - You finetune AlphaFold2 for the task. I'm kind of curious how well the method works if you simply train the MLPs with the embedding model frozen. - Could you elaborate more on how AlphaFold2 was finetuned? There are some nontrivial implementation details here; e.g. how cropping is handled. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors include a brief discussion of limitations. They do not address potential negative societal impacts of their work (not that I think that would be warranted in this case). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > While the model is evaluated on a mixture of double- and triple-residue mutations, individual results for each group are not provided… this seems like an important omission.

Agreed. While our model performs similarly to the additive baseline on double mutations, it excels at higher-order mutations. We will include this analysis in the final version. Below are results for the double mutations in PTMul:

| | Spearman | AUROC | MCC | RMSE |
|-------------------|-------------|-------------|-------------|-------------|
| DDGun | 0.28 | 0.63 | 0.22 | 2.23 |
| DDGun3D | 0.29 | 0.61 | 0.17 | 2.25 |
| Additive Baseline | 0.52 (0.01) | 0.76 (0.01) | 0.30 (0.01) | 1.95 (0.01) |
| MEM (ours) | 0.50 (0.01) | 0.75 (0.01) | 0.34 (0.01) | 2.08 (0.01) |

and for triple and higher-order mutations in PTMul:

| | Spearman | AUROC | MCC | RMSE |
|-------------------|-------------|-------------|-------------|-------------|
| DDGun | 0.15 | 0.57 | 0.12 | 2.19 |
| DDGun3D | 0.19 | 0.60 | 0.17 | 2.20 |
| Additive Baseline | 0.49 (0.04) | 0.79 (0.04) | 0.45 (0.06) | 2.14 (0.08) |
| MEM (ours) | 0.60 (0.01) | 0.86 (0.01) | 0.58 (0.02) | 1.99 (0.02) |

> If h^t depends solely on t, why separate f^t and h^t? We tried both and found that adding it improves performance. We believe that separating them benefits training, as f^t is a deep network and h^t is an embedding that encodes each amino acid type separately. > “Sequence features from the Evoformer and Structure Module are aggregated as input to the decoder.” Which sequence features exactly? How are they aggregated? The Evoformer and Structure Module both output LxD representations for the input sequence, where L is the sequence length and D is the hidden dimension. This is “Single repr. (r,c)” for the Evoformer in AlphaFold2 Figure 1. 
More precisely, in the OpenFold code, it is `s` at the following lines: https://github.com/aqlaboratory/openfold/blob/1d878a1203e6d662a209a95f71b90083d5fc079c/openfold/model/evoformer.py#L823 and https://github.com/aqlaboratory/openfold/blob/1d878a1203e6d662a209a95f71b90083d5fc079c/openfold/model/structure_module.py#L753C1-L754C1 The representations are normalized (LayerNorm) and added together before being fed into the decoder. We experimented with the MSA and pair representations but decided not to use them to standardize the inputs from all backbones. > “To evaluate generalization on unseen proteins, we train on cDNA proteins with low similarity to those in all evaluation benchmarks.” How exactly? We compute the sequence similarity between the cDNA proteins and proteins in the validation set. We filter out any cDNA protein from our training set with greater than 30% sequence similarity to any test protein. This ensures that the proteins in the test set are unseen. > cDNA protein statistics. Great idea. The cDNA proteins average 56.1 amino acids in length with a maximum length of 72 and minimum length of 30 amino acids. The mean MSA depth is 7797 with a standard deviation of 6282. The maximum depth is 23525 and the minimum depth is 5. A comprehensive analysis of the dataset, experimental assay, and filtering criteria can be found in their paper [1]. > The baseline referenced here is just a version of your model where the heads at the end are replaced with simple addition. Why is that a meaningful comparison? The impact of two mutations together on ΔΔG can differ from the impact of the two mutations performed separately (epistasis). Our finding that the model outperforms this additive baseline suggests that our model learns the interactions between single mutations. > Is it widely known that training and testing should have low homology? Yes, proteins with 35% or more sequence similarity are typically highly similar (e.g. same function in related organisms). 
Prior works have shown that training on proteins with substantial overlap with the test set leads to heavily overestimated performance [2]. Many prior works build training (Q1744 [3]) and validation splits (s669 [4], t2837 [5]) with low similarity overlap. In this work, we filter our training set to keep low sequence similarity with all our validation sets.

> Frozen embedding model

We find that fine-tuning the AlphaFold2 backbone improves performance on S669.

| | Spearman | AUC | MCC | RMSE |
|-----------|----------|------|------|------|
| Freeze | 0.48 | 0.72 | 0.20 | 1.46 |
| Fine-tune | 0.56 | 0.76 | 0.37 | 1.36 |

> Could you elaborate more on how AlphaFold2 was finetuned?

We do not subsample the protein sequence during fine-tuning, as the proteins are shorter than the crop size. The MSA sampling is performed randomly. Many implementation details are hard to explain in plain English. We are happy to publish the code upon acceptance.

[1] Tsuboyama, K., Dauparas, J., Chen, J., Mangan, N.M., Ovchinnikov, S., Rocklin, G.J.: Mega-scale experimental analysis of protein folding stability in biology and protein design. [2] Montanucci, L., Savojardo, C., Martelli, P.L., Casadio, R., Fariselli, P.: On the biases in predictions of protein stability changes upon variations: the INPS test case. [3] Li, B., Yang, Y.T., Capra, J.A., Gerstein, M.B.: Predicting changes in protein thermodynamic stability upon point mutation with deep 3d convolutional neural networks. [4] Pancotti, C., Benevenuta, S., Birolo, G., Alberini, V., Repetto, V., Sanavia, T., Capriotti, E., Fariselli, P.: Predicting protein stability changes upon single-point mutation: a thorough comparison of the available tools on a new dataset. [5] Diaz, D.J., Gong, C., Ouyang-Zhang, J., Loy, J.M., Wells, J.T., Yang, D., Ellington, A.J., Dimakis, A., Klivans, A.R.: Stability oracle: A structure-based graph-transformer for identifying stabilizing mutations. 
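The train/test filtering procedure described in this exchange can be sketched as follows. The similarity function here is a toy identity-fraction stand-in, not the authors' actual sequence-alignment pipeline:

```python
def filter_training_set(train_proteins, test_proteins, similarity, max_sim=0.30):
    """Drop any training protein whose sequence similarity to ANY test
    protein exceeds the threshold (30% in the rebuttal above)."""
    return [p for p in train_proteins
            if all(similarity(p, q) <= max_sim for q in test_proteins)]

# Toy similarity: fraction of matching positions between two sequences
def toy_similarity(a, b):
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n

train = ["ACDEFG", "ACDEFA", "MNPQRS"]
test = ["ACDEFG"]
print(filter_training_set(train, test, toy_similarity))  # ['MNPQRS']
```

Both the exact match and the 5/6-identical near-duplicate are removed, leaving only the dissimilar protein, which is the behavior the rebuttal describes for the cDNA training set.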
--- Rebuttal Comment 1.1: Comment: >Yes, proteins with 35% or more sequence similarity are typically highly similar (e.g. same function in related organisms). Prior works have shown that training on proteins with substantial overlap with the test set leads to heavily overestimated performance [2]. Many prior works build training (Q1744 [3]) and validation splits (s669 [4], t2837 [5]) with low similarity overlap. In this work, we filter our training set to keep low sequence similarity with all our validation sets. I understand that training and validation proteins should have low sequence similarity; my question was whether it's widely known that the papers you mention ([12, 28, 55]) have sketchy validation sets. Unless that claim is documented somewhere, you'll need to provide more details to prove it before you can toss out their reported metrics. The rest looks good. If the above is resolved, I'll raise my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: > I understand that training and validation proteins should have low sequence similarity; my question was whether it's widely known that the papers you mention ([12, 28, 55]) have sketchy validation sets. Unless that claim is documented somewhere, you'll need to provide more details to prove it before you can toss out their reported metrics. Thank you for clarifying the question. We will provide these details in the final version. “Predicting the effect of single and multiple mutations on protein structural stability” [12] generates splits at the residue level, so similarity between training and validation proteins is not accounted for. In Section 4.1.4., “data was split into these sets under the constraint that each unique wild type mutation combination appeared only in a single set.” Maestro [28] performs cross validation (see Table 1 in [28]). We could not find the split curation in the paper, but found the same protein in multiple folds in their data release. 
https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0548-6#additional-information

Dynamut2 [55] generates splits at the mutation level. In Section 4.1 paragraph 2, “Our final dataset comprised 1,098 entries (710 destabilizing and 388 stabilizing) (Figure S4), which were randomly split into train and test sets comprising 872 and 227 entries, respectively.” Their train and test data for multiple mutations contain overlapping PDB IDs (e.g. 1ACB) at https://biosig.lab.uq.edu.au/dynamut2/data

We categorized the Dynamut2 test set based on whether the sample’s PDB ID appears in the training set (found at https://biosig.lab.uq.edu.au/dynamut2/data). The counts and errors are detailed below. Errors are noticeably higher for the PDB IDs that have not been seen during training compared to those that have been seen. This suggests that their test set performance might not be representative of their model’s performance on new proteins. Please note that in this context, we are not using a strict sequence similarity filter, but rather a more lenient matching filter.

| | Num. PDB | RMSE |
|------------|----------|------|
| Full Test | 226 | 1.66 |
| Seen PDB | 213 | 1.59 |
| Unseen PDB | 13 | 2.52 |

[12] Dehghanpoor, R., Ricks, E., Hursh, K., Gunderson, S., Farhoodi, R., Haspel, N., Hutchinson, B., Jagodzinski, F.: Predicting the effect of single and multiple mutations on protein structural stability. [28] Laimer, J., Hofer, H., Fritz, M., Wegenkittl, S., Lackner, P.: Maestro-multi agent stability prediction upon point mutations. [55] Rodrigues, C.H., Pires, D.E., Ascher, D.B.: Dynamut2: Assessing changes in stability and flexibility upon single and multiple point missense mutations.
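The seen/unseen PDB categorization behind the counts reported in this comment can be reproduced schematically as follows (the values below are invented toy numbers, not the Dynamut2 data):

```python
import math

def rmse(pairs):
    # Root-mean-square error over (prediction, ground truth) pairs
    return math.sqrt(sum((p - t) ** 2 for p, t in pairs) / len(pairs))

def split_by_seen_pdb(test_samples, train_pdb_ids):
    """Partition test samples by whether their PDB ID appears in training.

    test_samples: list of (pdb_id, predicted_ddg, true_ddg) tuples."""
    seen = [(p, t) for pdb, p, t in test_samples if pdb in train_pdb_ids]
    unseen = [(p, t) for pdb, p, t in test_samples if pdb not in train_pdb_ids]
    return seen, unseen

test = [("1ACB", 0.4, 0.9), ("1ACB", -0.2, 0.1), ("9XYZ", 1.5, -0.5)]
seen, unseen = split_by_seen_pdb(test, {"1ACB"})
print(round(rmse(seen), 3), round(rmse(unseen), 3))  # 0.412 2.0
```

This is a lenient ID-matching filter, as the authors note; a strict sequence-similarity filter would replace the `pdb in train_pdb_ids` membership test with an alignment-based threshold.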
Summary: This work predicts changes in thermodynamic stability for single or higher-order mutations on top of AlphaFold2 modules. The proposed model leverages linear aggregation of mutational scores on all possible sites in the latent space to decode $\Delta\Delta G$ value for deep mutations in parallel. Strengths: - The workflow is easy to understand, which proposes a lightweight solution for an important direction of research. - Compared to most existing work, the proposed method 'runs a strong backbone only once and a lightweight decoder N times in parallel'. Weaknesses: - The innovation is incremental. The main algorithm relies heavily on the existing AlphaFold2 model. - While the authors claim the main contribution is that they designed 'a simple, parallel decoding algorithm', this is not the first method that tries to decode all possible mutational scores from a large latent representation (a $L\times 20$ matrix in this paper). See, for instance, https://arxiv.org/pdf/2304.08299.pdf. - Notations are poorly explained. For instance, the meanings of p, t, and the superscripts A and Y are undefined on page 4. - References are missing in many places (for instance, ESM2 in Table 3). Also please define abbreviations properly when they first appear (even when they are used widely). For example, AUC and MCC on page 6. - A conclusion section is missing at the end. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How is the ratio '3%' observed in line 176? - Similarly, why are $\Delta\Delta G<-0.5$ considered stabilizing? Any reference here? - Since the authors use ProteinGym, is the proposed method applicable to amino acid addition and deletion? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Although the authors discussed limitations of the research in Section 6, they focused mainly on the technical limitations. No negative societal impact was mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > “The innovation is incremental. The main algorithm relies heavily on the existing AlphaFold2 model.” We experimented with multiple backbones, including AlphaFold2, ESM2, and MSA-Transformer (see Table 6a). Among these backbones, AlphaFold2 happens to be the strongest-performing one. We present a new model paradigm that predicts the effects of all mutations, rather than a single mutation or position as in prior works [1,2,3,4]. We see our contribution as orthogonal to the choice of backbone. Earlier works found that AlphaFold2 is not sensitive to single mutations [5], concluding “AlphaFold may not be immediately applied to other problems or applications in protein folding.” To the best of our knowledge, we are the first to apply AlphaFold2 for predicting ΔΔG values of mutations. > This is not the first method that tries to decode all possible mutational scores from a large latent representation (a L × 20 matrix in this paper). See, for instance, https://arxiv.org/pdf/2304.08299.pdf. Thanks for sharing “Accurate and Definite Mutational Effect Prediction with Lightweight Equivariant Graph Neural Networks” by Bingxin Zhou, Outongyi Lv, Kai Yi, Xinye Xiong, Pan Tan, Liang Hong and Yu Guang Wang. Per NeurIPS guidelines, this paper is considered contemporaneous work, as it was arXiv’ed 34 days before the NeurIPS paper deadline. We are happy to discuss it as such in the final version. > Notations are poorly explained. For instance, the meaning of p, t, and superscription A, Y are undefined on page 4. We decided to explain all the notations in the preliminaries section (p, t, A, Y are all defined in lines 113-115). We can see that this might be confusing and will work on clearing this up. > References are missed in many places (for instance, ESM2 in Table 3). Also please define abbreviations properly when they first appear (even when they are used widely). For example, AUC and MCC are on page 6. Thank you, we will add them. 
> A conclusion section is missing at the end. We omitted a conclusion due to space constraints. We will include one in the additional page allotted for the final version. > How is the ratio '3%' observed in line 176? Line 176 states "Only 3% of the mutation sets in our training set are stabilizing.” This is a dataset statistic. Our training set consists of mutations and their corresponding ΔΔG value. The percentage of mutations with ΔΔG < -0.5 kcal/mol is 3%. > Why are ΔΔG < −0.5 considered stabilizing? Any reference here? We follow Benevenuta et al. [6] in categorizing mutations with a 0.5 kcal/mol decrease in free energy as stabilizing. We will add the reference to the final version. > Since the authors use ProteinGym, is the proposed method applicable to amino acid addition and deletion? We have not considered insertions and deletions yet. Our primary application of protein engineering modifies an existing protein only slightly to increase thermodynamic stability. Insertions and deletions cause a shift in the entire sequence, leading to global changes in the protein structure. [1] Benevenuta, S., Pancotti, C., Fariselli, P., Birolo, G., Sanavia, T.: An antisymmetric neural network to predict free energy changes in protein variants. [2] Li, B., Yang, Y.T., Capra, J.A., Gerstein, M.B.: Predicting changes in protein thermodynamic stability upon point mutation with deep 3d convolutional neural networks. [3] Umerenkov, D., Shashkova, T.I., Strashnov, P.V., Nikolaev, F., Sindeeva, M., Ivanisenko, N.V., Kardymon, O.L.: Prostata: Protein stability assessment using transformers. [4] Diaz, D.J., Gong, C., Ouyang-Zhang, J., Loy, J.M., Wells, J.T., Yang, D., Ellington, A.J., Dimakis, A., Klivans, A.R.: Stability oracle: A structure-based graph-transformer for identifying stabilizing mutations. 
[5] Pak, M.A., Markhieva, K.A., Novikova, M.S., Petrov, D.S., Vorobyev, I.S., Maksimova, E.S., Kondrashov, F.A., Ivankov, D.N.: Using alphafold to predict the impact of single mutations on protein stability and function. [6] Benevenuta S., Birolo G., Sanavia T., Capriotti E., Fariselli P. Challenges in predicting stabilizing variations: An exploration. --- Rebuttal Comment 1.1: Comment: Is there anything else the reviewer would need to know to raise the final rating?
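The stabilizing-mutation statistic discussed in this rebuttal (3% of training mutations with ΔΔG < -0.5 kcal/mol) is just a threshold count over the dataset. A minimal sketch, using synthetic ΔΔG values as placeholders (not real data):

```python
# Hedged sketch of the dataset statistic described above: a mutation is
# labeled stabilizing when its free-energy change falls below the
# -0.5 kcal/mol threshold of Benevenuta et al. [6].
# The ddg values here are synthetic placeholders, for illustration only.

def stabilizing_fraction(ddg_values, threshold=-0.5):
    """Fraction of mutations with ddG below `threshold` (kcal/mol)."""
    stabilizing = [d for d in ddg_values if d < threshold]
    return len(stabilizing) / len(ddg_values)

ddg = [1.2, 0.3, -0.7, 2.1, -0.1, 0.8, -1.4, 0.5, 0.9, 1.1]
print(stabilizing_fraction(ddg))  # 2 of 10 values below -0.5 -> 0.2
```

On the real training set of ΔΔG labels, this fraction comes out to roughly 0.03, which is the 3% mentioned above.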
NeurIPS_2023_submissions_huggingface
2023
Offline Reinforcement Learning with Differential Privacy
Accept (poster)
Summary: This paper proposes two offline RL algorithms with differential privacy guarantees. The two pessimism-based algorithms apply to both tabular and linear MDP settings. In theory, the authors prove that the proposed algorithms achieve instance-dependent sub-optimality bounds while guaranteeing differential privacy. A nice property is that the cost of privacy appears only in lower-order terms and thus becomes negligible as the number of samples grows large. Strengths: (1) While DP algorithms have been studied in online RL settings, the study of DP in offline RL is limited. This paper provides the first provable study along this line. (2) Despite being a theoretical paper, it is well written and easy to follow. Weaknesses: (1) The motivation of the considered problem needs stronger real-world justification. In the introduction, the authors use a medical example to motivate the need to consider privacy in offline RL. Why could the data owner (hospital or doctors) not do offline policy evaluation or policy optimization directly on the raw data? Why do we need to generate a private policy? On the other hand, if the offline policy optimization is requested by a third party rather than the data owner, it is justified that the patient's data needs to be protected. However, in this case, the third party would not have access to the raw data. This contradicts the setting considered in this paper, as in the proposed Algorithm 1 and Algorithm 2 the input is the raw data. Therefore, it is important and helpful to provide a convincing real example to justify the problem setting and algorithm designs. (2) In Assumption 2.2, the data distribution needs to satisfy a minimum eigenvalue condition. This assumption might be violated when the feature space is large or some features are highly correlated. It would be helpful to provide some discussion of this assumption and how to remedy it when it is violated.
(3) In the tabular MDP (DP-APVI) algorithm, Gaussian noise is added to the integer counts $n_{s_h, a_h}$ and $n_{s_h, a_h, s_{h+1}}$ to obtain a private estimate of the transition kernel. Because of this design, the private counts might be negative or very small, which makes the uncertainty estimation (line 5 of Algorithm 1) unstable. To handle this, the authors use a truncation approach with a theoretical truncation rate $E_{\rho}$. It is unclear whether adding the Gaussian noise to the count statistics is a good choice. Can we add the Gaussian noise directly to the non-private estimate of the transition kernel? Some justification of the proposed private estimation would be helpful. It is also important to discuss the advantages and limitations of the proposed private estimation in comparison with other choices in the existing RL literature. (4) In Algorithm 2 and line 235 of page 7, the authors require two independent offline datasets of equal length. Can the authors clarify the rigorous condition behind "two independent offline datasets"? Do you need each sample (across $K$) at each time horizon (across $H$) to be independent in each offline dataset? If yes, this is an unrealistic condition in offline RL problems. Moreover, I did not find such a condition in the statement of Theorem 4.1. Where do you use this "independence" assumption? (5) In the experiments (Figure 1), the simulations use 5 replicates. Can the authors include the uncertainty in Figure 1? If the uncertainty is large, it would be helpful to increase the number of replications. ~~~~~~~~~~~ After rebuttal: my major comments have been nicely addressed. I have increased the score to 7. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weakness section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your high-quality review and your support. We will reply to the weaknesses you stated. **In the introduction, the authors used a medical example to motivate the need of considering privacy in offline RL. Why could the data owner (hospital or doctors) not do offline policy evaluation or policy optimization directly on the raw data? Why do we need to generate a private policy?** Even if the owner has access to the raw data, it is risky to work directly on it. If the data owner does offline RL directly on the raw data, there is a risk of privacy leakage. For instance, a membership inference attack ([1]) could detect the data used in the training procedure by observing the output policy of the offline RL algorithm. In other words, the attacker could reconstruct the medical history of the patients whose data is used for training the model, which is harmful to the privacy of those patients. In contrast, training the model with a DP guarantee provably prevents such risks. [1] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership inference attacks against machine learning models. **On the other hand, if the offline policy optimization is requested by a third party, not the data owner, it is justified that the patient's data needs to be protected. However, in this case, the third party would not have access to the raw data. This contradicts the setting considered in this paper, as in the proposed Algorithm 1 and Algorithm 2 the input is the raw data.** This is a very good point. Local DP, where the data must be privatized before being sent to the algorithm, characterizes such a situation. Under local DP, noise should be added to the raw data, and the algorithm would then aggregate the privatized data and learn the policy. This is an interesting but different problem from our setting. We believe that this could be a good future direction.
**About Assumption 2.2: the minimum eigenvalue condition.** First, we would like to highlight that the DP guarantee does not depend on this assumption; the assumption is only used to derive sub-optimality bounds. In addition, such a coverage assumption is standard in the offline RL literature, since without it there will be a constant sub-optimality gap for the output policy in the worst case, even without DP constraints ([2]). Therefore, we rely on the coverage assumption and derive an asymptotic sub-optimality bound. We will add these discussions in the revision. [2] Ming Yin and Yu-Xiang Wang. Towards instance-optimal offline reinforcement learning with pessimism. NeurIPS 2021. **Can we add the Gaussian noise directly to the non-private estimation of transition kernel? Some justifications on the proposed private estimation would be helpful. It is also important to discuss the advantages and limitations of the proposed private estimation while comparing with other choices in existing RL literature.** Adding noise to visitation counts is a widely applied method in the DP RL literature ([3],[4]). Adding noise to the empirical transition kernel is a possible choice, but it has two issues. First, given the sensitivity analysis, the $\ell_1$ difference between the private and non-private transition kernel estimates would be of the same order as in our approach. Second, the resulting private transition kernel would not be a valid probability distribution, which is not applicable for our Bernstein-type pessimism. In comparison, we effectively solve the second issue by solving an optimization problem. Similar private estimation approaches also appear in the DP online RL literature. [4] directly adds Gaussian noise to the visitation counts and constructs the private transition kernel estimate. As a result, [4] can only operate on a Hoeffding-type bonus and derives sub-optimal regret bounds.
Recently, [3] followed our construction of the private transition kernel estimate and operates on a Bernstein-type bonus. As a result, [3] improves the regret bound of [4] and derives nearly minimax-optimal results. Therefore, our method for privately estimating the transition kernel is in general not improvable. [3] Dan Qiao and Yu-Xiang Wang. Near-Optimal Differentially Private Reinforcement Learning. AISTATS, 2023. [4] Sayak Ray Chowdhury and Xingyu Zhou. Differentially private regret minimization in episodic Markov decision processes. arXiv preprint arXiv:2112.10599, 2021. **In Algorithm 2 and line 235 of Page 7, the authors require two independent offline datasets with equal length. Can the authors clarify the rigorous condition behind the "two independent offline datasets"? Do you need each sample (across $K$) at each time horizon (across $H$) to be independent in each offline dataset? If yes, this is an unrealistic condition in offline RL problems. Moreover, I did not find such a condition in the statement of Theorem 4.1. Where do you use this "independence" assumption?** This is a good question. Instead of requiring the data to be independent at each time horizon, we only require each trajectory (across $K$) to be independent, and this is naturally ensured by our offline RL setting, where the data is sampled i.i.d. according to some behavior policy. The independence of the two datasets is only for technical reasons: it ensures that the dataset for weighted ridge regression is sampled i.i.d. from the behavior policy even given the estimated variances. We will clarify this in the revision. **In the experiments (Figure 1), the simulations are for 5 replicates. Can the authors include the uncertainty in Figure 1? If the uncertainty is large, it would be helpful to increase the number of replications.** We will add the confidence interval in the revision. Thanks again for the helpful review! We hope that our response addresses your main concerns.
We are open to further discussions. --- Rebuttal 2: Comment: Thank you for your positive feedback and increasing the score. We will include the discussions in the final version according to your comments.
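The private transition-kernel construction debated in the rebuttal above can be illustrated with a minimal sketch: Gaussian noise is added to the visitation counts for a fixed (s, a) pair, and the noised counts are then turned back into a valid probability distribution. This is an illustrative stand-in, not the paper's exact procedure — the paper obtains validity by solving an optimization problem, whereas here we simply clip negatives and renormalize, and the noise scale `sigma` is a placeholder rather than a sensitivity-calibrated value.

```python
import random

random.seed(0)  # reproducibility of the illustration only

def private_transition_estimate(counts, sigma):
    """Sketch: noise the counts n(s, a, s') for one fixed (s, a) pair,
    then clip negatives and renormalize into a valid distribution.
    `sigma` is a placeholder noise scale; a real Gaussian mechanism
    calibrates it to the sensitivity and the zCDP budget rho."""
    noisy = [c + random.gauss(0.0, sigma) for c in counts]
    clipped = [max(c, 0.0) for c in noisy]  # private counts can be negative
    total = sum(clipped)
    if total == 0.0:  # degenerate case: fall back to the uniform distribution
        return [1.0 / len(counts)] * len(counts)
    return [c / total for c in clipped]

p_hat = private_transition_estimate([40.0, 55.0, 5.0], sigma=1.0)
print(sum(p_hat))  # a valid distribution: sums to 1 (up to rounding)
```

The clip-and-renormalize step is exactly what makes the estimate compatible with a Bernstein-type pessimism term, which requires a genuine probability distribution; noising the empirical kernel directly, as discussed above, would not provide this.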
Summary: This paper addresses the offline RL problem under differential privacy constraints. Tabular and linear MDPs are considered, and both forms of DP, traditional DP and zCDP, are studied. The authors cast the DP definition into the offline RL problem as a constraint for protecting trajectories. Two algorithms, DP-APVI (resp. DP-VAPVI), are introduced, treating the tabular (resp. linear) case. These algorithms rely on the pessimism principle, and DP is obtained by adding a Gaussian mechanism, either on the counts or during variance estimation. The authors prove zCDP compliance for both algorithms, and a comparison with non-private algorithms is provided. Finally, the authors empirically evaluate the performance of DP-VAPVI under different privacy budgets. Strengths: The main strength of this paper is that it is, to my knowledge, the first to tackle offline RL with DP constraints, which is an important problem. The paper introduces two sound and practical algorithms, respecting differential privacy by design. Furthermore, there is a discussion comparing the algorithms with their non-private counterparts, as well as experimental validation. Weaknesses: - Globally, the paper is well written; however, I found the section on DP-VAPVI very hard to follow, especially the algorithm. - The experiments study the performance of the algorithms under different privacy budgets, but it is hard to interpret those budgets for zCDP. In the traditional DP case, one often sees very high values of \epsilon, leading to a debatable differentially private algorithm in practice. Would it be possible to have a discussion about \rho? What would be reasonable values? - Although there may be no other papers directly studying DP offline RL, there is work in the literature on privacy attacks in RL, such as membership inference attacks: R. Shokri, M. Stronati, C. Song, and V. Shmatikov.
Membership inference attacks against machine learning models; or Maziar Gomrokchi, Susan Amin, Hossein Aboutalebi, Alexander Wong, and Doina Precup. Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. It would have been nice to have a discussion of those attacks to better estimate the practical impact of DP cast in this way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - See the Weaknesses section; could the authors comment on the \rho factor? And would it provide robustness to membership inference attacks? - You only consider an MDP with 2 states in the experiments; would the algorithm scale to bigger MDPs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your high-quality review and your support. We will reply to the weaknesses you stated. **Globally, the paper is well-written, however, I found the section on DP-VAPVI very hard to follow, especially the algorithm.** We apologize for this and will improve the readability in the revision. Briefly speaking, the non-private counterpart of DP-VAPVI contains three parts: variance estimation, weighted ridge regression, and adding pessimism. In the first part, the variance of the value function is estimated. In the second part, the algorithm applies weighted ridge regression based on the estimated variances to derive tighter results. Finally, pessimism is added to the estimated value function. To achieve differential privacy, we add Gaussian noise to the sufficient statistics. We will surely improve the writing according to your suggestions. **Experiments study the performance of the algorithms under differential privacy budgets, but it feels very hard to have an interpretation of those budgets for zCDP. In the traditional DP case, one often sees very high values of $\epsilon$, leading to a debatable differentially private algorithm in practice. Would it be possible to have a discussion about $\rho$? What would be reasonable values?** That is a very good point, and we will add the discussion. Briefly speaking, a budget for zCDP can be translated to a budget for DP, and $\rho$ is roughly $\min(\epsilon,\epsilon^2/2)$. Therefore, a smaller $\rho$ provides stronger privacy protection, and a small constant would be a reasonable value for the zCDP budget. For instance, [1] takes the privacy budget to be around 1. In practice, any $\rho<10$ should be considered a meaningful privacy guarantee. The choice of $\rho$ is often cast as a social question, and the preferred value differs across applications. For this reason, we experimented with a range of different $\rho$ in our simulation. [1] Xu et al.
Federated Learning of Gboard Language Models with Differential Privacy. arXiv:2305.18465. 2023. **Although there may be no other papers directly studying DP offline RL, there is work in the literature on privacy attacks in RL, such as membership inference attacks: R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership inference attacks against machine learning models; or Maziar Gomrokchi, Susan Amin, Hossein Aboutalebi, Alexander Wong, and Doina Precup. Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning. It would have been nice to have a discussion of those attacks to better estimate the practical impact of DP cast in this way.** Thanks for pointing us to the literature on membership inference attacks. The guarantee of DP ensures that even if all the other training data points are known, it is statistically hard to predict whether a given data point appears in the training dataset. More specifically, given a fixed type-I error rate, the DP guarantee provides a lower bound on the type-II error rate of the membership inference attack. Therefore, a DP algorithm provably provides robustness against membership inference attacks. **You only consider an MDP with 2 states in the experiments, would the algorithm scale to bigger MDPs?** We use the same MDP setting as previous papers on offline RL. The algorithm can be applied to MDPs with a larger state space as well, as long as the MDP admits a linear structure. We leave experiments on more complex benchmarks such as D4RL (which will require designing deep offline RL algorithms with DP) as a future direction. Thanks again for the helpful review! We hope that our response addresses your main concerns. We are open to further discussions. --- Rebuttal Comment 1.1: Comment: My major comments have been addressed. I believe that the contributions are enough for acceptance, and I therefore maintain my score.
--- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback. We will include the discussions in the final version according to your comments.
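The zCDP budget discussion in the rebuttal above can be made concrete with the standard zCDP-to-DP conversion of Bun and Steinke (2016): ρ-zCDP implies (ε, δ)-DP with ε = ρ + 2√(ρ ln(1/δ)) for any δ > 0. A minimal sketch:

```python
import math

def zcdp_to_dp(rho, delta=1e-5):
    """Standard conversion (Bun & Steinke, 2016): rho-zCDP implies
    (eps, delta)-DP with eps = rho + 2 * sqrt(rho * ln(1/delta))."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Smaller rho -> smaller eps -> stronger privacy protection.
for rho in (0.1, 1.0, 10.0):
    print(f"rho = {rho:5.1f}  ->  eps ~ {zcdp_to_dp(rho):.2f}")
```

Conversely, satisfying ε-DP implies (ε²/2)-zCDP, which is the source of the rough ρ ≈ min(ε, ε²/2) correspondence mentioned in the rebuttal.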
Summary: This paper focuses on reinforcement learning (RL) in an offline setting under differential privacy considerations. It proposes and analyzes a new approach to learning policies in this specific setting, leveraging the Bernstein concentration inequality. Reinforcement learning explores an agent's interaction with its environment over a series of episodes. The environment is defined as a Markov decision process (MDP) comprising states, actions available to the agent, a reward function, and transition dynamics. Through this interaction, the agent gathers feedback to learn a policy that optimizes the cumulative reward. In this paper, the agent operates under the constraints of differential privacy, implying that it interacts with an environment containing sensitive data. Here, observations from each episode are deemed private, being tied to individual users. Traditionally, two settings have been studied: the online and offline settings. In the online setting, an agent learns a policy through active data collection by exploring the environment. Conversely, the offline setting involves the agent receiving all data upfront, barring any access to the environment for training. Prior studies have explored these settings without privacy constraints, but only the online setting has been studied under differential privacy. This paper breaks new ground by studying reinforcement learning in the offline setting under differential privacy. Like its online counterpart, the offline setting necessitates the construction of tight confidence bounds around sufficient statistics for policy approximation. However, due to privacy-induced noise, the algorithm must inflate the confidence bounds to compensate. The primary innovation of this paper is the proposal of new confidence bounds, utilizing the Bernstein concentration inequality, a departure from the traditional Hoeffding concentration.
This novel method offers improved error bounds under specific instance-dependent conditions, although the bounds are equivalent to Hoeffding's in the worst-case scenario. The authors cleverly adapt techniques from previous research while also introducing new methodologies that could potentially extend beyond this work. A significant contribution lies in the estimation of variance from noisy statistics, thereby enabling the utilization of the Bernstein concentration. In sum, this paper illuminates a hitherto unexplored area in the RL community, offering innovative solutions and techniques for reinforcement learning in the offline setting under differential privacy. Strengths: The primary strength of this paper lies in its pioneering examination of differential privacy reinforcement learning (DP-RL) in the offline setting, effectively bridging a notable gap in existing academic literature. By venturing into this uncharted territory, the paper opens up possibilities for practical applications in real-world scenarios where sensitive data may be involved. The paper's successful application of differential privacy to offline RL deserves commendation, given the unique challenges associated with this endeavor. While one might assume that the principles governing DP-RL in the online setting would seamlessly translate to the offline setting, this is not the case. The paper adeptly navigates these challenges, setting a benchmark for future investigations into this area. Another significant contribution of this work is the innovative use of Bernstein concentration in the estimation of error bounds. Prior studies primarily employed Hoeffding concentration, which, although effective, offered limited utility under specific conditions. In contrast, Bernstein concentration proves to be more flexible, offering improved error bounds, particularly under specific instance-dependent conditions. 
This innovative application, therefore, has the potential to enhance results not only in the offline setting but also in the online RL environment. Furthermore, the paper presents an empirical evaluation that provides rich insights into the practical implications of their proposed techniques. This robust empirical analysis not only validates the theoretical contributions but also offers tangible results that underscore the effectiveness of the proposed methods. In conclusion, the paper's merits extend from filling a knowledge void in DP-RL literature, effectively tackling challenges in translating DP-RL principles from online to offline settings, to innovatively employing Bernstein concentration in error bound estimation. The comprehensive empirical evaluation serves as the icing on the cake, demonstrating the practicality and effectiveness of the proposed methods. The paper's contributions are both theoretical and practical, promising to advance understanding and application of DP-RL in offline settings. Weaknesses: Many of the techniques presented in this paper are not new. The concentration bounds utilized, for instance, have been previously developed and employed in other works. However, the paper's merit lies in its successful adaptation of these existing techniques for a specific context. Therefore, this paper effectively demonstrates how to repurpose these pre-existing tools for its unique setting, thereby contributing to the literature in a meaningful way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your high-quality review and your support. We really appreciate the detailed and insightful summary. For the weakness, we agree that part of the techniques (including Gaussian mechanism and Bernstein-type pessimism) have been studied. Our main technical contribution is to privatize Bernstein-type pessimism to get tight sub-optimality bounds, as discussed in your review. Thanks again for the appreciation of our work. We are open to further discussions. --- Rebuttal 2: Comment: I read all reviews and rebuttals. For now I will maintain my score. --- Rebuttal Comment 2.1: Comment: Thanks for your positive feedback. We really appreciate your support and your insightful review.
Summary: The paper proposes an algorithm for offline reinforcement learning with differential privacy (DP), which protects the privacy of the original information using a Gaussian mechanism based on pessimism. The motivation and ideas behind the paper are clear and meaningful. However, there are some issues with the methods and experiments. Overall, the paper presents some interesting ideas, but additional experiments and comparisons are needed to fully evaluate the proposed method and its practical value. Strengths: 1. The authors propose a method for protecting the privacy of original information in offline reinforcement learning. They achieve this goal by implementing differential privacy (DP) in their proposed method, which is a meaningful contribution to the field of privacy-preserving machine learning. 2. The authors implement their ideas in the APVI and VAPVI models and provide a thorough theoretical analysis of the proposed method. The writing is clear and the theoretical analysis is extensive, providing a strong foundation for the authors' claims. 3. The authors conduct experiments on simulated datasets, which provide preliminary evidence of the method's performance. Weaknesses: The paper proposes two models, DP-APVI and DP-VAPVI, for solving the offline reinforcement learning problem with privacy guarantees. While the paper presents some interesting ideas, there are several issues: 1. The paper only includes results for DP-VAPVI, and does not provide any experimental results for DP-APVI. It would be helpful to see how DP-APVI performs in comparison to DP-VAPVI and other baseline methods. 2. The results for DP-VAPVI consistently show a performance gap compared to VAPVI, which raises questions about the competitiveness of DP-VAPVI in practice. Without additional experiments or comparisons with other methods, it is difficult to assess the practical value of DP-VAPVI. 3.
The paper claims that DP-VAPVI will converge to VAPVI as the dataset size increases, but there is no experimental evidence to support this claim. 4. The paper does not discuss the impact of the privacy budget (ρ) on the privacy protection and performance of the algorithms. It would be helpful to see how different values of ρ affect the results. 5. The paper lacks ablation experiments investigating the extent to which DP itself, as a plug-in for the APVI and VAPVI models, maintains privacy and affects performance. It would be helpful to see how different components of the DP-APVI and DP-VAPVI models contribute to the overall performance. 6. The paper only compares DP-VAPVI and VAPVI with PEVI, which is not a privacy-preserving method. It would be helpful to see how DP-VAPVI and DP-APVI compare to other privacy-preserving methods. 7. The experiments are conducted on synthetic datasets, which may not fully reflect the complexity and diversity of real-world problems. It would be helpful to see how the proposed methods perform on real-world datasets. Methodologically, the paper proposes to add additional Gaussian mechanisms to the APVI and VAPVI models to ensure privacy, but the main methods are still based on APVI and VAPVI. While the paper's definition of neighboring datasets and the use of the Gaussian mechanism for differential privacy are contributions, the experimental results are not convincing enough. The authors should demonstrate the advantages of their proposed methods on a wider range of datasets and models. I'm not an expert on differential privacy. My evaluation is based on how well I was able to comprehend the information in the paper. I don't fully comprehend how this work contributes to the overall growth of the field.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your high-quality review and positive score. We agree that we mainly use simulations to support our theories. Since we are taking the first step towards differential privacy under offline RL, we mainly analyze our algorithms through theory while only running simulations on toy examples. We leave real-world experiments (which will require designing deep offline RL algorithms with DP) as a future direction. We will reply to the weaknesses you stated. **The paper only includes results for DP-VAPVI, and does not provide any experimental results for DP-APVI. It would be helpful to see how DP-APVI performs in comparison to DP-VAPVI and other baseline methods.** We focus on simulations under linear MDPs since ours is the first simulation under linear MDPs; previous works on DP RL under linear MDPs focus only on theory. As for comparison, DP-APVI and DP-VAPVI are not directly comparable since the settings are different. In addition, this is the first work studying offline RL with DP, and there are no available baselines; that is why we do not include baseline methods. **The paper does not discuss the impact of the privacy budget ($\rho$) on the privacy protection and performance of the algorithms. It would be helpful to see how different values of $\rho$ affect the results.** As the privacy budget $\rho$ grows larger, the privacy protection becomes weaker, while the performance (sub-optimality) of the output policy improves and moves closer to the non-private case. The impact of the choice of $\rho$ is shown in the theorems and Figure (b) of the experiments. **The paper only compares DP-VAPVI and VAPVI with PEVI, which is not a privacy-preserving method. It would be helpful to see how DP-VAPVI and DP-APVI compare to other privacy-preserving methods.** To the best of our knowledge, this is the first work studying offline RL with DP, and there are no other privacy-preserving methods for this task. Although previous works exist on off-policy evaluation (OPE) with DP, they are not comparable to our methods. Therefore, we mainly compare the DP-VAPVI algorithm to its non-private counterpart. **The experiments are conducted on synthetic datasets, which may not fully reflect the complexity and diversity of real-world problems. It would be helpful to see how the proposed methods perform on real-world datasets.** This is a very good point, and we agree that real-world experiments could help validate the theories for DP RL. However, real-world problems often admit more complex structures than linear MDPs (this is why previous works on DP RL all focus on synthetic datasets). Experiments on more realistic offline RL benchmarks such as D4RL may require incorporating various tricks from the offline deep RL literature, and we leave those as future work. Thanks again for the helpful review! We hope that our response addresses your main concerns. We would greatly appreciate it if you could consider raising the score.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper focuses on the offline RL problem with differential privacy. The authors propose algorithms for offline tabular MDPs and offline linear MDPs with $\rho$-DP. For the first problem, the sub-optimality bound almost matches the best existing non-private counterpart, up to an additional term $O(\sqrt{\frac{1}{\rho}})$. For the second problem, the gap between the proposed sub-optimality bound and the best existing non-private counterpart is $\mathrm{poly}(d,H,\kappa^{-1})/\sqrt{\rho}$, where $\kappa$ is the minimal coverage parameter. Strengths: 1. This paper is the first to provide an analysis for offline RL with DP, and proposes error bounds that match the non-private counterparts up to some lower-order terms. 2. Technically, the authors manage to operate on Bernstein-type pessimism while preserving privacy, achieving the tighter sub-optimality bound. Weaknesses: 1. The technical novelty is somewhat limited given the literature on online RL with DP. 2. The discussion of related work is insufficient. In particular, I wonder what the best existing regret bounds for online RL with DP are (for either tabular or linear MDPs)? If the regret bounds are not tight, could we use the Bernstein-style bonus with DP to improve the regret bounds? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For the non-privacy problem, does the error bound have a polynomial dependence on $\kappa$ for the linear MDP problem? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your high-quality review and positive score. We reply to the weaknesses you stated below. **The technical novelty is somewhat limited given the literature on online RL with DP.** We politely disagree. It is true that current techniques for online RL with DP can be adapted to the offline case. However, current works for the online case all focus on privatizing Hoeffding-type bonuses, so directly adapting current online techniques to the offline case will not provide tight sub-optimality bounds. In comparison, we operate on Bernstein-type pessimism, which is highly non-trivial. We manage to construct a confidence bound using private pessimism and lower-order additional terms, prove its validity, and bound this private Bernstein pessimism by its non-private counterpart in our final result. All such techniques are novel to our knowledge. **The discussion of related work is insufficient. In particular, I wonder what the best existing regret bounds for online RL with DP are (for either tabular or linear MDPs)? If the regret bounds are not tight, could we use the Bernstein-style bonus with DP to improve the regret bounds?** We discuss the algorithms for online RL with joint DP (JDP) or local DP (LDP) in Appendix B. We did not discuss the detailed regret bounds; we kindly refer you to [1] for the results under tabular MDPs and [2] for the results under linear MDPs. Using the Bernstein-style bonus with DP to improve the regret bounds is a very good point. Under tabular MDPs, the best result ([3]) with a Hoeffding-type bonus is $\sqrt{SAH^3T}+$ additional cost due to DP, which is sub-optimal in the first term. Recently, following the idea of privatizing the Bernstein-type bonus, a follow-up work ([1]) of this submission has improved the first term of the regret bound to $\sqrt{SAH^2T}$ (minimax optimal) while keeping the additional cost due to DP unchanged.
Under linear MDPs, the best known result ([2]) is derived by privatizing a Hoeffding-type bonus. It is an open problem whether the regret can be improved through a Bernstein-type bonus, and we believe it is a good future direction. [1] Dan Qiao and Yu-Xiang Wang. Near-Optimal Differentially Private Reinforcement Learning. AISTATS, 2023. [2] Dung Daniel Ngo, Giuseppe Vietri, Zhiwei Steven Wu. Improved Regret for Differentially Private Exploration in Linear MDP. ICML, 2022. [3] Sayak Ray Chowdhury and Xingyu Zhou. Differentially private regret minimization in episodic Markov decision processes. arXiv preprint arXiv:2112.10599, 2021. **For the non-privacy problem, does the error bound have a polynomial dependence on $\kappa$ for the linear MDP problem?** For the non-private problem, the error bound does not have a polynomial dependence on $\kappa$. The main term in our bound is identical to the non-private case. The polynomial dependence on $\kappa$ appears only in the lower-order term, and it results from bounding the worst-case difference between private and non-private statistics. Thanks again for the helpful review! We hope that our response addresses your main concerns. We would greatly appreciate it if you could consider raising the score.
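The advantage of Bernstein-type over Hoeffding-type bonuses discussed in this exchange can be illustrated with a generic sketch (ours, not from the paper): for values bounded in a range of length $H$, the empirical-Bernstein confidence width scales with the sample standard deviation rather than the full range, so it is much tighter when the variance is small. The constants below follow the Maurer–Pontil empirical Bernstein bound.

```python
import math

def hoeffding_width(n, delta, value_range):
    """Hoeffding confidence width for the mean of n i.i.d. samples
    bounded in an interval of length `value_range`."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def bernstein_width(n, delta, sample_var, value_range):
    """Empirical-Bernstein confidence width (Maurer & Pontil, 2009):
    sqrt(2 * Var * log(2/delta) / n) plus a lower-order range term.
    Tighter than Hoeffding when the sample variance is small."""
    log_term = math.log(2.0 / delta)
    return (math.sqrt(2.0 * sample_var * log_term / n)
            + 7.0 * value_range * log_term / (3.0 * (n - 1)))

# With returns bounded in [0, H] but low empirical variance, the
# Bernstein-style width is far smaller than the Hoeffding one.
n, delta, H = 10_000, 0.05, 10.0
assert bernstein_width(n, delta, sample_var=0.1, value_range=H) \
       < hoeffding_width(n, delta, value_range=H)
```

This is exactly the regime the rebuttal points to: the range-dependent term is lower order in $n$, so a variance-aware (Bernstein-type) bonus sharpens the leading term of the bound.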
null
null
null
null
null
null
MoVie: Visual Model-Based Policy Adaptation for View Generalization
Accept (poster)
Summary: This work presents an approach to train model-based RL methods such that they generalize to novel views on multiple RL benchmarks. The method leverages a classical STN and a frozen encoder to improve generalization performance. Strengths: The method is sound and simple, with barely any hyperparameter tuning. The design-choice studies, such as different moving views, and the resulting insights are helpful for the research community. Weaknesses: The writing can be largely improved in Sections 2 and 3. It is unclear what the model-based RL problem is, how view generalization is defined, and how the model is mapped to an actual control policy for evaluation, for an audience unfamiliar with this line of work. The problem is specific and the method's novelty is limited. Comparisons with baselines are unclear. At least some comparison with model-free RL and recent work [1] would be helpful. [1] Multi-View Masked World Models for Visual Robotic Manipulation, Seo et al., 2023 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In the experiments, do the baseline methods see all various views in the training sets as well? I.e., is the baseline method in Figure 3 also seeing different views? There are many other choices to solve this problem. Is contrastive learning or data augmentation on different views also a way to map different views to the same representations? I am not sure about the application to real-world robot experiments, although some visualizations are presented. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions. We address each of your comments below. **Q1:** The writing can be largely improved in Sections 2 and 3. It is unclear what the model-based RL problem is, how view generalization is defined, and how the model is mapped to an actual control policy for evaluation, for an audience unfamiliar with this line of work. **A1:** Thank you for your suggestion. We will add more preliminaries about model-based RL in the final version. Model-based reinforcement learning is an approach to solving RL problems in which the agent tries to learn a model of the environment it interacts with. Once the agent has a model of the environment, it can utilize optimal control methods, such as Model Predictive Control (MPC) and Monte Carlo Tree Search (MCTS), for planning and control. As for the definition of view generalization, we consider it out-of-distribution generalization with respect to camera intrinsic and extrinsic parameters. The visual RL agent is trained on one camera parameter setting and tested on different camera parameter settings. **Q2:** The method is sound and simple with barely any hyperparameter tuning. The problem is specific and the method's novelty is limited. **A2:** We thank the reviewer for acknowledging the conciseness and usability of our proposed approach. We greatly value your feedback. We would like to highlight some technical contributions of our method. We made an effort to explore the prediction loss in model-based RL, which can serve as a self-supervised task, and found that the dynamics model in model-based RL is a stable source of supervision for adaptation. We also propose the spatial adaptive encoder (SAE) in order to better adapt to both static and dynamic changes in view. Furthermore, our method is well-motivated, aiming to solve a fundamental and realistic question: *transforming the unseen test view to the training view*.
Since we do not have paired images to align the latent spaces of training views and test views, we use a dynamics loss combined with an STN, exploiting the consistency of dynamics prediction to align the latent space. We would also like to emphasize that besides the method we propose, one main contribution of our work is the formulation of the view generalization problem and the resulting test platform, spanning locomotion tasks and robotic manipulation tasks. **Q3:** Comparisons with baselines are unclear. At least some comparison with model-free RL and recent work [1] would be helpful. [1] Multi-View Masked World Models for Visual Robotic Manipulation, Seo et al., 2023 **A3:** Thank you for your suggestion. More comprehensive experiments incorporating comparisons with model-free RL and recent work on visual generalization can better reflect the generalization ability of our method. We compare MoVie with DrQ-v2 [1], SVEA [2] and PAD [3] on 2 tasks across 4 settings. As shown in Table 1 of the rebuttal file, MoVie outperforms the other methods across all settings. As for the recent work [4] that you mentioned, its problem setup differs from ours: it uses multi-view images during training, whereas we focus on out-of-distribution generalization and use images from only a single view during training. **Q4:** In the experiments, do the baseline methods see all various views in the training sets as well? I.e., is the baseline method in Figure 3 also seeing different views? **A4:** No. Our method also does not see various views at training time. All the methods (ours and the baselines) are trained with a single fixed view. **Q5:** There are many other choices to solve this problem. Is contrastive learning or data augmentation on different views also a way to map different views to the same representations? **A5:** The solutions to view generalization we initially thought of were contrastive learning and data augmentation.
In order to learn a view-invariant representation, images from different views or strong data augmentation are needed. However, using images from different views (as shown in Figure 3 of the rebuttal file) or strong data augmentation negatively impacts the performance of the RL algorithm [5], which is why we abandoned contrastive learning and data augmentation. **Q6:** I am not sure about the application to real-world robot experiments, although some visualizations are presented. **A6:** We believe that our manuscript clearly demonstrates online adaptation to view changes in practical robotic scenarios, covering an extensive range of robotic manipulation tasks to validate its applicability. Our setup is practical and suitable for both simulation and the real world (single-view training and any-view adaptation). In future work, we would like to conduct experiments in the real world. [1] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. [2] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep Q-learning with ConvNets and vision transformers under data augmentation. Advances in Neural Information Processing Systems, 34, 2021. [3] Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A Efros, Lerrel Pinto, and Xiaolong Wang. Self-supervised policy adaptation during deployment. ICLR, 2021. [4] Multi-View Masked World Models for Visual Robotic Manipulation, Seo et al., 2023. [5] Kostrikov, Ilya, Denis Yarats, and Rob Fergus. "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels." arXiv preprint arXiv:2004.13649 (2020). --- Rebuttal 2: Title: Thank you for the review and awaiting your response Comment: We sincerely thank you for your efforts in reviewing our paper and for the suggestions.
We believe that we have resolved all the concerns mentioned in the review. Should there be any additional concerns, we are more than happy to address them! Thank you very much!
Summary: This paper mainly provides a training paradigm. Using the environment's dynamics model as a supervisory signal during the testing phase, an STN quickly finetunes the mapping of observed latent states, so that a policy trained on a single-view task performs better on unseen test views. This work conducts comparative experiments on three types of tasks and four challenging views, achieving results that surpass existing methods. Ablation experiments prove the effectiveness of each module setting. Strengths: The training paradigm provided in this paper can effectively handle the generalization problem when view changes occur, and thorough comparative and ablation experiments have been conducted on the proposed model. The core idea of reconstructing the mapping h using the environment's dynamics model is effective. Weaknesses: 1. The subscripts for o and a in Formula 2 are missing. 2. There are issues with the baseline selection of IDM+STN. Cheetah-run alone cannot prove that IDM+STN is generally better than IDM. It is not a problem to conduct ablation experiments on the Cheetah-run model alone, but this cannot be used as sufficient evidence for the superiority of IDM+STN over IDM, and thus for selecting it as the baseline. 3. Chapter 3 emphasizes the differences between Formula 2 and Formula 1, on one hand due to the fixed parameters of network d, and on the other hand due to the use of SAE in network h. One of the core methods is to fix the parameters of d, which intuitively avoids changes in the target domain of h during the update process that would cause the policy pi to fail. However, the ablation experiment shows that the setting of d^* did not achieve a consistently positive effect. 4. The main approach is to use the dynamics model of the environment under the new view as supervision to quickly train the changed h'.
Can you obtain a generalized h directly based on o'? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Have the training methods of the three baselines in the comparative experiment also been finetuned? How much data was used during finetuning, and is this amount of data acceptable for practical applications? 2. Can the h^SAE network be used directly during training to achieve learning from multiple perspectives of tasks, i.e., directly generalize to an unseen perspective without finetuning? 3. The reason why MoVie is better than IDM+STN should be explained. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions. We address each of your comments below. **Q1:** The subscripts for o and a in Formula 2 are missing. **A1:** Thank you for catching this. We will fix this in the final version. **Q2:** There are issues with the baseline selection of IDM+STN. Cheetah-run alone cannot prove that IDM+STN is generally better than IDM. It is not a problem to conduct ablation experiments on the Cheetah-run model alone, but this cannot be used as sufficient evidence for the superiority of IDM+STN over IDM, and thus for selecting it as the baseline. **A2:** We appreciate your feedback and acknowledge your concerns regarding our choice of IDM+STN as the baseline. We broadened our experimental setup to include 5 tasks across 4 settings. The results provided in Table 3 of the rebuttal file show that IDM+STN is generally better than IDM. **Q3:** Chapter 3 emphasizes the differences between Formula 2 and Formula 1, on one hand due to the fixed parameters of network d, and on the other hand due to the use of SAE in network h. One of the core methods is to fix the parameters of d, which intuitively avoids changes in the target domain of h during the update process that would cause the policy pi to fail. However, the ablation experiment shows that the setting of d^* did not achieve a consistently positive effect. **A3:** We agree with the reviewer that the pursuit of a universally optimal representation is critical, but it is inherently challenging for a single approach to excel across all tasks in reinforcement learning. In our experiments, although finetuning the dynamics model slightly outperforms our method in the novel FOV scenario, it underperforms in the others.
Furthermore, we conducted experiments on more tasks and settings, and the results presented in Table 4 of the rebuttal file demonstrate that fixing the dynamics model during adaptation is generally better than finetuning it. **Q4:** The main approach is to use the dynamics model of the environment under the new view as supervision to quickly train the changed h'. Can you obtain a generalized h directly based on o'? **A4:** In order to learn a view-invariant representation and directly generalize to an unseen view without finetuning, images from different views are needed. However, as shown in Figure 3 of the rebuttal file, training with multi-view images led to poor training performance in our attempt. Additionally, accessing images from different views is not easy, especially in the real world. **Q5:** Have the training methods of the three baselines in the comparative experiment also been finetuned? How much data was used during finetuning, and is this amount of data acceptable for practical applications? **A5:** TD-MPC in the baselines has not been finetuned at test time. DM and IDM+STN have been finetuned for fair comparison. We ran 20 episodes for each of the settings and found that performance greatly improved after adaptation over one episode in most tasks. An episode consists of tens or hundreds of steps in our robotic tasks, which is quite acceptable for practical applications. **Q6:** Can the h^SAE network be used directly during training to achieve learning from multiple perspectives of tasks, i.e., directly generalize to an unseen perspective without finetuning? **A6:** Please see A4 for details. **Q7:** The reason why MoVie is better than IDM+STN should be explained. **A7:** Intuitively, IDM and DM are somewhat similar, but there are two reasons that make the IDM loss worse than DM under our setting: 1.
Model-based RL methods such as TD-MPC and MoDem do not introduce an IDM during training, mainly because introducing an IDM loss is not generally helpful, as shown in Figure 4 of the rebuttal file; additional optimization objectives can heavily affect optimization outcomes. 2. Directly using an IDM at test time (as in our baseline) results in inconsistency between training and test, which leads to suboptimal performance. As shown in Table 1, Table 2, Table 3, Table 4 and Table 5 of our main paper, despite being slightly better than MoVie on a few tasks, IDM+STN underperforms MoVie on most tasks. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. They addressed most of my concerns. I decide to raise my rating score. --- Rebuttal 2: Title: Thank you for the review and awaiting your response Comment: We sincerely thank you for your efforts in reviewing our paper and the suggestions again. We believe that we have resolved all the concerns mentioned in the review. Should there be any additional concerns, we are more than happy to address them! Thank you very much!
Summary: The paper addresses the novel problem of view generalization in reinforcement learning, where an RL agent is trained on an environment with a fixed view and then evaluated on a test environment having the exact same dynamics but observed from a different perspective. In order to address this issue, the authors introduce an innovative method that permits test-time adaptation to the new view. The authors integrate a learnable spatial transformer network (STN) into the feature extraction component of the agent. This augmented encoder is then fine-tuned at test time to generate features that enable a frozen latent dynamics model to predict future state representations. An essential aspect of the authors' approach is that this adaptation does not necessitate any reward signal, which is a key advantage. Experimental results demonstrate a significant reduction in the generalization gap presented by a new view across several heterogeneous benchmarks. Strengths: The paper is well-structured and easy to follow, providing clear and detailed explanations that make me believe I could readily reproduce the method based on the descriptions provided. The authors introduce a new problem in the field that could stimulate further research. The proposed method's strength lies in its test-time adaptation capability, which eliminates the need for any reward signal, making it highly applicable to real-world scenarios. This characteristic also suggests that the approach could be integrated into any model-based method, enhancing its universality. Another advantage is the apparent minimal interaction required with the testing environment, promoting efficiency. The method is evaluated across various environments, with the results showcasing impressive improvements compared to non-adaptive methods. The paper's ablation study effectively illustrates the contribution of each component of the method, highlighting the importance of every aspect in achieving the observed results.
Additionally, the annex offers excellent visualizations demonstrating the impact of the spatial transformer on the feature map, providing intuitive understanding of the method's mechanics. Weaknesses: Despite the many strengths of the paper, a few areas could benefit from further development and clarification. 1. It would be beneficial to see a comparison of the performance decrease relative to the original view. The lack of this data makes it challenging to truly gauge the significance of the generalization gap and the efficacy of the proposed method. 2. The scope of comparative studies is somewhat limited. The paper primarily compares with methods not designed for visual adaptation, which may not provide the most insightful comparison. It would have been advantageous to see how the proposed approach stacks up against other strategies specifically intended for visual adaptation. 3. An exploration of whether the proposed method can also benefit model-free algorithms such as Soft Actor-Critic (SAC) is missing. This could broaden the applicability of the findings and provide additional insights into the method's versatility. 4. It remains unclear why the proposed method underperforms with the Inverse Dynamics Model (IDM). A deeper exploration of this anomaly could bolster the robustness and reliability of the approach. 5. Finally, the similarity between the proposed method and the PAD method, which performs test-time domain adaptation in model-free RL, raises questions. The key differences are the use of a Spatial Transformer Network and the leveraging of the latent dynamics model of model-based methods instead of adding it as an auxiliary component. However, these differences, while noteworthy, do not necessarily represent a substantial departure from the PAD method. This similarity begs the question of whether the presented method offers significant novelty or if it is essentially an adjustment of existing approaches. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. To better understand the importance of the generalization gap, could you provide performance data related to the original view? This information would help quantify the performance decrease when the agent is switched to a different view. 2. The use of inverse dynamics models with spatial transformer networks in your method isn't entirely clear. Does the IDM completely replace the DM in the TD-MPC algorithm? Or is it just used as a side network during TD-MPC training and used for test-time adaptation only? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors mentioned a limitation of this work: the proposed method has not been tested in real-world scenarios, such as with physical robots, which may limit its immediate applicability. Another potential limitation could be that the method has only been tested on model-based reinforcement learning (RL), whereas it would be relatively straightforward to evaluate its effectiveness in model-free RL. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions. We address each of your comments below. **Q1:** It would be beneficial to see a comparison of the performance decrease relative to the original view. The lack of this data makes it challenging to truly gauge the significance of the generalization gap and the efficacy of the proposed method. **A1:** The performance of the original agents without any adaptation under the training view is reported in Tables 13, 14, and 15 in the appendix of the original paper. When tested on our view generalization testing platform, the performance of the agent without adaptation decreases significantly. **Q2:** It would have been advantageous to see how the proposed approach stacks up against other strategies specifically intended for visual adaptation. **A2:** We appreciate your suggestion to include other visual adaptation methods as baselines. We compare MoVie with PAD [1], which also adapts the visual encoder at test time, on 2 tasks across 4 settings (we chose tasks on which PAD performs well at training time, since the addition of auxiliary tasks negatively affects PAD's performance on some other tasks). The results are provided in Table 1 of the rebuttal file. MoVie outperforms PAD significantly across all 8 settings. **Q3:** An exploration of whether the proposed method can also benefit model-free algorithms such as Soft Actor-Critic (SAC) is missing. This could broaden the applicability of the findings and provide additional insights into the method's versatility. **A3:** We believe this is an exciting direction and we made an initial attempt toward it. We attempted to apply our adaptation method to DrQ-v2 [2] by integrating a dynamics model and the SAE (spatial adaptive encoder) during testing, but it was not able to achieve reasonable results.
In this initial attempt, the view generalization ability was not greatly improved, and performance even worsened after adaptation on some tasks, as shown in Table 5 of the rebuttal file. We believe it requires non-trivial effort to explore how to use more self-supervised losses for model-free RL methods; in particular, it is not trivial to find a suitable loss that does not hurt training while also helping at test time. **Q4:** It remains unclear why the proposed method underperforms with the Inverse Dynamics Model (IDM). A deeper exploration of this anomaly could bolster the robustness and reliability of the approach. **A4:** Intuitively, IDM and DM are somewhat similar, but there are two reasons that make the IDM loss worse than DM under our setting: 1. Model-based RL methods such as TD-MPC and MoDem do not introduce an IDM during training, mainly because introducing an IDM loss is not generally helpful, as shown in Figure 4 of the rebuttal file; additional optimization objectives can heavily affect optimization outcomes. 2. Directly using an IDM at test time (as in our baseline) results in inconsistency between training and test, which leads to suboptimal performance. As shown in Table 1, Table 2, Table 3, Table 4 and Table 5 of our main paper, despite being slightly better than MoVie on a few tasks, IDM+STN underperforms MoVie on most tasks. **Q5:** The similarity between the proposed method and the PAD method, which performs test-time domain adaptation in model-free RL, raises questions. This similarity begs the question of whether the presented method offers significant novelty or if it is essentially an adjustment of existing approaches. **A5:** Although one component of our method is similar to PAD [1], we have made significant improvements: 1. As shown in Figure 4 of the rebuttal file, auxiliary tasks added at training time, as in PAD, can negatively impact the performance of the algorithm.
Our method does not require any modification at training time. 2. We made an effort to explore the prediction loss in model-based RL, which can serve as a self-supervised task, and found that the dynamics model in model-based RL is a stable source of supervision for adaptation. 3. Our method is well-motivated, aiming to solve a fundamental and realistic challenge: *generalizing by transforming the unseen test view to the training view*. Since we do not have paired images to align the latent spaces of training views and test views, we use a dynamics loss combined with an STN, exploiting the consistency of dynamics prediction to align the latent space. 4. Furthermore, as you noted, we propose the spatial adaptive encoder (SAE) in order to better adapt to both static and dynamic changes in view. **Q6:** To better understand the importance of the generalization gap, could you provide performance data related to the original view? **A6:** Please see A1 for details. **Q7:** The use of inverse dynamics models with spatial transformer networks in your method isn't entirely clear. Does the IDM completely replace the DM in the TD-MPC algorithm? Or is it just used as a side network during TD-MPC training and used for test-time adaptation only? **A7:** The IDM is added and trained at test time, and the inverse dynamics loss is used to optimize the IDM and the SAE (spatial adaptive encoder) simultaneously. The reason we did not add the IDM as an auxiliary task during training is that it would negatively impact training performance, as shown in Figure 4 of the rebuttal file. Thanks for your thoughtful question; we will add more details about the IDM in the final version. [1] Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A Efros, Lerrel Pinto, and Xiaolong Wang. Self-supervised policy adaptation during deployment. ICLR, 2021. [2] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto.
Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses and clarifications regarding my questions. After considering their input, I still believe that this paper is of sufficient quality to be accepted for NeurIPS. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our efforts.
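The test-time latent-alignment idea discussed in A5 and A7 above can be illustrated with a minimal toy sketch. All linear stand-ins and names below are hypothetical, not the paper's actual networks: a frozen "encoder" and a frozen latent dynamics model stay fixed, and only a view-correcting transform `T` (playing the role of the STN) is updated with the dynamics-prediction loss.

```python
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_LAT, D_ACT, N = 6, 3, 2, 32

# Frozen components (toy stand-ins for the visual encoder and dynamics model).
G = rng.normal(size=(D_OBS, D_LAT))        # ground-truth observation map
W = np.linalg.pinv(G)                      # frozen "visual encoder"
A = 0.5 * rng.normal(size=(D_LAT, D_LAT))  # frozen latent dynamics
B = 0.5 * rng.normal(size=(D_LAT, D_ACT))

# Transitions rendered in an unseen "view": a fixed permutation of
# observation dimensions standing in for a camera change.
z = rng.normal(size=(N, D_LAT))
a = rng.normal(size=(N, D_ACT))
z_next = z @ A.T + a @ B.T
P = np.eye(D_OBS)[np.r_[1:D_OBS, 0]]       # the (unknown) view change
o, o_next = z @ G.T @ P.T, z_next @ G.T @ P.T

def dm_loss(T):
    """Dynamics-prediction loss of the frozen encoder composed with T."""
    zt, zt1 = o @ T.T @ W.T, o_next @ T.T @ W.T
    err = zt @ A.T + a @ B.T - zt1
    return float(np.mean(np.sum(err ** 2, axis=1)))

def dm_grad(T):
    """Analytic gradient of dm_loss with respect to the transform T."""
    zt, zt1 = o @ T.T @ W.T, o_next @ T.T @ W.T
    err = zt @ A.T + a @ B.T - zt1
    return 2.0 / N * ((A @ W).T @ err.T @ o - W.T @ err.T @ o_next)

# Test-time adaptation: gradient descent on T only, with a simple
# backtracking step size; encoder and dynamics model remain frozen.
T, lr = np.eye(D_OBS), 0.1
loss_before = dm_loss(T)
for _ in range(500):
    T_new = T - lr * dm_grad(T)
    if dm_loss(T_new) < dm_loss(T):
        T, lr = T_new, lr * 1.1
    else:
        lr *= 0.5
loss_after = dm_loss(T)
print(f"DM loss before adaptation: {loss_before:.4f}, after: {loss_after:.6f}")
```

On this toy problem the dynamics-prediction loss alone suffices to recover the view change, mirroring the motivation that consistency in dynamics prediction can align the latent spaces of training and test views without paired images.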
Summary: This paper focuses on improving the generalization ability of visual DRL to adapt to unseen views. The authors propose a model-based policy adaptation approach that combines spatial transformer networks with a self-supervised dynamics prediction objective to address this problem. The effectiveness of the approach is evaluated through experiments on three commonly used benchmarks. Strengths: - This paper is well-written and easy to follow. - The proposed method is well-motivated. - The authors try to solve a realistic and essential problem, i.e., view generalization, especially in real-world or sim-to-real robotic scenarios. Weaknesses: - While the problem setting is attractive, the technical contribution is insufficient for a full NeurIPS paper. It is a straightforward combination of multiple existing works, like STN. - The current evaluation is not enough to demonstrate superiority over existing works. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The experiments need to be more comprehensive. The comparison baselines used are more like ablation studies rather than state-of-the-art (SOTA) methods, such as TD-MPC (the backbone of MoVie), DM (MoVie without STN), and IDM+STN (MoVie replacing DM with IDM). The authors should consider incorporating more commonly-used or SOTA data augmentation algorithms for visual DRL, such as DrQ-v2 [1] and SVEA [2]. - The authors mention multiple times that only shallow STNs can improve performance, but they fail to provide deeper analysis, such as visual illustrations. Is the number of layers the same for all tasks, or do different tasks require their own suitable layers? This information would be valuable. - The authors claim that "Our proposed method enables direct deployment of offline or simulation-trained agents ..."; however, the lack of practical experimental demonstrations, such as robot manipulation, makes their claim unconvincing.
I strongly suggest that the authors conduct real-world robot experiments to assess the applicability of their proposed method. References: - [1] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. - [2] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. Advances in Neural Information Processing Systems, 34, 2021. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Not Applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions. We address each of your comments in the following. **Q1:** While the problem setting is attractive, the technical contribution is insufficient as a full NeurIPS paper. It is a straightforward combination of multiple existing works, like STN. **A1:** We would like to respectfully point out that our method is not just a straightforward combination of simple techniques. Instead, it is well-motivated, aiming to solve a fundamental and realistic problem: *generalize by transforming the unseen test view to the training view*. Though the motivation is simple, this is not trivial to do, since we do not store the training views and thus have no paired images to align the latent space between training and test views. To solve this, we devised a dynamics loss combined with STN, utilizing the consistency in dynamics prediction to align the latent space. Moreover, one main contribution of our paper is to formulate and study the view generalization problem in RL systematically across a broad range of locomotion and manipulation tasks. Our proposed method also achieves non-trivial success across all these tasks over strong baselines. We hope the reviewer acknowledges the efforts we made in addressing the realistic view generalization problem. **Q2:** There needs to be more than the current evaluation to demonstrate the superiority over existing works. **A2:** Please see A3 for details. **Q3:** The experiments need to be more comprehensive. The comparison baselines used are more like ablation studies rather than state-of-the-art (SOTA) methods, such as TD-MPC (the backbone of MoVie), DM (MoVie without STN), and IDM+STN (MoVie replacing DM with IDM). The authors should consider incorporating more commonly-used or SOTA data augmentation algorithms for visual DRL, such as DrQ-v2 [1] and SVEA [2]. **A3:** Thank you for your suggestion.
To further show the advantage of MoVie, we compare MoVie with DrQ-v2 [1], SVEA [2] and PAD [3] on 2 tasks across 4 settings. As shown in Table 1 of the rebuttal file, MoVie outperforms the other methods across all these settings. Note that the poor training performance of PAD limited the scale of these experiments: we trained the baselines on several tasks, but PAD could compete with the others on only 2 tasks. **Q4:** The authors mention multiple times that only shallow STNs can improve performance, but they fail to provide deeper analysis, such as visual illustrations. Is the number of layers set for all tasks, or do different tasks require their own suitable layers? This information would be valuable. **A4:** We incorporate STN in the first two layers of the visual encoder for all tasks. After visualizing the features of different layers (shown in Figure 1 of the rebuttal file), we found that the features of shallow layers contain more information about spatial relationships; therefore, transforming the features of shallow layers for view generalization is reasonable. **Q5:** The authors claim that "Our proposed method enables direct deployment of offline or simulation-trained agents ..."; however, the lack of practical experimental demonstrations, such as robot manipulation, makes their claim unconvincing. I strongly suggest that the authors conduct real-world robot experiments to assess the applicability of their proposed method. **A5:** We agree that real-world evaluation is valuable. However, simulation remains critical to the community for a number of reasons: (i) it provides researchers with common benchmarks that accurately measure progress in the area, (ii) it facilitates reproducibility and statistically significant results, and (iii) it improves equity in the area of machine learning by removing barriers to entry for researchers to contribute to our collective knowledge.
While we agree that reliable real-world benchmarks and evaluations would be valuable and very welcome, they should not be treated as a prerequisite for research on visual generalization, which is still an open and underexplored problem. [1] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. [2] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. Advances in Neural Information Processing Systems, 34, 2021. [3] Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A Efros, Lerrel Pinto, and Xiaolong Wang. Self-supervised policy adaptation during deployment. ICLR, 2021. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' additional experimental results and critical analysis. They addressed most of my concerns, so I have decided to raise my final rating score. However, regarding Q5, I acknowledge the importance of simulation but have concerns about your claim that "Our proposed method enables direct deployment of offline or simulation-trained agents" (Lines 186-189). Due to the significant difference (domain gap) in input images between the simulation and the real world, a sim-to-real module is usually needed to mitigate these discrepancies. I am skeptical about the feasibility of deploying your MoVie directly to real-world experiments. Thus, I recommend that the authors present concrete experimental findings instead of mere assertions. --- Reply to Comment 1.1.1: Comment: Thank you for your constructive response and the acknowledgement of our effort. We agree that there is a difference in input images between the simulation and the real world; hence, real-world experiments would be beneficial. We would like to point out that our method can narrow the domain gap in input images to some extent.
As shown in the table below, MoVie effectively facilitates the agent's adaptation to different visual domains. In this experiment, we introduce changes to the appearance of objects such as the ground, table, and background in the environment and report cumulative rewards for DMControl tasks and success rates for robotic manipulation tasks. While this is a surrogate setting, we also plan to conduct experiments in the real world in future work to further demonstrate that MoVie enables direct deployment of offline or simulation-trained agents. | Tasks | TD-MPC (original) | TD-MPC under appearance change | MoVie under appearance change | | :-----: | :-----: | :-----: | :-----: | | Walker, walk | $ 944.99\pm21.71 $ | $ 589.76\pm53.27 $ | $ 882.00\pm72.68 $ | | xArm, reach | $ 0.96\pm0.05 $ | $ 0.15\pm0.05 $ | $ 0.78\pm0.02 $ | --- Rebuttal 2: Title: Thank you for the review and awaiting your response Comment: We sincerely thank you for your efforts in reviewing our paper and for your suggestions. We believe that we have resolved all the concerns mentioned in the review. Should there be any additional concerns, we are more than happy to address them! Thank you very much!
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions. We are delighted to receive your recognition of the strengths in our work, including but not limited to the meaningful problem formulation, well-motivated and effective method, extensive experimental validation and good writing – “stable performance gain over various methods” “experimental validation is extensive enough” (CADg), “solve a realistic and essential problem” “the paper is well written and easy to follow” (CdSm), “introduce a new problem in the field that could stimulate further research” (N1Ux), “effectively handle the generalization problem when view changes occur” (E44Y), “the method is sound and simple” (BF5c). Your suggestions and concerns are also valuable. We have replied separately and conducted extensive additional experiments as support. The experiment results are given in the PDF file. **EXP1: Comparison with other generalization or data augmentation algorithms for visual RL** in reply to Reviewer CdSm, Reviewer N1Ux and Reviewer BF5c. Results are given in Table 1. MoVie outperforms other methods including DrQ-v2 [1], SVEA [2] and PAD [3] across all the settings. **EXP2: Ablation for baseline selection** in reply to Reviewer E44Y. Results are given in Table 3 and Table 4. We broaden our experimental setup to include 5 tasks across 4 settings. The results show that IDM+STN is generally better than IDM and fixing the dynamics model during adaptation is generally better than finetuning it. **EXP3: Visualizations of features from shallow to deep layers** in reply to Reviewer CdSm. Results are given in Figure 1. We found that the features of shallow layers contain more information about the spatial relationship, so transforming the features of shallow layers for view generalization is reasonable. **EXP4: Visualizations of features before and after adaptation** in reply to Reviewer CADg. Results are given in Figure 2. 
We observe that after adaptation the features are transformed to be closer to the training view, which explains why our method can adapt to different views. **EXP5: Comparison of DrQ-v2 and DrQ-v2 with our adaptation method** in reply to Reviewer CADg and Reviewer N1Ux. Results are given in Table 5. While extending our method to model-free RL is exciting, the view generalization ability was not greatly improved in our initial attempt. We want to point out that it is not trivial to find a suitable loss that does not hurt training yet helps at test time. **EXP6: Training performance of DrQ-v2 using single-view and multi-view images during training** in reply to Reviewer N1Ux and Reviewer E44Y. Results are given in Figure 3. It is observed that training with multi-view images leads to poor training performance. **EXP7: Training performance of TD-MPC with and without IDM as auxiliary task** in reply to Reviewer N1Ux and Reviewer E44Y. Results are given in Figure 4. It is observed that auxiliary tasks added at training time could negatively impact performance. [1] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. [2] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. Advances in Neural Information Processing Systems, 34, 2021. [3] Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A Efros, Lerrel Pinto, and Xiaolong Wang. Self-supervised policy adaptation during deployment. ICLR, 2021. Pdf: /pdf/8f4eff13b1c7c94b6a322d34761403f8f0ca6471.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Adapting a policy to a new view setup is an important task in RL. This paper presents MoVie, Model-based policies for View generalization, to achieve fast view adaptation of model-based policies. Specifically, they incorporate spatial transformer networks into the encoder and train them during test time using a dynamics prediction loss. Despite its simplicity, the experimental results show the strong performance of MoVie over several benchmarks on various test sets. Strengths: 1. The proposed method is simple yet provides a stable performance gain over various methods. Experimental validation is extensive enough. 2. The paper is well written and easy to follow. Weaknesses: 1. Technical novelty and technical depth are limited. I might decrease scores if I found works that combine STN with test-time training. Besides, few discussions are made on the choice of test-time loss and the overall architecture. For example, the experimental results suggest that DM is better than IDM, especially when combined with test-time training, but the paper lacks discussion on why this is. Besides, I agree the prediction loss is a natural loss to train the STN, but I think it is also possible to extend the method to model-free methods by using some self-supervised loss function (like contrastive learning). 2. I think the experimental validation is already comprehensive, but it lacks qualitative analysis of the representations before/after adaptation. 3. No theoretical justification for the choice of test-time loss or the overall architecture. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions. We address each of your comments in the following. **Q1:** The proposed method is simple yet provides a stable performance gain over various methods. Experimental validation is extensive enough. But technical novelty and technical depth are limited. I might decrease scores if I found works that combine STN with test-time training. **A1:** We thank the reviewer for acknowledging the effectiveness of our proposed approach. We emphasize that besides the method we propose, one main contribution of our work is the **formulation** of the view generalization problem and the resulting test platform, spanning locomotion and robotic manipulation tasks. **Q2:** Few discussions are made on the choice of test-time loss and the overall architecture. For example, the experimental results suggest that DM is better than IDM, especially when combined with test-time training, but the paper lacks discussion on why this is. **A2:** The main reason we use the dynamics loss during test time is that such a loss is common in model-based RL; using the same objective during test time leads to consistency between training and test. One might naturally think that IDM could be similar to our DM objective, but two reasons make the IDM loss worse than DM in our setting: 1. Model-based RL methods such as TD-MPC and MoDem do not originally introduce IDM during training, mainly because the IDM loss is not generally helpful, as shown in Figure 4 of the rebuttal file; additional optimization objectives can heavily affect optimization outcomes. 2. Directly using IDM during test time (as our baseline) results in inconsistency between training and test, which thus leads to suboptimal performance.
As shown in Table 1, Table 2, Table 3, Table 4 and Table 5 in our main paper, despite being slightly better than MoVie on a few tasks, IDM+STN underperforms MoVie on most tasks. **Q3:** I agree prediction loss is a natural loss to train the STN, but I think it is also possible to extend the method to model-free methods by using some self-supervised loss function (like contrastive learning). **A3:** We believe this is an exciting direction and we made an initial attempt toward it. Some previous works, such as PAD [1], attempted to include auxiliary tasks during training, but it was observed that these auxiliary tasks could negatively impact the training performance of the RL algorithm. We attempted to apply our adaptation method to DrQ-v2 [2], but it was not able to achieve reasonable results, as shown in Table 5 of the rebuttal file. In this initial attempt, the view generalization ability was not greatly improved, and the performance even worsened after adaptation on some tasks. We believe it requires non-trivial effort to explore how to use more self-supervised losses with model-free RL methods; it is not trivial to find a suitable loss that does not hurt training yet helps at test time. **Q4:** I think the experimental validation is already comprehensive, but it lacks qualitative analysis of the representations before/after adaptation. **A4:** In Appendix C (also in Figure 2 of the rebuttal file), we visualize the features of shallow layers before and after adaptation; the features are transformed after adaptation to be closer to the training view, which explains why our method can adapt to different views. **Q5:** No theoretical justification for the choice of test-time loss or the overall architecture. **A5:** The main focus of our work is to study and solve the view generalization problem in RL, more from an empirical perspective.
We also agree on the importance of a rigorous theoretical treatment, which could be a future exploration direction. [1] Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A Efros, Lerrel Pinto, and Xiaolong Wang. Self-supervised policy adaptation during deployment. ICLR, 2021. [2] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645, 2021. --- Rebuttal Comment 1.1: Title: I would like to keep my rating Comment: Thank you for your detailed response. I still think the paper provides a good contribution to the community, and therefore keep my rating. --- Rebuttal 2: Title: Thank you for the review and awaiting your response Comment: We sincerely thank you for your efforts in reviewing our paper and for your suggestions. We believe that we have resolved all the concerns mentioned in the review. Should there be any additional concerns, we are more than happy to address them! Thank you very much!
null
null
null
null
null
null
CELLE-2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer
Accept (poster)
Summary: In this work, the authors present CellBERT-E, a transformer-based model to generate protein localization images. The model takes in nucleus and threshold images as well as amino acid (AA) sequences as input. The images are tokenized via a VQGAN tokenizer and the AA sequences are tokenized via a pretrained protein language model. The model is pretrained via masked modeling for both AA and image tokens. Experiments show that CellBERT-E achieves reasonable results in protein localization prediction. The authors also include experiments on generating AA motifs from image input. Besides, CellBERT-E generates images in a non-autoregressive manner, which is more efficient than the previous CELL-E model. Strengths: 1. Application of a transformer-based generative model to protein localization prediction, a relatively new domain. 2. Leveraging non-autoregressive generation, which is more efficient than previous work. 3. Application of the proposed model to generate short AA sequences from nucleus images. This setting is tested on NLS design, which is an interesting attempt. Weaknesses: 1. Lack of baseline models. The authors only compare different settings of the proposed CellBERT-E but miss other baselines. 2. Experiments may not support the conclusions of the proposed method. It can be hard to tell the effect of hyperparameters based on the current experimental settings. 3. Evaluation metrics may not fully reflect performance on the application. The authors apply common metrics for image/sequence generation to evaluate protein localization prediction performance. Domain details may be missed when only applying these metrics. Please see "Questions" for more details. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Major questions: 1. The major results in Tables 1&2 only compare different settings of the proposed CellBERT-E and miss the performance of other baseline models.
Though this work focuses on a relatively new application that few works have investigated, the authors can at least include a comparison to CELL-E, on which the proposed method is based. 2. The choice of hyperparameters needs more validation. In Tables 1&2, the deeper the model is, the smaller the hidden size is. Why do the authors choose such settings in the architecture exploration? Besides, it's hard to conclude the effect of hidden size and depth on performance as they change together. 3. Table 1 reports several image metrics to show the performance of CellBERT-E. Are there other domain metrics that are important in protein localization prediction? Will these image metrics fail to capture such important details? 4. The authors mention the potential rapid overfitting on OpenCell, which contains limited data. Have the authors considered any data augmentation strategies to enlarge the training set? 5. In Table 2, all models achieve high cosine similarities. Is this because only 15% of the AA sequence is masked? The authors could add the results of random sampling to better validate the effectiveness of the proposed model. 6. In Table 3, models that perform well on FID are likely to perform poorly on the other 5 metrics. What could be the reason? 7. In Section 6.2, the authors use the predicted signal in the nucleus from the image model to validate the generated NLS. I find this unconvincing since the image model trained on limited data can fail when new data is fed in. Minor questions: 1. Typo in line 88, I suppose it should be "allowing images to be synthesized in relatively few steps". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Besides the structural information of proteins mentioned by the authors, the limited training data can be another limitation of the work. Even HPA, the larger dataset in this work, contains only ~17,000 data points, which are noisy. The authors bring this up in the Introduction. However, how the proposed method addresses this concern should be further clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback; we address the concerns below: > ... The authors can at least include the comparison to CELL-E which the proposed method is based on. As the reviewer agrees, this is an incredibly new field of study; as such, there are no established baseline models with which to compare. For image prediction, the only comparison would be to CELL-E. Our evaluations of different training data (CELL-E uses OpenCell only, whereas CellBERT-E combines HPA with OpenCell) and language models (CELL-E uses the BERT model from [1], whereas CellBERT-E uses ESM-2) implicitly compare the two models and show the improvements of CellBERT-E. However, we will include an explicit comparison with CELL-E in the final revision. > The choice of hyperparameters needs more validation. In Table 1&2, the deeper the model is, the smaller the hidden size is. Why do the authors choose such settings in architecture exploration? Besides, it's hard to conclude the effect of hidden size and depth on the performance as they are changing together. Hidden size is predetermined by the ESM-2 language model that we use as a backbone. Depth is based on the findings of the CELL-E paper, which showed a correlation between depth and token prediction ability; we maximized the depth based on the VRAM available during training. We will include this rationale in Section B.2. > ...Are there other domain metrics that are important in protein localization prediction? Will these image metrics fail to capture such important details? "Nucleus Proportion MAPE" is included as a domain-specific metric, which calculates the proportion of intensity within the cell nucleus. This is the most relevant metric for the problem. We include the other image metrics for the sake of thoroughness and for the evaluation of future models. > The authors mention the potential rapid overfitting on OpenCell containing limited data.
Have the authors considered any data augmentation strategies to enlarge the training set? We already utilize standard data augmentation techniques, which involve randomly cropping the amino acid sequence (A.3) and randomly cropping and flipping the image (A.4). HPA is much larger and more diverse than OpenCell, and pretraining helps our model learn generalizable features for localization prediction. > In Table 2, all models achieve high cosine similarities. Is this because only 15% of the AA sequence is masked? The authors can add the results of random sampling to better validate the effectiveness of the proposed model. Masking 15% of the amino acids is standard in unsupervised protein sequence learning. Could you clarify what "random sampling" means in this context, as the positions masked in Table 2 are already selected at random? > In Table 3, models that perform well on FID are likely to perform poorly on the other 5 metrics. What could be the reason? FID measures the similarity between the synthesized images and the ground truth images in the feature space of Inception v3. Models that performed well on FID have only seen one type of image during training (Table 3), and they may have learned to generate images that resemble that type. This may result in good FID scores but poor scores on the other metrics, which are more sensitive to the correct localization of the protein within the cell. FID is also not a stable metric and can be affected by factors such as JPEG quality [2]. > In Section 6.2, the authors use the predicted signal in the nucleus from the image model to validate the generated NLS. I find it unconvincing since the image model trained on limited data can fail when new data is fed in. Our goal was to create a list of candidate NLS sequences for future experimental validation. The image prediction is used to filter out unlikely candidates, not for validation; it is part of the process of generating the NLS candidate list.
A two-step process, which combines the reverse model of sequence infilling and the forward model of image prediction, should increase the success rate. Our current validation of the generated NLS candidates is based on biological knowledge about NLSs, such as the enrichment of certain types of amino acids and sequence features (domain knowledge not provided to the model). As shown in Table S9 and Figure S10, our generated NLS candidates have strong potential to be functional. To address the reviewer's concern, we have additionally passed our generated NLS candidate sequences (appended to GFP) to DeepLoc 2.0 [3], an independent model predicting multiclass annotations of protein localization. DeepLoc 2.0 predicted **89%** of the generated NLS candidates to have nuclear localization and **91%** to have potential nuclear localization signals, clearly validating our model. > Typo in line 88, I suppose it should be "allowing images to be synthesized in relatively few steps". Thank you, we have updated the manuscript. > Besides the structural information of proteins mentioned by the authors, the limited training data can be another limitation of the work. Even HPA, the larger dataset in this work, contains only ~17,000 data points, which are noisy. The authors bring this up in the Introduction. However, how the proposed method addresses this concern should be further clarified. We agree that the limited and noisy training data is a challenge. We utilize a frozen pre-trained language model, ESM-2, which was trained on millions of protein sequences, thereby providing a rich source of prior knowledge from a large corpus. We have updated Section 4.2 to explain this rationale in more detail. [1] Rao, et al. Evaluating Protein Transfer Learning with TAPE. Advances in Neural Information Processing Systems, 2019. [2] Parmar, et al. On Aliased Resizing and Surprising Subtleties in GAN Evaluation. CVPR, 2022. [3] Thumuluri, Vineet, et al.
DeepLoc 2.0: multi-label subcellular localization prediction using protein language models. Nucleic Acids Research, 2022. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to my questions and the extra experiments. The rebuttal has addressed most of my concerns and I have raised my score. In terms of question 5, what I meant is that all models achieve high cosine similarity in infilling with a 15% masking ratio. It would be interesting to see what the cosine similarity is if random sampling is applied for infilling. This may help set another baseline and better evaluate the performance of the proposed method. --- Reply to Comment 1.1.1: Comment: We appreciate the feedback. Thank you for clarifying; we agree that a random-sampling baseline would make for a useful comparison and will include it in the final revision.
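The point raised about the 15% mask ratio can be checked with a small back-of-the-envelope simulation. This is a toy using exact-match identity as a crude proxy for the cosine similarity of one-hot sequence embeddings; it does not use any trained model, and all names are illustrative.

```python
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
random.seed(0)

seq = [random.choice(AAS) for _ in range(2000)]   # toy "protein" sequence
n_mask = int(0.15 * len(seq))                     # standard 15% mask ratio
masked = set(random.sample(range(len(seq)), n_mask))

def identity(filled):
    """Fraction of positions matching the original sequence -- a crude
    proxy for cosine similarity under one-hot residue embeddings."""
    return sum(x == y for x, y in zip(seq, filled)) / len(seq)

# Model-free baseline: fill every masked position uniformly at random.
rand_fill = [random.choice(AAS) if i in masked else x
             for i, x in enumerate(seq)]
sim = identity(rand_fill)
# Expected value: 0.85 + 0.15 * (1/20) = 0.8575
print(f"random-fill sequence identity: {sim:.3f}")
```

Because 85% of positions are simply copied through, even this model-free random baseline scores around 0.86, which illustrates why a random-sampling baseline is informative when reporting infilling similarity at a 15% mask ratio.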
Summary: This work proposes an image-sequence multimodal encoder to model the interdependencies between cellular image and protein sequence. The pre-trained ESM-2 protein language model is employed to extract protein sequence embeddings, and the pre-trained VQGAN is used to extract cellular image patch embeddings. A Transformer encoder is trained on these two kinds of embeddings to model the interaction between image patches and amino acid residues. The validation-set performance of image prediction and sequence infilling is analyzed to demonstrate model design choices. The application to de novo NLS design shows the effectiveness of the proposed model. Strengths: + The proposed multimodality learning framework of cellular images and protein sequences is technically sound, and such a multimodality learning setting is novel to the best of my knowledge. + The results on de novo NLS design show the model could be helpful in real-world applications. Weaknesses: - Important downstream applications and baseline methods are not investigated in the experiment section. - The evaluation protocol of image prediction and sequence infilling is not standard from the machine learning perspective. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The proposed method learns image-enhanced representations of protein sequences. Such representations could be superior to pure protein sequence representations learned by ESM-2. The subcellular localization prediction benchmarks proposed by DeepLoc [a] and DeepLoc 2.0 [b] could be a good testing ground for such a hypothesis, where the proposed CellBERT-E can be compared with various protein language models. 2. In the image prediction and sequence infilling experiments, authors report the performance on the validation set. However, from a standard machine learning perspective, the validation set should be used for model selection, and a held-out test set serves for evaluation. 
Authors are strongly encouraged to align with this standard. [a] Almagro Armenteros, José Juan, et al. "DeepLoc: prediction of protein subcellular localization using deep learning." Bioinformatics 33.21 (2017): 3387-3395. [b] Thumuluri, Vineet, et al. "DeepLoc 2.0: multi-label subcellular localization prediction using protein language models." Nucleic acids research 50.W1 (2022): W228-W234. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: In the conclusion section, authors have not sufficiently discussed the limitations of their current method. They are encouraged to discuss potential limitations in terms of effectiveness, efficiency and the scope of applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Important downstream applications and baseline methods are not investigated in the experiment section. We included a highly tangible downstream application in the discussion. There, we demonstrated the generation of new NLS sequences using in-filling. In the protein engineering space, identifying candidate sequences is an important first step. Our demonstration showed the capability of CellBERT-E to generate new sequences driving the engineered protein to the desired subcellular localization. We have additionally passed our generated NLS candidate sequences (appended to GFP) to DeepLoc 2.0 [1], which is an independent model predicting multiclass annotations of protein localizations from their sequences. DeepLoc 2.0 predicted nuclear localization for **89%** of the generated NLS candidates and potential nuclear localizing signals for **91%**, clearly validating our model. We hope to experimentally validate these sequences. With respect to baseline methods, we note that this is a new field of study lacking baseline methods to compare against. For image prediction, the only available comparison is CELL-E. We will add an explicit, direct comparison in the revised paper. For sequence infilling, our model is the first of its kind with this capability. Therefore, we hope that CellBERT-E itself will serve as the baseline for future developments in this field. > The evaluation protocol of image prediction and sequence infilling is not that standard from the machine learning perspective. We have tried our best to match standard practice. However, the lack of established baselines and the absence of suitable image-based benchmarks complicates a comprehensive assessment. Therefore, we designed our evaluation protocol considering practical limitations and needs within the protein engineering domain. 
> ...The subcellular localization prediction benchmarks proposed by DeepLoc [a] and DeepLoc 2.0 [b] could be a good test field for such a hypothesis, where the proposed CellBERT-E can compare with various protein language models. CellBERT-E was not intended as a potentially superior protein sequence representation. Instead, it was developed as a tool to translate between the amino acid sequence of a protein and its functional properties, akin in spirit to AlphaFold and other protein structure prediction models. Sequence embedding is just one part of the CellBERT-E model. We thank the reviewer for bringing up the idea that CellBERT-E could lead to a better protein sequence representation that is “image-enhanced”. If so, the protein representation in CellBERT-E could indeed outperform purely sequence-trained ones for certain tasks, such as localization prediction in discrete categories like in DeepLoc [2] and DeepLoc 2.0 [1]. For this paper, we focus on the image prediction and sequence infilling capabilities of CellBERT-E, which have important downstream applications in protein engineering. We hope that future studies will elucidate the potential of the CellBERT-E protein representation in other applications. Additionally, for benchmarking the image prediction of CellBERT-E, we previously considered comparing the indicated localizations in the predicted images to the multiclass predictions of DeepLoc 2.0 [1]. However, such a comparison requires conversion between two distinct modalities, i.e., annotating the protein localization in the predicted images using a separate image segmentation model. Therefore, we were afraid that this comparison could be inconclusive and would not serve as a proper benchmark. Still, if the reviewer feels that such a comparison is necessary, we will include it in the revision. > ... the validation set should be used for model selection, and another hold out test set serves for evaluation... 
We agree that a train-validation-test split is a standard practice in machine learning, but we argue that it is not feasible or necessary for our task. Protein datasets are very scarce and expensive to obtain, as they involve complex and costly wet-lab experiments. We use the HPA dataset, which is the largest available dataset of protein images, but still contains only 17,268 proteins (small compared to the 617M protein sequences used to train ESM-2). Splitting the HPA dataset into three subsets would reduce the amount of data for training and validation, and it may not reflect the true performance of our model on unseen data. Moreover, we use another dataset, the OpenCell dataset, for finetuning and evaluation. The OpenCell dataset contains live imaging data of a specific human cell line, which is more realistic and consistent than the HPA dataset. The OpenCell dataset contains only ~1000 proteins. Splitting this dataset into three subsets would further limit the data availability and diversity for our task. Therefore, we follow the precedent of a recent work [3] that uses the same OpenCell dataset and adopts a two-way split. We believe that this is a reasonable and practical choice for our task, given the data constraints and challenges. > In the conclusion section, authors have not sufficiently discussed the limitations of their current method.... In Section 7, we will include a longer discussion on limitations, including real-world implications of the limited resolution and potential ways to address them in future works. We also include a discussion on the trade-off between prediction quality and speed in the sequence prediction stage. [1] Thumuluri, Vineet, et al. "DeepLoc 2.0: multi-label subcellular localization prediction using protein language models." Nucleic Acids Research (2022). [2] Almagro Armenteros, José Juan, et al. "DeepLoc: prediction of protein subcellular localization using deep learning." 
Bioinformatics (2017). [3] Kobayashi, H., et al. Self-supervised deep learning encodes high-resolution features of protein subcellular localization. Nat Methods (2022). --- Rebuttal Comment 1.1: Title: Post-Rebuttal Comments Comment: Thanks for the response, which addresses most of my concerns. I admit that the evaluation on protein engineering/function-prediction benchmarks is out of the scope of this paper, though this could still be very interesting as a future direction. Considering the contribution of this work on a new topic, I increase my rating to 5: borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you, we agree it will be an interesting future direction. Appreciate the feedback!
Summary: This paper proposes a novel bidirectional transformer named CellBERT-E to generate accurate protein localization image predictions from amino acid sequences. To address existing methods' failure to integrate sequence and image information, CellBERT-E adopts a BERT-like architecture so that the model can generate both image and sequence predictions in a non-autoregressive (NAR) paradigm at high speed. Therefore, the model allows for bidirectional prediction, making the model a possible candidate for de novo protein design. The model is trained by reconstructing the masked tokens in both the amino acid sequences and images in an unsupervised manner. Benefiting from pretraining on the large HPA dataset and finetuning on the OpenCell dataset, CellBERT-E achieves competitive or superior performance compared with SOTA methods, which is shown by extensive experiment results. Strengths: Originality: The paper proposes a bidirectional transformer for text-to-image translation and explores how this model could be used for protein design. Therefore, the paper uniquely contributes to the field by making fast and accurate predictions for protein sequence or image generation in a non-autoregressive manner. Quality: The paper carefully designs the experiments to support the idea and makes clear visualizations. Clarity: The paper effectively communicates its ideas and findings with clarity. The paper is well-written, and the logic is coherent. The authors clearly illustrate the details in each section and make the ideas transparent to readers. Significance: The paper focuses on the bidirectional prediction of amino acid sequences and protein localization images with the advantage of faster prediction speed and possibly better de novo protein design than models of an auto-regressive manner. The proposed model also outperforms baseline models. 
Therefore, this work is a promising model for protein design and engineering and could inspire research on bidirectional NAR models for this domain. Weaknesses: 1. More background on the biological terms mentioned in the paper is required (or they should be explained more clearly), e.g., what nucleus images are. Besides, the protein images shown in the manuscript and supporting materials are nice, but it's not easy for a non-expert in biology to interpret something useful from the figures. 2. Although the authors' presentation is quite clear in general, the details of the finetuning task provided in the paper are insufficient. Besides, it would be better if the authors could provide some intuition on why the task is designed in this way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the function of the nucleus images here? According to the authors' explanation, the nucleus images are passed to the encoder but their tokens are not masked and thus not reconstructed during bidirectional prediction. Can the authors specify the role of nucleus images in this case (if there is relevant biological background, please clarify briefly)? 2. In Section 7, the sentence "By pre-training on a large 310 HPA dataset and fine-tuning on CELL-E, ..." should be corrected by replacing CELL-E with OpenCell. 3. Although it is easier to parallelize Transformer encoder-based models during inference, NAR Transformer decoders do exist [1, 2] and can significantly accelerate the decoding speed. Considering the recent success of GPT-based models, it would be great if the authors could briefly discuss whether the encoder-only CellBERT-E can be modified to a decoder-based one. 4. 
The goal of this paper is to train a model for sequence-to-image and image-to-sequence generation, so I'm wondering whether it's possible to train an encoder-decoder model (and cross-attention might be more helpful in explaining how the amino acid sequences and threshold images are related); the authors could briefly explain how they determined the model architecture. 5. With respect to the finetuning stage, I'm wondering what the task is here; is it the same as the training procedure described in Fig. 2? Why not mask all the tokens in the threshold figure and unmask the sequences (similar to the illustration in Fig. S3), which is closer to what the model is trained for: generating images depicting protein subcellular localization from the amino acid sequences? Such a gap may make the model perform worse in real application settings. 6. For the different finetuning strategies mentioned in section 5.3, I'm wondering whether the authors have tried finetuning the whole model simultaneously and how the performance compares with the other finetuned models. [1] Gu, J., Bradbury, J., Xiong, C., Li, V. O., & Socher, R. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281. [2] Huang, F., Tao, T., Zhou, H., Li, L., & Huang, M. On the learning of non-autoregressive transformers. In International Conference on Machine Learning (2022). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have partially addressed the limitations of their work, though there is space for improvement (see Strengths, Weaknesses, and Questions). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >More background knowledge on biological terms mentioned in the paper is required (or explained more clearly), e.g., what are nucleus images. Thank you for the feedback; we have updated the language in the introduction and methods to more clearly explain those terms. > Although the authors' presentation is quite clear in general, the details of the finetuning task provided in the paper are not enough. We adopt a selective finetuning strategy that involves different parts of the model (e.g. the image representation via the VQGAN encoders) at different stages. This is motivated by the idea of domain adaptation, which aims to improve the performance of a model on a new domain by leveraging the knowledge learned from a different domain. We will update the manuscript to make this more clear. >Besides, it would be better if the authors can provide any intuition on why the task is designed in this way. We design the task to parallel the cell-imaging workflow, while also offering additional insights. In experiments, biologists stain the protein of interest as well as the nucleus as a spatial reference. Therefore, we use the nucleus as a fixed input. Multiple acquisitions are often needed over time to understand protein dynamics. We demonstrate how our model can generate images quickly *in silico*, thus increasing the potential throughput and efficiency of such studies. > Can the authors specify what's the role of nucleus images in this case (if there are any biological backgrounds related please clarify briefly)? The nucleus image serves as a conditioning input with respect to which the model predicts protein localization. A predicted localization image holds little informational value without a spatial reference for comparison. Nucleus images are obtained for reference in wet-lab imaging workflows as standard practice. We have updated the manuscript to include this information in Section 4.2. 
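As a toy illustration of the setup described above (not the authors' implementation; all token names are made up), the nucleus tokens act as an always-visible conditioning block while the other two modalities are masked for reconstruction:

```python
import random

MASK = "<mask>"

def build_encoder_input(nucleus_tokens, protein_tokens, seq_tokens,
                        mask_prob, rng):
    """Assemble one training example: nucleus tokens stay visible as the
    spatial reference; protein-image and sequence tokens are randomly
    masked and must be reconstructed by the bidirectional encoder."""
    def mask_out(tokens):
        return [MASK if rng.random() < mask_prob else t for t in tokens]
    return nucleus_tokens + mask_out(protein_tokens) + mask_out(seq_tokens)

rng = random.Random(0)
nucleus = [f"img_n{i}" for i in range(4)]  # toy stand-ins for VQGAN codes
protein = [f"img_p{i}" for i in range(4)]
seq = list("MKTAYI")                       # toy amino-acid tokens
example = build_encoder_input(nucleus, protein, seq, mask_prob=0.25, rng=rng)
assert example[:4] == nucleus              # the nucleus block is never masked
```

Because the nucleus block is never masked, every prediction is anchored to the same spatial reference, which mirrors the wet-lab convention described above.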
>In Section 7, the sentence "By pre-training on a large 310 HPA dataset and fine-tuning on CELL-E, ..." should be changed by replacing CELL-E with OpenCell. Thank you, this will be changed. > Although it is easier to parallel Transformer encoder-based models during inference, NAR Transformer decoders do exist [1, 2] and can significantly accelerate the decoding speed. Considering the recent success of GPT-based models, it would be great if the authors can discuss a little bit on whether the encoder-only CellBERT-E can be modified to a decoder-based one. ... We have not tried the decoder-based model, but it is an interesting research direction. We use an encoder-only model to leverage the parallelism and independence of the patches, which are crucial for protein image synthesis. A decoder-based model would introduce dependencies and sequentiality among the patches, which may affect the heatmap quality. We are not aware of any work that uses NAR Transformer decoders for image generation. > ..I'm wondering whether it's possible to train an encoder-decoder model ... and maybe the authors can explain a little bit how they determine the model architecture. Our goal is to enable in silico screening of protein candidates, which requires a fast and scalable model (enabled by an encoder-based NAR model [1]) that can generate high-quality protein images from sequences and vice versa. An encoder-decoder model with cross-attention would introduce additional computational complexity and latency, which may hinder the practical application of our method. It is an interesting path to explore though. > With respect to the finetuning stage, I'm wondering what's the task here, is it the same as the training procedure described in Fig. 2? We pre-train our model on HPA data, which is diverse and heterogeneous. We finetune it on OpenCell data, which is more realistic and consistent for protein image synthesis, but contains only a specific human cell line. 
This allows the model to adapt to the new domain. > Why not mask all the tokens in the threshold figure and unmask the sequences (similar to the illustration in Fig. S3), which is closer to what the model is trained for: generate images depicting protein subcellular localization from the amino acid sequences?... We do not mask all the tokens in the threshold image and unmask the sequences, because we want our model to be able to generate both modalities from each other. Our goal is not only to produce protein images from sequences, but also to produce sequences from images. This is useful for applications such as protein annotation, identification, and analysis. Therefore, we mask both image and text tokens during finetuning, as we did during pre-training, to train a bidirectional model that can handle both sequence-to-image and image-to-sequence generation. > ... I'm wondering whether the authors have tried finetuning the whole model simultaneously and how the performance is compared with other finetuned models. We have not tried finetuning the whole model simultaneously, because we follow the common practice of using a fixed image codebook in text-to-image models [1-3]. A fixed image codebook allows us to leverage the pre-trained image features and reduce the computational cost of finetuning. We can train on consumer-grade hardware because freezing parts of the model requires less VRAM, which would be exceeded if the model were fully finetuned. [1] Huiwen Chang, Han Zhang, Jarred Barber, A. J. Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. Muse: Text-To-Image Generation via Masked Generative Transformers, January 5 2023. [2] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers, May 2022. [3] Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. 
Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors, March 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response to my questions and concerns. I really appreciate that you have helped me better understand your contributions. The responses have almost resolved my concerns except for the last two points. Firstly, I think the finetuned model does not have to be generalizable because the finetuned model should be trained with domain-specific and task-specific knowledge. So I still think you can try to mask all the tokens in the threshold figure and unmask the sequences, and I believe this will improve the model performance in the specific task. Secondly, I agree that finetuning the model partially is more efficient, but I think it would still be helpful if the authors could compare the results with finetuning the whole model and see whether the finetuning strategy the authors adopt is comparable with simply finetuning the whole model. --- Reply to Comment 1.1.1: Comment: In this work, we sought to build a general model that can be applied to specific tasks without being constrained to any single one. We agree that masking all the tokens in the threshold image and unmasking the sequence will improve the performance. In a more specific study this is absolutely the direction we will go in. To clarify the finetuning point: while we agree it is an interesting direction, we cannot fit the fully unfrozen model on our available hardware due to the additional memory required to store the gradients.
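The memory argument above can be illustrated with a small sketch: gradient (and optimizer-state) memory scales with the number of trainable parameters, so freezing modules shrinks the training footprint. The module names and parameter counts below are hypothetical, not the actual model's:

```python
def trainable_fraction(param_counts, frozen):
    """param_counts: {module_name: parameter_count}; frozen: names of
    modules kept fixed. Gradient memory scales with the trainable count,
    which is why freezing large modules fits on smaller GPUs."""
    total = sum(param_counts.values())
    trainable = sum(n for m, n in param_counts.items() if m not in frozen)
    return trainable / total

# Hypothetical module sizes (in millions of parameters), for illustration.
counts = {"esm2_encoder": 650, "vqgan_codebook": 90, "fusion_encoder": 120}
frac = trainable_fraction(counts, frozen={"esm2_encoder", "vqgan_codebook"})
print(round(frac, 2))  # → 0.14: only ~14% of parameters need gradients
```

In a framework like PyTorch this corresponds to setting `requires_grad = False` on the frozen modules' parameters before constructing the optimizer.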
Summary: The authors propose a new architecture, CellBERT-E, for producing flexible embeddings that encode combinations of protein amino acid sequences and protein localization images. It can be used to generate localization images given a sequence and vice versa. Compared to its predecessor, it has many favorable characteristics; it was trained on slightly more data, is faster, and is bidirectional. It performs well on benchmarks. Strengths: The paper is exceptionally well-written and the method clearly improves on its predecessor, CELL-E, in multiple ways. Evaluation is thorough and carefully analyzed. The architecture is well-suited to the task and thoughtfully constructed. Weaknesses: My only real criticism is that the subject matter is quite niche (even to the point where some of the significance of this is lost on me), and I suspect that it will not interest most NeurIPS readers per se. That being said, I think the three-way multimodal architecture is well-executed and potentially worth acceptance as a nice case study in its own right. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Why do you clip logit values instead of softmaxing them (Supplement section B.1)? - Varying depth along with width in Table 1 is a little awkward — it would be good to control for the effect of depth vs width. How do the parameter counts of the different models compare? - Why are the HPA FID scores so erratic? E.g. it seems like HPA_640 achieves good scores in almost all categories but severely underperforms the other models in terms of FID. - “We also visually inspected some of the generated protein images (Fig. S6, Fig. S7). The output images from the OpenCell models appeared realistic and consistent with the ground truth labels, but they had low entropy in the predicted distribution. This suggests that the models learned to assign high probability to correct tokens, but failed to capture the uncertainty and variability of other valid selections. 
This could be attributed to the rapid overfitting of the OpenCell models, which limited their generalization ability.” Do the datasets contain the kind of diversity you allude to here (e.g. multiple protein images per sequence)? Is there any reason to expect the models to learn wide distributions over tokens? - "Most models had low performance on this task in terms of reconstruction. This is understandable because the models learned to generate amino acids that were common or frequent in the dataset, but not necessarily correct for the specific sequence." This is surprising to me, especially in light of Table S4. Why would ESM + image be so much worse than ESM alone? What exactly was the experimental setup for Table S4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors include a thorough discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
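The "low entropy in the predicted distribution" quoted in this review can be made concrete with a small sketch (toy distributions, not actual model outputs):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.01, 0.01, 0.01]  # near-deterministic prediction
spread = [0.25] * 4                    # captures several valid options
assert entropy(confident) < entropy(spread)
print(round(entropy(spread), 3))  # uniform over 4 tokens: ln(4) ≈ 1.386
```

A model that overfits assigns almost all probability mass to one token per position (low entropy), even when several tokens would be valid, which is the behavior the quoted passage describes.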
Rebuttal 1: Rebuttal: >My only real criticism is that the subject matter is quite niche (even to the point where some of the significance of this is lost on me), and I suspect that it will not interest most NeurIPS readers per se. That being said, I think the three-way multimodal architecture is well-executed and potentially worth acceptance as a nice case study in its own right. We appreciate the recognition of our three-way multimodal architecture as a nice case study. We also acknowledge that the subject matter may seem niche to some NeurIPS readers, but we would like to emphasize its relevance and significance to the broader machine learning community. First, our work aligns with the NeurIPS goal of fostering submissions for “Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)”, which we believe is an important and growing area of interest. Second, our work has a high potential impact on the field of biology, especially for studying biological pathways and protein engineering. By enabling the in silico replication of experiments, our method can drastically reduce the experimental time and cost, as well as increase the number of possible targets to screen. Third, our work showcases a novel application of multimodal learning that integrates different types of data in a principled way. We believe that this approach can inspire other researchers to explore similar problems that involve multiple modalities and complex relationships. > Why do you clip logit values instead of softmaxing them (Supplement section B.1)? We would like to clarify that we do apply softmax to the logit values in our model, as described in Supplement section B.1. The clipping operation that we mention in the same section is not applied to the logits, but to the pixel values in the output heatmap. This is a post-processing step that we use to handle out-of-range values that may occur from the latent space interpolation within the VQGAN. 
We apologize for any confusion that this may have caused, and we will make this more clear in the manuscript. >Varying depth along with width in Table 1 is a little awkward — it would be good to control for the effect of depth vs width. How do the parameter counts of the different models compare? CELL-E [1] demonstrated that transformer depth and predictive performance have a strong positive correlation. The embedding size is pre-determined by the embedding dimensions used in the ESM-2 language model. We will include this rationale in Section B.1. To accommodate our available compute hardware, we maximized the depth that could fit in memory during training. We will also include a table of parameter counts in the supplement. >Why are the HPA FID scores so erratic? E.g. it seems like HPA_640 achieves good scores in almost all categories but severely underperforms the other models in terms of FID. The HPA dataset contains significant diversity in terms of cell type, image resolution, and antibody staining, which may cause inconsistencies in what the model learns. We have reported FID because it is a standard measure in text-to-image studies for natural images, but further investigation is needed to understand the behavior of FID in response to data in this domain. There is already ongoing debate on whether FID and IS truly correlate with image quality, as FID is not a stable metric. Recent work [2] has shown that changing the JPEG quality from 100 to 75 can cause a difference of up to 20 FID points. > Do the datasets contain the kind of diversity you allude to here (e.g. multiple protein images per sequence)? Is there any reason to expect the models to learn wide distributions over tokens? > ... > This is surprising to me, especially in light of Table S4. Why would ESM + image be so much worse than ESM alone? What exactly was the experimental setup for Table S4? The training objective required the model to reconstruct both image and text tokens. 
However, the HPA dataset, the largest available protein image dataset, contains just 17,268 proteins. This number pales in comparison to the 617M sequences used to train ESM-2. We expect that performance would improve if the model loss were only calculated on the masked amino acids. [1] Emaad Khwaja, Yun S. Song, and Bo Huang. CELL-E: Biological Zero-Shot Text-to-Image Synthesis for Protein Localization Prediction. bioRxiv 2022.05.27.493774, May 2022. [2] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On Aliased Resizing and Surprising Subtleties in GAN Evaluation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 11410-11420. --- Rebuttal Comment 1.1: Comment: Fair enough. I'll go up to 7. --- Reply to Comment 1.1.1: Comment: Thank you. We appreciate the feedback!
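Regarding the FID instability discussed in this thread, a simplified sketch of the metric helps show why small shifts in feature statistics move the score. The full FID uses a matrix square root of the feature covariances; the version below assumes diagonal covariances purely for illustration:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariance:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    (The full FID replaces the last term with a matrix square root.)"""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical feature statistics give FID = 0; even a modest shift in the
# feature means (e.g. from resizing or JPEG compression) moves the score.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # → 0.0
print(fid_diagonal([0.5, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # → 0.25
```

Since the score depends entirely on the estimated feature statistics, heterogeneous data (varied cell types, resolutions, stains) can shift those statistics and produce the erratic values the reviewer observed.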
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Implicit Regularization in Over-Parameterized Support Vector Machine
Accept (poster)
Summary: A regularization-free algorithm for high-dimensional support vector machines (SVMs) is designed by integrating over-parameterization with Nesterov's smoothing method, which induces implicit regularization. An over-parameterized hinge loss function is constructed and true parameters are estimated by leveraging regularization-free gradient descent on it. Nesterov's method enhances the computational efficiency of the algorithm, particularly in terms of determining the stopping criterion and reducing computational complexity. With appropriate choices of initialization, stepsize, and prox-parameter, unregularized gradient descent achieves a near-oracle statistical convergence rate. The theoretical findings are verified through a variety of numerical experiments and the proposed method is compared with explicit $ l\_1 $ regularization. The advantages of employing implicit regularization via gradient descent subsequent to over-parameterization in sparse SVM are illustrated by the results. Strengths: Strength 1: This manuscript presents an interesting finding about the over-parameterized SVM by showing that if we re-parameterize the sparse unknown parameter $\beta$ by $w\odot w-v\odot v$, then Nesterov's smoothing-based alternating update of variables leads to the effect of implicit regularization. That is to say, even if we did not explicitly impose the $l\_1$-norm on the underlying parameter, the algorithm behaves as if the $l\_1$-norm is implicitly added. Strength 2: The theoretical analysis is interesting and easy to follow. Strength 3: The theoretical findings are validated with numerical simulations. Weaknesses: Weakness 1: Although it is an interesting result for over-parameterized SVM, the proposed algorithm is not directly designed for the original model (2), but its modified version in Sec. 2.3. It seems that the proposed Algorithm 1 might have some explicit regularization term in each iteration. Specifically, the update of $\mu$ is due to Eq. 
(4) which introduces an extra smoothing term $d\_\gamma(\mu)=\gamma/2\|\mu\|_2^2$ and this $\mu$-update surely affects $w$ and $v$, which may result an regulurization effect like $l\_1$. Weakness 2: The Assumption 1 requires the design matrix to satisfy the $\delta$-incoherence and the sub-Gaussianality. It seems this assumption is relatively strict for real datasets. Weakness 3: The empirical effectiveness of the proposed algorithm is insufficiently discussed. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Question 1: Since the update of $\mu$ in Algorithm 1 is due to Eq. (4) which introduces an extra smoothing term $d\_\gamma(\mu)=\gamma/2\|\mu\|_2^2$ and this $\mu$-update surely affects $w$ and $v$. Is it possible that this additional smoothing term intruduces some explicit regularization effects? Question/Suggestion 2: As Assumption 1 requires the design matrix to satisfy the $\delta$-incoherence and the sub-Gaussianality, it is suggested to discuss whether this assumption holds on real data. Question 3: Is it possible to discuss the empirical effectiveness of the proposed algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: It seems the authors did not sufficiently address the limitation due to the lack of explicit discussion in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing valuable feedback on our paper. We will explain each point one by one. If you have new questions or ideas, please don't hesitate to let us know. 1. Statement for the smoothing term $d_\gamma(\mu)=\gamma/2\|\mu\|^2$. We appreciate your inquiry regarding the smoothing term, as it's an excellent question that we'll delve into here in detail. Firstly, to address the computational challenges stemming from the non-differentiability of the hinge loss, we employ a smoothing approach by subtracting a prox function $d_\gamma(\mu)$. This smoothing technique not only simplifies computations but also introduces more convenient stopping criteria. In terms of purpose, the smoothing term $d_\gamma(\mu) = \gamma/2\|\mu\|^2$ is incorporated primarily for computational ease. Secondly, the prox parameter $\gamma$ we've chosen for specific computations is extremely small ($\gamma\le1/n$), and the vast majority of $\mu_i, i\in[n]$, remain at $0$ during the iterations. As a result, the smoothing term has minimal impact. Lastly, even if we were to optimize the original hinge loss directly, the gradient descent algorithm would still induce implicit regularization. This is because implicit regularization fundamentally arises from the algorithm itself, rather than from the smoothing process. However, direct optimization of the original hinge loss poses considerable theoretical challenges (owing to non-differentiability). Therefore, the introduction of smoothness is a clever approach, significantly benefiting both the computational and theoretical aspects. 2. Does Assumption 1 hold on real data? Assumption 1, in reality, doesn't overly constrain the analysis of real-world data. As we stress in the paper, while our proof relies on Assumption 1, the experimental outcomes present compelling evidence that its strict adherence isn't crucial for the success of our approach.
To exemplify, we generate data from the $t$-distribution, which clearly doesn't meet Assumption 1, yet the experimental results remain surprisingly strong. This showcases the potential for relaxing the constraints of Assumption 1 in practical scenarios. To provide a more tangible illustration, we've incorporated a practical example. In this case, we applied our algorithm to the colon-cancer dataset sourced from the LIBSVM website (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). This dataset comprises 62 samples with 2000 features—a quintessential high-dimensional sparse classification challenge. With 42 samples as the training set, 10 as the testing set, and 10 for validation, our approach achieves impressive results. Specifically, the final prediction accuracy on the training set is 0.8, sensitivity reaches 0.75, and specificity stands at 0.833—an exceptional performance. Looking ahead, our plans encompass introducing new real data analysis results in the revised version. 3. Discussion of the empirical effectiveness of the proposed algorithm SVM finds extensive use in classification tasks such as face detection, text categorization, image classification, bioinformatics, handwriting recognition, and medical data analysis. Yet, when confronted with high-dimensional data, SVM can encounter challenges, especially when the sample count is significantly smaller than the feature count. Take, for instance, the colon-cancer dataset we previously examined, where features reach 2000, while samples total only 62. Similar situations arise in text and image categorization, where image dimensions can be notably high. However, it's essential to acknowledge that high dimensionality often accompanies sparsity. Consider the news20 dataset from the libsvm website—featuring 19,996 samples and a staggering 1,355,191 features, yet maintaining a sparse 0.034% density. In such scenarios, conventional SVM algorithms can struggle to yield precise classifications. 
This is where our proposed algorithm, facilitated by implicit regularization, steps in. As demonstrated in our earlier analysis of the colon-cancer data, our algorithm thrives in the realm of high-dimensional and sparse data. It achieves effective classification outcomes while ensuring computational simplicity. In essence, our proposed algorithm holds immense promise for addressing classification challenges within today's high-dimensional, sparse data landscapes. Its potential impact spans practical applications like face detection, text and image categorization, and medical data analysis. This algorithm has the potential to tackle intricate classification tasks encountered in real-world settings, providing viable solutions. Furthermore, our algorithm excels in terms of computational efficiency. In the default experimental setup ($m=10$), the average runtime for executing the gradient descent algorithm is 2.75 seconds. This represents an 18% improvement in speed compared to the Lasso algorithm.
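To make the smoothing discussed in this rebuttal concrete, here is a minimal numerical sketch (our own illustration following the standard Nesterov construction with prox function $d_\gamma(\mu)=\gamma/2\|\mu\|^2$, not the authors' code) of the smoothed hinge loss and its closed-form $\mu$-update:

```python
import numpy as np

def smoothed_hinge(r, gamma):
    """Nesterov smoothing of the hinge loss (r)_+ with r = 1 - y * x^T beta.

    f_gamma(r) = max_{0 <= mu <= 1} mu * r - (gamma / 2) * mu**2,
    whose maximizer has the closed form mu* = clip(r / gamma, 0, 1).
    """
    mu = np.clip(r / gamma, 0.0, 1.0)        # closed-form mu-update
    vals = mu * r - 0.5 * gamma * mu ** 2    # smoothed loss value
    return vals, mu

# As gamma -> 0 the smoothed loss approaches the plain hinge (r)_+,
# and mu_i = 0 exactly when the margin condition y_i x_i^T beta >= 1 holds.
r = np.array([-1.0, 0.5, 2.0])
vals, mu = smoothed_hinge(r, gamma=1e-3)
print(np.round(vals, 4))  # close to [0, 0.4995, 1.9995] ~ max(r, 0)
```

This also illustrates the rebuttal's point that with a tiny $\gamma$ most $\mu_i$ sit at the boundary values $0$ or $1$, so the quadratic term $\gamma/2\,\mu_i^2$ contributes almost nothing.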
Summary: The paper proposes an iterative/implicit regularization algorithm for sparse SVM, using a Hadamard product overparametrization of the iterate $\beta = w \odot w - v \odot v$. In addition, a Nesterov smoothing of the hinge loss (replaced by its Moreau envelope, a term that is lacking in the paper) is performed. The main result is Theorem 2, stating that results equivalent to those of the explicit penalization approach can be obtained, under similar assumptions (subgaussianity of the samples, low incoherence of the design matrix), in terms of oracle error with high probability. Strengths: Implicit regularization is a very active area of research; implicitly regularized algorithms usually require a lower computational budget. Extending known results from regression and sparse matrix factorization to classification is interesting. Weaknesses: - no code is included in the supplementary material, and no code release is mentioned in the paper. - the writing of the paper can be strongly improved; it would benefit from being proofread by a proficient English reader. - E.g. in the first two lines, do not use "the" in "based on the gradient-based methods", and use plural in "models, such as the deep learning model" (there are more than one DL model...) - L20, "tend to converge to the global minimum.": a global minimum - "$s$ is number of signals" - near-oracle rate is achievable via explicit regularization using explicit regularization - show that gradient descent estimator: missing "the" - L110 why "that"? - L120 repeats the same idea twice - proxy parameter instead of prox parameter, and it would be clearer to call it "smoothing parameter" or Moreau envelope parameter. - etc etc. - the paper also lacks rigor and clarity in its writing in many places, e.g. "the zero component is initialized close to zero". What the authors mean is that the coordinates which are zero **in $\beta^*$** are initialized close to 0 **in the optimization variable**.
$\beta^*$ is not even defined. - the algorithm's stopping criterion is based on $\mu_i$, and the algorithm stops when all $\mu_i$'s are negative. This means that all training points are classified in their observed classes. In the case of label noise, doesn't this lead to overfitting? How is this stopping criterion related to the bound on the number of iterations $t$ in Theorem 1 and Proposition 2? - why is the performance measure not $\Vert \beta - \beta^* \Vert$ in the experiments? Why are the vectors normalized? This does not match the metric used in the theoretical results. - the approach simply drops the two L2 regularization terms, $\Vert w \Vert^2 + \Vert v \Vert^2$, that come from the original L1 regularization term $\Vert \beta \Vert_1$. Why? - the authors claim that there is no parameter in their approach, but the smoothing of the hinge loss does require one. - why is $d_\gamma$ called a prox-function, when it's just a strongly convex function? why introduce a general notation when only the squared L2 norm is used (l142) - why are there 3 orange curves in Fig 1c? shouldn't those be shaded plots? Where is the red curve? can you sort the legend by increasing value instead of having 1e-10, 1e-4, 1e-6 in that order? - How is Lasso's (L1 regularized SVM) regularization parameter tuned? - Unless I'm mistaken, the iterates of the proposed method are never exactly zero. Some thresholding needs to be performed in order to avoid dense vectors and many false positives (Fig 2c for example). How is this done? - "we present finite sample performances of our method in comparisons with the Lasso estimator": SVM is for classification, Lasso is for regression. How can you compare them? Do you mean L1 regularized SVM?
- under the assumption of Sub-Gaussian distribution: mention that this is about the noise - L114 the fact that there is a generative model with "true parameters" is completely missing - L139 it's a saddle point problem, not a saddle point function - In Thm 1 there is no need to mention randomness, as the generative model does not come into play. Making this a theorem is a bold move. - the legends in the plots are too small to be read. ### References - The proposed Hadamard parametrization is, up to my knowledge, proposed by Vaskevicius. This could be explicitly stated at L43 (though it is currently mentioned, it comes later in the paper) - A generic algorithm for any type of sparse regularization is proposed in "Iterative regularization for low complexity regularizers" (Molinari et al, 2022). In particular, handling of sparse classification is proposed in example 1. How does the proposed approach compare? - the authors should cite "Smoothing and first order methods: a unified framework" by Beck and Teboulle (2012). The submission's Theorem 1 is a known result from that paper, which holds for "inf-convolution"/Nesterov smoothing regardless of the function being smoothed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see weaknesses above Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: no societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reading our paper and sharing valuable feedback. Due to character limits, we can't address every question separately. We will fix all writing mistakes and unclear parts you've pointed out in the revised version. For the other questions, we address them here. If you have more questions, please let us know without hesitation. - Code. In our revised version, we will add a GitHub link. Notably, our algorithm is straightforward. We obtain the lasso estimator through the R packages 'sparseSVM' or 'penalizedSVM'. The oracle estimator can be derived from standard SVM. - Supplement on the setting of the initialization and the true parameter $\beta^*$. We explain more about the initial setting. Ideally, we'd initialize $w$ and $v$ to mirror the sparsity pattern of $\beta^*$, meaning $w_S$ and $v_S$ would be non-zero, while the values outside the support would be set to 0. Unfortunately, this isn't feasible because the support of $\beta^*$ is unknown. Instead, we initialize as $w_0=v_0=\alpha 1_{p\times 1}$. This approach strikes a balance: zero components receive almost-zero initial values ($\alpha$ being very small), and nonzero components receive non-zero initializations. Regarding the definition of the true $s$-sparse parameter $\beta^*$, which we forgot to declare in the main text, $\beta^*=\arg\min_{\beta}{\mathbb E}(1-yx^T\beta)_+$; we will also add it in the revised version. - Discussion of the stopping criterion. In an ideal setup, we can rely on $\mu_i=0$ to stop the iterations, as in the simulations. In general, we still keep the maximum number of iterations $T_1$ as a stopping condition (see Algorithm 1), and this stopping condition can be applied in all cases. If we know all parameters, we can estimate the ideal iteration range and subsequently set $T_1$. - Why is the measure normalized? We used normalized metrics to compare with the oracle estimator.
In simulations, the oracle estimator exhibits a large error before normalization, and a smaller error after normalization. For instance, when dealing with signals like (5,6,7,8) (normalized to (0.38,0.45,0.53,0.61)), the oracle estimator produces values of (1.27,1.57,1.88,2.11) (normalized to (0.37,0.45,0.54,0.61)). The GD estimator yields (3.3,3.97,5.01,5.74) (normalized to (0.36,0.43,0.54,0.62)). While we used the normalized metric for comparison, note that the GD estimator also performs well under the $||\beta_t-\beta^*||$ metric. - Reason why we drop the two L2 regularization terms. We explain more here. First, the motivation for implicit regularization comes from some interesting phenomena in deep learning. When applying deep learning to regression and classification, the regression function or classifier is represented by a deep neural network, but the loss function is nonconvex. In addition, neural networks are over-parameterized, which makes the regression or classification statistically ill-posed. However, people found that simple algorithms such as gradient descent tend to find the global minimum of the loss function. To understand why, [B,C] suggest that generalization stems from the implicit regularization of optimization algorithms. They observed that in over-parameterized models, the algorithm, usually a variant of gradient descent, prefers solutions that generalize well. Without adding any regularization term, it is the implicit preference of the algorithm that acts as a regularizer. For this reason, in recent work on implicit regularization in statistical models (e.g., regression), scholars follow this line by not adding any regularization terms and focusing on the implicit regularization of the algorithm. To be specific, in line 122, the new optimization problem is $\min{\cal E}_{{\cal Z}^n}(a,c)+\lambda(||a||^2+||c||^2)$. Following neural network training, we remove the $\ell_2$-norm term, a practice that is common in implicit regularization work [10, 28, 34].
[B] Neyshabur, B., Tomioka, R. and Srebro, N. (2015). In search of the real inductive bias: On the role of implicit regularization in deep learning. [C] Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. - The smoothing requires one parameter. The smoothing does require a parameter $\gamma$, but the constraint on $\gamma$ is quite relaxed, requiring no complex tuning. - Why is $d_\gamma$ called a prox-function? Our smoothing follows [22], hence certain terms are adopted from the original paper. We'll modify the expressions in the revised version. - Explanation of Fig 1b and Fig 1c. Fig 1b shows strong signal magnitudes over iterations, while Fig 1c shows error term magnitudes over iterations. Mean values are presented in both figures, omitting quartile-shaded areas for clarity. The presence of only 3 orange curves in Fig 1c results from the larger estimation error magnitude for 1e-4 compared to the other initial values. For enhanced clarity, an "Error Components" figure is added in "General.pdf". This figure aligns with Proposition 1: $\Arrowvert\beta_{t}\odot1_{S_1^c}\Arrowvert_{\infty}=\Arrowvert w_{t}\odot w_t\odot1_{S_1^c}-v_{t}\odot v_t\odot 1_{S_1^c}\Arrowvert_{\infty}\lesssim (\sqrt{\alpha})^2=\alpha$. We'll reorganize the legend in ascending order of values, as you suggested. - Tuning in Lasso. The tuning method in 'sparseSVM' and 'penalizedSVM' is cross-validation by default. - Thresholding procedure. After obtaining the GD estimator and the lasso estimator, we choose a threshold (like 1e-5); coordinates whose magnitudes are below the threshold are counted as 0. - Discussion of References. Compared with (Molinari et al., 2022), we have these distinctions: 1. Our focus is on gradient descent without explicit regularization, while Molinari et al. work on a primal-dual algorithm with regularization. 2. We analyse the error bound of the iterates, while Molinari et al. analyse a stability bound. 3.
We introduce the reparametrization to classification, studying gradient descent dynamics. This novel aspect is unexplored in classification. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answer, that only answers my questions partially (in particular, if normalization strongly affects the results, this should be mentioned and both settings included in the paper). Reading the paper again, I found more issues: - L122 it is claimed that one can minimize $\mathcal{E}(a, c)$, but that is definitely not clear: the constraint $a \odot c = \beta$ is missing, and plugging $a$ and $c$ into $\mathcal{E}$ yields an $a \odot a - c \odot c$ which is definitely not equal to $\beta$. This whole paragraph is unclear. - L491 has "based on the definition of $T^*$", however $T^*$ in Proposition 1 is defined only with a $\mathcal{O}$ formulation, not a precise value. Line 463 has the same issue. Also the last $(ii)$ above the inequalities at L498 should be a $(iii)$. - L497 there is a ':' before the $>$. Overall this gives the very strong feeling that the paper's writing was rushed and that more concerning issues may have slipped through. - As the code was not submitted, it was impossible to reproduce the numerical experiments. - Highlighting the sensitivity of the method with respect to $\gamma$ also seems mandatory for acceptance in my opinion. Overall, I am not completely convinced by the authors' answer (for example, their definition of $\beta^*$ as $\mathrm{argmin}\, \mathbb{E} (1 - Y X^\top \beta)_+$ does not match the experiments, where $y_i = \mathrm{Bernoulli}(x_i^\top \beta^*)$ is used; what is the guarantee that $\beta^*$ is effectively the minimizer of the population hinge loss? The Bernoulli distribution rather hints at a logistic loss). In my opinion, there are too many issues in the current version of the paper to accept it without a thorough second round of reviewing. I keep my vote for rejection.
--- Reply to Comment 1.1.1: Title: Detailed Replies for Your Comments (Part 1/3) Comment: Thank you very much for your valuable comments and suggestions. **I thank the authors for their answer, that only answers my questions partially (in particular, if normalization strongly affects the results, this should be mentioned and both settings included in the paper).** Thank you very much for your comments. We sincerely apologize for not answering your questions one by one, which is indeed due to the required character limits. Following your suggestion, we will provide detailed explanations of the normalization in Section 4 and add the numerical results of all the methods without normalization in the supplemental file to provide a comprehensive comparison. **Q1: L122 it is claimed that one can minimize ${\cal E}(a, c)$, but that is definitely not clear: the constraint $a\odot c=\beta$ is missing, and plugging $a$ and $c$ into ${\cal E}$ yields an $a \odot a -c \odot c$ which is definitely not equal to $\beta$. This whole paragraph is unclear.** Thanks a lot for your comments. We will rewrite the corresponding part, including the statement of the constraint $\beta=a\odot c$ in Line 122, and provide more detailed explanations on this issue in the revised version. Specifically, by Lemma 1 in [1], $\inf_{\beta}\frac1n\sum_{i=1}^n(1-y_ix_i^T\beta)_++\lambda||\beta||_1$ equals $\inf_{a,c}\frac1n\sum_{i=1}^n(1-y_ix_i^T(a\odot c))_++\lambda(||a||^2+||c||^2)/2$, and the right-hand side can be further reparameterized by using $w=\frac{a+c}{2}$ and $v=\frac{a-c}{2}$. Clearly, we have $\beta=w\odot w-v\odot v$, and $2p$ new parameters $w$ and $v$ are introduced to over-parameterize the original optimization problem. Finally, we drop the explicit penalty and focus on the empirical loss $\frac1n\sum_{i=1}^n(1-y_ix_i^T(w\odot w-v\odot v))_+$. It is worth noting that the same treatment is commonly used in the implicit regularization literature [2,3,4]. [1] Hoff, P. D.
(2017). Lasso, fractional norm and structured sparse estimation using a Hadamard product parametrization. [2] Vaskevicius, T., Kanade, V., & Rebeschini, P. (2019). Implicit regularization for optimal sparse recovery. [3] Fan, J., Yang, Z., & Yu, M. (2022). Understanding implicit regularization in over-parameterized single index model. [4] Zhao, P., Yang, Y., & He, Q. C. (2022). High-dimensional linear regression via implicit regularization. **Q2: L491 has "based on the definition of $T^*$", however $T^*$ in Proposition 1 is defined only with a ${\cal O}$ formulation, not a precise value. Line 463 has the same issue. Also the last $(ii)$ above the inequalities at L498 should be a $(iii)$.** Thank you very much for your comments. We apologize for missing the exact form of $T^*$ in the text. Actually, $T^*$ is defined as $T^*=\log(1/\alpha)/(4\sigma\eta\log n)$, and thus $a_1$ in line 463 is $1/(4\sigma)$. Moreover, the inequality notations above L498 will also be fixed in the revised version. **Q3: L497 there is a $':'$ before the $>$. Overall this gives the very strong feeling that the paper's writing was rushed and that more concerning issues may have slipped through.** We sincerely apologize again for the typos and unclear expressions in this paper. We will double-check the paper to correct the typos and also find some proficient English readers to proofread it. **Q4: As the code was not submitted, it was impossible to reproduce the numerical experiments.** Thanks a lot for your comment. For your convenience, we upload the code for Algorithm 1 at the following link: (https://drive.google.com/file/d/1ZyMz1KMaI5tR9TeAUMFHWXBuKfzWRiQ9/view?usp=sharing). Note that this link and the shared file do not contain any relevant author information.
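The reparameterization chain in the reply to Q1 can be checked numerically. Below is a small sketch (our own illustration with hypothetical variable names, not part of the submission) verifying both the Hadamard identities and the $\ell_1$ link from Hoff's Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5)
c = rng.normal(size=5)

# Reparameterize: w = (a + c)/2, v = (a - c)/2.
w, v = (a + c) / 2, (a - c) / 2

# Then beta = a * c = w*w - v*v (elementwise), and
# ||a||^2 + ||c||^2 = 2 * (||w||^2 + ||v||^2).
assert np.allclose(a * c, w * w - v * v)
assert np.isclose(a @ a + c @ c, 2 * (w @ w + v @ v))

# The l1 link: over all factorizations a * c = beta, the minimum of
# (||a||^2 + ||c||^2) / 2 is ||beta||_1 (by AM-GM, (a_i^2 + c_i^2)/2 >=
# |a_i c_i| = |beta_i|), attained at |a_i| = |c_i| = sqrt(|beta_i|).
beta = rng.normal(size=5)
a_opt = np.sign(beta) * np.sqrt(np.abs(beta))
c_opt = np.sqrt(np.abs(beta))
assert np.allclose(a_opt * c_opt, beta)
assert np.isclose((a_opt @ a_opt + c_opt @ c_opt) / 2, np.abs(beta).sum())
print("identities verified")
```

This makes explicit why dropping the $\lambda(\|a\|^2+\|c\|^2)/2$ term removes precisely the surrogate of the $\ell_1$ penalty, leaving only the algorithm's implicit bias.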
--- Reply to Comment 1.1.2: Title: Detailed Replies for Your Comments (Part 2/3) Comment: **Q5: Highlighting the sensitivity of the method with respect to $\gamma$ also seems mandatory for acceptance in my opinion.** Thank you very much for your suggestion. We will add a sensitivity analysis for $\gamma$ in the revision. Specifically, the detailed experimental setup follows Lines 250-261 and $\gamma$ takes values in $[2.5\times10^{-5},1\times 10^{-3}]$. The experiments are replicated multiple times, and the averaged numerical results in terms of estimated signal strengths are shown in the following tables, where the standard deviation is in parentheses.

| $\gamma$ | $2.5\times10^{-5}$ | $5\times10^{-5}$ | $7.5\times10^{-5}$ | $1\times10^{-4}$ |
| --- | --- | --- | --- | --- |
| $\Vert\beta_t-\beta^*\Vert/\Vert\beta^*\Vert$ | 0.480(0.053) | 0.469(0.054) | 0.463(0.055) | 0.460(0.057) |
| signal 1 | 5.316(0.482) | 5.427(0.522) | 5.471(0.540) | 5.503(0.534) |
| signal 2 | 5.340(0.626) | 5.444(0.642) | 5.508(0.648) | 5.533(0.675) |
| signal 3 | 5.036(0.813) | 5.165(0.844) | 5.235(0.867) | 5.253(0.876) |
| signal 4 | 5.350(0.653) | 5.481(0.653) | 5.548(0.665) | 5.583(0.697) |

| $\gamma$ | $2.5\times10^{-4}$ | $5\times10^{-4}$ | $7.5\times10^{-4}$ | $1\times10^{-3}$ |
| --- | --- | --- | --- | --- |
| $\Vert\beta_t-\beta^*\Vert/\Vert\beta^*\Vert$ | 0.455(0.058) | 0.461(0.058) | 0.470(0.057) | 0.479(0.055) |
| signal 1 | 5.557(0.578) | 5.491(0.596) | 5.385(0.586) | 5.289(0.574) |
| signal 2 | 5.595(0.680) | 5.543(0.664) | 5.438(0.658) | 5.348(0.652) |
| signal 3 | 5.299(0.869) | 5.238(0.856) | 5.150(0.830) | 5.065(0.812) |
| signal 4 | 5.651(0.671) | 5.573(0.670) | 5.484(0.650) | 5.381(0.627) |

From the tables, it is clear that the method is not sensitive to the choice of $\gamma$, in the sense that the estimation errors and the estimated signal strengths are very close to each other, and the estimation accuracy
slightly decreases when $\gamma$ increases, which is still within the acceptable range. --- Reply to Comment 1.1.3: Title: Detailed Replies for Your Comments (Part 3/3) Comment: **Overall, I am not completely convinced by the authors' answer (for example, their definition of $\beta^*$ as $\mathrm{argmin}\,\mathbb{E}(1-YX^\top\beta)_+$ does not match the experiments, where $y_i = \mathrm{Bernoulli}(x_i^\top \beta^*)$ is used; what is the guarantee that $\beta^*$ is effectively the minimizer of the population hinge loss? The Bernoulli distribution rather hints at a logistic loss.)** Thank you very much for your comments. In fact, the generating scheme of $y$ adopted in the original submission follows a treatment similar to [5,8], where the task of variable selection for support vector machines is considered. In the revision, we also use some other generating schemes, adopted from [6,7]. Specifically, we generate random data from the following two models. - Model 1: $X\sim MN(0_p,\Sigma)$, $\Sigma=(\sigma_{ij})$ with nonzero elements $\sigma_{ij}=0.4^{|i-j|}$ for $1\le i,j\le p$, $P(y=1|X)=\Phi(X^T\beta^*)$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, $\beta^*=(1.1,1.1,1.1,1.1,0,\ldots,0)^T$ and $s=4$. - Model 2: $P(Y=1)=P(Y=-1)=0.5$, $X|(Y=1)\sim MN(\mu,\Sigma)$, $X|(Y=-1)\sim MN(-\mu,\Sigma)$, $s=5$, $\mu=(0.1,0.2,0.3,0.4,0.5,0,\ldots,0)^T$, $\Sigma=(\sigma_{ij})$ with diagonal entries equal to 1, nonzero entries $\sigma_{ij}=-0.2$ for $1\le i\not=j\le s$, and other entries equal to $0$. The Bayes rule is $sign(1.39X_1+1.47X_2+1.56X_3+1.65X_4+1.74X_5)$ with Bayes error $6.3$%. The setup of the parameters follows Lines 250-261 and the experiments are replicated multiple times. The averaged estimation and prediction results are shown in the following table.
| Generating Model | Model 1 | Model 2 |
| --- | --- | --- |
| $\Vert\beta_t/\Vert\beta_t\Vert-\beta^*/\Vert\beta^*\Vert\Vert$ (GD) | 0.164(0.086) | 0.150(0.106) |
| $\Vert\beta_t/\Vert\beta_t\Vert-\beta^*/\Vert\beta^*\Vert\Vert$ (Oracle) | 0.155(0.076) | 0.105(0.048) |

From this table, we can easily see that the GD estimator approaches the oracle estimator in terms of estimation error. Note that the full numerical results of the newly added examples are placed at the very beginning of the supplemental file, which further indicates the performance of the proposed method. [5] Zhang, H. (2006). Variable Selection for Support Vector Machines via Smoothing Spline ANOVA. [6] Peng, B., Wang, L., & Wu, Y. (2016). An error bound for l1-norm support vector machine coefficients in ultra-high dimension. [7] Zhang, X., Wu, Y., Wang, L., & Li, R. (2016). Variable selection for support vector machines in moderately high dimensions. [8] He, H., Lv, S. & Wang, J. (2020). Variable Selection for Classification with Derivative-induced Regularization. **In my opinion, there are too many issues in the current version of the paper to accept it without a thorough second round of reviewing. I keep my vote for rejection.** We deeply apologize for not answering your questions one by one in our previous response. To be honest, our original response contained more than 12,000 characters; however, the character limit is 6,000 this year. We really appreciate your precious comments and suggestions on this paper, and will revise the paper correspondingly, including having it proofread by proficient English readers, enlarging the legends in the figures, and so on. In the end, we want to thank you again and hope you will reconsider this paper. --- Reply to Comment 1.1.4: Title: Supplementary answers to some of your first-round questions.
Comment: We sincerely apologize for not answering your questions one by one, which is indeed due to the required character limits. We try our best to answer the rest of your previous questions below. **Why we design the initialization this way.** Thanks a lot for your comment. In this paper, we initialize $w_0$ and $v_0$ as $w_0=v_0=\alpha 1_{p\times 1}$, and such a construction provides a good compromise. Specifically, the zero components get nearly zero initializations, which form the majority under the sparsity assumption, and the nonzero components get nonzero initializations. Even though we initialize each component at the same value, the non-zero components move quickly, while the zero components remain small. This is how over-parameterization differentiates active components from inactive components, and a similar treatment is widely adopted in the literature [2,3,4]. We can see this phenomenon in Figures 1 and 6. **Why we drop the $\ell_2$ norm.** Thanks a lot for your comment. We want to clarify that the research goal of this paper is to explore the phenomenon of implicit regularization. This phenomenon was first observed in deep learning: when training neural networks, gradient descent and its variants tend to find the global minimum despite over-parameterization. It is worth noting that a regularization term is often not added in neural network training. [9,10] show that it is the implicit preference of the optimization algorithm itself that plays the role of a regularizer. Motivated by this phenomenon, research on implicit regularization in other learning tasks has gradually increased in recent years. [2,3,4] follow this line of thought by not adding any explicit regularization term to the optimization objective and focusing on the role of the optimization algorithm itself. We also follow this line to explore the implicit preference of gradient descent in SVMs, both theoretically and empirically.
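The initialization argument above (non-zero components moving quickly while zero components remain small) can be seen in a tiny toy experiment. This is a simplified sketch with a hypothetical smooth surrogate loss, not the paper's Algorithm 1; the point is only that the Hadamard parameterization turns gradient descent into multiplicative updates:

```python
import numpy as np

# Toy illustration: beta = w*w - v*v with w0 = v0 = alpha * 1. By the chain
# rule, dL/dw = 2*w*g and dL/dv = -2*v*g where g = dL/dbeta, so plain
# gradient descent acts multiplicatively:
#   w <- w * (1 - 2*eta*g),  v <- v * (1 + 2*eta*g).
alpha, eta, T, p = 1e-6, 0.1, 200, 10

target = np.zeros(p)
target[:2] = 1.0          # the "true" 2-sparse signal in this toy

w = np.full(p, alpha)
v = np.full(p, alpha)
for _ in range(T):
    beta = w * w - v * v
    g = beta - target      # gradient of the surrogate 0.5*||beta - target||^2
    w = w * (1 - 2 * eta * g)
    v = v * (1 + 2 * eta * g)

beta = w * w - v * v
print(np.round(beta[:3], 3))  # → [1. 1. 0.]: supported coords grow, rest stay ~0
```

The supported coordinates see a gradient of order one and grow geometrically out of the $\alpha$-scale initialization, while the zero coordinates (whose gradient vanishes in this noiseless toy) stay at the balanced initialization, mirroring the behaviour described for Figures 1 and 6.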
**Unclear expressions and unreadable images.** Thank you for your comments; we will address these issues one by one, for example, declaring that "Lasso estimator" specifically refers to the $\ell_1$-regularized SVM and changing "saddle point function" to "saddle point problem", and we will make the figures more readable in the revised version for readers' convenience. [9] Neyshabur, B., Tomioka, R. and Srebro, N. (2015). In search of the real inductive bias: On the role of implicit regularization in deep learning. [10] Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O. (2017). Understanding deep learning requires rethinking generalization.
Summary: This paper designs a regularization-free algorithm for high-dimensional support vector machines (SVMs) by integrating over-parameterization with Nesterov's smoothing method, and provides theoretical guarantees for the induced implicit regularization phenomenon. Strengths: This paper provides a regularization-free gradient method that has proven to be effective in practice and is supported by some theoretical guarantees. The theoretical results are novel, and the experiments are comprehensive. Weaknesses: The theoretical results are relatively weak. 1. The constants $c_1\sim c_4$ in Theorem 2 are not clearly specified, so we cannot see how these constants change with the problem size (are they independent of $p$?), which weakens the results. 2. The authors did not compare the strength of the $\delta$-incoherence assumption and the RIP (Restricted Isometry Property) assumption (only a comparison of verification difficulty was made). This comparison should be included. 3. The restrictions on the initial values in Assumption 2 may seem somewhat stringent. The authors may provide an explanation for why they are designed this way. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to the Weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you put into reviewing our work. We've reviewed your comments and summarized our responses below. Please let us know if you have any additional comments or concerns. 1. Clear specifications of the constants $c_1 \sim c_4$ in Theorem 2. Thank you for highlighting this for us. We have prepared a more detailed explanation of the constants. In Theorem 1, the constants $c_1$ and $c_2$ arise from inequalities related to sub-Gaussian random variables; these constants are independent of $n$ and $p$. Likewise, the constants $c_3$ and $c_4$ are associated with the interval of iterations, and we can give the specific form of this interval, $[5\log(m/\alpha^2)/\eta m,\log(1/\alpha)/(4\sigma\eta\log n)]$, thereby setting $c_3=5$ and $c_4=1/(4\sigma)$. Notably, our assumptions do not involve $\sigma$, so $c_3$ and $c_4$ remain independent of $n$ and $p$. In conclusion, these constants do not undermine the theoretical results. 2. Comparison of the $\delta$-incoherence assumption and the RIP assumption. This is a pertinent question, and we appreciate the chance to provide a clearer explanation of these two assumptions. Incoherence and RIP are powerful performance measures for guaranteeing sparse recovery and have been widely used in many contexts. Sub-Gaussian matrices satisfy the low-incoherence and RIP properties with high probability. In fact, incoherence and RIP imply each other [E], showing their close connection. However, compared to the incoherence assumption, the RIP poses a couple of challenges. Firstly, RIP verification is NP-hard for pre-constructed design matrices. Secondly, the RIP's formal complexity renders it challenging to compute. In prior explorations of implicit regularization in matrix factorization and linear regression, some scholars have flagged issues with RIP assumptions.
For instance, RIP cannot capture the behaviour of the $\ell_2$-loss in the high-noise regime [M], and the RIP condition could potentially be replaced [28, 34]. Consequently, we opt for the more flexible incoherence assumption in this paper, a choice supported by recent work [18]. [E] Candes, E.J. (2008). The restricted isometry property and its implications for compressed sensing. *Comptes rendus. Mathematique*, *346*(9-10), pp.589-592. [M] Ma, J. and Fattahi, S. (2023). Global convergence of sub-gradient method for robust matrix recovery: Small initialization, noisy measurements, and over-parameterization. *Journal of Machine Learning Research*, 24(96), 1-84. 3. Detailed explanations of Assumption 2. The details in Assumption 2 are also worth looking into. In addition to the explanations in the paper, we provide more insights here based on both theory and experiments. Firstly, the assumptions about the starting value $\alpha$, parameter $\gamma$, and step size $\eta$ mainly stem from the theoretical side of the algorithm. For instance, $\alpha$ controls the strength of the estimated weak signals and error components, $\gamma$ manages the approximation error in smoothing, and $\eta$ affects how accurately we estimate strong signals. So, assumptions about these quantities are necessary. As for the assumptions on $\alpha$ and $\eta$, where we require $\alpha \lesssim 1/p$ and $\eta \lesssim 1/(\kappa \log p)$, note that we use $\lesssim$ rather than a strict $\le$. These assumptions are not too strict, and in our tests, making these quantities extremely small is not necessary for good results. For example, we tried $\eta = 0.5$ (much larger than $1/(\kappa \log p)$), and even with a larger $\alpha$ such as $10^{-4}$, we still obtained good estimates. As for $\gamma$, the assumption is strict, but in our experiments we can be somewhat flexible.
We tested different $\gamma$ values, namely $10^{-4}$, $10^{-3}$, and $5 \times 10^{-3}$, and obtained estimation errors of $0.383$, $0.404$, and $0.519$, with the four strong signals estimated as $[5.8,5.8,6.6,6.6]$, $[5.6,5.6,6.4,6.4]$, and $[4.4,4.5,5.2,5.2]$, respectively. Even with substantially larger $\gamma$ values, we still obtained good estimation errors and strong-signal estimates. To sum up, this kind of parameter setting is standard: related studies such as [10, 18, 28, 34] make similar assumptions about initial values and step sizes. In experiments these conditions can be relaxed, and there is no need to make the parameters extremely small to obtain the desired results. --- Rebuttal Comment 1.1: Comment: Based on the author's response, I intend to raise my score to 6. --- Reply to Comment 1.1.1: Title: Reply to the Comment Comment: Thank you so much for your help. Your comments help us improve and gain a better understanding of the method. We appreciate your time and effort in reviewing our work.
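For readers following this exchange, the procedure the rebuttal is defending (gradient descent on the reparameterized coefficients $\beta = w \odot w - v \odot v$ with a Nesterov-smoothed hinge loss, small initialization $\alpha$, and early stopping) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the data generation, function names, and parameter values ($\alpha = 10^{-4}$, $\eta = 0.5$, $\gamma = 10^{-3}$) are illustrative choices in the spirit of the rebuttal's experiments.

```python
import numpy as np

def smoothed_hinge_grad(z, gamma):
    # Derivative of the Nesterov-smoothed hinge loss
    # l_gamma(z) = max_{u in [0,1]} { u*(1 - z) - (gamma/2)*u^2 }.
    return -np.clip((1.0 - z) / gamma, 0.0, 1.0)

def overparam_svm_gd(X, y, alpha=1e-4, gamma=1e-3, eta=0.5, n_iter=300):
    """Regularization-free gradient descent on beta = w*w - v*v,
    where the early-stopping time plays the role of the regularizer."""
    n, p = X.shape
    w = np.full(p, alpha)  # small identical initialization, beta_0 = 0
    v = np.full(p, alpha)
    for _ in range(n_iter):
        beta = w * w - v * v
        z = y * (X @ beta)  # margins y_i * x_i^T beta
        g = (X * (y * smoothed_hinge_grad(z, gamma))[:, None]).mean(axis=0)
        w = w - eta * 2.0 * w * g  # chain rule through  w ⊙ w
        v = v + eta * 2.0 * v * g  # chain rule through -v ⊙ v
    return w * w - v * v
```

On separable synthetic data with a sparse true coefficient, the support coordinates grow multiplicatively while off-support coordinates stay near the initialization scale, which is the implicit-regularization effect the rebuttal describes.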
Summary: The paper tackles the problem of implicit regularization for classification in the context of over-parameterization. Starting from an $L^1$-regularized SVM, they voluntarily over-parameterize the feature vector $\beta = w \odot w - v \odot v$ using two vectors $w, v \in \mathbb{R}^p$. The optimization process then consists of (i) dropping the explicit regularization, (ii) smoothing the hinge loss function, (iii) applying gradient descent with early stopping to get the solution. A theoretical analysis of the estimated parameter is provided, showing under mild conditions that the proposed scheme behaves as well as the explicit regularization scheme if the algorithm is stopped at a certain time. Numerical experiments highlight the benefits of the approach on synthetic data. Strengths: - The topic is of interest for the machine learning community. - The paper is well written and mathematically sound. - The developed method is novel. - The experimental study is convincing. Weaknesses: - No real weakness Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The problem considered here is a linear SVM with $L^1$ penalization. Would the results presented here still hold for a non-linear SVM based on a finite-dimensional feature map? - At one point, you drop the explicit regularization to only minimize an empirical risk. What is the ingredient in the algorithm that allows to recover properties similar to the explicit regularization? Once the regularization is dropped, we could say that the original problem became another one - e.g. Tychonov regularization - but no: the solution presented here retains specific properties usually associated with the $L^1$ penalty. Could you comment on that? Typos/remarks: - Line 18: on gradient based methods - I believe it would be of interest to cite [A] alongside [1] and [27] in the introduction, given the link they explore with the Lasso problem. - Line 44: Why $u_0$ and not $w_0$?
Please be consistent with the notation from line 42 - Line 47: "near-oracle rate is achievable via explicit regularization using explicit regularization" - Line 145: "and has an explicit form that" writes ? - Line 146: given that you already use a superscript in equation (4) for $\mathcal{Z}^n$, you should use a subscript here to avoid overcharge. - Line 229: "that scales as $\mathcal{O}$" - Line 236: the subscripts are not correct (two times the complement) [A]: Iterative regularization for convex regularizers; Molinari, Cesare and Massias, Mathurin and Rosasco, Lorenzo and Villa, Silvia; AISTATS 2021 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations have overall been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the valuable feedback you've provided on our paper. We will go through each point and provide explanations. If you have any new questions or ideas, please feel free to share them with us. 1. Future work on implicit regularization in non-linear SVM. Exploring implicit regularization in nonlinear SVM is a direction we're considering for future work. Note that the proposed method can be extended to the non-linear SVM with a finite-dimensional feature map, which includes the polynomial kernel. Specifically, the kernel SVM with a linear kernel is exactly the same as our proposed method. Once a polynomial kernel is used, we may reparameterize the corresponding coefficients with some higher-order vectors, and the theoretical results may be established with some modifications. In the revised version, we have added some detailed discussion of the potential route for extending the proposed method to the nonlinear SVM with a finite-dimensional feature map. 2. What allows our algorithm to recover properties similar to explicit regularization? Implicit regularization is the focus of our paper, and we provide a comprehensive explanation here. Many optimization challenges in networks involve aspects like nonconvexity [C1], nonlinearity, and over-parameterization. These problems potentially lead to subpar performance in regression or classification tasks from a statistical standpoint. However, what's interesting is that practical observations show that simple algorithms like (stochastic) gradient descent often manage to find the global minimum of the loss function, even when faced with nonconvexity. This happens without the need for explicit regularization [B, C2].
In other words, in over-parametrized statistical models (like our experimental setup), although the optimization problem contains bad local minima with large generalization error, the choice of optimization algorithm, usually a variant of gradient descent, typically guards the iterates against bad local minima and prefers solutions that generalize well. As a result, we don't introduce an extra regularization term into the optimization objective. Instead, the optimization algorithm itself exhibits implicit preferences that effectively play the role of regularization. We can interpret the iteration count $t$ as a kind of regularization parameter, similar to $\lambda$ in the lasso penalization algorithm. Our approach attains the desired performance when the number of iterations falls within the appropriate range. [C1] Yun, C., Sra, S. and Jadbabaie, A. (2019). Small nonlinearities in activation functions create bad local minima in neural networks. In *International Conference on Learning Representations*. [B] Neyshabur, B., Tomioka, R. and Srebro, N. (2015). In search of the real inductive bias: On the role of implicit regularization in deep learning. In *International Conference on Learning Representations*. [C2] Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. In *International Conference on Learning Representations*. 3. Typos and Remarks. Thank you for your thorough feedback. We will diligently address each of the typos you've highlighted. Furthermore, [A] extensively covered implicit regularization within linear models and delved into the $L^1$ penalty and early stopping. This reference is incredibly valuable to us, and we will certainly include it in the revised version. We appreciate your keen observation and for bringing this to our attention. --- Rebuttal Comment 1.1: Title: Acknowledging rebuttal Comment: I thank the authors for their feedback.
Other reviewers have raised valid criticisms, which I believe have been addressed by the authors. --- Reply to Comment 1.1.1: Title: Reply to the Comment Comment: We greatly appreciate your feedback, which has provided us with a deeper understanding and has guided us in considering future directions for our work.
Rebuttal 1: Rebuttal: We highly appreciate the invaluable feedback we received from reviewers for our work. Your comments aid us in identifying areas where we can enhance our research and make our findings clearer and more accessible to our readers. We have considered all the comments and summarized the responses below. We have selected five key issues to provide a global explanation here. 1. **Discussion of Sparse SVM.** Several reviewers have highlighted the significance of investigating sparse SVM, prompting us to delve deeper into this topic. In modern applications, the challenge of classification arises when dealing with an abundance of redundant features, such as in face detection, text classification, image classification, bioinformatics, handwriting recognition, and medical data analysis. To illustrate, in medical imaging, our aim is to construct a classifier using specific pixels from high-dimensional data, while in text analytics, we seek to develop an accurate classifier using a modest subset of words from a vast dictionary (for instance, the libsvm website's news20 dataset comprises 19,996 samples and a feature count as high as 1,355,191, with a sparsity of just 0.034%). Acknowledging sparsity within SVMs adds an intriguing dimension to the study of practical applications. Conversely, although sparsity in regression has garnered extensive attention in recent years, the corresponding theoretical exploration in classification problems remains relatively limited. The focus generally gravitates towards analyzing aspects like generalization error rate and empirical risk, with less emphasis on error bounds. Thus, there exists substantial room for theoretical advancement in the realm of sparse SVM.
In summary, our study of implicit regularization in classification problems is a very meaningful topic, both as a complement to the theoretical work on sparse SVM and as a new algorithm for the real-world problem of large-scale data classification. 2. **Novelty of our work.** Our work is the first to design an unregularized gradient-based algorithm for SVM by leveraging over-parameterization, and it provides the relevant theoretical guarantees for implicit regularization. The reparameterization technique has not previously been applied to classification problems, although it has been widely used in the study of regression. Reparametrization is not computationally burdensome. Although it complicates the theoretical derivation, with reparametrization we can theoretically analyse how strong signals, weak signals, and error terms change during the iterations, and we can clearly study the dynamics of gradient descent, giving a clearer picture of the algorithmic iteration process; this has not been done in previous papers. With the help of reparameterization, we introduce an error bound for the iterative solution that is independent of the number of iterations $t$. In addition, we add Nesterov's smoothing, which is a very clever step that computationally overcomes the non-differentiability of the hinge loss and facilitates our theoretical proofs. 3. **Further discussion of assumptions**. Several reviewers have asked about the assumptions, and this is a good opportunity for us to explain them further. Firstly, the assumptions we made in the theory section were mainly motivated by theoretical proofs; for example, the initial size $\alpha$ controls the size of the error term, so we need to make assumptions about it. However, our assumptions are not strict; for example, our assumption on $\alpha$ is $\alpha\lesssim 1/p$.
In our experiments, we have found that the theoretical constraints can be relaxed; for example, we can still get good estimation results with heavy-tailed distributions, which means that our method will not be constrained by the assumptions in practical applications. 4. **Addressing unclear and inappropriate expressions in the text.** We acknowledge the presence of unclear and inappropriate expressions, as well as writing errors, in the text. We are committed to rectifying each of these issues in the revised version to enhance the overall quality of the article. We extend our gratitude to the reviewers for their diligent review, as their feedback will undoubtedly contribute to the article's improvement. 5. **Future directions.** Firstly, as we mentioned above, the assumptions are mainly derived for theoretical considerations and can be relaxed in practical applications, and we would like to explore to what extent these assumptions can be relaxed, which is a direction for future work also mentioned in other studies on implicit regularization. Secondly, as suggested by the reviewers, we can consider extending the current study to nonlinear SVM. This could involve incorporating kernel techniques to delve into the realm of implicit regularization in nonlinear classification. (A response to Reviewer 6PiA is included in the pdf file we have provided.) Pdf: /pdf/16bb3158db2a2816a528e7b5694e703c47c53897.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the implicit regularization in over-parameterized (sparse) support vector machine. The paper by nature is an extension of Vaskevicius et al [28], applying the quadratic reparametrization on SVM (hinge loss). Due to the non-differentiability of hinge loss, Nesterov’s smoothing is applied to enhance the computational efficiency. The authors then propose a regularization free gradient descent algorithm and show that the sparse solution is obtained with early stopping. The convergence result is demonstrated via numerical simulations. This is an active area of research, and the contribution is relevant. My concern is more about the setting/significance of this work. The authors claim that “this is the first study that investigates implicit regularization via gradient descent and establishes the near-oracle rate specifically in classification” (line 49). First of all, the implicit regularization via gradient descent in classification has been studied in [12, 27] for logistic/exponential losses. Therefore, the contribution to hinge-loss is appreciated (with the remedy of Nesterov's smoothing). Secondly, when it comes to classification problems, sparsity is relatively minor and the implicit regularization effect is shown via max-margin/separability. More discussion would be helpful about the sparse SVM. These two papers [W]&[Z] below may be closely related to the implicit regularization effect studied in this paper but missed in the related works. [W] B. Woodworth et al, Kernel and Rich Regimes in Overparameterized Models, COLT, 2020. [Z] P. Zhao et al, High-Dimensional Linear Regression via Implicit Regularization, Biometrika, 2022. Strengths: 1. The implicit regularization under hinge-loss is novel. 2. Some theoretical results in this paper are interesting. Weaknesses: 1. The importance of sparsity in SVM and how that affects the classification performance is not discussed. 2. Some metrics used in simulations are not defined. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I didn’t find the definition of either $\beta^\star$ or the data generating mechanism. This may cause a big concern if the authors implicitly assume $y=X\beta^\star$ somewhere, which would be very restrictive. In line 258, the simulation follows a logistic model. Is that also the case for theorem 2? If $\beta^\star$ is a minimizer of the population version of the hinge loss, is that for Eq. (1) or Eq. (3)? 2. Formatting issue in [11] 3. Is there a reason that the oracle method performs the worst in Figure 2 (b)? 4. Is Nesterov’s smoothing necessary here? How about plain sub-gradient descent with quadratic parameterization? 5. For Proposition 2, isn’t the upper bound of t in Proposition 1 still needed? I didn’t check the proof details, but intuitively the convergence on the support is only guaranteed while the error accumulation outside the support remains small. Correct me if I’m wrong. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The significance/importance/potential application of this work is not well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback on our paper. We'll carefully review each point and provide explanations. If you have any new questions or ideas, please don't hesitate to share them with us. 1. More discussion of the sparse SVM. In modern applications, we frequently encounter classification challenges amidst an abundance of redundant features. The prevalence of large-scale sparse data is particularly pronounced in fields like finance, document classification, image analysis, and gene expression analysis. To illustrate, in genomics, only a handful of genes from thousands of candidates are utilized to construct a classifier for disease diagnosis and drug discovery. Similarly, in spam classification, an accurate classifier is sought using a relatively small selection of words from a dictionary containing numerous different terms. In such scenarios, there are potential limitations associated with standard SVMs or SVMs augmented with explicit regularization (such as the $L^2$ norm). From an applied perspective, incorporating sparsity into SVMs presents an intriguing avenue for exploration. Conversely, it's well-established that sparsity considerations have undergone extensive and in-depth exploration in regression in recent years. However, the exploration of theoretical assurances for sparse SVMs is comparably less extensive. Typically, the focus revolves around generalization error and empirical risk analysis, with even fewer discussions on variable selection and error bounds. Our study of implicit regularization in classification problems, as presented in this paper, stands as a complementary endeavor to the theoretical pursuits within the realm of sparse SVMs. 2. Differences with existing studies of implicit regularization in classification. Compared to this existing work [12, 27], our approach adds a crucial extra step: quadratic parameterization.
This step involves using two vectors, $w$ and $v$, and combining them in a special way: $\beta=w\odot w-v\odot v$. Although this unique step makes our theory more intricate, it lets us precisely understand how gradient descent evolves over time, which is explained further in Appendix B. We can theoretically determine how the real signal changes at certain stages and how the error is controlled, which helps us understand more clearly how the parameters change during gradient descent; this is difficult to see in previous classification work. Additionally, while previous work often relates the speed of convergence to the number of steps taken, like ${\cal O}(1/\log t)$, our paper's theory shows that the gradient descent method can achieve a rate of convergence that doesn't depend on the step count $t$, within a certain range. This is a theoretical improvement in how we understand the process. 3. Supplement to the real coefficients $\beta^*$. Thank you very much for pointing this out for us; we omitted the definition of the true parameter $\beta^*$ in the main text. The key result of the paper is an error bound on $||\beta_t-\beta^*||^2$, where $\beta^*$ is the minimizer of the population version of the hinge loss function (with respect to $\beta$, without the $\ell_1$ norm), that is, $\beta^*=\arg\min_{\beta}{\mathbb E}(1-yx^T\beta)_+$. We will certainly make the necessary revisions in the updated version. Moreover, our data generating mechanism in the simulation is within the domain of Theorem 2. 4. Reason why the oracle method performs the worst in Figure 2(b). In high-dimensional settings, both the GD estimator and the lasso estimator tend to overestimate the number of signals, which is more than the true number of signals. Consequently, both estimators incorporate a larger number of features, potentially resulting in an improved but ultimately misleading predictive performance. 5.
Necessity of Nesterov's smoothing. In over-parameterized applications, like the case of gene data classification, the non-smooth nature of the hinge loss poses significant computational and accuracy challenges. Standard first-order methods like subgradient and stochastic gradient techniques don't achieve rapid convergence for large-scale problems. Second-order methods like Newton's method and quasi-Newton methods can be used, but they're computationally demanding due to the need for the Hessian matrix in each iteration. As a result, a common approach to tackle these complexities in large-scale problems is to 'smooth' the hinge loss. Numerous smoothing methods have been proposed and proven effective in real-world data applications. Given this landscape, using Nesterov's smoothing makes sense in over-parameterized scenarios. It's practically essential. Additionally, incorporating Nesterov's smoothing aids in our theoretical deductions. If we were to directly analyze the gradient algorithm with quadratic parameterization for the non-smooth hinge loss (which is computationally possible), its non-differentiability would complicate the theoretical deductions. In essence, Nesterov's smoothing plays a significant role both computationally and theoretically. 6. Explanation of Proposition 2. Your observation is spot-on, and we appreciate you bringing it up. There is indeed an upper limit on the number of iterations to manage the size of the error term outside the support set. The confusion might arise because we presented the error term and strong signal term in two separate propositions. This layout could give the impression that Proposition 2 lacks an upper bound on the number of iterations. We will address this issue and rectify both propositions in the revised version. Thank you for highlighting this, as it helps us enhance the clarity of our work. 7.
References and Formatting issue We take note that you have provided us with new literature [W], related to implicit regularization, which we will cite in the revised version. Meanwhile, we will revise the formatting issue you mentioned, thanks for pointing it out! --- Rebuttal Comment 1.1: Comment: My major concern about sparse SVM was resolved by the authors' reply to AC's follow-up questions. I'd like to see those discussions added to the main context, which would help highlight the importance of this work. The definition of $\beta^\star$ and an accurate statement of proposition 2 are crucial, please be sure to revise them. After all, I will increase my score to 6, as the authors addressed most of my concerns. --- Reply to Comment 1.1.1: Title: Reply to the Comment Comment: Thank you very much for your reply, we will add these discussions and the relevant definitions in the revision.
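As a concrete reference for the smoothing step discussed in this rebuttal: one standard form of Nesterov's smoothing of the hinge loss $(1-z)_+$ is $\ell_\gamma(z) = \max_{u\in[0,1]}\{u(1-z) - \tfrac{\gamma}{2}u^2\}$, which is differentiable everywhere and deviates from the hinge loss by at most $\gamma/2$. The sketch below evaluates this closed form; the function names are illustrative and the exact smoothing used in the paper may differ in details.

```python
import numpy as np

def hinge(z):
    # Plain (non-differentiable) hinge loss max(0, 1 - z).
    return np.maximum(0.0, 1.0 - z)

def nesterov_smoothed_hinge(z, gamma):
    # Closed form of max_{u in [0,1]} { u*(1 - z) - (gamma/2)*u^2 }:
    # linear for m = 1 - z >= gamma, quadratic near the kink, zero for m <= 0.
    m = 1.0 - z
    return np.where(m >= gamma, m - gamma / 2.0,
                    np.where(m <= 0.0, 0.0, m * m / (2.0 * gamma)))
```

The approximation error is uniformly bounded, $0 \le (1-z)_+ - \ell_\gamma(z) \le \gamma/2$, which is consistent with the rebuttal's observation that small $\gamma$ values (e.g. $10^{-4}$ to $5\times 10^{-3}$) barely affect estimation while restoring differentiability.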
3D-LLM: Injecting the 3D World into Large Language Models
Accept (spotlight)
Summary: Due to the limited perception of 3D space by LLMs and VLMs, this paper proposes 3D-LLMs to understand spatial relationships, affordances, physics, and layout in 3D scenes. The authors generate 300K 3D-language pairs to train the 3D-LLMs, which enable better performance on various 3D understanding tasks. Strengths: 1. The motivation is reasonable. The idea is clear. 2. The experimental results cover a wide range of tasks, including 3D captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, and navigation. The results are substantial and solid. 3. The organization and writing of the paper are fluent. Weaknesses: 1. Is there a potential issue of data leakage? For instance, when generating the 3D-language pairs using ScanNet, the authors utilized object semantic information and bounding boxes, which might benefit the model's performance on downstream tasks such as ScanRefer and ScanQA. 2. The baselines, such as VoteNet+MCAN and ScanRefer+MCAN, should also be trained using the generated data to ensure fairness. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Which 3D feature extractors were used for Objaverse, ScanNet, and HM3D, respectively? How many pairs were extracted for each? 2. Since the LLMs and VLMs have strong few-shot capabilities, how do the 3D-LLMs perform in zero/few-shot scenarios on held-out datasets? 3. As mentioned in the weaknesses, what results would be obtained if other baselines were also trained using the generated data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It would be better to present the performance of 3D-LLMs in zero/few-shot scenarios on held-out datasets. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Thank you for your insightful and constructive comments! We have added additional experiments and modified our paper according to your comments.*

> **Q1: Which 3D feature extractors were used for Objaverse, ScanNet, and HM3D, respectively? How many pairs were extracted for each?**

* For Objaverse, since we render images using Blender, which gives us correct depths and camera poses, we use *direct reconstruction* to reconstruct Objaverse scenes.
* For ScanNet, we use the original images they took to construct the scans. Since camera perspective distortion is inevitable for sensors used to take real-world pictures, we cannot use direct reconstruction to reconstruct the 3D scenes (in fact, we find that no two adjacent partial point clouds could align using direct reconstruction). Thus, we use *feature fusion with SLAM* to build the features for ScanNet.
* For HM3D, we use mixed ways. For 3DMV-VQA [1], they do not give the depths for multi-view images. We thus use their released code that builds the features using a *neural field*. However, we also use habitat-lab to collect the data of more scenes to train diverse tasks, and we are able to use *direct reconstruction* to reconstruct the data generated by habitat-lab. For example, for navigation data we need to reconstruct the partial point cloud at each observation step, and thus *direct reconstruction* is faster than a *neural field*.

> **Q2: Since the LLMs and VLMs have strong few-shot capabilities, how do the 3D-LLMs perform in zero/few-shot scenarios on held-out datasets?**

Thank you for the suggestion! We attach the results of the pre-trained models below.

**Table A.
Zero-shot Performances of Pretrained Models, on ScanQA.**

| | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDER | EM |
|-|-|-|-|-|-|-|-|-|
| VoteNet+MCAN* | 28.0 | 16.7 | 10.8 | 6.2 | 11.4 | 29.8 | 54.7 | 17.3 |
| ScanRefer+MCAN* | 26.9 | 16.6 | 11.6 | 7.9 | 11.5 | 30 | 55.4 | 18.6 |
| ScanQA* | 30.2 | 20.4 | 15.1 | 10.1 | 13.1 | 33.3 | 64.9 | 21.0 |
| LLaVA (zero-shot) | 7.1 | 2.6 | 0.9 | 0.3 | 10.5 | 12.3 | 5.7 | 0.0 |
| **3D-LLM (flamingo) - PT** | 22.5 | 12.1 | 5.5 | 2.7 | 10.0 | 24.3 | 49.8 | 14.0 |
| **3D-LLM (BLIP2-opt) - PT** | 26.4 | 14.3 | 7.2 | 3.4 | 11.9 | 27.1 | 52.7 | 13.2 |
| **3D-LLM (BLIP2-flant5) - PT** | 28.6 | 17.0 | 9.9 | 6.6 | 12.3 | 28.0 | 52.7 | 13.8 |
| 3D-LLM (flamingo) | 30.3 | 17.8 | 12.0 | 7.2 | 12.2 | 32.3 | 59.2 | 20.4 |
| 3D-LLM (BLIP2-opt) | 35.9 | 22.5 | 16.0 | 9.4 | 13.8 | 34.0 | 63.8 | 19.3 |
| 3D-LLM (BLIP2-flant5) | 39.4 | 25.2 | 18.3 | 12.3 | 14.9 | 35.9 | 69.3 | 20.5 |

**Table B. Zero-shot Performances of Pretrained Models, on 3DMV-VQA.**

Methods | Concept | Counting | Relation | Comparison | Overall
---|---|---|---|---|---
CNN+LSTM | 57.8 | 22.1 | 35.2 | 59.7 | 37.8
MAC | 62.4 | 19.7 | 47.8 | 62.3 | 46.7
MAC(V) | 60.0 | 24.6 | 51.6 | 65.9 | 50.0
NS-VQA | 59.8 | 21.5 | 33.4 | 61.6 | 38.0
ALPRO | 65.8 | 12.7 | 42.2 | 68.2 | 43.3
LGCN | 56.2 | 19.5 | 35.5 | 66.7 | 39.1
3D-Feature+LSTM | 61.2 | 22.4 | 49.9 | 61.3 | 48.2
3D-CLR (Ours) | 66.1 | 41.3 | 57.6 | 72.6 | 57.7
**3D-LLM (flamingo) - PT** | 60.5 | 18.9 | 47.6 | 63.7 | 45.7
**3D-LLM (BLIP2-opt) - PT** | 60.1 | 17.8 | 44.2 | 58.9 | 42.9
**3D-LLM (BLIP2-flanT5) - PT** | 61.0 | 18.4 | 45.0 | 62.0 | 43.9
3D-LLM (flamingo) | 68.9 | 32.4 | 61.6 | 68.3 | 58.6
3D-LLM (BLIP2-opt) | 63.4 | 30.7 | 57.6 | 65.2 | 54.9
3D-LLM (BLIP2-flanT5) | 68.1 | 31.4 | 55.1 | 69.7 | 54.6

**Table C.
Zero-shot Performances of Pretrained Models, on ScanRefer.**

| | Acc@0.25 |
|-|-|
| OracleRand | 29.9 |
| OracleRefer | 40.6 |
| VoteNetRand | 10.0 |
| SCRC | 18.7 |
| OneStage | 20.4 |
| VoteNetGRU | 39.5 |
| ScanRefer | 41.2 |
| **3DLLM(BLIP2t5)-PT** | 20.1 |
| **3DLLM(BLIP2t5)-new-PT** | 26.7 |
| 3DLLM(BLIP2t5) | 30.3 |
| 3DLLM(BLIP2t5)-new | 35.2 |

*"new" means the new model we trained after submission*

&nbsp;

> **Q3: What results would be obtained if other baselines were also trained using the generated data?**

* Thank you for the suggestion! However, these baselines rely on privileged information that is not available in our current pre-training dataset. Training VoteNet and ScanRefer requires ground-truth segmentations of the objects in the scenes, and we do not have these annotations for the Objaverse dataset. We do think such information is crucial, and one future step for us is to label segmentations for the Objaverse dataset to help improve the performance of 3D-LLMs.
* The ScanQA baseline uses an answer classification module, in addition to object localization, detection, and object classification modules, to propose answers. Thus, it does not have the language generation ability that is essential for our tasks (*e.g.*, captioning and dialogue).
* Despite these limitations, we still provide the results of VoteNet+MCAN and ScanRefer+MCAN below, where we use pre-trained VoteNet and ScanRefer and train the MCAN part (the results may not be very meaningful since there is a non-trivial domain gap between the data VoteNet and ScanRefer were trained on and our pre-training data). We pre-train the models on our pre-training dataset and finetune them on ScanQA.

&nbsp;

**Table D.
Performances of the ScanQA baselines when pretrained on 3D-language data.**

| | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr | EM |
|---|---|---|---|---|---|---|---|---|
| VoteNet+MCAN* | 28.0 | 16.7 | 10.8 | 6.2 | 11.4 | 29.8 | 54.7 | 17.3 |
| VoteNet+MCAN* (pretrained) | 26.4 | 15.4 | 9.1 | 5.8 | 10.4 | 25.3 | 50.6 | 15.9 |
| ScanRefer+MCAN* | 26.9 | 16.6 | 11.6 | 7.9 | 11.5 | 30 | 55.4 | 18.6 |
| ScanRefer+MCAN* (pretrained) | 28.3 | 17.0 | 12.1 | 7.6 | 11.2 | 29.7 | 52.6 | 16.9 |

&nbsp;

[1] 3D Concept Learning and Reasoning from Multi-View Images. Hong et al. 2023

&nbsp;

*We sincerely appreciate your comments. Please feel free to let us know if you have further questions.*

&nbsp;

Best,
Authors

---

Rebuttal Comment 1.1: Comment: The response clearly resolves my concerns. Thus, I raise my final rating from 6 to 7.
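As background for the *direct reconstruction* pipeline discussed in Q1 above, here is a minimal numpy sketch of lifting a depth map into a world-frame point cloud, assuming a simple pinhole intrinsics matrix and a known camera-to-world pose (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """Lift an H x W depth map to 3D world points (direct reconstruction).

    depth: (H, W) metric depth per pixel
    K: (3, 3) pinhole intrinsics
    cam_to_world: (4, 4) camera pose
    """
    H, W = depth.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Homogeneous pixel coordinates (u, v, 1), flattened row-major.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # camera-frame rays with z = 1
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]   # world-frame points

# Toy check: identity pose, principal point at the image center, unit depth.
K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
pts = backproject_depth(np.ones((4, 4)), K, np.eye(4))
# The pixel at (u=2, v=2) sits on the optical axis, so it maps to (0, 0, 1).
```

With multiple posed views, repeating this per frame and concatenating (or voxel-fusing) the point sets yields the reconstructed scene onto which 2D features can be aggregated.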
Summary: This paper proposes a new family of 3D-LLMs that can take 3D representations as inputs and generate responses. It introduces a series of 3D-language data generation pipelines to produce a dataset of 300K 3D-language pairs across different tasks for training. Strengths: The proposed approach seems valid and functioning, and the authors say they plan to release the 3D-language dataset as well. Overall, it demonstrates how to use 2D features to build 3D features and then inject them into an LLM. Weaknesses: The weakness of this paper lies in how the 3D representation is gathered; the current pipeline relies on 2D multi-view images, which introduces extra complexity/latency and limitations. Also, when 3D data is projected to a series of multi-view images, there is likely to be some information loss. The reviewer does appreciate that the authors recognize this issue and propose some remedies, such as the "3D Localization Mechanism", which seems to function to some extent, but it might not be the optimal approach eventually. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Typo: there is an extra "." in line 132. The reviewer is curious whether the authors have tried to train the 3D encoder directly using a language-3D dataset, without leveraging images as the bridge, and how it would perform differently. In response to that suggestion, the reviewer would like to mention some related works that have explored the alignment of 3D-image-language triplets and consequently trained the 3D encoder to have language context [1][2]. Given the abundance of 3D object data, employing a pre-trained 3D encoder and subsequently fine-tuning it with the LLM could be a potentially promising strategy, and it might yield more fruitful results than training a model from scratch using the same data the authors have collected.
[1]: "ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding" -- CVPR2023 [2]: "CLIP2: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data" -- CVPR2023 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: refer to the weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; *We appreciate your positive and insightful comments! We address your concerns in detail below.* &nbsp;

> **Q1: The reviewer is curious whether the authors have tried to train it directly using the language-3D dataset for the 3D encoder directly without leveraging the images as the bridge, and how will it perform differently. In response to that suggestion, the reviewer would like to mention some related works that have explored the alignment of 3D-image-language triplets and consequently trained the 3D encoder to have language context. Given the abundance of 3D object data, employing a pre-trained 3D encoder, and subsequently fine-tuning it with the LLM in this case could be a potentially promising strategy, and it might yield more fruitful results compared to training a model from scratch using the same data the authors have collected.**

Thank you for the suggestion! It will be immensely helpful in enhancing the paper's quality and helping readers understand its contributions. We replace 3D-LLMs' features with features from pretrained 3D encoders (ULIP) [1]. The results shown in Tables A and B suggest that LLMs with 3D encoders perform very poorly, inferior to 3D-LLMs by a large margin.

&nbsp;

**Table A.
Experimental Results of Pretrained 3D Encoder with LLMs, on ScanQA.**

| | EM | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr |
|-|-|-|-|-|-|-|-|-|
| VoteNet+MCAN* | 17.3 | 28.0 | 16.7 | 10.8 | 6.2 | 11.4 | 29.8 | 54.7 |
| ScanRefer+MCAN* | 18.6 | 26.9 | 16.6 | 11.6 | 7.9 | 11.5 | 30 | 55.4 |
| ScanQA* | 21.0 | 30.2 | 20.4 | 15.1 | 10.1 | 13.1 | 33.3 | 64.9 |
| flamingo-SingleImage | 16.9 | 23.8 | 14.5 | 9.2 | 8.5 | 10.7 | 29.6 | 52.0 |
| flamingo-MultiView | 18.8 | 25.6 | 15.2 | 9.2 | 8.4 | 11.3 | 31.1 | 55.0 |
| BLIP2-flant5-SingleImage | 13.3 | 28.6 | 15.1 | 9.0 | 5.1 | 10.6 | 25.8 | 42.6 |
| BLIP2-flant5-MultiView | 13.6 | 29.7 | 16.2 | 9.8 | 5.9 | 11.3 | 26.6 | 45.7 |
| **ULIP_PointMLP+flant5** | 7.5 | 18.4 | 7.2 | 2.7 | 1.4 | 7.4 | 18.1 | 26.9 |
| **ULIP_PointMLP+opt** | 8.4 | 19.1 | 7.3 | 2.7 | 1.9 | 7.4 | 18.2 | 28.0 |
| **ULIP_PointBERT+flant5** | 14.5 | 29.2 | 17.9 | 10.3 | 6.1 | 11.6 | 28.1 | 50.9 |
| **ULIP_PointBERT+opt** | 13.8 | 28.8 | 16.9 | 9.7 | 5.9 | 11.3 | 27.9 | 50.5 |
| 3D-LLM (flamingo) | 20.4 | 30.3 | 17.8 | 12.0 | 7.2 | 12.2 | 32.3 | 59.2 |
| 3D-LLM (BLIP2-opt) | 19.3 | 35.9 | 22.5 | 16.0 | 9.4 | 13.8 | 34.0 | 63.8 |
| 3D-LLM (BLIP2-flant5) | 20.5 | 39.4 | 25.2 | 18.3 | 12.3 | 14.9 | 35.9 | 69.3 |

&nbsp;

**Table B. Experimental Results of Pretrained 3D Encoder with LLMs, on Captioning.**

| Models | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L |
|-|-|-|-|-|-|-|
| **ULIP-PointMLP+flant5** | 24.6 | 20.8 | 14.9 | 10.4 | 12.5 | 35.3 |
| **ULIP-PointMLP+opt** | 24.4 | 20.4 | 14.3 | 10.8 | 12.1 | 34.1 |
| **ULIP-PointBERT+flant5** | 26.0 | 21.9 | 15.7 | 10.0 | 14.8 | 33.8 |
| **ULIP-PointBERT+opt** | 24.3 | 20.7 | 15.8 | 11.3 | 12.1 | 39.5 |
| 3D-LLM (flamingo) | 36.2 | 24.8 | 19.0 | 16.0 | 17.6 | 40.8 |
| 3D-LLM (BLIP2-opt) | 35.7 | 26.7 | 20.3 | 15.9 | 18.7 | 40.1 |
| 3D-LLM (BLIP2-t5) | 39.8 | 31.0 | 24.7 | 20.1 | 17.7 | 42.6 |

&nbsp;

> **Q2: The weakness of this paper is on how to gather the 3D representation, current pipeline seems to be relying on 2D multi-view images, which will introduce extra complexity/latency and limitations.
Also, when you project 3D data to a series of multi-view images, it's likely that there will be some information loss, even though the reviewer does appreciate that the authors do realize this issue and come up with some remedy approaches like the "3D Localization Mechanism", it seems functioning to some extent, but it might not be the optimal approach eventually.**

* We admit that one limitation of the paper is that the 3D representation relies on 2D multi-view images, which may result in extra complexity and information loss. This limitation is also covered in Line 295 of our submission.
* However, we want to emphasize that building LLMs on the 3D world is extremely challenging, due to the severely limited amount of existing 3D data and the extreme difficulty of gathering more such data. There are two potential solutions to approach this unexplored area:
  * The first is to encode 3D features using pre-trained 3D encoders and input them to LLMs, which is non-trivial due to the limited scale and diversity of 3D assets. Existing 3D encoders focus on simple objects instead of real-world scenes [1,2]. In Q1, we show that such pre-trained 3D encoders perform very poorly.
  * The second is to learn models of the 3D world by leveraging the abundance of 2D multi-view images and aggregating 2D features into 3D. There has been a recent surge of works using 2D features to construct 3D representations [3,4], and there is significantly more 2D multi-view data of scenes than there are 3D scans of these scenes. These methods further show superior ability in zero-shot and open-vocabulary reasoning, indicating that this is a promising and appropriate strategy for processing 3D scenes at the moment.
* **While we do not conclude that the second solution, utilized by 3D-LLM, is better than the first one, and it indeed has several limitations, it is the most practical solution given the limited data for the time being.** 3D-LLM serves as a promising first step toward exploring LLMs grounded in the 3D physical world and brings inspiration to the community. We believe that in the future, more powerful models will be built upon a combination of the two solutions. More discussion will be added in the revision.

&nbsp;

[1] ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
[2] CLIP2: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data
[3] ConceptFusion: Open-set Multimodal 3D Mapping
[4] 3D Concept Learning and Reasoning from Multi-View Images

&nbsp;

*We sincerely appreciate your comments. Please feel free to let us know if you have further questions. Thank you again for your time!*

&nbsp;

Best,
Authors

---

Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the authors' rebuttal; the reviewer has read the rebuttal in detail and would like to maintain the positive rating.

---

Rebuttal 2: Title: Follow-up on rebuttal Comment: Dear Reviewer, Thanks again for your suggestions to strengthen this work! As the rebuttal period is approaching its end, we want to know whether our response has answered your questions and addressed your concerns. If not, we are more than happy to provide further modifications. If yes, would you kindly consider raising the score? Thanks again for your truly constructive and insightful feedback. Best, Authors

---

Rebuttal 3: Comment: Dear Reviewer 6ag4, We are nearing the end of the discussion period with the authors. The authors have responded in detail to your review, so please at minimum read and acknowledge their rebuttal, and state which (if any) issues you still do not find to be satisfactorily addressed. You should do so as soon as possible. Thanks, AC
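The pre-trained 3D encoders discussed in this thread (ULIP, CLIP2) are trained by aligning point-cloud embeddings with language embeddings through a CLIP-style contrastive objective. Below is a rough numpy sketch of that symmetric InfoNCE loss, with random vectors standing in for real encoder outputs; this is a generic illustration of the technique, not the released code of either paper:

```python
import numpy as np

def clip_style_loss(pc_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (point cloud, text) pairs."""
    pc = pc_emb / np.linalg.norm(pc_emb, axis=1, keepdims=True)
    tx = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = pc @ tx.T / temperature      # (B, B) cosine-similarity matrix
    labels = np.arange(len(pc))           # positives sit on the diagonal

    def xent(l):                          # row-wise cross-entropy
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the point-cloud->text and text->point-cloud directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Identical embeddings (perfect alignment) should score much better than
# embeddings paired with the wrong captions.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = clip_style_loss(emb, emb)
mismatched = clip_style_loss(emb, emb[::-1])
```

In practice, the point-cloud branch would be a PointBERT/PointMLP-style encoder and the text branch a frozen CLIP text encoder; only the pairing objective is sketched here.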
Summary: This paper proposes a new framework named 3D-LLM, which leverages LLMs to understand the 3D world. Specifically, 3D-LLM can take 3D point clouds as inputs to conduct various 3D tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. To achieve this goal, this paper designs three types of prompting mechanisms to generate over 300k 3D-language data pairs. It proposes a 3D feature extractor that obtains 3D features from rendered multiview images and takes pretrained 2D VLMs as backbones to train 3D-LLM. Benchmark results on both held-out and held-in data show the effectiveness of the proposed framework, which achieves SOTA performance on the ScanQA benchmark. Strengths: 1. This paper is well written. 2. The idea of injecting the 3D world into large language models is novel. 3. The technical contributions, including data generation, the overall framework, and the experimental analysis, are solid and convincing. 4. The proposed 3D-LLM achieves impressive quantitative and qualitative results. Weaknesses: This paper is satisfactory. I only have some minor comments. 1. In Table 4, the ablation study could be more comprehensive; the baseline should remove all of the position embedding, location tokens, and localization. The ablation study should begin with this baseline and show all possible combinations of these three designs. 2. In Figure 1, three approaches generate 3D features; what is the effectiveness of each of them? The experimental results should be included. 3. Limitations of 3D-LLM should be discussed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; *We appreciate the positive and constructive comments from you, which are essential for improving the paper! We have conducted your suggested experiments. We will update all results in the paper.* &nbsp; > **Q1: In Table 4, the ablation study could be more comprehensive, where the baseline should remove all of the position embedding, location tokens, and localization. The ablation study should begin with this baseline and show all possible combinations of these three designs.** Sorry for the confusion! The localization in the table actually means position embeddings plus location tokens. As can be seen from the paper Line 202: *"It consists of two parts: 1) Augmenting 3D features with position embeddings 2) Augmenting LLM vocabularies with location tokens"*. Therefore, we think we have covered all the cases. We will modify the writing to make it more understandable here. &nbsp; > **Q2: In Figure 1, three approaches generate 3D features, what is the effectiveness of each of them? The experimental results should be included.** * The three approaches are meant for different kinds of data. For example, for real-world scans like ScanNet, camera perspective distortion is inevitable and thus we cannot use direct reconstruction to reconstruct the 3D scenes (in fact, we find that no two adjacent partial point clouds could align using direct reconstruction). Thus, we use *feature fusion with SLAM* to build the features for ScanNet. For 3DMV-VQA, the depth data is not released, and thus we can only use *neural field* to reconstruct the 3D scenes. On the other hand, for Objaverse data, since we are rendering with blender which gives us correct camera poses and depths, we use *direct reconstruction*. * To shed light on the performances of different approaches, we conduct an experiment where we use all three different features to construct the ScanNet features, with 3D-LLM (BLIP2-flant5) as our model, and show them in Table A. 
We can see that the *direct reconstruction* result is inferior to the other two, mainly because the ScanNet features obtained via direct reconstruction contain noise due to camera perspective distortion. The results of *feature fusion* and *neural field* are on par since both can correctly reconstruct the 3D scenes. We will add more experimental results concerning this question to the camera-ready version.

&nbsp;

**Table A. Comparison among 3D feature generation approaches.**

| | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr | EM |
|---|---|---|---|---|---|---|---|---|
| 3D-LLM (Direct Reconstruction) | 34.6 | 22.1 | 15.7 | 9.1 | 13.5 | 33.0 | 55.7 | 18.9 |
| 3D-LLM (Feature Fusion) | 39.4 | 25.2 | 18.3 | 12.3 | 14.9 | 35.9 | 69.3 | 20.5 |
| 3D-LLM (Neural Field) | 39.1 | 25.0 | 18.5 | 12.1 | 15.2 | 36.0 | 67.3 | 20.3 |

&nbsp;

> **Q3: Limitations of 3D-LLM should be discussed.**

In Line 295 of the paper, we gave one limitation of 3D-LLM: *"A limitation is that the 3D feature extractor relies on multi-view images"*. We would like to share some further limitations that are crucial for future improvement of this work:

* For the grounding mechanism, we input detailed text descriptions to refer to an object and train 3D-LLMs to output location tokens for these objects. However, the referring sentence might contain multiple hops of relations, and directly training on such a corpus is non-trivial since the models need to simultaneously learn both semantics and relationships. A better way to improve the grounding mechanism is to assign location tokens to each noun in all data, as in [1].
* We find that 3D-LLMs, like many 2D VLMs, are bad at grounding relationships [2]. This problem is more salient for 3D tasks, which involve more complex spatial relationships. Relational modules need to be added to the models.
* We do not have ego-centric or robot-centric data in our 3D-language data. Therefore, current 3D-LLMs are unable to solve embodied robotics tasks. Such data and tasks are crucial for equipping 3D-LLMs with the ability to understand the complex 3D physical world. &nbsp; [1] Kosmos-2: Grounding Multimodal Large Language Models to the World. Zhiliang Peng et al. 2023 [2] Going Beyond Nouns With Vision & Language Models Using Synthetic Data. Paola Cascante-Bonilla et al. 2023 &nbsp; *Please let us know if you have any further questions for our paper. We sincerely appreciate your time for reviewing this paper and raising the valuable suggestions! Thank you again!* &nbsp; Best, Authors --- Rebuttal Comment 1.1: Comment: My concerns have been addressed in the rebuttal and I am satisfied. Therefore, I will maintain my previous rating of 8. --- Rebuttal 2: Title: My last question Question Comment: This work is quite intriguing. My last question is when the generated dataset, the training code, and the evaluation code will be released. I believe that these materials would be helpful for researchers in the community and hope they can be publicly available as soon as possible. --- Rebuttal Comment 2.1: Comment: Dear Reviewer, Thank you for asking. The datasets and codes will be publicly available very soon. Best, Authors
Summary: In this paper, the authors try to leverage LLMs to understand 3D scenes. Specifically, the authors use both grounding and captioning/QA datasets to tune the model, and adopt three 2D-to-3D feature transformation techniques to give the model a sense of the 3D features. Strengths: 1. The motivation is clear. 2. The paper is easy to read. Weaknesses:

- When you aggregate the 2D features to 3D, it could be time-consuming and ill-posed.
- The performance under the pre-trained weights on all sources of the pretraining data is missing. The authors should report such results; in this way, one can see the effect of finetuning.
- How is hallucination measured?
- In 3D captioning, the authors should report baseline methods from recent papers.
- I think this paper is more of a transition paper. What if we are given (1) a pure 3D point cloud, such as ModelNet, or (2) a point cloud with limited images, such as KITTI? Here the ill-posed problem is that some 3D points can never be mapped into RGB pixel(s). In this sense, RGBD cameras are the only lucky sensor that can do the 2D-to-3D projection.
- The results on ScanRefer are far from satisfactory. I know that this task is hard; however, the baselines are not strong enough. I do not think it makes sense to put the random guessing or one-stage method numbers here...
- What is the resolution of the position embeddings and location tokens? Given a large 3D scene such as LiDAR, accurately localizing the objects needs quite a lot of tokens. For example, if a scene is 50m*50m*6m, then you need a lot of tokens.
- For grounding, what we expect is <loc_x><loc_y><loc_z>; what if the outputs are not what we expect, such as <loc_x><loc_z> or <loc_x>text1<loc_y>text2<loc_z>?

Technical Quality: 3 good Clarity: 3 good Questions for Authors: See comments above. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to express our sincere gratitude for your thorough review of our paper. We greatly appreciate your suggestions, which are crucial for improving the quality of our paper.

> Q1: Aggregating 2D features to 3D is ill-posed

Thanks for raising this concern. We want to emphasize that building LLMs on the 3D world is extremely challenging, due to the severely limited amount of existing 3D data and the extreme difficulty of gathering more such data. There are two potential solutions to approach this unexplored area:

The first is to encode 3D features using pre-trained 3D encoders and input them to LLMs, which is non-trivial due to the limited scale and diversity of 3D assets. Existing 3D encoders focus on simple objects instead of real-world scenes [1,2]. Taking the suggestion of reviewer 6ag4, we replace 3D-LLM's features with features from pretrained 3D encoders (ULIP). The results below show that their performance is very poor.

| | EM | B1 | B2 | B3 | B4 | METEOR | ROUGE-L | CIDEr |
|-|-|-|-|-|-|-|-|-|
| ULIPPointMLP+t5 | 7.5 | 18.4 | 7.2 | 2.7 | 1.4 | 7.4 | 18.1 | 26.9 |
| ULIPPointMLP+opt | 8.4 | 19.1 | 7.3 | 2.7 | 1.9 | 7.4 | 18.2 | 28.0 |
| ULIPPointBERT+t5 | 14.5 | 29.2 | 17.9 | 10.3 | 6.1 | 11.6 | 28.1 | 50.9 |
| ULIPPointBERT+opt | 13.8 | 28.8 | 16.9 | 9.7 | 5.9 | 11.3 | 27.9 | 50.5 |
| 3DLLM(BLIP2opt) | 19.3 | 35.9 | 22.5 | 16.0 | 9.4 | 13.8 | 34.0 | 63.8 |
| 3DLLM(BLIP2t5) | 20.5 | 39.4 | 25.2 | 18.3 | 12.3 | 14.9 | 35.9 | 69.3 |

The second is to learn models of the 3D world by leveraging the abundance of 2D multiview images and aggregating 2D features into 3D. There has been a recent surge of works using 2D features to construct 3D representations [3,4], and there is significantly more 2D multiview data of scenes than there are 3D scans of these scenes. These methods further show superior ability in zero-shot and open-vocabulary reasoning, indicating that this is a promising and appropriate strategy for processing 3D scenes at the moment. 
**While we do not conclude that the second solution, utilized by 3D-LLM, is better than the first one, and it indeed has several limitations, it is the most practical solution given the limited data for the time being.** 3D-LLM serves as a promising first step toward exploring LLMs grounded in the 3D physical world and brings inspiration to the community. We believe that in the future, more powerful models will be built upon a combination of the two solutions. More discussion will be added in the revision.

> Q2: Performance under pre-trained weights

Attached below.

| | EM | B1 | B2 | B3 | B4 | METEOR | ROUGE-L | CIDEr |
|-|-|-|-|-|-|-|-|-|
| 3DLLM(flamingo) | 14.0 | 22.5 | 12.1 | 5.5 | 2.7 | 10.0 | 24.3 | 49.8 |
| 3DLLM(BLIP2opt) | 13.2 | 26.4 | 14.3 | 7.2 | 3.4 | 11.9 | 27.1 | 52.7 |
| 3DLLM(BLIP2t5) | 13.8 | 28.6 | 17.0 | 9.9 | 6.6 | 12.3 | 28.0 | 52.7 |

> Q3: Hallucination

We explore two metrics [5]. CHAIR: portion of hallucinated objects among all mentioned ones. HRF@k: portion of frequent objects among hallucinated ones. We report scores on two tasks.

| | | CHAIR↓ | HRF@10↓ |
|-|-|-|-|
| Task Decom. | t5 | 55.6 | 69.7 |
| | BLIP2t5-Image | 35.4 | 56.8 |
| | 3DLLM(BLIP2t5) | 7.5 | 39.2 |
| Dialog | t5 | 51.6 | 59.9 |
| | BLIP2t5-Image | 29.2 | 54.6 |
| | 3DLLM(BLIP2t5) | 5.5 | 33.2 |

> Q4: 3D captioning baseline

We attach the result of OpenShape [6], a model trained on 3 kinds of captions on 800k Objaverse data. We evaluate on the OpenShape test set using pre-trained 3D-LLMs without finetuning on their training set. There is no data leakage among splits.

| | B4 | B3 | B2 | B1 | METEOR | ROUGE-L |
|-|-|-|-|-|-|-|
| OpenShape | 1.8 | 3.6 | 8.4 | 19.7 | 5.9 | 18.4 |
| 3DLLM(BLIP2opt) | 8.5 | 11.0 | 15.2 | 21.7 | 10.3 | 29.4 |
| 3DLLM(BLIP2t5) | 9.0 | 11.4 | 16.7 | 23.6 | 11.0 | 31.3 |

Results on our Objaverse test set:

| | B4 | B3 | B2 | B1 | METEOR | ROUGE-L |
|-|-|-|-|-|-|-|
| OpenShape | 0.9 | 2.0 | 4.8 | 11.7 | 4.3 | 14.9 |
| 3DLLM(BLIP2t5) | 20.1 | 24.7 | 31.0 | 39.8 | 17.7 | 42.6 |

3D-LLMs outperform OpenShape by a large margin, even on OpenShape's test set, with less than 10% of OpenShape's training data. We give both models' qualitative results in the PDF. We'll run more models if the reviewer has suggestions. 
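The CHAIR metric used in Q3 above (from [5]) can be sketched as follows; this is a hedged reading of the metric on a toy scene-object list, not the authors' evaluation code:

```python
def chair_score(mentioned_objects, scene_objects):
    """CHAIR: fraction of mentioned objects that do not exist in the scene.

    mentioned_objects: object words extracted from the generated text
    scene_objects: set of objects actually present in the 3D scene
    """
    if not mentioned_objects:
        return 0.0
    hallucinated = [o for o in mentioned_objects if o not in scene_objects]
    return len(hallucinated) / len(mentioned_objects)

# Toy example: the response mentions 4 objects, 1 of which ("piano") is not
# in the scene, so CHAIR = 1/4.
score = chair_score(["chair", "table", "lamp", "piano"],
                    {"chair", "table", "lamp", "sofa"})
# score == 0.25
```

HRF@k would then restrict attention to the hallucinated objects and measure how many of them belong to the k most frequent objects of the dataset; that counting step is analogous and omitted here.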
> Q5: Some 3D points can never be mapped into pixels.

* We agree that our framework is not suitable for all kinds of point clouds, but note that many existing point clouds can be mapped to and rendered with 2D images to get features.
* In the PDF, we show the results on ModelNet with rendered images, features, and 3D-LLM responses. For KITTI, we could take a partial point cloud at each step, as we already did for our navigation task. KITTI results are also in the PDF.
* For point clouds that may not be easily rendered, we could learn a separate stream to directly encode 3D features from the point clouds and align them with the 3D features from RGB images. This allows us to take advantage of both 3D data and plentiful 2D multiview data.

> Q6: ScanRefer results

* We gladly share our new result, outperforming the previous one by 5%. We also add the requested stronger baselines. The new result is achieved by a few modifications: 1) previously, we followed the original version of [3] using MaskFormer to get dense 2D features; now we use Segment Anything. 2) Position embeddings are added to the features rather than concatenated.

| | Acc@0.25 |
|-|-|
| OracleRand | 29.9 |
| OracleRefer | 40.6 |
| VoteNetRand | 10.0 |
| SCRC | 18.7 |
| OneStage | 20.4 |
| VoteNetGRU | 39.5 |
| ScanRefer | 41.2 |
| 3DLLM(BLIP2t5) | 30.3 |
| 3DLLM(BLIP2t5)-new | 35.2 |

> Q7: Resolution

Position embedding: $256^3$. Location tokens: $64^3$ (64 tokens, applied in 3 dims). We could expand the token number (e.g., from 64 to 256) for larger scenes.

> Q8: Wrong grounding output

Grammatically incorrect output is considered wrong, with 0 IoU. 
[1] PointCLIP: Point Cloud Understanding by CLIP
[2] ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
[3] ConceptFusion: Open-set Multimodal 3D Mapping
[4] 3D Concept Learning and Reasoning from Multi-View Images
[5] Evaluating Object Hallucination in Large Vision-Language Models
[6] OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding

We hope that our response has addressed your concerns and turns your assessment to the positive side. If you have more questions, feel free to let us know during the rebuttal window. Thank you!

---

Rebuttal 2: Title: Follow-up on rebuttal Comment: Thank you for your comments! We would like to follow up on whether our response and our additional experiments on pre-trained models, ULIP, hallucination, and captioning baselines have cleared your concerns. We are looking forward to your further comments on these perspectives, and we are more than happy to make further adjustments if necessary! Thanks for your time again!

---

Rebuttal Comment 2.1: Comment: I really appreciate the authors producing such a great number of new experiments during the rebuttal. Most of my concerns are addressed. Though I am still concerned about a true 3D point cloud encoder, we may just leave it for future work. I will raise my score.
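The location-token scheme discussed in Q7 above quantizes each axis of a scene bounding box into 64 bins. Below is a sketch of how a continuous coordinate might map to per-axis tokens; the token format and the clamping at the bounding-box edge are assumptions based on the reply, not the released implementation:

```python
def to_location_tokens(xyz, bbox_min, bbox_max, n_bins=64):
    """Quantize a continuous 3D point into per-axis location tokens."""
    tokens = []
    for x, lo, hi in zip(xyz, bbox_min, bbox_max):
        frac = (x - lo) / (hi - lo)                # normalize into [0, 1]
        idx = min(int(frac * n_bins), n_bins - 1)  # clamp the upper edge
        tokens.append(f"<loc{idx}>")
    return tokens

# For a 50m x 50m x 6m scene (the reviewer's LiDAR example), each x/y bin
# spans ~0.78 m and each z bin ~0.09 m with 64 bins per axis.
toks = to_location_tokens((25.0, 0.0, 6.0), (0, 0, 0), (50, 50, 6))
# toks == ["<loc32>", "<loc0>", "<loc63>"]
```

This also makes the resolution trade-off raised by the reviewer concrete: increasing `n_bins` from 64 to 256 shrinks the x/y bins of the same scene to ~0.2 m at the cost of a larger vocabulary.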
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers' time and effort in reviewing our paper. In addition to the responses to specific reviewers, here we would like to highlight our contributions and the new experiments that we added in the rebuttal.

&nbsp;

**[Our Contributions]**

We are glad to find that the reviewers generally acknowledge our contributions:

* The motivation is reasonable and clear. [KXNR, QPPK]
* The idea of injecting the 3D world into large language models is novel, valid, and functioning. [G3Mn, 6ag4]
* The technical contributions, including data generation, the overall framework, and the experimental analysis, are solid and convincing. The 3D-language dataset will be released. [G3Mn]
* The experimental results cover a wide range of tasks, including 3D captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, and navigation. The results are substantial and solid. [QPPK]
* The organization and writing of the paper are fluent. [KXNR, G3Mn, QPPK]

**[New Experiments]**

In this rebuttal, we have added more supporting experiments to address reviewers' concerns:

* Results using existing pre-trained 3D encoders for 3D-LLMs [6ag4]
* Results on pre-trained 3D-LLMs without finetuning [KXNR, QPPK]
* Comparison with a recent 3D captioning model [KXNR]
* Measurement of hallucination [KXNR]
* New results on ScanRefer [KXNR]
* Comparison among 3D feature generation approaches [G3Mn]
* ScanQA baselines pretrained on 3D-language data [QPPK]

**[Qualitative examples]**

We attach two qualitative examples in the PDF:

* Results on ModelNet and KITTI [KXNR]
* Qualitative examples comparing OpenShape and 3D-LLM on captioning [KXNR]

&nbsp;

We hope our responses below convincingly address all reviewers' concerns. We thank all reviewers for their time and effort again! Pdf: /pdf/cf035eb3c492cd7c333bc1331aa77841c0191678.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Harnessing Hard Mixed Samples with Decoupled Regularizer
Accept (poster)
Summary: This paper proposes Decoupled Softmax (Eq. 4), an interesting improvement to the previous mixup method that mitigates the impact of noise in mixed samples by modifying the loss. Strengths: The proposed idea is simple and effective. The manuscript has a high degree of completeness and is rich in experiments. Weaknesses: I see no obvious disadvantages. There is some related literature that I think is close to the authors' claims: *The Benefits of Mixup for Feature Learning*, which found that modifying the lambda of y in the mixup loss does not significantly affect the performance of the model; and *UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup*, which also found that appropriate modifications can be made to the mixup loss to improve the generalization of the model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no other issues; the paper is written clearly and is easily understood. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our work! The two papers you mentioned are very interesting, but several differences are worth discussing. - Although *The Benefits of Mixup for Feature Learning* argues that different linear interpolation parameters for features and labels can still achieve similar performance, its analysis is still limited to the standard Softmax, ignoring the fact that the semantic information of mixed samples should be greater than "1". If the sum of the weights were set to be greater than 1 and directional (not a completely random linear interpolation), then I suspect the conclusion would likely change. - UMIX proposes a new mixup loss function from an uncertainty perspective, but its need for additional training time and hyperparameters to obtain the importance weights makes the method less convenient. We will discuss them in the related work section of the revised version. Please feel free to ask any other questions!
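As background for the loss-weight discussion above, here is a minimal sketch of the standard mixup interpolation that both papers build on (NumPy; names and shapes are illustrative, not taken from either paper):

```python
import numpy as np

def mixup(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Standard mixup: the same lambda interpolates both inputs and one-hot labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)           # mixing ratio lambda ~ Beta(alpha, alpha)
    x_mix = lam * x_a + (1.0 - lam) * x_b  # mixed input x_(a,b)
    y_mix = lam * y_a + (1.0 - lam) * y_b  # mixed soft label y_(a,b)
    return x_mix, y_mix, lam

# Two toy samples from classes 0 and 2 (C = 3 classes).
x_a, y_a = np.ones(4), np.array([1.0, 0.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 0.0, 1.0])
x_mix, y_mix, lam = mixup(x_a, y_a, x_b, y_b)
# The mixed label always sums to 1 -- the "sum-to-one" constraint that the
# discussion argues understates the semantic content of hard mixed samples.
```

Setting the label weights to sum to more than 1, as speculated above, would amount to replacing `lam` and `1 - lam` in the label line with independent weights.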
Summary: The authors propose a new objective function with a decoupled regularizer, named decoupled mixup (DM), to harness hard mixed samples and mine discriminative features adaptively. This method is applicable to supervised learning and semi-supervised learning. Unlike previous approaches, which propose more complicated dynamic mixup policies with extra computation, the proposed DM can adaptively utilize those hard mixed samples to mine discriminative features without losing the original smoothness of mixup. Strengths: 1. Harnessing hard mixed samples without losing the original smoothness of mixup is an interesting idea. 2. The proposed DM enables static mixup methods to achieve comparable or even better performance than dynamic methods without any extra computation. 3. The authors provide lots of experiments to demonstrate the effectiveness of the proposed method. Weaknesses: 1. The important notations, such as i, j, a, b, are confusing. The authors should follow notational conventions. 2. Section 4.2 is abrupt, with a poor description, confusing notation definitions, and low contextual relevance. 3. Equation (2) is not closely related to the context. Please explain it in detail. 4. Since Table 6 reports the experiments on transfer learning, the authors should describe in the related work that their proposed DM can be adapted to transfer learning. In addition, the title of Section 5.2 should be changed to "Transfer Learning Benchmarks". It might be more reasonable if the authors exchanged Sections 5.2 and 5.3, since the authors pay more attention to semi-supervised learning. 5. It would be better if the authors moved lines 164 - 175 and Figure 3 to Section 5. Figures 5 and 6 have fonts that are too small, and the colors of the lines are hard to distinguish. 6. Some important equations are not numbered, and there should be a ',' linking the equation and the "where" in the same sentence. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors described potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your precious time and great efforts. Your insightful suggestions and professional questions are key to improving the quality of the paper. We will address your questions one by one and make the corresponding changes in the revision. --- ### Answers to questions > 1. The important notations, such as i, j, a, b, are confusing. The authors should follow notational conventions. Thanks for your constructive suggestion for improving the readability; we will make the notation clearer in our revised version. We have in fact followed the well-known mixup method CutMix in defining the notations of samples mixed from two different samples $x_a, x_b$, and we use $i,j$ as indices to access vectors/matrices, which are conventional notations in most machine learning papers. To be more clear, taking the label $y$ as an example: one-hot label $y_a\in\mathbb{R}^C$, mixed label $y_{(a,b)}\in\mathbb{R}^C$. Introducing $i,j$ to access specific entries, $y^i_a\in\mathbb{R}$. Specifically, $$ y_a^k=\begin{cases} 1 & k=i\\ 0 & k\neq i \end{cases} $$ $$ y_{(a,b)}^k=\begin{cases} \lambda & k=i\\ 1-\lambda & k=j\\ 0 & k\neq i,j \end{cases} $$ > 2. Section 4.2 is abrupt, with a poor description, confusing notation definitions, and low contextual relevance. - Section 4.2 is an extension of DM to multi-label classification. Since the manuscript is very compact, some detailed statements were omitted, resulting in poor presentation in this section. To improve the presentation, in an updated version we will simplify Section 3.2 (e.g., merge Equation 3 into 3.1 and reduce the length). 
Then, in Section 4.2, we will add connecting statements to improve the coherence, e.g., "softmax-based models cannot deal with multi-label problems, so how to introduce the DM mechanism in multi-label classification is a question worth considering." To make the notation clearer, the rescaled $\lambda$ is harmonized as $\lambda'$; then, in Line 212, the equation becomes $y_{(a,b)}=\lambda'_a y_a+\lambda'_b y_b$. > 3. Equation (2) is not closely related to the context. Please explain it in detail. - Equation (2) shows that minimizing $L_{MCE}$ is equivalent to a regression task taking $\lambda$ as the label during the optimization. Therefore, this equation is direct evidence that $L_{MCE}$ suppresses prediction confidence. This is the motivation for why we need to propose DM. We will explain this more clearly at the beginning of Section 3.2 in our updated version. > 4. Since Table 6 reports the experiments on transfer learning, the authors should describe in the related work that their proposed DM can be adapted to transfer learning. In addition, the title of Section 5.2 should be changed to "Transfer Learning Benchmarks". It might be more reasonable if the authors exchanged Sections 5.2 and 5.3, since the authors pay more attention to semi-supervised learning. - Thanks for your constructive suggestions again! We will adopt them in the revision: a) enriching the related work on transfer learning; b) changing the title of Section 5.2. > 5. It would be better if the authors moved lines 164 - 175 and Figure 3 to Section 5. Figures 5 and 6 have fonts that are too small, and the colors of the lines are hard to distinguish. - Sure, we will re-organize the paper according to your valuable suggestions. > 6. Some important equations are not numbered, and there should be a ',' linking the equation and the "where" in the same sentence. - Thanks for this careful review; we will fix these in the revision. 
--- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your response and clarifying many details to the questions. I increase the score from 5 to 6.
Summary: This paper introduced a simple strategy, decoupled mixup (DM), to improve the effectiveness of Mixup and its variants. Regarding the softmax result of an image mixed from a pair of classes, one class is removed from the denominator when computing the loss of the other class. The authors provided both theoretical and empirical analyses to confirm that DM has the effect of increasing the confidence of the predicted classes. DM can also be applied in semi-supervised learning and multi-label classification. Extensive experiments have been conducted covering standard image classification, semi-supervised learning, and semi-supervised fine-tuning. Strengths: 1. The work is well motivated with the idea of making confident predictions for Mixup training. 2. The proposed idea is novel and can be combined with existing Mixup variants. 3. DM is proved to be effective on various tasks. Weaknesses: 1. The authors claimed the smoothness of Mixup can be preserved by DM, but I didn't see detailed discussions about this. The idea of DM seems to contradict the smoothness of Mixup or label smoothing. To this end, the mechanism of DM is not totally clear. 2. Why are PuzzleMix and AutoMix not evaluated in image classification tasks? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your great efforts and for these valuable questions and constructive suggestions, which are exactly what the paper needs. We will adopt your suggestions and address your concerns one by one, and all corresponding changes will be reflected in the revision. --- ### Answers to questions >1. The authors claimed the smoothness of Mixup can be preserved by DM, but I didn't see detailed discussions about this. The idea of DM seems to contradict the smoothness of Mixup or label smoothing. To this end, the mechanism of DM is not totally clear. - The final form of $L_{DM(CE)}$ is composed of two parts: $L_{MCE}$ and $L_{DM}$. Our claim in Line 160 is that $L_{DM}$ plays the role of a regularizer that mines hard mixed samples to improve the discriminability of the model. According to Equation 2, we can clearly see that the optimization of $L_{MCE}$ can be regarded as a regression task with $\lambda$ as the target, which brings smoothness to decision boundaries. Therefore, we say that decoupled mixup (DM) has both the property of smoothness and enhanced discrimination at the same time. In a word, the mechanism of DM is that when dealing with mixed samples with information greater than "1" (Lines 135-139), it can break the limit of $L_{MCE}$ to fully utilize the extra information from hard mixed samples. > 2. Why are PuzzleMix and AutoMix not evaluated in image classification tasks? - For efficiency of data augmentation, static methods are still mainly used in the main text. However, as noted in L331, **"View results of dynamic mixups in the Appendix."**, in our supplemental submission we have experimented not only with AutoMix and PuzzleMix, in full accordance with the setup of the main text, but also with other dynamic methods, SaliencyMix and SAMix, as detailed in Tables A2, A3, A4, A5, A6, A7, and A8.
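The two-term loss described above can be sketched for a single mixed sample. This NumPy reconstruction is based on the equations quoted in this thread (the decoupled softmax removes the other mixed class from the denominator, and $\eta$ trades off smoothness against discrimination); it is an illustration, not the authors' implementation:

```python
import numpy as np

def mixup_ce(z, i, j, lam):
    """L_MCE: ordinary mixup cross-entropy on one mixed logit vector z."""
    p = np.exp(z) / np.exp(z).sum()
    return -(lam * np.log(p[i]) + (1.0 - lam) * np.log(p[j]))

def decoupled_regularizer(z, i, j):
    """L_DM: each mixed class's probability is computed with the *other*
    class removed from the softmax denominator, relaxing "sum-to-one"."""
    e = np.exp(z)
    phi_ij = e[i] / (e.sum() - e[j])  # decoupled softmax for class i
    phi_ji = e[j] / (e.sum() - e[i])  # decoupled softmax for class j
    return -(np.log(phi_ij) + np.log(phi_ji))

def dm_ce(z, i, j, lam, eta=0.1):
    """L_DM(CE) = L_MCE + eta * L_DM."""
    return mixup_ce(z, i, j, lam) + eta * decoupled_regularizer(z, i, j)

z = np.array([2.0, 0.1, 1.5])   # logits for a sample mixing classes 0 and 2
loss = dm_ce(z, i=0, j=2, lam=0.6)
# With peaked logits, both decoupled probabilities can approach 1 at once,
# so the model is no longer forced to split its confidence between i and j.
```

Under the standard softmax, the two class probabilities must sum to at most 1; under the decoupled softmax they need not, which is how the regularizer lets confidence grow on both mixed classes simultaneously.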
Summary: The authors point out that while $\textit{dynamic}$ mixup methods are shown to be effective, they induce too much computational cost. To address this issue, they propose a $\textit{static}$ method called Decoupled Mixup, which utilizes the hard mixed samples. The authors suggest that the Softmax function will suppress the model's confidence on hard mixed samples. Based on this idea, the authors propose a decoupled mixup cross-entropy loss which uses a decoupled version of Softmax that eases the "sum-to-one" constraint of Softmax. This loss is then added to the standard mixup loss as a regularization term. Empirically, the authors show that decoupled mixup improves the top-1 accuracy performance beyond some standard mixup methods on a variety of benchmark datasets. They also show that this method can be generalized with good performance to semi-supervised learning. Strengths: 1. The idea of utilizing hard mixed samples from the perspective of the Softmax function is novel and interesting. 2. The theoretical explanation of the effectiveness of the proposed algorithm is solid. 3. The experiments are conducted thoroughly on plenty of tasks, datasets and Mixup methods. 4. Experimental settings are explained in detail, especially the configurations of the hyperparameters of the proposed new method DM, leaving great convenience for future practitioners trying to reproduce the work. 5. The improvement of DM beyond standard Mixup methods is empirically shown to be significant. Also, while there isn't much improvement of DM beyond dynamic Mixup, the saved computational costs are also valuable. Weaknesses: 1. The typesettings of some figures and tables are a bit too dense. 2. A few typos. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Line 72. Should the word "conformation" be "confirmation"? 2. Proposition 1, line 133. What does it mean by saying "$\textit{to regress corresponding}\ \lambda$" in the gradient Equation (2)? 3. Line 144, "... 
for mixed data point $z_{(a,b)}$". Should it be "$x_{(a,b)}$"? 4. Line 147, "$\textbf{Decoupled Softmax}$". The subtitle seems to be misplaced. Probably a layout error. 5. Line 144-147. From my understanding, the text here is to provide the definition of the Softmax function and to introduce $\sigma(\cdot)$ as its denotation. I would simply put it in Section 3.1 before Equation (1), along with the descriptions of all other notations. 6. Line 148-150. The text here is basically telling the same story as line 137-140, that Softmax suppresses the confidence of the model on hard mixed samples, the sum of whose semantic information should be more than $1$. I think they can be combined together, rather than having the idea repeated twice in one paragraph. 7. Equation (4), "$\phi(z_{(a,b)})^{i,j}$". Since it is suggested in the previous text that superscripts denote the index, I think here $j$ can be put as a subscript of $\phi$ as an indication of the function, making the expression clearer. 8. Line 153, "the decoupled Softmax makes all items associated with $\lambda$ becomes $-1$ in gradient". Though the proof of this statement should be straightforward, can you provide it in the Appendix as well, since it's mentioned that "the derivatiopn is given in the A.1" while there is only a proof of Equation 1 in A.1? 9. Line 180, "multi-classification task". Should it be "multi-label classification task" to be more specific? 10. Line 186, "... the unlabeled data with large $\lambda$ ...". Does it mean unlabeled data with large combination weight? i.e. the $(1-\lambda)$ as in "$\hat{x}_{(a,b)}=\lambda{x_a}+(1-\lambda)u_b$" actually? 11. Equation under line 192. The expression in RHS (particularly $z_{(a,b)}$) is not clear enough to indicate that only the labeled part is retained in $\mathcal{L}_{DM}$. 12. Line 220 "threshold $t$" and line 222 "$\xi$". The notations are used interchangeably. 13. Appendix A.1, first line of the equations under line 526. 
Should the minus sign "$-$" here be the equal sign "$=$"? 14. The idea of raising confidence for hard mixed samples is interesting, but conventionally, from the perspective of calibration, one may wish the confidence to not be too large. Will there be a contradiction (or a trade-off) between leveraging hard mixed samples and improving the models' calibration performance? 15. In some tasks (datasets), mixup may not necessarily create hard mixed samples, especially when the type of data doesn't have apparent semantic information, for example, points on a 2D plane. Also, when manifold intrusion occurs, the true label of a mixed sample may differ from both the labels of the pair of real samples. In these cases, DM may provide no significant improvement, or even degrade the performance. Do you have any insight or principle to determine the "level of necessity" or "effectiveness" of applying DM in a given task and dataset? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Not many obvious limitations. The derivation of some theoretical statements is not complete. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your great effort and very constructive comments, which help us improve the manuscript. We will address your questions one by one and make the corresponding changes in the revision. Please note that due to compilation issues, $L$ denotes $\mathcal{L}$ --- ### Answers to questions > 1, 3, 4, 5, 6, 7, 9, 12, and 13, typos and writing suggestions. - Thanks so much for your careful feedback! We'll correct all typos and adopt your suggestions to polish the writing of this paper in the revised version. > 2. Proposition 1, line 133. What does it mean by saying "to regress corresponding $\lambda$" in the gradient Equation (2)? - Here, “to regress corresponding $\lambda$” means that Equation (2) can be regarded as a regression task with $\lambda$ as the target when $i=a$ or $b$, because there are gradients whenever the predicted probability is not exactly equal to $\lambda$. Proposition 1 might be clearer if it became “Assuming $x_{(a,b)}$ is generated from two different classes, minimizing $L_{MCE}$ is equivalent to a regression task taking $\lambda$ as the label”. Besides, this equation is an explanation implying that $L_{MCE}$ suppresses prediction confidence. > 8. Line 153, "the decoupled Softmax makes all items associated with $\lambda$ becomes -1 in gradient". Can you provide it in the Appendix as well? 
- Sure, similarly to A.1 we have: $$ \begin{align*} \big( \nabla_{z_{(a,b)}} L_{DM} \big)^{l} &=\frac{\partial L_{DM}}{\partial z_{(a,b)}^l} = -\frac{\partial}{\partial z_{(a,b)}^l} \Big(y_{[a,b]}^{T}\log\big(H(z_{(a,b)})\big) y_{[a,b]} \Big) \\ &=-\frac{\partial}{\partial z_{(a,b)}^l} \Big( \sum_{i,j=1}^{C} y_a^i \log\Big(\frac{\exp(z_{(a,b)}^i)}{\sum_{k \neq j}^{C}\exp(z_{(a,b)}^k)}\Big) y_b^j+\sum_{i,j=1}^{C} y_a^i \log\Big(\frac{\exp(z_{(a,b)}^j)}{\sum_{k \neq i}^{C}\exp(z_{(a,b)}^k)}\Big) y_b^j \Big) \\ &=-\sum_{i,j=1}^{C} y_{a}^{i}y_{b}^j \frac{\partial}{\partial z_{(a,b)}^l}\Big(\log\Big(\frac{\exp(z_{(a,b)}^i)}{\sum_{k \neq j}^{C}\exp(z_{(a,b)}^k)}\Big) + \log\Big(\frac{\exp(z_{(a,b)}^j)}{\sum_{k \neq i}^{C}\exp(z_{(a,b)}^k)}\Big)\Big) \\ &=-\sum_{i,j=1}^{C} y_{a}^{i}y_{b}^j \Big(\delta_i^l - \frac{\sum_{k \neq j}\exp(z_{(a,b)}^k)\delta_k^l}{\sum_{k \neq j}\exp(z_{(a,b)}^k)} + \delta_j^l - \frac{\sum_{k \neq i}\exp(z_{(a,b)}^k) \delta_k^l}{\sum_{k \neq i}\exp(z_{(a,b)}^k)} \Big) \\ &=\frac{\sum_{k \neq i}\exp(z_{(a,b)}^k)\delta_k^l}{\sum_{k \neq i} \exp(z^k_{(a,b)})}+\frac{\sum_{k \neq j}\exp(z_{(a,b)}^k)\delta_k^l}{\sum_{k \neq j} \exp(z^k_{(a,b)})} - \delta_i^l - \delta_j^l. \end{align*} $$ Thus, for the $L_{DM}$ loss: $$ (\nabla_{z_{(a,b)}} L_{DM})^l= \begin{cases} -1+\frac{\exp(z^i_{(a,b)})}{\sum_{c \neq j} \exp(z^c_{(a,b)})}, & l=i \\ -1+\frac{\exp(z^j_{(a,b)})}{\sum_{c \neq i} \exp(z^c_{(a,b)})}, & l=j \\ \frac{\exp(z^l_{(a,b)})}{\sum_{c \neq i} \exp(z^c_{(a,b)})}+\frac{\exp(z^l_{(a,b)})}{\sum_{c \neq j} \exp(z^c_{(a,b)})}, & l \neq i, j \end{cases} $$ > 10. Line 186, does it mean unlabeled data with large combination weight? - Yes, it does. As we stated in Line 191, we set $\lambda<0.5$ to achieve this result. 
I'm sorry for the confusion; the description here is not rigorous. We will change “large $\lambda$” to “large combination weight” in our revised version. > 11. Equation under line 192. The expression is not clear enough to indicate that only the labeled part is retained in $L_{DM}$. - Given $z_{(a,b)}$, with the (pseudo) label indices of $x_a,u_b$ being $i,j$, we have: $$ \hat{L}_{DM}=\log\Big(\frac{\exp(z^i_{(a,b)})}{\sum_{k\neq j}^C \exp(z^k_{(a,b)})}\Big) $$ This means calculating the prediction of class $i$ after decoupling class $j$. Written in matrix form, this becomes $y_a^T \log(\phi(z_{(a,b)}))y_b$, which means accessing the result for the ground-truth label $y_a$ after decoupling the pseudo-label $y_b$. Thus, we say this formula retains the labeled part. To make it clearer, we will add an intuitive example in the revised version. > 14. Will there be a contradiction (or a trade-off) between leveraging hard mixed samples and improving the models' calibration performance? - This is the fundamental reason why we introduced the hyperparameter $\eta$ in $\mathcal{L}_{DM(CE)}$. This parameter is exactly the trade-off between smoothness (calibration) and discrimination (leveraging hard mixed samples). The values of $\eta$ can be found in Section 5.4(1); it is generally 0.1 by default for static methods and 1.0 for dynamic methods. > 15. Do you have any insight or principle to determine the "level of necessity" or "effectiveness" of applying DM in a given task and dataset? - Although the semantic information is not as centralized in these tasks as in concrete object classification, aggregating dispersed local features can implicitly construct semantic features with sufficient discriminative information, thus also forming implicit hard mixed samples. Therefore, we have conducted experiments with Decoupled Mixup on the Place205 dataset and show that DM is also beneficial for all kinds of mixup algorithms. 
We evaluate the performance gain of DM(CE) on top of various mixup methods based on ResNet-18 on Place205. We follow the settings of the Place205 mixup benchmark in OpenMixup. The results are shown in the following table. We can thus conclude that our proposed DM(CE) loss can also mine hard mixed samples in scene classification tasks. | Methods | $\alpha$ | (MCE) | +DM(CE) | |-------------|----------|-------|---------| | MixUp | 0.2 | 59.33 | +0.38 | | CutMix | 0.2 | 59.21 | +0.49 | | ManifoldMix | 0.2 | 59.46 | +0.36 | | SaliencyMix | 0.2 | 59.50 | +0.27 | | FMix | 0.2 | 59.51 | +0.22 | | ResizeMix | 1 | 59.66 | +0.18 | | PuzzleMix | 1 | 59.62 | +0.19 | | AutoMix | 2 | 59.74 | +0.11 | --- Rebuttal Comment 1.1: Comment: Thank you for the response. Questions addressed. --- Reply to Comment 1.1.1: Comment: Thank you again for your detailed and high-quality review!
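As a sanity check on the closed-form gradient derived in the answer to Q8 above, one can compare it against central finite differences for the single-pair form of $L_{DM}$. The sketch below is illustrative (NumPy; not the authors' code):

```python
import numpy as np

def l_dm(z, i, j):
    """Single-pair L_DM: each mixed class's softmax denominator excludes the other class."""
    e = np.exp(z)
    return -(np.log(e[i] / (e.sum() - e[j])) + np.log(e[j] / (e.sum() - e[i])))

def grad_l_dm(z, i, j):
    """Closed-form gradient, matching the three cases l=i, l=j, l != i,j."""
    e = np.exp(z)
    g = e / (e.sum() - e[i]) + e / (e.sum() - e[j])  # generic l != i, j entries
    g[i] = -1.0 + e[i] / (e.sum() - e[j])
    g[j] = -1.0 + e[j] / (e.sum() - e[i])
    return g

rng = np.random.default_rng(0)
z, i, j, eps = rng.normal(size=5), 1, 3, 1e-6
numeric = np.array([
    (l_dm(z + eps * np.eye(5)[l], i, j) - l_dm(z - eps * np.eye(5)[l], i, j)) / (2 * eps)
    for l in range(5)
])
# numeric and grad_l_dm(z, i, j) agree to finite-difference precision.
```

A useful side observation: the closed-form entries sum to zero over $l$, as expected for the gradient of a loss that depends on logits only through softmax-like ratios.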
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Spatio-Angular Convolutions for Super-resolution in Diffusion MRI
Accept (poster)
Summary: The authors utilize a parametric continuous convolution network to capitalize on the geometry of diffusion MRI. They enhance prior work (PCConv), integrating domain context and global information. They show that they obtain accurate high-resolution dMRI using only sparsely sampled data and demonstrate performance on two downstream clinical tasks. Strengths: Typical CNNs do not utilize the geometry in dMRI, which opens an opportunity. The approach combines existing ideas to seize this opportunity by creating a framework that convolves over both dense grid data and sparsely sampled spherical data concurrently, augmented by context/domain information. I consider analyzing two downstream clinical tasks a considerable strength of the paper. Moreover, the paper is clearly written and provides a proper mathematical description of the method. Weaknesses: While the paper is clearly written, it is sometimes also hard to understand due to the complexity of the medical context and diffusion MRI terminology/jargon. This makes the motivation and approach of the work somewhat vague. I would suggest elaborating a bit more on the structure of the data (i.e. in paragraph 2 of the introduction) and explaining how the series, b-vectors, and multi-shell acquisition are connected and what they mean/measure. Secondly, it only becomes clear quite late that you opt for super-resolution and infer the dMRI values from synthetically under-sampled data. I missed why this is a clinically relevant problem. Is data typically under-sampled in clinical practice? Will it speed up the acquisition? Some clinical context would be helpful. Moreover, I feel that the results vary quite a lot throughout the experiments. Sometimes they are better than the baseline; other times they are not. While the results are what they are, I miss a proper discussion of why this is the case. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
Provide more explanation on the structure of the data. Specifically, in the second paragraph of the introduction, clarify the relationship between the series, b-vector, and multi-shell, including what they measure. 2. Make it clear earlier that the goal is super-resolution, deducing the dMRI values from synthetically under-sampled data. Explain why this is relevant in a clinical context. Is under-sampling common in practice? Would this approach speed up acquisition? 3. Consider adding a more detailed explanation of the baseline methods. It's clear that the methods are convolutional, but how do they manage the angular data? Make it obvious what sets your work apart. 4. Include a thorough discussion of the varying results. While it's understood that results can differ, a deeper analysis could shed light on why the outcomes are sometimes better than the baseline and sometimes not. This would enhance the readers' understanding. E.g., why do the modifications sometimes work but not always? 5. How does the FOD model surpass others in the experiments discussed in Section 3.2? 6. Table 5 / Figure 3: The performance differences appear minor. Do these slight enhancements have an impact in a clinical setting? For instance, could these marginal gains influence patient treatments or clinical outcomes? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no limitations section present in the manuscript. A suggestion for a limitation is that the method was developed on one dataset. There is, however, a very large variation between datasets, institutions, etc., in medical imaging. This also holds for MRI. 
It would be good to show the performance on other datasets, or at least discuss this as a study limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer PJ9o Rebuttal We thank the reviewer for taking the time to review our work and look forward to further discussions. ## Weaknesses We will revise the second paragraph of the introduction to further clarify the specifics of b-vectors and multi-shell acquisition. As this is an application paper in a particularly technical field, it is difficult to avoid MRI terminology and jargon; however, we will revise the introduction of the camera-ready submission to increase clarity for readers unfamiliar with dMRI terminology. The task of super-resolution is clinically relevant because of two factors: 1) MRI acquisitions in clinical settings are time-limited due to their high operating costs and the desire for minimal scanning times to reduce patient discomfort. 2) Many of the downstream analysis techniques, which are prevalent in clinical *research*, require high angular resolution datasets that are typically prohibitive in normal clinical workflows due to the long scanning times they would require. Angular super-resolution aims to mitigate this problem by using models trained on prior data to produce high angular resolution datasets while only *acquiring* low angular resolution data. As a point of clarification: you mention that we are "inferring the dMRI values from synthetically under-sampled data"; however, the dMRI values we infer *from*, i.e. the inputs to the model, are real (measured) data and not synthetically created. ## Questions ### Q.1 "Provide more explanation..." Diffusion MRI (dMRI) measures the degree to which water molecules can diffuse within biological tissue. This acts as a proxy for cell structure, particularly in neuroimaging, where diffusion within white matter is restricted anisotropically, whilst gray matter has isotropic diffusion. Within dMRI you can only measure the degree to which water diffuses in one spatial direction at a time. This direction is known as the b-vector and is a 3D vector on the unit sphere. 
To build up an appropriately informative map of the diffusion profile within the brain, you need to measure the diffusion sensitivity in many different directions. How many directions you acquire is known as the angular resolution and determines how well you can resolve certain tissue structures within the brain. You can also think of a diffusion dataset as acquiring a *series* of 3D volumes, each with a different sensitivity direction. Along with the direction of diffusion sensitivity, you can also vary the strength of the diffusion sensitivity, also known as the b-value. A stronger b-value will result in a larger contrast between an area of high diffusion (in a specific direction) and low diffusion. This is biologically relevant because different tissues (e.g. gray matter, white matter, CSF) have different, non-linear relationships to the b-value. Therefore, using multiple b-values (aka multi-shell acquisition) is key to determining the structure of the measured tissue. ### Q.2 "Make it clear earlier..." We will revise the introduction to make it clearer, early on, that the goal of this task is to infer synthetic dMRI data from under-sampled data. Under-sampling is a very common approach to angular super-resolution. This is generally the case as readily available large datasets, such as the HCP, have high angular resolution data. A typical workflow is then to select a subset of the b-vector and dMRI volume pairs and treat them as the low angular resolution input. Low angular resolution data is under-sampled (as compared to high angular resolution data) and therefore does indeed take less acquisition time. ### Q.3 "Think about adding a more..." We will revise the supplementary appendix to discuss the network architecture of the comparison methods used, making particular note of how this handles the angular component. We will also clarify how the angular components are managed briefly within the main manuscript when introducing the methods within the body of the text. 
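The under-sampling workflow described in the Q.2 answer, selecting a subset of b-vector/volume pairs from a high angular resolution dataset to act as the low angular resolution input, might look as follows; array names and sizes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high angular resolution dMRI dataset: 90 diffusion-sensitising
# directions, each pairing a unit b-vector with a 3D volume.
n_dirs, vol_shape = 90, (16, 16, 16)
volumes = rng.random((n_dirs, *vol_shape))
bvecs = rng.normal(size=(n_dirs, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)  # points on the unit sphere

# Simulate a low angular resolution acquisition by keeping a random subset
# of the measured (b-vector, volume) pairs; the held-out pairs become the
# angular super-resolution regression targets.
n_low = 30
keep = rng.choice(n_dirs, size=n_low, replace=False)
held_out = np.setdiff1d(np.arange(n_dirs), keep)
low_res_vols, low_res_bvecs = volumes[keep], bvecs[keep]
target_vols, target_bvecs = volumes[held_out], bvecs[held_out]
```

Note that, as the rebuttal stresses, the retained "low resolution" inputs are real measured volumes; only the sampling pattern is synthetic.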
### Q.4 "Include a thorough discussion..." We agree with the reviewer that a deeper analysis would be beneficial in understanding how the modifications to the PCCNN affect performance. In particular, we will include an analysis looking at the various combinations of input shell, output shell, and input size. Training and validation loss, as well as preliminary analysis point to the fact that the PCCNN-Bv-Sp performs best for the highest number of combinations. However, why it does not *always* perform best is currently unclear. It could be the case that, due to the complexity of the problem a single model is not able to perform best in all task combinations. This can be further investigated in additional experiments by observing changes in task performance given different random weight and training initialisations, as well as observing how performance varies when models are trained on single task combinations. ### Q.5 "How does the FOD model..." The FOD-Net model likely performs well because of the constraints it imposes on the task of angular super-resolution. Specifically, it restricts the problem formulation to one downstream analysis (FOD). This imposes certain limitations on what the signal could be, as well as smoothing over high-frequency noise. Regressing over this more restricted data type is therefore arguably an easier problem, which in part explains why their model performs very well. An additional contributing factor to the FOD-Net's performance is that it only regresses over the FOD residuals. This again simplifies the problem somewhat as you only need to predict the *difference* between low-res data and high-res data. It is infeasible to predict the residuals when regressing over the raw dMRI data because the possible difference in contrast *in one voxel* between one measured direction and another is as large as the data range itself. This in effect means regressing over residuals likely would not simplify the problem. 
--- Rebuttal Comment 1.1: Comment: I have read the other reviews and point-by-point responses from the authors. I want to mention that the other reviewers noted points that are very important. For instance, the distinct advantages of each model modification were not evident. I appreciate the inclusion of the ablation studies. The evaluation of the performance in one test subject is still quite limited. I also feel that your statement that "why it does not always perform best is currently unclear" is honest, but also worrisome. This needs further exploration. Moreover, I agree that the paper is more an application paper and the methodological contribution is limited. I have a few questions regarding my points. Weaknesses Thanks for addressing the mentioned weaknesses, it is clearer now. I would recommend clarifying these points in the camera-ready version as well. Q4) "This can be further investigated in additional experiments by observing changes in task performance given different random weight and training initialisations, as well as observing how performance varies when models are trained on single task combinations." Is this something you are planning to do in the camera-ready version? And, are you going to include the discussion in your response to Q4 as a point in the manuscript? Q5) Will you discuss this in the manuscript? Q6) I would appreciate it if you could reflect on Q6. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read through the reviews and comments. Whilst the evaluation over one subject is limited compared to the other analyses, given the tight timeframe we were working in, it was all we were able to feasibly provide. Having said that, as mentioned within the bdM9 W.1 response, one test subject still constitutes ~100,000 voxels being inferred. Additionally, given that there is a large degree of heterogeneity between patches within the brain volume, one subject still represents a reasonably wide sample of the data.
Q4) Yes, we will include the mentioned ablation studies and discussion in the camera-ready manuscript. We agree that the current manuscript lacks a thorough enough analysis of the different modifications to the PCCNN and so will amend it to include this. Q5) The rationale behind why the FOD-Net model uses a fundamentally more constrained regime is important context within the analyses. We will therefore include the aforementioned discussion in the manuscript. Q6) Apologies, as we were limited by character count within the original response. See our response to Q6 and Limitations below. We have additionally added the rest of the responses to the other review threads. ### Q.6 The extent to which these enhancements affect real-world clinical workflows, such as patient outcomes, is an important question. Ultimately it is beyond the scope of this research, as this work establishes a methodological framework for angular super-resolution, rather than validating angular super-resolution within pathological workflows. However, this is certainly an area that we wish to expand the research into. To briefly comment quantitatively: if you refer to Figure 1 of the rebuttal PDF, there is a visualisation of the FODs predicted from various models. Here there is a clear visual difference between the model's prediction and the low-resolution input data. This would have direct implications for probabilistic tractography, which is typically used in studies involving [connectomics](https://www.nature.com/articles/nrn3901). These studies look at the connectivity of different regions of the brain across different populations. Therefore we would hypothesise that the application of these models would, for example, increase the validity of analyses done in connectomic studies.
## Limitations We agree with the reviewer on the need for a more comprehensive limitations section, therefore we propose the following revision: Super-resolution within a medical imaging context presents inherent risks including hallucinations or incorrect predictions, particularly when making predictions on out-of-distribution data. Given that this work only includes data from relatively young healthy adults, future work will be needed to validate these methods in a diverse set of diagnostic settings, such as datasets with pathologies, a wide age range, and a variety of acquisition parameters. Modern deep learning frameworks rely on highly optimised subroutines to perform standard discrete convolutions. Consequently, despite having a lower number of parameters, the PCCNN's inference and training times were longer than those of the other methods presented in this work. This is because the PCConv uses a bespoke implementation that does not benefit from the aforementioned subroutines. This highlights the need for future research into more efficient implementations of the PCConv operation.
Summary: The paper presents a learning-based method for q-space interpolation in diffusion MRI. The approach constructs particular convolution operators that lend themselves to the structure of the space to be interpolated, then learns the convolutions from examples. Experiments show the learned interpolation substantially outperforms spherical harmonic interpolation and learned methods that use more generic convolutions less appropriate to the problem. Strengths: The method is well thought through and seems appropriate to the application domain. Results are strong for the specific problem at hand. Weaknesses: Should really use a 3D q-space signal representation as a baseline, e.g. https://pubmed.ncbi.nlm.nih.gov/23587694/. While the work is nicely formulated, it is very niche from the point of view of a machine learning conference like NeurIPS. I struggle to see how the methods developed have broader interest other than to researchers/users of the particular MRI modality under consideration. The authors make no attempt to explain why this would be relevant to NeurIPS rather than an imaging conference. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does this relate to other patch-based super-resolution techniques for diffusion MRI? Well known works on this such as https://pubmed.ncbi.nlm.nih.gov/23791914/ and image-quality transfer (https://pubmed.ncbi.nlm.nih.gov/28263925/, https://pubmed.ncbi.nlm.nih.gov/33039617/) are not mentioned at all, which is a bit worrying. Do the technical contributions have any more general utility beyond the specific application to diffusion MRI? Why might they be of broad interest at a machine learning conference? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Nothing to add Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer bfmq Rebuttal We thank the reviewer for taking the time to provide this review. ## Weaknesses ### W.1 "Should really use a 3D q-space..." There are a myriad of diffusion models that we could have included in this analysis if we had unlimited time and space. For example, *dipy*, the diffusion imaging Python framework, supports [28 different](https://dipy.org/documentation/1.7.0/examples_index/#reconstruction) reconstruction models. We chose the most appropriate analysis methods based on consultation with clinical experts. Further to this point, the [two](https://arxiv.org/abs/2203.15598) other [studies](https://arxiv.org/abs/2106.13188) within the same task did not include MAP in their analysis. ### W.2 "While the work is nicely formulated..." We feel that this work is suited for publication within NeurIPS for three main reasons. Firstly, this work falls under both the "applications" and "neuroscience" categories within the NeurIPS 2023 call for papers. Secondly, there have been diffusion MRI-specific publications within NeurIPS in the past years such as [Caiafa et al.](https://papers.nips.cc/paper_files/paper/2017/hash/ccbd8ca962b80445df1f7f38c57759f0-Abstract.html), [Zheng et al.](https://papers.nips.cc/paper_files/paper/2014/hash/215a71a12769b056c3c32e7299f1c5ed-Abstract.html), [Pasternak et al.](https://papers.nips.cc/paper_files/paper/2005/hash/dfb84a11f431c62436cfb760e30a34fe-Abstract.html) and, most recently, [patch2self](https://papers.nips.cc/paper_files/paper/2020/hash/bc047286b224b7bfa73d4cb02de1238d-Abstract.html) which is included within the methodology of this study, as well as in more general [MRI studies within NeurIPS](https://papers.nips.cc/papers/search?q=MRI). 
Thirdly, whilst this study focuses on a specific application, it uses and extends the more general parametric continuous framework, and will provide a working implementation of the PCConv layer (of which there is none from the original paper) that can be used for any task pertaining to parametric continuous convolutions. ## Questions ### Q.1 "How does this relate to other patch-based..." Due to the brevity required of a conference publication, we decided to limit the scope of the paper to angular super-resolution. [Tanno et al.](https://pubmed.ncbi.nlm.nih.gov/33039617/) is not mentioned as it only performs spatial super-resolution and therefore falls out of the scope of this page-limited publication. As further work, we intend to investigate the proposed PCConv framework in a spatial super-resolution setting, in which case this line of work will be used as a baseline. Similarly, [Coupé et al.](https://pubmed.ncbi.nlm.nih.gov/23791914/) only performs spatial super-resolution within dMRI, and is additionally arguably outdated, having been published over a decade ago. Whilst [Alexander et al.](https://pubmed.ncbi.nlm.nih.gov/28263925/) represents notable work, we felt that this was out of the scope of this conference submission as it pertains to non-deep learning-based methods of regression, namely random forests and linear transforms. Having said that, we feel that this and similar works are worth mentioning as context to the wider problem and will adjust the camera-ready manuscript accordingly. ### Q.2 "Do the technical contributions have any..." Broadly, this work represents the application of a general framework (parametric continuous convolutions) to a new domain (super-resolution in diffusion MRI). By demonstrating the PCConv framework we add to the body of evidence that this framework can be used in a wide array of applications.
To that end, we will also provide code to apply the parametric continuous framework in general, which is of direct interest to the broader machine learning community, as it was not released with the original parametric continuous convolution publication. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses and for considering my feedback and that of the other reviewers. 3D q-space interpolation: I do not agree that all 28 methods listed in dipy are equally appropriate baselines. Ultimately the authors are trying to interpolate in q-space. While many of the methods dipy lists under "Reconstruction" could be used that way, only a few are specifically designed for that purpose. MAP, SHORE and other bases have clear physical motivation as choices of functions that provide appropriate continuous representations for q-space, so are a direct "physics-based" competitor for the proposed learning approach to the same problem. Remit. I do not really see this paper as "neuroscience", although diffusion MRI is a tool sometimes used in neuroscience. However, the authors' response to Q2 gives me some encouragement that this work could be made more appropriate for NeurIPS. Personally, I would have been more supportive if the work had been framed as an advance on the general topic of parametric continuous convolutions with diffusion MRI as an example application (ideally one of several). At the very least some discussion of what other areas the proposed advances might benefit would help this work seem less niche and of broader interest. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. ### "3D q-space interpolation: I do not agree that..." We agree that MAP, and its approximate 1D equivalent SHORE, are particularly relevant reconstruction methods within dMRI. However, to our knowledge, these are multi-shell reconstruction methods, and therefore not applicable to this work.
Specifically, the multi-shell experiments within this work are derived from single-shell data only. Therefore MAP and SHORE are not applicable as **baselines**, as they require multi-shell data to fit the coefficients within their respective models. MAP could be used as a downstream analysis method, similar to how FBA and NODDI are within this manuscript; however, given this is a conference submission, we feel the two single-shell and two multi-shell analyses are sufficient scope for this work. We agree that a more expansive format, such as a journal submission, should include a MAP analysis. ### "Remit. I do not really see this paper as neuroscience..." Naturally, as an application paper, this does not neatly fit into one category. However, we do feel strongly that this paper is an *application* paper within computer vision (line 1 of the call for papers' listed categories) pertaining to *health sciences* broadly (line 6) and, more specifically, given diffusion MRI's wide use in both clinical and research settings, *neuroscience* (line 7). We feel this warrants sufficient relevance to be included within the NeurIPS proceedings. To point to a recent example, the [patch2self paper](https://arxiv.org/abs/2011.01355), published within NeurIPS 2020, is purely a dMRI application paper related to denoising dMRI data by leveraging the unique geometry of the data. We would argue this is as specific as, if not more so than, our work, because we relate to and extend a general (parametric continuous) framework. We will additionally provide a working implementation of the parametric continuous framework, which can be used in other domains such as [scene reconstruction](https://papers.nips.cc/paper_files/paper/2021/hash/46031b3d04dc90994ca317a7c55c4289-Abstract.html) and [neural radiance fields](https://dl.acm.org/doi/abs/10.1145/3503250), that share a similar geometric formulation.
Summary: This paper proposes a parametric continuous convolution (PCConv) framework for Diffusion MRI (dMRI). The PCConv convolves across both spatial and angular dimensions of dMRI data. Meanwhile, the authors introduce a Fourier feature mapping, global coordinates, and domain-specific context into PCConv. Experiments on PCCNN and three variants (PCCNN-Bv, PCCNN-Sp, PCCNN-Bv-Sp) show that the proposed method is competitive with fewer parameters. Strengths: 1. Extensive experiments on both single-shell and multi-shell demonstrate that the proposed PCCNN is competitive with previous methods. 2. This paper is written clearly, and the proposed method is reasonable. Weaknesses: 1. The PCCNN has been proposed in previous work [31]. The authors mainly apply the PCCNN to MRI tasks. Meanwhile, the proposed Factorised Convolutions (Section 2.5) are also designed in [31]. Therefore, the contribution and novelty of this paper are not enough. 2. The authors design four models: PCCNN, PCCNN-Bv, PCCNN-Sp, and PCCNN-Bv-Sp. However, in the comparisons, PCCNN-Bv-Sp combines all additions and does not achieve the best results. The performance of the four models is close to the comparison results. That is, the proposed additional improvements have little effect. The authors should demonstrate the effectiveness of the additional improvements. 3. The paper lacks an ablation study to indicate the effectiveness of each component. For example, the impacts of the angular kernel size k_q and the fixed radius d_max for PCCNN. I think these are some important hyperparameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Line 314, the authors indicate that this method's inference and train times are longer than previous methods. Please give some explanation for this phenomenon, since the model size of PCConv is much smaller than other models. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss its limitations or potential negative societal in a separate section. But some limitations (e.g., running time) are mentioned in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer Kmgz Rebuttal We thank the reviewer for taking the time to provide this review. ## Weaknesses ### W.1 "The PCCNN has been proposed in previous work..." Whilst the PCCNN developed in this study is an extension of the PCConv framework proposed by [Wang et al.](https://arxiv.org/abs/2101.06742), we feel that this work constitutes a noteworthy contribution to the literature. A significant amount of work has gone into applying the framework to the dMRI domain, such as the preprocessing steps, domain expertise in the construction of the network, choice of hyperparameters, and extension of the continuous parametric framework into higher dimensional q-space. Additionally, we provide ablation studies demonstrating optional additions to the network such as the coordinate embedding and incorporation of global information. Further, we include an extensive analysis across several different experiments to demonstrate the generalisability of our methods in different downstream dMRI workflows. ### W.2 "The authors design four models..." Given the limited rebuttal timeframe, we were able to provide the following analysis to demonstrate the benefit of the PCCNN-Bv-Sp modifications. The table below looks at performance in one test subject across input shell, output shell, and input size combinations. For example, one combination would be $(b_{\mathrm{in}} = 1000, b_{\mathrm{out}} = 2000, q_{\mathrm{in}} = 6)$ and another $(b_{\mathrm{in}} = 3000, b_{\mathrm{out}} = 2000, q_{\mathrm{in}} = 10)$. Listed is the count where each model performed best in class (1st), second in class (2nd), etc. This table clearly demonstrates, in this subject, that the PCCNN-Bv-Sp performed best most of the time when looking at all combinations. However, there are some obvious caveats to this. Firstly, due to the limited timeframe we were only able to conduct this analysis within one subject.
Secondly, the individual modifications do not perform better than the unmodified PCCNN. This can be further investigated in additional experiments by observing changes in task performance given different random weight and training initialisations, as well as observing how performance varies when models are trained on single task combinations. | Model | 1st | 2nd | 3rd | 4th | |-------------|-----|-----|-----|-----| | PCCNN | 5 | 18 | 4 | 0 | | PCCNN-Bv | 4 | 3 | 16 | 4 | | PCCNN-Sp | 2 | 2 | 7 | 16 | | PCCNN-Bv-Sp | 16 | 4 | 0 | 7 | ### W.3 "The paper lacks an ablation study..." We have included two additional ablation studies as shown by Table 2 within the rebuttal PDF. Below is a summary of those experiments. #### No B-vector Information Here we compared a PCCNN model trained up to 50,000 iterations in two scenarios: 1) with b-vectors included in the coordinate embedding ("Baseline") and 2) a model with b-vectors **excluded** from the model in all layers ("No b-vector"). The differences in MAE, PSNR, and MSSIM all demonstrate the value of providing this information to the network. Despite only measuring data across one subject, the standard deviation within the main manuscript suggests that this difference in error is highly statistically significant. We will include a more in-depth ablation study demonstrating this within the camera-ready manuscript. #### Varying $d_{\mathrm{max}}$ We compared a PCCNN model trained up to 50,000 iterations with three values of $d_{\mathrm{max}}$ set for all layers within the network: $d_{\mathrm{max}} = 1$ ("Baseline"), $d_{\mathrm{max}} = \frac{\pi}{4}$, and $d_{\mathrm{max}} = \frac{\pi}{8}$. Table 2 demonstrates a clear drop-off in performance across all three error metrics as $d_{\mathrm{max}}$ decreases. This is in line with what we would expect, as a lower $d_{\mathrm{max}}$ corresponds to datapoints with a greater angular distance being excluded from the kernel.
Similarly, we will include a more in-depth ablation study within the camera-ready manuscript. #### Varying $k_{\mathrm{q}}$ Varying $k_{\mathrm{q}}$ would be equivalent to varying the input angular size, and is therefore demonstrated through the three input q-space sizes $(q_{\mathrm{in}} = 6, 10, 20)$ shown throughout the experiments within the main manuscript. Similarly to varying $d_{\mathrm{max}}$, this restricts the number of q-space points that are included within the kernel and effectively reduces the performance as you include fewer and fewer points. ## Questions Whilst the number of parameters within the PCCNN is much smaller than in the other models presented in this work, this difference is primarily due to how the number of parameters within the PCCNN model scales with kernel size. For example, within a standard convolutional layer, the number of parameters depends on the size of the kernel, the number of input channels, and the number of output channels. Conversely, the number of parameters within the PCConv layer does not depend on the kernel size, as each weight within the kernel is sampled from the hypernetwork. Here, the PCConv layer requires more computation than an equivalent convolutional layer due to the computation needed to sample the weights from the hypernetwork. --- Rebuttal Comment 1.1: Comment: ## Limitations We agree with the reviewer's assessment that the description of the limitations should go into more detail and therefore we propose the following revision: Super-resolution within a medical imaging context presents inherent risks including hallucinations or incorrect predictions, particularly when making predictions on out-of-distribution data. Given that this work only includes data from relatively young healthy adults, future work will be needed to validate these methods in a diverse set of diagnostic settings, such as datasets with pathologies, a wide age range, and a variety of acquisition parameters.
Modern deep learning frameworks rely on highly optimised subroutines to perform standard discrete convolutions. Consequently, despite having a lower number of parameters, the PCCNN's inference and training times were longer than those of the other methods presented in this work. This is because the PCConv uses a bespoke implementation that does not benefit from the aforementioned subroutines. This highlights the need for future research into more efficient implementations of the PCConv operation.
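The parameter-scaling argument in the Questions response above can be illustrated with back-of-the-envelope counts. All layer sizes here are assumptions for illustration, not the paper's architecture: the point is only that a discrete kernel's parameters grow with the number of kernel taps, while a hypernetwork-sampled kernel's do not.

```python
# Illustrative parameter counts (all sizes are assumptions, not the paper's).
c_in, c_out = 32, 32
k_sp, k_q = 3, 10   # spatial kernel width, angular neighbours per tap

# Discrete spatio-angular convolution: parameters scale with the
# number of kernel taps (spatial x angular neighbourhood).
taps = k_sp**3 * k_q
discrete_params = c_in * c_out * taps + c_out  # weights + bias

# Hypernetwork-based (PCConv-style) layer: a small MLP maps each
# relative coordinate to a c_in x c_out weight block, so the parameter
# count is independent of the number of taps -- but every tap costs an
# extra MLP forward pass at run time, which is why inference is slower.
coord_dim, hidden = 6, 64
hyper_params = (coord_dim + 1) * hidden + (hidden + 1) * (c_in * c_out)

print(discrete_params, hyper_params)  # the discrete count grows with taps
```

With these assumed sizes, the discrete layer needs several times more parameters than the hypernetwork, yet the hypernetwork must be evaluated once per tap per output location, matching the observed slower train/inference times.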
Summary: The paper, titled "Spatio-Angular Convolutions for Super-resolution in Diffusion MRI", applies the established fully parametric continuous convolution network (PCCNN) to diffusion SR, demonstrating its potential. Strengths: 1. This paper proposes a practical method to apply parametric continuous ConvNets to diffusion SR. Q-space in Diffusion MRI is a great example or application of continuous convolutions. How to leverage the diffusion gradient direction has been an open problem in the community. 2. The authors present analysis on various diffusion coefficients that are clinically useful. Weaknesses: 1. The authors propose a few variations to PCCNN (i.e., BV, SP), however I did not see clear improvements in the results; what are the roles of BV and SP? They are at similar values. 2. The paper needs more comparisons to existing methods, currently only RCNN in many experiments. 3. From my viewpoint, this paper misses an important ablation study: the authors should evaluate the same architecture without the input of diffusion gradient direction and see how it contributes. 4. Needs more visual comparisons, and examples of SR results, very critical. 5. The novelty of this work is quite limited, since the core idea (PCCNN) is adopted from previous work, but application-wise, it's a fair application paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can this work be applied to other diffusion MRI tasks? for example diffusion MRI denoising. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. Minor improvements over existing works.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer Vyq4 Rebuttal We thank the reviewer for their time in providing this review. ## Weaknesses ### W.1 "The authors propose a few variations to PCCNN..." The "Bv" modification increases the coordinate embedding dimensionality by splitting the coordinate $\rho_{j} - \rho_{i}$ into its constituent components $\rho_{j}$ and $\rho_{i}$. The former coordinate only captures the *difference* between the output shell and the input shell. For example, when inferring $b = 2000$ data from $b = 1000$ data this coordinate would be the same as when inferring $b = 3000$ from $b = 2000$, even though the relationship between the dMRI intensity values when comparing different shells is non-linear. This is the intuition behind providing these values separately, as it allows the hypernetwork to learn this non-linear relationship explicitly. The "Sp" modification is the addition of the mean patch coordinate of the data. As each training example constitutes a small patch of the entire image, including the approximate location of the patch w.r.t. the image as a whole (i.e. where the patch lies within the brain) would enable the model to modulate the kernel depending on the approximate environment the data is within. For example, patches in the exterior parts of the image would predominantly consist of gray matter, whereas inner patches would predominantly consist of white matter or CSF. This difference in tissue type would in turn produce very different response functions as you vary the diffusion intensity. Overall these modifications serve to demonstrate the flexibility of the parametric continuous layer in this context. We will provide an additional analysis to further investigate the effects of the modifications for the camera-ready manuscript. ### W.2 "The paper needs more comparisons..." We have included results featuring an additional model comparison, proposed by [Ren et al.](https://arxiv.org/abs/2106.13188), within Figure 2 of the rebuttal PDF.
The figure shows the mean absolute error (MAE) of two models previously included within the main manuscript (RCNN and PCCNN-Bv-Sp) as well as the model proposed by Ren et al., the "Q-space CGAN". The Q-space CGAN was trained with noisy data from the HCP, and given the short rebuttal timeframe we were only able to train two models with noisy HCP data, hence the limited scope of this analysis. However, we will include the Q-space CGAN in all analyses within the camera-ready manuscript. Within these preliminary results, Figure 2 clearly shows a distinctly higher MAE in the Q-space CGAN as compared to the two other models. This suggests a similar trend will be present when applying this method to other experiments. In addition to this, as stated in the response to Reviewer bdM9, we will endeavor to compare our methods against an equivariant dMRI network (see section C.1 of the bdM9 response), as well as dMRI analysis coefficient regression methods (see section C.3 of the bdM9 response) within the camera-ready manuscript. ### W.3 "From my viewpoint, this paper misses an important ablation..." We agree with the reviewer's concern for more ablation studies and have conducted the following experiments, as shown in Table 2 within the rebuttal PDF. #### No B-vector Information Here we compared a PCCNN model trained up to 50,000 iterations in two scenarios: 1) with b-vectors included in the coordinate embedding ("Baseline") and 2) a model with b-vectors **excluded** from the model in all layers ("No b-vector"). The differences in MAE, PSNR, and MSSIM all demonstrate the value of providing this information to the network. Despite only measuring data across one subject, the standard deviation within the main manuscript suggests that this difference in error is highly statistically significant. We will include a more in-depth ablation study demonstrating this within the camera-ready manuscript.
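The coordinate construction underlying the "No b-vector" ablation can be sketched as follows. This is a minimal illustration under stated assumptions: the function and variable names are hypothetical, and the paper's actual embedding (e.g. with Fourier features applied afterwards) may differ.

```python
import numpy as np

def relative_coordinate(xyz_i, xyz_j, bvec_i, bvec_j, use_bvecs=True):
    """Build the relative coordinate fed to the hypernetwork (sketch)."""
    spatial = xyz_j - xyz_i                    # 3D spatial offset
    if not use_bvecs:
        return spatial                         # "No b-vector" ablation: 3D only
    angular = bvec_j - bvec_i                  # relative diffusion direction
    return np.concatenate([spatial, angular])  # full spatio-angular coordinate

xyz_i, xyz_j = np.zeros(3), np.array([1.0, 0.0, 0.0])
b_i, b_j = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])

full = relative_coordinate(xyz_i, xyz_j, b_i, b_j)                       # 6D input
ablated = relative_coordinate(xyz_i, xyz_j, b_i, b_j, use_bvecs=False)   # 3D input
```

The ablation then amounts to training the same network with the 3D coordinate in place of the 6D one, so any performance gap is attributable to the angular information.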
#### Varying $d_{\mathrm{max}}$ We compared a PCCNN model trained up to 50,000 iterations with three values of $d_{\mathrm{max}}$ set for all layers within the network: $d_{\mathrm{max}} = 1$ ("Baseline"), $d_{\mathrm{max}} = \frac{\pi}{4}$, and $d_{\mathrm{max}} = \frac{\pi}{8}$. Table 2 demonstrates a clear drop-off in performance across all three error metrics as $d_{\mathrm{max}}$ decreases. This is in line with what we would expect, as a lower $d_{\mathrm{max}}$ corresponds to datapoints with a greater angular distance being excluded from the kernel. Similarly, we will include a more in-depth ablation study within the camera-ready manuscript. ### W.4 "Needs more visual comparisons..." We have provided two new figures within the rebuttal PDF. Figure 1 shows visualisations of the fibre orientation distribution (FOD) maps in a crossing fibre region within one subject, across various models. Here it can be seen that the angular super-resolution models improve the qualitative fidelity of the FODs, bringing them closer to the high-resolution baseline. This visualisation helps to demonstrate the value that these super-resolution models would have for applications such as tractography, which uses FODs to determine streamlines. Figure 2 is discussed in comment W.2 of this response and pertains to the MAE in an additional model, the "Q-space CGAN". We will expand this figure within the camera-ready manuscript to include more models and are happy to include other figures and visualisations in addition to this. --- Rebuttal Comment 1.1: Comment: ### W.5 "The novelty of this work is quite limited..." Whilst the PCCNN developed in this study is an extension of the PCConv framework proposed by [Wang et al.](https://arxiv.org/abs/2101.06742), we feel that this work constitutes a noteworthy contribution to the literature.
A significant amount of work has gone into applying the framework to the dMRI domain, such as the preprocessing steps, domain expertise in the construction of the network, choice of hyperparameters, and extension of the continuous parametric framework into higher dimensional q-space. Additionally, we provide ablation studies demonstrating optional additions to the network such as the coordinate embedding and incorporation of global information. Further, we include an extensive analysis across several different experiments to demonstrate the generalisability of our methods in different downstream dMRI workflows. ## Questions ### Q.1 "Can this work be applied to other diffusion MRI tasks?..." Yes, this absolutely can be applied to other diffusion MRI tasks. For example, the network could be used as-is on a denoising task. The only difference would be the input and target data, as they would need to be replaced with noisy and denoised data respectively. For other tasks such as segmentation or classification, modifications to the network hyperparameters (such as the dimensionality of layers) may need adjusting to accommodate these new tasks, but the PCConv building blocks would remain the same.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for taking the time to review our work in great detail. We look forward to further discussions. The notable additions we have made during this rebuttal period include: - A figure that visualises FODs reconstructed from different angular super-resolution models. This helps to qualitatively demonstrate the effects of applying the methods to this particular analysis. - A figure with a new comparable dMRI angular super-resolution model (Q-space CGAN). In this figure, we demonstrate the difference in mean absolute error in an axial slice of a subject. In particular, we show that our method distinctly outperforms the Q-space CGAN. - A table that demonstrates two additional ablation studies: one pertaining to the removal of b-vector information entirely from the network, the other looking at the effect of varying the hyperparameter $d_{\mathrm{max}}$. - A table displaying the PSNR values for one of the experiments. - Given the limited rebuttal period, we were unable to conduct extensive analyses. However, we have detailed further analyses that we will pursue for the camera-ready manuscript. The additional figures and tables can be found in the rebuttal PDF. Pdf: /pdf/55ad565689907294e7243261903bde67a0699e23.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: **Background for ML audiences**: Multi-shell diffusion MRI (dMRI) is a 6D (3 dims of space + 2 angular dimensions + 1 radial dimension) imaging modality where each 3D voxel contains (potentially concentric) spherical signals which correlate with local white matter properties within the brain. In research settings, each voxel-sphere is sampled densely to recover underlying white matter pathways, but clinical settings often acquire only a few angles for speed in critical settings (e.g. stroke monitoring). **Summary**: Submission 13443 presents an architecture for processing dMRI images and aims to perform angular super-resolution. To do so, it applies existing parametric continuous convolutional nets (i.e. continuous (x,y,z,r,phi,theta)-location conditioned networks) and adds better positional embeddings and positional conditioning. Experimentally, it evaluates angular super-resolution performance by means of plain predicted image-based performance and downstream analysis such as NODDI parameter fitting. Strengths: - To my knowledge, fully continuous convolutions for dMRI processing has not been attempted before and the idea is promising due to the extremely nonlinear nature of the 5D spatial and angular images. - The submission performs a reasonably thorough ablation of its various moving components. - The analysis of downstream applications of dMRI (fixel analysis, NODDI fitting) is well considered as an evaluation strategy of angular super-resolution. Weaknesses: ### (A) Inadequate evaluation with lower methodological contributions: The proposed framework is an application of continuous convolutions to dMRI images alongside combinations with factorized convolutions and Fourier features, the latter of which has been used extensively with coordinate-based hypernetworks and equivalently for [continuous convolutions](https://arxiv.org/abs/2102.02611). In my opinion, this is fine as dMRI processing is a non-trivial and non-traditional application. 
However, the presented experiments require significant improvements in depth to make this argument. For example, - Only ~**40 subjects** were used for training/validation/testing with final evaluation only performed on 8 subjects. However, the publicly and freely accessible dataset used in this paper (WU-Minn HCP) has diffusion MRI from 1065 subjects. Why was the entire dataset (or at least a much larger portion) not used? In my opinion, this is a non-trivial limitation as this sample size is small and the evaluation on 8 subjects only reveals no clear results trend amongst all of the various ablations proposed. - The paper claims to improve PCCNN networks in performance and computational efficiency with the addition of factorized convolutions and Fourier feature embeddings. However, the gains specific to factorized convolutions and FF embeddings do not appear to be quantified in an **ablation study**, please do so as they are central claims. - There is no mention of how the **baselines** were tuned for these experiments. For example, was the degree of the SH interpolation tuned on validation data? ### (B) Potentially suboptimal modeling choices: - Section 2.4 introduces “**global conditioning**” by means of concatenating coordinates of a reference neuroimaging template. However, this is quite confusing as dMRI are routinely *not* registered to a template space as that requires non-trivial reorientation of each voxel-sphere after registration, so I do not see why this would be helpful and the ablation results are inconsistent and hard to interpret. Further, it is unclear what is technically meant by L159. Please clarify these aspects. - It is unclear and unstated why the proposed framework uses **rotationally-invariant kernels**. The main advantage of the proposed work over previous attempts (listed below) is that the hypernetwork can be quite expressive for highly nonlinear data, but rotationally-invariant kernels reduce this to a significant extent. 
Why was this choice made? - The paper performs **zero-filling** of missing angles in its experiments (L208) which in my opinion is not well motivated as 0 diffusivity is a physical value and not a masked missing value to be estimated. As continuous convolutions naturally allow for handling missing values, I do not see why zero-filling is necessary, please clarify. - All experiments undergo **denoising** via Patch2Self which would typically significantly reduce the high-frequency content of the sparsely-sampled image which could potentially be useful for nonlinear neural networks. Why is this preprocessing used? - As the proposed method is not roto-translation equivariant by design (as in other dMRI processing networks), it is largely data-driven. However, only 27 training images are used and there is no mention of data augmentation which would help with generalization. Was **data augmentation** not used? ### (C) Missing existing work on this topic: Currently, most experiments in the paper use 1-2 baseline methods as comparisons, which would be fine if the submission was tackling a recently-formed application. However, there is extensive work on topics relevant to the submission such as dMRI convolutions, angular super-resolution, and regressing microstructural coefficients which are detailed below. It would improve the paper for it to contextualize its contributions w.r.t. existing work and (if possible) add one or more of the most relevant baselines to each experiment. #### w.r.t. dMRI convolution and/or q-space processing: The current version of this paper presents convolutions for dMRI data as a fundamentally new and unaddressed problem. However, there are several lines of existing work that tackle developing convolutions for dMRI exactly. 
This includes equivariant and geometrically-motivated work: - Using tensor field networks: https://arxiv.org/abs/2102.06942 - Using separable kernels: https://openreview.net/forum?id=7S1l2zzUZFI - Using equivariant graph convolutions: https://arxiv.org/abs/2304.06103 - Using manifold-valued convolutions: https://link.springer.com/chapter/10.1007/978-3-030-78191-0_24 - And for completeness, there exist ad-hoc methods which concatenate neighboring voxel-spheres channel-wise such as https://hal.science/hal-02946371/document. In my opinion, it would be good to acknowledge and discuss the advantages and disadvantages of the proposed framework w.r.t. one or more of these methods. For example, - A potential disadvantage: the proposed method is not equivariant to the underlying symmetries of the data and is purely data-driven. - A potential advantage: the above methods cannot directly produce outputs at angular coordinates not in the inputs without interpolation and/or model-fits, whereas the proposed method can as it uses a hypernetwork. #### w.r.t. dMRI super-resolution DMRI angular super-resolution has had several previous attempts in the literature that could be discussed and/or compared against when possible. For example, [DeepDTI](https://www.sciencedirect.com/science/article/pii/S1053811920305036) uses 6 angular samples (w/ universally co-acquired structural images) to super-resolve the diffusion tensor. Further, while the paper uses continuous convolutions via hypernetworks both spatially and spherically, the continuous mechanism is not used spatially as all inputs and outputs lie on a regular spatial grid. As a result, it is similar to [this paper](https://arxiv.org/abs/2106.13188) which also uses a q-space-conditioned hypernetwork with spatial gridded convolutions for dMRI super-resolution. Lastly, while SH interpolation is compared against as a non-deep learning baseline, one would typically use SHORE interpolation in practice. #### w.r.t. 
regressing dMRI coefficients: There are some works that directly regress dMRI scalars (such as NODDI in this paper) which would also be good to discuss and/or compare against if possible such as: - https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.27568 - https://link.springer.com/chapter/10.1007/978-3-031-16431-6_15 - https://arxiv.org/abs/2207.00572 - https://link.springer.com/chapter/10.1007/978-3-031-16431-6_11 ### Minor comments - Why is the related work in the methods section? That choice seems out of place. - Please add PSNR and SSIM as evaluation measures on top of AE. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: I have merged my questions and suggestions with the weaknesses (mainly weaknesses A and B), please see above. If the rebuttal adequately addresses weaknesses A and B I would be happy to raise my score. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The paper does not list any limitations beyond long compute times. In my opinion, this is insufficient as angular super-resolution in biomedical applications is a high-risk application as it may introduce hallucinations or incorrectly predict angular samples for unseen populations. Further, the paper only studies 40 “normal” relatively-young adults, whereas angular super-resolution would be most beneficial in time-sensitive clinical settings with lesions and strokes. These limitations should be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
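As an aside on the minor comment above requesting PSNR: PSNR is the standard measure derived from the mean squared error and the data's dynamic range. A minimal sketch:

```python
import math

def psnr(mse, data_range):
    # Peak signal-to-noise ratio in decibels, given the mean squared error
    # of a reconstruction and the maximum possible signal value.
    return 10 * math.log10(data_range ** 2 / mse)
```

For floating-point dMRI signals the data range is typically taken per-volume or per-dataset rather than assumed to be 255 as for 8-bit images.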
Rebuttal 1: Rebuttal: # Reviewer bdM9 Rebuttal Thank you for taking the time to provide such a robust and thorough review, it is greatly appreciated. ## Weaknesses ### A.1 "Only ~40 subjects were used for training/validation/testing..." Whilst only ~40 subjects were used in total, each subject constitutes a lot of data. For example during training, each subject is split up into patches of dimensions $(10, 10, 10, q_{\mathrm{in}} + q_{\mathrm{out}})$ where $q_{\mathrm{in}}$ and $q_{\mathrm{out}}$ form the input and target set respectively. Overall, each of the 27 training images provided on the order of ~4000 unique training examples, or ~100,000 across all subjects. In addition to this, each training example will contain on the order of 20,000 voxels. The evaluation statistics for the 8 test subjects span ~600,000,000 voxels in total, depending on the experiment. Given that each datapoint within each table in the manuscript is computed from 600M voxels, increasing the test dataset size would drastically increase the test evaluation computation time. Similarly, increasing the training dataset size to the order of hundreds of subjects would dramatically increase training time. Other deep learning-based studies have used a similar number of subjects from the HCP, for example; [Ren et al.](https://arxiv.org/abs/2106.13188) used 9 for training and 9 for test, [Alexander et al](https://www.sciencedirect.com/science/article/pii/S1053811917302008) used 8 for training and 8 for test, and [Ye et al](https://link.springer.com/chapter/10.1007/978-3-030-32248-9_65) used 5 for training and 20 for test. ### A.2 "The paper claims to improve PCCNN networks..." Performing an ablation study by removing the factorised convolutions would be technically difficult due to the memory limit implications it would pose. 
Specifically, the current model(s) would require too much memory to fit into our available GPU hardware, and therefore a significantly smaller model would be required to be able to test this. The high memory requirements of the model are due to the relatively high dimensionality of the data, and therefore kernels, involved. For example, a typical 2D natural image convolutional kernel would involve the multiplication of $(3 \times 3) = 9$ weights per input channel per output channel. Conversely, our non-pointwise non-factorised kernel would use $(3 \times 3 \times 3 \times 20) = 540$ weights per input channel per output channel. Whilst we did not have time to conduct the ablation study that detailed the effects of removing the Fourier features, we were able to run small-scale ablations asked by reviewers Kmgz and Vyq4. These ablations look at removing b-vector information and varying $d_{\mathrm{max}}$ within the PCCNN network. We refer the reader to the rebuttals of these two reviewers for a more detailed discussion of those results. We will of course provide an ablation study of the Fourier features for the camera-ready submission. ### A.3 "There is no mention of how the baselines were tuned..." Details, such as data and training regime, for the FOD-Net and SR-q-DL model comparisons were listed in the Appendix and referred to on line 211. Details of the RCNN model comparison were omitted as they did not differ from the PCCNN models. However, we will amend the Appendix to explicitly state this for clarity. We will also include the spherical harmonic interpolation procedure, and amend the manuscript to mention this. See below for an example of the amendment we will make. #### RCNN Data and Training Training parameters, the hardware used, and the data sampling scheme used to train the RCNN were the same as outlined within Section 2.7. Model training took approximately 3 days. 
#### Spherical Harmonic Interpolation Spherical harmonic interpolation of single-shell dMRI data followed the same procedure as outlined in [Lyon et al.](https://arxiv.org/abs/2203.15598). Briefly, SH coefficients for each spatial voxel within the low angular resolution dataset were fit using the pseudo-inverse least squares method with a spherical harmonic order of 2. These derived SH coefficients were then used to calculate the interpolated spatial voxels for the target b-vectors. ### B.1 "Section 2.4 introduces global conditioning...” Indeed dMRI are not routinely registered to template space due to the non-trivial reorientation. However, only the affine transformation, which relates voxel space to template space, is required to obtain the coordinates of patches within template space. Therefore a workflow would look like this: 1) register your dMRI to a dMRI template space. 2) Throw away the registered dMRI data whilst keeping its affine transformation. 3) Use that affine transformation to calculate the template space coordinates. Including this coordinate is hypothesised to be helpful as it provides the model with extra context for where the input (patch) data is located within the whole image/brain, similar to how ViTs encode patch order via positional encoding. L159 describes how the patch-wise spatial coordinate is normalised to a range $[0, 1]$, via the brain mask. For example, a patch located in the inferior region of the brain would have a z-component of close to 0. ### B.2 "It is unclear and unstated why the proposed framework..." In preliminary experiments, we found that restricting the kernel to be rotationally invariant yielded better training results and therefore included this in the final model(s). Whilst we did not have time within the review period to do so, we will provide an ablation study within the camera-ready manuscript to demonstrate these results. --- Rebuttal Comment 1.1: Comment: ### B.3 "The paper performs zero-filling..."
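The order-2 SH interpolation procedure described above can be sketched as follows (single-shell, per-voxel; the real even-order SH basis below uses one common normalisation convention and is an illustrative assumption, not the referenced implementation):

```python
import numpy as np

def real_sh_basis(theta, phi):
    # Real even-order spherical harmonics up to l=2 (6 basis functions),
    # the basis used for an order-2 SH fit of single-shell dMRI signals.
    st, ct = np.sin(theta), np.cos(theta)
    return np.stack([
        0.5 / np.sqrt(np.pi) * np.ones_like(theta),       # l=0
        np.sqrt(15 / (16 * np.pi)) * st**2 * np.sin(2 * phi),  # l=2, m=-2
        np.sqrt(15 / (4 * np.pi)) * st * ct * np.sin(phi),     # l=2, m=-1
        np.sqrt(5 / (16 * np.pi)) * (3 * ct**2 - 1),           # l=2, m=0
        np.sqrt(15 / (4 * np.pi)) * st * ct * np.cos(phi),     # l=2, m=+1
        np.sqrt(15 / (16 * np.pi)) * st**2 * np.cos(2 * phi),  # l=2, m=+2
    ], axis=-1)

def sh_interpolate(signal, theta_in, phi_in, theta_out, phi_out):
    # Fit SH coefficients by pseudo-inverse least squares on the measured
    # directions, then evaluate the fit at the target b-vector directions.
    coeffs = np.linalg.pinv(real_sh_basis(theta_in, phi_in)) @ signal
    return real_sh_basis(theta_out, phi_out) @ coeffs
```

In practice this fit is applied independently to every spatial voxel of the low angular resolution dataset.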
Zero-filling is necessary from an implementation standpoint because, while the continuous framework naturally handles missing values, the data are stored and computed with dense tensors. Specifically, during training, each batch will have training examples with various non-empty angular dimension sizes contained within one dense tensor. The minimum size of the tensor in this dimension should be the largest angular dimension size possible and have zero filled values where data are not present. Importantly though, this is not equivalent to treating the data as b0 volumes. Firstly, the zero-filled values are masked such that none of these empty values contributes to the gradient update. Secondly, whilst b0 volumes are not included in this study, their inclusion would be marked by the $\rho_{i}$ or $\rho_{j}$ component within the $\mathbf{p}$ vector. ### B.4 "All experiments undergo denoising..." Denoising is an important step in the diffusion processing pipeline, and given that the method demonstrated superior denoising capabilities compared to others (as demonstrated within the patch2self [paper](https://arxiv.org/abs/2011.01355)), we opted to use it as part of the preprocessing for this study. Additionally, the formulation of patch2self lends quite naturally to angular super-resolution. In essence, patch2self removes noise from a given 3D dMRI volume $v_{j} \in \{v_{1}, \ldots, v_{N} \}$ by posing it as a prediction problem. Linear regression is used to predict $v_{j}$ by using all other $N -1$ volumes as input. This process removes noise by using regression coefficients that are calculated via the averaging across all voxels within a given 3D dMRI input volume $v_{j}$. Assuming noise is uncorrelated, the best prediction an angular super-resolution model could yield would be equal to this mean value predicted by the denoiser. 
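The patch2self formulation summarised above — predicting each volume from the other $N-1$ via linear regression — can be sketched as follows (a simplified volume-wise sketch; the actual method regresses over local patches rather than single voxel values):

```python
import numpy as np

def patch2self_sketch(volumes):
    # volumes: (N, n_voxels) flattened 3D dMRI volumes.
    # Denoise each volume j by regressing it on the other N-1 volumes;
    # one set of regression coefficients is shared across all voxels of
    # the target volume, which averages out uncorrelated noise.
    N = volumes.shape[0]
    denoised = np.empty_like(volumes)
    for j in range(N):
        X = np.delete(volumes, j, axis=0).T   # (n_voxels, N-1)
        coef, *_ = np.linalg.lstsq(X, volumes[j], rcond=None)
        denoised[j] = X @ coef
    return denoised
```

Because the prediction for volume $j$ never sees volume $j$ itself, uncorrelated noise in $j$ cannot be reproduced, which is the self-supervised denoising principle referred to above.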
Having said that, patch2self, or indeed any denoising step, is not *necessary* in this application and the model(s) could be trained with noisy data. Indeed we have the RCNN and PCCNN-Bv-Sp models trained on noisy data, results from which are included in Figure 2 of the rebuttal PDF. Whilst we hypothesise the trends across models and experiments will be the same, we will include an ablation experiment within the camera-ready submission demonstrating this. ### B.5 "As the proposed method is not roto-translation equivariant..." Whilst only 27 training images are used, as stated previously, the training dataset constitutes approximately 100,000 unique training examples. This was deemed a reasonable amount of data to use and is consistent with other similar studies. Additionally, augmentation was not used due to the added complexity that this would yield when dealing with dMRI data. Specifically, as you previously pointed out, deformations are not routinely used in dMRI processing due to the non-trivial effects they would impose on each voxel-sphere. Therefore it is not immediately clear which augmentations of the data would remain valid whilst maintaining a realistic dataset. --- Reply to Comment 1.1.1: Comment: ### C.1 "w.r.t. dMRI convolution and/or q-space processing" It was not our intention to present convolutions for dMRI as a fundamentally new problem. As you have highlighted there has been excellent work done within the geometric deep learning field on incorporating the mathematical frameworks of equivariant networks into dMRI deep learning. Whilst, to our knowledge, there are no works that use equivariant networks within angular super-resolution, it would nonetheless be a good choice to develop a network composed of such layers to serve as a comparison to our method. Good candidates for this would be derived from either [Muller et al.](https://arxiv.org/abs/2102.06942) or [Elaldi et al.](https://arxiv.org/abs/2304.06103).
Whilst we are unable to produce this during the rebuttal timeframe, this is something we will do for the camera-ready submission. In the meantime, we can certainly speak to some advantages and disadvantages of the equivariant methods. Regarding advantages, these equivariant dMRI frameworks lend naturally to the geometry present within the data and enforce appropriate symmetries, such as rotational and translational, that guarantee validity given transforms within that domain. Additionally, because of these equivariances, roto-translational transformations are explicitly tied into the model and therefore do not need to be learnt via examples or augmentation within the dataset. This generally allows for these methods to be more appropriate in low-data domains. As you pointed out, the formulations do not come with an out-of-the-box way to produce outputs at specific angular coordinates. In this respect, there is less flexibility regarding coordinate system choice. On that note, the continuous convolution framework is much less rigid when introducing additional coordinate systems. Equivariant networks are limited regarding network choices, as certain non-linearities break the guarantee of equivariance, whereas our method can be used in conjunction with any non-linearity or sequence of layers. --- Rebuttal 2: Comment: Dear Reviewer bdM9, The author-reviewer discussion is closed on Aug 21st 1pm EDT, could you please read the rebuttal and give your final rating? Thanks so much! Best, AC --- Rebuttal 3: Title: Rebuttal response Comment: Thank you for the extensive rebuttal and my apologies for the delay in response. It admirably addresses most points of concern raised in my review and I am raising my score from a 'borderline reject' to a 'borderline accept'. My score is not higher primarily due to limited evaluation concerns arising from weakness A1 i.e. only using 8 test subjects. 
While I agree that several comparable works also use limited sample sizes for evaluation, they typically have some signal amongst the results. There appears to be no clearly discernible pattern amongst the ablations in the primary results presented in the tables of the main text across the various modes of evaluation. I speculate that this is due to a too small sample set for effects to be clear. Cross-validation instead of a single static split might also reveal more interpretable results, but that would require large-scale reexperimentation that is infeasible for a rebuttal period. In fairness, for readers unfamiliar with its details, dMRI processing and preprocessing is severely time and compute intensive to a much greater extent than traditional volumetric data. It is thus understandable that a single static split with 8 evaluation subjects was used in the current submission. Nevertheless, I am uncertain as to what the quantitative takeaway is from the current version of the results. I am open to further discussion. --- Rebuttal Comment 3.1: Comment: Thank you for revising the score in light of the continued discussion. We agree that the current analyses do not present a definitive signal with regards to the relative benefits of the PCCNN modifications. However, the modifications primarily serve to demonstrate the *flexibility* of the parametric continuous framework in incorporating additional coordinate information. The focus of the paper is how the PCCNN performs compared to baselines within this task. Here, the PCCNN family of models clearly outperform other comparable models, and this point is further demonstrated by the inclusion of the q-space CGAN as can be seen in Figure 2 of the rebuttal PDF.
On Measuring Fairness in Generative Models
Accept (poster)
Summary: The paper considers the problem of measuring fairness in generative models. In particular, the paper has two main contributions: (1) they have produced a hand-labeled sensitive attribute (SA) dataset for various SOTA generative models; and (2) they have proposed a method for estimating the expected sensitive attribute distribution which utilizes the error rates of the SA classifier. These two contributions are used to show that SA classifiers with low error can still cause high errors in previous methods of approximating the sensitive attribute distribution. Strengths: - The paper presents a strong empirical study of measuring fairness in generative models, and outlines flaws in prior studies. - CLEAM seems to be an intuitive correction to the naive baseline method. Weaknesses: 1. One weakness of the paper is a hole in the narrative: how is $C_{u}$ obtained? In particular, it is unclear in the main text how one would generate a SA classifier. Furthermore, the assumption of knowing the underlying error rates should be discussed as a potential limitation. I assume that the SA classifier is trained on data which are not samples from the generative model one is trying to measure the fairness of (as otherwise we already have labels). In this case, the validity of error rates transferring might be questionable. 2. The notation in the paper is somewhat strange. In particular, as far as I can see, $ Pr(u \mid x )$ is not a probability … despite the notation. And it is further “aliased” as $ C_u(x)$ which is additionally confusing. I don’t see why one could not just define the “argmax classification” as $C_u(x)$ directly. 3. The notation of $\hat{p}$ and $p^{\*}$ seems inconsistent to me. In Section 2, it seems that $p^{\*}$ is the population statistic. However, in Section 3 $p^{\*}$ becomes the estimate generated from GenData. Yes, one could argue that GenData somewhat becomes the “new” population, but this change in perspective is not clear.
I think it makes more sense to think of the $p^\*$ in this Section as a high quality estimate of the true statistic. It may be worth changing notation to reflect the 3 possible $p$’s: that generated by the SA-classifier, that generated by GenData, and the unknown true statistic which is being approximated by the former two. Typos: - Appendix Eq (3) LHS Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Relating to “Weakness 1”, how exactly are the SA classifiers typically obtained? 2. Further relating to “Weakness 1”, how well do the error rates of the SA classifier transfer? That is, what is the difference between the SA classifier’s error rates with the dataset it was trained on versus the error rates on GenData (which I am assuming is that of Table 1 & 2)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I think the assumption of knowing the underlying error rate / accuracy needs to further discussed (see Questions) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: "how is $C_u$ obtained?" **A1**: Thanks for your comment and we apologize if it was unclear. As discussed in the main paper (Sec.3), to obtain $C_u$, we strictly follow previous work (e.g., Imp-Weighting [1] and fairTL [2]). Particularly, we train the SA classifiers using the labeled datasets w.r.t. SAs based on a standard training procedure (e.g., train a ResNet-18 as SA classifier on CelebA dataset considering BlackHair as SA). To follow previous work, we train various SA classifiers on different sensitive attributes. We remark that as mentioned in the main paper (Sec.3) we defer the details of training SA classifier to Supp. F, due to lack of space. In addition, as mentioned in the main paper (Sec. 3), we utilize CLIP as an additional SA classifier to explore zero-shot SA classification. The details of utilizing CLIP are included in Sec. E: after encoding images using the image encoder, we define two related text prompts for our SA following the guidelines in CLIP [6], and encode these text prompts using the text encoder. Then the cosine similarity between the encoded image and the two encoded text prompts is used to obtain the output. $ $ >**Q2**: “Furthermore, the assumption of knowing the underlying error rates should be discussed as a potential limitation. I assume that the SA classifier is trained on data which are not samples from the generative model one is trying to measure the fairness of (as otherwise we already have labels). In this case, the validity of error rates transferring might be questionable.” **A2**: Thank you for your comment, the Reviewer’s understanding is correct: SA classifiers are trained on **real data**, and they are not samples from the generative model on which fairness measurement is performed. However, in our work we have addressed the validity of error rate transferring. In Supp.
D.7, we have validated that the error rate on real validation data can be transferred to the setups where we measure the attributes of the generated data. The results in Tab. 15 of Supp. D.7 show that for two GANs used in our study (StyleGAN and StyleSwin), the obtained error rate ($\alpha$) from real validation data (real CelebA-HQ data) is similar to the error rate obtained from our labeled dataset of generated images. Results in Tab. 16 show that, in our proposed CLEAM, using error rate $\alpha$ from real validation data has similar performance to that using the error rate from generated images. Similar results are shown for Stable Diffusion in Tab. 17. We emphasize that as we mentioned in Supp. D.7, the error rate for generated data is not assumed to be available in our fairness measurement, and the analysis in Supp D.7 is solely to validate error rate transferring. We thank the Reviewer for the comment, and we will make this clear in the revised version. $ $ >**Q3**: “I don’t see why one could not just define the “argmax classification” as $C_u(x)$ directly.” **A3**: We thank the Reviewer for the good suggestion and will directly define $C_u(x)$ as the argmax classification for clearer discussion. $ $ >**Q4**: “It may be worth changing notation to reflect the 3 possible $p$’s: that generated by the SA-classifier, that generated by GenData, and the unknown true statistic which is being approximated by the former two.” **A4**: We appreciate the Reviewer's sharp observation. The Reviewer's understanding is correct. In the paper, to avoid introducing additional symbols and with a slight abuse of notation, we use $p^*$ to denote population statistics in Sec 2, and $p^*$ to denote estimation from GenData in Sec 3. We will consider the Reviewer's suggestion seriously and update the submission. $ $ >**Q5**: Relating to “Weakness 1”, how exactly are the SA classifiers typically obtained? **A5**: Please find the answer to this question in A1.
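The zero-shot SA classification with CLIP outlined in A1 reduces to an argmax over cosine similarities. A minimal sketch with placeholder embeddings (the embedding vectors and prompt handling here are stand-ins for the actual CLIP encoders described in Supp. E):

```python
import numpy as np

def zero_shot_sa_predict(image_emb, prompt_embs):
    # image_emb: (d,) image embedding; prompt_embs: (2, d) embeddings of
    # the two SA text prompts. Returns the index of the SA class whose
    # prompt embedding has the highest cosine similarity with the image.
    norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(prompt_embs) @ norm(image_emb)
    return int(np.argmax(sims))
```

With real CLIP, `image_emb` and `prompt_embs` would come from the image and text encoders respectively; the decision rule itself is unchanged.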
$ $ >**Q6**: Further relating to “Weakness 1”, how well do the error rates of the SA classifier transfer? That is, what is the difference between the SA classifier’s error rates with the dataset it was trained on versus the error rates on GenData (which I am assuming is that of Table 1 & 2)? **A6**: Addressed in A2 above. --- Rebuttal Comment 1.1: Title: Re: Response Comment: Thank you for the detailed response. I currently have no further questions and will keep my scores for now. --- Reply to Comment 1.1.1: Title: Thank you for the positive feedback and insightful comments Comment: We sincerely thank the reviewer for the insightful comments. Thank you for the positive feedback and evaluation. We will include all additional results in the revised version. Sincerely, Authors
Summary: This paper considers fairness measurement for generative models. The contributions of this paper are three-fold. First, the authors reveal that existing frameworks have significant measurement errors, even when using accurate sensitive attribute classifiers. Second, the authors propose a new framework, namely CLEAM, that uses a statistical model to account for the inaccuracies of SA classifiers, thus reducing the measurement errors. Finally, the authors use the proposed CLEAM to measure fairness in important text-to-image generators and GANs, which shows the effectiveness of the proposed framework. Experimental results with a manually labeled dataset show that the proposed CLEAM achieves lower error compared with some baseline schemes. Strengths: 1) Proposes a novel framework to measure the fairness of generative models from both theoretical and experimental perspectives. 2) The fundamental statistical model is easy to follow, and the proposed method CLEAM is able to reduce the fairness measurement error. 3) The dataset created in this paper will benefit the research community. Weaknesses: After reading this manuscript, this reviewer finds that there are some issues that need to be addressed, as follows: 1) On page 6, the authors say that “the probability of the counts for each output $c^T$ in Eqn. 2 (denoted by $N_c$) can be modeled by a multinomial distribution.” Does this assumption hold in practical systems? 2) The presentation of the statistical model should be improved. Why do the authors assume a multivariate Gaussian distribution instead of other statistical distributions? 3) On page 6, what does "M" represent in equation (3)? 4) The description of equation (8) is not clear enough. The authors are suggested to explain it in detail. 5) The authors are suggested to provide more experiments with other datasets and generative models, in order to demonstrate the effectiveness of the proposed framework.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to my comments for details. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to my comments for details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: “On page 6, the authors say that “the probability of the counts for each output $c^T$ in Eqn. 2 (denoted by $N_c$) can be modeled by a multinomial distribution.” Does this assumption hold in practical systems?” **A1**: Thank you for your insightful question. In Sec. 4.1, we have considered the system very carefully (more details in Supp. A.1) and found that this model does indeed hold in most practical systems (i.e., practical generative models). Specifically, we recall that Eqn. 2 states that a sequence of $n$ independent experiments (image generations), each with the same success rate (for example, the same probability of generating an image with sensitive attribute value 1 in each generation), can be modeled as a multinomial distribution, by the definition of this distribution. We remark that in a practical system, as long as the generation of each sample is independent and the generative model is time-invariant, the requirements are satisfied and our statistical model holds. Note that for current image generative models like GANs and diffusion models, both requirements are usually met. We will add a statement to clarify this. $ $ >**Q2**: “The presentation of the statistical model should be improved. Why do the authors assume a multivariate Gaussian distribution instead of other statistical distributions?” **A2**: We thank the Reviewer for the feedback. The assumption of a multivariate Gaussian distribution is based on the “normal approximation to the multinomial” [36,37], an application of the central limit theorem. Specifically, as mentioned in Sec 4.1 of the main paper (and also in our previous response), we use a multinomial distribution to model the possible events of the SA classifier output. Then, since $\mathbf{p}$ in Eqn. (2) is not extreme and $n$ is reasonably large, this multinomial distribution can be approximated by a multivariate Gaussian distribution [36,37] (more details in Supp. A.1).
We additionally remark that, to the best of our knowledge, the multivariate Gaussian distribution is the most appropriate approximation, and it enables us to later estimate the distribution of $\hat{p}$ (Eqn. (4) and (5)) with more ease. Note that the accuracy of this approximation is validated in Supp. C. We will shift some details from the Supp. and add explanations to make this part clear. $ $ >**Q3**: “On page 6, what does "M" represent in equation (3)?” **A3**: We apologize if it was unclear. In Sec 4.1, the matrix $\mathbf{M}$ is a component of the covariance matrix of the multivariate Gaussian distribution, i.e., $\mathbf{\Sigma}=n\mathbf{M}$, which is determined following the literature [36,37]; $\mathbf{M}$ characterizes the interaction of the elements of the probability vector $\mathbf{p}$. As mentioned, the expanded form of this term can be found in Supp. A.1. We will make this part clearer in the final version. $ $ >**Q4**: The description of equation (8) is not clear enough. The authors are suggested to explain it in detail. **A4**: We apologize if it was unclear. As discussed in Sec 4.2, Eqn. 8 is the maximum likelihood approximation of $p^*$ that takes into account the error in the sensitive attribute classifier ($\alpha$) and therefore achieves a better estimation and measurement of fairness. We refer to Eqn. 8 as CLEAM’s point estimate. Due to space limitations, we provide a compact derivation of this equation in the main paper. However, in Supp. A.2, we provide a step-by-step derivation (as mentioned in the main manuscript) and additional in-depth mathematical intuition (statistical requirements and assumptions) on how this maximum likelihood approximation is derived and what it entails. We will make this part clearer. $ $ >**Q5**: Authors are suggested to provide more experiments with other datasets and generative models, in order to demonstrate the effectiveness of the proposed framework. **A5**: Thank you.
We would like to respectfully clarify that our experiments already consider multiple different generative models and datasets. Specifically, in Tab.1 we consider two state-of-the-art (SOTA) generative models, StyleGAN2 (Conv-based) and StyleSwin (transformer-based), which are both based on the CelebA-HQ dataset. Then in Tab.2, we consider the Stable Diffusion Model (SDM; a SOTA text-to-image generative model), which is based on the LAION-5B [a] dataset. Finally, in the ablation studies (Sec 5.2) we provide a further assessment on the AFHQ dataset, which, due to space constraints, we only briefly discuss in the main manuscript. More details can be found in Supp. D.3 and D.5. We will include an additional remark to make these points more explicit in the final manuscript. Based on the Reviewer's suggestion, as an additional experiment, we further study our proposed CLEAM on an additional generative model and dataset. Specifically, we carried out a similar procedure to analyze the bias w.r.t. $\texttt{Gender}$ of another pre-trained diffusion model [b] on the FFHQ dataset [c]. We utilized CLIP as the SA classifier. For evaluation, we similarly find the ground-truth (GT) $p^*$ via the same procedures as in Supp. H to hand-label the generated samples. Note that the GT is used solely for evaluation. Our results show that CLEAM is similarly able to reduce the errors ($e_\mu$ and $e_\rho$) when compared against the baseline. We will include these results in the revised version of the paper. | Model | GT | $\mu_{Base}$ | $e_{\mu}$ | $\mu_{CLEAM}$ | $e_{\mu}$ | $\rho_{Base}$ | $e_\rho$ | $\rho_{CLEAM}$ | $e_{\rho}$ | |---|---|---|---|---|---|---|---|---|---| | Diffusion model [b] | 0.57 | 0.585 | 2.63% | 0.571 | 0.18% | [0.578,0.593] | 4.04% | [0.564,0.579] | 1.58% | [a] Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS’22. [b] High-resolution image synthesis with latent diffusion models. CVPR’22.
[c] A style-based generator architecture for generative adversarial networks. CVPR’19. --- Rebuttal Comment 1.1: Title: Re: Response Comment: Thank you for the detailed response. I don’t have any further questions at the moment and will keep my scores for now. --- Reply to Comment 1.1.1: Title: Thank you for the positive feedback and insightful comments Comment: We extend our appreciation to the Reviewer for their insightful comments. We are grateful for the positive feedback and evaluation. As previously mentioned, the revised manuscript will incorporate all the necessary clarifications and additional results discussed in this rebuttal. $ $ Sincerely, Authors
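The multinomial model and its normal approximation discussed in A1–A3 above can be sketched numerically. The snippet below is an illustrative check, not the paper's code: for a binary attribute, the counts of SA-classifier outputs follow Binomial($n$, $p_1$) (the $K=2$ multinomial), and the Gaussian approximation has mean $np_1$ and variance $np_1(1-p_1)$, i.e. $\Sigma = n\mathbf{M}$ with $\mathbf{M}$ reducing to the standard multinomial covariance factor $p_1(1-p_1)$ in the binary case (the paper's exact $\mathbf{M}$ is given in Supp. A.1). The values of `p1`, `n`, and `trials` are hypothetical.

```python
import random
import statistics

random.seed(0)
p1 = 0.6        # hypothetical probability that the SA classifier outputs class 1
n = 200         # generated images per measurement batch
trials = 20000  # repeated measurement batches

# Counts N_c of class-1 outputs follow Binomial(n, p1), i.e. the K=2 multinomial.
counts = [sum(random.random() < p1 for _ in range(n)) for _ in range(trials)]

# Normal approximation to the multinomial: mean n*p1, variance n*p1*(1-p1),
# i.e. Sigma = n*M with M = p1*(1-p1) in the binary case.
mean_theory = n * p1            # 120.0
var_theory = n * p1 * (1 - p1)  # 48.0

print(abs(statistics.fmean(counts) - mean_theory) < 1.0)    # True
print(abs(statistics.variance(counts) - var_theory) < 3.0)  # True
```

The empirical mean and variance of the simulated counts land on the theoretical values well within sampling error, which is the "normal approximation to the multinomial" argument in miniature.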
Summary: The authors conduct a study on the fairness of generative models. They propose a CLassifier Error-Aware Measurement (CLEAM) framework which accounts for inaccuracies in classifiers involving sensitive attributes. The authors also create a new dataset of generated images from a text-to-image generator, which are then used to evaluate the accuracy of existing fairness frameworks. Strengths: Better measurement and quantification of bias is a very critical topic in both classification and generation tasks. The proposed CLEAM framework appears to produce results which are more balanced than the baseline methods. Comparisons are made to other approaches. Weaknesses: The text/language/grammar could be improved in many places, especially the abstract. Even the opening of the intro is confusing: "fairness is defined as equal generative quality and equal representation w.r.t some sensitive attributes (SA). In this work, we focus on the more widely utilized definition – equal representation." Isn't this the same, or actually a less demanding, definition? Does generating samples from both classes, but one at a much lower quality level, actually constitute fairness? Doesn't there need to be some demand on the quality of generation? The jumping around between the language of SA classifiers and generative models is confusing throughout. The description/introduction of CLEAM (lines 59-68) is wordy and a bit confusing. Other than the fact that it is "a statistical model", it's unclear what it is after reading this section. Since this is the focus of the paper, it should be crystal clear what the core of this method is after reading this section. The authors state "we observe an intriguing property in Stable Diffusion Model"; however, what they are observing is instability due to noise. Again, I believe this is more related to instability/adversarial attacks than "bias" as it is traditionally defined. I think disentangling these (related) concepts is important.
The paper is very dense with many results as well as mathematics. It's clear the authors have much to tell with limited space. However, they need to walk the reader through it a bit more, as it is hard to read through the tables and understand the key take-aways from each. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The abstract would benefit from fairly substantial rewriting IMO. "A ResNet18 for Gender with accuracy 97%" is an awkward phrase, and it is also unclear what/why it is so specifically being referenced in the abstract. The abstract should seek to make a general observation or summary as to the goals and contributions of the work. Additionally, generative models are mentioned first, but then the focus is on classifiers, before moving back to generative models. Cleaning up the abstract would be very beneficial. Are the results of Figure 2 "bias" or instability to adversaries? Traditionally, bias is usually thought of when a difference in classes is introduced due to something like "doctor" being associated with men and "nurse" being associated with women. Here there is literally no difference between "a" and "one", yet the distribution shifts. This feels more similar to an adversarial attack where random noise is added, as opposed to a biased classifier. Usually adversarial instability in image generation is addressed by adding noise either to the input or latent space. Here the issue is being caused by noise within the text prompt that is impacting the generation. It would be helpful to put the SE in the tables and not the supplemental only (makes it tough on the reader jumping back and forth). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This work is explicitly designed to overcome limitations of other frameworks which may have generative bias Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the valuable suggestions; they are very helpful. We will clean up the abstract and introduction, shorten the discussion of CLEAM in the introduction, shift some results/mathematics to the Supp., and better arrange the discussion of SA classifiers/generators as suggested by the Reviewer. $ $ >**Q1**: "fairness is defined as equal generative quality and equal representation w.r.t some sensitive attributes (SA). In this work, we focus on the more widely utilized definition – equal representation." Isn't this the same, or actually less demanding definition? Does generating samples from both classes, but one at a much lower quality level, actually constitute fairness? Doesn't there need to be some demand on the quality of generation? **A1**: We apologize for a typo here. The sentence should be: "fairness is defined as equal generative quality **or** equal representation w.r.t some sensitive attributes (SA). In this work, we focus on the more widely utilized definition – equal representation." We would like to clarify that i) equal generative quality and ii) equal representation are two different fairness definitions. In this work, we focus on equal representation. We remark that, to the best of our knowledge, the majority of works in fair generative modeling focus on equal representation, e.g., learning a fair generative model with more equal representation [1,2]. Meanwhile, equal generative quality is not the focus of our work. We will clarify this. $ $ >**Q2**: Are the results of Figure 2 "bias" or instability to adversaries? Traditionally bias is usually thought of when a difference in classes is introduced due to something like "doctor" being associated with men and "nurse" being associated with women. Here there is literally no difference between "a" and "one", yet the distribution shifts. This feels more similar to an adversarial attack where random noise is added as opposed to a biased classifier.
Usually adversarial instability in image generation is addressed by adding noise either to the input or latent space. Here the issue is being caused by noise within the text prompt that is impacting the generation. **A2**: The Reviewer's comment is very insightful. The intention of this experiment is to apply our proposed fairness measurement framework to reliably measure biases in popular generative models. When we perform this measurement on the Stable Diffusion Model (SDM), we follow best practices for input prompts [24, 29-31] and use indefinite (gender-neutral) pronouns or nouns [32, 33]; see Sec. 3 for details. Our careful design of input prompts is intended to avoid the gender stereotypes mentioned by the Reviewer. However, in our experiments, we observe markedly different biased outputs for SDM, as shown in Fig. 2. To the best of our knowledge, such an observation has not been reported previously for SDM. We note that in our work, for generative models' fairness/bias, we follow the definition of equal representation as discussed in the introduction (and above), instead of the traditional definition of stereotyping (i.e., generalization about a group of people based on certain traits). The Reviewer's viewpoint on instability to adversaries is very insightful, and we believe this could be the **root cause** of our observation of different biased outputs (unequal representation). However, understanding and validating the root cause of such biased outputs would require substantial investigation and is beyond the scope of this work. In our work, the focus is to study fairness measurement and to report biases in popular generative models using our improved framework. We believe it will be a very interesting future study to understand the root cause of our observed bias, and the Reviewer's insight on instability to adversaries is a promising direction for uncovering the underlying reason.
$ $ >**Q3**: It would be helpful to put the SE in the tables and not the supplemental only (makes it tough on the reader jumping back and forth). **A3**: We appreciate the Reviewer’s feedback. Given the limited space, including the SE in the tables may make them a bit difficult to read. However, we will attempt to reformat the tables to include it, following the Reviewer's suggestion. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I have read the other reviews and authors' responses. I am willing to update my recommendation to weak accept. I think the question around bias vs. adversarial stability is an important point which is worth addressing. If the authors make the improvements to the text which they have indicated throughout the review, I think this paper will offer a contribution to the community. --- Reply to Comment 1.1.1: Title: Thank you for the very constructive feedback and increasing the rating Comment: We sincerely thank the reviewer for the insightful comments and for increasing their rating. We will indeed include the above discussion in the revised manuscript. Sincerely, Authors
Summary: The paper studies measuring fairness in generative models, which is defined as an equal number of samples generated from different groups. The measurement needs a sensitive attribute (SA) classifier to predict the group attribute in order to compute fairness. The paper empirically finds that the error in the SA classifier would largely impact the measurement performance of the fairness. They test this by manually labeling samples and comparing fairness measures. The paper then proposes a calibration trick to reduce the fairness measurement error arising from the SA classifier's error. Strengths: 1. Fairness in generative models is an important and open problem Weaknesses: 1. The paper has no insights on why the error in the SA classifier would propagate to the fairness measurement. The finding is entirely empirical, and seems independent of what kind of fairness definitions are used. Would the SA classifier error impact all fairness definitions equally? How much would it impact? What determines the fairness error? The paper is mostly empirical in this finding without providing good insights. 2. The mitigation method seems to be simply computing the metric on different data subsets and then taking the average and computing a confidence interval. The technical contribution is a little too elementary. 3. The paper largely ignores the vast literature on noisy labels, which I think is directly relevant to the problem, i.e., studying the impact of label noise (imperfect SA prediction in this case) on fairness and models. For example: [1] Natarajan, Nagarajan, et al. "Learning with noisy labels." Advances in neural information processing systems 26 (2013). [2] Lukasik, Michal, et al. "Does label smoothing mitigate label noise?." International Conference on Machine Learning. PMLR, 2020. There also seems to be some connection between label smoothing and the mitigation. Can you point out any if it exists? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weakness 1 and 3.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the Summary: Thank you. Our apologies if this was unclear, but the Reviewer’s summary is not entirely accurate: - Our statistical model is overlooked: We develop a statistical model to understand how errors in the SA classifier ($\alpha$) affect the fairness measurement ($\hat{p}$). See Fig 1.b., the entirety of Sec 4.1, and Supp A.1, which discuss the statistical model. Therefore, the Reviewer’s summary is not accurate: "The paper empirically finds that the error in the SA classifier …". Instead, our finding is supported by statistical analysis. - Our proposed method is misunderstood: Based on our statistical analysis, we propose a new measurement framework to mitigate such error. See the entirety of Sec 4.2, Algo 1, and Supp A.2. Our statistical modeling of the SA classifier error (Sec 4.1) and our new estimators taking into account such error (Eqn. 8, Eqn. 10) have been validated in Supp C. Therefore, the Reviewer's summary is not accurate: "The paper then proposes a calibration trick …". Instead, we propose a new measurement framework with statistical grounding. - This contribution is overlooked: Using our proposed framework, we evaluate biases in SOTA GAN and diffusion models. $ $ >**Q1**: The paper has no insights on why the error in the SA classifier would propagate to the fairness measurement. The finding is entirely empirical, and seems independent of what kind of fairness definitions are used. **A1**: We apologize if this was unclear, but we have developed a statistical model to provide insights on how errors in the SA classifier ($\alpha$) propagate to the fairness measurement ($\hat{p}$). See Fig 1.b., the entirety of Sec 4.1, and Supp A.1. **Importantly, based on our model, Eqn. 4, 5 directly relate the error in the SA classifier to the fairness measurement, explaining statistically how the error in the SA classifier impacts the fairness measurement.** $ $ >**Q2**: Would the SA classifier error impact all fairness definitions equally? How much would it impact? What determines the fairness error?
The paper is mostly empirical in this finding without providing good insights. **A2**: Based on our statistical model, Eqn 4, 5 clearly illustrate that the SA classifier error $\alpha$ determines the error in the fairness measurement $\hat{p}$ and its impact; see Sec 4.1. As explained in Fig 1.a, 1.b and throughout Sec 4.1, our statistical model is specific to equal representation, the most popular fairness definition for generative models (see Sec 1). $ $ >**Q3**: The mitigation method seems to be simply computing the metric on different data subsets and then taking the average and computing a confidence interval. The technical contribution is a little too elementary. **A3**: Our apologies if this was unclear, but our mitigation method has been misunderstood by the Reviewer. **Our method is grounded on our statistical model, which relates the error in the SA classifier ($\alpha$) to the statistical distribution of the fairness measurement ($\hat{p}$)**. Based on this model, we derive our framework (CLEAM) and new estimators, taking into account the SA classifier error $\alpha$ to achieve improved fairness measurement. See Fig 1.b, Sec 4.1, Sec 4.2, Algo 1, Supp A.1 and A.2. Supp. Sec. C validates our proposed framework and our proposed estimators statistically. By taking into account the SA classifier error, our proposed framework can significantly reduce the fairness measurement error. **It is important to note that our improved accuracy cannot be achieved by the simple approach mentioned by the Reviewer, which we refer to as the Baseline and which has been extensively compared with our framework CLEAM in Sec 5.** $ $ >**Q4**: The paper largely ignores the vast literature on noisy labels, which I think is directly relevant to the problem, i.e., studying the impact of label noise (imperfect SA prediction in this case) on fairness and models. **A4**: Our apologies if this was unclear, but our paper does not ignore label noise. Instead, our statistical model (Fig 1.b) encompasses and takes into account broadly different causes of imperfect SA prediction, e.g.
task hardness (ln 208), label noise, and other causes. Specifically, in our statistical model, $\alpha$ captures SA error arising from different causes, including label noise. Meanwhile, our proposed CLEAM takes $\alpha$ into account to achieve accurate fairness measurement (Sec 4.2). Note that, instead of focusing on a specific cause of imperfect SA prediction (e.g., label noise) and attempting to mitigate that specific issue, our statistical model broadly encompasses the different causes of SA classifier error (captured in $\alpha$) and achieves improved fairness measurement without the need to update the SA classifier. In the next response, we demonstrate the effectiveness of our approach vs. an alternative approach that focuses narrowly on label noise. $ $ >**Q5**: There also seems to be some connection between label smoothing and the mitigation. Can you point out any if it exists? **A5**: Our proposed mitigation is fundamentally different from label smoothing. As discussed, our statistical model broadly encompasses different types of SA classifier error (captured in $\alpha$) and takes them into account under one unified method, without the need to update the SA classifier. This is in sharp contrast to previous work, which applies label smoothing to mitigate label noise in order to train an improved classifier. To compare our approach vs. label smoothing, we conduct the following experiment. We implement label smoothing in the training of a ResNet-18 based SA classifier with the same setup as Tab.1(A) w.r.t. SA $\texttt{Gender}$. In this setup, label smoothing does not significantly improve the accuracy of the SA classifier (see $\mathbf{\alpha}$ in the Table), as the SA classifier is already very accurate without label smoothing. However, even with these accurate SA classifiers, fairness errors are significant using the Baseline, consistent with our findings in other setups.
However, with our proposed CLEAM, fairness errors can be reduced considerably (with or without label smoothing). Please see the global response for the table of results.
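To make the error-propagation mechanism discussed in this rebuttal concrete, here is a minimal, self-contained sketch. It uses the standard identity for a binary classifier with per-class accuracies: the naively measured proportion is a mixture of true and false positives, and inverting this relation recovers the true proportion. This only illustrates the mechanism behind Eqn. 4/5; the actual CLEAM estimator (Eqn. 8) is a maximum-likelihood estimate derived in Supp. A.2 and need not coincide with this simple inversion. All numeric values below are hypothetical.

```python
def expected_measurement(p_true, acc0, acc1):
    """Expected fraction of class-1 predictions: true positives from the p_true
    fraction of class-1 samples, plus false positives from the rest."""
    return acc1 * p_true + (1 - acc0) * (1 - p_true)

def error_aware_estimate(p_hat, acc0, acc1):
    """Invert the propagation above (requires acc0 + acc1 > 1)."""
    return (p_hat + acc0 - 1) / (acc0 + acc1 - 1)

p_true = 0.7             # hypothetical ground-truth class-1 proportion
acc0, acc1 = 0.90, 0.95  # hypothetical per-class SA-classifier accuracies

p_hat = expected_measurement(p_true, acc0, acc1)
print(round(p_hat, 3))                                    # 0.695 (biased naive estimate)
print(round(error_aware_estimate(p_hat, acc0, acc1), 3))  # 0.7   (error-aware estimate)
```

Even with classifier accuracies above 90%, the naive estimate is shifted away from the true 0.7; accounting for the known per-class accuracies removes that systematic shift.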
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable time and effort in reviewing our work. We appreciate the Reviewers' kind comments and recognition, such as: - "The paper is well written, with clear intuitions, illustrations, and experimental results." (Reviewer 2H4p) - "Fairness in generative models is an important and open problem" (Reviewer uk5v) - "The fundamental statistical model is easy to follow, and the proposed method CLEAM is able to reduce the fairness measurement error." (Reviewer o1Z2) - "The proposed CLEAM framework appears to produce results which are more balanced than the baseline methods." (Reviewer rFFs) - "CLEAM seems to be an intuitive correction to the naive baseline method" (Reviewer 6ZNP) - "Propose new datasets based on generated samples with manual labeling w.r.t. SA." (Reviewer uNfr) We would also like to express our appreciation to all the Reviewers for giving us the opportunity to clarify our work, as well as for the constructive comments. We will consider all suggestions seriously. To briefly recap our work: 1. Our work is the first to **challenge the accuracy of the existing fairness measurement framework** in generative models. Our major contribution is a detailed **statistical model** of the fairness measurement (Fig 1.b, Sec 4.1 and more details in Supp. A.1). Importantly, our statistical model relates the error in the sensitive attribute (SA) classifier to the statistical distribution of the fairness measurement (Eqn. 4, 5), providing insights on how SA classifier error impacts the accuracy of fairness measurement. 2. Based on our statistical model, we derive our new fairness measurement framework (CLEAM) and new estimators, taking into account the SA classifier error to achieve improved fairness measurement. See Sec 4.2, Algo 1, Supp A.2. Supp. Sec. C validates our proposed framework and our proposed estimators statistically. 3.
To make it possible to study fairness measurement, we develop and make available a new dataset (GenData) consisting of labeled generated samples from three state-of-the-art (SOTA) generative models. We remark that in generative modeling, fairness is popularly defined as **equal representation** [1, 2, 7, 9, 12, 16, 17] and should not be confused with classifier fairness definitions (more detailed comparison in Supp. Sec. G). 4. We have performed comprehensive experiments to validate our proposed measurement framework and demonstrate consistent and substantial improvement in accuracy compared to other approaches. Our experiments include: - **3 different generative models**: StyleGAN2, StyleSwin and the Stable Diffusion model - **3 different datasets**: CelebA-HQ, LAION-5B [a] and AFHQ - **5 different SA classifiers**: ResNet-18/34, MobileNetV2, VGG-16 and CLIP. Due to space limitations, additional experiments (e.g., with different sensitive attributes) have been deferred to Supp. Sec. D 5. Finally, utilizing CLEAM, we carry out **reliable** measurement of biases in existing SOTA generative models. $ $ In this rebuttal, we have also included a few additional experiments, as requested by the reviewers. All results support our measurement framework and show it is superior to previous work. 1. Additional generative model and dataset: a Diffusion Model [b] pre-trained on FFHQ [c] 2. Additional comparison with label smoothing: our proposed framework outperforms label smoothing significantly. It should be noted that in our original submission we had already compared with other classifier correction methods: i) mitigating label shift in the SA classifier and ii) calibrating the SA classifier. All experiments show that our proposed framework is superior. More details can be found in Supp. D.8 and G. In what follows, we provide comprehensive responses to all questions. We have provided an anonymized link to the Area Chair for the code of all additional experiments.
We could provide more details if there are further questions. We hope that our responses address the concerns, and we sincerely hope that the Reviewers will consider increasing their ratings if our responses have addressed all their questions. $ $ ### Additional experiments on Diffusion Model [b] on FFHQ dataset | Model | GT | $\mu_{Base}$ | $e_{\mu}$ | $\mu_{CLEAM}$ | $e_{\mu}$ | $\rho_{Base}$ | $e_\rho$ | $\rho_{CLEAM}$ | $e_{\rho}$ | |---|---|---|---|---|---|---|---|---|---| | Diffusion model [b] | 0.57 | 0.585 | 2.63% | 0.571 | 0.18% | [0.578,0.593] | 4.04% | [0.564,0.579] | 1.58% | $ $ ### Additional comparison with Label Smoothing | Model | $\alpha$ | GT | $\mu_{Base}$ | $e_{\mu}$ | $\mu_{CLEAM}$ | $e_{\mu}$ | $\rho_{Base}$ | $e_\rho$ | $\rho_{CLEAM}$ | $e_{\rho}$ | |---|---|---|---|---|---|---|---|---|---|---| | R18 w/o smoothing | \{0.947,0.982\} | 0.642 | 0.610 | 4.98% | 0.638 | 0.62% | [0.602,0.618] | 6.23% | [0.629,0.646] | 2.02% | | R18 w/ smoothing | \{0.935,0.985\} | 0.642 | 0.605 | 5.76% | 0.641 | 0.16% | [0.595,0.615] | 7.32% | [0.632,0.650] | 1.56% | $ $ [a] "Laion-5b: An open large-scale dataset for training next generation image-text models." NeurIPS’22. [b] "High-resolution image synthesis with latent diffusion models." CVPR’22. [c] "A style-based generator architecture for generative adversarial networks." CVPR’19. Pdf: /pdf/8439252ab9455b2583dbb072b947d6a3f91890b5.pdf
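The error percentages in the two tables above can be reproduced from the reported point estimates and intervals, assuming $e_\mu$ is the relative error of the point estimate w.r.t. the GT and $e_\rho$ is the largest relative deviation of the interval endpoints from the GT (these definitions are our assumption, but they reproduce every reported number):

```python
# Assumed definitions of the error metrics (they reproduce the tables above).
def e_mu(mu, gt):
    """Relative error of the point estimate w.r.t. GT, in percent."""
    return abs(mu - gt) / gt * 100

def e_rho(lo, hi, gt):
    """Largest relative deviation of the interval endpoints from GT, in percent."""
    return max(abs(lo - gt), abs(hi - gt)) / gt * 100

# Diffusion model [b] on FFHQ (GT = 0.57):
print(round(e_mu(0.585, 0.57), 2))          # 2.63 -> Baseline e_mu
print(round(e_mu(0.571, 0.57), 2))          # 0.18 -> CLEAM e_mu
print(round(e_rho(0.578, 0.593, 0.57), 2))  # 4.04 -> Baseline e_rho
print(round(e_rho(0.564, 0.579, 0.57), 2))  # 1.58 -> CLEAM e_rho

# Label-smoothing table, R18 without smoothing (GT = 0.642):
print(round(e_mu(0.610, 0.642), 2))         # 4.98 -> Baseline e_mu
```

Under these assumed definitions, every error value in the tables checks out against its point estimate or interval.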
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a framework for fairness measurement. It first shows that existing frameworks have considerable measurement errors even when highly accurate sensitive attribute classifiers are used, then proposes CLassifier Error-Aware Measurement (CLEAM), a new framework which uses a statistical model to account for inaccuracies in SA classifiers. Strengths: 1. Shows the significant measurement errors of existing frameworks through experiments. 2. Proposes new datasets based on generated samples with manual labeling w.r.t. SA. 3. Proposes a simple statistical approximation method to obtain a stable and accurate estimation of the GT probabilities. Weaknesses: 1. The organization of the paper is somewhat hard to follow. The first contribution takes too long and may confuse readers. 2. The introduction to the proposed method is too short and not very solid. Some theoretical support may be better. Maybe you can talk about cases when using distributions other than Gaussian to approximate the distributions. 3. Only a public dataset CelebA-HQ is used. It's better to test methods on various datasets. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the "weaknesses" part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The structure of the paper is hard to follow and somewhat boring for readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable suggestion. We will shorten the first contribution and discuss more on our proposed method in the introduction following the Reviewer's suggestion. $ $ >**Q1**: The introduction to the proposed method is too short and not very solid. Some theoretical support may be better. **A1**: We apologize if this was unclear and would like to respectfully clarify that the entire proposed solution (Sec.4) is based on theoretical statistical modeling. As mentioned, the proposed method (Sec.4.1) first derives a statistical model to link the observed errors in the fairness measures to the SA classifier’s inaccuracy. Only then are we able to utilize the statistical model to systematically account for the classifier’s error for fairness measurement (Sec.4.2). Furthermore, we remark that our intention was to highlight the essential information to understand CLEAM in the main manuscript, while Supp. A.1 and A.2 provide extensive step-by-step details and derivation of the theoretical model, as indicated in the paper. Nevertheless, we will follow the Reviewer's suggestion and would shift some of these details back into the manuscript. $ $ >**Q2**: Maybe you can talk about cases when using distributions other than Gaussian to approximate the distributions. **A2**: Thanks, we would like to clarify that we model our system as a Gaussian Distribution based on “normal approximation to the multinomial” [36,37], an application of the central limit theorem. To the best of our knowledge, this is the most appropriate distribution [36,37] that enables us to derive the distribution of $\hat{p}$. We will clarify this in the paper. $ $ >**Q3**: Only a public dataset CelebA-HQ is used. It's better to test methods on various datasets. **A3**: We apologize if this was unclear and would like to respectfully clarify that **our experiments were carried out on three different datasets. 
This is more than the number of datasets used in previous fair generative modeling work [1,2].** Specifically, as mentioned by the reviewer, in Tab. 1 StyleGAN2 and StyleSwin are used and are based on the CelebA-HQ dataset. Then, in Tab. 2, the pre-trained Stable Diffusion Model is based on the LAION-5B [a] dataset. Finally, in the ablation studies (Supp. D.3/5) we provide further assessment on the AFHQ dataset (mentioned in Sec. 5.2). Overall, our proposed CLEAM demonstrates improved performance compared to other approaches when evaluated over all three datasets. As an additional experiment in this rebuttal, we include a fourth dataset to assess our proposed CLEAM. Specifically, we carry out a similar procedure to analyze the bias w.r.t. $\texttt{Gender}$ of a pre-trained diffusion model [b] on the FFHQ dataset [c]. We utilize CLIP as the SA classifier. Here, we similarly find the GT $p^*$ by utilizing the same procedures discussed in Supp. H, i.e., hand-labeling the generated samples. Note that the GT is for evaluation only and is not used in our proposed method. Similar to the other datasets, our results show that our proposed CLEAM can significantly reduce the errors for this dataset (see $e_\mu$ and $e_\rho$). We will include these results in the final manuscript.

| Model | GT | $\mu_{Base}$ | $e_{\mu}$ | $\mu_{CLEAM}$ | $e_{\mu}$ | $\rho_{Base}$ | $e_\rho$ | $\rho_{CLEAM}$ | $e_{\rho}$ |
|---|---|---|---|---|---|---|---|---|---|
| Diffusion model [b] | 0.57 | 0.585 | 2.63% | 0.571 | 0.18% | [0.578,0.593] | 4.04% | [0.564,0.579] | 1.58% |
$ $
[a] "Laion-5b: An open large-scale dataset for training next generation image-text models." NeurIPS’22.
[b] "High-resolution image synthesis with latent diffusion models." CVPR’22.
[c] "A style-based generator architecture for generative adversarial networks." CVPR’19.
--- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. It basically resolves my confusion, and I raise my score.
--- Reply to Comment 1.1.1: Title: Thank you for the very constructive feedback and increasing the rating Comment: We sincerely thank the reviewer for the insightful comments. Thank you for the positive feedback and for increasing the rating. We will include all additional results in the revised version. Sincerely, Authors
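The "normal approximation to the multinomial" invoked in A2 can be illustrated with a short simulation. The sketch below is for intuition only (it is not the authors' code, and the batch size `n = 400` and proportion `p_star = 0.57` are hypothetical): for a binary sensitive attribute, the per-batch estimate $\hat{p}$ is approximately Gaussian with mean $p^*$ and variance $p^*(1-p^*)/n$, by the central limit theorem.

```python
import numpy as np

# Illustrative sketch (not the paper's code): per-batch estimates p_hat of
# a binary SA proportion are approximately Gaussian by the normal
# approximation to the binomial/multinomial.
rng = np.random.default_rng(0)
p_star, n, s = 0.57, 400, 5000          # true proportion, batch size, #batches

p_hat = rng.binomial(n, p_star, size=s) / n   # s batch-level estimates

# Moments predicted by the Gaussian approximation N(p*, p*(1-p*)/n).
gauss_mean = p_star
gauss_var = p_star * (1 - p_star) / n

assert abs(p_hat.mean() - gauss_mean) < 0.005
assert abs(p_hat.var() - gauss_var) < 0.0005
```

The empirical mean and variance of the batch estimates match the Gaussian moments closely, which is the property the statistical model in Sec. 4.1 relies on.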
Summary: The objective of this paper is to measure fairness in generative models. There are three contributions. (i) Consideration of measurement errors of sensitive attribute (SA) classifiers in fairness measurement of generative models. (ii) A classification error aware measurement framework, called CLEAM, which, based on a statistical model, accounts for the inaccuracies of the SA classifier to reduce measurement error in generative models. (iii) As an application, the authors demonstrate that CLEAM can be applied to measure fairness in text-to-image generators and GANs. Strengths: The paper is well written, with clear intuitions, illustrations, and experimental results. Weaknesses: - Putting the application of gender bias in the introduction seems to be out of place, possibly undermining the main framework CLEAM. Similarly, Table 1 is also out of place, throwing numbers at the readers without explaining the setup. - The authors make an assumption of ground-truth labels of sensitive attributes, which may not be available in practice. - Some of the human faces in Figure 2 and Figure 3(a) are different, revealing the uncertainty of the generative models. I understand the intuition of the authors to provide a demonstration of the paper at an early part of the paper. Please clarify if I misunderstood anything. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The presented method considers binary sensitive attributes. However, modern generative models easily generate images with multi-sensitive features. How do the authors extend to this scenario? Is naive enumeration the only possible way? - In Equation (2), are $p_i$ and $\alpha_i$ known? - In line 222, is $ s = 30 $ large enough? - Is there any future work based on this paper? Please also address the points in "Weakness" above. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. 
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: Putting the application of gender bias in the introduction seems to be out of place, possibly undermining the main framework CLEAM. Similarly, table 1 is also out of place, throwing numbers to the readers without explaining the setup **A1**: Thank you for your suggestion. We will fix the placement of Fig. 2 and Tab. 1. $ $ >**Q2**: The authors make an assumption of ground-truth labels of sensitive attributes, which may not be available in practice. **A2**: We thank the reviewer for the comment. Importantly, we would like to respectfully clarify that ground-truth (GT) labels are not used in our proposed method. Meanwhile, the GT labels for the generated data (GenData introduced in our work) are used **solely for evaluation** of different methods. Again, the GT labels for the generated data are not used in our proposed method. Our method assumes availability of a SA classifier and its validation accuracy ($\alpha$). In practice, $\alpha$ is computed during the validation stage of a SA classifier. Note that a SA classifier is needed in all existing measurement methods. $ $ >**Q3**: Some of the human faces in Figure 2 and figure 3(a) are different, revealing the uncertainty of the generative models. I understand the intuition of the authors to provide a demonstration of the paper at an early part of the paper. Please clarify if I misunderstood anything. **A3**: We apologize for any confusion. To clarify, the images displayed across Fig. 2 and Fig. 3(a) use some different seed values, while within each individual figure we utilize the same seed value. However, for improved clarity, we have included a revised Fig. 3(a) (in the attached pdf, Fig. 1 (rebuttal)) with the same seed values for the displayed images as Fig. 2. $ $ >**Q4**: The presented method considers binary sensitive attributes. However, modern generative models easily generate images with multi-sensitive features. How do the authors extend to this scenario? 
Is naive enumeration the only possible way? **A4**: Thank you for your comment. Considering the multi-sensitive attributes is indeed an interesting idea. However, we remark that in current literature, fairness of generative models has been studied for binary sensitive attributes mainly due to lack of an available large labeled dataset needed for systematic experimentation. As a result, CLEAM similarly focuses on binary SA to address a common flaw in the evaluation process of the many proposed State-of-the-Art methods. Assuming that constraint of dataset is addressed, our same CLEAM approach can be easily extended to a multi-label setting. For example, given a 3 label sensitive attribute where $p^*_j$ is the probability of generating a sample with label $j$ and ${\alpha}\_{i|j}$ denotes the probability (“accuracy”) of the SA classifier in classifying a sample with GT label $j$ as $i$ for $i,j \in \\{0,1,2\\}$, Fig. 2 (Rebuttal) in the attached Pdf shows our statistical model for this setting. We can then similarly solve for the $p^*$ point estimate by solving the matrix: $$ \begin{bmatrix} \alpha\_{0|0} & \alpha\_{0|1} & \alpha\_{0|2}\\\\ \alpha\_{1|0} & \alpha\_{1|1} & \alpha\_{1|2}\\\\ \alpha\_{2|0} & \alpha\_{2|1} & \alpha\_{2|2}\\\\ \end{bmatrix} \begin{bmatrix} p^*\_0 \\\\ p^*\_1 \\\\ p^*\_2 \end{bmatrix} = \begin{bmatrix} \ddot{\mu}\_\hat{p_0} \\\\ \ddot{\mu}\_\hat{p_1} \\\\ \ddot{\mu}\_\hat{p_2} \end{bmatrix}$$ We will include the detailed procedure and full solution in the final version of the paper. $ $ >**Q5**: In Equation (2), are $p_i$ and $ \alpha_i$ known? **A5**: We apologize if it was unclear, as discussed in Sec.3 of the main paper, $p^*_i$ is the unknown GT distribution that we are trying to find and $\alpha_i$ is a known accuracy of the SA classifier. In practice, accuracy of the classifier ($\alpha_i$) is evaluated during the validation stage, a common practice when training discriminative models (line 113-114). 
On the other hand, as first introduced in Sec. 2, the ground truth SA distribution ($p^*_i$) is unknown as the SA classifier only outputs some approximation $\hat{p}$. Hence, in Sec. 4.1, we provide the setup to model the theoretical distribution, and then utilize it in Sec. 4.2 to propose CLEAM for a better estimation of $p^*_i$ with eqn. (8) and eqn. (10). $ $ >**Q6**: In line 222, is s=30 large enough? **A6**: Thanks, our experimental results in Supp. F-Fig 5 demonstrate that $s$=30 is sufficiently large to significantly reduce measurement error, where increasing $s$ does not result in considerable performance improvement. Supp. C.1 then further substantiates these findings by showing that $s$=30 provides a close approximation between the sample-based estimate and the model-based estimates. $ $ >**Q7**: Is there any future work based on this paper? **A7**: Thanks for your question. As future work, now with a more reliable fairness measurement framework, firstly, we consider utilizing this more accurate measurement as a bias mitigation technique during fair generative model training. Secondly, inspired by the findings in Sec. 6, we propose to further look into the biases that exist in stable diffusion models (discussed in Sec. 6 of our manuscript) and identify their possible sources. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thanks for the response. It clarifies my questions. Best of luck. --- Reply to Comment 1.1.1: Title: Thank you for the positive feedback and constructive comments Comment: We express our gratitude to the Reviewer for their constructive remarks, as well as the positive assessment. As discussed, we will carefully address the Reviewer's comments in the final manuscript. Sincerely, Authors
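The 3-label extension sketched in A4 amounts to a linear solve: given the classifier's confusion matrix $\alpha_{i|j}$ and the observed mean estimates $\ddot{\mu}_{\hat{p}}$, the point estimate of $p^*$ is recovered by inverting the system. A minimal sketch (the confusion-matrix values below are hypothetical, for illustration only; this is not the authors' implementation):

```python
import numpy as np

# Sketch of the 3-label CLEAM point estimate from A4: recover p* from the
# observed mean estimate mu by inverting the confusion matrix.
# alpha[i, j] = P(classifier outputs i | ground-truth label j).
# All numbers here are hypothetical illustrations.
alpha = np.array([[0.90, 0.05, 0.05],
                  [0.06, 0.92, 0.04],
                  [0.04, 0.03, 0.91]])   # columns sum to 1
p_true = np.array([0.5, 0.3, 0.2])       # unknown GT distribution p*

mu = alpha @ p_true                      # what a naive (Base) measure reports
p_est = np.linalg.solve(alpha, mu)       # corrected point estimate

assert np.allclose(p_est, p_true)        # exact recovery in the noiseless case
assert np.isclose(p_est.sum(), 1.0)      # still a valid distribution
```

In the noiseless case the solve recovers $p^*$ exactly; with finite batches, $\mu$ is replaced by the batch-mean estimate and the same system is solved.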
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
Accept (spotlight)
Summary: This paper presents a theoretical characterization and some theorems supporting previous empirical findings on perceptually aligned gradients (PAG) of classification neural network models. Specifically, it provides the first rigorous definitions for the previously qualitative notions of "PAG", and provides the first attempt at a theoretical framework connecting PAG and model robustness, which has been observed empirically in previous work. Strengths: - Very well-written and well-presented paper. Covers and taxonomizes the relevant literature very nicely. - Math and definitions are intuitive, relevant, and sound. I especially like the fact that the theorem assumes almost nothing on the model behavior or training objective. - Overall, this paper addresses a very important and relevant topic, characterizes a lot of themes in it, and provides a great theoretical explanation for it. This theoretical justification was notably missing from previous PAG literature. - Empirical evaluation metrics are carefully picked and well-thought-out. Comparing PAG to score-based models is especially relevant (this was first suggested in [6], I think a reference is due on lines 264-274). Weaknesses: - lines 106-107: noise can also have both on- and off-manifold elements (think of a linear combination of two on- and off-manifold noise vectors). Therefore, the "otherwise" on line 107 is incorrect. - Minor issue: Definition 1 is missing a definition of x (for formality). - While the math (theorem 1) is nice, it does not consider adversarially chosen noise. It would be nice to mention that for normal noise with $\sigma \rightarrow 0$, this covers adversarial noise as well, but it should be made clear that more work is needed to specifically consider adversarial attacks. - Line 151: Paragraph title mentions difference between data and signal manifolds, but the paragraph does not discuss "data manifold". Typo? - The definition of signal vs distractor is too simplistic. 
Signals and distractors are not necessarily separated by a pixel mask only. The underlying signal manifold could lie on a linear or non-linear projection of x, not necessarily on the pixel space itself. This weakens the resulting math (theorem 2), unless the authors can rewrite their proofs w.r.t. a deterministic "signal" function that is generalized beyond simple point-wise masking. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In the abstract, is noise augmentation and randomized smoothing the same thing? If so, I would suggest listing them as one item and not two. - Lines 272-274: Is the score-based model used here class-conditional? If so, it should be made clear by defining p(x|y) and not p(x) in the score function definitions. If not, further explanation is needed. Why would we characterize perceptual alignment using a class-unaware model? "PAG" usually refers to salient class-specific features. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review! We're glad you liked the paper and found the theory intuitive and relevant. We address your questions below. 1.*“While the math (theorem 1) is nice, it does not consider adversarially chosen noise. It would be nice to mention that for normal noise with $\sigma \rightarrow 0$, this covers adversarial noise as well, but it should be made clear that more work is needed to specifically consider adversarial attacks.”* Good point! We think adversarial noise is definitely an important case, but slightly tricky to handle as it represents the worst-case perturbation and has measure zero. For example, it can still happen that models are less robust off-manifold compared to on-manifold, but the only adversarial noise is on-manifold. Essentially, this can happen because adversarial noise is about misclassification, but our notions of robustness are about change in output value. But, note that this theory nonetheless holds for adversarially-trained models in practice, which have on-manifold gradients and hence off-manifold robustness in the average case. Please let us know in case you have more thoughts about this problem! --- 2.*“The definition of signal vs distractor is too simplistic. Signals and distractors are not necessarily separated by a pixel mask only. The underlying signal manifold could lie on a linear or non-linear projection of x, not necessarily on the pixel space itself. This weakens the resulting math (theorem 2), unless the authors can rewrite their proofs w.r.t. a deterministic "signal" function that is generalized beyond simple point-wise masking.”* We agree that Definition 3 can be extended such that the signal and distractor are any pointwise orthogonal complements of each other, and need not be axis-aligned to the input pixels. The reason we define these in this manner is to explain the observations of Shah et al. 
(“Do input gradients highlight discriminative features?”, NeurIPS 2021) who observe that input gradients of robust models highlight discriminative features, which they define as input pixels that can be used to predict the label, which serves as the basis for "phenomenon 1" in our paper. Having said that, we agree that generalizing definition 3 makes the theory potentially more interesting and general. We will add a note about this in the paper. --- 3.*“In the abstract, is noise augmentation and randomized smoothing the same thing? If so, I would suggest listing them as one item and not two.”* By “noise augmentation” we mean the smoothness penalty detailed in Section C.1 in the Supplement, which is different from randomized smoothing. Essentially, this involves training models with an explicit regularization encouraging the model behaviour to be similar with and without noise, i.e., regularizing $ || f(x) - f(x + \epsilon) ||^2$ to be small, where $\epsilon \sim \mathcal{N}(0, \sigma^2)$. --- 4.*“Lines 272-274: Is the score-based model used here class-conditional? If so, it should be made clear by defining p(x|y) and not p(x) in the score function definitions. If not, further explanation is needed. Why would we characterize perceptual alignment using a class-unaware model? "PAG" usually refers to salient class-specific features.”* The diffusion model is unconditional. Interestingly, we found little difference between the scores of class-conditional and unconditional diffusion models on natural images (comparing “cifar10-32x32-uncond-vp” and “cifar10‑32x32‑cond‑vp” from [10]). Empirically, our results are robust towards using conditional and unconditional diffusion models. Thank you also for all the suggestions with the typos, we will make these corrections in the draft. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I maintain my score of 8.
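The "noise augmentation" regularizer described in point 3 above, $\|f(x) - f(x+\epsilon)\|^2$ with $\epsilon \sim \mathcal{N}(0, \sigma^2)$, can be estimated by Monte Carlo sampling. A minimal numpy sketch with a toy linear model (illustrative only, not the paper's training code; for a linear map $f(x)=Wx$ the penalty has the closed form $\sigma^2\|W\|_F^2$, which lets us sanity-check the estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model f(x) = W x. For Gaussian eps ~ N(0, sigma^2 I),
# E||f(x) - f(x + eps)||^2 = sigma^2 * ||W||_F^2 exactly.
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)

def smoothness_penalty(x, sigma, n_samples=20000):
    """Monte-Carlo estimate of E||f(x) - f(x + eps)||^2."""
    eps = rng.normal(scale=sigma, size=(n_samples, x.size))
    diff = (W @ x) - (W @ (x + eps).T).T     # f(x) - f(x + eps), per sample
    return (diff ** 2).sum(axis=1).mean()

sigma = 0.1
penalty = smoothness_penalty(x, sigma)
closed_form = sigma ** 2 * (W ** 2).sum()    # exact value for a linear f

assert abs(penalty - closed_form) / closed_form < 0.05
```

In training, this scalar is added to the loss so the optimizer pushes the model's output to vary little under isotropic input noise, which is the mechanism the rebuttal distinguishes from randomized smoothing.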
Summary: This paper studies Perceptually Aligned Gradients (PAGs), a phenomenon where the input gradients are semantically meaningful and aligned with human perception. While this trait gained research attention, we do not truly understand it. To this end, the paper proposes an explanation via off-manifold robustness. Namely, they show empirically and theoretically that models that are off-manifold robust have perceptually aligned gradients for different robustification techniques. Moreover, they propose a quantitative way to assess PAG, while prior to this work, PAG was evaluated qualitatively only. In addition, they identify three levels of robustness and analyze them in terms of PAG. Strengths: * Perceptually Aligned Gradients is a fascinating phenomenon, and despite gaining substantial research attention, we do not understand it. Shedding light on this trait is an interesting and important research goal. The paper empirically and theoretically verifies the relationship between PAG and off-manifold robustness. To strengthen their findings, they do so for different robustification techniques. * Decomposing the gradients of a model to on and off-manifold directions is very interesting thinking. Besides being theoretically logical, they also develop a way of doing so. * Excellent and clear writing. The authors have done a good job of providing motivation for this work. In addition, while several papers use this term to describe different behaviors, this paper formulates the definitions and makes an order in the various definitions. Weaknesses: * The connection between a conditional score function and PAG was made in [1]. Despite not being used for assessing PAG, they show that a classifier trained to mimic the score function possesses PAG. Thus, I do not find linking the two for assessing PAG very novel. 
* Overstatements and bit trivial claims - “In this work, we provide a first explanation of PAGs” [abstract], but [4] offers an explanation of the generative capabilities of adversarial training using energy-based models. Additionally, I find some claims that the authors made efforts to explain quite straightforward. For example, robust models are off-manifold robust. We know robust models have PAG, meaning their gradients are aligned on the manifold. This means that the change of loss in off manifold direction is smaller than the on-manifold (because otherwise, the gradients weren’t aligned), making such models off-manifold robust. **Minor weaknesses and requests** * I find this important to report the LPIPS results in Figure 5 to demonstrate that the used metric indeed makes sense and aligns with the human evaluation of PAG. * Missing citations - I think that in order to motivate the significance of studying PAG for the general audience, it is important to include more works that study and utilize PAG. For instance, [2,3]. [1] Do Perceptually Aligned Gradients Imply Robustness? [2] On the benefits of models with perceptually-aligned gradients. [3] BIGRoC: Boosting Image Generation via a Robust Classifier. [4] Towards Understanding the Generative Capability of Adversarially Robust Classifiers. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Is the score function being modeled in the measuring Perceptual Alignment paragraph for clean images or noisy images? From my understanding, diffusion networks model the noisy score function and not the clean one. * What is the rationale for measuring the LPIPS for gradients? It is usually used for images, and input-gradient distribution is significantly different. Why does it make sense to use this rather than cosine similarity or other closed-form metrics? 
* Figure 2 shows that there are points where models are good in off-manifold robustness before being on-manifold robust (for example, the left figure in the top row of Figure 2, around a noise level of 0.5). Can this analysis help us find robust models that have better robustness-accuracy tradeoffs? * Why are the results of Imagenet 64 so low? In general, I find this paper interesting and would like to increase my score given the authors' response. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your great review, and we're glad you found our paper interesting! We address specific comments below. 1.*“The connection between a conditional score function and PAG was made in [1]. Despite not being used for assessing PAG, they show that a classifier trained to mimic the score function possesses PAG. Thus, I do not find linking the two for assessing PAG very novel.”* The reviewer is right that score-functions have been used in the context of PAGs before [1], but to the best of our knowledge, we are the first to use score-functions to provide a quantitative metric to evaluate PAGs. --- 2.*“Overstatements and bit trivial claims - “In this work, we provide a first explanation of PAGs” [abstract], but [4] offers an explanation of the generative capabilities of adversarial training using energy-based models."* Thanks for the reference! While we believe that the provided reference [4] is indeed a valuable contribution, it misses a critical component of the PAGs phenomenon, that gradients of robust models highlight only the discriminative components (see Phenomenon 1, line 38 in our paper) while ignoring the rest. However we agree that it can be a partial explanation for Phenomenon 2, i.e., generative capabilities, and hence we shall re-word our claims accordingly. --- 3.*"Additionally, I find some claims that the authors made efforts to explain quite straightforward. For example, robust models are off-manifold robust. We know robust models have PAG, meaning their gradients are aligned on the manifold. This means that the change of loss in off manifold direction is smaller than the on-manifold (because otherwise, the gradients weren’t aligned), making such models off-manifold robust.”* To the best of our knowledge, it has not been well-established in prior works that PAGs imply gradients lie on-manifold (Hypothesis 2), and our paper takes a step in this direction. 
While theorem 1 is intuitive, it is critical to explaining PAGs as it helps link PAGs to model robustness, which is our main objective. In addition, theorem 1 only applies in very limited settings (under the limit of small Gaussian noise), and the contribution of our work is to show that this also holds for realistic noise levels and for real models. --- 4.*“I find this important to report the LPIPS results in Figure 5 to demonstrate that the used metric indeed makes sense and aligns with the human evaluation of PAG.”* Great point. Please see the pdf file provided as part of the rebuttal. The reviewer can also compare Figure 2 with Supplement Figures 7-11. --- 5.*“Missing citations - I think that in order to motivate the significance of studying PAG for the general audience, it is important to include more works that study and utilize PAG. For instance, [2,3].”* [2] is already cited in the paper, thanks for pointers to [3,4]. We shall discuss applications of PAGs in the paper. --- 6.*"Is the score function being modeled in the measuring Perceptual Alignment paragraph for clean images or noisy images? From my understanding, diffusion networks model the noisy score function and not the clean one.”* Diffusion models provide a family of score functions parametrized by the noise level sigma. In theory, noise training in these models ensures scores are well-defined everywhere. Once trained, the score-function is applicable to all points in the input with an appropriate sigma. We choose the parameter sigma to maximize the perceptual alignment of the score. --- 7.*“What is the rationale for measuring the LPIPS for gradients? It is usually used for images, and input-gradient distribution is significantly different. 
Why does it make sense to use this rather than cosine similarity or other closed-form metrics?”* We choose the LPIPS metric because we find that it adequately quantifies the observed variation in the perceptual similarity between the score and the input gradients of differently robust models. Independently of the chosen metric, this variation is observed in the qualitative depictions in Supplement Figures 7-11. We agree with the reviewer that the LPIPS metric is not necessarily perfect and that other metrics could be considered. Empirically, however, we found that the LPIPS metric worked best. For example, we did initial experiments with the cosine similarity and found the results, while often qualitatively similar, to be less robust and less aligned with our human perception. A potential reason for this might be that the cosine similarity is rather sensitive to the fact that the score aligns with the data manifold, whereas model gradients align with the signal manifold. --- 8.*“Figure 2 shows that there are points where models are good in off-manifold robustness before being on-manifold robust (for example, the left figure in the top row of Figure 2, around a noise level of 0.5). Can this analysis help us find robust models that have better robustness-accuracy tradeoffs?”* Great question! We would argue that an analysis of the on- and off-manifold robustness curves can help us identify models that are relatively accurate and also have perceptually-aligned gradients (or off-manifold robustness). Considering the example pointed out by the reviewer (noise level of 0.5), this corresponds to a model with relatively high accuracy and perceptually-aligned gradients. We can also help find models with better robustness-accuracy tradeoffs in the sense that we identify that it is preferable for models to be only off-manifold robust as opposed to being robust on-manifold. 
Thus by choosing methods that prefer only off-manifold robustness, we may be able to achieve a better accuracy-robustness tradeoff. --- 9.*“Why are the results of Imagenet 64 so low?”* On ImageNet-64x64, a performance drop is expected due to the significant downsampling of the images. We train the ResNet-18 models from scratch using standard training procedures without any significant hyperparameter optimization. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: I thank the authors for their response, which addresses my questions and raised points. As I find the paper very interesting, I am happy to raise my score.
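The on-/off-manifold gradient decomposition discussed throughout this thread reduces to an orthogonal projection once a local tangent basis is available. A minimal sketch (the 10-D input space, 3-D tangent basis, and the alignment score below are hypothetical illustrations; the paper estimates the manifold differently):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 10-D input space whose manifold is locally spanned
# by 3 orthonormal tangent directions (columns of T).
A = rng.normal(size=(10, 3))
T, _ = np.linalg.qr(A)            # orthonormal tangent basis, shape (10, 3)

grad = rng.normal(size=10)        # a model's input gradient at some x

g_on = T @ (T.T @ grad)           # on-manifold component (projection)
g_off = grad - g_on               # off-manifold component

# Sanity checks: the pieces are orthogonal and recompose the gradient.
assert abs(g_on @ g_off) < 1e-9
assert np.allclose(g_on + g_off, grad)

# Fraction of gradient norm lying on the manifold: high values correspond
# to on-manifold (perceptually aligned) gradients in the paper's sense.
alignment = np.linalg.norm(g_on) ** 2 / np.linalg.norm(grad) ** 2
assert 0.0 <= alignment <= 1.0
```

An off-manifold-robust model is one whose outputs change little along directions like `g_off`, which by the paper's Theorem 1 forces `grad` to concentrate in the span of `T`.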
Summary: The paper seeks to provide a condition that leads models to have gradients that are aligned with human perception, i.e. which highlight relevant features of the image while ignoring distractor features. The paper first proposes that for any manifold, if a model is robust to perturbations which are off-manifold, then its gradients will be aligned with the manifold. The paper then defines a "signal" manifold as a hypothetical manifold of features which are relevant to the classification task, defined as the multiplication of the data with a masking function. The paper then states and proves that a Bayes optimal classifier is robust to perturbations perpendicular to this manifold, and hence its gradients lie on the signal manifold. The paper further hypothesizes that gradients on the signal manifold are perceptually aligned. An argument is then provided that models robust to adversarial perturbations which have similar off-manifold robustness as the Bayes optimal classifier have perceptually aligned gradients. Finally, the paper presents empirical evidence in support of their theoretical claims and hypotheses. Strengths: 1. The paper presents an interesting hypothesis and designs a robust experimental methodology to validate it within the set of assumptions. 2. The notion of weakly robust, optimally robust and overly robust models and their connection to PAGs is novel and interesting. Comparing such classifiers with score functions is interesting in its own right, and can be a good tool in analysing models empirically. Weaknesses: 1. It is not clear from the experiments what the signal manifold is. Line 278 talks about perturbing about 10\% of the input, but it would be good to clarify this further. 2. Hypothesis 1 is never validated empirically in the paper, but the experiments build upon this hypothesis by comparing the perceptual similarity with the gradients of the score function. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. 
How are on- and off-manifold perturbations generated exactly? 2. Is there any way to provide more empirical evidence for Hypothesis 1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors address their limitations in sections 3 and 4, where they describe how much of an approximation their empirical setup and theorems are. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
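The binary-mask signal-distractor decomposition discussed in this review can be illustrated with a minimal sketch. The mask `m`, signal, and distractor below are purely hypothetical stand-ins for the paper's Definition 3, where signal and distractor occupy disjoint, axis-aligned pixel sets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "image": a binary mask m selects signal pixels,
# its complement carries the (statistically independent) distractor.
m = (rng.random((8, 8)) < 0.1).astype(float)   # ~10% signal pixels
signal = rng.normal(size=(8, 8)) * m
distractor = rng.normal(size=(8, 8)) * (1.0 - m)

x = signal + distractor  # the observed input

# Because signal and distractor occupy disjoint coordinates,
# the decomposition is exact and orthogonal.
assert np.allclose(x * m, signal)
assert np.allclose(x * (1.0 - m), distractor)
assert np.isclose(float(np.sum(signal * distractor)), 0.0)
```

This axis-aligned construction is exactly what the reviewer's weakness point targets: a linear transformation of `x` mixes the two sets of coordinates, so the masked decomposition is no longer exact.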
Rebuttal 1: Rebuttal: Thank you for your constructive review! We address specific concerns below. 1.*“It is not clear from the experiments what the signal manifold is. Line 278 talks about perturbing about 10% of the input, but it would be good to clarify this further.”* The concept of the signal manifold arises from the observation that the input gradients of discriminative models have a tendency to highlight parts of the image that are discriminative. This should be seen in contrast to the score, which models the entire data distribution (the data manifold), including class-irrelevant background information. The signal manifold is not explicitly known. The perturbations applied in this paper are with respect to the data manifold. In Line 278 we should have been clearer. When we measure how much the model output changes in response to a change in the input, we fix the $l_2$-norm of the perturbation that is applied to the input (just as we would do with adversarial perturbations). In our experiments, the $l_2$-norm of the perturbation is about 10% of the $l_2$-norm of the input image. We will clarify this in the paper. --- 2.*“Hypothesis 1 is never validated empirically in the paper, but the experiments build upon this hypothesis by comparing the perceptual similarity with the gradients of the score function.” “Is there any way to provide more empirical evidence for Hypothesis 1?”* Our results in Figure 2 provide partial evidence for Hypothesis 1. Here we observe that the off-manifold robustness of models in the top row increases with increasing robustness, and simultaneously we observe a corresponding increase in the LPIPS metric. This shows that perceptual alignment and lying on the data manifold go hand-in-hand. We also provide results specific to the signal manifold (which lies within the data manifold) in Figure 4. --- 3.*“How are on- and off-manifold perturbations generated exactly?”* We estimate the tangent space of the data manifold using an auto-encoder. 
Then we draw a random normal vector and project it onto the tangent space. The part of the vector that lies in the tangent space is the on-manifold perturbation. The part of the vector that is orthogonal to the tangent space is the off-manifold perturbation. Line 257 in the main paper goes into more detail about this. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their rebuttal and maintain my score recommending acceptance.
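The projection procedure described in this rebuttal can be sketched in a few lines. Here the orthonormal tangent basis `T` is a random stand-in (in the paper it would be estimated from a trained autoencoder), and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 3                       # ambient dimension, manifold dimension

# Stand-in for the tangent space at a data point: k orthonormal basis
# vectors in R^d (the paper estimates this basis with an autoencoder).
T, _ = np.linalg.qr(rng.normal(size=(d, k)))

# Draw a random normal vector and project it onto the tangent space.
v = rng.normal(size=d)
v_on = T @ (T.T @ v)               # on-manifold component
v_off = v - v_on                   # off-manifold (orthogonal) component

# Every perturbation decomposes exactly into these two orthogonal parts.
assert np.allclose(v_on + v_off, v)
assert np.isclose(float(v_on @ v_off), 0.0)
```

The final two assertions also make concrete the point raised in a later rebuttal in this thread: any perturbation is a linear combination of one on-manifold and one off-manifold perturbation, since the tangent space is a linear subspace.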
Summary: This paper explores the connection between manifold alignment, perceptually-aligned gradients, and model robustness, deriving a number of novel results that explain previously observed phenomena in the explainability and robustness literatures. Crucially the paper grounds its contributions in Bayes optimal classifiers, arguing that the gradients of such classifiers are aligned with a “signal manifold”. The paper leverages denoising autoencoders to estimate the data manifold and diffusion models to estimate the gradients of a Bayes optimal classifier. Using these quantities, the paper demonstrates a number of novel results, including a strong relationship between manifold alignment and model accuracy. Strengths: This paper presents a timely and likely impactful theoretical contribution with implications not only for robust learning but also for explainability and generalization. The paper is very well organized and well-written. The introduction provides an excellent summary of the literature and current open questions. The paper disambiguates between three closely-related notions of “perceptually-aligned gradients” and provides a powerful taxonomy through the definition of Phenomena 1-3. Drawing on prior work, the paper introduces quantitative metrics to evaluate robustness and gradient alignment with respect to a data manifold. The theoretical contributions are somewhat geometrically simplistic, but powerful. Theorem 1 has been assumed intuitively in prior work, but this paper introduces the formalisms to quantify the connection between off-manifold robustness and on-manifold gradient alignment. Argument 1-2 outline very compelling circumstantial evidence for the broader Hypotheses and will hopefully lead to future theoretical developments. The paper provides extensive experimental evidence, including useful error bars. Figure 2 provides gorgeous plots and the correlation between manifold alignment and model accuracy is striking. 
Finally, the paper makes a number of practical advances related to effective adversarial training. Figure 5 intuitively summarizes the paper’s crucial claims about three regimes of robust training. Weaknesses: * It does not seem right that the mask defining distractor vs signal features is constrained to be binary (Definition 3). This means the signal and distractor distributions are not invariant to linear transformations of $x$. Shouldn’t the only requirement be the statistical independence on line 163? * All experiments evaluating perceptual alignment used the LPIPS metric. Wouldn’t another metric, such as cosine distance, be appropriate? This would allow for distractors that aren’t defined by a binary mask. * Unfortunately (probably due to computational cost) different results are presented for CIFAR-10 and ImageNet, reducing the global consistency of the paper. * The experiment shown in Figure 4 is not well explained; would it be possible to explain the setup in a little more detail using previously-established notation? Small issues: * The definition of Bayes optimal classifiers assumes an equal class distribution (i.e. uniform p(y)). This most likely will not be the case in most applications. Note that this does not invalidate any of the paper’s findings. * Typo on line 200: “Bayes optimal models gradients” * Missing article on line 220: “there exists a trade-off between [the] cross-entropy loss and on-manifold robustness term” * Missing pronoun at line 351: “meaningful in that [they] lie tangent” * At line 354, “gradients” is repeated, and do you mean the double negative “cannot be uninterpretable”? * Figures 4 and 5 are out of order. * The numbering of Theorems in the Supplemental material is off (the proofs continue as Theorem 3, 4 rather than 1, 2). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Why is Theorem 2 stated with an if and only if? Don’t you prove that both statements are true for Bayes optimal classifiers? 2. 
Would it be possible to add an adversarially-trained model for CIFAR-10? 3. On line 228 you say “We elaborate on this argument in the Supplementary material.” Where? 4. On line 255, does “$l_2$-adversarial robust training” mean PGD applied during training? 5. In the legend for Figure 2, is “perturbation” really the right term? Isn’t the plot essentially showing the inverse of robustness? Why not use the relative robustness metric defined in Definition 1? Why is an $l_1$ divergence used here rather than the $l_2$ divergence used in Definition 1? Can $\rho_1$ here be connected to Brier score to provide a proper score function? 6. Are error bars standard error? 7. Do you have any comment on the point where the lines cross in Figure 2 and why there is a different trade off depending on the training objective? 8. At line 274, how were these specific $\sigma$ values chosen for each dataset to parameterize the diffusion model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The acknowledged limitations throughout the paper and the supplemental material are superb (e.g. the statement “We provide two lines of argument in support for this hypothesis [Hypothesis 2]. Ultimately, however, our evidence is empirical”). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review! We're glad that you liked our paper overall, including the acknowledgement of our limitations. We respond to individual questions below. 1.*“It does not seem right that the mask defining distractor vs signal features is constrained to be binary (Definition 3)”* We agree that Definition 3 can be extended such that the signal and distractor are any orthogonal complements of each other, and need not be axis aligned to the inputs. The reason we define these in this manner is to explain the observations of Shah et al. (“Do input gradients highlight discriminative features?”, NeurIPS 2021) who observe that input gradients of robust models highlight discriminative features, which they define as input pixels that can be used to predict the label. Having said that, we agree that generalizing definition 3 in this manner makes the theory potentially more interesting and general. We will add a note about this in the paper. --- 2.*“All experiments evaluating perceptual alignment used the LPIPS metric. Wouldn’t another metric, such as cosine distance, be appropriate. This would allow for distractors that aren’t defined by a binary mask.”* See our response 7 to reviewer aEc6 for the rationale behind the LPIPS metric. We are not sure that we understand the point of how the LPIPS metric does not allow for distractors that aren’t defined by a binary mask whereas the cosine distance does. We would be happy to clarify if the reviewer could elaborate on this. --- 3.*“Unfortunately (probably due to computational cost) different results are presented for CIFAR-10 and ImageNet, reducing the global consistency of the paper.” / “Would it be possible to add an adversarially-trained model for CIFAR-10?”* We partly address this by providing results for an adversarially-trained model on CIFAR-10 in the additional pdf file. 
--- 4.*“The experiment shown in Figure 4 is not well explained, would it be possible to explain the setup in a little more detail using previously-established notation?”* Certainly! We apologize for the lack of detail, and we will rewrite this setup accordingly. Essentially, we create a variant of the MNIST dataset where the signal and distractor are known by design. We then proceed to train robust models and find that they are indeed more robust to the distractor as opposed to the signal. --- 5.*“Why is Theorem 2 stated with an if and only if?”* The “if and only if” part of Theorem 2 is indeed redundant, as we do prove that off-manifold robustness and on-manifold alignment are identical in Theorem 1. We only write this again for clarity. --- 6.*“On line 228 you say “We elaborate on this argument in the Supplementary material.” Where?”* We apologize for omitting this discussion in the supplementary due to oversight. The elaboration essentially consists of a slightly more rigorous treatment of the argument presented in the main paper – we can decompose any model f(x) into on-manifold and off-manifold components globally by projecting onto the tangent space at each point. Thus, in the decomposition in Argument 1, the on-manifold objective applies purely to the on-manifold model, and similarly for the off-manifold parts. Now, if the behavior of the model is purely independent on- and off-manifold, then the stationary points of the separate objectives applied to the two independent models imply off-manifold robustness of the final model. --- 7.*“On line 255, does “adversarial robust training” mean PGD applied during training?”* Yes. --- 8.*“In the legend for Figure 2, is “perturbation” really the right term? Isn’t the plot essentially showing the inverse of robustness? Why not use the relative robustness metric defined in Definition 1? Why is an divergence used here rather than the divergence used in Definiton 1?"* Great point! 
The additional pdf file shows that our results are robust to the choice of divergence. We will modify Figure 2 to use the same divergence as in Definition 1. We will also exchange the term “perturbation” with “sensitivity”. Regarding plotting relative robustness vs plotting on- and off-manifold robustness separately, we plot separately to maximize clarity with respect to the underlying phenomenon. We believe relative robustness would simply scale both these curves by the overall robustness value. Nonetheless, we will add a note about this in the paper, and add these plots to our supplement. --- 9.*“Are error bars standard error?”* The error bars in Figure 2 depict the minimum- and maximum observed values when training 10 models with different random seeds (including different regularization terms for the randomized penalties). This will be clarified in the description of the figure. The reason these are min/max instead of standard deviations is that standard deviations across 10 observations might not be very meaningful, and re-training these models is computationally expensive. --- 10.*“Do you have any comment on the point where the lines cross in Figure 2 and why there is a different trade off depending on the training objective?”* Great question! At this point, we don’t have any specific insights except for the general observation that while the overall behavior of different regularization objectives is comparable, these are still different training objectives that do lead to different optima. To give an example, while gradient norm regularization and randomized smoothing both lead to perceptually aligned gradients, a close qualitative comparison of the respective input gradients (Supplementary Figures 7 and 8) reveals that the input gradients resulting from randomized smoothing have a tendency to be perceptually more ‘smooth’. 
--- 11.*“At line 274, how were these specific values chosen for each dataset to parameterize the diffusion model?”* These values were chosen to maximize the perceptual quality of the estimate of the score obtained from the diffusion model. --- Rebuttal Comment 1.1: Title: Additional clarification Comment: Thank you to the authors for a thorough and insightful rebuttal. I have a couple remaining questions about the experimental setup based on diffusion models. 1. You say that you “choose the parameter sigma to maximize the perceptual alignment of the score”: does this mean that this parameter was based on a human visual analysis or a metrics such as LPIPS? Do you have any anecdotal evidence pertaining to the importance of the $\sigma$ parameter, e.g. confirming the need for better diffusion models to estimate the score function [R1]? 2. Additionally in your response to Reviewer eaFT, you state that “The diffusion model is unconditional.” On line 271, you are aiming to estimate $\nabla_x \log p(x | y)$: this requires a class-conditional model [R2, eq 7]. What am I missing? 3. I believe that this concern in (2) directly connects to my question about metrics (#2 in your rebuttal). My understanding was that your experiments with LPIPS were using the gradients of a Bayes optimal classifier provided by a conditional diffusion model. However in your response to Reviewer aEc6, you say that “A potential reason for this might be that the cosine similarity is rather sensitive to the fact that the score aligns with the data manifold, whereas model gradients align with the signal manifold.” I expected cosine similarity between the gradients of a Bayes optimal classifier and the gradients of a discriminative model to estimate precisely the _projection of the discriminative model’s gradient on the signal manifold_. Is this interpretation inconsistent with your experimental setup? 4. Finally, can you confirm that this comment will be addressed in the revisions? 
> The definition of Bayes optimal classifiers assumes an equal class distribution (i.e. uniform p(y)). This most likely will not be the case in most applications. [R1] Wang, Zekai, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, and Y. A. N. Shuicheng. "Better Diffusion Models Further Improve Adversarial Training." ICML (2023). [R2] Ganz, Roy, Bahjat Kawar, and Michael Elad. "Do Perceptually Aligned Gradients Imply Robustness?." ICML (2023). --- Reply to Comment 1.1.1: Comment: Thank you for the response! *"You say that you “choose the parameter sigma to maximize the perceptual alignment of the score”: does this mean that this parameter was based on a human visual analysis or a metrics such as LPIPS? Do you have any anecdotal evidence pertaining to the importance of the $\sigma$ parameter, e.g. confirming the need for better diffusion models to estimate the score function [R1]?"* We chose the sigma parameter based on human visual analysis before running the experiments with the LPIPS metric. The ideal quantity of interest to us are the score gradients at $\sigma \to 0$, at which point they approximate the true noise-less score gradients; however, in practice we found these to be noisy and uninformative, thus indicating that diffusion models in practice do not recover the correct noise-less score gradients. This aligns with the observations of [R2], who also find that too small of a $\sigma$ leads to noisy and uninformative gradients (in section 4.2, and Figure 8 of [R2]). --- *"Additionally in your response to Reviewer eaFT, you state that “The diffusion model is unconditional.” On line 271, you are aiming to estimate $\nabla_x \log p(x | y)$: this requires a class-conditional model [R2, eq 7]. What am I missing?"* We apologize for the confusion. Based on our theory, there are actually two principled ways to evaluate diffusion models and discriminative classifiers. 
The first is to take the input gradient with respect to a single class and compare it to a conditional diffusion model $p(x | y)$. The second is to sum the input gradients of all the classes and compare it to an unconditional diffusion model $p(x)$ (i.e., computing marginal probabilities). The results in the paper present the first approach on ImageNet and ImageNet-64x64 and the second approach on CIFAR-10 (we clearly state this in Supplement Sections C.3 and C.4). As we indicated in our response to Reviewer eaFT, Figure 2 with a conditional diffusion model on CIFAR-10 is nearly identical (among others, because the scores of the conditional and unconditional model on natural images are very similar, which is perhaps another drawback of the current diffusion models, also observed in [R2]). The revised version will clarify these points in the main paper and also include the results with a conditional diffusion model on CIFAR-10. --- *"I believe that this concern in (2) directly connects to my question about metrics. My understanding was that your experiments with LPIPS were using the gradients of a Bayes optimal classifier provided by a conditional diffusion model. However in your response to Reviewer aEc6, you say that “A potential reason for this might be that the cosine similarity is rather sensitive to the fact that the score aligns with the data manifold, whereas model gradients align with the signal manifold.” I expected cosine similarity between the gradients of a Bayes optimal classifier and the gradients of a discriminative model to estimate precisely the projection of the discriminative model’s gradient on the signal manifold. Is this interpretation inconsistent with your experimental setup?"* We agree that the ideal quantity to use in the experiments would be the input gradients of a Bayes optimal classifier. 
In practice, however, we found it non-trivial to obtain these gradients from a conditional diffusion model (see Li et al., “Your Diffusion Model is Secretly a Zero-Shot Classifier”, 2023). This is why we decided to proxy the input gradients of a Bayes optimal classifier with the gradients from the score model $p(x \mid y)$ (see line 266 in the paper). The gradients of the score model lie, by construction, on the data manifold and not the signal manifold. We also agree with your intuition that the cosine similarity between the gradients of a Bayes optimal classifier and the discriminative model’s gradients can be a proxy for the projection on the signal manifold. Note, however, that the signal manifold will in general be multi-dimensional, which means that in order to project arbitrary vectors on the signal manifold we would need to perform this projection with respect to all the basis vectors of the local tangent space. --- *"Finally, can you confirm that this comment will be addressed in the revisions?"* Yes, we thank you for all the minor corrections suggested! We will make these revisions in an updated version of this draft.
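The "sum the input gradients of all the classes" construction discussed in this exchange follows from marginalizing over the label; note that the sum is posterior-weighted rather than uniform:

```latex
\nabla_x \log p(x)
  = \frac{\nabla_x p(x)}{p(x)}
  = \frac{\sum_y p(y)\, \nabla_x p(x \mid y)}{p(x)}
  = \sum_y p(y \mid x)\, \nabla_x \log p(x \mid y),
```

so the unconditional score targeted by an unconditional diffusion model is the posterior-weighted combination of the class-conditional score gradients being compared against.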
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to review our paper and providing constructive feedback. We are encouraged by their positive assessment of the paper, and we are committed to incorporating reviewer feedback to further strengthen the paper. Based on the reviewers’ comments, we have conducted additional experiments and provide the following plots in a one-page PDF: 1. Results with adversarial training on CIFAR-10 (in response to a question by reviewer 58Dj). We provide plots in the format of Figure 2 in the main paper and visualizations of input gradients as in the Supplement. We trained with projected gradient descent (PGD) and a $l_2$-budget of varying size $\epsilon$, taking 10 steps of size $\alpha = 2 * \epsilon / 10$. We used our default learning rate schedule on CIFAR-10 and proportionally decreased the initial learning rate for very large perturbation budgets (compare Supplement B.2). 2. Robustness of our evaluation with respect to different divergence measures (in response to a question by reviewer 58Dj). 3. Adding the LPIPS metric to Figure 5 (in response to reviewer aEc6). Pdf: /pdf/bc9ed6ac08dc00cfaee1718c07645a83358caba9.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper titled "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness" investigates the phenomenon of perceptually-aligned gradients (PAGs) in robust computer vision models. PAGs refer to the alignment of model gradients with human perception, enabling these models to exhibit generative capabilities such as image generation, denoising, and in-painting. The paper aims to explain the underlying mechanisms behind PAGs and provides insights into the relationship between off-manifold robustness, gradient alignment, and model accuracy. The paper's contributions can be summarized as follows: 1. Theoretical Explanation: The paper presents a theoretical explanation of PAGs through the concept of off-manifold robustness. It demonstrates that the alignment of gradients with the data manifold is equivalent to off-manifold robustness. It also introduces the distinction between the data manifold and the signal manifold, which helps in understanding the input gradients of robust discriminative models. 2. Connection to Bayes Optimal Models: The paper establishes a connection between Bayes optimal models and off-manifold robustness. It shows that Bayes optimal models achieve both off-manifold robustness and on-manifold gradient alignment. The input gradients of Bayes optimal classifiers lie on the signal manifold, indicating their perceptual alignment. 3. Empirical Analysis: Extensive empirical analysis is conducted to validate the theoretical findings. Robust models trained with various techniques are evaluated on different datasets. The experiments confirm the relative off-manifold robustness of robust models, the correlation between off-manifold robustness and perceptual alignment, and the presence of signal-distractor decomposition in robust models. The paper's main contribution lies in providing a theoretical framework and empirical evidence for understanding the mechanisms behind PAGs in robust computer vision models. 
It sheds light on the importance of off-manifold robustness and its connection to gradient alignment and model accuracy. The findings have implications for developing more explainable and generative models in computer vision tasks. By identifying different regimes of robustness, the paper also calls for rethinking standard robustness objectives and benchmarks. Overall, the paper advances our understanding of the properties and behavior of robust models, paving the way for further research in improving model interpretability and generative capabilities in the field of computer vision. Strengths: The strength of the paper lies in its combination of theoretical analysis and empirical evaluation. It provides a comprehensive examination of the phenomenon of perceptually-aligned gradients (PAGs) in robust computer vision models. By presenting a theoretical framework based on off-manifold robustness and its connection to gradient alignment, the paper offers a solid foundation for understanding the underlying mechanisms behind PAGs. Furthermore, the empirical analysis conducted in the paper strengthens its findings. The authors perform extensive experiments using different datasets and robust training techniques, validating their theoretical explanations and hypotheses. The empirical results consistently support the theoretical claims, providing evidence for the presence of off-manifold robustness, gradient alignment, and signal-distractor decomposition in robust models. The paper's contribution also extends to the exploration of various aspects related to PAGs, such as the connection to Bayes optimal models, the trade-off between on- and off-manifold robustness, and the correlation between off-manifold robustness and perceptual alignment. These insights broaden our understanding of the behavior and properties of robust models in computer vision tasks. 
Overall, the strength of the paper lies in its rigorous analysis, clear explanations, and compelling empirical evidence, making a significant contribution to the field of computer vision and model interpretability. Weaknesses: One potential weakness of the paper is that the evaluation of on- and off-manifold perturbations does not cover the entire space of perturbations. There is a significant portion of perturbations that fall in-between the two categories, which are not thoroughly explored in the current research. It would be valuable for future work to address these intermediate cases and provide a more comprehensive understanding of the model's behavior in those scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Clarification on the choice of perturbations: Can the authors explain the rationale behind the specific choice of on- and off-manifold perturbations? How representative are these perturbations of real-world scenarios? Are there other types of perturbations that could be considered in future research? Extension to non-linear models: The paper primarily focuses on linear models and their relationship to off-manifold robustness. Can the authors discuss how their findings and hypotheses can be extended to non-linear models? Are there any unique considerations or challenges in analyzing non-linear models in this context? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - Limited exploration of perturbation space: The choice of on- and off-manifold perturbations in the experiments might not cover the entire space of possible perturbations. There could be a wide range of perturbations that lie between the two extremes. 
Exploring a more comprehensive set of perturbations and their effects on on- and off-manifold robustness could enhance the depth of the analysis. - Evaluation metrics: The paper primarily focuses on measuring robustness based on the change in model output. While this is a common approach, it may not capture all aspects of robustness. Considering additional evaluation metrics, such as adversarial examples or other robustness measures, could provide a more comprehensive understanding of model behavior. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive and encouraging review! We are glad that you found our analyses rigorous and our explanations clear. We answer specific questions below. 1.*“One potential weakness of the paper is that the evaluation of on- and off-manifold perturbations does not cover the entire space of perturbations. There is a significant portion of perturbations that fall in-between the two categories, which are not thoroughly explored in the current research.”* *“Clarification on the choice of perturbations: Can the authors explain the rationale behind the specific choice of on- and off-manifold perturbations?”* *“Limited exploration of perturbation space: The choice of on- and off-manifold perturbations in the experiments might not cover the entire space of possible perturbations.”* Please note that by definition, every perturbation can be written as a linear combination of one on- and one off-manifold perturbation (since the tangent space is simply a linear subspace). This is also crucial to our experiments -- we create random Gaussian perturbations and then proceed to project them onto on- and off-manifold parts for analysis in Figure 2. Therefore, the results in our paper already incorporate the intermediate cases mentioned. In case we have misunderstood your questions, please let us know, we are happy to clarify. --- 2.*“Extension to non-linear models: The paper primarily focuses on linear models and their relationship to off-manifold robustness. Can the authors discuss how their findings and hypotheses can be extended to non-linear models? Are there any unique considerations or challenges in analyzing non-linear models in this context?”* We believe this is a misunderstanding – both the theory and the experiments in our paper are for standard deep neural networks, that is, highly non-linear models. Except for argument 2, which discusses linear models, the rest of the paper is about non-linear neural network models. 
--- 3.*“Evaluation metrics: The paper primarily focuses on measuring robustness based on the change in model output. While this is a common approach, it may not capture all aspects of robustness. Considering additional evaluation metrics, such as adversarial examples or other robustness measures, could provide a more comprehensive understanding of model behavior.”* We are interested in robustness insofar as it is related to the phenomenon of perceptually-aligned gradients. We provide additional results on adversarial noise in the one-page PDF; please see the global comment for details. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I have read the response, and maintain my original assessment of the paper.
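The tangent-space decomposition invoked in point 1 of the rebuttal above (any perturbation splits exactly into one on-manifold and one off-manifold part, because the tangent space is a linear subspace) can be sketched numerically. This is an illustrative toy, not the paper's code; the orthonormal basis B is a hypothetical stand-in for a data-manifold tangent space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 10-dimensional ambient space with a 3-dimensional
# "tangent" subspace spanned by the orthonormal columns of B.
d, k = 10, 3
B, _ = np.linalg.qr(rng.standard_normal((d, k)))

# A random Gaussian perturbation, as in the Figure 2 protocol described above.
delta = rng.standard_normal(d)

# On-manifold part: orthogonal projection onto span(B).
on_manifold = B @ (B.T @ delta)
# Off-manifold part: the residual, which lies in the orthogonal complement.
off_manifold = delta - on_manifold

# Every perturbation decomposes exactly into the two parts,
# and the two parts are mutually orthogonal.
assert np.allclose(on_manifold + off_manifold, delta)
assert abs(float(on_manifold @ off_manifold)) < 1e-10
```

Intermediate perturbations are then just the linear combinations a·on_manifold + b·off_manifold, which is why the rebuttal argues the two extremes already span all cases.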
null
null
null
null
null
null
Connecting Pre-trained Language Model and Downstream Task via Properties of Representation
Accept (poster)
Summary: The paper mainly addresses two conditions that enable the representation of pre-trained LLMs to be transferred effectively to downstream tasks, which are usually different from the pre-training objectives. 1. The insensitivity of the downstream task to "super-small" probability words must be guaranteed for good downstream performance. Thus, a function like ReLU is needed as an additional structure of the student model to ignore “small” logits which are considered to have no influence on the downstream task. 2. Using a shift-invariant softmax function can render the logit values meaningless on unseen data. Here, the need for structure in the representation space of the pre-trained model arises and leads to stabilization of the partition function with the “Anchor vector” hypothesis. The anchor vector hypothesis has been backed up with empirical verification. The authors demonstrate that as more words with top probabilities are excluded to compute the optimal anchor vector, the error of the approximated bulk partition function decreases. This finding supports that the anchor vector actually exists and can be utilized to address the shift-invariance of the softmax function. Furthermore, with the anchor vector, an upper bound on the downstream task’s error rate can be derived. Strengths: - The paper is well written with a clear flow. Starting from the problem setting, the properties of the learned representation that affect the performance of the downstream task are well stated and addressed with the anchor vector hypothesis. - The empirical verifications provide substantial evidence in support of the anchor vector hypothesis, solidifying its credibility and reinforcing its validity. Weaknesses: (1) The empirical verification of the “anchor vector” hypothesis lacks clear explanations and detailed information regarding the experimental setup, leading to confusion and a sense of insufficiency. 
One particular point that requires clarification is the choice of using auto-regressive models, specifically GPT-2 and OPT, for the verification. As mentioned earlier in Line 138, the pre-training task of the language model is described as predicting a “label” given a sequence without “label”, which aligns with the mask infilling task associated with masked language models. However, in section 4.2, the verification process utilizes the auto-regressive models, GPT-2 and OPT, which have a pre-training task of predicting the next token of the given sequence. Based on the provided context, it appears that a single token which can be an intermediate token of the sequence is removed from the sequence (-i) rather than considering the last token of the sequence. This discrepancy raises questions about the alignment between the pretraining task and the verification task depicted in the provided context in section 4.2. Addressing these issues and providing a clear explanation of the rationale behind using auto-regressive models instead of masked language models for the verification process would enhance the overall understanding of the presented claim. (2) In Figure 1, the labels for the x and y axes should be provided. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1) Based on Figure 1 (a), it is evident that OPT-350M and GPT-2 Medium show minimal decrease in mean squared approximation error, while other three models exhibit a significant decrease in mean squared approximation error as k increases. What could be the underlying reasons for the different tendency? It cannot be attributed solely to the difference in model scale (number of parameters), since OPT-125M displays a considerable decrease in error. Without further analysis, the findings from two of five models (OPT-350M and GPT-2 Medium) present weak evidence to support the “anchor vector” hypothesis. 
This raises some concerns regarding the generalizability of the proposed hypothesis to pretrained LLMs as a whole. (2) Have you considered including a broader range of activation functions like Swish [1] and GELU [2]? Does the proposed hypothesis still hold for these activation functions, other than ReLU, that do not completely ignore (treat as 0) small logit values? [1] Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941. [2] Hendrycks, D., & Gimpel, K. (2016). Gaussian Error Linear Units (GELUs). arXiv preprint arXiv:1606.08415. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are listed in section 6: Conclusions and future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: Verification of anchor vector using autoregressive models. A: In our definition we allow v*_i to depend on the entire x_{-i} to capture both autoregressive models and masked language models for generality. An autoregressive model can still be a special case of the theory as their v*_i can just depend on the prefix. We chose OPT and GPT-2 mostly because those are the largest models we have access to. Q: x/y axis of Figure 1 A: The x axis is the number of highest frequency words that are removed (k in the text), and the y axis is the mean-squared error for the prediction. We will add these labels in the final version. Q: Figure 1 OPT-350M and GPT-2 Medium. A: We honestly don’t know why those models behave that way and we agree with this weakness. Q: Considering other nonlinearities. A: That is a very interesting direction. We suspect it is not difficult to extend our result to swish and GeLU as they are still very flat in the large negative range and that is mostly what we need. However there will certainly be more difficulty in the proof due to the nonlinearity of these functions. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response. The paper presents a compelling theoretical perspective on representation within PLMs, particularly focusing on its mechanism in downstream tasks. While the exploration of the link between LLMs, especially ICL, and the anchor hypothesis is intriguing, it appears that further substantiation is required before its generalization can be confidently supported. To better back up this hypothesis, I suggest conducting additional empirical experiments or in-depth analysis. My assessment remains aligned with the initial score assigned.
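The shift-invariance of softmax, which motivates the anchor-vector hypothesis discussed throughout this review, is easy to verify directly: adding any constant to all logits leaves the output distribution unchanged, so absolute logit values carry no meaning on their own. A minimal numerical check (illustrative only, not the paper's code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # max-subtraction for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -3.0])

# Shifting every logit by the same constant leaves the distribution
# unchanged: softmax(z + c) == softmax(z) for any scalar c.
assert np.allclose(softmax(logits), softmax(logits + 5.0))
assert np.allclose(softmax(logits), softmax(logits - 100.0))
```

Because the predicted distribution cannot identify the overall shift, some extra structure in the representation space (such as the proposed anchor vector) is needed to pin logit values down on unseen data.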
Summary: This paper investigates the relationship between language model pretraining and downstream classification tasks. Under certain assumptions, the authors theoretically demonstrated that pre-trained models can guarantee performance on downstream tasks with the existence of proposed “anchor vectors”. Strengths: 1. The research question of “why pre-trained models can help with downstream tasks” is quite important. 2. The paper is generally quite clearly written; the question definition is clear and the mathematical settings are reasonable. 3. For me, the method to connect minimizing the KL divergence during pre-training and the performance of downstream tasks is interesting and valuable. Weaknesses: 1. I think there is a difference between “pretrained embedding” (e.g., word2vec) and “large language models” (e.g., BERT/GPT), and a large language model is not equal to “the last layer embeddings”. In this paper, the authors keep the model fixed and take the extracted representations for analysis; however, fine-tuning the whole model is actually a more common approach for models such as BERT/GPT on downstream classification tasks. 2. Some of the claims and assumptions may be incorrect. For example, “We can also manually select all the words that are irrelevant to the downstream task.” in Line 255 and “Assume there are at most $k$ vectors are relevant to the downstream task” in assumption 4. These only hold true for certain tasks like SST-2, which is not generalizable. 3. There exist more works that need to be discussed and compared, e.g., [1] Visualizing and Understanding the Effectiveness of BERT (EMNLP2019) [2] Revealing the Dark Secrets of BERT (EMNLP2019) [3] On Mutual Information Maximization for Representation Learning (ICLR 2020) 4. I see some findings from Appendix F (Existence of anchor vector is not trivial) in the supplementary material; however, most of the conclusions were already found in previous research. 
e.g., [4] How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings (EMNLP 2019) [5] Representation Degeneration Problem in Training Natural Language Generation Models (ICLR 2019) [6] Isotropy in the Contextual Embedding Space: Clusters and Manifolds (ICLR 2021) Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The downstream tasks discussed in this paper are somewhat limited, e.g. the sentiment classification. Some of the assumptions are also based on the importance of words that are related to the task itself, which makes the overall framework not general enough. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Large language model is not just last layer embeddings. A: Indeed, this paper only considers the setting where one takes the last layer representation of a language model and uses that in downstream tasks without fine tuning the whole model. While this gives reasonable performance for many tasks, it is indeed often weaker than fine tuning the whole model (although it’s significantly cheaper). We will highlight this limitation in the final version. Q2: Some assumptions may only hold for certain tasks and not generalize. A: In general, we agree that we make many assumptions and they may not always hold in practice. The goal of our paper is to give some theoretical understanding on how representations can be helpful for downstream applications, and unfortunately this is extremely difficult without assumptions. We tried to give explanations/motivations for every assumption we make, but we agree that they certainly don’t cover all the settings. We will add more discussions on the limitations of these assumptions. We also thank the reviewer for the pointers to more references in weaknesses 3 and 4, we will add discussions to these very relevant works. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and the comments by other reviewers. As the authors have replied, the limitations and relationship between existing work should be added. At present, I maintain my rating unchanged.
Summary: This paper presents a sequence of assumptions along with their respective conclusions, which advance the objective of comprehending the intricate connection between the performance of pre-training and downstream tasks in language models. By framing the prevailing language models as log-linear models, this paper initially presents a mathematical demonstration showcasing instances where pre-trained models encounter limitations in effectively transferring their knowledge to address downstream tasks. Additionally, the paper enumerates several imperative prerequisites for successful knowledge transfer, such as the requirement for shift invariance. Furthermore, the authors introduce the "anchor vector hypothesis," which serves as a crucial framework for elucidating the remarkable adaptability of language models to various tasks. Strengths: - Attempted to rigorously prove the effectiveness of pre-training language models with a mathematical framework. - The claims posited in the paper appear to be reasonable, albeit necessitating a potential round of further verification, provided that readers accept the underlying assumptions upon which the proofs are based. - The storyline of the paper is intuitive and persuasive. Weaknesses: - Given the non-negligible series of assumptions outlined in the paper, a legitimate concern arises regarding the practical implications associated with the claims put forth. For instance, the authors commence the proof procedures by assuming that "a downstream task depends on only a small set of words related to this task," which may not hold true for tasks that involve complex rounds of reasoning. - In addition to Section 4.2, it would be advantageous for the paper to include additional empirical experiments that substantiate the claims presented, thereby strengthening the overall argument. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: If we examine the standard (masked) language modeling during the pre-training phase, as far as my understanding goes, $p^*(x_i|x_{-i})$ should correspond to an instance of the Dirac delta distribution. If this understanding is correct, I'm wondering whether the subsequent discussions in the paper, such as the one in Section 3.1 where scenarios are considered in which $p^*$ is relatively small but possibly not zero, are still valid. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: While the mathematical framework proposed by the authors is compelling, there is a valid concern regarding the explanatory power of the paper's contents in relation to the inner workings of current language models. This concern arises due to the series of (unrealistic) assumptions upon which the framework is built. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: Many assumptions A: In general we agree that we make many assumptions and they may not always hold in practice. The goal of our paper is to give some theoretical understanding on how representations can be helpful for downstream applications, and unfortunately this is extremely difficult without assumptions. We tried to give explanations/motivations for every assumption we make, but it is of course still subjective whether those are sufficient or not. We will add more discussions on the limitations of these assumptions. Q: p*(x_i|x_{-i}) being a Dirac delta A: Note that x_i and x_{-i} are discrete (they are words) instead of vector representations. If they are vector representations (which we denote as v*_i(x_{-i}) as in Equation (1)) then that is indeed often deterministic/Dirac delta distributions. The distribution p*(x_i|x_{-i}) is the distribution of a word given its context. It usually is not a Dirac delta function as that would mean we are absolutely certain which word needs to be at position i given its context, which is rarely true given the inherent amount of ambiguity in language. None of the language models we used in our experiments give Dirac delta distributions for p*(x_i|x_{-i}). As for the specific instance of Section 3.1, Theorem 1 is talking about a word that has low probability, which will exist even if p*(x_i|x_{-i}) is almost concentrated on a single word.
Summary: This paper explores how to connect pretraining performance with downstream task performance (i.e., binary classification). The theoretical analysis is based on token representations. The authors find the ``anchor vector'' in the representation space and bridge pretraining and downstream task performance based on it. --- Rebuttal response: Thanks for the clarification! It would be better if these parts could be described clearly in the paper. As for the responses to the two questions, I don't think they are convincing. As I mentioned, previous work used ``surprisal'' to measure the information amount brought by that word. “Low probability words” could be vital and carry essential information in that sentence. But this is a theoretical analysis paper. So, I'm fine with this assumption. Therefore, I would only slightly raise my score. Strengths: This paper discusses a vital problem in language model understanding from a theoretical view. The idea of the proofs is interesting. Weaknesses: 1. There are many assumptions and some of them may not hold in practice. A1 could fail for contextualized language models (e.g., BERT, GPT-2) since one token could have very different representations in different contexts. A4 may fail since the prediction head of downstream tasks could be nonlinear. Besides, there are some claims that could be wrong (see Q1, Q2). 2. The ``anchor vector'' hypothesis could fail. Even though the authors show that the MSE for the approximation is less than 1, this is not sufficient to prove the hypothesis empirically. Language models could have quite similar representations for all tokens, and they may be far from the zero point. More baselines should be included to support the hypothesis. For example, calculating the average/minimum MSE distance between two random words. 3. The scope of this paper is a little limited. 
Even though popular language models (e.g., GPT-2) are included in the introduction and the empirical section (4.2), the theoretical discussion only considers binary classification as the downstream task. Besides (I am not sure if I understand it correctly), the analysis of language models assumes that they are uncontextualized (e.g., GloVe and ELMo). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1 (Line 204-206): Why does this claim hold? First of all, the vocabulary size of an LM could be very large (e.g., for BLOOM, its vocabulary size is 250,000). A word with probability 1e-5 is not super-small. Second, words with small probability could be vital to the downstream tasks. Some people use ``surprisal'' to measure the information gained from that word. It means words with small probability could convey information. Q2 (Line 246-248): Does it mean that the frequent words are not that useful? Is the claim contradictory to the aforementioned one? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. However, there is likely some serious misunderstanding which we try to clarify below: Q: Many assumptions; some of them can fail. A: In general we agree that we make many assumptions and they may not always hold in practice. The goal of our paper is to give some theoretical understanding on how representations can be helpful for downstream applications, and unfortunately this is extremely difficult without assumptions. We tried to give explanations/motivations for every assumption we make, but it is of course still subjective on whether those are sufficient or not. As for the concrete assumptions, we want to emphasize that A1 *does not* need to fail for contextualized language models. It is indeed true that for these models, the representation of a token can be different depending on their context. However, the “vectors” in A1 refer to the weights of the last layer for these large language models, which are fixed. More precisely, referring to Equation (1) in the paper, the representation of a token is the v*_{-i}(x_{-i}) which is indeed allowed to depend on the context (x_{-i}), while the vectors v*_j’s are the weights of the last softmax layer and are therefore fixed. We believe this is a major confusion that the reviewer had and we will clarify this in the final version. Q: Anchor vector assumption might fail. A: Although we provided empirical evidence that anchor vector assumption holds for the language models we considered, it is indeed possible that for different language models it can fail. However, we would like to point out that the possibility mentioned in the review (that anchor vector assumption might be true because the language model has similar representations for all tokens) is ruled out by our experiments: as we can see from Figure 1, when we choose k to be small (i.e., we do not exclude the frequent words) the approximation guarantee is much worse. 
This is not possible if all the words have similar representations and is a stronger indicator than measuring the MSE between random words. Q: Limited scope: binary classification, simple language models. A: As the analysis is already quite complicated for binary classification, we choose to focus on that for simplicity. However, most of the ideas can be extended to a multi-class classification setting. The paper does not just apply to simple language models and we suspect the misunderstanding is similar to the one we explained in the first question - we will make sure to clarify this point carefully. Q1: Why does 204-206 hold? A: We think there is again a misunderstanding here. When we talk about “low probability words”, we are not talking about the probability of the word in the dataset/without context. Of course, as the reviewer pointed out, a word with probability 10^-4 is still fairly frequent when we don’t condition on context, and it’s unreasonable to leave those words out. However, in this discussion by “low probability words” we mean the word has low probability after *conditioning* on the context. For example, when we use a prompt like “this movie is ***” for a sentiment classification task, we expect the blank to be some of the words that are extremely relevant to the task (such as good, bad, exciting, boring). For a word that is meaningless in this context (say “for”), we shouldn’t care whether the model predicts it with low or extremely low probability. We believe this is justified in practice because the distribution of words changes significantly after conditioning, and in applying language models people frequently use top-k words after conditioning for k that is not large. We will make this more clear. Q2: Line 246-248, does that mean that the high probability words are not useful? 
A: No, in fact it means quite the opposite - the high probability words have such a strong influence on the partition function, so that if we don’t remove them, we cannot well-approximate the log-partition function.
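The rebuttal's point, that a few high-probability words dominate the partition function so the bulk contribution cannot be read off without removing them, can be illustrated with toy logits. All numbers below are hypothetical and chosen only to make the dominance visible; this is not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary of logits: three contextually likely "top" words plus a
# long bulk of low-probability words.
top = np.array([10.0, 9.5, 9.0])
bulk = rng.normal(loc=0.0, scale=0.5, size=1000)

Z_top = np.exp(top).sum()
Z = Z_top + np.exp(bulk).sum()  # full partition function

# The handful of top words account for nearly all of Z, so log Z mostly
# tracks the top logits; the bulk contribution only becomes visible once
# the top-k words are excluded.
assert Z_top / Z > 0.9

# The bulk log-partition function, computed after excluding the top
# words, is the quantity the anchor vector is hypothesized to approximate.
log_Z_bulk = np.log(np.exp(bulk).sum())
```

This mirrors the trend in the paper's Figure 1: approximation of the bulk log-partition function only becomes accurate once the top-k highest-probability words are excluded.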
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling
Accept (poster)
Summary: The authors propose a new method, a continuous-time transformer, ContiFormer, for irregular time series modelling. The proposed method extends the vanilla Transformer to a continuous domain. In particular, the model incorporates the continuous dynamic modelling of Neural ODEs with the attention mechanism of a transformer. Moreover, the authors show that many transformer variants designed for irregular time series modelling are a special case of the proposed ContiFormer. The authors evaluated the method on a continuous time function (spiral), irregularly sampled time series for classification, and irregular time series for event prediction. The results indicate that the proposed method obtains better results than the baseline models. Strengths: Originality: The proposed work is original. The authors propose a novel continuous-time attention mechanism (CT-MHA). Quality: The authors have also included a complexity analysis of their method compared to others. The authors have tested the proposed method on a multitude of datasets: irregularly-sampled time series for classification with an increasing fraction of data dropped, as well as event prediction. Especially for the latter, the authors compare their method to existing state-of-the-art models for event forecasting (SAHP, THP) and obtain improved performance. Clarity: Please see weaknesses. Significance: The authors provide a unique theoretical approach, and the obtained results demonstrate a benefit of the proposed method. Weaknesses: Originality, related work: The related work is a bit convoluted. It would be better if the authors divided the related work into 3 subsections: Transformers for time-series modelling, Transformers for irregular time series, Continuous time models (NODEs). The RNN literature is not as relevant to this work, as it builds upon transformers and NODEs. 
With respect to NODE related work: Line 43-45, the authors say 'the recursive nature of Neural ODEs can lead to cumulative errors if the number of iterations is numerous'. This is not precise. The cumulative error is dependent on the solver used, and there are numerous works which, by enforcing, for example, conservation of energy, keep the error tolerance low, for example, https://arxiv.org/abs/1909.12077. Moreover, by selecting a different step size or solver type (adaptive or fixed), one can control/adjust for the numerical error (as mentioned in https://arxiv.org/abs/1806.07366). Please rephrase. With respect to Transformer related work: In line 47-48, the authors mention that due to the fixed time encoding or learning upon certain kernel functions the resulting models struggle to model complex data patterns in real-world practice. However, I do not see how this claim is supported. Even in the appendix, for given percentages of observations dropped (50%), the mTAN model outperforms ContiFormer. Therefore, the claim is not supported. Please rephrase. The proposed work differs from the existing work in the field; however, some of the claims made about the previous work are not well supported in the current version of the paper. Quality: The proposed continuous time attention mechanism, eq. 3, is unclear, as the underlying functions are not clearly defined. The authors mention that previous transformer works for irregular time series are special cases of their model; however, the proof of this is only given in the Appendix. I would recommend moving the core parts of the proof to the main part of the paper. Lines 281 - 283, the authors say that ContiFormer consistently outperforms all the baselines on all three settings, while in the appendix it can be seen that there are also instances where the baseline models outperform ContiFormer; therefore, I would soften the claim. For lines 303-304, I am missing a comparison to methods that are more aligned with the task at hand. 
The authors compare against TST and mTAN; however, the TST model is focused on multivariate time series (not decomposition of different dynamic trends), while mTAN, although also designed for irregular time series modelling, focuses on temporally distributed latent representations. For extracting temporal dynamics/trends of the data, there are other works, Autoformer (https://arxiv.org/abs/2106.13008) and FEDformer (https://arxiv.org/abs/2201.12740), that would perhaps be a fairer comparison to the present method. Clarity: The submission is not clearly written, thus negatively affecting the understanding of the proposed method. Please clarify for all functions, v(), k(), q(), the input and output dimensionalities, as well as what these functions are. As this is unclear, it is hard to evaluate what exactly the inner product of the two function spaces is computing. Line 147. Imprecise phrasing: the authors mention that self-attention involves calculating the correlation between queries and keys, which is incorrect. The computation represents an inner (dot) product, which is not a correlation metric but rather a similarity measure; please rephrase. The main figure (figure 1) is also not immediately clear for the reader. I would suggest adjusting the figure. It is hard to follow all the colors and how they correspond to which input. In addition, for the NODE model compared with, as well as for the ODESolver used in the authors' own work, it is missing what kind of solver is used: adaptive or fixed step? This could greatly affect the performance, hence it is a crucial implementation detail. In the appendix, the mTAN model has the wrong reference. The reference is for the Autoformer model; please correct. Lastly, for the experiments it is not clear what the input/output data length for the model is. Overall, the technical details of the paper are not clear. 
As this is the main contribution of the paper, it is essential that it is clear so that the extensive experimental results can be objectively evaluated given the method at hand. I would suggest the authors rewrite this section. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: In line 83, the authors say that previous models 'are insufficient in capturing input-dependent dynamic systems'. How is this claim supported? The previous work is also conditioned on the input data. In line 99-100, the authors say 'ContiFormer, employs a non-autoregressive paradigm'; however, the vanilla transformer decoder at inference time is an auto-regressive model. Can you clarify this? Furthermore, to perform the ODESolver step, do you perform P auto-regressive steps? Eq. (1), the formulation is unclear and incorrect: the authors define both the ODE as well as the continuous latent variable with the same parameter. Furthermore, if k_i(\tau) is the ODE, what is f()? In line 136, could you please clarify why you have modeled each observation as a separate ODE? As shown in https://arxiv.org/abs/1806.07366, NODE models can model irregular time-series with a single ODE specification; even more so, close-by observations most likely follow the same ODE. In line 136, could you clarify if the ODE is an MLP or any other architecture? Line 138: I would suggest the authors adjust this assumption/claim: from a dynamical systems perspective a future event cannot affect a past event. The vast majority of works modelling dynamics are based on the Markov property. Line 144: in the text you mention that q(t_i) is a cubic spline approximation; however, in the appendix you clarify that linear interpolation is used due to the lower computational complexity. Please clarify the approximation used in your proposed method. What is the dimensionality of Q_i? Eq. 
(4) Based on the introduced notation it seems like the introduced computation ends up being a matrix (Q_i), vector (k_i() \in R^d) product. Where the Q_i is an approximation of the process via cubic spline, and k_i() is the latent trajectory by an ODE, this would imply that the proposed mechanism measure the similarity of the learned approximation of the ODE to the approximation by the cubic spline function. Is this correct? Eq. (7) For multi-head attention, you also apply a linear transformation to the latent continuous state, q(),v()? How sensitive is the method of using a different approximation method instead of cubic spline? Does each latent continuous trajectory span only two adjacent time points t:[-1,1]? Meaning that depending input length L (data points), you would have L independent latent ODE trajectories? It is unclear from the text. The authors mention that their attention mechanism is measuring the correlation between different observations (line 298), therefore, I would suggest the authors to compare their work to Autoformer (that measures correlation between different time series) or Fourier-based (FEDformer) attention mechanism. Have such comparisons been made? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: The authors have not explicitly addressed the limitation of the presented work. Based on the text, the proposed method models each observation with a separate ODE function. This, however, seems to be sub-optimal as data points close in time probably follow the same underlying dynamics function, furthermore, the authors have not explained why such a design choice would be necessary, as by default Neural ODEs can account for irregularly sampled time series. 
Moreover, it would be nice if the authors had addressed (as an ablation) the model's performance on regular time-series data compared to existing state-of-the-art methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. We will carefully reassess the detailed statements made in the paper and bolster each claim with appropriate supporting evidence. Below we address your concerns. > **W1: Table 8: the mTAN model sometimes outperforms ContiFormer.** Contiformer outperformed mTAN on 18 datasets, while mTAN only outperformed our model on 2 datasets. In Table 3 in the main paper, Contiformer consistently outperformed mTAN on all three data settings (30%, 50%, 70% drop-ratio). > **W2: Which ODE solver was used, and what kind of steps?** We use a fixed-step RK4 solver; see L578-581 in Appendix B.3. > **W3: What is the input/output data length?** Input: $X\in \mathbb{R}^{N \times f}$ with $N$ the sequence length and $f$ the input feature dimension; the output is in $\mathbb{R}^{N \times d}$, where $d$ is the hidden dimension. > **Q1: L83, clarify 'previous works are insufficient in capturing input-dependent dynamic systems'.** Most Transformer-based methods, with the exception of mTAN, capture input-dependent patterns rather than the underlying dynamic systems, which are a fundamental mathematical and conceptual framework used to describe the evolution of variables over time. As for mTAN, although it models continuous-time dynamic systems, its attention mechanism depends primarily on time variables rather than on the input information. Consequently, its behavior lacks input-dependent attributes, a crucial factor for effective modeling in complex scenarios. > **Q2: In L99, clarify 'ContiFormer employs a non-autoregressive paradigm'.** Here "autoregressive" means that the stepwise model inference depends on the output from the previous timestep, even given the whole-length sequence input. We mean non-autoregressive for the *encoding part* of the Transformer model and of our model. > **Q3: If $k_i(\tau)$ is the ODE, what is $f()$?** $k_i(\tau)$ is a time function governed by an ODE, and $f()$, the vector field of that ODE, is implemented as an MLP.
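To make the solver choice above concrete, here is a minimal sketch of a fixed-step RK4 integrator in plain NumPy (all names hypothetical; a toy linear vector field stands in for the learned MLP $f$, and the paper's actual implementation follows Appendix B.3):

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One fixed-step RK4 update for dx/dt = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def odeint_fixed(f, x0, ts):
    """Integrate dx/dt = f(x, t) on the fixed time grid ts."""
    xs = [x0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        xs.append(rk4_step(f, xs[-1], t0, t1 - t0))
    return np.stack(xs)

# A toy linear vector field stands in for the learned MLP f_theta.
f = lambda x, t: -x
traj = odeint_fixed(f, np.array([1.0]), np.linspace(0.0, 1.0, 101))
assert traj.shape == (101, 1)
# RK4 closely tracks the analytic solution exp(-t) at t = 1.
assert abs(traj[-1, 0] - np.exp(-1.0)) < 1e-8
```

Unlike adaptive solvers, a fixed-step scheme costs the same number of function evaluations per trajectory, which matters for the relative-time comparisons reported later; RK4's $O(h^4)$ global error keeps the discretization error small at moderate step sizes.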
> **Q4: L136, why is each observation modeled as a separate ODE?** Each observation derives a corresponding trajectory modeling *result*, rather than each observation following a different ODE: all these ODE functions modeling the latent trajectories share the same ODE function parameters, i.e., $\theta_k$ and $\theta_v$ in Eq. (1). > **Q5: L138, from a dynamical systems perspective a future event cannot affect a past event.** Our method models the relationships among the observations through continuous-time attention, just like the pairwise attention modeling in the vanilla Transformer model. Sorry for the confusion. > **Q6: The approximation used for $q(t_i)$. What is the dimensionality of $Q_i$?** We use a cubic spline function to obtain $q(t_i)$, and $Q_i \in \mathbb{R}^d$. > **Q7: The attention mechanism ends up being a matrix-vector product.** $q(\cdot): \mathbb{R} \rightarrow \mathbb{R}^{d}$ is a time function, *not* a matrix; $k_i(\cdot): \mathbb{R} \rightarrow \mathbb{R}^{d}$ is also a time function. We actually calculate the similarity (or relationship) between continuous functions within a closed interval. Contiformer extends the vanilla Transformer into the continuous-time domain; we emphasize that our attention mechanism operates directly on continuous-time inputs, in contrast to discrete matrices/vectors. > **Q8: Eq. (7), for multi-head attention, is a linear transformation applied?** Yes, as shown in Eq. (7) of our paper. > **Q9: How sensitive is the method to using a different approximation method instead of the cubic spline?** We have conducted an ablation study on UEA classification; our model is insensitive to the approximation method used.

|Drop Ratio (%)|70||
|-|-|-|
||Linear Interpolation|Cubic Spline|
|**Avg. Acc**|**0.7775**|0.7749|
|**# Top 1**|**13**|7|

> **Q10: Does each latent continuous trajectory span only two adjacent time points, t:[-1,1]?** Each latent continuous trajectory evolves along time, i.e., from $t_1$ to $t_N$, where $t_i$ is the time of the $i$-th observation ($i \in [1,N]$). Therefore, it does *not* span only two adjacent time points. > **Q11: The proposed method models each observation with a separate ODE function. This seems sub-optimal.** We would like to clarify that all the observations are modeled with *the same* ODE function with only one set of parameters $\theta$ in Eq. (1), *rather than* being modeled by *different* ODE functions. > **Q12: Why would such a design choice be necessary?** The necessity is discussed in **Q4, Q10**, and **Q11**. Besides, Neural ODEs can only evaluate the ODE function from the initial time point to the last one, while our method can evaluate the ODE function from $t_1$ to $t_N$ separately and in parallel. Please refer to **Q3 in the General Response** for more details. > **Q13: Experiments on regular time series.** We additionally experiment on UEA classification following Section 4.2.

|Drop Ratio (%)|30|||50|||70|||
|-|-|-|-|-|-|-|-|-|-|
||AutoFormer|FedFormer|ContiFormer|AutoFormer|FedFormer|ContiFormer|AutoFormer|FedFormer|ContiFormer|
|**Avg. ACC**|0.7035|0.5994|**0.8126**|0.6900|0.5655|**0.7997**|0.6368|0.5312|**0.7749**|
|**Avg. Rank**|2|2.7619|**1.1905**|1.8571|2.8571|**1.2857**|1.9524|2.7619|**1.1905**|

We also investigate Contiformer's performance on long time series forecasting using the Exchange dataset; Contiformer outperforms both models. Following Autoformer, the input length is $96$ and the target horizon is in $[96, 192, 336]$.
|Exchange dataset|Autoformer||FEDformer||Contiformer||
|-|-|-|-|-|-|-|
|Horizon len.|MSE|MAE|MSE|MAE|MSE|MAE|
|96|0.1409|0.2711|0.1356|0.2643|**0.1301**|**0.2521**|
|192|0.8544|0.6915|0.2764|0.3833|**0.1966**|**0.3193**|
|336|0.707|0.6445|0.4657|0.5069|**0.4315**|**0.4852**|

We may not have conducted full experiments on all the datasets, but we believe these additional experiments address your concerns about our work. [1] Informer: Beyond efficient transformer for long sequence time-series forecasting. AAAI 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to the questions I voiced. I appreciate the authors' effort to perform additional comparisons with Autoformer and FEDformer, and the clarification of the model details; as such, I will increase my score to 5. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: We express our sincere gratitude for your thoughtful response and specific recommendations regarding our paper. Your insightful suggestions and comments have significantly contributed to enhancing the quality of our work, encompassing aspects such as related research, theorems, and technical intricacies. We sincerely thank you for raising the rating score of our paper. We remain open to further comments and suggestions for improving our paper.
Summary: This paper introduces the ContiFormer, a novel continuous-time Transformer model designed for handling irregular time series data. In this model, the keys, queries and values are vector-valued functions indexed on time. Each input observation gives rise to a key function, given by the solution of a Neural ODE started at an initial condition determined by the observation. While a similar model is adopted for the value functions, the query function is obtained using natural cubic spline interpolation of discrete query values. By replacing dot products (between discrete queries and keys) with an $L^2$ inner product, a continuous-time attention function is derived. The continuous-time output can be evaluated on a discrete time grid allowing for the stacking of such layers. To reduce the computational time, the paper leverages a reparametrization trick, which makes it possible to solve simultaneously multiple Neural ODEs on different time intervals. The ContiFormer model is evaluated on various time series tasks and datasets demonstrating its effectiveness in handling irregular time-series data. Strengths: - Given the empirical success of transformer models on machine learning tasks involving sequential data, enhancing their capability of effectively handling irregular time series data is an important research question. - This paper proposes an elegant solution based on Neural ODEs and other ideas from the field of neural differential equations such as the use of cubic splines in Neural CDEs. - The paper is clearly written and easy to follow. - The empirical evaluation is well conducted, extensive and provides a compelling demonstration of the superior performance of the ContiFormer model. Weaknesses: - The theoretical results are deferred to the appendix. I think the main theorem (Thm 1 in Appendix) should at least be formally stated and further commented on in the main paper. This would better motivate and support the modelling choices. 
- The paper is a bit repetitive at times (l100-102, l108-117, l175-178), while some components might deserve further explanation: 1. the main theoretical result; 2. although the $i^{th}$ key and value functions are well defined at $t<t_i$, this could be further commented on by expanding l138, and perhaps by adapting Fig. 1 with $t<t_i$; 3. the rationale for modelling the key and value functions differently from the query function. - As mentioned in the related work section, a continuous-time attention model based on Neural ODEs has previously been proposed in [10]. While several baselines are considered in the experimental section, it is surprising that this model is not included, without justification. - The attention model is not fully formulated in continuous time, in the sense that it is expressed as a sum over the number of value functions, which depends on the number of observations. A natural thought experiment for a continuous-time model is to replace the input sequences with continuous paths. Is there a way to define the ContiFormer on such paths? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why did you choose to model the key and value functions differently from the query function? Can the ContiFormer handle partially observed data, given how the key and value functions are defined? - Do you have an intuition/explanation for the fact that $P$ can be chosen to be very small (l 224) and the insensitivity of the ContiFormer to the tolerance error (Appendix F12, F13)? - Related to the comment above about the continuous-time formulation, does the ContiFormer offer any advantage for handling both irregular and long time series? - Although computational complexities are provided, would it be possible to report the actual time it takes to train and evaluate the model? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, some limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your comprehensive comments. > **W1: The theoretical results should at least be formally stated and further commented on in the main paper.** Thanks for your great suggestion! We will incorporate it in the revised manuscript. > **W2: L138, adapting functions to $t<t_i$.** We are pleased to adopt your suggestion and make this clearer for all $t$ in the whole time horizon. > **W3: Compare Contiformer with CADN [1].** We re-implemented the model and conducted experiments on the UEA classification task. Below, we show a summary of the results. We apologize that, due to the time limit of the rebuttal period, we only managed to run 15 out of 20 datasets for CADN. We conclude that Contiformer outperforms CADN on most datasets.

|Drop Ratio (%)|30||50||70||
|-|-|-|-|-|-|-|
|Metric|CADN|Contiformer|CADN|Contiformer|CADN|Contiformer|
|Avg. ACC|0.6869|**0.7879**|0.6852|**0.7786**|0.6898|**0.7563**|
|# Top 1|1|**13**|1|**14**|4|**11**|

> **W4: Define the ContiFormer on continuous paths.** Thank you for the insightful perspective! ContiFormer incorporates continuous paths in a latent space. Thus, if the input is a continuous path, Contiformer can directly apply ordinary differential equation (ODE) modeling (as described in Section 3.1) to the input sequence and calculate our proposed continuous-time attention on top of it. However, directly modeling continuous paths under the ODE assumption would break the parallel execution of the Transformer architecture. Thus, as described in Section 3.3, we further propose a novel reparameterization trick that splits the time horizon into several fixed-length pieces $t\in[-1, 1]$, executes the continuous-time attention calculation on these pieces in parallel, and then maps back to the original time horizon without loss of performance. This parallel execution requires sampling time points on the continuous path, which needs further refinement and exploration in future work.
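As a minimal illustration of why such a reparameterization is lossless, the sketch below (hypothetical names, plain NumPy, a toy vector field in place of the learned one) rewrites an ODE on an arbitrary interval $[a, b]$ as an equivalent ODE on the fixed interval $[-1, 1]$ via an affine change of variables; solving the rescaled ODE reproduces the original endpoint, so trajectories over different observation intervals can share one solver grid and be batched:

```python
import numpy as np

def rk4(f, x0, ts):
    """Fixed-step RK4 over the grid ts; returns the final state."""
    x = x0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = f(x, t0)
        k2 = f(x + 0.5 * h * k1, t0 + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t0 + 0.5 * h)
        k4 = f(x + h * k3, t1)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

f = lambda x, s: -s * x      # toy vector field dx/ds = f(x, s)
a, b = 0.0, 3.0              # an observation-specific interval

def f_rescaled(x, tau):
    # substitute s = a + (b - a) * (tau + 1) / 2 with tau in [-1, 1];
    # the chain rule multiplies the vector field by (b - a) / 2
    s = a + (b - a) * (tau + 1.0) / 2.0
    return f(x, s) * (b - a) / 2.0

x_direct = rk4(f, 1.0, np.linspace(a, b, 301))
x_rescaled = rk4(f_rescaled, 1.0, np.linspace(-1.0, 1.0, 201))
assert abs(x_direct - x_rescaled) < 1e-6        # same endpoint
assert abs(x_rescaled - np.exp(-4.5)) < 1e-6    # exact solution exp(-s^2/2)
```

Because every interval maps to the same $[-1, 1]$ grid, the per-observation integrations become a single batched solve rather than a sequential one, which is what preserves the Transformer-style parallelism.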
> **Q1: Why model the key and value functions differently from the query function? Can Contiformer handle partially observed data?** Thanks for your insightful question. The roles of the query function and of the key and value functions are somewhat different, so we separate them into distinguishable ones. This design aligns with the original attention mechanisms such as [2, 3], where the query, key, and value functions are different. Contiformer can indeed handle partially observed data. Our model is designed for irregular time series modeling, including irregularly sampled data caused by missing time points (if we correctly understand what you mean by partially observed data). The results in Section 4.2, where observations are randomly dropped, also support this. Technically, our model takes an irregular time series in any format as input and outputs a continuous-time latent trajectory of the system. > **Q2: Why can $P$ be very small? Why is Contiformer insensitive to the tolerance error?** Thanks for your insightful question. As shown in Tables 12 and 13 in the Appendix, our model is robust to the tolerance error of the ODE solver and to the numerical error. We explain this phenomenon in two ways. First, our framework, akin to the Transformer architecture, circumvents cumulative errors by eliminating the need to pass through neural networks sequentially in a regressive manner. Second, we conjecture that, since the output of the attention module is a weighted sum of the tokens, the total variance is lowered. For instance, assume that $x_1, ..., x_N \sim \mathcal{N}(\mu, \sigma^2)$; then the variance of the mean value $\bar{x} = \frac{1}{N}(x_1 + x_2 + ... + x_N)$ is $\frac{\sigma^2}{N}$, which is significantly lowered for large $N$. Overall, our model is not sensitive to the tolerance error, which makes it robust across different scenarios.
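The variance argument above can be checked with a quick simulation (a toy sketch, not part of the model; all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 64, 2.0

# Each row holds N iid tokens x_1..x_N ~ N(mu, sigma^2); the attention
# output is analogous to a (weighted) average of such noisy tokens.
samples = rng.normal(loc=1.0, scale=sigma, size=(200_000, N))
means = samples.mean(axis=1)

# The empirical variance of the mean matches sigma^2 / N = 0.0625.
assert abs(means.var() - sigma**2 / N) < 2e-3
```

The same averaging effect plausibly damps per-token numerical noise introduced by a loose solver tolerance, consistent with the robustness reported in Tables 12 and 13.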
> **Q3: Can Contiformer handle irregular long time series?** As shown in Table 10 in the Appendix, the BookOrder dataset has both irregular sampling and relatively long sequences, and our model outperforms all the baselines on this dataset. Nevertheless, due to the nature of the Transformer-based architecture, it might require increased memory capacity and a richer training dataset to achieve optimal performance. We believe that works alleviating the memory cost of the vanilla Transformer model may also inspire improvements of our method, which is a promising direction for future work. Thank you for your valuable comment! > **Q4: The actual time it takes to train and evaluate the model.** We appreciate the reviewer's interest in the practical aspects of our work's computational performance. While we have provided a comprehensive computational complexity analysis, we acknowledge the importance of reporting actual training and evaluation times for a more practical understanding. In the table below we report the actual time on the Synthetic dataset for the event prediction task, relative to RMTPP, a recurrent neural network. Experiments were conducted on an NVIDIA RTX A6000.

|Model|RMTPP|THP|GRU-$\Delta$t (atol=0.1)|mTAN|ContiFormer (atol=0.1)|
|-|-|-|-|-|-|
|Relative Time for Training|1 $\times$|1.07 $\times$|3.22 $\times$|1.05 $\times$|6.96 $\times$|
|Relative Time for Testing|1 $\times$|1.05 $\times$|2.82 $\times$|1.02 $\times$|4.18 $\times$|

As demonstrated in the table, our model is roughly $6\times$ slower than the vanilla Transformer model (THP) and roughly $2\times$ slower than the ODE-based model (GRU-$\Delta$t). [1] Chien, Jen-Tzung, and Yi-Hsiang Chen. "Learning continuous-time dynamics with attention." TPAMI 2022. [2] Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." ICLR 2015.
[3] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. "End-to-end memory networks." NIPS 2015. --- Rebuttal Comment 1.1: Title: Reply to authors' rebuttal Comment: Thank you for thoroughly addressing my questions and for your comprehensive rebuttals, substantiated with further experimental results. Your rebuttal work will certainly enhance the quality of your paper and increases my confidence in my evaluation. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: We sincerely thank you for your valuable insights and suggestions regarding our paper. We remain open to further comments and suggestions for improving our paper.
Summary: The paper describes a continuous-time extension of the transformer architecture, using ODE blocks to propagate the effect of each observation individually through time. For computing attention values, inner products between functions are used, where in the implementation the resulting integral is approximated. The paper demonstrates the method's application to irregular time-series data and temporal point processes in different tasks, including interpolation, extrapolation, and classification. Strengths: The paper gives a continuous-time extension of transformers capable of handling irregularly sampled continuous time series and point processes. Computational complexity is assessed in detail, which is quite important for practical application and helps the reader better estimate the real-life cost of scaling the method to real-life problems. Weaknesses: My main problem is the missing statistical rigor during evaluation. More specifically: in Table 3 the authors report averages over 20 datasets. I assume these datasets have different sizes, complexity, dataset imbalance etc., therefore it is hard to rigorously evaluate the average metrics. There are dataset-by-dataset level results in the appendix, but there is still no standard deviation. Similarly in Table 4, no standard deviations. Probably 3-std differences are significant, but with 3 repeats I am not sure 2 std is very convincing. (For a very rough estimate of how big a delta is needed, check a 3-sample unpaired t-test table.) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please give standard deviations and mark significant differences. Also, it is interesting to see that Neural CDE is outperformed by ODE-RNN here for the 30% and 50% dropped cases. As there are results for CharacterTrajectories in the appendix (this dataset was used in the original Neural CDE paper), I compared the results. Format: present paper vs. Kidger et al.
| Method | 30% dropped | 50% dropped | 70% dropped |
|-----------|-----------|-----------|-----------|
| GRU-D | 0.9325 vs. 0.942 | 0.8872 vs. 0.902 | **0.8433 vs. 0.919** |
| GRU-deltaT | **0.8558 vs. 0.936** | 0.9088 vs. 0.913 | **0.8496 vs. 0.904** |
| N CDE | **0.9276 vs. 0.987** | **0.9285 vs. 0.988** | **0.9241 vs. 0.986** |
| ODERNN | 0.9415 vs. 0.954 | 0.9506 vs. 0.960 | 0.9471 vs. 0.953 |

These are serious differences on the same dataset. I can maybe explain away the case of Neural CDE, as the creator of the method may be able to tune it better (this is in fact the main drawback of this entire comparison-table idea we use, but this is a problem we are not going to solve here :) But the results of several other baselines are quite different. This is concerning. "We generated a dataset of 300 2-dimensional spirals, sampled at 150 equally-spaced time points" <-- what is the temporal dimension? Is this 2D spiral a parametric curve where time is the parameter? Are you using a vanilla Neural ODE in this experiment? I find it hard to understand why you get the interpolation behavior from NODEs that is visible in Figure 2; the interpolation looks piecewise linear. Can you provide interpolation results with ODE-RNNs and Neural CDEs? Do you see a way to avoid resampling between layers? It would be useful and quite interesting to see some discussion of the apparent contradiction in using transformers (long-range interactions) and ODEs. ODEs operate with derivatives and their solutions are locally described. The assumption of a smooth, differentiable solution seems to stand in contradiction with long-range interactions unmediated by values in between or, in the case of a latent-ODE description, by latent system states. Can you formalize the properties of systems (e.g., the mentioned stock market time series) where you expect this type of modelling to give the most benefit?
Minor: "To overcome the discrete nature of RNNs, a different strategy involves the exponential decay of the hidden state between observations, governed by modifiable decay parameters [5, 9, 37]." [9] is the reference for the Neural ODE paper from Chen et al.; it does not apply exponential decay. Typo: "denoted" vs. "donated". Please stress the difference between irregularly sampled data and temporal point processes, e.g., in [37], more clearly (missing data vs. events). NOTE: the authors clarified some points, provided new results, and we identified a misunderstanding about the Neural ODE figure. The authors used Latent-ODE but cited a previous paper in the table; that led me to believe that they fitted a vanilla NODE to the trajectory. I raised my score by one. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No systematic discussion of limitations provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1: Table 3 covers 20 datasets; hard to rigorously evaluate the average metrics.** 1. We would like to clarify that our benchmarking follows prevalent common practice [1, 2], where the averaged metrics are computed over all the datasets in the UEA benchmark. 2. The experiments on irregular time-series data follow the settings of Neural CDE [3]; [3] only evaluated one of the UEA datasets, while we evaluate over 20 datasets. The full results are in Appendix E.2.2. > **W2: The results in the appendix lack standard deviation (std.).** We appreciate your reminder, and *we have included the standard deviation of results for the UEA benchmark in Table 1 of the attached PDF file*, for the setting where 70% of observations are dropped, due to the word limit. We will incorporate the std. results for all three settings in the revised paper. Please refer to **Q2 in the General Response** for more details. > **W3: With 3 repeats, not sure 2 std is very convincing.** For the statistical significance in Sec. 4.3, we further performed experiments on the Synthetic and BookOrder datasets, with *10* random seeds for each method. We report the complete experimental results in Tables 2 and 3 of the attached PDF file. We also conducted a significance test: with a p-value threshold of $10^{-6}$, Contiformer significantly outperforms these baselines on 4 out of 6 metrics, i.e., LL and RMSE for Synthetic, and LL and Accuracy for BookOrder. In the revised manuscript, we will conduct significance tests on all the datasets. Please refer to **Q2 in the General Response** for more details. > **Q1: The results of several baselines differ from the original paper.** Thanks for your concern. After carefully checking the original paper (Kidger et al., https://arxiv.org/pdf/2005.08926.pdf), we find a significant difference in the data setting.
In both our paper and the Neural CDE paper, the raw dataset is randomly dropped at ratios of 30%, 50%, and 70% to construct the irregularly sampled dataset. In Section 4.1 of Kidger et al., "The randomly removed data is *the same* for every model and every repeat."; in our setting, however, we randomly drop data points anew for each run, which evaluates robustness more generally than using the same dataset dropped only once. This explains why our evaluation results from rerunning the baseline methods differ from those in the Neural CDE paper. Next, we checked the implementation details of these ODE-based baselines, which are the same as in the official repository of [Neural CDE](https://github.com/patrick-kidger/NeuralCDE/tree/master/experiments/models). We also included openly available implementations in the original submission to ensure transparency and fairness, and we will revise our paper to clarify this. > **Q2: What is the temporal dimension? Is this 2D spiral a parametric curve where time is the parameter?** Sorry for the confusion. The temporal dimension is the variable $t$. This 2D spiral is a parametric curve. As in Eqs. (47, 48) in Appendix E.1.1, the 2D spiral function $(x(t), y(t)): \mathbb{R} \rightarrow \mathbb{R}^2$ depends on the time variable $t$, with the hyper-parameters $a, b$ controlling the curve shape. > **Q3: Are you using vanilla Neural ODE? In Fig. 2, why does the interpolation look piecewise linear?** Thanks for your valuable inquiry. Yes, we are using the vanilla Neural ODE implementation in our experiment. The reason why the interpolation from NODEs in Figure 2 looks piecewise linear stems from our more challenging data setting compared to the original one, as explained in **Q1 of the General Response**. > **Q4: Do you see a way to avoid resampling between layers?** Insightful question! To our knowledge, neural networks cannot directly handle continuous-time input while stacking multiple layers.
To support stackability, one can either discretize the output along the continuous-time dimension, as in our resampling method, or discretize the parameter space, as in state space models [4, 5]. > **Q5: Can you formalize the properties of systems where you expect this type of modeling to give the most benefit?** Our model hinges on the premise of continuous-time dynamics. We treat irregularly sampled data as a sequence of observations stemming from an underlying continuous-time process [3]. Moreover, we posit self-correlation among the observations [6]. Given these foundational assumptions, ContiFormer is poised to excel, capitalizing on the power of continuous-time modeling and the inherent self-correlation within the data. > **Q6: "...decay parameters [5, 9, 37]." [9] is incorrect.** Thanks for the suggestion. Fixed in the revised version. > **Q7: Difference between irregularly sampled data and temporal point processes, e.g., in [37]?** Based on our understanding of your question, "irregularly sampled data" here refers to irregularly sampled time series data, i.e., sequences of signal values sampled at irregular time intervals, and "temporal point processes" are a family of modeling methods such as [37], often utilized to model event sequences, i.e., sequences of discrete events happening at irregular time points. The differences are as follows.

|Aspect|Irregularly sampled time series data|Event sequence data (w/ temporal point processes)|
|-|-|-|
|Collection method|Irregularly sampled|Triggered by events|
|Data type|Signal values|Event features|

[1] A transformer-based framework for multivariate time series representation learning. KDD 2021. [2] Timesnet: Temporal 2d-variation modeling for general time series analysis. ICLR 2023. [3] Neural controlled differential equations for irregular time series. NeurIPS 2020. [4] Simplified state space layers for sequence modeling. ICLR 2023.
[5] Efficiently modeling long sequences with structured state spaces. ICLR 2022. [6] Attention is all you need. NeurIPS 2017. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: I would like to thank the authors for taking the time to answer my (and other reviewers') questions. **W1**: Being "prevalent common practice" (which is hard to assess based on 2 papers) does not automatically mean good practice. However, I appreciate the fact that, if it is reported like this in previous works, you followed this way. Also, you report individual results (now with error bars), which makes it possible to check these. I agree with **jYof** that claiming "overall superiority" is a bit strong based on "average" superiority. The provided error bars and significance tests are appreciated. **Q1**: This may explain the difference. I am not sure, however, that rerandomizing for all tests is the better approach. In a real-life scenario the data is given, so there is no possibility to execute these repeats, while random restarts can be performed even in a real-life scenario. So, keeping the data fixed is more lifelike. But your evaluation is still valid, of course. **Q3 (General Q1)**: **[my major remaining problem]** We (you, **2E9G** and myself) would be in an easier position if you had added code to the supplementary. You mention you use the same code as the original. Can you point to this original? (Git of the original paper's repo, specific file describing the experiment setup most similar to your special experiment setup with the spiral; be careful **not** to link your own repo due to blind review!) My problem is, even if you use the same model code, it is still possible that the model is used differently in the experimental setup. Can you describe how you train and interpolate with the NODE in a few points? Like: "I take the 2D time series data (x(t), y(t)), initialize a NODE with 2 hidden states, with XY architecture.., I propagate back XY error (MSE I assume)".. and so on.
**Q5: [second remaining issue]** What you wrote in your reply was already properly stated in the paper. I understand this. My question was: isn't there a contradiction here? You write, for example, for **jYof**'s question 1 that you assume ODE-like dynamics is fundamental in your datasets, which is why transformers are not enough. Now this type of dynamics is by definition smooth and local (in time), so why expect the Markov assumption to be violated in your datasets at the same time, when you assume an ODE structure is a good prior? Now I do not claim this type of data does not exist. I am just asking: can you give a convincing example?

---

Reply to Comment 1.1.1:
Title: Reply to your remaining problems
Comment: Thank you for your quick reply and explanation.

**Reply to W1**: We agree that not all common practices meet standards. Error bars and significance tests will be included in our revised version. We are also committed to tempering the assertiveness of our claims.

**Reply to Q1**: We understand the importance of realism in fixed-data outcomes. Your input has spurred us to enhance our evaluation methodology.

**Reply to Q3**:
> **Point to the repo with a similar experiment setting.**

We follow the code from [link](https://github.com/rtqichen/torchdiffeq/blob/master/examples/latent_ode.py) as mentioned in **General Response Q1**. For spiral data generation, we follow Lines 31-105, where we modify Lines 86-98 to generate irregularly-sampled data and add noise to parameters $a$ and $b$, as claimed in the paper and Appendix E.1.2. For model training and implementation, we follow the code in Lines 181-194 to calculate the loss function and use the same code as Lines 108-159 to build the Neural ODE model.

> **Describe your special experiment setup of NODE?**

The training flow of Neural ODE is as follows: given a batch of 2D spirals, i.e., (200, 50, 2) for (batch size, sequence length, feature dimension).
We use a LatentODEfunc with 3 hidden layers with the ELU activation function and set the hidden dimension to 20. The model contains a RecognitionRNN to encode the input, followed by an ODE to reconstruct the hidden-state trajectory. Finally, a decoder with 2 hidden layers and ReLU activation is adopted to obtain the output. We train Neural ODE with the ELBO loss as stated in Appendix E.1.3 and keep it the same as the original code.

To better address your concern, we dug into why Neural ODE generates piece-wise linearity in our data setting. The main conclusions are as follows:

1. Sampling strategy: Line 96 of the original implementation used regular sampling. Oriented toward irregular time series, we adopted irregular sampling. Unexpectedly, this may reveal piece-wise linear patterns. The impact of sampling is noted by Reviewer 2E9G (Q2).
2. Random seed: We ran our experiments 10 times. Notably, Neural ODE may yield smooth outputs in some random runs. We suspect this might stem from training uncertainty, potentially influencing the piece-wise linear behavior. We are ready to provide the visualization via an anonymous link (not recommended by the NeurIPS official instruction email).

Overall, among all the random runs, we observe that the prediction result of Contiformer outperforms Neural ODE with p-value $< 10^{-6}$ in significance tests. Moreover, we alter these data settings to enhance the evaluation of irregular time series modeling and to effectively showcase method performance, as detailed in the General Response. We hope these explanations help address your questions.

**Reply to Q5**:
> Any contradiction here? Can you give a convincing example?

Thank you for your further explanation! We first express our understanding of the "contradiction" you mentioned. On one hand, an ODE assumes the data follow the Markov property and models the dynamical changes, which is "smooth and local" as you mentioned.
Meanwhile, the Transformer assumes that the data share both short-term and long-term dependencies, which seems "rough and global" and violates the Markov assumption. If this understanding is correct, we kindly provide some examples below.

1. From a practical view, stock prices initially exhibit smooth auto-regressive patterns but are also affected by business conditions and market events. For instance, large technology companies' stock prices (e.g., MSFT and GOOG) display consistently evolving trends yet are impacted by short-term and long-term events such as the release of large language models, etc. Similarly, traffic data display both local similarities within short periods and global seasonality over longer periods (Fig. 4 in [1]). This intuitively explains why a time series can follow both the Markov property that ODEs model and the global-influence properties that Transformers model.
2. From the literature view, one of our baselines, the Transformer Hawkes Process [2], assumes a Hawkes process, which is locally defined on an infinitesimal time interval. Besides, [2] also notes the shortcoming of the Hawkes-process assumption, which "fails to capture complicated short-term and long-term temporal dependencies". Thus, they also incorporated other approaches leveraging long-term dependencies to close the gap.

In conclusion, in our paper we want to construct a system that is smooth over the time span (like ODE-based methods) while capturing long-term dependency using the powerful Transformer architecture. We hope our provided examples can address your questions. We will also refine the description of our assumptions in the paper correspondingly.

[1] HetETA: Heterogeneous information network embedding for estimating time of arrival. KDD 2020.
[2] Transformer hawkes process. ICML 2020.
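For concreteness, the latent-ODE pipeline described in the reply to Q3 above (RecognitionRNN encoder, ODE trajectory, decoder) can be sketched as a single forward pass. This is a minimal numpy sketch under assumptions: the simple RNN cell, the Euler solver, and the random weights below are illustrative stand-ins, not the original torchdiffeq implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_lat, d_hid = 2, 4, 20  # 2D spirals; latent size 4 is an assumed choice

def mlp(sizes):
    """Random-weight MLP parameters (illustrative, untrained)."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x, act):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = act(x)
    return x

elu = lambda x: np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1)
relu = lambda x: np.maximum(x, 0)

# RecognitionRNN, simplified to a vanilla RNN cell consuming the series backwards
W_in = rng.normal(0, 0.1, (d_obs + d_lat, d_lat))
def encode(xs):
    h = np.zeros(d_lat)
    for x in xs[::-1]:
        h = np.tanh(np.concatenate([x, h]) @ W_in)
    return h  # initial latent state z(t_0)

# LatentODEfunc: 3 hidden layers, ELU, hidden dim 20 (sizes as in the reply)
ode_layers = mlp([d_lat, d_hid, d_hid, d_hid, d_lat])
def ode_func(z):
    return forward(ode_layers, z, elu)

def euler_solve(z0, ts):
    """Fixed-step Euler integration (the original uses an adaptive solver)."""
    zs, z = [z0], z0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        z = z + (t1 - t0) * ode_func(z)
        zs.append(z)
    return np.stack(zs)

# Decoder: 2 hidden layers, ReLU (as in the reply)
dec_layers = mlp([d_lat, d_hid, d_hid, d_obs])

ts = np.sort(rng.uniform(0, 1, 50))   # irregular time stamps
xs = rng.normal(size=(50, d_obs))     # stand-in for one noisy spiral
z_traj = euler_solve(encode(xs), ts)
x_hat = forward(dec_layers, z_traj, relu)
print(x_hat.shape)  # (50, 2): reconstruction at every observation time
```

In the actual setup an ELBO loss (Appendix E.1.3) would then be back-propagated through this pipeline over the (200, 50, 2) batches.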
Summary: The paper proposes a new deep learning model called ContiFormer to model continuous-time dynamics on irregular time series data. The paper argues that existing methods such as recurrent neural networks (RNNs), Neural Ordinary Differential Equations (ODEs), and Transformers have limitations in modeling continuous-time data and fail to capture intricate correlations within these sequences. The proposed ContiFormer model extends the relation modeling of the Transformer model to the continuous domain and combines the continuous-dynamics modeling abilities of Neural ODEs with the attention mechanism of Transformers. Both numerical and theoretical justifications are provided. Different from the standard Transformer model, the proposed model is parameterized on the first-order derivative of $q$, $k$, and $v$. The actual $q$, $k$ used in the attention part are approximately computed with a finite sum (e.g., via a 4th-order Runge-Kutta (RK4) ODE solver). By restricting the function form/class of the first-order derivative to some sufficiently smooth kernel, the proposed model can yield better interpolation/extrapolation ability. I have two concerns about the proposed model. The first one is that the usage of ODE solvers may introduce computation overhead. In Table 1, the order of computation cost is already $O(N^2 \cdot P \cdot d^2)$, where $P$ is the iteration number in the RK4 solver, and in my understanding, $P$ should be 4. In L580 of Appendix B, the RK4 solver may require 80 forward passes to finish one RK4 round. This implies the order of computation cost for a one-layer attention model should be $O(N^2\cdot 80\cdot d^2)$, which can be very computationally expensive even for models of moderate size. Moreover, if the proposed model has more than one layer (e.g., 2 layers), in order to finish the second layer's attention computation, we first need to finish the computation of the first layer with $O(N^2\cdot 80\cdot d^2)$ cost.
After that, every second-layer computation will still involve the forward pass of the first layer, which becomes $O(2\cdot N^2\cdot 80\cdot d^2)$. Therefore, for the $M$-layer attention model, the total computation cost could be $O(M^2\cdot N^2\cdot 80\cdot d^2)$. The second one is that the underlying construction is similar to the Rotary Position Embedding in [1]. In this paper, the kernel trick is used to facilitate the computation (e.g., Eq. (27)-Eq. (32) in Appendix C). Based on the current presentation, the kernel function depends only on the time point $t$ and is independent of the detailed $q, k$. Given the choice of the kernel function in Eq. (30), the effect of introducing this kernel function looks like just adding a rotary positional embedding. At present, the authors have not provided sufficient evidence to demonstrate the contribution of their model, which may not be compelling enough for top machine learning conferences like NeurIPS. Despite this, the reviewer is willing to reconsider the decision after the authors' rebuttal. Overall, the paper presents an interesting approach for modeling continuous-time dynamics, but the issues raised above need to be addressed before its acceptance to a top-tier conference.

Reference:
[1] Su, J., Lu, Y., Pan, S., Murtadha, A., Wen, B., & Liu, Y. (2021). Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.

Strengths: The paper proposes a new deep transformer model incorporating continuous-time dynamics on irregular time series data. Both numerical and theoretical justifications are provided.
Weaknesses: Please see my comments in the Summary section.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: Please see my comments in the Summary section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: The authors include a section on the limitations of the proposed work in Appendix G at Page 30. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your comprehensive comments.

> **Q1: The order of computation cost for one-layer attention should be $O(N^2 \times 80 \times d^2)$; for the $M$-layer attention model, it could be $O(M \times N^2 \times 80 \times d^2)$.**

Thank you for your inquiry about time complexity. In Table 1 of our paper, we initially specified the computation cost as $O(N^2 \times P \times d^2)$, where $P$ represented the number of RK4 solver iterations. Upon re-evaluation, we acknowledge that the actual number of forward passes relates to the product of the RK solver order and the number of segments across the reparameterized variable interval $[-1, 1]$. This leads to a refined time complexity of $O(N^2 \times \tilde{P} \times d^2)$, where $\tilde{P}$ denotes the true forward-pass count. Moreover, our time complexity analysis focuses on the computation within one individual layer, as stated in the caption of Table 1 and the corresponding text in our paper. We assume that all the compared models can be stacked with multiple layers; thus, we omit the layer number in this analysis. In the revised manuscript, we are committed to refining Table 1 to ensure its alignment with our findings and discussions.

> **Q2: The underlying construction is kind of similar to the Rotary Position Embedding in [1], where the kernel trick is used to facilitate the computation (e.g., Eq. (27)-Eq. (32) in Appendix C). The effects of introducing this kernel function look like just adding a rotary positional embedding.**

We are grateful for your question, which provides an opportunity to clarify the distinctions between our paper and Roformer [1]. We are confident that our work stands apart from Roformer for several essential reasons. The first clarification is that the discussion from Eq. (27)-Eq. (32) in Appendix C of our paper concerns the Kernelized Attention mechanism; its purpose is *not* to categorize our method as Kernelized Attention, but rather to demonstrate that Kernelized Attention is a form of Transformer attention variant. In fact, we mathematically prove, as stated in Lines 583-584 in Appendix C, that with carefully designed function hypotheses, various Transformer variants (including those with Kernelized Attention) can be encompassed as special cases of ContiFormer. Consequently, given Roformer's use of a pre-defined "position" matrix, i.e., $\boldsymbol{R}_{\Theta, m}^d \in \mathbb{R}^{d \times d}$ in Eq. (15) of Roformer's paper, it falls into the category of the "Time Embedding Method", as defined in Eq. (29) in the Appendix of our paper. As evidenced by Theorem 1 in Line 643 of our Appendix, Roformer (as well as all relative position embedding methods) can be seen as a specific instance of ContiFormer. Technically, ContiFormer stands apart from Roformer in the following two aspects.

1. We prioritize modeling intricate dependencies of the inputs and learning the continuous-time evolving mechanism in irregularly-sampled data through a novel parametric continuous-time attention mechanism, differing from Roformer's focus on integrating positional information into the attention mechanism.
2. Our attention mechanism (Eq. (3), Section 3.1) captures evolving relationships of the input; it is input-dependent and (continuous-)time-aware, beyond the reliance on purely categorical positional information in Roformer, which cannot model the continuously evolving dynamics of the system underlying irregular time-series data.

We greatly appreciate your query, as it allows us to elaborate on these nuanced distinctions between our work and Roformer, providing a clearer understanding of our contributions.
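To make the distinction concrete: the rotary embedding discussed above is a fixed, input-independent rotation determined solely by position, so the resulting attention score depends only on the relative offset. A small numpy sketch (hypothetical dimensions, generic RoPE formulation rather than any paper's exact code) verifies this property:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary embedding: rotate each 2D sub-pair of x by pos * theta_i.

    Note the rotation depends only on `pos`, never on the values of x,
    which is what makes it a pure (time/position) embedding method."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)  # one frequency per 2D pair
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(1)
q, k = rng.normal(size=8), rng.normal(size=8)

# The score depends only on the relative position: shifting both by 5 changes nothing
s1 = rope(q, 7) @ rope(k, 3)
s2 = rope(q, 12) @ rope(k, 8)
print(np.allclose(s1, s2))  # True
```

An input-dependent, continuously evolving transform of $q$ and $k$ (as in a learned continuous-time dynamic) would not satisfy this shift-invariance in general.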
> **Q3: At present, the authors have not provided sufficient evidence to demonstrate the contribution of their model.**

Our work introduces a continuous-time attention approach tailored to the continuity and self-related nature of irregular time series. The contribution is three-fold.

1. To the best of our knowledge, we are the first to incorporate a continuous-time mechanism into the attention calculation of the Transformer, as shown in Section 3 of our paper, which is novel and captures the continuity of the underlying system of irregularly sampled time-series data, as shown in our experiments.
2. To resolve the conflict between the continuous-time calculation in the continuous attention mechanism and the parallel-computation property of the Transformer model, we provide a novel reparameterization method in Section 3.3, which divides the whole time horizon into segments and maps each into a fixed time range, so that the continuous-time attention can be executed in parallel across segments without hurting model capacity (as shown in our experiments). This offers a novel perspective for accelerating continuous-time modeling in other works such as Neural ODE.
3. Notably, we mathematically prove in Section 3.4 and Appendix C.2 that our proposed continuous attention mechanism is a universal attention approximator, and various Transformer variants, including kernel-based methods, can be viewed as special instances of our model.

Our approach uniquely aligns with the intricate characteristics of irregular time series and offers a broader scope that encompasses Transformer variants, which can shed some light on the further exploration of continuous-process modeling in Transformer models. We thank you for pointing out the confusion in our description. We will further refine our paper according to your valuable suggestions.

[1] Roformer: Enhanced transformer with rotary position embedding. (2021).
---

Rebuttal Comment 1.1:
Title: Thanks for your comments.
Comment: Thanks for the authors' comments. After reading the authors' feedback and other reviewers' comments, most of my concerns are addressed and I'll increase my rating accordingly. My remaining concern is mainly about the computation cost: the usage of the RK4 solver can be viewed as an implicit layer that is typically very time consuming for large-scale problems or large-scale model configurations. However, given the capacity of modern deep-learning hardware and the scale of real-world time series problems, I believe the computation cost of the proposed method won't be a major issue. It would be great if the authors could include a stress test in the final version on computation cost vs. problem scale/model size and discuss the *sweet zone* of the proposed method.

---

Reply to Comment 1.1.1:
Title: Reply to your remaining problems
Comment:
> **Q1: The usage of the RK4 solver can be very time consuming.**

Thank you for your insightful feedback. Utilizing an ODE solver such as the RK4 solver requires performing multiple forward passes, which can lead to computationally demanding operations. As shown in our reply to reviewer GCXG, the time for training and validation is roughly $6$ times that of the vanilla Transformer model, which is acceptable, with the RK4 step size equal to $0.1$ and the hidden dimension set to $8$ in our experiment. We can also leverage other techniques and packages to accelerate the ODE solving procedure, like diffrax as mentioned by reviewer 2E9G. These efforts are orthogonal to our work and we leave them as important future work. Thank you for your suggestion.

> **Q2: It would be great if the authors could include a stress test in the final version on computation cost vs. problem scale/model size and discuss the sweet zone of the proposed method.**

Thanks for your valuable suggestion.
Although we have measured the time cost on a single dataset, a stress test is necessary for a complete evaluation of the time cost and scalability of our model. In our revised manuscript, we are committed to addressing this aspect by incorporating a stress test that assesses how our proposed method performs across various input lengths and hidden sizes. We also intend to report the precise training/inference time for a wide range of datasets.

***

In short, we sincerely thank you for raising the rating of our paper. We remain open to further comments and suggestions, and we will follow your advice to improve the quality of our paper.
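As a concrete starting point, such a stress test might look like the harness below. Note this is only a sketch: `attention_forward` is a dummy stand-in with the discussed $O(N^2 \cdot P \cdot d^2)$ cost profile, not ContiFormer itself, and the grid of lengths and dimensions is an assumption.

```python
import time
import numpy as np

def attention_forward(N, d, P=4, seed=0):
    """Dummy workload with an O(N^2 * P * d^2) cost profile
    (a stand-in for one continuous-time attention layer)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(N, d))
    W = rng.normal(size=(d, d))
    out = np.zeros((N, d))
    for _ in range(P):                    # P solver sub-steps per layer
        proj = x @ W                      # the d^2 term
        scores = proj @ proj.T            # the N^2 term
        out += scores @ x / N
    return out

def stress_test(lengths=(64, 128, 256), dims=(8, 16)):
    """Wall-clock time for each (sequence length, hidden size) cell."""
    results = {}
    for N in lengths:
        for d in dims:
            t0 = time.perf_counter()
            attention_forward(N, d)
            results[(N, d)] = time.perf_counter() - t0
    return results

timings = stress_test()
for (N, d), sec in sorted(timings.items()):
    print(f"N={N:4d} d={d:3d}: {sec * 1e3:7.2f} ms")
```

The "sweet zone" can then be read off as the region of the (N, d) grid where the measured time stays within an acceptable budget.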
Rebuttal 1:
Rebuttal: ## General Response

We thank all the reviewers for their valuable and insightful suggestions! We are encouraged by the positive comments from the reviewers, e.g.,

* Addressing important research problems with high practical value (Reviewer GCXG)
* The proposed method is novel with significant contributions in the area (Reviewers 2E9G, v819, GCXG, jYof)
* The paper is well-written and easy to follow (Reviewers 2E9G, GCXG)
* Experimental results are promising, with theoretical analysis (Reviewers 2E9G, GCXG, jYof)

Below we clarify a few concepts in our paper and address some common problems.

***

> **Q1: Concern about the interpolation setting and visualization results in Section 4.1.**

As raised by reviewers 2E9G and PCsf, the interpolation result of Neural ODE in Fig. 2 of our paper is confusing. We offer the following explanation.

1. We want to clarify that the code in our experiments is the same as that in the original implementation, the [Neural ODE repo](https://github.com/rtqichen/torchdiffeq/blob/master/examples/latent_ode.py); we have kept most experimental settings the same.
2. To better illustrate the relative performance of the compared methods, we slightly modified the ground-truth spiral data generation process by adding some random noise to the underlying spiral function hyper-parameters (i.e., $a$ and $b$, as discussed in Appendix E.1.1). The task becomes more difficult since the dataset incorporates some distributional shift through this modification. As a result, the compared baselines (Transformer and Neural ODE) could not handle this challenge well and showed poor interpolation. In contrast, our proposed Contiformer is more robust in this setting, showing better interpolation performance with lower error (Table 2) and better interpolation visualization (Fig. 2) in our paper.
We also reproduced the experiment with *the same* spiral data generation as in the [Neural ODE repo](https://github.com/rtqichen/torchdiffeq/blob/master/examples/latent_ode.py), without changing any of the code, and we find that when Neural ODE is not well trained, piece-wise linear results may still occur. To address the concern, we have provided more visualization results in the attached PDF file (Fig. 1 and Fig. 2).

> **Q2: Missing standard deviations for significance tests in Section 4.2 and Section 4.3.**

As pointed out by reviewers 2E9G and PCsf, showcasing the results with standard deviations is crucial to establish the statistical excellence of Contiformer. We regret our oversight here. We have included the standard deviation results in the attached PDF file (Table 1 and Table 2). Moreover, it's important to note that for the UEA task in Section 4.2, our reporting approach aligns with the prevalent practice in the field of time series analysis [1, 2], where the overall performance in terms of mean accuracy and mean ranking is reported and compared. What's more, as pointed out by reviewer PCsf, with only 3 repeats it is not clear that 2 standard deviations are convincing, so it was suggested to check an unpaired t-test table. We really appreciate this valuable suggestion. However, due to the rebuttal timeline, we only managed to perform a complete t-test on the Synthetic and BookOrder datasets, using *10* random seeds over Contiformer and 6 baseline models. The significance test results can be found in the attached PDF file (Table 3 and Table 4 for the Synthetic and BookOrder datasets, respectively), and they illustrate the significant improvement of our method.

[1] A transformer-based framework for multivariate time series representation learning. KDD 2021.
[2] Timesnet: Temporal 2d-variation modeling for general time series analysis. ICLR 2023.
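For completeness, the unpaired t-test discussed in Q2 above takes only a few lines. The per-seed accuracies below are synthetic placeholders (assumed numbers for illustration), not our reported results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-seed accuracies over 10 random seeds, standing in for
# one method-vs-baseline comparison on a single dataset (made-up values)
method_a = rng.normal(loc=0.81, scale=0.01, size=10)
method_b = rng.normal(loc=0.74, scale=0.02, size=10)

# Unpaired Welch's t-test (no equal-variance assumption between methods)
t_stat, p_value = stats.ttest_ind(method_a, method_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

Repeating this per dataset and per baseline pair yields the kind of t-test table the reviewer requested.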
> **Q3: More detailed illustration of how we achieve parallelization; why reparameterize the time horizon to [-1, 1]?**

In L199-207 of our paper, we state that to preserve the parallelization of the Transformer architecture while implementing the continuous-time attention mechanism in Eq. (6), we first adopt a time variable ODE [1] to reparameterize the ODEs onto a single interval $[-1, 1]$, followed by a numerical approximation method to approximate the integrals. We believe that incorporating the time variable ODE to achieve parallelization of our model is novel, and we discuss this below.

Originally, to calculate the ODE process from $t_1$ to $t_N$ for $L$ latent ODE trajectories, we need to iterate forward from $t_1$ to $t_N$, which is hard to parallelize. To resolve the problem, rather than applying the ODE solver directly along the whole time horizon, we split the time horizon and reparameterize each time range $[t_i, t_j]$ ($1\leq i \leq j \leq N$) as $[-1, 1]$. Then, a single invocation of the ODE solver can be applied to all these time ranges, since the trajectory pieces share the same ODE function parameters in Eq. (1) of our paper. In this way, we can calculate these ODE functions in parallel, without sequentially integrating from $t_1$ to $t_N$. We believe this is aligned with the parallel execution of attention calculation in the vanilla Transformer model (applying the same attention function to the copied and stacked input sequence to enable parallel execution), and it further facilitates the parallelism of our Contiformer model.

[1] Ricky T. Q. Chen, et al. Neural spatio-temporal point processes. ICLR, 2021.

Pdf: /pdf/faba5c9420602fdc782639439b923cf4f013842f.pdf
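The change of variables behind the $[-1, 1]$ reparameterization in Q3 above can be sketched numerically: map each interval $[t_i, t_j]$ to $[-1, 1]$ and batch all intervals into one vectorized quadrature call. This is a generic Gauss-Legendre demonstration with a toy integrand, not the authors' implementation:

```python
import numpy as np

def gl_integrate(g, a, b, P=5):
    """Integrate g over each interval [a_k, b_k] by mapping it to [-1, 1]
    (t = (b-a)/2 * u + (b+a)/2) and applying P-point Gauss-Legendre
    quadrature; a and b are arrays, so all intervals are evaluated in one
    vectorized ("parallel") call rather than sequentially."""
    u, w = np.polynomial.legendre.leggauss(P)  # nodes/weights on [-1, 1]
    a, b = np.asarray(a, float), np.asarray(b, float)
    half = (b - a)[:, None] / 2.0
    mid = (b + a)[:, None] / 2.0
    t = half * u + mid                         # shape (num_intervals, P)
    return (half * w * g(t)).sum(axis=1)

# Example: integrate exp(t) over several irregular intervals at once
a = np.array([0.0, 0.3, 1.1])
b = np.array([0.3, 1.1, 2.0])
approx = gl_integrate(np.exp, a, b)
exact = np.exp(b) - np.exp(a)
print(np.max(np.abs(approx - exact)))  # tiny: Gauss-Legendre is high-order accurate
```

The same pattern generalizes to many $[t_i, t_j]$ attention intervals evaluated simultaneously, which is the source of the parallelism described above.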
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces ContiFormer, a continuous-time transformer-based model that leverages parallelism and can handle irregularly sampled data well, thereby removing the need to transform these datasets into discrete uniform bins. This set-up incorporates the continuous dependence on the data from differential-equation-based neural networks. Finally, an extensive range of experiments on irregularly sampled data is performed, showing promising results for ContiFormer.
Strengths: This paper presents an original architecture class that can encompass many existing models. The discussion on attention combined with the continuous-time set-up is limited, and therefore this paper will be a significant contribution in this area. The paper is generally well-written and extensive numerical experiments were performed.
Weaknesses: Whilst in general a fairly well-written paper, key parts of the model can be made clearer. For example, it is not clear from line 137 how the keys and values are initialised. There is also an assumption that is not explained: "we assume that every observation has influence on the dynamic system even before its occurrence." Combined with the fact that the query uses a natural cubic spline, does that mean the model is not causal and cannot be applied in an online fashion (see for example [Morrill et al 2022 On the Choice of Interpolation Scheme for Neural CDEs])? This paper does not appear to be comparing with the state-of-the-art models. To my knowledge, the S5 model (Simplified State Space layers for sequence modeling by Smith et al 2022) appears to already outperform quite a few of the baselines chosen in this paper, specifically mTAN, ODE-RNN, and GRU-$\Delta$t on the irregularly sampled pendulum task. Therefore this should be added as a benchmark. The pendulum regression task used in Smith et al.
seems to be used in quite a few irregular-sampling papers, and therefore it would be of interest to see how ContiFormer performs in this case. Whilst there is an extensive range of experiments, the full results with the standard deviation across the 3 repeats for Section 4.2 and Section 4.3 seem to be missing. These seem quite key to backing up some results, particularly for Table 4, where the standard deviation is referred to. Without this, it is difficult to quantify the "overall statistical superiority of ContiFormer" as claimed. There does not appear to be any evaluation on regularly sampled datasets. Whilst I appreciate that the marketed strength of this method is in irregularly sampled data, it would be useful to see what the performance is on regularly sampled datasets (against benchmarks) and whether one can simply choose this model to apply to all time series.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Since ContiFormer requires $N^2$ ODEs to be solved, how does the $P$ relate to $L$ in Table 2? Are these of similar orders? It seems like the Neural ODE performs much worse than expected, even for interpolation, compared with the example on the diffrax website https://docs.kidger.site/diffrax/examples/neural_ode/. Is this because of the regular vs. irregular sampling issue? Figure 4 in the appendix does not seem to be plotting the same curve in each row? Neural ODE is clearly on a different scale or a different curve in the third row.
Minor point: Line 119: donated -> denoted
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations were not discussed in the paper.
Given that the model makes use of cubic splines, the literature suggests it will not be causal. Performance on regularly sampled datasets is not clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We sincerely thank you for your comprehensive comments.

> **W1: L137 how keys & values are initialised.**

As in L124, the input ($Q$), $K$, and $V$ are initially derived from the input variable $X$. Besides, we also explain in Equations (7) and (8) that $K_i = X_i W^K$ and $V_i = X_i W^V$. Moreover, the evolution of $k_i$ and $v_i$ is controlled by a function $f(\cdot): \mathbb{R}^{d+1} \rightarrow \mathbb{R}^d$, which is implemented as an MLP, as elaborated in Appendix F.4.

> **W2: L138, an assumption that is not explained.**

We sincerely apologize for any confusion. Each observation's latent trajectory, rather than its influence, spans the entire time horizon, as exemplified in Figure 1. Our method further models the correlation among the underlying latent trajectories in the evolving system, which closely resembles the operation of the Transformer, where each token computes attention scores across all tokens along the temporal axis. For auto-regressive prediction scenarios, we can apply causal attention with a masking mechanism, similar to the vanilla Transformer, to avoid mistakenly leveraging future information.

> **W3: Can Contiformer be applied in an online fashion?**

Thank you for your great point! We indeed use cubic spline interpolation, which might limit its applicability in online settings. However, we want to clarify that the core architecture is not tied to this interpolation method. Therefore, we can explore alternatives to support online applications [1].

> **W4: The paper does not compare to SOTA models.**

We apologize for the oversight in comparing to this model. Due to the limited timeline, we leverage the public implementation of the S5 model, i.e., [s5-pytorch](https://github.com/i404788/s5-pytorch), and derive the experimental comparison on the UEA Classification benchmark in Section 4.2. Our model Contiformer exhibits a notable performance advantage over S5 on the UEA task.
Besides, we believe that, with a wider range of hyperparameter searches, S5's performance might be improved.

|Drop Ratio (%)|30||50||70||
|-|-|-|-|-|-|-|
|Model|S5|ContiFormer|S5|ContiFormer|S5|ContiFormer|
|Avg. ACC|0.7139|**0.8126**|0.6831|**0.7997**|0.6455|**0.7749**|
|# Top 1|2|**18**|2|**18**|1|**19**|

> **W5: The pendulum regression task should be included.**

Due to the limited timeline, we did not perform much parameter search and used most of the default hyper-parameter settings from the UEA task. The preliminary results are shown below. We ran the experiment 3 times and report the mean and standard deviation.

|Model|MSE ($\times 10^{-3}$)(std.)|
|-|-|
|ODE-RNN|7.26(0.41)|
|CRU (original)|4.63(1.07)|
|CRU (in S5 paper)|3.94(0.21)|
|S5|**3.41**(0.27)|
|Contiformer|4.21(0.24)|

We wish to emphasize that ContiFormer showcases superior performance compared to the S5 model on the UEA classification benchmark, as shown in our response to **W4** above.

> **W6: The standard deviation results are missing for Sections 4.2 and 4.3.**

We recognize the importance of incorporating standard deviations as a vital component of robust statistical analysis. We report the results in the attached PDF file (Table 1 for Section 4.2, Table 2 for Section 4.3). Moreover, it's important to note that for the UEA task in Section 4.2, our reporting approach aligns with the prevalent practice in the field of time series analysis [2, 3], where the overall performance in terms of mean accuracy and mean ranking is reported. Please refer to **Q2 in the General Response** for more details.

> **W7: Missing evaluation on regularly sampled datasets.**

We re-evaluated our model and other baselines in a regularly sampled time-series setting (i.e., with the drop ratio set to 0%). We also include an additional strong baseline, Autoformer [4], which is specifically designed for regular time-series data.

|Model|ODE-RNN|Neural CDE|Autoformer|TST|Contiformer|
|-|-|-|-|-|-|
|**Avg. Rank**|3.5|4.2|3.3|2.1|**1.85**|

We also want to recall that Contiformer has illustrated superiority over all the baselines in the irregularly sampled data setting, as shown in Table 3 of our paper.

> **Q1: How does the $P$ relate to $L$ in Table 2? Are these of similar orders?**

$L$ represents the number of function evaluations that the ODE solver requests in a single forward pass from $t_1$ to $t_N$, which is related to the sequence length $N$ as well as the time difference $(t_N - t_1)$. In contrast, $P$ is the number of intermediate steps for the integral approximation, which is a constant unrelated to either $N$ or the time difference $(t_N - t_1)$. Therefore, they are not of the same order and $P \ll L$. Also, as shown in L224, we set $P \leq 5$ in our experiments.

> **Q2: Neural ODE performs worse for interpolation.**

We want to clarify that the code in our experiments is almost the same as that in the original implementation, the [Neural ODE repo](https://github.com/rtqichen/torchdiffeq/blob/master/examples/latent_ode.py); we have kept most experimental settings the same. We modified the data generation process (a more comprehensive response can be found in **Q1 in the General Response** section), and as a result, Transformer and Neural ODE may yield poor interpolation. In contrast, Contiformer is more robust, with better visualization. More visualization results are uploaded in the attached PDF file. Sorry for the confusion; we will clarify the details of this experiment in our revised paper.

> **Q3: Incorrect Figure 4 in the Appendix.**

Apologies for the confusion. We have uploaded a corrected figure in the attached PDF file. Please refer to **Q1 in the General Response** for more details.

[1] On the choice of interpolation scheme for neural CDEs. TMLR 2022.
[2] A transformer-based framework for multivariate time series representation learning. KDD 2021.
[3] Timesnet: Temporal 2d-variation modeling for general time series analysis. ICLR 2023.
[4] Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. NeurIPS 2021.

--- Rebuttal Comment 1.1: Comment: Thank you to the authors for responding to some of the points that I raised and my questions, in particular, for doing further experiments and some comparisons with S5 and testing on the pendulum task in such a short time period. With regards to online applications, I would still like to check its practical implementation. Of course I see that the interpolation methods can be changed and you have done an ablation study responding to **jYof** to show that the results may not be so sensitive. In terms of your parallelization, it seems that each irregular time period is rescaled to $[-1,1]$ first and then functions are suitably transformed. The point of this seems to be "decoupling" the problem along the time axis, is there anything to ensure continuity in time?

--- Reply to Comment 1.1.1: Title: Reply to your remaining questions Comment:

> **Q1: With regard to online applications, I would still like to check its practical implementation.**

Thanks for your inquiry. We acknowledge that our current implementation using a natural cubic spline may suffer in online applications. However, we can make the following change in our code to support interpolation methods like cubic Hermite splines with backward differences [1], as you suggested. In our implementation, given the input matrix $Q$ and the corresponding time for each observation $T$, we invoke `coeffs=torchcde.natural_cubic_coeffs(Q, t=T)` to construct the continuous path. Therefore, in the case of online applications, we can replace it with `coeffs=torchcde.hermite_cubic_coefficients_with_backward_differences(Q, t=T)`.

> **Q2: It seems that each irregular time period is rescaled to $[-1, 1]$ first and then functions are suitably transformed.
The point of this seems to be "decoupling" the problem along the time axis, is there anything to ensure continuity in time?**

Thanks for your great point! The rescaling operation used in Eq. (9) enables parallel computation over time. However, this operation may induce additional numerical errors. Therefore, unfortunately, there is no guarantee of continuity along the time axis. However, we would like to highlight two factors that help control the error, and therefore relieve the influence of the continuity issue of the latent trajectories in time.

1. Error in the ODE solver: when utilizing a numerical ODE solver, inherent errors may arise during the solution process. Nevertheless, such errors can be effectively controlled by adopting a smaller step size or a tighter error tolerance [2].
2. Error in the numerical approximation: as shown in Eq. (9) of the paper, we use a numerical approximation method (e.g., Gauss-Legendre quadrature) to approximate an integral over $[-1, 1]$. The approximation error can be bounded; besides, we can increase $P$, the number of intermediate steps for integral approximation, to achieve a lower approximation error.

Overall, as we discussed in Appendix D, Lines 658-660, the output of ContiFormer can be considered "continuous" only if we overlook the approximation and numerical errors in the ODE solver. Also, our framework allows the user to trade off speed for precision. Furthermore, the empirical findings presented in Section 4 illustrate that ContiFormer consistently delivers commendable performance across diverse tasks despite numerical error. In the revised manuscript, we will provide a more detailed discussion of the continuity of ContiFormer. We hope these explanations have addressed your concerns.

[1] On the choice of interpolation scheme for neural CDEs. TMLR 2022.
[2] Chen, Ricky TQ, et al. "Neural ordinary differential equations."
NeurIPS 2018.
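To make the quadrature trade-off above concrete, here is a minimal, self-contained sketch (ours, not the ContiFormer implementation) of a fixed-$P$ Gauss-Legendre rule on $[-1, 1]$. Because $P$ is a small constant (the paper uses $P \leq 5$), the cost of the approximation is independent of the sequence length, which is why $P \ll L$.

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1] (P = 3): exact for polynomials
# of degree <= 2P - 1 = 5. Nodes/weights are the standard tabulated values.
NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss_legendre_3(f):
    """Approximate the integral of f over [-1, 1] with P = 3 evaluations."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

# Exact value of the integral of x^4 over [-1, 1] is 2/5 = 0.4.
approx = gauss_legendre_3(lambda x: x ** 4)
```

Increasing the number of nodes $P$ lowers the approximation error at the cost of more evaluations of the ODE state, which is the speed-versus-precision trade-off mentioned above.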
Conformal Prediction Sets for Ordinal Classification
Accept (poster)
Summary: Conformal prediction for classification considers non-ordinal classes, which potentially produces sub-optimal prediction set size for ordinal classification. The proposed approach addresses this problem for a unimodal label distribution. To this end, the proposed approach designs a novel conformity score function that fully utilizes the unimodal assumption, so a trained score function strictly returns a unimodal distribution over labels. The efficacy of the proposed approach is theoretically justified in theorems and empirically demonstrated over one synthetic dataset and four real datasets, showing that the proposed approach consistently achieves smaller set size compared to baselines and always returns contiguous sets (as claimed). Strengths: **Originality**: To my understanding, this is a novel usage of conformal prediction to ordinal classification. Moreover, the paper proposes a simple yet effective sufficiently-parameterized scoring function (i.e., (5)) that always returns contiguous sets under the unimodal assumption. Even better, this new score function design is theoretically justified in Theorem 2. This theorem is also empirically justified (via CV% = 0). Given the unimodal score function, existing conformal prediction algorithms intuitively return a contiguous set. But, this is carefully analyzed in Theorem 1. **Quality**: I think the paper quality is good. The notations are well-defined, the claims are rigorously analyzed via theorems, and the main claim is also well-justified empirically. **Clarity**: The paper is clearly written (and thanks for errata in Appendix). **Significance**: I believe the proposed scoring function (5) is sufficiently novel (and simple) to bring interesting related papers in conformal prediction and ordinal classification. Last but not least, this paper sufficiently motivates the practical necessity of conformal prediction for ordinal classification in introduction (by using examples, like proximal shoe size). 
I like this practical connection of conformal prediction in real scenarios, which is usually missing in conformal prediction papers. Weaknesses: I have minor comments. * My main concern is the unimodal assumption (which is also mentioned in the limitation section). Based on the empirical results, many distributions for ordinal classification closely satisfy the unimodal assumption (based on the results that the proposed approach’s CV% is zero). Even if it is violated, I think it does not affect the coverage rate. Still, it would be good to discuss potential mitigations if this assumption is violated. * When I read Section 4.2, I wanted to see a simple illustration contrasting a label distribution from a naive DNN with that of (5), showing that the naive approach fails to achieve a unimodal output while (5) does. If the left image of Figure 1 is an illustration of the actual data, it’s better to highlight this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I also only have minor questions. * What can be potential mitigations if the unimodal assumption is violated? * For Table 3, is there a particular reason that LAC and APS are compared via SSCV instead of the coverage rate? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The limitations that I expected are discussed in Conclusion, and as mentioned before, it would be more interesting to discuss possible mitigations for these limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. Please find our response below.

**Comment:** “My main concern is the unimodal assumption … Even though it is violated, I think it does not affect coverage rate.”

**Response:** Yes, it is true that even if the unimodality assumption does not hold, the theoretical coverage guarantees of the prediction set produced by COPOC would still apply. The bound itself, however, would be weaker since the fitted distribution deviates significantly from the true underlying distribution.

**Comment:** “When I read Section 4.2, I wanted to see a simple illustration between a label distribution from a naive DNN and that of (5), which contrasts the naive approach fails to achieve a unimodal output, while (5) does. If the left image of Figure 1 is an illustration of the actual data, it’s better to highlight this.”

**Response:** Thanks for the nice suggestion. Figure 1 in the paper is not from real data, but in Figure 1 of the uploaded 1-pager PDF (in the global rebuttal section), we have included an illustration that is based on examples from the real public dataset Adience. In the revised version of the paper, we will consider including this new figure.

**Comment:** “What can be potential mitigations if the unimodal assumption is violated?”

**Response:** We acknowledge that this is an important aspect to consider since COPOC will lead to a sub-optimal fit if the unimodality assumption does not hold. One potential mitigation is to evaluate the validity of the assumption by comparing the likelihood of the vanilla DNN trained with cross-entropy with that based on the COPOC approach. If the COPOC fit is much inferior to that of the unconstrained DNN, it would most likely indicate that the assumption is not valid. In such a case, a direct application of conformal prediction such as APS would be preferable.
The table below shows the negative log-likelihood (NLL) of a vanilla DNN fitted with cross-entropy loss (V-CE) and COPOC on the four real datasets that we considered. For these datasets, the superior fit of COPOC indicated by lower NLL justifies the validity of the unimodality assumption, which predictably led to smaller prediction set sizes.

| | V-CE | COPOC |
|--------------|----------------|----------------|
| HCI | 1.73 ± 0.13 | 1.59 ± 0.15 |
| Adience | 2.33 ± 0.18 | 1.66 ± 0.21 |
| Aesthetic | 1.49 ± 0.01 | 0.71 ± 0.02 |
| Retina MNIST | 1.24 ± 0.04 | 1.23 ± 0.04 |

**Comment:** “For Table 3, is there a particular reason that LAC and APS are compared via SSCV instead of the coverage rate?”

**Response:** In Table 3, we compare against two of the most popular conformal prediction (CP) methods, APS and LAC, on the output prediction of our proposed unimodal model, at $\alpha$ = 0.1. Since we fix $\alpha$ at 0.1, both LAC and APS ought to produce a prediction set with at least 90% (= $1-\alpha$) marginal coverage on unseen test points. The equation for marginal coverage is given in Eqn. 1 of our manuscript; the equation for conditional coverage is given in Line 79 and is a stronger notion of coverage in some sense. To compare different CP methods in terms of conditional coverage, Size-Stratified Coverage Violation (SSCV) is a metric that is commonly used to measure violations of the conditional coverage property and is particularly suited for high-dimensional data (A. Angelopoulos et al. '20). Since LAC produces shorter prediction sets, as seen from Table 3, we thought it would be interesting to compare both of them against a conditional coverage metric. From Table 3 and Fig. 6, it is evident that LAC achieves the smallest prediction set size but sacrifices adaptiveness (conditional coverage) in the process.
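To make the calibration and inference steps of LAC concrete, here is a minimal split-conformal sketch (our illustration with toy data; function names are ours, not the paper's): each calibration point is scored by $1 - \hat{p}(\text{true label})$, and the threshold is the finite-sample-corrected empirical quantile of these scores.

```python
import math

def lac_calibrate(cal_probs, cal_labels, alpha):
    """LAC calibration: score each point by 1 - p(true label), then take
    the ceil((n+1)(1-alpha))-th smallest score as the threshold q_hat."""
    n = len(cal_labels)
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    k = min(math.ceil((n + 1) * (1.0 - alpha)), n)
    return scores[k - 1]

def lac_predict(probs, q_hat):
    """Prediction set: every label whose predicted probability >= 1 - q_hat."""
    return [y for y, p in enumerate(probs) if p >= 1.0 - q_hat]

# Toy calibration set: 9 points, true label 0 with decreasing confidence.
cal_probs = [[c, (1 - c) / 2, (1 - c) / 2]
             for c in [0.95, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55]]
cal_labels = [0] * 9
q_hat = lac_calibrate(cal_probs, cal_labels, alpha=0.1)
```

On a unimodal probability vector, such a probability-threshold set is automatically contiguous, which is the property COPOC's training objective is designed to guarantee.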
Please let us know if you have any other questions or if there is anything else that we could add to further improve the submission. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the response! The answers address my minor concerns so I'll maintain my score.
Summary: The authors present a method for ordinal classification, COPOC, which guarantees (by functional form) unimodal prediction distributions over the ordered classes and consequently guarantees contiguous prediction sets for uncertainty estimation via conformal prediction . The authors argue that unimodality is desirable for many ordinal classification tasks, e.g. the prediction set for customer shoe sizes should be something like size 5,6,7 rather than e.g. sizes 4,8,9. A theoretical result is obtained which bounds the size of the fitted model prediction set in terms of the size of the ground truth prediction set as well as the distance between the fitted prediction distribution and the ground truth prediction distribution for a given desired conformal prediction coverage level. Favorable experimental results in comparison to competing methods are presented on a suite of real-world ordinal image classification datasets. COPOC is compared to competing techniques on a variety of metrics, including accuracy, size of prediction set and contiguity of prediction set. Experiments on synthetic datasets are used to illustrate the consistency of COPOC as well as to examine the merits of different possible choices of conformal prediction algorithm. Strengths: Overall, the work is original to the best of my knowledge. I find the writing and presentation to be clear for the most part. The architecture presented in section 4.2 seems well-designed and does indeed seem to have just the right level of inductive bias, if we take for granted that unimodal predictive distributions are desirable. It guarantees unimodality without making additional parametric assumptions. Although I would have preferred more detail on the real world experiments, the breadth of results and metrics evaluated seems adequate to me. 4 datasets evaluated on 6 metrics is a reasonably thorough set of experiments. 
Although I would not characterize the topic as being highly significant or extremely important, the significance seems adequate to me for NeurIPS acceptance. Conformal prediction is a highly useful technique in which interest continues to grow. Ordinal classification, while not a top priority for many modelers, is important enough in certain applications that its intersection with conformal prediction is a worthwhile topic. Weaknesses: Although unimodal prediction distributions may often be desirable for ordinal classification, I find the case for unimodality to be overstated. Although the authors do acknowledge on line 357 (in the limitations section) that unimodality might not hold, I would have preferred a bit more discussion of the scenarios where it might not hold. For instance, I have some experience with recommendation systems (Netflix-style 5 star ratings) and there are merits to models in that domain which might detect "love-it-or-hate-it" scenarios where both 1-star and 5-star are more likely than 3-star. Even for cancer stage detection, I think it's possible that non-unimodal predictions could be desirable. One could imagine that there are 2 possible hypotheses explaining an observed symptom. If the observed symptom is due to a preexisting condition unrelated to the cancer, then the overall evidence might point to stage 1. If the observed symptom is due to the cancer itself, stage 4 might be implied. So more of an acknowledgement that unimodality might lead you astray would be good here. I also find the details of the real-world-dataset experiments to be a bit lacking. There is minimal discussion of hyperparameter optimization. I would have more confidence in the results if I knew a thorough attempt was made to optimize the hyperparameters of each method.
Although I don't have a detailed understanding of the competing methods, the nonparametric ones (POE, Uni-Loss) would (at first glance) appear to have some hyperparameters associated with them and it's not clear to me whether those hyperparameters were optimized. I think more details on the real-world experiments and less detail on the synthetic experiments would be a better use of the 9 pages allowed. Some suggested typo-ish edits: Line 137 consider minimal contiguous set -> consider the minimal contiguous set Line 169 has->have Line 242 of underlying distribution -> of the underlying distribution Line 261 Accuracy@k lowercase k vs capital K earlier (line 12 in abstract) is inconsistent. ***** Update post -rebuttal *** In light of the additional information provided by the authors (both regarding the appropriateness of the unimodality assumption and the hyperparameter tuning), I have raised my score to a 7. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Did you optimize the hyperparameters of each competing method on some sort of validation set before doing the final test set evaluation for each method? If so, for which of the various metrics reported was the hyperparameter optimization conducted? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. Please find our response below.

**Comment:** I find the case for unimodality to be overstated... So more of an acknowledgement that unimodality might lead you astray would be good here.

**Response:** We completely agree with the reviewer that the unimodality assumption might not be universally applicable for all ordinal classification scenarios. The Netflix ratings scenario mentioned by the reviewer is an apt example. However, there do exist a large number of critical ordinal classification applications where it is beneficial to assume unimodality, as validated by multiple notable works in the computer vision domain [4, 22, 41, 12, 15, 20]. Examples include medical diagnosis applications such as cancer stage detection. In this case, the symptoms or indicative factors are often real-valued or ordinal variables, such as the size of the tumor, the number of cells in the tumor, the amount of cancer in mammary, axillary, and sentinel lymph nodes, the number of lymph nodes involved, and the number of cancer-afflicted organs, which are monotonically related to the target label, i.e., the stage of cancer. This makes it reasonable to assume a unimodal class distribution. The table below shows the negative log-likelihood (NLL) of a vanilla DNN fitted with cross-entropy loss (V-CE) and COPOC on four real datasets from our paper. The superior fit of COPOC indicated by lower NLL justifies the unimodality assumption for these datasets.

| | V-CE | COPOC |
|--------------|----------------|----------------|
| HCI | 1.73 ± 0.13 | 1.59 ± 0.15 |
| Adience | 2.33 ± 0.18 | 1.66 ± 0.21 |
| Aesthetic | 1.49 ± 0.01 | 0.71 ± 0.02 |
| Retina MNIST | 1.24 ± 0.04 | 1.23 ± 0.04 |

We do, however, acknowledge the reviewer’s point.
In the revised introduction, we will definitely mention ordinal classification examples (e.g., prediction of preference ratings, event-hour-of-day) where unimodality does not hold. We have also pointed out in the limitation section of our manuscript that COPOC assumes the underlying distribution is unimodal and might lead to a sub-optimal fit if the assumption does not hold. We will additionally mention that one could potentially check the validity of the assumption by comparing the likelihoods of the unimodal and unconstrained fits. Please do note that even if the unimodality assumption is not true, the theoretical coverage guarantees of the prediction set produced by COPOC would still hold, but the bound itself is weaker since the fitted distribution deviates significantly from the true underlying distribution.

**Comment:** Did you optimize the hyperparameters of each competing method on some sort of validation set before doing the final test set evaluation for each method? If so, for which of the various metrics reported was the hyperparameter optimization conducted?

**Response:** We have presented some of the key implementation details (feature extractor backbone and training procedures) for the real-world datasets and experiments in Appendix C.1. We acknowledge that there might be some ambiguity about the hyperparameters; we apologize to the reviewer and would like to take this opportunity to clarify. On the public benchmark datasets where the official best hyperparameters are available for baseline methods (for instance, the best settings for POE and SORD on the Adience, HCI, and Aesthetic datasets, and the best settings for Binomial on Adience) from the corresponding authors' work or code, we directly use those settings. We were able to replicate the results (MAE and Accuracy) on these datasets as reported by them.
For all other cases (namely AVDL, Uni-Loss), we optimize for MAE in the hyperparameter search since that is the most common metric used for ordinal classification tasks across competing benchmarks. We cross-validate over the following grid:

- learning rate ∈ {1e−2, 1e−3, 1e−4} with a decay rate of 0.2
- weight decay ∈ {0, 1e−3, 1e−2, 1e−1}
- dropout rate ∈ {0.1, 0.25, 0.5, 0.75}
- Adam optimizer with default settings

A few additional algorithm-specific hyperparameters that were omitted from the main manuscript and needed tuning were:

- For POE, there are two extra hyperparameters $\alpha$ and $\beta$ in its distance-aware loss function in embedding space, which we search over {1e−3, 1e−4, 1e-5, 1e-6} as suggested by the authors.
- SORD describes three types of distance-metric losses: absolute inter-class distance, squared distance, and its log variant. We search over these loss functions too.
- AVDL requires choosing the initial variance (of the Gaussian) of all images, which we search over {0.25, 0.5, 1, 2}, similar to their work.
- Uni-Loss has a $\lambda$ hyperparameter that controls the weighting between the unimodality and mean-variance components of its loss function. We search $\lambda$ over {10, 100, 500, 1000, 5000}.

**Comment:** I think more details on the real-world experiments and less detail on the synthetic experiments would be a better use of the 9 pages allowed.

**Response:** Thanks for the feedback. We had to perform an ablation study on synthetically generated data drawn from various unimodal distributions to study the efficacy of our proposed non-parametric unimodal DNN model against other state-of-the-art baseline models. In the revised version, we will attempt to better balance the placement of content between the main paper and the appendix.

**Comment:** Some suggested typo-ish edits: … inconsistent

**Response:** We sincerely appreciate the reviewer’s careful reading of our submission and will fix the typos.
We respectfully request that the reviewer assess our contributions again and consider increasing the score. Please do let us know if there is anything else that we can do to clarify or improve our submission. --- Rebuttal Comment 1.1: Title: Thanks for the response. I raised my score to 7 Comment: Given the additional info provided regarding the justification and context for the unimodality assumption and regarding hyperparameter tuning, I raised my score to 7. --- Reply to Comment 1.1.1: Comment: Thanks so much for reading through our response and revising your score. We will add the appropriateness of the unimodality assumption, potential mitigation steps, and additional details regarding hyperparameter tuning in the revised version of the paper.
Summary: The authors explained the difference between their method and prior work satisfyingly. I had a misunderstanding in my previous reading. I'd be happy to support the paper's acceptance. It is a minor contribution from a theoretical perspective, but I agree that practically, it's probably a better way of constructing unimodal prediction sets than Lu et al. _______________ The paper proposes a modification of conformal prediction for the ordinal classification case. The idea is that, when the distribution is unimodal, you should always be building continuous intervals; so you restrict the family of possible prediction sets to be the ones that are only contiguous. Strengths: The paper is relatively clear, and the problem is important. Weaknesses: Post-rebuttal note: I was wrong about the below. The methods are definitely related, but not identical. Lu et al. applies the constraint of unimodality during the set construction. The present manuscript does so during the model training. _______ The method is not novel, and was previously proposed in this reference: Lu, C., Angelopoulos, A. N., & Pomerantz, S. (2022, September). Improving trustworthiness of AI disease severity rating in medical imaging with ordinal conformal prediction sets. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 545-554). Cham: Springer Nature Switzerland. (Equation 2 in https://arxiv.org/abs/2207.02238 is the same as Equation 1 in the present manuscript.) There has also been since a followup of that work on risk control: https://openreview.net/forum?id=9R5qObx8WO5 . For this reason, I do not believe the paper contains any novel methodological advances, and thus cannot be accepted. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Can the authors clarify my comment about novelty? If I have misunderstood, I would be happy to hear it. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for pointing us to Lu et al. '22 and Yunpeng et al. '23, both of which we were not aware of earlier. While the motivation of both these papers (namely, conformal prediction for ordinal settings through contiguous prediction sets) overlaps with ours, our solution approach and contributions are significantly different, as we clarify below.

**Comment:** The method is not novel, and was previously proposed in Lu et al.

**Response:** Lu et al. propose a new conformal prediction (CP) method to output a contiguous prediction set (PS) over ordinal labels irrespective of whether the posterior distribution generated by the model is unimodal or not. Similar to typical conformal prediction methods, Lu et al.'s Ordinal APS approach involves a calibration step followed by inference. In the calibration step, the conformal threshold $\lambda(\alpha)$ for the desired coverage $(1-\alpha)$ is learned using a calibration dataset following the equation before Theorem 1 in Lu et al. During inference, given a desired coverage $(1-\alpha)$, the prediction set for a given instance is constructed by starting at the label with the highest predicted score and progressively expanding the interval on either side until the covered probability mass exceeds $\lambda(\alpha)$ (Algorithm 1). However, this greedy construction can often lead to large PS sizes when the posterior distributions are far from unimodal (e.g., a non-unimodal output prediction distribution [0.2, 0.1, 0.4, 0.3] over 4 ordinal classes will result in a PS of size 4 to include 0.9 probability mass, while a unimodal distribution [0.1, 0.2, 0.4, 0.3] would result in a PS of size 3). Figure 2 in the submitted PDF (in the global rebuttal) shows the three steps (training, calibration, inference) of COPOC and Lu et al.

**Differences in contributions**

- **Conformal Calibration and Inference:** Unlike Lu et al., we do not propose any new CP method and instead leverage the well-established works APS (Romano et al. '20) and LAC (Sadinle et al. '19). We show that in the case of ordinal classification where the true class distribution is unimodal, any model constrained to output a unimodal distribution can be used with APS, LAC, or similar conformal prediction algorithms to yield a contiguous PS with a guaranteed coverage level.
- **Unimodal Training:** Our key contribution is a novel non-parametric method for training a DNN which is guaranteed to output a unimodal posterior over class labels while ensuring that any arbitrary unimodal class distribution can be approximated (Theorem 2 of our paper). Lu et al. do not propose any modification to the model training step.
- **Theoretical Results:** In Theorem 1 of our paper, we provide a tight upper bound on the cardinality of the PS generated by APS on top of the predicted unimodal posterior distribution in terms of the optimal set. On the other hand, Theorem 1 in Lu et al. is a standard coverage guarantee that follows from Vovk et al. '99 without any bound on PS cardinality.
- **Empirical Results:** We provide empirical results on synthetic and real-world datasets comparing different approaches to achieving unimodality and SOTA CP methods, to gain insights into the different methods and demonstrate the efficacy of COPOC. Lu et al. focus primarily on spinal stenosis severity prediction, comparing their proposed Ordinal APS with LAC.

**Relative Performance of the methods:** The table below shows an empirical comparison of COPOC against APS and the Ordinal APS of Lu et al. applied over a vanilla DNN trained with cross-entropy loss (V-CE), on synthetic data D4 and the public datasets mentioned in Sec. 5 of our manuscript. For V-CE with APS, we consider a minimal contiguous interval that covers the output PS and report its size.

| | V-CE with APS | V-CE with Ordinal-APS of Lu et al. | COPOC |
|--------------|----------------|-----------------|----------------|
| Synthetic-D4 | 4.67 ± 0.03 | 4.59 ± 0.03 | 4.50 ± 0.02 |
| HCI | 3.28 ± 0.14 | 3.03 ± 0.15 | 2.66 ± 0.13 |
| Adience | 4.82 ± 0.24 | 2.67 ± 0.12 | 2.26 ± 0.06 |
| Aesthetic | 1.96 ± 0.2 | 1.77 ± 0.05 | 1.70 ± 0.06 |
| Retina MNIST | 3.6 ± 0.08 | 3.28 ± 0.02 | 3.03 ± 0.01 |

We observe that Lu et al. produces significantly shorter sets compared to V-CE with APS. However, COPOC significantly outperforms Lu et al. across all datasets because of its better unimodal data fit.

**Comment:** Equation 2 in Lu et al. is the same as Equation 1 in the present manuscript.

**Response:** We presume the reviewer is referring to Equation 3 in Lu et al. matching Equation 1 in the current paper. This is the definition of marginal coverage. It is natural that the basic notations and equations defining marginal and conditional coverage are similar, as these are common to the conformal prediction literature and based on the seminal work of Vovk et al. '99.

**Comment:** There has also been since a followup of that work on risk control [Yunpeng et al. '23].

**Response:** The cited follow-up work was published recently in UAI '23 [Aug 1st-3rd], well after the NeurIPS submission deadline. Furthermore, the primary new contribution of Yunpeng et al. is extending Lu et al. to the case where classes have differential weights. It is in fact orthogonal to our work, and the ideas proposed in Yunpeng et al. can also be applied along with our COPOC method to accommodate differential weighting of classes. We will definitely cite Lu et al. and Yunpeng et al. in the revised version and add the experimental comparison with Lu et al. as an additional baseline. We respectfully request that the reviewer assess our contributions again and consider revising the score. Please do let us know if there is anything else that we can do to clarify or improve our submission.
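The greedy Ordinal-APS construction of Lu et al. described above can be sketched as follows (our illustrative code, not theirs, using the nominal level $1-\alpha$ in place of the calibrated threshold $\lambda(\alpha)$, with a small tolerance to absorb floating-point error). It reproduces the size-4 versus size-3 example from the response.

```python
def ordinal_aps_set(probs, alpha, tol=1e-9):
    """Greedy contiguous set: start at the most likely label and repeatedly
    absorb the more probable neighbour until the set's mass reaches 1 - alpha."""
    peak = max(range(len(probs)), key=probs.__getitem__)
    lo = hi = peak
    mass = probs[peak]
    while mass < 1.0 - alpha - tol:
        left = probs[lo - 1] if lo > 0 else -1.0
        right = probs[hi + 1] if hi + 1 < len(probs) else -1.0
        if left >= right:
            lo, mass = lo - 1, mass + left
        else:
            hi, mass = hi + 1, mass + right
    return list(range(lo, hi + 1))

# Non-unimodal fit: the greedy expansion must sweep past the low-mass label 1,
# giving a set of size 4; the unimodal fit needs only 3 labels.
big = ordinal_aps_set([0.2, 0.1, 0.4, 0.3], alpha=0.1)    # labels 0..3
small = ordinal_aps_set([0.1, 0.2, 0.4, 0.3], alpha=0.1)  # labels 1..3
```

This is the mechanism behind the gap in the table above: Ordinal APS guarantees contiguity but, on a multi-modal fitted distribution, must include low-probability labels sitting between the modes.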
--- Rebuttal Comment 1.1: Title: I agree with the comments Comment: I now understand better the contribution of the paper, thanks. What I meant to say was that the definition of $\mathcal{T}$ in Lu et al. is the same construction as (4) in the present manuscript. That much is certainly true. I was confused by the manuscript's set construction, because it is an emergent property of using APS or LAC with a unimodal model. I see now that Theorem 1 is novel, and true. I don't see it as a major theoretical contribution, but it does in some sense justify the method. Great that it beats Lu et al. It sounds like regularizing the model to be unimodal is a better approach; this makes a lot of sense to me. The Lu et al. paper should be included as a baseline somewhere, even if in the appendix. I'll revise my score to a borderline accept. Thanks for correcting my mistake. --- Reply to Comment 1.1.1: Comment: Thanks so much for reading through our response and revising your score. We really appreciate the pointers to Lu et al. and Yunpeng et al. We will definitely discuss both in the related work and also include the empirical comparison with Lu et al. in the revised version. Below is the response to the comment on the prediction set (PS) construction. It is indeed true that the construction in Eqn. 2 of Lu et al. and Eqn. 4 of our paper both define the minimal contiguous set for a desired coverage as the oracle prediction set. However, we wish to clarify that the actual output prediction sets from these algorithms (LAC, APS or Ordinal APS of Lu et al.) are different from the oracle prediction sets, since these algorithms operate on the fitted distribution and not the true one. Furthermore, to ensure marginal coverage of $(1 − \alpha)$, these CP algorithms try to find minimal prediction sets that cover probability mass greater than or equal to $\lambda$ in Lu et al. and $\hat{q}_{D\\_cal}(\alpha)$ in Eqn.
3 of our manuscript, which are determined by suitable conformal calibration on hold-out data. Typically, these parameters would be larger than $(1 − \alpha)$ if the fitted distribution is not very accurate. Lemma 1 in our paper establishes the contiguity of the prediction sets resulting from LAC and APS, while Theorem 1 provides a bound on the cardinality of the PS generated by the APS algorithm with unimodal training relative to the oracle PS. While Lu et al. also produces a contiguous set, since its fitted distribution is not unimodal, it often yields a larger prediction set. When the fitted distribution is unimodal, the Ordinal APS of Lu et al. generates the same prediction set as APS.
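To make the set-construction mechanism discussed above concrete, here is a minimal sketch (ours, not the paper's code) of a derandomized APS-style greedy construction. The names `aps_set` and `qhat` are illustrative, with `qhat` standing in for the calibrated threshold, and the probability vectors are made up for illustration:

```python
import numpy as np

def aps_set(probs, qhat):
    """Greedy APS-style prediction set: add classes in decreasing
    probability order until the cumulative mass reaches qhat."""
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, qhat)) + 1
    return sorted(order[:k].tolist())

def is_contiguous(s):
    return s == list(range(s[0], s[-1] + 1))

# A unimodal posterior over 7 ordinal classes (mode at class 3):
p_uni = np.array([0.02, 0.08, 0.20, 0.35, 0.20, 0.10, 0.05])
assert is_contiguous(aps_set(p_uni, qhat=0.9))

# A bimodal posterior can produce a set with a gap:
p_bi = np.array([0.30, 0.05, 0.02, 0.05, 0.30, 0.18, 0.10])
assert not is_contiguous(aps_set(p_bi, qhat=0.62))
```

Under a unimodal distribution the greedy order walks outward from the mode, so the accumulated set stays contiguous; the bimodal example shows how contiguity breaks without the unimodality constraint.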
Summary: The paper addresses the problem of adapting conformal prediction methods to ordinal classification so that the predictor outputs contiguous prediction sets. Contributions can be split into two parts. The first part deals with adapting existing conformal prediction methods to ordinal classification. The main observation is that a predictor with a unimodal posterior over classes is enough to guarantee that the conformal prediction methods will return contiguous prediction sets. This part is relatively straightforward. The second part involves the proposal of a novel non-parametric method for training an NN that outputs a unimodal posterior over class labels. The proposed method is empirically evaluated and shown to provide competitive results when compared to existing ordinal classification methods used for conformal prediction. Strengths: The paper is sound and it is very clearly written. A new non-parametric method for training an NN-based ordinal regression predictor with a unimodal posterior over class labels is a simple, elegant idea, which is to my knowledge novel and potentially useful in practice; not only in the context of conformal prediction. Weaknesses: I have no major objections. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In the experiments on real data, Tab. 1, the proposed method does not perform best in terms of MAE, which might be attributed to the fact that the true posterior might not be exactly unimodal. It would be instructive to see whether this changes on the synthetic data, which presumably are generated from an unimodal distribution (although whether this is true is not clear to me; the description should be more explicit in this respect). I.e., I would suggest also reporting MAE in Table 5.3. --- The authors satisfactorily addressed my questions in the rebuttal. I keep my positive ratings. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. Please find our response below.

**Comment:** In the experiments on real data in Table 1, the proposed method does not perform best in terms of MAE, which might be attributed to the fact that the true posterior might not be exactly unimodal.

**Response:** The reviewer's insight is likely true. For the real datasets in Table 4 (in the appendix), we do observe that COPOC performs on par with other state-of-the-art baselines in terms of MAE. This is also evident from the fact that this model has overlapping error intervals with other state-of-the-art models in terms of MAE. The real benefit of COPOC is in improving Acc@2 and Acc@3 using its unimodality bias.

**Comment:** It would be instructive to see whether this changes on the synthetic data, which presumably are generated from an unimodal distribution (although whether this is true is not clear to me; the description should be more explicit in this respect). I would suggest also reporting MAE in Table 5.3.

**Response:** We would like to apologize for the lack of clarity on the synthetic data. All synthetic datasets D1-D4 in Sec. 5.3 (Table 2) are indeed drawn from various unimodal distributions. The goal was to do an ablation study of our proposed non-parametric unimodal DNN model against other baseline models for different underlying unimodal data distributions. Below we present the MAE metrics for the datasets D1-D4 in Table 2 in Sec. 5.3 of our paper. Mean and std. error are reported across 10 random trials. Best mean results are bolded. 
Labels in the table are as described in Table 2 of Sec. 5.3.

| | V-CE | SORD | AVDL | Binomial | Binomial-temp | Uni-loss | COPOC |
|----|--------------------|--------------------|--------------------|----------------|----------------|----------------|--------------------|
| D1 | **0.65** ± 0.02 | **0.65** ± 0.01 | 0.67 ± 0.02 | 0.68 ± 0.02 | 0.69 ± 0.01 | 0.68 ± 0.03 | **0.65** ± 0.02 |
| D2 | **0.56** ± 0.01 | 0.59 ± 0.01 | 0.60 ± 0.01 | 0.61 ± 0.02 | 0.61 ± 0.01 | 0.63 ± 0.04 | 0.57 ± 0.02 |
| D3 | 0.24 ± 0.02 | 0.25 ± 0.02 | **0.23** ± 0.03 | 0.28 ± 0.01 | 0.26 ± 0.02 | 0.27 ± 0.04 | **0.23** ± 0.02 |
| D4 | 1.26 ± 0.02 | 1.27 ± 0.03 | 1.27 ± 0.02 | 1.31 ± 0.04 | 1.29 ± 0.02 | 1.30 ± 0.03 | **1.24** ± 0.01 |

Our conclusions from the above table are quite similar to what we have presented in Sec. 5.3. For dataset _D1_, _SORD_ fits the data well and has the lowest MAE, as it explicitly models an exponential distribution assuming all classes to be equispaced. Similarly, for D3, AVDL performs well, as samples in D3 are drawn from a Gaussian distribution which AVDL explicitly models. COPOC matches the best performance on all the datasets (including the complex dataset D4) except on D2, where it is slightly inferior to the vanilla cross-entropy fit. Thus, we can conclude that the performance of the methods depends largely on the validity of the underlying data-distribution assumptions, and the relatively unconstrained nature of COPOC makes it more versatile. Interestingly, the simple vanilla cross-entropy loss model _V-CE_ performs almost on par with the other, more sophisticated baselines in terms of MAE, which we did not observe with real data. This could be due to the fact that these synthetic datasets lie in a low-dimensional space (10-D), and the actual benefits of the sophisticated baselines are seen on high-dimensional image data. 
Please let us know if you have any other questions or if there is anything else that we could add to further improve the submission. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for showing the results on synthetic data. It makes sense. The good performance of the cross-entropy loss is not surprising if the data are low-dimensional and the number of examples is sufficient. I am satisfied with the answer.
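As an aside on unimodal fitting: one generic way to hard-constrain a network head to output a unimodal posterior, sketched below under our own naming (`unimodal_probs`, `peak`) and not necessarily matching COPOC's actual construction, is to accumulate non-negative log-probability decrements on either side of a predicted mode:

```python
import numpy as np

def unimodal_probs(z, peak):
    """Map unconstrained scores z (length K) to a unimodal distribution
    with its mode at index `peak`: log-probability decreases by
    softplus(z_k) >= 0 per step as we move away from the peak."""
    K = len(z)
    steps = np.logaddexp(0.0, z)       # softplus: non-negative decrements
    logp = np.empty(K)
    logp[peak] = 0.0
    for k in range(peak - 1, -1, -1):  # left of the mode
        logp[k] = logp[k + 1] - steps[k]
    for k in range(peak + 1, K):       # right of the mode
        logp[k] = logp[k - 1] - steps[k]
    p = np.exp(logp - logp.max())
    return p / p.sum()

p = unimodal_probs(np.random.randn(7), peak=3)
d = np.diff(p)
assert np.all(d[:3] >= 0) and np.all(d[3:] <= 0)  # rises to the mode, then falls
```

Because the per-step decrements are unconstrained in magnitude, any unimodal shape can in principle be represented, which mirrors the flexibility argument made for COPOC above.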
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed comments and suggestions. We have attempted to address the reviewer concerns and questions to the best of our ability, including additional experimental results and figures. Below we summarize the key points of our responses. **Novelty of our current work and overlap with Lu et al. '20 and Yunpeng et al. '23:** In our response to reviewer vR8Y, we have clarified that while the papers cited by the reviewer share the same motivation, the actual methodology and contributions are substantially different. Figure 2 (in the 1-pager PDF in the global rebuttal section) is meant to further elucidate this point. We have also included empirical results on real datasets that demonstrate the superior efficacy of COPOC relative to Lu et al. We thank reviewer vR8Y for pointing us to these related works and will include them in the revised version. **Validity of the unimodality assumption and mitigation approaches:** We agree with reviewers Qwu5 and w3W6 that this aspect needs more discussion in the paper and will revise it accordingly. In our response to the reviewers, we motivate the validity of the unimodal assumption for ordinal classification applications such as cancer stage detection. The papers cited by reviewer vR8Y (e.g., Lu et al.) that deal with spinal stenosis also make the case for the unimodal formulation. We have also included empirical results comparing the likelihood with an unconstrained DNN and the unimodal DNN (COPOC) model on four real-world public datasets to provide additional justification. Based on our exploration, comparing the likelihoods of the unconstrained and unimodal DNNs to figure out the appropriate conformal prediction approach seems like a good mitigation strategy. We also wish to clarify that the theoretical results (Lemma 1 and Theorem 1) continue to hold even if the unimodality assumption is not true. 
**Details of Hyperparameter Optimization:** In our response to reviewer Qwu5, we point to the relevant parts in Appendix C.1 that provide details on the datasets and hyperparameter settings. We also included additional details that should enhance the reproducibility of our work and will add it in the appendix of the revised version. **Questions on metrics:** We have attempted to address the questions by reviewers ieEd and w3W6 regarding the choice of SSCV and the behavior of MAE along with the empirical results. **Figure with real world example:** We have added a new picture (Figure 1 of the uploaded 1-pager PDF in global rebuttal section) with examples from a public age estimation dataset (Adience) to motivate the COPOC approach. Thanks again for the review process and the valuable comments that should aid us in improving the clarity of the paper. We will revise the submission as per the feedback. If there are any comments/questions we overlooked or if there are further ways to improve the paper, please let us know and we will be glad to work on them. Pdf: /pdf/449fce97b764d42b2c444fce47bd8eb2022b1768.pdf
NeurIPS_2023_submissions_huggingface
2,023
NICE: NoIse-modulated Consistency rEgularization for Data-Efficient GANs
Accept (poster)
Summary: The authors propose a noise modulation and regularization scheme for GANs that reduces discriminator overfitting and improves training stability in the low-data regime. The technique demonstrates consistent improvements when applied to several different network architectures and datasets. Strengths: The paper contains very extensive comparisons to competing previous methods. The theoretical motivation for the proposed technique is very extensive (but also very hard for non-experts to understand). The theoretical analysis sheds some light on the effectiveness of consistency regularization techniques that have shown promise previously. The numerical results are excellent. The paper is very math-heavy, but the authors do a good job of including some intuition for the different propositions and lemmas (e.g. lines 142-147, 156-160, 173-175, 191-196). The inclusion of separate tFIDs and vFIDs is refreshing to see. Weaknesses: Not a single generated image is shown in the main paper. Given that some concerns about the reliability of FID have been raised as of late, visual results (e.g. the 100-shot results from the supplemental, Fig. 11) would be appreciated. The paper mentions many similar-sounding terms (generation gap, generalization gap, output gap, generalization error, discriminator discrepancy) that might confuse non-experts. A brief description of the effect and its implications on training should be included for non-expert readers. Many images in the supplemental are of unusably low quality. This is especially true for CIFAR results, which need to be integer-upscaled to preserve sharpness. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I do not understand where the second derivatives come from in equation 9. Equation 10 seems to perform a Taylor expansion, is this accurate? If so, the fact should be mentioned for clarity. Is equation 1 a contribution of this paper or something presented in previous work? 
What does MA in tables 1-4 mean? In figure 2, the y-axis labels are incorrectly formatted (1e^2 instead of 1*10^2 or 1e2) The authors claim (line 171) that proposition 1 can be extended to more complicated architectures - is an example of this shown somewhere, such as previous work? The datasets used in Figures 2 and 3 don't seem to be mentioned NICE seems to work best when combined with geometric augmentation techniques, especially ADA. A comment on this would be useful - ADA already minimizes the gap between the real and fake distributions, so what does NICE add in this case? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are not properly discussed in the main text. Is there a wall clock cost to the proposed method? Does it require changes to training hyper parameters (learning rate etc.)? Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Rev. 5 (ijuK) ***We thank the reviewer** for the constructive review and valuable questions that have helped us improve our work.* ## 1. Not a single generated image shown in the main paper. We apologize. We have now selected **images from the 100-shot and AnimalFace datasets in Figure 11 (supplementary material)** and added them to the main draft. ## 2. The paper mentions many similar-sounding terms. Thank you for bringing our attention to multiple terms requiring disambiguation, which is invaluable in improving the clarity of our work and helping readers. The table below summarizes the meaning of each term (we have now added it to our revised work):

|Term|Meaning|
|-|-|
|generalization gap|the difference in a model's performance between the training data and unseen testing data|
|output gap|the difference between the average discriminator output for real data and that for fake data|
|discriminator discrepancy|the difference in the average output of the discriminator between two distributions|
|generalization error|the error of the discriminator's predictions on unseen testing data|

Additionally, we have duly rectified the term "generation gap" to "generalization gap". ## 3. Many images in the supplemental are of unusably low quality. We apologize. We have now upscaled them accordingly, given the unlimited supplementary material size. ## 4. Where do the second derivatives come from in Eq. 9? Thank you. They come from expanding $h(\cdot)$ around ${\bf a}\_f$ with a Taylor expansion. 
This sort of analysis and first- and second-order expansions are a typical starting point in several works: >* *What Regularized Auto-Encoders Learn from the Data-Generating Distribution?*, Alain & Bengio, JMLR'14 >* *Sharpness-Aware Minimization for Efficiently Improving Generalization*, Foret et al., ICLR'21 >* *How Does Mixup Help With Robustness and Generalization?* Zhang et al., ICLR'21 As higher-order terms usually decay according to $\mathcal{O}(\frac{1}{o!})$, where $o$ indicates the order, they are negligible. ## 5. Is Eq. 1 a contribution of this paper? Eq. 1 is a standard formulation for analyzing the generalization of GANs, as in: > *On the Discrimination-Generalization Tradeoff in GANs*, ICLR'18, Zhang et al. **We have now updated it with the equation given in Resp. 1 to Rev. 3 (LrCf)**, which is our contribution. We have also included an extension of Fig. 5 and explanations from Sections B.2 and B.3 of the supplementary material to complement Fig. 1 in the main paper. ## 6. What does MA in Tables 1-4 mean? We apologize. MA denotes 'massive augmentations' (including DA and ADA), first used in: > *Generative co-training for generative adversarial networks*, AAAI'22, Cui et al. DA and ADA are: > *Differentiable augmentation for data-efficient GAN training*, NeurIPS'20, Zhao et al. \ *Training generative adversarial networks with limited data*, NeurIPS'20, Karras et al. ## 7. Claim that Prop. 1 can be extended to more complicated architectures. We believe this is a misunderstanding (we have now revised the language). Our aim was to emphasize that the beneficial effect of implicit weight regularization achieved through noise modulation, despite being analyzed within a simplified two-layer system, remains pertinent for networks encompassing multiple layers, including convolutional neural networks (indeed, we use NICE across several layers). 
This is attributed to the fact that networks with multiple layers can be conceptually treated as an aggregation of two-layer systems, and a convolutional layer is a special case of a linear layer. These propositions find validation in the following references: >* *On Dropout and Nuclear Norm Regularization*, Mianjy et al., ICML'19 >* *The Implicit and Explicit Regularization Effects of Dropout*, Wei et al., ICML'20 >* *Dropout: Explicit Forms and Capacity Control*, Arora et al., ICML'21 ## 8. Datasets used in Figures 2 and 3 do not seem to be mentioned. We apologize. In Figs. 2 and 3, we use 10% of the CIFAR-10 data with OmniGAN ($d'=256$). ## 9. ADA already minimizes the gap between the real and fake distributions, so what does NICE add in this case? Thank you for the interesting question. Below we analyze ADA's underlying issues. While both ADA and NICE effectively prevent overfitting, they employ differing strategies. NICE reduces the Rademacher complexity of the discriminator, while ADA expands the data space through diverse augmentations. However, **NICE possesses an additional advantage which ADA lacks. It implicitly penalizes gradients** (e.g., minimizing Eq. 10 implies minimizing gradient norms $f'^2(\cdot)$ and $f''^2(\cdot)$), thereby improving the stability of GAN training. \ \ In contrast, ADA lacks such properties. With augmentation techniques that involve random noise additions or multiplications, an analysis based on Eq. 9 suggests that ADA might in fact lead to an increase in gradient norms. Empirical observations support this supposition: we observed an increase in gradient norms for ADA (**kindly see Figure 4 in the rebuttal PDF**). However, when combining NICE with ADA, the gradient issues caused by ADA were alleviated, giving the best of both worlds (**ADA and NICE are complementary**). ## 10. Limitations, training cost and whether it requires changes to training hyper-parameters (learning rate etc.)? 
We have now moved the limitations addressed in Section I of the supplementary materials to the main paper. While it is true that implementing NICE comes with small added training costs, our design (see Fig. 5, supplementary materials) enables efficient parallelization of the process. Moreover, even without parallelization, the incremental cost compared to the baseline is minimal, as illustrated in Figure 5 in the rebuttal PDF and Table 7 in the supplementary materials. NICE is built on GAN backbone architectures, and we keep all the learning hyper-parameters of the original backbones untouched. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. My concerns have been adequately addressed. I will follow the other discussions here and reconsider my rating if need be. --- Reply to Comment 1.1.1: Comment: Thank you for your kind and constructive response. Authors
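The noise-modulated consistency idea discussed in this thread can be sketched as follows. This is our simplified illustration, not the released implementation: `NoisyBlock` stands in for the discriminator's last few feature blocks, and `beta` and the penalty weight 0.1 are illustrative choices. The same input is passed twice through noise-modulated blocks and the discrepancy between the two views is penalized:

```python
import torch
import torch.nn as nn

class NoisyBlock(nn.Module):
    """Feature block whose activations are modulated by fresh
    multiplicative noise ~ N(1, beta^2) on every forward pass."""
    def __init__(self, d_in, d_out, beta=0.1):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.beta = beta

    def forward(self, h):
        h = torch.relu(self.lin(h))
        return h * (1.0 + self.beta * torch.randn_like(h))

feat = nn.Sequential(NoisyBlock(64, 32), NoisyBlock(32, 16))  # last few D blocks
head = nn.Linear(16, 1)                                       # D output head

x = torch.randn(8, 64)
f1, f2 = feat(x), feat(x)                   # two independent noise modulations
omega = (f1 - f2).pow(2).sum(dim=1).mean()  # consistency penalty on the two views
loss = head(f1).mean() + 0.1 * omega        # penalty weight 0.1 is illustrative
loss.backward()                             # gradients flow through both views
```

Because `omega` shrinks when the network's output is insensitive to the injected noise, minimizing it implicitly discourages large first- and second-order derivatives, matching the gradient-penalization argument made in Resp. 9 above.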
Summary: This paper proposes NICE, a technique that enforces the discriminator to be consistent for the same inputs under different noise modulations. The authors show, both in theory and in practice, that NICE is effective at preventing discriminator overfitting and achieves superior performance in image generation under limited-data settings. Strengths: 1. The method is based on theoretical proof, and the experimental results show its effectiveness. 2. The method achieves competitive performance against many existing methods. The evaluation involves multiple GAN models and baseline methods. Weaknesses: 1. The authors show the computational overhead of NICE in the appendix, which is not small and could increase the training time by a large amount. There is no quantitative measure of the scalability of NICE. For example, what will happen when the image resolution is 1080p or even higher, or when the dataset is quite large? What is the training cost? 2. NICE introduces some new hyperparameters, but it is not clear how they are tuned. What's their sensitivity to different GAN models and data distributions (or datasets)? What's their sensitivity to the baselines in Table 3? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to "Weaknesses". It seems that the authors missed a strong baseline method, SSGAN-LA [1], in Table 3, which could achieve better performance. It would be better to compare with the missing baselines. [1] Self-Supervised GANs with Label Augmentation, NeurIPS 2021. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to "Weaknesses" for limitations. The authors adequately addressed the potential negative societal impact of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Rev. 4 (m7vf) ***We thank the reviewer** for the constructive review and valuable questions that have helped us improve our work.* ## 1. The computational overhead of NICE. Kindly note that **our computational overhead is small**. Firstly, our usage of multiplicative noise modulation, as in **Fig. 5 (supplementary material)** and Fig. 1 (main paper), **is restricted to the last 3 or 4 blocks**, i.e., $L-l+1$, of a discriminator with $1,\cdots,L$ blocks. Moreover, **these last few blocks are responsible for handling low-resolution feature maps and can be readily parallelized**. Even without parallelization, the increase is only 0.35 sec per 1000 images on OmniGAN. On StyleGAN2, which is a slower network, the extra time required by NICE was only 18\% on top of StyleGAN2. Meanwhile, **Fig. 5 in the rebuttal PDF illustrates the FID gain *vs.* speed**, which shows that **a slight increase in training time yields substantial improvements**, particularly when compared to recent state-of-the-art approaches like FakeCLR and InsGen. We believe the increased computational overhead is outweighed by the considerable benefits, and the improvements justify the extra fraction of time. ## 2. What will happen when the image resolution is 1080p or even higher? **As our module is used only for the last 3 or 4 blocks of the discriminator**, i.e., $L-l+1$, these blocks are not exposed to high-resolution feature maps (see Fig. 5, supplementary material). The **table below** shows no large speed increase in such a case. 
Below we apply the NICE module to StyleGAN2 for the last 4 blocks and provide the sec/$k$img for the following resolutions:

|Resolution|$512\times512$|$1024\times1024$|$2048\times2048$|
|-|:-:|:-:|:-:|
|StyleGAN2|18.65|51.76|143.25|
|StyleGAN2+NICE|20.74|56.03|147.76|
|relative increase|11.17\%|8.25\%|**3.05\%**|

As our NICE is implemented on the last few blocks of the discriminator, which handle low-resolution feature maps, **the relative compute cost increase actually shrinks as the remaining standard network layers dominate compute time**. ## 4. What hyper-parameters do you use and what is their sensitivity to different GAN models? We keep the learning hyperparameters of the original GAN backbones untouched. NICE introduces four new hyperparameters: the place to apply NICE within a block $c\in\\{C_1, C_2, C_R\\}$, which blocks to use $l\in\\{1,...L\\}$, the threshold $\eta$ and the regularization strength $\Delta_\gamma$. $c$ can simply be chosen from $C_1C_2C_R$ and $C_1C_2$ (we use $C_1C_2C_R$ for OmniGAN and BigGAN, and $C_1C_2$ for StyleGAN2). $l$ is often set to the last 3 or 4 blocks. $\eta$ is fixed at 0.5 for BigGAN and OmniGAN. On StyleGAN2, we recommend a lower $\eta$ for diverse datasets: we set $\eta=0.6$ for the FFHQ dataset and $\eta=0.9$ for the 5 low-shot datasets. Generally, ($c=\\{C_1, C_2\\}$, $l=\\{L-3, ..., L\\}$, $\eta=0.6$, $\Delta_\gamma=0.05$) is a good starting point when applied to a new dataset. Below, we varied the hyper-parameters on the 5 low-shot datasets (Obama, Grumpy Cat, Panda, AnimalFace Cat, AnimalFace Dog). 
**A slight change in the hyper-parameters does not make a big difference in the results**, showing that our NICE is not overly sensitive to its hyper-parameters:

|$l\in\\{1...L\\}$|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|:-:|:-:|:-:|:-:|:-:|
|4,5,6|25.66|20.01|9.25|26.15|48.18|
|3,4,5,6|24.56|18.78|8.92|25.25|46.56|
|2,3,4,5,6|26.35|19.34|9.14|25.61|47.68|

|$c\in\\{C_1, C_2, C_R\\}$|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|:-:|:-:|:-:|:-:|:-:|
|$C_1$|25.77|19.42|8.98|24.97|47.17|
|$C_1,C_2$|24.56|18.78|8.92|25.25|46.56|
|$C_1,C_2,C_R$|26.51|19.51|9.56|26.18|46.93|

|$\Delta_\gamma$|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|:-:|:-:|:-:|:-:|:-:|
|0.01|26.48|19.1|9.09|25.33|47.15|
|0.05|24.56|18.78|8.92|25.25|46.56|
|0.1|25.46|19.52|9.01|25.42|47.06|
|0.2|25.74|19.85|9.13|25.30|47.87|

|$\eta$|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|:-:|:-:|:-:|:-:|:-:|
|0.95|25.20|20.06|9.07|25.20|46.44|
|0.90|24.56|18.78|8.92|25.25|46.56|
|0.80|25.71|18.90|9.15|26.05|47.43|

## 5. Authors missed a strong baseline method SSGAN-LA [1] in Table 3. SSGAN-LA [1] did not perform experiments on the five low-shot datasets (100-shot Obama/Grumpy Cat/Panda and AnimalFace Cat/Dog). They only released code for BigGAN. Thus, we have followed the authors' code implementation for BigGAN and re-implemented it for StyleGAN2 on the five low-shot datasets. For SSGAN-LA, we tried both the multi-hinge loss and the cross-entropy loss provided in the SSGAN-LA code. 
We obtained the best results with the multi-hinge loss for StyleGAN2+SSGAN-LA, but our StyleGAN2+NICE is still the stronger performer:

|Dataset|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|-|-|-|-|-|
|StyleGAN2+SSGAN-LA|79.88|38.42|28.6|78.78|109.91|
|StyleGAN2+NICE (ours)|**24.56**|**18.78**|**8.92**|**25.25**|**46.56**|

--- Rebuttal Comment 1.1: Comment: I have read the authors' response and I find that the current state of this paper is satisfying, though some aspects of this paper could be improved. I have raised my rating. I thank the authors for their effort during the discussion phases. --- Reply to Comment 1.1.1: Comment: Thank you. We really appreciate the reviewer's comments and help in shaping up our work.
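As a supplement to the second-derivative question raised in Resp. 4 to Rev. 5 above, the expansion can be written out as follows. The notation here is ours and may differ in detail from Eq. 9-10 of the paper: for features ${\bf a}_f$ modulated as $\tilde{\bf a}={\bf a}_f\odot(1+\boldsymbol\epsilon)$ with $\boldsymbol\epsilon\sim\mathcal{N}({\bf 0},\beta^2{\bf I})$,

```latex
\begin{aligned}
h(\tilde{\bf a}) &= h({\bf a}_f)
  + ({\bf a}_f \odot \boldsymbol\epsilon)^\top \nabla h({\bf a}_f) \\
&\quad + \tfrac{1}{2}\,({\bf a}_f \odot \boldsymbol\epsilon)^\top \nabla^2 h({\bf a}_f)\,({\bf a}_f \odot \boldsymbol\epsilon)
  + \mathcal{O}\big(\lVert\boldsymbol\epsilon\rVert^3\big), \\
\mathbb{E}_{\boldsymbol\epsilon}\big[h(\tilde{\bf a})\big]
  &\approx h({\bf a}_f) + \frac{\beta^2}{2}\sum_i a_{f,i}^2\,\partial^2_{ii}\,h({\bf a}_f),
\end{aligned}
```

where the second line uses $\mathbb{E}[\boldsymbol\epsilon]={\bf 0}$ (the first-order term vanishes) and $\mathbb{E}[\epsilon_i\epsilon_j]=\beta^2\delta_{ij}$ (only the diagonal of the Hessian survives). This is why the noise-induced discrepancy carries both first- and second-order derivative terms, and penalizing it implicitly controls the gradient norms discussed in the rebuttals above.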
Summary: This paper proposes a training approach called NoIse-modulated Consistency rEgularization (NICE) to improve the data-efficiency of generative adversarial networks (GANs) by addressing issues related to limited data. It introduces adaptive multiplicative noise into the discriminator to modulate its latent features, preventing discriminator overfitting. To mitigate the instability of GAN training caused by increased gradient norm, a constraint is imposed on the discriminator to ensure consistency for the same inputs under different noise modulations. The experimental results demonstrate the effectiveness of NICE in reducing discriminator overfitting and improving the stability of GAN training, achieving state-of-the-art results on various datasets and low-shot generation tasks. Strengths: - The paper is well-organized and well-written, making it easy to understand its contribution. - The paper provides theoretical analysis to understand the connection between introducing multiplicative Gaussian noise to the discriminator and GAN generalization. The authors also uncover the negative impact of simple noise multiplication on gradient norm and propose a noise-modulated consistency regularization with theoretical grounds to improve it. - Experimental results presented in the paper validate the effectiveness of the proposed method, surpassing the baseline in most experiments. Weaknesses: - The complete objective function of the proposed method should be included in the main text, rather than in the appendix. - In the GAN field, it is suggested to conduct multiple experiments and report the results in terms of mean and standard deviation due to the random variation of the results caused by different trials. - According to the theoretical analysis in this paper, it is easy to understand the motivation behind the introduction of Eq12, but the motivation behind Eq13 and Eq14 is unclear. 
Although the ablation experiments in Table 6 empirically demonstrated the benefits of the proposed method, I still expect the authors to provide a more explicit explanation for these equations. Moreover, it is recommended to report the experimental results using only Eq12 and Eq14, as this would further demonstrate the effectiveness of the method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Does Eq1 encompass all GAN objective functions, such as the original min-max and non-saturating GANs? It is suggested that the authors can provide detailed explanations, as the theoretical work of this paper is based on Eq1. Some typos: - Lack parentheses in Eq9. - f(x) -> f(\alpha) in Line 185. - N(0,\beta^2 I^d) -> N(1,\beta^2 I^d) in Line 200. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No, the authors have not addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Rev. 3 (LrCf)

***We thank the reviewer** for the constructive review and valuable questions that have helped us improve our work.*

## 1. The complete objective function of the proposed method should be included in the main text, rather than in the appendix.

Thank you. Absolutely. We have now combined details from Sections B.2 and B.3 of the supplementary material. In general, we have improved the notation, and Eq. 12-14 can be directly injected into the GAN objective via $\Omega(\cdot)$. To that end, we have compacted the objective as follows:

$ \begin{cases} L_D^\text{AN}=\min\limits\_{{\boldsymbol\theta}_d}\mathbb{E}\_{\bf x\sim\nu_n}\[h\_\text{AN}({\bf x}; {\boldsymbol\theta}_d)\\!+\\!\gamma\Omega({\bf x})\]+\mathbb{E}\_{\bf x\sim\hat{\mu}_m}\[ -h\_\text{AN}({\bf x}; {\boldsymbol\theta}_d)\\!+\\!\gamma\Omega({\bf x})\]\\\\ L_G^\text{AN}=\min\limits\_{{\boldsymbol\theta}_g}\mathbb{E}\_{\bf z\sim p\_z}\[-h\_\text{AN}(g({\bf z}; {\boldsymbol\theta}_g))\\!+\\!\gamma\Omega(g({\bf z}; {\boldsymbol\theta}_g))\] \end{cases}\\;\text{where}\\;\\;\Omega({\bf x})=||f\_{1}({\bf x})-f\_{2}({\bf x})||_2^2, $

where $h\_\text{AN}({\bf x}; {\boldsymbol\theta})$ denotes our modified discriminator inclusive of the noise injection, ${\boldsymbol\theta}_d$ and ${\boldsymbol\theta}_g$ are the learnable parameters of the discriminator and generator, and $f\_1(\cdot)$ and $f\_2(\cdot)$ are two feature extractors with two different multiplicative noise injections, as per Fig. 5 in the supplementary material. Moreover, $\nu_n$, $\hat{\mu}_m$ and $p\_z$ represent the finite fake distribution, the empirical training distribution, and the distribution of the generator input, respectively. $\gamma\geq 0$ is the regularization hyper-parameter.

## 2. Conduct multiple experiments and report the mean and standard deviation.

Thank you. In fact, we have run 5 trials for NICE and NICE+ADA in Tables 1-5 and reported the mean.
As the relative standard deviations were below 1%, we followed the practice of most GAN papers (listed below), which typically do not report standard deviations:

>* *FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs*, ECCV'22, Li et al.
>* *DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data*, NeurIPS'22, Fang et al.
>* *Differentiable Augmentation for Data-Efficient GAN Training*, NeurIPS'20, Zhao et al.

However, **Tables 1-4 in the rebuttal PDF contain full results for ADA+NICE with STD**, supplemented as per your request.

## 3. The motivation for Eq. 12 is clear but the motivation for Eq. 13-14 is unclear.

Thank you. This is a very interesting question. Eq. 12 provides regularization for the discriminator given the real data. In the same spirit, the discriminator has to be regularized w.r.t. the fake data, as in Eq. 13, to stabilize GAN training. Regularizing both real and fake data was explored in several papers:

>* *Which Training Methods for GANs do actually Converge?*, ICML'18, Mescheder et al.
>* *Improving Generalization and Stability of Generative Adversarial Networks*, ICLR'19, Thanh-Tung et al.
>* *Stabilizing Training of Generative Adversarial Networks through Regularization*, NeurIPS'17, Roth et al.
>* *DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data*, NeurIPS'22, Fang et al.

These works have found that regularizing gradients on both real and fake samples improves convergence and stability of GAN training. Thus, we also apply NICE on both real and fake samples. In terms of why we use NICE$\_{G_f}$, it can be explained as reducing the first- and second-order gradients of the discriminator while optimizing the generator parameters ${\boldsymbol\theta}\_g$. Below we compare the Taylor expansions of the generator objective without NICE$\_{G_f}$ *vs.* with NICE$\_{G_f}$ (in both cases NICE$\_{D_r}$ and NICE$\_{D_f}$ are switched on).
Let ${\bf a}\_f$ be some fake sample from the generator. We optimize the parameters ${\boldsymbol\theta}\_g$ of the generator by minimizing:

* without NICE$\_{G_f}$: $-h\_\text{AN}({\bf a}\_f)\approx-h({\bf a}\_f)-\frac{\beta^2}{2}\frac{\partial^2 h}{\partial f^2}|\_{{\bf a}_f}f''({\bf a}_f){\bf a}_f^2$
* with NICE$\_{G_f}$: $-h\_\text{AN}({\bf a}\_f)+\gamma\Omega({\bf a}\_f)\approx-h({\bf a}\_f)-\frac{\beta^2}{2}\frac{\partial^2 h}{\partial f^2}|\_{{\bf a}_f}f''({\bf a}_f){\bf a}_f^2+\underbrace{2\gamma\beta^2{\bf a}_f^2f'^2({\bf a}_f)+\gamma\beta^4{\bf a}_f^4f''^2({\bf a}_f)}\_{\text{grad. norms}}$,

where $f'^2(\cdot)$ and $f''^2(\cdot)$ are simply the squared norms of the first- and second-order gradients of the discriminator (see the underbrace in the eq.).

This means that without NICE$\_{G_f}$, the generator is trained to generate images to the point where the gradient norms of the discriminator can become large. In contrast, with NICE$\_{G_f}$, for some well-chosen $\gamma>0$, these first- and second-order gradient norms of the discriminator are reduced by optimizing ${\boldsymbol\theta}\_g$ of the generator, as they show up in the Taylor expansion.

In that sense, Eq. 14 is also meaningful because, while we are optimizing the generator here, the output of the generator is still passed through the discriminator. Thus, it makes sense to seek parameters of the generator that not only improve its ability to "fool" the discriminator but also stabilize GAN training.

To conclude, stabilizing GAN training can be achieved with Eq. 12-14. **See Figure 2 in the rebuttal PDF** for the gradient analysis when we ablate Eq. 12-14.

## 4. Report the experimental results using only Eq. 12 and Eq. 14.

Kindly see the response in the General Rebuttal (for all reviewers) due to the 6000-character space limit.

## 5. Does Eq. 1 encompass all GAN objective functions, i.e., the original min-max and non-saturating GANs?

Kindly see the response in the General Rebuttal (for all reviewers) due to the limited space.
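The second-order Taylor argument above can be sanity-checked numerically. Below is a minimal Monte Carlo sketch (our own illustration, not the authors' code) for a scalar toy feature map $f=\sin$: for multiplicative noise $z\sim\mathcal{N}(1,\beta^2)$, the consistency term $\mathbb{E}[(f(z_1 a)-f(z_2 a))^2]$ should match the predicted leading terms $2\beta^2 a^2 f'(a)^2+\beta^4 a^4 f''(a)^2$. The function choice, point $a$, noise scale, and sample count are illustrative assumptions.

```python
import math
import random

def consistency_mc(f, a, beta, n, seed=0):
    """Monte Carlo estimate of E[(f(z1*a) - f(z2*a))^2], z ~ N(1, beta^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(1.0, beta)
        z2 = rng.gauss(1.0, beta)
        total += (f(z1 * a) - f(z2 * a)) ** 2
    return total / n

a, beta = 0.7, 0.05
mc = consistency_mc(math.sin, a, beta, n=400_000)

# predicted leading terms: 2*beta^2*a^2*f'(a)^2 + beta^4*a^4*f''(a)^2,
# with f = sin, f' = cos, f'' = -sin
taylor = 2 * beta**2 * a**2 * math.cos(a) ** 2 + beta**4 * a**4 * math.sin(a) ** 2
print(mc, taylor)  # the two values agree to within a few percent
```

For small $\beta$ the higher-order Taylor error is negligible, so the Monte Carlo estimate and the closed-form leading terms coincide closely.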
Summary: This paper proposes a regularization method called noise-modulated consistency regularization (NICE) to train GANs with limited data. In this method, this paper proposes modulating the discriminator's latent features using noise and imposing a constraint on the discriminator so that the middle outputs of the discriminator (particularly, the outputs before the prediction head) for differently modulated data are the same. This paper also provides a theoretical analysis, in which it is shown that the proposed regularization penalizes the first- and second-order gradients of latent features and improves the GAN training stability. The effectiveness of the proposed method was demonstrated using small-scale typically-used datasets, including CIFAR-10, CIFAR-100, ImageNet, and FFHQ datasets, and in low-shot generation tasks. Strengths: 1. The effectiveness of the proposed method was demonstrated in various scenarios, including evaluation on small-scale typically-used datasets (CIFAR-10, CIFAR-100, ImageNet, and FFHQ datasets) and evaluation in low-shot generation tasks. In many scenarios, the proposed method achieves state-of-the-art performance while comparing with various baselines. Furthermore, the applicability of the proposed method was also demonstrated by applying the proposed method to various GANs (e.g., BigGAN and OmniGAN in Tables 2 and 3, and StyleGAN2 in Table 3) and using the proposed method with orthogonal methods (e.g., LeCam, DA, and ADA). Ablation studies are also conducted. 2. Not only the effectiveness of the proposed method is demonstrated, but also the theoretical analysis is provided. This analysis verifies the proposed method is useful for regularizing the first- and second-order gradients of latent features and improving the GAN training stability. This explanation is reasonable. 3. This paper is well written and easy to read. Although some explanation is slightly too concise, the discussion on related work is thorough. Weaknesses: 1. 
Through theoretical analysis, I understand that the proposed method is useful for regularizing the first- and second-order gradients of latent features. However, this analysis raises a question of what happens when regularizing the first- and second-order gradients of latent features directly. As discussed in related work, there are several previous studies that propose gradient regularizations. I guess that the proposed method is better than a direct regularization method in terms of calculation cost; however, I would appreciate it if I could hear the opinion from the authors. 2. I cannot find the discussion on the increase in the calculation cost. I suspect the proposed method increases the calculation cost because the discriminator needs to process data twice, compared to a standard discriminator. For a fair comparison, it would be better to discuss this. 3. Some results are excluded in Tables 1–3 (e.g., IS/tFID for DigGAN in Table 1). I cannot find a clear explanation for why the results are excluded. It seems that DigGAN is one of the comparable baselines; therefore, I would appreciate it if the authors could provide the missing scores. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What happens when directly regularizing the first- and second-order gradients of latent features? (See weakness 1) 2. Discuss the calculation cost. (See weakness 2) 3. Why are some results excluded in Tables 1–3? Provide the scores if possible. (See weakness 3) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Image synthesis technologies could be misused for synthesizing fake images. 
In particular, few-shot image synthesis technologies will make it easy to synthesize such images because they reduce the data collection cost. It would be better to discuss the social impact in this aspect. 2. Although the versatility of the proposed method is demonstrated, I suspect that there may be some previous methods that are not compatible with the proposed method. Discussing this will be useful for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Rev. 2 (v2y4)

***We thank the reviewer** for the constructive review and valuable questions that have helped us improve our work.*

## 1. What happens when regularizing the first- and second-order gradients of latent features directly?

Thank you. Consider the regularization term in Eq. 10 (main paper) and its Taylor expansion. Indeed, this expression penalizes the squared norms of the first- and second-order gradients of $f({\bf a}_i)$. However, notice that our expression has specific penalty weights $2\beta^2||{\bf a}_i||_2^2$ and $\beta^4||{\bf a}_i||_2^4$ for the penalties $||f'({\bf a}_i)||^2_2$ and $||f''({\bf a}_i)||^2_2$, respectively. In fact, these specific penalty weights, with the total penalty given by $2\beta^2||{\bf a}_i||_2^2||f'({\bf a}_i)||^2_2 + \beta^4||{\bf a}_i||_2^4||f''({\bf a}_i)||^2_2$, emerge only when using multiplicative noise drawn from $\mathcal{N}({\bf 1},\beta^2{\bf I})$ for Eq. 9 and 10.

**Importantly, see Resp. 4 to Rev. 1 (fQTZ)**. In that response we demonstrate that:

* **the multiplicative noise ${\bf z}\sim\mathcal{N}({\bf 1},\beta^2{\bf I})$ does not introduce new semantics into the feature vector ${\bf a}_i$ via the operation ${\bf z}\cdot{\bf a}_i$**, i.e., only feature semantics that are present in the feature vector (features of ${\bf a}_i$ that are non-zero) and that describe the object are modulated.
* yet, **this multiplicative noise** does help control the Rademacher Complexity (RC) due to Lemma 2, Eq. 7 (main paper).

We suspect that by "regularizing the first- and second-order gradients of latent features directly", Rev. 2 (v2y4) means imposing a generic penalty of the form $\beta_1||f'({\bf a}_i)||^2_2 + \beta_2||f''({\bf a}_i)||^2_2$ or its variant $2\beta^2||f'({\bf a}_i)||^2_2 + \beta^4||f''({\bf a}_i)||^2_2$ that arises when the additive noise ${\bf z}\sim\mathcal{N}({\bf 0},\beta^2{\bf I})$ is applied to ${\bf a}_i$, i.e., ${\bf z}+{\bf a}_i$. Let ${\bf z}\sim\mathcal{N}({\bf 0}, \beta^2{\bf I})$.
We use the Taylor expansion to expand the additive noise modulated consistency regularization as follows:

$\mathbb{E}_{{\bf z}_1, {\bf z}_2}||f({\bf a}_i+{\bf z}_1)-f({\bf a}_i+{\bf z}_2)||^2\approx \mathbb{E}_{{\bf z}_1, {\bf z}_2}||({\bf z}_1-{\bf z}_2)f'({\bf a}_i)+\frac{{\bf z}_1^2-{\bf z}_2^2}{2}f''({\bf a}_i)||^2=2\beta^2||f'({\bf a}_i)||^2_2 + \beta^4||f''({\bf a}_i)||^2_2.$

**Such a variant means that semantics that are not present in ${\bf a}_i$ may be "activated" by the noise, drastically altering the meaning of ${\bf a}_i$ and damaging the information it carries about the image/object** (see **Resp. 4 to Rev. 1 (fQTZ)**). While directly applying gradient penalization with $\beta^2||f'({\bf a}_i)||^2_2 + \beta^4||f''({\bf a}_i)||^2_2$ does not introduce noise, its connection to the additive noise suggests it will have a negative effect on semantics. Kindly **see the table in Resp. 4 to Rev. 1 (fQTZ) that evaluates multiplicative vs. additive noise modulators, and the direct penalties.**

Abbreviations: ALGP (adaptive latent gradient penalization), NICE$_{add}$ (consistency regularization with additive noise), AWR (adaptive weight regularization). Results on 10\% CIFAR-10/100 with OmniGAN ($d'=256$):

|Dataset||10\% CIFAR-10|||10\% CIFAR-100||
|-|-:|:-:|:-|-:|:-:|:-|
||IS|tFID|vFID|IS|tFID|vFID|
|OmniGAN|8.49|22.24|26.33|8.19|45.41|50.33|
|+ALGP|8.52|19.15|22.72|9.18|32.98|37.51|
|+AWR+ALGP|8.72|16.82|20.45|10.14|26.44|30.23|
|+NICE$_{add}$|8.64|17.94|21.59|9.34|28.59|33.02|
|+NICE|**9.26**|**7.23**|**11.08**|**11.50**|**16.91**|**21.56**|

Kindly see **Figure 3 in the rebuttal pdf**, which plots the classification accuracy of the discriminator on testing images for different variants. AN achieves higher accuracy than AAN (adaptive additive noise) and AWR. NICE obtains the best accuracy among all variants, showing that the multiplicative noise modulation preserves the semantics better than the other variants.
This explains why we do not directly penalize the gradient norms or directly regularize the weight norms: doing so drastically alters the semantics of the features.

## 2. Is the proposed method better than a direct regularization in terms of calculation cost? What is the increase in the calculation cost?

The specific penalty weights are important, as elaborated above. The computations are also simpler, as we do not have to directly tap into the backpropagation and gradients to penalize them. Kindly notice that only the last few blocks of the discriminator (blocks $l,\cdots,L$) use our noise modulator penalty (kindly see Fig. 5 in the supplementary material). This design is also simple and very easy to parallelize. Even without parallelization, **the extra cost compared with the baseline is negligible**, as shown in **Figure 5** in the rebuttal pdf and in **Table 7** of the supplementary material.

## 3. Some results are excluded in Tables 1–3. DigGAN is one of the comparable baselines.

* We directly use the code provided by the DigGAN authors to test the IS, tFID, and vFID, and obtain the following results for BigGAN+DigGAN ($d'=256$) on CIFAR-10 and CIFAR-100:

|Dataset||100\%|||20\%|||10\%||
|-|-|-|-|-|-|-|-|-|-|
||IS|tFID|vFID|IS|tFID|vFID|IS|tFID|vFID|
|CIFAR-10 BigGAN+DigGAN|9.28|5.33|9.35|8.81|13.28|17.25|8.32|18.54|22.45|
|CIFAR-10 BigGAN+NICE|**9.50**|**4.19**|**8.24**|**8.96**|**8.51**|**12.54**|**8.73**|**13.65**|**17.75**|
|CIFAR-100 BigGAN+DigGAN|**11.15**|8.13|13.06|9.98|16.87|21.59|**9.04**|23.10|27.78|
|CIFAR-100 BigGAN+NICE|10.99|**6.13**|**11.08**|**10.32**|**13.17**|**17.80**|8.96|**19.53**|**24.33**|

* As the authors did not release the code for StyleGAN2 on the five low-shot datasets, we reproduce their method and sweep the hyperparameter ($\lambda\in\\{10, 20, 50, 100, 150\\}$ in their paper) for these tasks.
We obtain the best results with $\lambda=20$ for (DigGAN+ADA) as follows:

|Dataset|Obama|Grumpy Cat|Panda|AnimalFace Cat|AnimalFace Dog|
|-|-|-|-|-|-|
|ADA+DigGAN|36.38|25.42|11.54|35.67|59.98|
|ADA+NICE|**20.09**|**15.63**|**8.18**|**22.70**|**28.65**|
Rebuttal 1: Rebuttal: # General Rebuttal

We thank the reviewers for their constructive suggestions and in-depth analysis, which helped us refine our work. We are humbled by such a positive response, and we truly appreciate it.

**Also, kindly refer to the rebuttal PDF** (**at the bottom of this panel**) for additional figures and tables. Individual rebuttals refer to them in more detail.

# Below we clarify remaining issues for Rev. 3 (LrCf)

## 1. Report the experimental results using only Eq. 12 and Eq. 14 (Rev. 3 (LrCf)).

Below are the results as per your request. As expected, GAN training is most stable when all steps of the GAN strive to regularize the gradient (Eq. 12-14):

|Regularization Eq.||10\% CIFAR-10|||10\% CIFAR-100||Obama|
|-|-:|:-:|:-|-:|:-:|:-|:-:|
||IS|tFID|vFID|IS|tFID|vFID|FID|
|Eq. 12 + 14|9.16|8.69|12.59|11.19|18.80|24.13|29.95|
|Eq. 12 + 13 + 14|**9.26**|**7.23**|**11.08**|**11.50**|**16.91**|**21.56**|**24.56**|

**Figure 2 in the rebuttal PDF** also shows the gradient norms of the discriminator.

## 2. Does Eq. 1 encompass all GAN objective functions, i.e., the original min-max and non-saturating GANs? (Rev. 3 (LrCf)).

Certainly, Eq. 1 forms the cornerstone for analyzing the generalization of diverse GAN objective functions. Eq. 1 was introduced by Zhang et al. in "On the Discrimination-Generalization Tradeoff in GANs", ICLR'18. Their study showed that Eq. 1 covers a wide range of GANs, including the well-known $f$-GANs, i.e., GANs that minimize an $f$-divergence, which includes the original GAN formulation. Additionally, insights from "Non-saturating GAN training as divergence minimization" (arXiv:2010.08029, Shannon et al.) further bolster this applicability. This work discusses how non-saturating GANs can be approximated via minimizing a specific $f$-divergence, aligning with Eq. 1. These collective contributions highlight that Eq.
1 provides a unifying lens on GANs, encompassing a wide range of models, including the classic min-max and the non-saturating variants.

Pdf: /pdf/52d10b7a721a1783ac62690be17fd1a7cd02117a.pdf
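As a small numeric aside (our own illustration, not from the paper or the cited works) on why the non-saturating variant discussed above matters in practice: the saturating generator loss $\log(1-D(x_f))$ has a vanishing gradient when the discriminator confidently rejects fakes ($D(x_f)\to 0$), while the non-saturating loss $-\log D(x_f)$ keeps a strong signal; both can be cast as $f$-divergence minimization, which is why Eq. 1 covers them. The function names below are ours.

```python
def grad_saturating(d):
    """d/dD of log(1 - D): vanishes as D -> 0 (early training on fakes)."""
    return -1.0 / (1.0 - d)

def grad_non_saturating(d):
    """d/dD of -log(D): grows as D -> 0, so the generator keeps learning."""
    return -1.0 / d

d = 0.01  # discriminator output on a fake sample early in training
print(abs(grad_saturating(d)))      # ~1.01: almost no learning signal
print(abs(grad_non_saturating(d)))  # ~100: strong learning signal
```

The two objectives have the same fixed points, but very different gradient magnitudes where the generator starts out, which is the standard motivation for the non-saturating form.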
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present a novel approach called NoIse-modulated Consistency rEgularization (NICE) to solve the challenge of training GANs with limited data. The experiment was conducted on reduced small-scale CIFAR-10, CIFAR-100, ImageNet, and FFHQ datasets. Additionally, they applied their method to low-shot generation tasks, and the results demonstrated state-of-the-art performance. Overall, the proposed methodology is well motivated and supported by theoretical analysis. The flow of the paper and the writing are easy to follow. Strengths: 1. The paper is written in a clear and well-structured manner, making it easy for readers to follow the presented ideas. 2. Through extensive experiments on various reduced small-scale datasets, the method consistently achieves state-of-the-art performance, demonstrating its effectiveness. 3. The paper is well-grounded in theoretical motivation and effectively validates the proposed methodology through empirical analysis. Weaknesses: Even though the experimental results support their theory, the ablation study and the experiment for gradient analysis are only based on some specific datasets. Evaluating the theory on more datasets would provide more convincing evidence. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 1, Table 2, Table 3, and Table 4, what does "MA" refer to? 2. Figure 2 was plotted based on a specific dataset, but the paper does not clearly state which dataset was used to generate this figure. It would be helpful if the authors provided more information about the dataset used for Figure 2. 3. The notation used in the paper is inconsistent, for instance, with the symbols $\boldsymbol{x}_{\text {real }}$ in page 5 row 205 and $\boldsymbol{x}_r$ in page 5 row 182. Additionally, there is no explanation for the symbols $\boldsymbol{x}_r$ and $\boldsymbol{x}_f$. It would be beneficial for the authors to clarify and provide consistent notation throughout the paper to avoid confusion. 4. 
In Equation 8, it is evident that training with features modulated by Gaussian noise leads to regularization of the internal weight by the (2,1) norm. However, it is not clear why the authors did not directly penalize the (2,1)-norm to reduce the Rademacher complexity. It would be valuable if the paper includes a comparison between the Discriminator with adaptive noise (AN) and directly penalizing the (2,1)-norm with adaptive strength $\beta$. 5. In Equation 9, the training of the discriminator involves minimizing the second-order gradient at fake images. However, Figure 2 (b) shows that the gradient norm at the layer before the classification head for fake images increases. Furthermore, the ablation study with $\mathrm{NICE}_{D_f}$ demonstrates improved performance, as well as with $\mathrm{NICE}_{G_f}$, but there is no analysis of the gradient norm for the generator. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations could be further clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Rev. 1 (fQTZ)

***We thank the reviewer** for the constructive review and valuable questions that have helped us improve our work.*

## 1. What does MA refer to in Tables 1-4?

We apologize. MA denotes 'massive augmentations' (including DA and ADA), first used in:

> *Generative co-training for generative adversarial networks*, AAAI'22, Cui et al.

DA and ADA are:

> *Differentiable augmentation for data-efficient GAN training*, NeurIPS'20, Zhao et al. \
*Training generative adversarial networks with limited data*, NeurIPS'20, Karras et al.

## 2. Which dataset was used in Fig. 2?

Thank you. This is the 10% data setting, CIFAR-10, OmniGAN ($d'=256$).

## 3. Improve notations. What are ${\bf x}_r$ and ${\bf x}_f$?

Thank you. We have now unified the notations as per your request. ${\bf x}_r$ and ${\bf x}_f$ are the real and fake samples passed to the discriminator. Indeed, ${\bf x}\_{real}$ is a redundant symbol as it means ${\bf x}_r$.

## 4. Why did the authors not directly penalize the (2,1)-norm?

* For **the multiplicative modulation** with the noise, we have: \
$\hat{L}\_{mul\\_noise}:=\hat{\mathbb{E}}\_i\mathbb{E}\_{{\bf z}\sim\mathcal{N}({\bf 1}, \beta^2\mathbf{I}) }||{\bf y}_i-{\bf W}_2({\bf z}\cdot{\bf a}_i)||^2_2=\hat{\mathbb{E}}\_i||{\bf y}_i-{\bf W}_2{\bf a}_i||^2_2+\beta^2(\hat{\mathbb{E}}\_i||{\bf a}_i||^2_2)||{\bf W}_2||^2\_{2,1}$, where ${\bf z}\cdot{\bf a}_i$ is the element-wise multiplication of $\bf z$ and ${\bf a}_i$. This implies that while a direct penalty on the network weights, $||{\bf W}_2||^2\_{2,1}$, is possible, our approach has a **dynamic penalty** $\beta^2(\hat{\mathbb{E}}\_i||{\bf a}_i||^2_2)$, where the variance $\beta^2$ is adapted based on the discriminator decisions, but also $\hat{\mathbb{E}}\_i||{\bf a}_i||^2_2$ depends on the norms of the feature vectors.
**The importance of such a multiplicative modulation is that semantic content is not added by the noise ${\bf z}$ to the modulated feature vector ${\bf z}\cdot{\bf a}_i$**, i.e., only 'active' channels ${\bf a}_i\neq 0$ are modulated: they can be suppressed or magnified according to the variance $\beta^2$.

* In contrast, the **additive noise modulation** results in the standard penalty $\beta^2||{\bf W}_2||^2\_{2,1}$, which we suspect the reviewer asks about: \
$\hat{L}\_{add\\_noise}:=\hat{\mathbb{E}}\_i\mathbb{E}\_{{\bf z}\sim\mathcal{N}({\bf 0}, \beta^2\mathbf{I}) }||{\bf y}_i-{\bf W}_2({\bf z}+{\bf a}_i)||^2_2=\hat{\mathbb{E}}\_i||{\bf y}_i-{\bf W}_2{\bf a}_i||^2_2+\beta^2||{\bf W}_2||^2\_{2,1}$. \
While obvious, $\beta^2||{\bf W}_2||^2\_{2,1}$ does not have the **dynamic penalty** term related to the feature norms. This also means (based on the derivation) that **the additive noise may change the feature semantics**, i.e., ${\bf z}+{\bf a}_i$ may "activate" channels which are non-active in ${\bf a}_i$ (features ${\bf a}_i= 0$). \
Therefore, **our multiplicative modulator not only controls the Rademacher Complexity (RC), but controls it in a meaningful manner for the discriminator**. Only feature semantics that are present in the feature vector (that describe the object) are modulated while helping control the RC. Additive noise would likely introduce semantics that are not present in a given image/object.

* While directly regularizing the network using $\beta^2||{\bf W}_2||^2\_{2,1}$ does not introduce additive noise, its regularization effect shares similarity with additive noise incorporation. Thus, its connection to the additive noise suggests that the direct weight regularization has the effect of changing the feature semantics, which will negatively impact the classification accuracy.

* Below we provide **experimental comparisons**. Please note the consistency loss and dual branch from Fig.
1 (right) (main paper) are not used here, as that would result in additional penalties on gradient norms. Abbreviations: AAN (adaptive additive noise), AWR (adaptive weight regularization), AN (our adaptive multiplicative noise). Results on 10\% CIFAR-10/100 with OmniGAN ($d'=256$):

|Method|||10\% CIFAR-10|||10\% CIFAR-100||
|-|-|-:|:-:|:-|-:|:-:|:-|
| |Eq.| IS|tFID|vFID|IS|tFID|vFID|
|OmniGAN| |8.49|22.24|26.33|8.19|45.41|50.33|
|+AAN | ${\bf a}_i\:={\bf a}_i\\!+\\!{\bf z}; {\bf z}\\!\sim\\!\mathcal{N}(0,\beta^2{\bf I})$ |8.52|20.12|24.65|9.64|37.68|42.01|
|+AWR | $\beta^2\|\|{\bf W}\|\|^2\_{2,1}$ |8.44|18.42|22.56|9.80|32.05|36.53|
|+AN| ${\bf a}_i\:={\bf a}_i\\!\cdot\\!{\bf z}; {\bf z}\\!\sim\\!\mathcal{N}({\bf 1},\beta^2{\bf I})$ |**9.16**|**10.14**|**13.80**|**11.22**|**23.76**|**28.34**|

In addition, **Figure 3 in the rebuttal pdf** shows that AN achieves higher classification accuracy than AAN and AWR, demonstrating that the multiplicative noise preserves the feature semantics better than the additive noise and the weight regularization.

## 5. Fig. 2(b) shows the gradient norm at the layer before the classification head for fake images increases.

This is because when we train the generator, we optimize it by minimizing:

$-h\_\text{AN}({\bf a}\_f)\approx-h({\bf a}\_f)-\frac{\beta^2}{2}\frac{\partial^2 h}{\partial f^2}|\_{{\bf a}_f}f''({\bf a}_f){\bf a}_f^2.$

This means the generator is trained to generate images to the point where the gradient norms of the discriminator can become large. Please also see **Resp. 3 to Rev. 3 (LrCf)** for a discussion of NICE$\_{D_f}$ and NICE$\_{G_f}$, and **Figure 2 in the rebuttal pdf** for the gradient analysis.

## 6. Results of different ablation models on CIFAR-100 on OmniGAN ($d'=256$)

Kindly see below a reprint of Figure 4(a), which is a table in the main paper.
|Method||10\% CIFAR-100||
|-|-:|:-:|:-|
||IS|tFID|vFID|
|OmniGAN|8.19|45.41|50.33|
|+AWD|9.64|37.68|42.01|
|+AN+AGP|10.72|27.73|32.15|
|+NICE$_{add}$|9.34|28.59|33.02|
|+DA|10.16|24.50|28.96|
|+ADA|11.23|23.11|27.58|
|+AACR|11.37|21.42|25.76|
|+NICE|**11.50**|**16.91**|**21.56**|

We also provide additional gradient analysis in **Figures 1\&2 in the rebuttal pdf**.
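The expectation identity behind the dynamic penalty in Resp. 4 (multiplicative noise acting as a feature-norm-weighted weight penalty) can be checked numerically. Below is a minimal pure-Python sketch (ours, not the authors' code); we write the penalty in the exact per-column form $\beta^2\sum_j a_j^2\,\|{\bf W}[:,j]\|_2^2$, which is our reading of the $(2,1)$-type penalty, and all matrices and values are illustrative.

```python
import random

# check: for z ~ N(1, beta^2 I),
#   E_z ||y - W(z*a)||^2 = ||y - W a||^2 + beta^2 * sum_j a_j^2 * ||W[:,j]||^2
W = [[0.5, -1.0, 0.3],
     [0.2,  0.8, -0.6]]
a = [1.0, 0.0, 2.0]   # channel 2 is inactive (a_j = 0)
y = [0.4, -0.2]
beta = 0.3

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def sq_err(y, p):
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p))

# closed form: base loss + per-column dynamic penalty
base = sq_err(y, matvec(W, a))
penalty = beta**2 * sum(a[j]**2 * sum(row[j]**2 for row in W)
                        for j in range(len(a)))

# Monte Carlo estimate of the left-hand side
rng = random.Random(0)
n = 200_000
mc = 0.0
for _ in range(n):
    z = [rng.gauss(1.0, beta) for _ in a]
    mc += sq_err(y, matvec(W, [zj * aj for zj, aj in zip(z, a)]))
mc /= n

print(mc, base + penalty)  # agree closely
```

Note that the inactive channel ($a_j=0$) contributes nothing to the penalty: the multiplicative modulation cannot "activate" semantics that are absent, unlike additive noise.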
Inverse Reinforcement Learning with the Average Reward Criterion
Accept (poster)
Summary: This paper addresses Inverse Reinforcement Learning (IRL) in the average-reward setting. The authors propose a Stochastic Policy Mirror Descent (SPMD) method to solve the Average Reward Markov Decision Process subproblem and use SPMD to propose the Inverse Policy Mirror Descent method to solve the IRL problem. The authors provide complexity results for these methods and experimentally validate them using the MuJoCo benchmark. Strengths: - The authors develop a novel method and show its merits both theoretically and experimentally compared to existing methods. - The theoretical analysis is very thorough and the authors clearly outline the assumptions that are made. - The proposed IPMD method achieves good results on a variety of environments. Weaknesses: - The paper is a bit light on experiment details (e.g. hyperparameters). More detail here would be appreciated (e.g. see Questions). - It’d be nice if the authors could provide an intuitive explanation for the result of Theorem 3.4. Equation 25 is a bit difficult to understand. - The authors make a fair number of assumptions in their theoretical analysis. It'd be nice if the authors added some discussion on which assumptions hold in practice and/or how they enforce those assumptions, as well as how restrictive these assumptions are. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Does Assumption 3.2 hold in practice? Do the authors do anything to enforce these constraints? - What is $\omega$ in Equation (3) and Algorithm 1 chosen to be in practice? - Can the authors provide architecture details for the policy, Q network, and learned reward function in the appendix? - Equation 24 in Theorem 3.4 makes an assumption about step size. Is this assumption implemented in practice (i.e. in Algorithm 1 what is $\eta_k$ chosen to be)? - It seems the authors assume access to a large number of expert trajectories (Appendix A.9 says expert demonstrations are collected for five million steps). 
I’m curious how IPMD would perform with a more limited number of expert trajectories (for instance, the f-IRL[1] paper has experiments where they just use 1 expert demonstration). Have the authors considered how number of expert trajectories affects performance of IPMD? [1] f-IRL: Inverse Reinforcement Learning via State Marginal Matching (Ni et al.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere appreciation for your insightful review and thoughtful comments, which greatly contribute to the refinement of our paper. We are pleased to address each of your points below: 1. We recognize the importance of providing comprehensive experiment details, including hyperparameters, to enhance the transparency and reproducibility of our work. In response to your suggestion, we will furnish a more comprehensive account of hyperparameters, encompassing aspects such as learning rates, discount factors, and neural network architectures. Please see our response to later questions and our global reply. 2. We value your input regarding Equation 25 and its comprehension. Equation 25 encapsulates a convergence overview without specifying explicit step size choices. Such formulations, common in optimization literature, highlight the significance of step size selection on convergence speed. To be specific, $\alpha_k, \beta_k$ are both step sizes, and the central term $\sum \rho_k - \rho^*$ (distance to the optimal value) plus $D(\pi_k, \pi^*)$ (the distance of the current policy to the optimal policy) will shrink as $k$ grows, which is bounded by some combination of $\alpha_k$ and $\beta_k$ plus an irreducible function approximation error $\varsigma$. The left-hand side is a weighted average of function value error plus an average error from step $k=1$ to $k=K$. We acknowledge the less intuitive nature of Theorem 3.4 and will provide a more intuitive explanation of its implications, coupled with illustrative examples, to enhance clarity. For a specific step size choice and the condition on $\mu$, we get different rates of convergence, which is presented in Corollary 3.5. 3. Assumption 2.1 (uniform ergodicity) is considered restrictive but often necessary for analysis, despite potential violations in practice. Assumption 3.2's practicality hinges on model choices; if neural networks are employed, gradient "clipping" may be utilized. 
This also applies to Assumptions 4.1 and 4.2. Assumptions 3.3, 3.6, and 4.3, concerning stochastic estimator proximity, are less restrictive given bounded state and action spaces. Your insight into detecting issues in practice is well-taken, and we will incorporate a comprehensive discussion to highlight these nuances. In practice, it is usually obvious when things go wrong, e.g., when the differential Q-function explodes to 10e6. This happens only on rare occasions, and the culprit is usually detectable, e.g., a large learning rate. Please refer to our overall response for additional discussion. 4. As elucidated in Section 3.1, in practice, we select $h$ and $\omega$ as negative entropy due to the subproblem's entropy-regularized nature. However, alternative distance-generating functions for the Bregman distance beyond the KL divergence are possible, and PMD can accommodate different choices. We will provide additional clarification on this choice and its implications. 5. The policy network employs two fully connected hidden layers of dimension 256 each, taking states as input and outputting an action distribution. Both the Q network and the reward function share the same architecture, with ReLU activation in the hidden layers. A double Q-learning technique minimizes overestimation. We will incorporate comprehensive architecture specifications in the appendix for reference. 6. We appreciate your inquiry into the implementation of the assumption in Equation 24. Our comparative exploration of various step size schedules yielded minimal differences, possibly because the subproblem is optimized via a few gradient descent steps. We will emphasize this aspect's relevance and practical implementation. 7. Your query on IPMD's performance sensitivity to the expert trajectory count is insightful. While unreported at submission, IPMD exhibits strong performance on locomotion tasks involving complex robots with a sole expert demonstration.
For example, in one of our ongoing projects, we test IPMD on the MuJoCo Cassie robot (https://mujoco.readthedocs.io/en/stable/models.html), with the goal of having Cassie walk on random terrain from demonstrations. IPMD reaches an episodic reward of 479.815 from one expert demonstration with an episodic reward of 447.1955, outperforming the expert demonstration. This suggests IPMD's efficiency in utilizing limited expert demonstrations. Once again, we thank you for your comprehensive review and look forward to addressing your valuable feedback in our revisions. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications!
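As background to point 4 of the rebuttal above: when the mirror map is the negative entropy (so the Bregman distance is the KL divergence), a policy mirror descent step has a simple multiplicative closed form on a finite action set. The sketch below is an illustrative toy under those assumptions — it omits the regularizer $h$ and is not the paper's implementation, which handles general state and action spaces.

```python
import numpy as np

def pmd_update(pi, Q, eta):
    """One policy mirror descent step with the KL Bregman divergence
    (negative-entropy mirror map): pi_new(a|s) is proportional to
    pi(a|s) * exp(-eta * Q(s, a)), where Q holds differential
    action-value *costs* (lower is better). Illustrative sketch only;
    the regularizer h from the paper's subproblem is omitted."""
    logits = np.log(pi) - eta * Q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum(axis=1, keepdims=True)

# Toy example: 2 states, 3 actions, uniform initial policy.
pi = np.full((2, 3), 1.0 / 3.0)
Q = np.array([[1.0, 0.0, 2.0],
              [0.5, 0.5, 0.0]])
pi1 = pmd_update(pi, Q, eta=1.0)  # mass shifts toward low-cost actions
```

One step reweights each state's action distribution toward the actions with lower estimated cost while staying close (in KL) to the previous policy; iterating this update is the backbone of the SPMD subroutine discussed in the exchange.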
Summary: In this paper, inverse reinforcement learning is studied when the teacher was using an average-reward criterion rather than discounted rewards with a known discount factor. The paper proposes a stochastic first-order method, starting from stochastic policy mirror descent for MDPs and continuing towards inverse policy mirror descent for solving the IRL problem. The paper also contains some numerical experiments based on MuJoCo benchmarks. Strengths: The paper is very clear and elegant in its use of the average reward method in RL and IRL. There are many papers on duality of this type, although not yet specifically on the average reward criterion; similar considerations were made in NAC papers. The supplementary material is strictly supplementary and very clear and comprehensive. It's great that hints for practical use are included. Weaknesses: “To the best of our knowledge” is not a useful sentence in an abstract, in particular in an anonymous manuscript. It is more useful to discuss such opinions later in the paper in the context of related results (see e.g. previous NeurIPS conferences). Duality has been used in many similar contexts, but it could still be said more clearly why it is a good idea here. It is acceptable if you want to avoid discussing natural AC, but then there is more to explain. Experiments are relatively few, with just final results given, so no understanding of why the proposed method is useful can be gained. There is no discussion of the experimental results; therefore, although one may guess, it is not clear why the proposed method struggles on Ant or whether there is any relation between the performances in Tables 1 and 2. There is no attempt included to check whether the complexity results are tight or whether global optima are indeed found. Check: “Acrot” Assumptions 4.2 and 4.3 are formally stated in the main text, but do not seem to be used. In the appendix only 4.2 is mentioned. Ref.
28 is not mentioned in the text; perhaps a separate bibliography for the appendix would make sense. I find it a bit difficult to distinguish between all the Qs. The distinction between math-Q and cal-Q is probably necessary. For example, tilde-cal-Q (32) could as well be substituted by its definition. It occurs again only in (34) and (38), where at least a reference to the definition needs to be included. In “differential Q-function” (in italics in the main text vs. the theorem environment) different fonts are used; this should be avoided, ideally by using the math-Q font that seems to occur also in the theorem environment. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you explain more about the improvements (as in Humanoid RL) and deficiencies (as in Ant IRL or Walker RL)? What feature of the algorithm contributes to the success or suboptimality of the performance in each problem? Are the assumptions not strictly satisfied, or is the sampling the reason for any suboptimality? Which theorems have appeared in earlier work or have strong similarities to theorems in earlier work? I would assume that 1/(1-gamma) is not much smaller than K, and the difference between them is related to the number of states and actions. In other words, is there a practical need for the O-symbols, or is the complexity already implied by the known factor proportional to 1/(1-gamma)? Why does the performance show variability even for simple problems (see Fig. 1)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors state in the supplementary material: "We anticipate no potential negative societal impacts concerning this research."
which could seem questionable in the context of IRL, because IRL has the potential to uncover hidden motives in legal human behavior without the consent of a person. The research seems fine, but some discussion, beyond the mere statement would be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review and insightful comments. We appreciate your time spent on the review and detailed comments on the paper. First, we want to point out that our paper focuses on the theoretical development of solving the Inverse Reinforcement Learning (IRL) problem under the average-reward setting. Note that almost all analyses of discounted MDPs rely heavily on (some form of) the contraction property given by the discount factor, and their work cannot be directly translated into the average-reward setting. 1. Regarding "To the best of our knowledge" in the abstract, we acknowledge your point and will consider rephrasing it in subsequent revisions. 2. We recognize the need for a clearer justification of the use of duality. In the context of Inverse Reinforcement Learning (IRL), we highlight that directly solving the primal problem presents challenges, including maintaining constraints and ensuring maximum entropy policies, which are not resolved effectively in the community. Exploiting duality has a history in IRL, allowing us to leverage a more structured dual problem, enhancing our ability to devise effective optimization techniques. Regarding using Natural Actor-Critic (NAC), it is true that one may solve the subproblem with NAC. But there is no existing theoretical analysis of the rate of convergence of NAC for solving average-reward Markov Decision Processes (AMDPs). As mentioned in the paper, [Cen 2021] uses NAC to solve a discounted MDP with a finite state and action space. Whether its method and analysis can be adapted to solve our problem (average reward with general state and action spaces and general function approximation) requires nontrivial analysis. Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi. Fast global convergence of natural policy gradient methods with entropy regularization. Operations Research, 2021. 3.
While we acknowledge the relatively limited number of experiments and the absence of an in-depth discussion of results, our primary focus was on the theoretical underpinnings of our proposed methods. The experiments validate our theoretical findings and illustrate competitive or superior performance, even without exhaustive exploration of hyperparameters. Notice that since the performance metric is the episodic return, methods that work with discounted MDPs are approximately solving the AMDP, and their performance is very sensitive to the discount factor. In IRL, guessing the discount factor is crucial to estimating the reward. Our method naturally avoids these problems. 4. We understand your point about complexity analysis and global optima. Given the complexities introduced by function approximation, proving tight complexity bounds or identifying global optima remains challenging. Notice that there are active efforts in the theoretical community to derive a "tight" complexity bound for general Reinforcement Learning, yet no general results have been obtained. Additionally, in the context of non-convex optimization, a global optimum is usually unattainable. Nevertheless, our approach offers insights into practical reward recovery, as demonstrated in the reward recovery experiment comparing our method to Inverse Q-learning (IQL), a discounted method. We show that even when IQL is given the correct discount factor, our method is 10x more accurate. 5. We appreciate your observation about Assumptions 4.2 and 4.3. Assumption 4.2 is crucial in our analysis framework, impacting Lemma A.4 (Equation 72), Lemma A.5 (Equation 78), and Theorem 4.5 (Equation 88), while Assumption 4.3 is essential for Theorems 4.5 (Equation 93) and 4.6 (Equations 104-107). 6. We understand your concern about distinguishing between different Q-functions. Our choice of $\tilde{\mathcal{Q}}$ aims for conciseness, though we will ensure better clarity in referencing definitions.
We will consider using more distinct notations for denoting the differential $Q$-function and its stochastic estimator. 7. We acknowledge the request for a deeper discussion of algorithmic improvements and deficiencies in various environments. For a detailed analysis of the algorithm, please refer to our overall reply. 8. We clarify that our theoretical contributions leverage convex/non-convex optimization techniques. Our formulation of average-reward IRL is novel, bridging the gap between the current understanding of average-reward Markov Decision Processes (AMDPs) and IRL under such settings. While some theorems may appear similar to existing work, adapting them to this novel context requires nontrivial analysis, as evidenced by the cited lemmas. 9. In our complexity analysis, the $O$ notation captures convergence rates, while constants such as $1/(1-\gamma)$ play a lesser role as $K$ can grow sufficiently large. This notation is widely used to convey the speed of convergence. It provides a concise and informative way to describe the growth rate of a function relative to a specific parameter (usually the problem size or the number of iterations). 10. The observed variability in performance, even for simple problems, can be attributed to the stochastic nature of the environments. When an environment is initialized, random noise is added by the simulator, which contributes to the variability. 11. Your ethical concern is valid, and we share your perspective on the responsible use of IRL. Please refer to our overall response for a detailed discussion. Once again, we thank you for your comprehensive review and look forward to addressing your valuable feedback in our revisions.
--- Rebuttal Comment 1.1: Comment: Thank you for the comprehensive reply to my concerns, which confirms, on the one hand, that the points I have made were mostly valid, but, on the other hand, that there is clearly potential to improve the paper, although there is probably no option to check such improvement here.
Summary: This paper proposes an inverse reinforcement learning (IRL) algorithm for infinite horizon average reward Markov decision processes (AMDPs). At first, the authors show the stochastic policy mirror descent (SPMD) algorithm that achieves an $\mathcal{O}(\varepsilon^{-1})$ rate of convergence. Then, the authors propose the inverse policy mirror descent (IPMD) algorithm that achieves an $\mathcal{O}(\varepsilon^{-2})$ rate of convergence. The SPMD algorithm is compared with SAC on the MuJoCo benchmarks, and SPMD achieves on-par performance with SAC. The IPMD algorithm is compared with f-IRL and IQL, and the experimental results show that IPMD is slightly better than f-IRL. Interestingly, the proposed algorithms perform much better than the baselines on the Humanoid environment, which has many degrees of freedom. Strengths: - Originality: RL and IRL algorithms for AMDPs with a strong theoretical basis are novel, although Dvijotham and Todorov proposed IRL under linearly solvable Markov decision processes (LMDPs). - Quality: The paper provides some strong theoretical analysis on AMDPs. - Clarity: The authors show detailed steps to prove the theorems in the Appendix. - Significance: The standard IRL algorithms deal with infinite horizon discounted reward problems under the assumption that the discount factor is known in advance. This is problematic because the expert's discount factor is usually unknown. The proposed method is promising because it can avoid tuning the discount factor. Weaknesses: - Some recent studies regarding AMDPs are not mentioned in the manuscript. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Major comments - Is the SPMD algorithm for AMDPs (Algorithm 1) an on-policy method? The objective function for training $\bar{\mathcal{Q}}^{\pi, \zeta}$ (35) suggests an on-policy algorithm, but I did not find the explanation. In addition, (35) does not use a target network. Is this a benefit of the on-policy property?
- I do not fully understand why $\nabla \omega (\pi)$ is usually unavailable, as described in Lines 143-144, because $\omega$ is selected by ourselves. Furthermore, it is unclear why its estimator is parameterized by $\phi$ because $\pi$ is parameterized by $\xi$ (33). - Please show $g(\theta; \zeta)$ in detail because it is used to update $\theta$. For example, is $g(\theta_k; \zeta^\pi_k)$ the estimator of $\mathbb{E}_{(s, a) \sim d^\pi} [\nabla_\theta c(s, a; \theta)]$? - The experimental results are promising, but one potential reason for the successful results is that the reward functions of the MuJoCo benchmark are well-shaped. For example, the reward of Walker2D is given by healthy_reward + forward_reward - control_cost: https://gymnasium.farama.org/environments/mujoco/walker2d/ . It may imply that the average reward formulation is more appropriate than the discounted reward one in the MuJoCo benchmark. My interest is the performance of the proposed method when it is evaluated in sparse reward settings such as navigation tasks. Minor comments - The following papers propose frameworks based on stochastic mirror descent for AMDPs. Please discuss the relationship between these papers and the proposed method. - Y. Jin and A. Sidford. (2020). Efficiently Solving MDPs with Stochastic Mirror Descent. In Proc. of ICML. - G. Neu et al. (2017). A unified view of entropy-regularized Markov decision processes. http://arxiv.org/abs/1705.07798 . - C.-Y. Wei, et al. (2020). Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes. In Proc. of ICML. - As mentioned in Strengths, the following paper proposes a method to estimate the reward function under LMDPs: K. Dvijotham and E. Todorov. (2010). Inverse Optimal Control with Linearly Solvable MDPs. In Proc. of ICML. It would be better to discuss their paper. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Minor comment: The authors briefly mention issues on potential societal impacts in A.10. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time reviewing our paper. Your points are valid with respect to the arrangement of the paper. We appreciate your keen observation and suggestion regarding recent studies on Average-Reward Markov Decision Processes (AMDPs) that may contribute to our work. While we strive to provide a comprehensive review of relevant literature, we acknowledge that some recent studies might not have been explicitly referenced in the manuscript. We will conduct a thorough review of the literature to identify and appropriately reference any pertinent works that can enrich our discussion and provide context for our contributions. 1. The SPMD algorithm for AMDPs, as outlined in Algorithm 1, is designed to accommodate both on-policy and off-policy training modes. In practice, our RL algorithm implementation is based on off-policy training schemes with a target network, similar to the approach employed in SAC. However, the adaptability of SPMD allows for on-policy training as well. 2. We understand your query regarding the availability of $\nabla \omega(\pi)$. By "unavailable" we meant that it requires stochastic estimation. While we can indeed choose $\omega$ ourselves, the precise value of $\omega(\pi)$ and its gradient $\nabla \omega(\pi)$ need estimation, just as the differential $Q$-function does. Notice that even though $\omega$ can be a general distance-generating function other than negative entropy, evaluating its value might require stochastic estimation from the samples we collected. Regarding the parameter $\phi$, we note that in cases where the action value function is parameterized separately from the policy, $\nabla \omega(\pi)$ might not share the same parameters as $\xi$ in Equation (33). $\omega$ may also take the form of a neural network or another model, whose parameter we denote $\phi$, the same as for the $Q$-function. We recognize that this aspect could be clearer in our explanation and will revise the manuscript accordingly. 3.
We appreciate your request for more detailed clarification regarding $g_k$. In practice, we employ a simple stochastic estimator using sample averages, denoted as $g(\theta; \zeta)= \tfrac{1}{N} \sum_{t=1}^N \nabla c (s_t,a_t;\theta)$, where $N=|\zeta|$ represents the number of samples in the collection $\zeta$. This estimator aids in the update of $\theta$ as part of the algorithm. In this sense, $g(\theta_k;\zeta_k)$ is indeed the estimator of $\mathbb{E}_{(s,a)\sim d^{\pi_k}}[\nabla_{\theta} c(s,a;\theta_k)]$. 4. Your observation about the well-shaped reward functions in the MuJoCo benchmark is insightful. Indeed, the presence of dense rewards in MuJoCo tasks contributes to the success of our proposed method. We acknowledge your interest in the performance of our method in sparse reward settings, such as navigation tasks. This is an area of interest for us as well, and we plan to explore and discuss the behavior of our algorithm in such environments in future work. 5. Thank you for pointing out the relevance of [Jin and Sidford], [Neu et al. 2017], and [Wei et al., 2020] in the context of Mirror Descent-based methods. [Jin and Sidford] solve the average reward problem using linear programming-based methods, which are not comparable to policy gradient methods in practice due to scalability issues, although their analysis yields a novel complexity bound. [Neu et al. 2017] propose an entropy-regularized Reinforcement Learning framework, which covers important policy gradient methods and casts them as mirror descent or dual averaging. However, no convergence analysis is provided. [Wei et al., 2020] is more comparable to our RL analysis. However, a regret bound is known to be different from a general convergence analysis. Nevertheless, all the above works consider MDPs with finite state and action spaces. Our work considers a more general setting where states and actions can be continuous instead of categorical, for example, in robotics research and animal behavior studies. 6.
We appreciate your reference to Dvijotham and Todorov's work on IRL under linearly solvable MDPs (LMDP). While their approach offers an effective control-oriented perspective on IRL, we recognize that using Maximum Likelihood Estimation may face limitations when applied to larger and more complex problems, such as those involving continuous state and action spaces. Our choice to formulate the problem under a maximum entropy framework accounts for these challenges and allows for a more robust and flexible approach. We will certainly expand on the discussion regarding the relationship between our approach and the LMDP framework in our forthcoming revision. Once again, we sincerely thank you for your thoughtful review, which has significantly contributed to the refinement of our paper. Please kindly refer to our global response for societal impacts.
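The sample-average gradient estimator $g(\theta;\zeta)$ described in point 3 of this rebuttal is straightforward to realize. The sketch below assumes a linear cost $c(s,a;\theta)=\theta^\top\phi(s,a)$ — a hypothetical choice for illustration, not the paper's model — for which $\nabla_\theta c$ is just the feature vector $\phi(s,a)$, so $g$ reduces to the mean feature over the collected batch.

```python
import numpy as np

def grad_estimator(features):
    """g(theta; zeta) = (1/N) * sum_t grad_theta c(s_t, a_t; theta).
    For the illustrative linear cost c(s, a; theta) = theta @ phi(s, a),
    grad_theta c = phi(s, a), so the estimator is the mean feature
    vector over the N = |zeta| collected state-action samples
    (theta itself drops out of the gradient in this linear case)."""
    return features.mean(axis=0)

rng = np.random.default_rng(0)
phi = rng.normal(size=(512, 4))  # N = 512 samples of 4-dim features phi(s_t, a_t)
g = grad_estimator(phi)          # used in the reward update, e.g. theta -= lr * g
```

With a nonlinear (e.g. neural network) cost, the same estimator applies with per-sample gradients computed by automatic differentiation instead of raw features.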
Summary: This paper aims to address the problem of inverse reinforcement learning (IRL) under the maximum entropy framework (MaxEnt-IRL) and an average reward criterion. The MaxEnt-IRL problem is formulated as a combination of an average reward Markov decision process (AMDP) and a dual IRL problem. The proposed algorithms, namely the SPMD algorithm and the IPMD algorithm, are utilized to solve the AMDP and dual IRL problem, respectively. The SPMD algorithm achieves a gradient computation step complexity of $\mathcal{O}(1/\epsilon)$ under general state and action spaces, while the IPMD algorithm has a complexity of $\mathcal{O}(1/\epsilon^2)$. The empirical studies conducted on the MuJoCo benchmark and various control tasks confirm the theoretical findings. Strengths: 1. The idea of formulating the IRL problem as a dual IRL problem, incorporating an AMDP as a subproblem, is interesting. 2. The paper extends a previous RL algorithm [1] to general state and action spaces with a general function approximation class for AMDPs. The paper is clear and easy to follow. 3. The proposed algorithms are well-founded and supported by theoretical analyses of average reward Markov decision processes. The convergence analysis of the algorithms is provided under certain assumptions on the approximation function classes. Overall, the paper is well-written and easy to follow. The proposed algorithms are well-motivated and supported by theoretical foundations. However, certain assumptions regarding function approximation classes and the critic step may be too restrictive, potentially resulting in the failure of convergence in scenarios involving deep neural networks. Despite these concerns, the theoretical contributions of the paper could be of interest to the community, and I recommend the acceptance of the paper. [1] Tianjiao Li, Feiyang Wu, and Guanghui Lan. Stochastic first-order methods for average-reward Markov decision processes. arXiv preprint arXiv:2205.05800, 2022.
Weaknesses: 1/ The advantages of employing the average reward criterion over discounted IRL are not adequately explained in the paper. 2/ Assumption 3.2 assumes weak convexity and Lipschitz continuity for the approximated Q-functions. These assumptions may not hold in practice, particularly when neural networks are used for function approximation. As the global convergence result of SPMD relies on the convexity of the approximated Q-function and the regularizer h, it may not hold in practical scenarios. 3/ Assumption 3.6 imposes a restriction that the errors of the critic step, typically represented by neural networks, are bounded. This assumption may be overly restrictive, as estimations can be highly inaccurate in many cases. The convergence result for general function approximation classes may also fail to hold in practice. 4/ When the reward function could be parameterized by neural networks, which is normally the case in practice, Assumption 4.1 would be too strong. 5/ In the reward recovery experiment, it would be beneficial to compare the proposed algorithms with discounted IRL algorithms since the expert is trained in a discounted environment (the author trained the expert agent using SAC with a discount factor of 0.99). 6/ Including the running time of the proposed algorithms and the baselines would provide valuable insights. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Seen in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Seen in the weaknesses section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your thoughtful review and the points you've raised. Your insights have been invaluable in refining our paper. Below, we address each of your comments: 1. We apologize for any lack of clarity in explaining the advantages of employing the average reward criterion over discounted IRL. We appreciate your comprehensive understanding of this aspect. Discounted IRL has challenges posed by guessing the discount factor. Our average-reward criterion formulation addresses this issue effectively. First, considering demonstrations using an average-reward metric, our method excels in both reward and policy recovery. Furthermore, when the demonstration indeed uses a discount factor, but the discount factor is unknown, our approach yields improved reward estimates without the need for manual discount factor guesswork. We will enhance our paper to elucidate these advantages more explicitly. 2. We thank you for highlighting the potential limitations of Assumption 3.2, particularly in the context of neural network-based function approximation. You're correct in noting that neural networks don't guarantee Lipschitz continuity. However, as we assume the state and action space are bounded sets, it is reasonable to assume there exists a Lipschitz constant, which can be fairly large. Regarding the convexity of the approximated Q-function, we have additional analysis when the approximated Q-function plus the regularizer h is not convex, which leads to SPMD converging to a stationary point. See Theorem 3.7. Please refer to our overall response for a detailed discussion. 3. Your observation about the potentially restrictive nature of Assumption 3.6 is insightful. We recognize that bounded errors may not accurately capture all cases, especially when dealing with neural network-based estimations. Your suggestion to consider relaxing this assumption aligns with our intention to make our approach more applicable to a broader range of scenarios. 
We will explore ways to alleviate this assumption while ensuring the reliability of our convergence results. Please refer to our overall response for a detailed discussion. 4. We appreciate your insight regarding the applicability of Assumption 4.1 when dealing with neural-network-parameterized reward functions. You're right that neural networks don't inherently guarantee Lipschitz continuity or bounded gradients. But as argued above, since the state space and the action space are compact (bounded and closed), the parameterized reward function is indeed Lipschitz continuous as long as its gradient is bounded, which is much less restrictive. It is even less restrictive when simpler models are preferred. For instance, the assumption aligns well with linear reward functions over feature spaces, commonly used in animal behavior studies, which leaves room for further inference and interpretation. We will enhance the clarity of this rationale in our manuscript. 5. Both IQL and f-IRL are trained with the same discount factor of 0.99 in all experiments, the same as the expert demonstrations. 6. We thank you for highlighting the value of including running time information for our proposed algorithms and baselines. Our RL algorithm requires the same computational effort as similar algorithms in stable-baselines, such as PPO, SAC, etc. Training an agent in a single-thread environment with 5 million steps typically takes around 3.5 hours on an Apple M1 laptop. Our IRL algorithms, although they carry out additional reward estimation in each step, do not impose much additional computational burden. Training an agent in a single-thread environment with 5 million steps takes around 3.5 to 4 hours. Please refer to our overall response for all running times. Notice that Humanoid is a larger instance and thus takes longer to train. Once again, we extend our gratitude for your thorough review and constructive feedback, which have significantly contributed to improving our paper.
--- Rebuttal Comment 1.1: Title: Thank you for the clarification! Comment: Thank you for the clarification!
Rebuttal 1: Rebuttal: 1. Regarding assumptions being too restrictive: We acknowledge that some assumptions made in the paper are restrictive. However, our particular problem setting has special implications for the assumptions we made. Assumption 2.1 (uniform ergodicity) is restrictive but often necessary for analysis, though it may be violated in practice. Assumption 3.2's practicality hinges on model choices; if neural networks are employed, gradient "clipping" may be utilized. This also applies to Assumptions 4.1 and 4.2. Assumptions 3.3, 3.6, and 4.3, concerning stochastic estimator proximity, are less restrictive given bounded state and action spaces. There are possible remedies to encourage our algorithm to satisfy those assumptions. First, we can "clip" large gradients at a constant, or add penalty terms on the norm of the gradient. Regarding the span semi-norm contraction property, we can construct a $J$-step operator in the policy evaluation step, which is well explored in the literature (see [Puterman 1994], [Zhang 2021]). Puterman, M. L. (2014). Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons. Zhang, S., Zhang, Z., Maguluri, S. T. (2021). Finite Sample Analysis of Average-Reward TD Learning and $Q$-Learning. Advances in Neural Information Processing Systems, 34, 1230-1242. In practice, it is usually obvious when things go wrong, e.g., when the differential Q-function explodes to 10e6. This happens only on rare occasions, and the culprit is usually detectable, e.g., a large learning rate. 2. Hyperparameter settings: The policy network employs two fully connected hidden layers of dimension 256 each, taking states as input and outputting an action distribution. Both the Q network and the reward function share the same architecture, with ReLU activation in the hidden layers. A double Q-learning technique minimizes overestimation.
During training, we found that setting the entropy coefficient to 0.01 makes training stable and efficient. The learning rate is set to 3e-4. Each step of the algorithm samples 512 state-action pairs. Additionally, we include the running times of APMD and IPMD for all MuJoCo tasks in the attached pdf file. 3. Experiments analysis: We acknowledge the request for a deeper discussion of algorithmic improvements and deficiencies in various environments. While further performance analysis and tuning are indeed possible, our primary focus was on theoretical foundations. One possible reason Ant lags behind is that Ant has more ground contacts since it has more legs. This impacts the ergodicity of the MDP and our assumption on the 1-step contraction: it is harder to transition from an arbitrary state to another arbitrary state, as doing so involves multiple legs working together. One remedy is to construct a $J$-step contractive operator for policy evaluation, as done in [Chen 2021]. The success on Humanoid is possibly due to a slightly different policy evaluation scheme in which the entropy term from the policy no longer plays a part, as the term $r-\rho$ cancels out the additional regularization and entropy of the policy. This differs from the discounted setting, and we suspect it brings more stable training and thus higher performance. 4. Ethical discussion: Our focus is on methodology, and we acknowledge that IRL, like many other machine learning techniques, has potential implications if misused. IRL can be used to violate privacy (as the reviewer mentioned) by inferring an individual's intentions and preferences, potentially crafting convincing social engineering attacks or phishing attempts; IRL can also be used to model the behavior of specific demographics, which could result in biased algorithmic decision-making, leading to unfair treatment or discrimination against certain groups.
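The gradient "clipping" remedy mentioned in point 1 above can be sketched in plain Python as follows; this is an illustrative sketch (in practice one would use a framework utility such as `torch.nn.utils.clip_grad_norm_`), not the authors' implementation:

```python
import math

def clip_gradient_by_norm(grads, max_norm):
    # Rescale a flat list of gradient values so that their global L2 norm
    # does not exceed max_norm; gradients already within the budget are
    # returned unchanged.
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return list(grads)
    scale = max_norm / total
    return [g * scale for g in grads]
```

Clipping by a constant in this way bounds the effective step size, which is one way to encourage the boundedness assumptions discussed above.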
Pdf: /pdf/64dbb816e73392ef084828157a31c0043dad080e.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Model-Free Active Exploration in Reinforcement Learning
Accept (poster)
Summary: In this paper, the authors propose a way of obtaining tighter PAC bounds for model-free reinforcement learning. The new theoretical results allow the authors to propose new practical methods for exploration in both discrete and continuous state spaces. The proposed algorithms use ensembles of Q-values, and the results are very competitive when compared with model-based approaches. Strengths: * The new methods adapt to specific problems in an automated way. * The paper is well written, and the discussion is supported by relevant citations. The contributions are backed up by both theoretical arguments and empirical results on a range of RL domains. * The contributions seem relevant and important. The results indicate that the proposed methods have merit. * The paper comes with a comprehensive and long appendix (which would be a pain to read for a busy reviewer), but future readers will most certainly appreciate it. The appendix provides many details that contribute to the quality of the work. Weaknesses: * The authors should make it clear from the start that the new algorithms are ensembles. After reading a few pages of the paper, I was anticipating a clever algorithm that would be purely based on the proposed theory. It was a bit disappointing that ensembles were used in the end. I am not saying this is bad (there is no free lunch), but it would be fair to expect that, when the new algorithms are mentioned for the first time, ensembles are mentioned too. * The paper argues that model-based approaches are expensive to run. The methods presented in this paper are ensembles. What is their runtime? Are they really faster than model-based methods? * Since the new methods proposed in this paper are ensembles, I would expect some discussion on ensembles in RL. I would expect that there must exist ensemble-based RL methods that could be competitive with the methods presented here. A literature review on ensembles in RL would be useful.
A classic paper on this is: Wiering, M.A. and Van Hasselt, H., 2008. Ensemble algorithms in reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4), pp.930-936. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * How exactly is the aleatoric uncertainty in the value function accounted for by the methods proposed in this paper? * There is a probability $p$ in Alg. 1. It would be useful if its role and rationale were explained. * The use of the variance of the value function is slick, and the authors are careful in saying that this addresses the aleatoric uncertainty only, but it would be useful if the authors explained why epistemic uncertainty is not addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: I tried to discuss this in the weaknesses and questions sections above. Even though some of my comments may appear critical, I am very positive about this paper. This is solid work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate your review and positive endorsement of our paper's merits. We are particularly pleased that the reviewer recognizes the innovative approach to tighter PAC bounds, the quality of the writing, and the comprehensive appendix. > How exactly is the aleatoric uncertainty in the value function accounted for by the methods proposed in this paper? [...] it would be useful if the authors explained why epistemic uncertainty is not addressed by the variance of the value function. Your questions on aleatoric uncertainty, and on why epistemic uncertainty is not addressed by the bound, are both very important. 1. Regarding the former question, the variance of the value function in the next state represents a sort of "difficulty" in learning the optimal policy by looking at future trajectories (due to the stochasticity of the MDP). It quantifies the dispersion of values in the next state. While the sub-optimality gap provides a measure of how far our current policy is from the optimal one, the variance measures the uncertainty in the value estimate due to the aleatoric nature of the MDP. Finally, our bound generalizes to even-order moments of the value function, but the rationale remains the same. 2. Regarding the latter question, to our understanding, the reason parametric/epistemic uncertainty is not addressed by the bound is that the result that we, along with [1], propose can only be achieved asymptotically, i.e., when the parametric uncertainty has vanished (when all state-action pairs have been visited infinitely often). Our take on this is that the derivation of the bound does not account for the current uncertainty in the estimates of the $Q$-values and $M$-values. Using the certainty equivalence principle, we simply use the plug-in estimator of these values, without considering this uncertainty.
This opens up an exciting research direction on how to incorporate this uncertainty knowledge in the derivation of the bound (perhaps using Bayesian methods). To overcome this issue, we leverage a bootstrapping approach to characterize this type of parametric uncertainty and use the ensemble to sample the $(Q,M)$-functions to derive the allocation strategy through Corollary 5.1. We hope that this explanation is clear and addresses the reviewer's questions. > There is probability $p$ in Alg. 1. It would be useful if its role and rationale were explained. Regarding the mask probability $p$, it is a user-chosen hyper-parameter that draws similarity to classical bootstrapping and can speed up the learning process. The higher the value of $p$, the less accurate the characterization of the parametric uncertainty of the $Q,M$-values will be, while a smaller value of $p$ may compromise exploration efficiency. While we also discuss the algorithm in more detail in the appendix, we will make sure to include a more detailed explanation of the masking probability in the revised version of the manuscript. > Clarification on Ensemble Approach. We understand and share the concern regarding the late introduction of the ensemble approach in our algorithms. To be more transparent, we will introduce the ensemble aspect earlier in the paper. We will also include a comprehensive review of ensembles in RL, incorporating the cited classic paper to contextualize our work. Regarding the use of ensembles, we found that randomized prior value functions provide an excellent way to reduce the computational complexity of model-based methods. We typically do not need to increase the number of ensemble members but simply tune the scale parameter of the random prior function. Lastly, we believe that model-based approaches are useful depending on the problem and can be used efficiently in conjunction with our method.
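The role of the mask probability $p$ described above can be sketched as follows; this is an illustrative NumPy sketch in the spirit of classical bootstrapped DQN masking, not the authors' implementation:

```python
import numpy as np

def bootstrap_masks(batch_size, n_ensemble, p, rng=None):
    # Bernoulli(p) masks: entry (i, j) = 1 means ensemble member j trains
    # on transition i. Larger p means members see more shared data (less
    # diversity, hence a coarser characterization of parametric
    # uncertainty); smaller p slows learning for each member.
    rng = rng if rng is not None else np.random.default_rng(0)
    return rng.binomial(1, p, size=(batch_size, n_ensemble))
```

Each ensemble member is then updated only on the transitions its column of the mask selects, which induces the diversity used to approximate the parametric uncertainty of the $(Q,M)$-functions.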
Thank you once again for the detailed feedback and your overall favorable view of our work. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for answering reviewers' questions. I don't have any other questions.
Summary: The paper introduces a model-free exploration approach developed on an information theoretical basis. Firstly, the lower bound on the number of samples for a near-optimal policy is estimated, and based on this lower bound, the paper develops an exploration strategy for both tabular and deep RL approaches. The exploration strategy is further validated via experiments, where it is found to be superior to other competing approaches. Strengths: The paper is generally well written and addresses an important problem. The developed exploration strategy having an information theoretic basis sets it apart from some of the previous approaches which have mainly been based on heuristics. Weaknesses: The paper does not include experiments in more complex environments such as Montezuma’s revenge, where exploration is known to be a key challenge. A more detailed summary of the main intuitions and theoretical results from [1] could have made the paper more readable. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can more challenging environments possibly be added? If not, are there any fundamental limitations that make this infeasible? 2. Wouldn’t the approximation errors in the deep RL version of the algorithm affect the suboptimality gap? Is this somehow accounted for? 3. Apart from the current results, perhaps for some of the environments, the actual exploration trajectories could be traced/charted out to explicitly show how the exploration is modified. I noticed something along these lines included in the appendix, but it would be good to bring similar results forward into the main text. 4. It would benefit readers to include more details about the main results in [1]. 5. Although it may be obvious, in the cartpole swingup environment, it would be good to include a brief paragraph regarding why/how increasing k makes the task more difficult. 6. In line 297, what is N? 7. 
I am not sure why Fig 3 shows the performance in Riverswim vs $|S|$ and in ForkedRiverswim vs $N$. 8. How is $\Delta_{min}$ initialized in the Deep RL version? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Currently, there is no explicit section outlining the limitations. I suggest the authors include a short paragraph about this covering at least the basic assumptions and requirements for the proposed approach to be applicable. Perhaps some comments regarding the scalability of the approach to more complex environments could also be added if that is indeed an issue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insights and constructive feedback. We appreciate the effort you invested in reviewing our paper and have carefully considered your suggestions. Below, we respond to your queries and address the highlighted concerns. > Currently, there is no explicit section outlining the limitations. We actually discuss limitations, including basic assumptions and requirements, in the appendix (right after the table of contents, page 17) due to lack of space. > What is $N$? Why does Fig 3 show the performance in Riverswim vs $|S|$ and in ForkedRiverswim vs $N$? How is $\Delta_{min}$ initialized? We understand the confusion. The parameter $N$ is more properly explained in the appendix, Section A.2 (page 19). The ForkedRiverswim environment consists of two rivers, each of length $N-1$, plus the starting state (that is why $|S|=2N-1$). To improve clarity, we can change the $x$-label of Figure 3 to maintain consistency. Regarding $\Delta_{min}$, we suggest initializing it to a small value. In the attached code, one can see that we initialized it to $10^{-6}$ for the Slipping DeepSea problem and to $10^{-2}$ for the Cartpole Swingup problem. > Wouldn't the approximation errors in the deep RL version of the algorithm affect the suboptimality gap? We understand the reviewer's concern. In this case, the approximation error that is introduced is, in a sense, modeled by the parametric uncertainty, and thus covered by the technique that we propose (as long as the neural networks are expressive enough to model the true $Q$-value function). > [...] the actual exploration trajectories could be traced/charted out to explicitly show how the exploration is modified. We appreciate your suggestion to bring forward some of the exploration trajectories into the main text of the paper. This could indeed help readers better understand the practical impact of our exploration strategy.
We will consider this for the final version of our paper, perhaps by adding an image in the introduction. > Can more challenging environments possibly be added? If not, are there any fundamental limitations that make this infeasible? As for conducting more complicated experiments, we understand the reviewer's concern. There are no fundamental limitations that make this infeasible. We simply used hard-exploration environments that are used, for example, by BSP [39] and the DeepMind BSuite library (where the DeepSea problem and the Cartpole Swingup problem are proposed to assess exploration properties). Furthermore, we used a more difficult version of the classical DeepSea environment (because of the slipping probability), and evaluated all environments at various difficulty levels. However, we will try to include more environments and add plots to improve clarity. > It would benefit readers to include more details about the main results in [1]. Your suggestion to include more details about the main results from [1] is noted. We initially aimed for conciseness, but understand that more context could make the paper more readable. We will strive to strike a better balance in the revised version by adding a more detailed explanation. > In the Cartpole Swingup environment, it would be good to include a brief paragraph regarding why/how increasing $k$ makes the task more difficult. For a detailed explanation of the environment, please refer to Section A.6 of the appendix (page 27). While we are constrained by the page limit for the main paper, we understand the reviewer's request for more details. We will make sure to clarify where to find more information in the main text of the manuscript. In the Cartpole Swingup problem, the agent incurs a negative reward unless the cart's position and the pole's angle satisfy, respectively, the conditions $|x|<1-k/20$ and $\cos(\theta)>k/20$.
Therefore, we see that as $k$ increases, the agent needs to learn a more stabilizing controller to collect positive rewards. Thank you again for your review and for being overall positive about the novelty and soundness of our approach. --- Rebuttal Comment 1.1: Title: Thanks for your responses Comment: I thank the authors for their responses. Overall, the paper could be improved in terms of readability. It seems most of the comments I raised are already addressed in the Appendix. I urge the authors to either bring these into the main paper as much as possible, or at least include sentences in the main paper pointing to the appropriate Appendix locations where these are addressed. It would also benefit readers if the high-level messages behind the mathematical results were better emphasized to bring clarity to the overall contribution. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your comments and feedback. In the revised version of the manuscript, we will address the points you raised and, to do so, move some parts from the appendices to the main document. We will also discuss in more detail the intuition behind our main mathematical results. Thank you again!
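The Cartpole Swingup reward condition $|x|<1-k/20$, $\cos(\theta)>k/20$ described in the rebuttal above can be sketched as follows; the reward magnitudes ($\pm 1$) are an illustrative assumption, not taken from the paper:

```python
import math

def swingup_reward(x, theta, k):
    # Positive reward only when the cart position and pole angle satisfy
    # |x| < 1 - k/20 and cos(theta) > k/20; otherwise negative.
    # As k grows, both conditions shrink the rewarding region, so a more
    # stabilizing controller is needed.
    if abs(x) < 1 - k / 20 and math.cos(theta) > k / 20:
        return 1.0
    return -1.0
```

For example, at $k=5$ the cart must stay within $|x|<0.75$ and the pole within $\cos(\theta)>0.25$ to earn positive reward.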
Summary: The authors propose a model-free approach to exploration in RL that is based around best policy identification. This technique, unlike prior work, uses stochastic approximation to learn a lower bound on the policy performance based on collected samples. Strengths: - The paper presents a model-free approach that is heavily guided by theory, and results show how this technique can be effective in improving exploration, in terms of both improved performance and reduced performance variance. Weaknesses: - The paper is a bit tough to get through; being so theoretically heavy, some intuitive high-level explanation of the implications would be really helpful for the reader to follow along with the thought process of the authors. - It is unclear where the derivations of the many theorems are in the text. For example, looking for a derivation of how the authors reached Corollary 5.1, the closest reference I found in the appendix was Section B.3, but that is just an explanation of how to use that corollary. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Could you please clarify where the proofs and derivations of the different theorems are? For instance, for Theorem 4.2 there is a nice explanation of the implications of the theorem, but it is not stated how that upper bound is derived. The same is true of Corollary 5.1. Please clarify this, as the evaluation of the soundness of the paper heavily relies on being able to find how these theorems were proved. - Algorithm 1 only samples actions according to the allocation omega; does this mean that over time omega converges to pi^*? - In Corollary 5.1, the denominator in H_epsilon contains the term delta_min, which is defined in terms of the optimal policy for the MDP. How are you sampling according to Corollary 5.1 (Algorithm 1) when the optimal policy pi^* is needed to compute that value? This is a bit unclear.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and positive feedback. We appreciate the time you took to review our paper and have taken your suggestions into consideration. We address your questions and discuss the perceived weaknesses below. > Could you please clarify where the proofs and derivations of the different theorems are? [...] Please clarify this, as the evaluation of the soundness of the paper heavily relies on being able to find how these theorems were proved. We apologize for any confusion regarding the location and details of the derivations. The derivations of all the proofs can be found in the appendix, Section C. In Section C.1, we derive some general results, and later, in Section C.2, we derive the alternative bound. The main lemma used to prove Theorem 4.2 is Lemma C.10 (line 1156 in the appendix). Corollary 5.1 is proved in Section C.2.3 (see Corollary C.8). Please note that the bound that we derive in the appendix is \emph{more general} than the one we present in the main paper. In fact, the lower bound we find in the appendix also holds for MDPs with multiple $\varepsilon$-optimal policies. We also want to thank the reviewer for asking this question; we will include additional references in the revised manuscript to clarify where to find the proofs of Corollary 5.1 and Theorem 4.2. > Algorithm 1 only samples actions according to the allocation $\omega_t$, does this mean that over time $\omega_t$ converges to $\pi^\star$? Note that the allocation $\omega_t$ is an exploration policy, and it _should not_ converge to the greedy policy $\pi^\star$. However, the samples that we obtain while exploring should be used to learn the greedy policy $\pi^\star$ in an off-policy way. Therefore, this method is inherently off-policy. As a final remark: the exploration policy $\omega_t$ is derived from Corollary 5.1.
When applied in a model-free fashion, as we do, it should converge to the optimal allocation given by Corollary 5.1 projected onto the set defined by the navigation constraints. For the interested reader who wants to try a toy example, in the code you can find in `BoundsAnalysis/utils/utils.py` the functions to compute the projection given an allocation and the transition function (`project_omega` or `compute_stationary_distribution`). > How are you sampling according to corollary 5.1 (algorithm 1), when the optimal policy $\pi^\star$ is needed to compute that value? The confusion surrounding the use of the optimal policy in Corollary 5.1's sampling is understandable. As in [1], and other papers that use information-theoretical arguments, we use a certainty-equivalence principle, where we use the current plug-in estimator of the quantity of interest. In our case, we use the $Q$-values of the greedy policy, learnt through off-policy learning, to derive $\pi_t^\star$, the current estimator of the greedy policy at time $t$. > ...some intuitive high-level explanation of their implications would be really helpful for the reader to follow along the thought process of the authors. We understand that the theoretical density of the paper can be challenging to navigate. We will include in the revised version a more intuitive high-level explanation of the concepts and their implications to assist readers in grasping our thought process. For completeness, we briefly summarize the general idea as follows: for an MDP $M$, the quantity $T_\varepsilon(\omega)$ represents the characteristic time, i.e., it characterizes the sample complexity of estimating the optimal policy using an exploration policy $\omega$. Therefore, the minimum value $\min_\omega T_\varepsilon(\omega)$ yields the characteristic time of the lowest sample complexity that one can possibly achieve.
Unfortunately, computing the optimal exploration policy $\arg\min_\omega T_\varepsilon(\omega)$ amounts to solving a non-convex problem (see [1] for an example). To address this challenge, the main idea is to find a convex upper bound of $T_\varepsilon(\omega)$, which we call $\bar U(\omega)$, and then compute the minimizer $\bar \omega^\star=\arg\min_\omega\bar U(\omega)$ (for example, using Corollary 5.1 to obtain a closed-form solution). We can then use $\bar \omega^\star$ to explore an environment. We would like to thank you again for your review, and for acknowledging the novelty and soundness of our model-free approach to exploration in RL. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for their response. Your comments and pointers to the right sections in the appendix for proofs helped clarify some questions, but keep in mind that readers will not be able to ask where to find the derivation of different theorems. The paper with its appendix is over 50 pages long, so it is unreasonable to expect a reader to dig through it and find them. I remain positive about the paper, but I think you can improve readability. I suggest editing the paper so that, for every theorem/derivation, you point to the exact appendix section where the proof can be found. Some of those might even make sense to bring into the main text. --- Reply to Comment 1.1.1: Comment: Thank you for being positive about the paper and for your constructive feedback. Given the depth of content, we understand your concerns about navigating the manuscript. In our revised version, we will ensure that each theorem or significant point in the main text directly references the corresponding section in the appendix, making the manuscript more accessible.
Moreover, based on the feedback, we will also evaluate whether certain crucial derivations or explanations should be moved from the appendix to the main body to enhance clarity (especially intuitive high-level explanations of the theoretical results and their implications, as also suggested by another reviewer). Once again, thank you for your feedback.
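The stationary-distribution computation underlying the projection mentioned in the rebuttal (`compute_stationary_distribution`) can be sketched as follows; this is an illustrative NumPy sketch for an ergodic chain, not the authors' code:

```python
import numpy as np

def compute_stationary_distribution(P):
    # Stationary distribution pi of a row-stochastic transition matrix P,
    # i.e. the solution of pi P = pi with the entries of pi summing to 1.
    # Solved as an (overdetermined but consistent) linear system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

Given the transition kernel induced by a candidate allocation, such a stationary distribution is what a projection onto the navigation constraints has to respect.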
Summary: This paper introduces an approximation of a lower bound on the number of samples needed to identify a nearly optimal policy, directly applicable to model-free RL. The authors further propose a model-free exploration strategy that can be applied to tabular and continuous MDPs. Strengths: The paper is clear and well written, with experiments showing that the proposed approach is competitive with other exploration approaches. Weaknesses: The paper lacks a comparison of the current approach with more common exploration strategies in Deep RL, such as [34, 37]. The results remain rather toy-like. It would be great to have an understanding of the kinds of exploration problems on which the proposed approach performs well (e.g., with increasingly sparse rewards) and in which circumstances it is preferable. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Why have you used the 2k-th moments in the bound? Can the authors clarify which equations Compute Allocation, Training, and Estimate Minimum Gap refer to? How is the mask probability being computed? Are the results sensitive to this parameter? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors could provide a more thorough description of the limitations of the approach: under which circumstances does the proposed approach fail, and how critical and reasonable are the assumptions made (for instance, communicating MDPs)? How does the approach perform in more challenging tasks, such as Montezuma's Revenge, and in more structured exploration tasks? Under which types of exploration difficulty does the proposed approach excel or fail?
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and positive feedback on our paper. We appreciate the constructive suggestions and have taken them into consideration. We address your questions and discuss the perceived weaknesses below. > Why have you used the 2k-th moments in the bound? There are multiple reasons for the use of the 2k-th moments in the bound: (1) first, we wanted to find an alternative bound to [1] that did not involve the span of an MDP, since this quantity is difficult to estimate online (it involves an $\infty$-norm) and may not characterize the difficulty of learning the MDP very well; (2) secondly, we found that the span of an MDP can be lower bounded by the square root of the variance of the value function (Lemma C.1 in the appendix, line 969). This prompted us to find an alternative bound, through an alternative proof technique (Lemma C.10 is the main tool, line 1156 in the appendix), that led us to the refinement of the bound in [1]. This refinement allows us to characterize the bound in terms of the $2k$-th moment of the value function at each state-action pair, which can be estimated online through stochastic approximation. > Can the authors clarify which equations Compute Allocation, Training, and Estimate Minimum Gap refer to? How is the mask probability being computed? Are the results sensitive to this parameter? For your question about which equations Compute Allocation, Training, and Estimate Minimum Gap refer to, please see Algorithm 6 in the appendix, page 38. Regarding the mask probability $p$, it is a user-chosen hyper-parameter. It draws similarity to classical bootstrapping, and it can speed up the learning process. The higher the value of $p$, the less accurate the characterization of the parametric uncertainty of the $Q,M$-values will be, while a smaller value of $p$ may compromise exploration efficiency.
While we also discuss the algorithm in more detail in the appendix, we will make sure to include a more detailed explanation of the masking probability in the revised version of the manuscript. > The paper lacks a comparison of the current approach with more common exploration strategies in DeepRL such as 34, 37 Unless we misunderstood the reviewer's comment, we do compare with the bootstrapped technique from [34, 37, 38]. In the numerical results it is called BSP (Bootstrapping with additive prior [38], which is an advancement on classical Bootstrapped DQN). We mainly focused on comparing with other information-theoretical strategies for a fair comparison. > It would be great to have an understanding of the kind of exploration the proposed approach performs well, with increasingly sparse rewards? on which circumstances is this preferable. From the numerical results, we see that the strategy performs well in environments with sparse rewards. In general, we expect good performance in sparse-reward environments due to the characterization of the parametric uncertainty through bootstrapping with random prior functions. This technique, combined with our proposed bound, allows us to derive effective exploration strategies that only explore where needed to learn the optimal value function. However, since the exploration strategy is guided by theory, it relies on some assumptions, such as having a communicating MDP. Therefore, it is possible that one may need to tailor it depending on the type of problem. For example, one may not need to explore at every step, but may pair the exploration policy with another policy depending on the problem. Finally, we remark that we discuss limitations, including basic assumptions and requirements, in the appendix (right after the table of contents, page 17). We thank the reviewer once again for your valuable feedback and for your positive endorsement of our paper's contributions.
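The online estimation of value-function moments via stochastic approximation discussed in this rebuttal can be sketched as follows; this is an illustrative generic stochastic-approximation update, not the paper's exact $M$-value rule:

```python
def m_value_update(m, v_next, v_mean, k=1, lr=0.1):
    # One stochastic-approximation step toward the 2k-th central moment
    # of the next-state value; k = 1 recovers the variance. `v_next` is a
    # sampled next-state value and `v_mean` the current value estimate.
    target = (v_next - v_mean) ** (2 * k)
    return m + lr * (target - m)
```

Repeating this update with samples drawn from the MDP drives the estimate toward the desired moment, which is why these quantities can be estimated online without a model.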
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work focuses on exploration in reinforcement learning and introduces a novel model-free algorithm. The authors derive a new bound on the lower bound for the number of samples needed to identify a near-optimal policy. Based on that, they develop a model-free exploration strategy that is applicable to both tabular and continuous MDPs. Experimental results demonstrate that their strategy is competitive with existing approaches in efficiently identifying optimal policies. Strengths: - Theoretically, the newly proposed bounds look novel and sound. They transform the original lower bound into a more manageable form, making it easier to handle and apply in practical scenarios. - Empirically, the experiments confirm the superiority of the proposed algorithm over existing methods, aligning with their claims. Weaknesses: - There seems to be a big gap between the theory and the algorithm. The theoretical results involve certain quantities (e.g., $\Delta(s,a)$) that are unknown to the algorithm. The authors addressed this issue by approximating these quantities through Q-value (and M-value) learning, subsequently treating the approximated Q and M as ground truth for all computations. This introduces a substantial gap between the theory and the algorithm. - It would improve the paper if the authors could provide an intuitive explanation of the terms in the complicated expressions presented in Theorems 4.1 and 4.2. Understanding and evaluating these results as a whole seems challenging without a better explanation. - Although I admit that the main contributions lie in the theoretical aspects, it would be better to conduct more complicated experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Considering the gap between the theory and the algorithm mentioned above, is there any convergence guarantee for the algorithm? - When the authors select $\bar k=1$ (line 211) and arrive at $\bar{U}^1_{\epsilon}$, does it remain an upper bound for $T_\epsilon$?
- I was a bit confused about the main idea. The authors first proposed an upper bound $\tilde{U}$ for the well-established lower bound $T_0$, and then proceeded to carry out all computations based on this new bound. Although I understand that it is their point to derive this approximation that makes the bound easier to handle, it remains unclear to me why this approach is rational. Typically, one derives a more manageable upper bound on another, harder upper bound, or a more manageable lower bound on another, harder lower bound. Then one just needs to minimize the new bound, because in doing so, the original bound is indirectly minimized (or maximized). However, this paper proposed an upper bound for the lower bound, making the aforementioned relationship inapplicable. Therefore, I am curious what the underlying principle here is. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative societal impact was identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and constructive feedback. We appreciate the time you took to review our paper and will address each of your comments individually. > When the authors select $\bar k=1$ (line 211) and arrive at $\bar U_\epsilon^1$, does it remain an upper bound for $T_\epsilon$? Concerning your question about $\bar k=1$ and whether $\bar U_\epsilon^1$ remains an upper bound for $T_\epsilon$, the answer is no. Note that for each state-action pair $(s,a)$ the optimal value of $\bar k$ may differ. In principle, it is possible to extend our algorithm to account for this, at the cost of an increased computational complexity that does not seem worthwhile given the numerical results we obtained. Lastly, the choice of $\bar k=1$ follows simply from the scaling argument that we outline in the paper. > Considering the gap between the theory and the algorithm mentioned above, is there any convergence guarantee for the algorithm? Regarding the convergence guarantee for our algorithm, we understand your concern. In this context, asymptotic almost-sure convergence of the $Q,M$-values is guaranteed if we mix the allocation with an $\epsilon$-soft policy (see, for example, Algorithm 6, line 6 in the `EstimateAllocation` function, page 38 of the appendix; similarly for the tabular case). However, compared to [1,28], it is hard to derive a sample complexity upper bound due to the various approximations. It is not within the scope of this paper, but we believe that, in the absence of approximations, we could apply similar ideas from [1] and [28] to find a sample complexity upper bound. Finally, since the $Q,M$-values are all that is needed to compute the optimal allocation of Corollary 5.1, asymptotically we can still converge a.s. to the minimizer of $\bar U_\varepsilon$.
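The mixing with an $\epsilon$-soft policy mentioned above can be sketched as follows (a self-contained illustration with hypothetical array shapes, not the actual code from the repository):

```python
import numpy as np

def eps_soft_mix(omega, eps):
    """Mix an exploration allocation with the uniform policy.

    `omega` is a hypothetical (n_states, n_actions) array of per-state
    action probabilities.  Mixing guarantees every action keeps a
    probability of at least eps / n_actions ("forced exploration");
    decreasing eps over time lets the mixture converge to omega.
    """
    n_actions = omega.shape[1]
    return (1.0 - eps) * omega + eps / n_actions

omega = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
pi = eps_soft_mix(omega, eps=0.2)   # [[0.82, 0.18], [0.5, 0.5]]
```

Every entry of `pi` stays bounded away from zero, so all actions keep being sampled while the estimates converge.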
Lastly, regarding your concern about the gap, it is common in information-theoretical algorithms of this type to use the current estimate according to a certainty-equivalence principle (see, for example, [1,28] and other papers on best-arm identification [17,22]). > I was a bit confused about the main idea. [...] Therefore, I am curious what the underlying principle here is. To clarify our approach of deriving an upper bound for the lower bound: we acknowledge that the derivation is not straightforward. The idea is as follows: for an MDP $M$, the quantity $T_\varepsilon(\omega)$ represents the characteristic time, i.e., it characterizes the sample complexity of estimating the optimal policy using an exploration policy $\omega$. Therefore, the minimum value $\min_\omega T_\varepsilon(\omega)$ yields the characteristic time of the lowest sample complexity that one can possibly achieve. Unfortunately, this minimum cannot be easily computed since it involves solving a non-convex problem (see [1] for an example). It is then not possible to find a non-trivial smaller bound that is convex, unless it works only for a certain type of MDP. However, it is possible to find a convex upper bound of $T_\varepsilon(\omega)$, which we call $\bar U(\omega)$, and then compute the minimizer $\bar \omega^\star=\arg\min_\omega\bar U(\omega)$. How far this new minimum is from the minimum of $T_\varepsilon$ is still an open question (and quite difficult to answer). > It would improve if the authors could provide an intuitive explanation of the terms in the complicated expressions presented in Theorems 4.1 and 4.2. Your suggestion to provide a more intuitive explanation for the complicated expressions in Theorems 4.1 and 4.2 is well taken. Using the explanation of the main idea in the paragraph above, we can now see how $\bar U(\omega)$ is a convex upper bound of $T(\omega)$ that we try to minimize. This upper bound is characterized by some quantities, e.g.
$\Delta(s,a)$ and $M_{sa}^{\bar k}[V^\star]$. The former represents the sub-optimality gap (the hardness of learning the optimal action in a given state), while the latter represents a sort of variance of the value function in the next state (to be precise, the $2k$-th moment), and thus a sort of "difficulty" in learning the optimal policy by looking at future trajectories (it measures the uncertainty in the value estimate due to the aleatoric nature of the MDP). In comparison, [1] found a characterization also based on the span of the MDP, which is, however, not necessary in our formulation. To summarize, if we think of $\bar U$ as the characteristic time of the MDP, we see that the sample complexity is completely characterized by the sub-optimality gaps and the $2k$-th moment of the value function in each state. > Although I admit that the main contributions lie in the theoretical aspects, it would be better to conduct more complicated experiments. As for conducting more complicated experiments, we understand the reviewer's concern. However, note that we used hard-exploration environments that are also used by other practitioners when evaluating exploration algorithms (see, for example, [39], or the bsuite library, which proposes the DeepSea and Cartpole swing-up problems to assess exploration properties). Furthermore, we used a more difficult version of the classical DeepSea environment (because of the slipping probability), and evaluated all environments at various difficulty levels. Thank you again for your review, which will greatly help us improve our paper. We are pleased that you acknowledged the novelty and soundness of our proposed bounds and how they transform the lower bound into a more manageable form. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed feedback. It addressed most of my concerns.
However, I am not certain about the "certainty-equivalence principle" mentioned by the author, so I have reservations about the gap between the theory and the algorithm (as raised in the first weakness). In addition, I am still worried about the third question (the reason for deriving an upper bound for the lower bound). I can see that the distance between the new minimum and the true minimum is usually hard to estimate. However, if we are deriving a lower bound for the lower bound, then it is fine, because we usually don't need to care about the distance between them if we can improve the new lower bound. Nevertheless, since the authors are deriving an upper bound for a lower bound, if we can't estimate the distance between them, there is not any guarantee for the original problem. Hence, I am not sure how the proposed results can be applied. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback on our paper. We understand your concerns and will attempt to elucidate further. > However, I am not certain about the "certainty-equivalence principle" mentioned by the author Using the certainty-equivalence principle means that we use the current estimates (of the sub-optimality gap and the variance of the Q-function) as if they were exact when computing the exploration policy. This does not create a gap between the theory and the algorithm, because in our algorithm we mix the computed exploration policy with an $\epsilon$-soft policy, where the parameter $\epsilon$ is decreased over time (see, for example, Lemma B.1 in the appendix for the rate at which $\epsilon$ is decreased). In the code that we provided, you can see how we mix the allocation $\omega$ with this $\epsilon$-soft policy (e.g., in the tabular case, for MF-BPI, check the `forward` function in `RiverSwim/agents/mfbpi.py`, lines 68 and 69).
We first bootstrap the $Q,M$-values using the ensembles (to account for the parametric uncertainty), compute the exploration policy according to Corollary 5.1, and then mix with an $\epsilon$-soft policy (see also lines 266-267 in the main paper). By doing this, we ensure enough exploration so that the estimation error vanishes with time (the estimates converge asymptotically). Mixing with an $\epsilon$-soft policy is often referred to as “forced exploration” in the literature. We will make sure to clarify this point in the paper. Certainty-equivalence principles are common in the RL literature, see e.g. [1, 17, 22, 28, 49, 51, 52, 53]. We have not explained all of this in detail, but we will add a more exhaustive discussion in the revised version of the manuscript. > I am still worried about the third question (the reason for deriving an upper bound for the lower bound). I can see that the distance between the new minimum and the true minimum is usually hard to estimate. However, if we are deriving the lower bound for the lower bound, then it is fine because we usually don't need to care about the distance between them if we can improve the new lower bound. Nevertheless, since the authors are deriving an upper bound for a lower bound, if we can't estimate the distance between them, there is not any guarantee for the original problem. Hence, I am not sure how the proposed results can be applied. The true lower bound (see, e.g., (1)) specifies the *minimum* amount of exploration needed to identify an approximately optimal policy with some level of certainty. The lower bound is information-theoretical and cannot be beaten by any PAC algorithm. Hence, one cannot explore less, because this would imply that we could fail at identifying an approximately optimal policy. In other words, an algorithm starting from a lower bound of the lower bound would not enjoy any performance guarantee, because it would explore less than needed.
This is why we use the upper bound of the lower bound. In this way, we ensure that we identify an approximately optimal policy, but at the cost of "over-exploring" a bit, at a rate corresponding to the gap between the upper bound and the true lower bound. This approach is not unique to our study and has been adopted in several works such as [1,28,53]. Guarantee-wise, we show in the paper that our bound obtains a scaling that is comparable to, if not better than, the minimax lower bound, as explained after Theorem 4.2 (please see also the related work section, where we discuss the minimax lower bound). We believe this underpins the validity of our approach in the context of the problem that we study. We're always open to continued dialogue to improve and refine our work further. Thank you for your time and dedication to reviewing our paper.
null
null
null
null
null
null
$\textbf{A}^2\textbf{CiD}^2$: Accelerating Asynchronous Communication in Decentralized Deep Learning
Accept (poster)
Summary: This work discusses the challenges and potential solutions related to training complex Deep Neural Networks (DNNs), particularly regarding the computational and communication demands. The traditional synchronous, centralized approaches to DNN training, while widely used, face limitations in terms of efficiency and scalability, which can be tackled by distributed training methods. Asynchronous and decentralized methods are suggested as they allow for more efficient parallelization of computations and communications, using time-delay fluctuations between workers. Such methods eliminate the need for a central worker to aggregate results, allow nodes to contribute in proportion to their available resources, and use peer-to-peer communication to streamline training. However, the complexity of these methods and the large number of parameters in DNNs still pose considerable communication challenges. The authors address these issues by introducing a novel acceleration method, A2CiD2 (Accelerating Asynchronous Communication in Decentralized Deep Learning), specifically for peer-to-peer DNN training. This method uses pair-wise gossip acceleration, which is largely unexplored for deep learning, and is supported by the analytical framework of Stochastic Differential Equations (SDEs). It decouples computations and communications, requiring minimal overhead and enhancing communication rates. Key contributions of this work include extending the asynchronous decentralized deep learning training framework to non-convex settings, proposing the A2CiD2 mechanism for improving communication efficiency, and minimizing the gap between centralized and decentralized settings in environments with up to 64 asynchronous GPUs. The method has been implemented in PyTorch and will be released as open-source upon publication. 
Strengths: The strengths of this paper are as follows: 1: Within the realm of deep learning, the widespread use of High-Performance Computing (HPC) technology has made it possible to achieve exceptional performance in a synchronous and centralized environment. However, not everyone has the means to afford such costly setups, making the development and evaluation of distributed asynchronous algorithms crucial. The authors propose a cost-effective method that significantly enhances communication speed. 2: The A2CiD2 algorithm effectively minimizes the discrepancy between centralized and distributed setups. It functions efficiently in environments hosting up to 64 asynchronous GPUs. This characteristic enhances the algorithm's scalability, rendering it suitable for large-scale machine learning tasks. 3: This study broadens the analytical framework for investigating the design and convergence of these algorithms in non-convex settings. It provides new insights into asynchronous distributed deep learning training. 4: The method proposed substantially improves communication efficiency in distributed learning environments. This could assist in addressing common challenges, such as the straggler problem, synchronization between computation and communication, and bandwidth limitations. Weaknesses: The weaknesses of this paper can be outlined as follows: 1: While the theoretical analysis provided in Section 3.3 is well substantiated, its tightness with respect to the upper bound remains ambiguous. Further explanation is needed on how the asymptotic convergence corresponds with the experimental results for relatively smaller sizes, like those discussed in Section 4. 2: The study mentions that the proposed A2CiD2 algorithm has been tested with up to 64 asynchronous GPUs. While this is quite remarkable, it may not comprehensively represent the broad array of hardware configurations and scales found in real-world applications. 
A broader evaluation of the method across different scales and architectures would be advantageous. 3: The paper lacks a clear comparison with other existing distributed training methods, be they synchronous or asynchronous, in terms of computational cost, communication overhead, and performance. Such a comparison could help determine whether the proposed method truly surpasses other methods and under which specific conditions. 4: The paper asserts that training Deep Neural Networks (DNNs) with decentralized methods necessitates substantial communications due to the large quantity of optimized parameters. It remains unclear whether the proposed acceleration method sufficiently addresses this issue and how it compares with other techniques that aim to reduce communication overhead. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In addition to the issues delineated as weaknesses above, I would be grateful if you could respond to the following queries: 1: In Section 3.1, "Model for a Decentralized Environment," you presuppose the symmetry of communication. Could you elaborate on this assumption and explicate its implications for your model? 2: In Section 3.3, "Theoretical Analysis of A2CiD2," you propose a Poisson process. Nevertheless, the validity of this assumption remains nebulous. Could you furnish some justification or explanation for this assumption? 3: In Section 4, "Numerical Experiments," you cite the utilization of 64 A100 GPUs. Could you provide more details about the network configuration they are connected to, such as Infiniband? The performance of the network itself is likely to affect the results of this experiment, and this information would help elucidate the experimental setup. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A : No pertinent content is found in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer LKYk for highlighting the need for cost-effective methods enhancing communication speed in distributed training, and recognizing that our method is a step towards rendering large-scale training possible in this setting by substantially improving communication efficiency while providing new insight into asynchronous distributed deep learning training. **Weaknesses:** 1) Lower bounds to minimize the sum of functions with randomized algorithms exist [ [1]( https://arxiv.org/pdf/1605.08003.pdf ), [2](https://arxiv.org/pdf/1805.10222.pdf) ] and we are probably not tight in that sense. However, we still provide a SOTA communication complexity, even compared to accelerated *synchronous* methods that use Chebyshev acceleration, as our rate depends on the maximal effective resistance of the graph instead of the spectral gap (see, e.g. [[3]]( http://proceedings.mlr.press/v202/nabli23a/nabli23a.pdf ) ). Note that, if we put $f=0$, then the problem reduces to satisfying the consensus constraint and our algorithm meets the fastest known rate for gossip algorithms [[7]]( https://proceedings.neurips.cc/paper/2021/file/ec26fc2eb2b75aece19c70392dc744c2-Paper.pdf ). Finally, remark that convergence rates in the smooth case are often put as a sanity check in the deep learning literature as they are vacuous in practice [[8]]( http://proceedings.mlr.press/v97/arora19a/arora19a.pdf ). 2) We must stress that we work in **academia** with a **publicly funded cluster**. Thus, while we agree that verifying experimentally that our method is amenable to a wide range of hardware settings would be of interest, this is reasonably beyond the reach of the material capacity of our academic setting. Note that despite those constraints, our numerical experiments use a number of workers of the order of state-of-the-art work [[9]]( http://proceedings.mlr.press/v97/assran19a/assran19a.pdf ).
The open-source release of our code is planned to allow such experiments for other actors with more compute resources. In terms of network architecture, we will also report additional experiments with Vision Transformers shortly (experiments still ongoing at the time of writing), but we expect similar behavior for these models. 3) We aim at reducing the communication cost of **asynchronous** methods, because they have the potential to be faster than synchronous ones due to the removal of wait barriers. While bringing many practical advantages in the large-scale distributed setting, asynchronous algorithms also lead to specific technical challenges compared to synchronous approaches. As such, the closest method to which we can reasonably compare is AD-PSGD [ [4] ]( http://proceedings.mlr.press/v80/lian18a/lian18a.pdf ), which we have done. We stress that no other existing decentralized asynchronous method displays accelerated rates of communications in the training of neural networks: AD-PSGD [ [4] ]( http://proceedings.mlr.press/v80/lian18a/lian18a.pdf ) has a complexity depending on the *spectral gap* $\omega$, whereas ours depends on $\sqrt{\chi_1 \chi_2}$, which is better (typically, for the cycle graph, $1/\omega = \mathcal O (n^2)$ while $\sqrt{\chi_1 \chi_2} = \mathcal O (n)$). 4) We believe that any method taken alone would probably not be sufficient to scale arbitrarily, and that our method is another tool to reduce the communication cost of the decentralized asynchronous training of neural networks. As we use a local momentum, our method is completely orthogonal to schemes aiming at reducing the bandwidth requirements through compressed or quantized communications, which can be added on top of $A^2CiD^2$. **Questions:** 1) The symmetry of communications means that our communication network is undirected: when an edge $(i,j)$ “spikes”, both nodes $i$ and $j$ send their parameters to the other, such that both can compute the *average* of $(x_i, x_j)$.
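As a toy illustration of this symmetric pairwise averaging, here is a self-contained simulation of randomized gossip under Poisson clocks on the complete graph (illustrative only, not our training code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)   # one scalar "parameter" per worker
mean0 = x.mean()         # pairwise averaging preserves the global mean

t = 0.0
for _ in range(2000):
    t += rng.exponential(1.0 / n)                 # exponential inter-event times
    i, j = rng.choice(n, size=2, replace=False)   # a uniformly random edge "spikes"
    x[i] = x[j] = 0.5 * (x[i] + x[j])             # both endpoints keep the average

# The workers reach consensus on the initial global mean.
```

Each averaging step is mass-preserving, so the consensus value is the initial mean of the parameters, while the disagreement between workers contracts geometrically.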
While our *implementation* of the method is indeed symmetric for simplicity’s sake, what our theoretical analysis says is a bit more subtle: we only require that the *directed* and *expected* rates of communications between any two workers ($i$ to $j$ and $j$ to $i$) are the same. Thus, our algorithm could also work with non-symmetric communication schemes (i.e., push or pull methods). We stress that, in practice, symmetric communications are often assumed for *asynchronous* schemes (see, e.g. [ [4] ]( http://proceedings.mlr.press/v80/lian18a/lian18a.pdf ) ). 2) The Poisson modeling of gossip algorithms is standard in decentralized peer-to-peer networks *(see, e.g., the seminal paper of Boyd et al. [[5]](https://web.stanford.edu/~boyd/papers/pdf/gossip.pdf) )*, so we follow this literature. To gain more insight, this modeling considers that any individual link in the network has a fixed bandwidth, so the time between two communications is always *roughly* the same, and variations are taken into account by making this delay stochastic (following an exponential law). However, we agree that this model is not perfect, and some refinements have been made to move closer to reality (see, e.g. [ [6]](https://arxiv.org/pdf/2106.03585.pdf) ). 3) We will add the following details to the paper: all our experiments are done on a cluster with 8 A100 GPUs per node using an Omni-Path interconnection network at 100 Gb/s. In all our experiments, one worker amounts to one GPU (and not one node). --- Rebuttal Comment 1.1: Comment: Thank you for your prompt responses, despite the limited time frame. I grasped the main points from the replies, but they didn't address my questions. It would have been helpful had you clearly outlined the limitations from the outset. I found there to be more limitations than I had initially anticipated. The asymptotic behavior of the algorithm for very large data sizes and its practical behavior for small, limited data sizes should be discussed systematically.
The dominant terms might vary depending on whether the data size is large or small. While it's reasonable to assume that the algorithm's theoretical properties contribute to its positive experimental outcomes, various factors influence the calculations. Regarding Weakness 2, the objective isn't to conduct numerous experiments across multiple architectures, but to derive a broader spectrum of insights and predictions with a limited set of experiments in various environments. This topic is frequently broached in fields like HPC (High Performance Computing), making such discussions quite beneficial. If there's a model addressing computation, data, and communication quantities, a more comprehensive discussion and set of findings could emerge. I understand the challenge of presenting extensive information within a paper of limited length. However, after revisiting the experimental section, it seems, from a reader's perspective, to be primarily a performance evaluation within a constrained setting. Nonetheless, I'm not disputing the value of your method. --- Reply to Comment 1.1.1: Comment: Thank you very much for participating in the discussion. * There seems to be some misunderstanding about the theoretical analysis. *We don't adopt an asymptotic approach*, as we are not examining the scenario where "t" tends towards positive infinity. Instead, we focus on a quantitative approach, considering specific time steps “t”. Additionally, it's important to note that *the sizes of the data/datasets do not influence or factor into our convergence analysis*. We believe our algorithm's theoretical attributes are well mirrored by our experimental outcomes. If the AC allows us to, we will illustrate this match between our theoretical convergence rate and the observed one with a Figure (this is not surprising given that communications here are "convex," capitalizing on the corresponding theory). 
We kindly ask the reviewer to clarify what additional factors precisely they believe could be better addressed or clarified in the theoretical analysis. * We’re slightly unclear on the expectations of the reviewer, as our work is built upon the standard norms in the community, both from the theoretical point of view (see [1,2,3,4,5,7], even for the Poisson Process use), and from the experimental set-up perspective ([5,6,7,8] leading to our implementation using 64 GPUs, ResNet model, CIFAR10/ImageNet datasets with the objective to maintain performance under the decentralized constraints). It would be very helpful if the reviewer could provide specific references. We thank you in advance for any further clarifications. [1] *Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks*, ICML 2017 [2] *Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization*, NeurIPS 2020. [3] *DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization*, ICML 2023. [4] *A Unified Theory of Decentralized SGD with Changing Topology and Local Updates*, ICML 2020 [5] *Stochastic Gradient Push for Distributed Deep Learning*, ICML 2019 [6] *Don’t use large mini-batches, use local SGD*, ICLR 2020 [7] *Asynchronous Decentralized Parallel Stochastic Gradient Descent*, ICML 2018 [8] *Consensus control for decentralized deep learning*, ICML 2021
Summary: This work, A2CiD2, proposes a novel decentralized asynchronous training method that incurs only minimal overhead but effectively decouples communications and computations, accelerating pair-wise communications via a provable, accelerated, randomized gossip procedure based on continuous momentum and time. A2CiD2 is also compatible with other asynchronous approaches. Across ResNet benchmarks on image classification, A2CiD2 is able to outperform AllReduce-SGD and AD-PSGD on a cluster of 64 asynchronous workers with A100 GPUs, using various communication network topologies. Strengths: +. Proposed a decentralized asynchronous training algorithm that outperforms the SOTA approaches +. Demonstrated the advantage of the proposed approach both theoretically and empirically +. Provided code for reproducibility Weaknesses: -. Writing needs improvement. Hard to follow. -. Missing benchmarks: 1. only ResNet is evaluated (e.g., ResNet18 and ResNet50); how about more models like RNNs and larger models like GPT2? 2. only image classification tasks are evaluated; how about more tasks like language modeling? -. Missing details in experiments: 1. what is the network type and bandwidth? 2. how many GPUs per machine? -. Missing evaluation: 1. what is the time-to-accuracy or time-to-loss speedup for A2CiD2 over AllReduce-SGD and AD-PSGD? 2. what is the performance of A2CiD2 with a random stranger in the cluster? Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See Weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer fQ1m for stressing that our method outperforms SOTA approaches, both theoretically and empirically. We emphasize that this paper is **not** about implementation tricks but rather about *fundamental research* on new strategies to speed up asynchronous training. Indeed, our work addresses the important question of understanding how to accelerate communications in an asynchronous setting. **Weaknesses:** * We corrected a few typos and drafting errors in the revised version of our paper to clarify the writing. To mention a few, a spell-checker only found about 20 of those, located at the following lines: *(l36: communications -> communication), (l45: gossips->gossip),(l56: allow to->allows us to), (l58: communication -> the communication), (l62: improve -> improves) (l66: time of -> the time of), (l73: independant -> independent), (l94: exists -> exist), (l103: to use strong -> to use of ), (l135: Next sections -> The next sections), (l159: allow to -> allow us to), (l176: step -> steps), (l186: technique degrade accurracy -> techniques degrade accuracy), (l199: adaptation -> adaptations), (l210: networks parameters-> network parameters), (l233: each others -> each other), (l238: allows -> allows us), (l280: finally-> final), (l281: the Fig. -> Fig.)* However, we believe this minor weakness does not diminish the significance of our scientific contribution. * For the image classification task, we will also report additional experiments with Vision Transformers shortly (experiments still ongoing at the time of writing), but we expect similar behavior for these models. We emphasize that our evaluation is standard in the literature (see [ [1](https://arxiv.org/pdf/1808.07217.pdf), [2](https://arxiv.org/pdf/1811.10792.pdf) , [3]( http://proceedings.mlr.press/v80/lian18a/lian18a.pdf ) ]). Moreover, we are unfortunately not able to train large language models such as GPT2 for lack of the right computing infrastructure.
The open-source release of our code is planned, allowing other actors with more compute resources to run such experiments. * We will add the following details to the paper: all our experiments are done on an **academic** cluster with 8 A100 per node using an Omni-Path interconnection network at 100 Gb/s. In all our experiments, one worker amounts to one GPU (and not one node). * Thank you for raising interest in these metrics; we will provide additional figures in the revised version of our paper. In the meantime, we report some preliminary figures *(in the pdf attached to the global rebuttal)* showing that applying $A^2CiD^2$ is indeed worthwhile in terms of time. * We are not sure we understand the question. In the “complete graph” setting, our asynchronous algorithm acts as follows: to reduce latency, the first two workers (i.e., GPUs) in the whole pool that declare they are ready to communicate (i.e., finished their previous communication) are paired together for a p2p communication. In practice, as each worker has to perform a random number (following a Poisson law) of communications between two gradient computations, this means that the pairs are completely random (no pair of workers can repeatedly synchronize at the same time). We verify that, indeed, during the course of a training run, each edge in the complete graph appears roughly the same number of times *(see the pdf attached to the global rebuttal)*. We’d be happy to provide further explanations if required. While our experiments run in a High Performance Computing setting, we must stress that they are mainly done to **simulate** a decentralized network (e.g., one connected through the internet) and to highlight the potential of using $A^2CiD^2$ to reduce the communication cost in situations where it is a major bottleneck to train neural networks. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal with a detailed explanation.
The authors have addressed my concerns to some extent through the response, so I will raise the score by one level. --- Reply to Comment 1.1.1: Comment: Thank you for participating in the discussion! While we recognize the significance of LLMs in this context (which, we acknowledge, is a crucial concern though not yet a standard and widely established baseline in distributed literature, see e.g. [ [1](https://arxiv.org/pdf/1808.07217.pdf), [2]( http://proceedings.mlr.press/v80/lian18a/lian18a.pdf ) ]), we would like to inquire if you perceive any other gaps or areas that might require further elaboration. We’d be happy to offer additional explanations and insights as needed. Thanks.
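The random pairing rule described in the rebuttal above (each worker draws a Poisson number of p2p communications to perform between gradient steps, and ready workers are matched pairwise) can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' implementation: the function names, the uniform-random matching of ready workers, and the rate of 2 communications per gradient step are all assumptions.

```python
import math
import random
from collections import Counter

def sample_poisson(rng, lam):
    # Knuth's algorithm; the stdlib `random` module has no Poisson sampler.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def simulate_pairing(n_workers=8, n_pairings=20000, rate=2.0, seed=0):
    """Toy model of the matching rule: each worker draws a Poisson number of
    communications to perform between two gradient steps; among workers that
    still hold communication budget, a pair is matched uniformly at random
    (our stand-in for 'the first two workers that declare themselves ready')."""
    rng = random.Random(seed)
    budget = [sample_poisson(rng, rate) for _ in range(n_workers)]
    edge_counts = Counter()
    done = 0
    while done < n_pairings:
        ready = [i for i in range(n_workers) if budget[i] > 0]
        if len(ready) < 2:
            # Workers that exhausted their budget take a gradient step
            # and draw a fresh communication budget.
            budget = [b if b > 0 else sample_poisson(rng, rate) for b in budget]
            continue
        i, j = rng.sample(ready, 2)
        edge_counts[(min(i, j), max(i, j))] += 1
        budget[i] -= 1
        budget[j] -= 1
        done += 1
    return edge_counts
```

Over a long run, every one of the 28 edges of the complete graph on 8 workers accumulates a comparable count, consistent with the uniformity check reported in the rebuttal.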
Summary: This paper proposes an asynchronous gossip-based algorithm for decentralized deep learning using a continuous momentum. Experiments on real datasets are used for evaluation. Strengths: 1. The studied problem of accelerating communication in decentralized deep learning is interesting. 2. The proposed algorithm accelerates the asynchronous baseline both theoretically and empirically. Weaknesses: 1. The writing can be improved. There are many typos and grammatical errors. Furthermore, there are many informal phrasings, such as “We demonstrate its efficiency theoretically and numerically; empirically on the ring graph…” in the abstract. 2. The proposed method needs to store one more copy of the model compared with the baselines. Hence, it may not be suitable for large models. 3. The experimental results are not convincing. The main baseline for comparison is All-Reduce SGD, which has very high communication cost. Many sophisticated decentralized methods exist and should be adopted for comparison. 4. This work only considers undirected network topology, which means that the communications are symmetric. But in recent years, directed topology has attracted more and more attention. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is the formula between Line 139 and Line 140 correct? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that reviewer o6Xd finds the problem we study interesting and recognizes that our method accelerates previous algorithms both theoretically and empirically. We stress that our method is especially suited to large-scale training of deep neural networks: the advantage of **asynchronous** algorithms grows with scale, and further accelerating them is not trivial. **Weaknesses:** 1) We acknowledge that a few typos and drafting errors slipped through, which we will correct in the final version of this work. A spell-checker found only about 20 of those, located at the following lines: *(l36: communications -> communication), (l45: gossips -> gossip), (l56: allow to -> allows us to), (l58: communication -> the communication), (l62: improve -> improves), (l66: time of -> the time of), (l73: independant -> independent), (l94: exists -> exist), (l103: to use strong -> to use of), (l135: Next sections -> The next sections), (l159: allow to -> allow us to), (l176: step -> steps), (l186: technique degrade accurracy -> techniques degrade accuracy), (l199: adaptation -> adaptations), (l210: networks parameters -> network parameters), (l233: each others -> each other), (l238: allows -> allows us), (l280: finally -> final), (l281: the Fig. -> Fig.).* However, we believe this minor weakness does not diminish the significance of our scientific contribution. We clarify the abstract as follows: *"Our theoretical analysis proves accelerated rates compared to previous asynchronous decentralized baselines and we empirically show that adding $A^2CiD^2$ has the same effect as doubling the communication rate on the ring graph."* 2) We respectfully disagree with this statement. First, remark that *any* algorithm that uses SGD with momentum also doubles the number of parameters in memory: that is the cost of storing the momentum variable in the optimizer.
Thus, our method is completely analogous to a standard momentum in that sense. Moreover, in large models such as Transformers, the main memory cost does not reside in storing the parameters, but rather in the memory requirements for storing the activations and computing the gradients (see, e.g., [ [1](https://arxiv.org/pdf/2111.11124.pdf), [2](https://arxiv.org/pdf/2205.05198.pdf) ]). As our method only uses the second set of parameters as a *"memory bank"* (our second "model" does not perform any computation: neither forward nor backward), it comes at virtually no cost in terms of memory (the memory for activations in the original model dominating the storing of a second set of parameters). 3) The main baseline for comparison we use is AD-PSGD [ [3](http://proceedings.mlr.press/v80/lian18a/lian18a.pdf) ]: this is our “asynchronous baseline” in Tab. 2 and Tab. 4, which is quite standard and challenging to beat. Indeed, our primary objective is to minimize the communication overhead of asynchronous techniques, which are faster than synchronous methods as they aim at removing wait barriers from their implementation. However, we introduce All-Reduce SGD as a benchmark, establishing a target performance that an effective (asynchronous or synchronous, centralized or decentralized) method should achieve. While bringing many practical advantages in the large-scale distributed setting, asynchronous algorithms also lead to specific technical challenges compared to synchronous approaches. We stress that no other existing decentralized asynchronous method displays accelerated rates of communication in the training of neural networks, and that any other scheme to reduce communication cost (e.g., compressed or quantized communications) can be added on top of our method. If we missed some, do you have in mind other **decentralized asynchronous** baselines to which we can compare? 4) We respectfully disagree: using asymmetric communication can easily be done in our case.
Indeed, while the *implementation* of our method is symmetric for simplicity’s sake, what our theoretical analysis says is more subtle: we only require that the *directed* and *expected* rates of communication between any two workers ($i$ to $j$ and $j$ to $i$) are the same. Thus, our algorithm could also work with non-symmetric communication schemes (i.e., push or pull methods). We stress that, in practice, symmetric communications are often assumed for *asynchronous* schemes (see, e.g., [ [3](http://proceedings.mlr.press/v80/lian18a/lian18a.pdf) ]). **Questions:** The formula is indeed correct; it is a very standard re-formulation of the decentralized optimization problem using the consensus constraint (see, e.g., eq. 10 in [ [4](https://arxiv.org/pdf/1702.08704.pdf) ] or eq. 2 in [ [5](https://proceedings.neurips.cc/paper_files/paper/2020/file/d530d454337fb09964237fecb4bea6ce-Paper.pdf) ]). We’d be happy to provide further explanations if needed. --- Rebuttal Comment 1.1: Title: How does the theoretical communication cost compare to the synchronous algorithm Comment: Given that all workers have the same computation time, the asynchronous case reduces to the synchronous case. The question is then how the theoretical communication cost compares to the synchronous case [1], which gives the optimal communication bound for decentralized algorithms. [1] Optimal Complexity in Decentralized Training, ICML 2021 --- Reply to Comment 1.1.1: Comment: Thank you so much for your active participation in the discussion! From a theoretical perspective, we have made significant advancements over works similar to [[1]](https://arxiv.org/pdf/2006.08085.pdf), which exclusively focus on deterministic (and synchronous) algorithms.
In [[1]](https://arxiv.org/pdf/2006.08085.pdf), the Laplacian matrix $\Lambda$ is derived from doubly-stochastic gossip matrices, ensuring $\Vert \Lambda\Vert=1$ and guaranteeing that every edge spikes precisely once per communication round (typically resulting in $\chi_2\leq 1$; refer to Prop 3.9 of [[2]](http://proceedings.mlr.press/v202/nabli23a/nabli23a.pdf) for an in-depth discussion). Generally, deterministic algorithms establish bounds that depend on the spectral gap $\rho = \Vert \Lambda \Vert \chi_1$, and can be accelerated up to $\sqrt{\rho}$. Our novel stochastic (and asynchronous) algorithm, $A^2CiD^2$, goes beyond this by providing the potential to achieve a dependency of $\sqrt{\chi_1\chi_2}\leq \sqrt{\rho}$ theoretically (see the star-graph case, for which we are strictly better, and refer to Prop 3.9 and Tab. 2 of [[2]](http://proceedings.mlr.press/v202/nabli23a/nabli23a.pdf)). To our knowledge, we are the first to achieve such communication bounds for an algorithm applicable to Deep Neural Networks. From a practical standpoint, synchronous algorithms must *wait for the slowest worker at each step*. Asynchronous algorithms come to the rescue by allowing *each worker to operate at its own pace*. Even though they maintain a similar *total* amount of computation (the training stops when a *total number of gradient steps* has been reached), slower workers contribute less, while faster ones contribute more.
When compared to the synchronous case, our basic implementation necessitates less communication and less time (table for ImageNet training with 64 workers):

| method | Time (min) | # grad (slowest worker) | # grad (fastest worker) |
|-----------------------|------------|-------------------|-------------------|
| AR-SGD (Pytorch DDP) | 1.7 $10^2$ | 14k | 14k |
| AD-PSGD | 1.5 $10^2$ | 13k | 14k |
| AD-PSGD w/ $A^2CiD^2$ | 1.5 $10^2$ | 13k | 14k |

Consequently, we demonstrate that the **asynchronous scenarios outperform the synchronous one**: $A^2CiD^2$ represents a pioneering, accelerated, and asynchronous algorithm, grounded in both theory and practicality. We firmly believe that this contributes significantly to the advancement of the field. [1] *Optimal Complexity in Decentralized Training*, ICML 2021. [2] *DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization*, ICML 2023. --- Rebuttal Comment 1.2: Title: Thanks for the response Comment: Thank you for the response. The authors have addressed most of my concerns. Hence, I raise my score by one level. The issue about the experiments has not been addressed: AD-PSGD is relatively outdated. More advanced baselines should be adopted for comparison, such as DADAO (ICML 2023) and other methods cited in the reference list. DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization, ICML 2023. --- Reply to Comment 1.2.1: Title: AD-PSGD is the state-of-the-art baseline in asynchronous decentralized DNN training Comment: Unfortunately, DADAO is designed and evaluated for the convex case and is not suitable for training DNNs, as it relies on the saddle points of a Lagrangian, which has no equivalent in Deep Learning. Indeed, our preliminary experiments for this paper found that DADAO obtained poor performance for DNNs in practice, and it was excluded from further consideration.
To the best of our knowledge, AD-PSGD stands as the state-of-the-art baseline for asynchronous decentralized DNN training algorithms (see also our reply to reviewer fkqi); we thus ask the reviewer to reconsider their assessment regarding the strength of the baselines.
Summary: This work introduces a new method for decentralized optimization that leverages the notion of continuous momentum to speed up its convergence. The method is justified with theoretical analysis and large-scale experiments on the ImageNet dataset. Strengths: * The work studies an important problem of distributed training over communication-constrained networks * The authors' results have both theoretical justification and empirical validation in a large-scale setting * Overall, the approach is clearly explained and the contributions are easy to understand Weaknesses: * The primary disadvantage of the proposed method is its memory overhead. Having an additional copy of model parameters on each worker is quite expensive for models where the memory footprint of the parameters dominates that of the activations (in particular, for Transformer models). As a result, it might be quite difficult to apply A$^2$CiD$^2$ to models besides convolutional networks (for example, to train modern language models), which are a highly popular area of application for distributed training * I feel that the statement in L241 needs to be clarified. All-Reduce methods indeed require more *connections* with the growth in the number of workers, but the total amount of bandwidth is asymptotically independent of the network size for methods like Ring All-Reduce * From my understanding, momentum acceleration for decentralized DL has already been studied in the past (e.g., [1], which has been cited in the submission), although the underlying frameworks are clearly different. I think this work needs to more explicitly distinguish their approach from QG-DSGDm and other results (e.g., [2]), and ideally include those methods as an additional baseline on top of AD-PSGD [1] Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data. Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi.
ICML 2021 [2] SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum. Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael Rabbat. ICLR 2020 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * In L197, what is the connection speed between the nodes? * What was your motivation for choosing the ring graph topology in the experiments? Is this topology frequently used in practice for peer-to-peer networks? * Is it correct that you consider standard All-Reduce to be a centralized method? (e.g., L80) From my understanding, there is no central worker involved when aggregating updates with this family of algorithms Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I did not see any explicit discussion of limitations of the work: I would be happy if they mentioned the applicability of their method to models with a larger parameter count (for example, Transformers with hundreds of millions or billions of parameters). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer fkqi for acknowledging the importance of the problem studied and remarking that our theoretical and practical contributions are clear. Among those, we would like to emphasize that obtaining an accelerated rate of communication in the **asynchronous** setting is non-trivial, and mostly ignored in the literature. We thus consider our work to be an important contribution to the field. **Weaknesses:** *1) Memory overhead of an additional copy of the model dominates the activations memory* We respectfully disagree with this statement. As their activation memory usually scales with the *square of the sequence length*, the activation memory requirements of Transformers dominate the memory footprint, which has led many works to consider new methods to reduce this activation cost (see, e.g., [ [1](https://arxiv.org/pdf/2111.11124.pdf), [2](https://arxiv.org/pdf/2205.05198.pdf) ]). We emphasize that our method only uses the second set of parameters as a *"memory bank"*: our second "model" does not perform any computation (no forward nor backward), and only one optimizer is needed (see eq. 4 and l.9 of Algorithm 1). Therefore, our method needs the memory for only one set of activations and gradients. Thus, it comes at virtually no cost in terms of speed (no additional heavy compute) and no cost in terms of memory (the memory for activations dominating the storing of parameters). Finally, remark that *any* algorithm that uses SGD with momentum also doubles the number of parameters in memory: that is the cost of storing the momentum variable in the optimizer. Thus, our method is completely analogous to a standard momentum in that sense. *2) Bandwidth considerations of All-Reduce and Ring All-Reduce* Thank you for pointing that out; we will clarify our statement.
We are indeed interested in reducing the latency induced by each "communication act" between workers, and as stated in the Ring All-Reduce paper [ [3](https://www.cs.fsu.edu/~xyuan/paper/09jpdc.pdf) ]: *"One limitation of the proposed algorithm is that it is only optimal in the bandwidth term, but not the latency term: the number of communication rounds is proportional to the number of processes."* An orthogonal (but crucial) line of work [ [8](https://arxiv.org/pdf/1610.02132.pdf), [9](https://arxiv.org/pdf/1802.04434.pdf) ] is indeed to consider lowering the bandwidth in addition to the total number of communications, using compression schemes for example, with which our method can independently be combined. Moreover, we stress that Ring All-Reduce is **synchronous**, adding undesirable barriers that we remove by focusing on asynchronous methods. *3) Distinction from QG-DSGDm, SlowMo* * QG-DSGDm [ [4](https://arxiv.org/pdf/2102.04761.pdf) ] introduces a momentum to lower the complexity of **synchronous rounds of computations-communications** in the **heterogeneous** setting. We are rather interested in explicitly lowering the **communication complexity** in the **asynchronous** and **homogeneous** setting. Moreover, in addition to being synchronous, their method requires gradients to be computed **before** communicating, meaning the latencies for computations and communications are **added**, whereas, being decoupled, our method allows running both processes **in parallel.** * SlowMo [ [5](https://arxiv.org/pdf/1910.00643.pdf) ] is also a **synchronous** algorithm. We must emphasize that we study an **asynchronous** algorithm. This brings many practical advantages in the large-scale distributed setting, but also leads to specific technical challenges (for example, the Chebyshev scheme widely used to accelerate communications in decentralized methods is, by nature, synchronous and sequential).
As such, the closest method to which we can compare is AD-PSGD [ [6](http://proceedings.mlr.press/v80/lian18a/lian18a.pdf) ], which we have done. *We stress that no other existing decentralized asynchronous method displays accelerated rates of communication in the training of deep neural networks*. **Questions:** * The cluster we used has an *Omni-Path interconnection network at 100 Gb/s*. We will add this point in the main text, thanks. * Our work aims at studying the impact of the network’s connectivity (as measured by $\chi_1, \chi_2$) on the communication complexity in the decentralized training setup, and showing that $A^2CiD^2$ allows us to reduce it. As the ring graph corresponds to one of the worst-case settings, it is standard to use it in the decentralized literature, see, e.g., [ [4](https://arxiv.org/pdf/2102.04761.pdf), [7](https://arxiv.org/pdf/1705.09056.pdf) ]. * While it is technically true that there is no central worker performing the averaging in the *implementation* of modern All-Reduce methods, we still denote them as centralized as they require the computation of a *global* variable using information from *all* the workers. Put simply, although there are multiple ways to implement it, the requirement for waiting barriers and synchronized communication rounds corresponds exactly to a centralized framework. **Limitations:** We emphasize that our evaluation is standard in the literature (see [ [10](https://arxiv.org/pdf/1808.07217.pdf), [11](https://arxiv.org/pdf/1811.10792.pdf), [6](http://proceedings.mlr.press/v80/lian18a/lian18a.pdf) ]). While we agree that verifying experimentally that our method scales to training models with billions of parameters would be of interest, we are unfortunately not in the material capacity to do so, lacking the right computing infrastructure: we must stress that we work in **academia** with a **publicly funded cluster**.
The open-source release of our code is planned, allowing other actors with more compute resources to run such experiments. However, we will also report additional experiments with Vision Transformers shortly (experiments still ongoing at the time of writing), and we expect a similar behavior for these models.
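The memory-footprint argument above (a momentum buffer already doubles parameter memory, and the extra parameter copy is a pure "memory bank" with no compute) amounts to simple buffer counting. The sketch below is illustrative arithmetic under an assumed parameter count, not the authors' code:

```python
def parameter_buffers(n_params, n_extra_buffers):
    """Floats held per worker: the model's parameters plus any same-shaped
    auxiliary buffers (momentum, Adam moments, or an extra parameter copy)."""
    return n_params * (1 + n_extra_buffers)

n = 25_600_000  # roughly ResNet-50-sized model (illustrative assumption)

sgd = parameter_buffers(n, 0)            # parameters only
sgd_momentum = parameter_buffers(n, 1)   # + momentum buffer
adam = parameter_buffers(n, 2)           # + first and second moment buffers
extra_copy = parameter_buffers(n, 2)     # momentum buffer + parameter copy

# SGD + momentum + one extra parameter copy matches Adam's footprint,
# and none of these buffers incur forward/backward compute.
```

None of these buffers change the activation memory, which is what dominates for large Transformer models, as argued in the rebuttal.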
Rebuttal 1: Rebuttal: We thank the reviewers for recognizing that our method accelerates previous state-of-the-art **asynchronous decentralized** methods both theoretically and empirically (reviewers fkqi, o6Xd, fQ1m), which is a good step towards addressing common challenges of large-scale training of deep neural networks (reviewer LKYk). We would now like to address frequently raised comments, which we believe will clarify any previous concerns: * **The memory footprint of our method is not high.** Our method is completely analogous to a standard momentum: **any** algorithm that uses SGD with momentum also doubles the number of parameters in memory: that is the cost of storing the momentum variable in the optimizer. As for any momentum term, no additional heavy computation is performed with these parameters (no forward nor backward), making the cost in time and memory negligible for large models. For instance, *our method (SGD+momentum+$A^2CiD^2$) has the same memory footprint as Adam* (as it has 2 momentum variables). * **Our academic cluster is not able to train models with billions of parameters, and our experiments are standard.** We are *academic* researchers using a *publicly funded* cluster: we were surprised that reviewers requested us to train GPT-like models. Moreover, training ResNets on CIFAR10 and ImageNet is standard in the decentralized training literature (see [ [1](https://arxiv.org/pdf/1808.07217.pdf), [2](https://arxiv.org/pdf/1811.10792.pdf), [3](http://proceedings.mlr.press/v80/lian18a/lian18a.pdf) ]). * **Cluster details.** All our experiments are done on a cluster with 8 A100 per node using an Omni-Path interconnection network at 100 Gb/s. In all our experiments, one worker amounts to one GPU (and not one node). We will add these details to our paper.
* **A general remark on Communication Acceleration.** Before this work, communication acceleration - enhancing the communication rate from $\gamma$ (the spectral gap of the matrix) to $\sqrt{\gamma}$ - was (almost) exclusively achieved via Chebyshev acceleration in the context of training DNNs. Thus, most previous studies do not dwell on this, as it simply corresponds to replacing a gossip matrix $W$ with $P(W)$, where $P$ is the corresponding Chebyshev polynomial. However, Chebyshev acceleration is inherently synchronous and sequential. Such a process is thus absolutely undesirable in an asynchronous environment, underscoring the significance of our contribution. *It is essential for reviewers to grasp this distinction and recognize that, prior to our work, standard gossip algorithms that simply perform an averaging between node parameter values were the norm.* Thus, we believe our work to be an important contribution to the literature of asynchronous training procedures for DNNs. Pdf: /pdf/65bf13d7e0d43e1681d9ce567ca185df2084a987.pdf
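For intuition on why accelerating the rate from $\gamma$ to $\sqrt{\gamma}$ matters, the spectral gap of a standard lazy gossip matrix on a ring can be computed in closed form. This is generic textbook material, not the authors' code, and the matrix weights (self-weight 1/2, each neighbour 1/4) are a conventional assumption rather than the paper's exact operator:

```python
import math

def ring_spectral_gap(n):
    """Spectral gap 1 - lambda_2 of the lazy gossip matrix on an n-node ring
    (self-weight 1/2, each neighbour 1/4). The matrix is circulant, so its
    eigenvalues are 1/2 + (1/2) * cos(2*pi*k/n) in closed form; the second
    largest corresponds to k = 1."""
    return 1.0 - (0.5 + 0.5 * math.cos(2 * math.pi / n))

for n in (16, 64, 256):
    gap = ring_spectral_gap(n)
    # Rounds to average scale like 1/gap without acceleration,
    # but only like 1/sqrt(gap) with an accelerated scheme.
    print(f"n={n:4d}  1/gap={1/gap:10.0f}  1/sqrt(gap)={1/math.sqrt(gap):7.1f}")
```

The gap decays like $\pi^2/n^2$, so on large rings the gain from moving to a $\sqrt{\gamma}$ dependency grows linearly with the number of workers.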
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Finding Counterfactually Optimal Action Sequences in Continuous State Spaces
Accept (poster)
Summary: The authors present a method for finding c/f (optimal) action sequences in sequential decision making problems with uncertainty, with the novelty that they consider continuous state dynamics. They apply their method to the interesting setting of sepsis treatment at the end of the paper and therein demonstrate impressive results for their method (compared to the actual, observed, action sequences). Strengths: Please see the Questions section for the full review. Weaknesses: Please see the Questions section for the full review. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: ## Abstract - I re-read the abstract multiple times and I unfortunately struggle with the first half. Consider changing the sentence construction or just simplifying the first half. It is very unclear what you are trying to say and I fear that the positioning of the paper suffers because of it. One suggestion could be to anchor the paper with one or two real examples and then take the justification from there. - The first paragraph of the introduction does a much better job of framing your paper; I would suggest adapting that approach for the first half of the abstract. ## Introduction - Spell out ICU before you use it; it is a common-enough term in the West but may be unfamiliar to a non-native speaker (or indeed anyone who does not frequently use English). - Excellent second paragraph. - Line 49: an SCM does not have the concept of state or transitions, merely a distribution P() over the endogenous and exogenous variables in the model (as defined originally by Pearl). Consequently, how you have chosen to introduce it here does not quite follow. Consider re-phrasing. Moreover, SCMs are, as originally defined, state-less representations of a causal environment.
- It would be helpful to the reader and yourselves if you could show us (the readers) a quadrant of work which has considered MDPs with: (continuous states, discrete actions), (continuous states, continuous actions) … and so on — so that we understand what gap you are filling with this work (certainly you spell it out, but I would argue that a quadrant would be more useful and visually more powerful). - Line 60: please explain what a bijective SCM is. It is not enough to simply cite the original paper if it is going to be key to your method. ## A Causal Model of Sequential Decision Making Processes - Line 76: presumably $a_t \in \mathcal{A} = \mathbb{Z}_+$? - How come you are not considering the discounted cumulative reward? - I think you should differentiate your setting more than you currently do. At present it sounds as if eq (2) is part of the standard SCM definition. Perhaps introduce a definition paragraph for your SCM? That way the reader is clear that you are formalising a new concept and there is no ambiguity between yours and Pearl’s definition. - I would like to see a much longer discussion on why you make the causal sufficiency assumption (Line 97) in this work (no unobserved confounders). It is a common assumption as you say, but merely saying that other works make the same assumption is not good enough justification to make it a tenable assumption. It is a huge assumption and highly unrealistic in most realistic scenarios. It makes life computationally far easier but trades off usability down the line. Consequently, please discuss why you make this assumption. - Line 104: the do() expression does not have to constitute a hard (atomic) intervention; it can also be a soft intervention (and others). Please consider or at least mention these settings too and why you settle for hard interventions. - Definition 1: is this definition somehow different from the standard one?
Perhaps you ought to also give it a source, since this is common fare but it may differ from the one readers are used to (and this would also clarify whether it is the standard definition or not). - The paragraph at the bottom of page 3 is great, a very interesting application of Lipschitz continuity to a real problem. - Define the indicator function in equation 5. - Lines 149-161: I wonder if it may not be better for you to formally have a small lemma here, possibly even a theorem, explicating the results for which you have a proof. It currently reads as if it were merely a by-product of your method, whereas I imagine it is rather more important than that? ## Problem statement - I would recommend revising the structure of your paper given that the problem statement appears on page four at a conference with a page limit of nine. The paper will read far better if you frame the problem early on, allowing the reader to understand your angle of attack from the very start. - Line 163: you already introduced the episode on line 78. - Where is it practically relevant to know that eq 9 is NP-hard? A trivial question no doubt, but I would be interested to hear what the authors have in mind with regards to this theorem (is it really a theorem?) ## Finding the optimal counterfactual action sequence via A* search - Line 210: I think a diagram here would be very helpful, explaining algorithm 1; a lot of page six could gainfully be explained with a figure rather than paragraphs of (rather dense) text. ## Experiments using clinical sepsis management data - Looking at figure 2(a) I would have thought that as k increased we would have seen a much larger increase in the y-axis, but this does not seem to be the case; the increase is fairly modest. Can you explain? - According to panel 2(b), 50% of patients see less than or equal to ~5% c/f improvement? Is that correctly read?
- I enjoyed this section and the example is good, but it is long and arduous, and we have no evidence to suggest that your method works in more (even synthetic) settings. It is an old trope of reviewing that we (reviewers) always want more experiments, but in this case I think it is a fair request, simply because I am struggling to understand how your ideas generalise across different settings and applications (even synthetic); nor do we have a comparison against other methods (are there any against which you can gainfully compare?)
 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Please see the Questions section for the full review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and insightful comments, which will help improve our paper. Please find a point-by-point response below. **[Miscellaneous comments on presentation]** We would like to thank the reviewer for all their concrete suggestions regarding the presentation/organization of the paper. We will address all of them in the revised version of the paper. **[Line 49]** We would like to clarify that we do not refer to states or transitions as concepts inherent to the original SCM definition. As lines 48-49 mention, we *use* an SCM (as originally defined by Pearl) to *represent* the stochastic state transitions of a sequential decision making environment. For more details, please refer to our response to **[SCM definition]**. **[Bijective SCM]** Please note that we formally define what a bijective SCM is in Definition 2 (lines 136-138). **[Line 76]** $\mathcal{A}$ is a discrete *finite* set of actions, not the set of positive integers. **[Discounted cumulative reward]** Since we focus on a finite horizon setting, we think it is more meaningful to consider the undiscounted cumulative reward. **[SCM definition]** We would like to highlight that, in lines 82-84, where we first introduce the notion of an SCM, we do use Pearl’s (general) definition, i.e., that an SCM consists of four parts: (i) a set of endogenous variables, (ii) a set of exogenous (noise) variables, (iii) a set of structural equations assigning values to the endogenous variables, and (iv) a set of prior distributions characterizing the exogenous variables. Subsequently, in lines 84-92, we formally introduce a particular type of SCM $\mathcal{C}$ that we use to represent sequential decision making processes—an instance of Pearl’s general definition.
In our SCM, each state $S_t$ and each action $A_t$ in a finite sequence of actions and states $\\{S_t, A_t\\}_ {t=0}^{T-1}$ corresponds to a different endogenous variable (see lines 84-86), the variables $\\{Z_t, U_t\\}_ {t=0}^{T-1}$ are exogenous variables characterized by prior distributions $\\{P^\mathcal{C} (Z_t), P^\mathcal{C} (U_t)\\}_ {t=0}^{T-1}$ (see lines 88-89 & 91-92), and the value of each variable $A_t$ ($S_t$) is given by a structural equation $g_A$ ($g_S$), as stated in Equation 1 (2). In the revised version of our paper, we will rephrase lines 82-84 to clarify that they refer to the general definition of an SCM given by Pearl. **[No unobserved confounders]** We agree with the reviewer that the assumption of no unobserved confounding may be more or less realistic, depending on the exact domain and application at hand. In our work, our focus is on developing a method to solve the problem of finding counterfactually optimal action sequences in continuous state spaces. Since this problem has not been studied before, for simplicity, we have decided to tackle the problem under the assumption that there are no unobserved confounders, just as others in the literature have done when addressing a problem for the first time. However, we do not think that assumption nullifies our contribution, and we hope that future research will build upon our work to develop methods that work under unobserved confounders. Following the reviewer’s advice, in the revised version of our paper, we will include a separate paragraph at the end of Section 2, where we will discuss the implications of this assumption and highlight related work that focuses on environments where unobserved confounding exists (currently discussed in Appendix A in the supplementary). **[Hard interventions]** We settle on hard interventions because they better fit the problem formulation we consider.
In the revised version of the paper, we will clarify that it would be interesting to consider variations of our problem using soft interventions. **[Definition 1]** To the best of our knowledge, the specific definition of Lipschitz-continuity for the type of SCMs we consider has not been used before. **[NP-hardness]** It is practically relevant to know that Eq. 9 is NP-hard because it tells us that there is no hope of designing a polynomial time algorithm that solves the problem we focus on. Since the problem has not been proven to be NP-hard before and the reduction we use to prove NP-hardness is novel, we think it is reasonable to include such a result as a Theorem. In the revised version of our paper, after Theorem 3, we will include a paragraph where we will discuss the importance and implications of the Theorem in the design of our algorithm. **[Figure 2(a)]** Figure 2(a) shows that, as $k$ increases, the marginal gains that could have been achieved in terms of the total counterfactual reward are diminishing. That finding follows our intuition since it indicates that, in retrospect, a small number of actions in each episode had the most significant effect on the episode’s outcome. **[Figure 2(b)]** The reviewer reads panel 2(b) correctly: 50% of patients see less than or equal to ~5% c/f improvement. As discussed in lines 348-349, this indicates that the treatment choices made by the clinicians for most of the patients were close to optimal, even with the benefit of hindsight. **[Other experimental settings, other methods]** As an implication of Theorem 6, our method is guaranteed to *always* find the optimal solution to the problem we study, and we do not have reasons to expect that it would perform poorly in a different dataset or application.
That said, we would like to share that we did consider performing synthetic experiments prior to the submission of the paper, and we chose not to go forward because we believe there are no additional insights one could gain about our method in a synthetic environment, other than the ones already presented in the paper. Due to the rebuttal's space constraints, we kindly ask the reviewer to refer to the response **[Lack of baselines]** to Reviewer **R23o** for more details re. other methods (or lack thereof) to compare our method against. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I am happy to raise my score but will note that I do not think the authors have it right on the unobserved confounder assumption. Whilst I accept the logic for making this assumption here, in the first pass at the problem, it is and remains a significant assumption that rarely holds true save for toy problems. But I agree with the authors that this assumption, nonetheless, does not nullify your contribution. Again, well done, it is a fine paper. --- Reply to Comment 1.1.1: Title: Thank you for engaging in the discussion Comment: We would like to thank the reviewer for engaging in the discussion and for updating their score. When revising the paper, we will make sure to highlight the assumption of no unobserved confounders and its implications, as mentioned in the rebuttal.
Summary: This paper tackles the problem of finding counterfactual action sequences in sequential decision making problems. The main difference to previous methods is that this paper regards a continuous state space instead of a discrete one, which renders previous solution methods infeasible. The authors formalize the problem as an SCM and show that the solution in general is NP-hard. As a solution, they present a new method based on the A-star algorithm, which yields an efficient solution for many problems. They evaluate the method on clinical data. Strengths: The paper is well-written and easy to follow. The problem of efficiently finding counterfactual action sequences for the continuous state space seems new to me and relevant to the community. Even if the SCM formulation closely follows the work of Tsirtsis et al. (2021), a similar dynamic programming approach would require enumerating all possible action sequences. From my point of view, the main contribution lies therefore in the A-star-based search method (and the anchor set selection), which requires further technical details such as bijectivity and Lipschitz continuity of the SCM for the heuristics. Therefore, the technical contribution seems good to me. The method is evaluated with regard to the influence of different parameters. Weaknesses: The worst-case complexity of the method is the same as for brute-force search. The efficiency of the method depends on the number of Monte Carlo samples in the anchor set. It is not clear to me which conditions the problem must satisfy in order for the method to work efficiently and when it falls back to worst-case complexity. Minor (typos): line 190 comma before "such that" Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How important are the assumptions that the SCM is bijective and Lipschitz-continuous? You need this assumption for the bounds in your algorithm. Can you still approach the problem if one condition is violated?
How much do you think the conditions restrict the field of applications of your method? Any thoughts about the conditions of the problem under which the method works efficiently? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations and directions for future work are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and insightful comments, which will help improve our paper. Please, find a point-by-point response below. **[Efficiency of our method]** The efficiency of our method depends on the tightness of the bounds $\hat{V}_ {\tau}$, which depends (i) on the number of the Monte Carlo samples in the anchor set, and (ii) on the Lipschitz constant of the SCM. Therefore, in general, our method will work more efficiently the lower the Lipschitz constant of the SCM, as shown in Figure 1(a). However, one cannot just decrease the Lipschitz constant arbitrarily to increase efficiency because this would degrade the goodness of fit of the SCM with respect to observational data. In Figure 4 in Appendix F, we investigate the goodness of fit of the SCM under different values of the Lipschitz constants. **[Bijectivity and Lipschitz-continuity]** The assumptions that the SCM is bijective and Lipschitz continuous are both necessary to conclude that the solution returned by our algorithm is optimal and that our algorithm is efficient. Since the bijectivity assumption is satisfied by many classes of SCMs studied in the literature, as discussed in lines 132-135, and the Lipschitz continuity is a natural assumption, as discussed in lines 122-131, we do not think these two assumptions significantly restrict the field of applications of our method. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I still think this is a good paper and therefore keep my original score for acceptance.
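The bound-tightening mechanism described in this rebuttal (tighter heuristics from more anchor points and a smaller Lipschitz constant) can be illustrated with a minimal sketch. This is not the paper's Lemma 4, only the generic principle behind it: if a function $V$ is Lipschitz with constant $L$, each anchor point $s_j$ with known value $V(s_j)$ yields the upper bound $V(s) \le V(s_j) + L \lVert s - s_j \rVert$, and the minimum over the anchor set is the tightest such bound. The function `V`, the constant `L`, and the anchor set below are all hypothetical stand-ins.

```python
import numpy as np

def lipschitz_upper_bound(s, anchors, anchor_values, L):
    """Upper bound on V(s) from anchor points, valid for any Lipschitz constant L of V."""
    dists = np.linalg.norm(anchors - s, axis=1)
    return float(np.min(anchor_values + L * dists))

rng = np.random.default_rng(1)
V = lambda x: np.sin(x[0]) + 0.5 * x[1]   # toy function; its gradient norm is <= sqrt(1.25)
L = 1.2                                   # a valid (i.e., not underestimated) constant

anchors = rng.uniform(-2, 2, size=(500, 2))   # stand-in for the Monte Carlo anchor set
values = np.array([V(a) for a in anchors])

s = np.array([0.3, -0.7])
ub_many = lipschitz_upper_bound(s, anchors, values, L)
ub_few = lipschitz_upper_bound(s, anchors[:10], values[:10], L)

assert ub_many >= V(s)   # the bound never underestimates, so the heuristic stays admissible
assert ub_many <= ub_few # a larger anchor set can only tighten the bound
```

The two assertions mirror the rebuttal's points: validity of the bound depends on not underestimating $L$, while its tightness (and hence search efficiency) improves with more anchors and with a smaller $L$.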
Summary: The paper tackles the problem of finding optimal action sequences in domains with continuous state spaces with counterfactual reasoning. They propose an A*-based search approach and show the efficacy of the approach on a clinical sepsis management problem. Strengths: * The paper focuses on a very significant research problem * The experimental evaluation on a real-world clinical dataset demonstrates the strong utility of the approach Weaknesses: * I have some concerns regarding the causal d-separation assumption (see Q1 & Q2). Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: 1. The graphical implication of an intervention on a variable $X$ (i.e., $do(X=x)$) is that all the incoming edges to $X$ are removed. In other words, as we are intervening on $X$, the influence from $PA_X$ (the parent nodes of $X$) on the node $X$ becomes zero. Refer to the paragraph after Def 3.2.1 in Chapter 3 of Pearl. In Equation 3, it seems that all the arrows pointing out of $A_t$ are removed to adjust for $do(A_t = a_t)$ (as mentioned in line 106). So, the assumed d-separation does not follow from the do-calculus. 2. Similar d-separation assumptions are made in Eq. 4 and 6. 1. Equation 1 suggests that two variables have a parent-child relationship: $S_t \rightarrow A_t$. 2. Equation 2 suggests that these variables form a collider: $S_t \rightarrow S_{t+1} \leftarrow A_{t}$. 3. Performing a do-operation on $A_t$ would remove the parent-child relationship but still maintain the collider $S_t \rightarrow S_{t+1} \leftarrow A_{t}$. So, $S_{t+1}$ and $A_t$ are not d-separated by $S_t$. 3. What are the implications of modifying the d-separation assumption on the proposed algorithms? 4. How was the generated counterfactual action sequence evaluated? The SOFA score requires vital signs for its calculation; how were the vital signs obtained? 5. Figure 1 needs a legend. It is not clear what the pink & green lines represent. 6.
Did any physician analyze the counterfactual action sequences? Was any physician involved at any stage of the process? 7. I recommend the authors provide a reference table for notations in the appendix; it was really difficult to keep track of all the notations used in the paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 1 poor Contribution: 2 fair Limitations: Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and insightful comments, which will help improve our paper. Please, find a point-by-point response below. **[d-separation]** Although the reviewer is correct that, in general, a do() operation on a variable (in our case, $A_t$) removes all *incoming* edges to that variable (in our case, $\mathbf{S}_ t \rightarrow A_t$), we respectfully disagree that Eq. 3 does not follow from the do-calculus. On the contrary, the first equality of Eq. 3 is a direct application of the 2nd rule of the do-calculus (action/observation exchange), as given in Theorem 3.4.1 of Pearl [8]. Specifically, the rule (applied to our case) states that one can write the (observational) probability $p^\cal{C}(\mathbf{S}_ {t+1} = \mathbf{s} | \mathbf{S}_ t = \mathbf{s}_ t, A_t = a_t)$ as an interventional probability $p^{\cal{C} ; do(A_t=a_t)} (\mathbf{S}_ {t+1} = \mathbf{s} | \mathbf{S}_ t = \mathbf{s}_ t)$ and vice versa if $\mathbf{S}_ {t+1}$ is conditionally independent of $A_t$ given $\mathbf{S}_ t$ in the graph resulting after deleting all *outgoing* edges of $A_t$. Note that this is equivalent to what lines 104-106 of our submission state—in the graph $A_t \leftarrow \mathbf{S}_ t \rightarrow \mathbf{S}_ {t+1}$, where the original outgoing edge $A_t \rightarrow \mathbf{S}_ {t+1}$ is deleted, $\mathbf{S}_ t$ acts as a confounder and conditioning on it d-separates $A_t$ and $\mathbf{S}_ {t+1}$. In the revised version of our paper, we will include a figure illustrating the causal graph, and we will rephrase lines 104-106 to clarify this point. In light of the aforementioned explanation, we do not see any technical issues with Eqs. 3, 5 and 6, and we would be grateful if the reviewer could reconsider their score. 
**[Evaluation based on SOFA score]** As discussed in lines 344-346, the generated counterfactual action sequences were evaluated based on the counterfactual improvement they would have provided according to the SCM $\mathcal{C}$, i.e., the relative decrease in cumulative SOFA score between the counterfactual and the observed episode. The SOFA score and the eight vitals required to compute it form the dimensions of the state vector, as stated in lines 305-307, and their counterfactual values are given by the (trained) SCM. **[Figure 1]** To avoid occlusions, we specify what the pink and green lines represent in the caption of Figure 1. The pink line represents the effective branching factor (EBF) and the green line represents the average runtime of the A* search (in seconds). Here, we also used matching colors in the left and right y-axis of each panel to further indicate that the pink line corresponds to EBF and the green line corresponds to A* average running time. **[Evaluation by physicians]** The counterfactual action sequences were not analyzed by physicians for the purposes of this submission. In our work, we focus on formalizing and tackling algorithmically the problem of finding counterfactually optimal action sequences for episodes of sequential decision making processes with continuous states. However, as discussed in lines 370-373, performing a user study with a systematic evaluation of counterfactual action sequences by human experts is a very interesting direction for future work. **[Notation]** Following the reviewer’s advice, in the revised version of our paper, we will include a reference table for notations in the Appendix. --- Rebuttal Comment 1.1: Title: D-separation Comment: Thank you for the clarification on the D-separation assumption. That was my only major concern. As this is addressed, I will update my score accordingly.
--- Reply to Comment 1.1.1: Title: Thank you for engaging in the discussion Comment: We would like to thank the reviewer for engaging in the discussion and for updating their score.
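The d-separation point resolved in this thread (deleting the outgoing edge $A_t \rightarrow \mathbf{S}_{t+1}$ leaves $A_t \leftarrow \mathbf{S}_t \rightarrow \mathbf{S}_{t+1}$, where conditioning on $\mathbf{S}_t$ d-separates $A_t$ and $\mathbf{S}_{t+1}$) can be checked numerically on a toy linear-Gaussian stand-in for that graph fragment. All coefficients below are invented for illustration; for Gaussian variables, zero partial correlation is equivalent to conditional independence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy graph fragment: S0 -> A and S0 -> S1, with the edge A -> S1 present in
# the full graph and deleted in the mutilated graph of do-calculus rule 2.
S0 = rng.normal(size=n)
A = 0.8 * S0 + rng.normal(size=n)   # the action depends on the current state

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

S1_cut = 0.5 * S0 + rng.normal(size=n)             # outgoing edge A -> S1 deleted
S1_full = 0.5 * S0 + 0.7 * A + rng.normal(size=n)  # edge A -> S1 kept

pc_cut = partial_corr(A, S1_cut, S0)    # near zero: S0 d-separates A and S1
pc_full = partial_corr(A, S1_full, S0)  # clearly nonzero: the edge A -> S1 remains
```

The first quantity vanishing (up to sampling noise) while the second does not mirrors the precondition of rule 2 (action/observation exchange) that the authors invoke.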
Summary: This paper studies a question that is very natural for a sequential decision maker to ask itself: "how could I best improve the return I obtained in a trajectory by only changing a fixed (k) number of actions from the sequence of actions I just executed while keeping the rest of the actions fixed". Authors study this question in a context where the state space is continuous but the action space is discrete. The paper generalizes the result of previous work which studied a similar question, but in the context of discrete states. The setting is very nicely motivated and the problem is quite relevant and important. This formulation seems to be novel, general, and useful. However, the paper shows that answering this question is in fact an NP-hard, one that we can only solve in the worst case using exhaustive search. Therefore, the paper goes on to find an admissible heuristic to be used in conjunction with the A* algorithm to reduce the complexity of the search space. To propose a consistent/admissible heuristic, one needs to find reasonable upper bounds for the value of unseen states which are found using counterfactual reasonings. The paper proposes to get this done using the notion of Lipschitz continuity where intuitively, the difference between the value of the two states is upper bounded by a constant times the norm of the difference of the two states. This may be a stringent requirement generally, but it seems to make sense in the domain that is of interest in the paper, namely in medical trials. Experiments show that using this heuristic does result in significantly reducing the average runtime of the search. Overall, this is a fair contribution in the intersection of causal reasoning and planning/reinforcement learning. Strengths: The competency that the paper attempts to develop, namely being able to answer counterfactual questions in the context of planning and reinforcement learning, is quite interesting. 
Formulating this problem and showing that the original problem is NP-hard also sheds some light on the complexity of asking such questions. Some of the assumptions made, such as using a bijective causal model, do limit the scope of the result, but I still find the formulation very interesting. Weaknesses: We know that Lipschitz continuity can be a pretty loose upper bound for the function value. Especially in the context of this paper, when data is sparse, the upper bounds derived in Lemma 4 can be pretty bad. Moreover, recently there have been more efforts to develop Lipschitz-like tools that are more conducive to RL algorithms by ensuring that the new tools are coarse-grained. See for example "Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces" by Gottesman et al, 2023 or "Zooming for Efficient Model-Free Reinforcement Learning in Metric Spaces" by Touati et al 2020. This paper though relies on more classical notions of smoothness (vanilla Lipschitz continuity), which is not the most effective in the context of RL. Another point is that under the assumptions made in this paper (namely Lipschitz reward and transition dynamics), the value function itself becomes Lipschitz, so when learning the value function one can restrict the set of feasible value functions to be Lipschitz also. Is there any reason why this is not leveraged? In the same vein, we clearly know that the value function is always between (1-\gamma) R_max and -(1-\gamma) R_max. Can you elaborate whether the bounds that are obtained by the Lipschitz interpolation would be able to provide much tighter bounds or whether the bounds become vacuous? Can you also elaborate on the amount of data needed in your medical domains before these bounds become non-vacuous? In practice, we also need to make sure that we do not underestimate the Lipschitz constant, because otherwise the heuristic used in A* will no longer be admissible.
The estimate also needs to be tight enough so as to make sure that it is effective when used as a heuristic. But I am not sure how this trade-off will be maintained in problems where the Lipschitz constant is unknown. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would like to see a comparison with the discrete case, in particular the "Woulda, coulda, shoulda: Counterfactually-guided policy search" paper of Buesing et al, 2018. I do understand that this paper is dealing with the continuous state space. But one natural baseline to compare against could for example be the paper mentioned above, where one can simply discretize the continuous state space and then apply their algorithm to compare against. How important is the kick-starting part with the Monte Carlo anchoring? In particular, it would have been nice to see ablations where the Monte Carlo anchoring is performed more or less to understand its effect on performance. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors maintain that they will release code for their experiments if accepted, but they seem to have access to confidential patient data. I hope and trust that the authors will take sufficient precautions to ensure the anonymity of patients whose information is used in these experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
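For readers less familiar with the admissibility property this review keeps returning to, here is a generic A* sketch on an invented toy graph (this is not the paper's Algorithm 1). With an admissible heuristic, i.e., one that never overestimates the true remaining cost, A* is guaranteed to return an optimal path, which is why underestimating the Lipschitz constant (and thereby breaking admissibility) would be problematic.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: node -> lower bound on cost to goal."""
    frontier = [(h(start), 0.0, start, [start])]   # entries are (f = g + h, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:    # already expanded with a cheaper g
            continue
        best_g[node] = g
        for nxt, c in graph.get(node, []):
            heapq.heappush(frontier, (g + c + h(nxt), g + c, nxt, path + [nxt]))
    return float("inf"), None

# Toy instance: optimal path is s -> a -> b -> g with cost 4.
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 5)], "b": [("g", 1)]}
h = {"s": 3, "a": 2, "b": 1, "g": 0}.get   # admissible: h never exceeds the true cost-to-go
cost, path = a_star(graph, h, "s", "g")
```

If `h("a")` were raised to, say, 10, it would overestimate the true cost-to-go of 3, and A* could return a suboptimal path; this is the same failure mode as an underestimated Lipschitz constant in the paper's setting.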
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and insightful comments, which will help improve our paper. Please, find a point-by-point response below. **[Lipschitz-like tools for RL algorithms]** We would like to thank the reviewer for bringing these papers to our attention, which we will cite in the revised version of our paper. However, we would like to highlight that, in our paper, we do not learn a model-free RL policy nor study the relationship between smoothness, Lipschitz continuity and the amount of data needed to learn a model-free RL policy. Rather, given an observed sequence of actions and states, we search for an alternative sequence of actions that would have retrospectively maximized the (counterfactual) outcome under the counterfactual transition dynamics given by a (trained) structural causal model (SCM)---this is stated verbally in lines 40-42 and formalized in lines 180-183. In other words, our paper studies an *algorithmic* problem and not a *learning* problem. Therefore, it is hard to make a direct comparison with the papers brought up by the reviewer. That said, we would also like to clarify that the bounds obtained in Lemma 4 do *not* depend on the sparsity of the available data and they are an implication of the Lipschitz continuity of the SCM (see Definition 1). **[Lipschitz value function]** In our paper, to guarantee that the computed heuristic function $\hat{V}_ \tau$ is provably consistent, we do leverage the assumption that the SCM is Lipschitz continuous (see Lemma 4, Proposition 5, Theorem 6 and the related proofs in Appendix C). Also, we would like to clarify that our approach does not involve *learning* a value function. 
Note that the function $V_ \tau$ does not denote the value function, as typically defined in the RL literature, but denotes the counterfactual reward that could have been achieved in a counterfactual episode where, at time $t$, the process is at a (counterfactual) state $s$, and there are so far $l$ actions that have been different in comparison with the observed action sequence in the observed episode $\tau$ (see lines 192-195). To our best understanding, the upper bound $(1- \gamma ) R_{max}$ mentioned by the reviewer is not an upper bound relevant to our function $V_\tau$ but rather a bound for the value function in an RL problem with an infinite horizon. Note that, in our work, we consider a finite horizon (see lines 31, 60 & 75), and the outcome of the decision making process is the sum of the rewards $o(\tau)=\sum_t R(\mathbf{s}_t, a_t)$, which implies that $\gamma=1$. Therefore, together with the explanation given in the previous paragraph, we believe that the bound given by the reviewer is not applicable to our problem. Finally, we would also like to clarify that, to compute a better heuristic function $\hat{V}_\tau$, we do not require more data to get better bounds, but rather more anchor points, which we get via Monte Carlo simulations (see lines 277-295). **[Underestimation of Lipschitz constant]** In our experiments, we do not estimate the Lipschitz constant but rather we train an SCM that is Lipschitz-continuous by design and whose Lipschitz constant we can control. For a detailed description of this process and the overall model architecture, please refer to Appendix F2 in the supplementary. **[Comparison with "Woulda, coulda, shoulda…"]** We would like to clarify that “Woulda, coulda, shoulda: Counterfactually-guided policy search” by Buesing et al. [10] solves a different problem and thus it is incomparable with our method. 
In this context, a natural baseline to compare against would be the method introduced by Tsirtsis et al., NeurIPS 2021 [13] which solves a closely related problem in discrete state spaces, as we discuss in lines 52-54. Unfortunately, their method has a quadratic complexity with respect to the number of discrete states and thus does not scale to continuous multidimensional vector states as those used in our experiments. For example, discretizing the 9 continuous features we consider in our experiments into ten discrete levels each---a rather coarse-grained discretization---would lead to 1 billion discrete states. **[Monte Carlo anchoring]** The kick-starting approach we describe in lines 277-295 is an important part of our method because it generates the anchor set $\mathcal{S}_ \dagger$ required to compute the heuristic function $\hat{V}_ \tau$. Note that, in Figure 1(b), we vary the amount of Monte Carlo anchoring to investigate its effect on performance and, in Appendix E in the supplementary, we evaluate alternative selection strategies for the anchor set. **[Patient data]** We will release the code we used in our experiments but not the *anonymized* patient dataset (i.e., the MIMIC-III dataset). In this context, we would like to clarify that we are not the data owners, and we have gained access to the MIMIC-III dataset by submitting an application to the official owners (Physionet). As part of the process, we have passed a short online class in ethics. Please, refer to the MIMIC-III website for more details. --- Rebuttal Comment 1.1: Title: Lipschitz Nets Comment: Thanks for the pointer to the Appendix. What I originally meant is that the Lipschitz constants $L_{\phi}$ and $L_h$ could be estimated based on the data. The discussion on how these are chosen is currently pretty limited. For example, the Appendix reads that the Lipschitz models are only 6% worse in terms of log-likelihood. What does this mean? 
Is this log-likelihood on a single set of data, or log-likelihood on a held out test data? Moreover, when applying this approach to a new domain, how do I know what is a reasonable $L_{\phi}$ and $L_h$? --- Reply to Comment 1.1.1: Title: Response to follow-up question Comment: We would like to thank the reviewer for engaging in the discussion. Perhaps this is already clear to the reviewer, but we would like to highlight that our goal is not to use data to train and estimate the Lipschitz constant of a *single* SCM, since that could lead to an underestimation of its value and could end up being problematic for the optimality of our proposed method, as correctly mentioned by the reviewer in the original review. Instead, our approach is to train *multiple* SCMs that are Lipschitz continuous by design—each one is *guaranteed* to consist of neural networks $h$ and $\phi$ with Lipschitz constants $L_h$ and $L_\phi$ whose values we can control. Then, we evaluate the log-likelihood achieved by each of these SCMs using 5-fold cross validation, as discussed in lines 831-832, and pick the SCM with the best tradeoff between log-likelihood, $L_h$ and $L_\phi$. This procedure is described in lines 830-841 in Appendix F in the supplementary. To evaluate the log-likelihood using 5-fold cross validation, for each configuration of $L_h$ and $L_\phi$, we randomly split the dataset into a training and a validation set (with a size ratio 4-to-1), we train the corresponding SCM using the training set, and we evaluate the log-likelihood of the validation set based on the trained SCM. We repeat the procedure 5 times and we take the average of the achieved log-likelihoods. In the revised version of our paper, we will clarify that the log-likelihood is always measured on a different set of data points than the one used for training. 
Whenever we write that the Lipschitz models are only 6% worse in terms of log-likelihood, we mean that the log-likelihood achieved by the Lipschitz models is only 6% lower than the log-likelihood achieved by a baseline model whose Lipschitz constants are unconstrained. To pick $L_h$ and $L_\phi$ in a new domain, we would repeat the aforementioned procedure using data from the new domain and defer to domain experts to elucidate what is an acceptable trade-off between log-likelihood, $L_h$ and $L_\phi$.
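The selection protocol the authors describe (train one model per candidate Lipschitz constant, score each by cross-validated fit on held-out data, then pick the best trade-off) can be sketched with a toy model. Everything below is a made-up stand-in: a norm-constrained linear fit replaces the paper's Lipschitz neural networks, and negative mean squared error replaces the Gaussian log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=500)   # hypothetical data with true slope 2.0

def fit_constrained(Xtr, ytr, L):
    """Least-squares fit with the weight norm capped at L (a crude Lipschitz constraint)."""
    w = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
    norm = np.linalg.norm(w)
    return w if norm <= L else w * (L / norm)     # project onto the L-ball

def cv_score(L, k=5):
    """5-fold cross-validated fit (negative MSE; higher is better) for a candidate L."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate(folds[:i] + folds[i + 1:])
        w = fit_constrained(X[tr], y[tr], L)
        scores.append(-np.mean((y[te] - X[te] @ w) ** 2))
    return float(np.mean(scores))

for L in [0.5, 1.0, 2.0, 4.0]:
    print(L, round(cv_score(L), 4))
# The final choice trades off this held-out score against how small L is,
# since a smaller L makes the A* heuristic tighter but the model fit worse.
```

Here an overly small constant (0.5) visibly hurts the held-out score, while a generous one recovers the fit, which is the trade-off the authors resolve with domain experts.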
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper studies the problem of finding a counterfactual action sequence that maximizes the outcome of a trajectory in an MDP characterized by a structural causal model. The focus is on continuous state spaces under a set of Lipschitz constraints. The authors show that this problem is NP-hard and propose an algorithm based on A* search, which they test empirically on a clinical decision-making dataset. Strengths: - The paper studies a relevant, well-motivated problem that will be of interest to the NeurIPS community - The authors do a good job in clarifying and formalizing the problem - Showing that the proposed problem is NP-hard is a valuable insight to guide future theoretical and empirical work - The proposed algorithm is a natural choice and its formulation is sound - The empirical evaluation is performed on a realistic dataset, and the results are promising Weaknesses: - The discussion of related work is too limited. I think there is a lot of work on related problems in SCMs (outside RL) that should be mentioned. I'm not very familiar with this literature myself, which made it difficult to judge the potential impact of the contribution the present paper makes. - Theorem 3 (NP-hardness) is a key contribution of the paper. However, it is only stated in the paper and almost not discussed. It would be valuable to give a high-level picture of the reduction used to prove the result, as well as a discussion of the implications of this result and how it informs the choice of algorithm later in the paper. - The empirical results are difficult to interpret due to a lack of baselines. While the efficiency results in Figure 1 and performance results in Figure 2 look as expected qualitatively, there is no point of comparison. - For efficiency it would be useful to compare to some naive baselines, like a search-based method without a specific heuristic.
- The counterfactual performance improvements in Figure 2 are pretty small, which makes me wonder if the dataset is a good enough benchmark to evaluate the present method. Maybe the trajectories in the dataset are too close to optimal? - It could be interesting to compare to an approach that discretizes the environment and uses prior methods for discrete state spaces. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - What could be baselines to compare the empirical results to? - What impact does the chosen heuristic have on the performance of the search algorithm? - How much worse would a method that discretizes the environment perform? - How can we be confident that the present dataset is a good benchmark to evaluate the approach? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and insightful comments, which will help improve our paper. Please find a point-by-point response below. **[Related work]** We agree with the reviewer that there is a rich literature on structural causal models (SCMs) that is not connected to reinforcement learning (RL). Due to space limitations, we chose to focus the discussion only on the most closely related work [10-13] in the main body of our paper and include a further discussion of related work in Appendix A. In the revised version of our paper, we will bring some of the related work from Appendix A to the main body, and we will add a footnote explicitly pointing the reader to Pearl [8] for a more complete overview of prior work on causal inference based on SCMs. **[NP-Hardness]** The main implication of the NP-hardness result is that there is no hope of designing a polynomial-time algorithm that solves the problem. Following the reviewer’s advice, after Theorem 3, we will include a paragraph where we will give a high-level picture of the reduction used in the proof of the Theorem and discuss the importance and implications of the Theorem in the design of our algorithm. **[Lack of baselines]** We thank the reviewer for their concrete experimental suggestions. To the best of our knowledge, the only natural baseline to compare against would be the method introduced by Tsirtsis et al., NeurIPS 2021 [13], which solves a closely related problem in discrete state spaces, as we discuss in lines 52-54. Unfortunately, their method has a quadratic complexity with respect to the number of discrete states and thus does not scale to continuous multidimensional vector states such as those used in our experiments. For example, discretizing the 9 continuous features we consider in our experiments into ten discrete levels each (a rather coarse-grained discretization) would lead to 1 billion discrete states.
In terms of naive baselines, we believe it is non-trivial to come up with methods other than an exhaustive search that simultaneously work without a heuristic function and guarantee finding the optimal solution. We do not directly compare our method with an exhaustive search because it is too computationally expensive to run in a reasonable amount of time. That said, our experimental results indicate that our method does perform better than an exhaustive search since, otherwise, it would be exploring the entire search space regardless of the exact form of our computed heuristic function. For example, varying the number of Monte Carlo samples $M$ would have no effect on efficiency, i.e., Figure 1(b) would be a flat line. **[Quality of the sepsis management dataset]** We decided to perform experiments using the MIMIC-III sepsis management dataset since it is a commonly used dataset in the literature on reinforcement learning for healthcare (see lines 297-298) and it is relevant to our problem motivation. Note that, as an implication of Theorem 6, our method is guaranteed to *always* find the optimal solution, and we do not have reasons to expect that it would perform poorly using a different dataset. However, we would like to clarify that the reviewer’s comment that *``maybe the trajectories in the dataset are too close to optimal''* is part of our experimental findings (see lines 347-349) and we do not view this as an indicator that the dataset is not a good benchmark for the evaluation of our method. --- Rebuttal Comment 1.1: Comment: Thanks for the response! The responses address my primary concerns, and if the authors update the paper as promised, I think the paper would be improved quite a bit. I appreciate the comments regarding the difficulty of choosing suitable baselines; I think this should also be discussed in the paper. Overall, I am still a bit skeptical about how reliable the experimental results are, especially because it is only a single dataset.
But given that my other concerns were addressed, I will increase my score from 6 to 7 (assuming the authors make the promised changes to the paper). --- Reply to Comment 1.1.1: Title: Thank you for engaging in the discussion Comment: We would like to thank the reviewer for engaging in the discussion and for updating their score. We will make sure to perform all edits mentioned in the rebuttal, when revising our paper.
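As background for the efficiency discussion in this thread: the defining property of A* search is that an admissible heuristic prunes the search space while preserving optimality, whereas a trivial all-zero heuristic degrades toward exhaustive search. The skeleton below is a generic A* sketch, not the paper's implementation; the toy 1-D problem and all names are hypothetical.

```python
import heapq
from itertools import count

def a_star(start, is_goal, successors, heuristic):
    """Best-first search; an admissible heuristic guarantees an optimal path.
    successors(s) yields (next_state, step_cost) pairs."""
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, s, path = heapq.heappop(frontier)
        if is_goal(s):
            return path, g
        for nxt, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier,
                    (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]),
                )
    return None, float("inf")

# Toy 1-D problem: walk from state 0 to state 4, unit cost per step,
# with the exact remaining distance as an admissible heuristic.
path, cost = a_star(0, lambda s: s == 4,
                    lambda s: [(s + 1, 1.0)],
                    lambda s: max(0, 4 - s))
# path == [0, 1, 2, 3, 4], cost == 4.0
```

With `heuristic = lambda s: 0` the same routine still returns an optimal path but expands more states, which is the behavior the rebuttal contrasts against.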
null
null
null
null
null
null
A Unified Algorithm Framework for Unsupervised Discovery of Skills based on Determinantal Point Process
Accept (poster)
Summary: This paper focuses on unsupervised option discovery. It uses the framework of Determinantal Point Processes (DPP) with the aim of combining the advantages of variational and Laplacian-based methods, and unifying the desiderata of coverage and diversity of the learned options. Empirical validation shows the benefits of the approach over baselines in continuous control tasks. Strengths: - The idea of adopting DPP for option discovery is novel and quite interesting. Ensuring coverage and diversity has been a challenging open question in the unsupervised RL community and the use of DPP is a mathematically clever way to tackle it. The fact that the proposed approach seeks to capture advantages of both variational and Laplacian-based methods, usually part of distinct streams of research, also has a unifying value. - The paper is quite clear, well motivated and well written. - Experiments show improvements over baselines like DIAYN. Weaknesses: - Some related work is missing, including more recent and improved variational methods, such as [1-6]. It would in particular be valuable to discuss the connections with option-discovery approaches like [6-8] that build on the idea of Maximum Entropy Exploration and explicitly seek to optimize coverage, since the latter is a key feature targeted in the paper; see also my first question below. - The proposed method is quite complicated, with somewhat ad hoc design. While I appreciate the effort to isolate each desideratum (coverage/diversity, intra/inter-option), the final objective function in Eq.(9) is rather involved. In particular, the choice of hyperparameters to trade off each term looks non-trivial, as acknowledged in the last section of the paper. The method seems quite computationally involved compared to standard variational approaches, and it is unclear how it would scale to more complex setups (e.g., longer horizon, more skills, visual domains). - The empirical part of the paper could be improved.
There have been numerous option-discovery works that clearly outperform VIC/DIAYN and it would be relevant to compare to such stronger baselines. For example: DISDAIN [5], EDL [7] (with SMM [6] instead of the oracle version, despite the fact that it is a 'trajectory-first' method as categorized in the paper), DADS (despite the author's argument of distinction between model-free / model-based). As a small side note, it's also nice to add the random policy as a 'free' baseline (as it sometimes outperforms VIC/DIAYN in coverage objectives). - Very minor typos: 'extracts' (l.51), 'generalizes the" (l.202), 'long-horizon' (l.251)). [1] Fast task inference with variational intrinsic successor features, Hansen et al., ICLR 2020 [2] Relative variational intrinsic control, Baumli et al., AAAI 2021 [3] Entropic desired dynamics for intrinsic control, Hansen et al., NeurIPS 2021 [4] Direct then diffuse: Incremental unsupervised skill discovery for state covering and goal reaching, Kamienny et al., ICLR 2022 [5] Learning more skills through optimistic exploration, Strouse et al., ICLR 2022 [6] Active Pretraining with Successor Features, Liu and Abbeel, ICML 2021 [7] Efficient Exploration via State Marginal Matching, Lee et al., arXiv 2019 [8] Explore, discover and learn: Unsupervised discovery of state-covering skills, Campos et al., ICML 2020 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Could the authors provide a rigorous/mathematical definition of what they mean by 'coverage' throughout the paper. Is it in terms of entropy of states visited by the options H(S)? 
If so, then it's a term explicitly optimized by diversity I(S,Z) = H(S) - H(S|Z), which would make the statement that "variational option discovery maximizes the diversity of the options through a mutual information loss (while ignoring coverage)" confusing (or maybe the authors mean that optimizing MI exactly is difficult and the approximations performed in the literature tend to have poor coverage, which is different from arguing that the targeted objective ignores coverage in the first place). Also, using a Laplacian-based idea that reduces the cover time of the graph induced by the random policy may be a poor proxy of state space coverage in the entropic sense, while some diversity-based works like [6-8] explicitly target this component with good empirical results. As for the coverage visualizations, Figure 1(b) is said to improve coverage but the options are all going to the same corridor, which does not look desirable in terms of state space coverage. Meanwhile, in Figure 3, the paper implies that options learned by (c) are more "diverse" and "beneficial for various downstream tasks" than (a) or (b), although one may argue otherwise, as the latter show less clutter around the initial state and attain more distant regions of the state space. Thanks for the clarifications. - Isn't the evaluation in Appendix D.3 on OpenAI Gym rather than Atari? (Is Cartpole an Atari game?) - Although there is a computational complexity discussion in Appendix C.4, could you elaborate more on the scalability of the approach and for example provide numbers on its computational requirements compared to VIC/DIAYN given the same number of environment interactions?
- As first observed in the DIAYN paper, fixing the prior distribution (over option choice at the initial state) rather than learning it prevents a collapse to sampling only a handful of skills (as it can occur for VIC), and this has become standard practice in most subsequent works (although it is a loose lower bound on the original MI objective that should also optimize for the prior distribution). Here you explicitly learn the prior distribution, which is an interesting but undiscussed choice, could you explain more if, and why, you can overcome the aforementioned collapse?  Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: A limitation is discussed in Section 5. Not applicable regarding potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Regarding the definition of option coverage (Question #1): In our paper, coverage is a property defined w.r.t. a single option. It refers to the expected number of landmark states (i.e., clusters of states) traversed by an option trajectory. It is defined as $f(\tau)$ in Eq. (6). We view states in a trajectory as the universal set $\mathcal{X}$. The expected cardinality of a set sampled from $\mathcal{X}$ under a DPP, i.e., $f(\tau)$, reflects the number of modes (i.e., landmark states) contained in the trajectory. By (a) maximizing the coverage of each single option and (b) simultaneously maximizing diversity among different options, the overall span of all options in the state space can be maximized. The "coverage" in our paper does not refer to the span of all options and so we do not use entropy of states as the coverage measure. In Figure 1(b), options optimized for (single-option) coverage all enter the right corridor, allowing each one’s trajectory to visit more landmark states and thus achieving better (single-option) coverage compared with the ones in Figure 1(a). In Figure 3(a)(b), we show the learned option trajectories starting from different locations (i.e., yellow points), while in 3(c), we only show trajectories starting from the center point. Options learned in (c) can lead the agent to multiple directions from a starting point, showing more diversity. ## Regarding complexity and scalability of ODPP (Weakness #2, Question #3): Please refer to the global response at the top. ## Regarding the prior network (Question #4): Learning a prior network provides a tighter lower bound on the original MI objective. In Figure 6 of Appendix D.1, we show that the learned prior network can be used as initializations to aid downstream tasks. Previous variational methods choose to fix the prior distribution to avoid a collapse in which the prior network samples only a handful of skills.
Our algorithm avoids this collapse because it additionally introduces three DPP-based terms to explicitly maximize the coverage and diversity of the learned options. With $\mathcal{L}^{DPP}_{1:3}$, each option is expected to cover multiple state clusters with a long range and different options tend to visit different regions in the state space. In this case, the prior network would tend to select multiple diverse skills to improve the learning objective (i.e., Eq. (11)) rather than sampling only a few of them. The collapse happens because the mutual information objective only implicitly measures option diversity as the difficulty of distinguishing them via the variational decoder and does not model the coverage, as noted in Section 3. This motivates us to introduce explicit diversity and coverage measures as regularizers and enhancements. ## Regarding related works (Weakness #1): Here, we provide comparisons of our algorithm with more recent variational option discovery methods. (This is a simplified version due to the word count limit. We can provide the detailed version in the discussion stage (if asked) and the final submission. [1-8] refers to papers listed by the reviewer.) In [1], the authors focus on alternative approaches to leverage options learned during the unsupervised phase, rather than new option discovery algorithms. They still adopt objectives based on Mutual Information (MI) as in previous works, without solving the exploration issue. The authors of [2] propose a slightly-modified version of VIC to improve the usefulness of the discovered options by introducing an extra posterior. Still, their options are not explicitly trained for better coverage/exploration. The authors of [3] propose to replace the fixed prior distribution $P(c)$ with a fixed dynamics model over the option latent codes $P(c_{t}|c_{t-1})$. Each latent code corresponds to a sub-trajectory. By concatenating sub-trajectories, the agent can reach much further states.
They only rely on MI-based objectives, so they cannot model the coverage of each option (as ours does) and instead choose to chain options for better overall coverage. The fixed $P(c_{t}|c_{t-1})$ can result in inflexibility when applying options in downstream tasks. In [4], they employ a multi-step protocol to generate options organized in a tree-like structure. Heuristics and structural limits are involved in each step, which may hinder its generality. Also, they propose to optimize the local coverage around the final state rather than the overall trajectory coverage as ours does. In [6], they optimize the MI $I(s, c) = H(s) - H(s|c)$, where $H(s)$ is for improving exploration of the learned options. They adopt a variational posterior $P_{\phi}(s|c)$ to estimate $H(s|c)$. Learning $P_{\phi}(s|c)$ can be challenging when the state space is high-dimensional, compared with $P_{\phi}(c|s)$ used in our paper, which can hinder the optimization. Further, in DADS, they categorize algorithms utilizing $I(s, c) = H(s) - H(s|c)$ as the forward form of MI-based option discovery, and they empirically and theoretically show the limited capability of these algorithms for exploration even with $H(s)$ in the objective. EDL [8] disentangles exploration and skill discovery into two phases. They first train an exploration policy that induces a uniform distribution over states and can cover the state space. Then, they discover diverse skills contained in thorough samples from the pre-learned exploration policy, based on MI-based objectives. Training such an exploration policy can be challenging. In [8], they adopt SMM [7] as a solution, which requires solving a demanding max-min problem. In our paper, we tackle a more challenging scenario where the agent must learn to identify diverse options and thoroughly explore the environment at the same time, starting from a random policy, without access to expert trajectories or exploration policies.
## Regarding Weakness #3: We have offered comparisons with more advanced baselines, DADS and APS [6], in the global response as a PDF. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which addresses most of my concerns and questions. Incorporating the clarifications, related work and new empirical results will improve the paper. The proposed method, if somewhat complicated, brings interesting insights to diversity and coverage in unsupervised option discovery. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score. We greatly appreciate it. We will add our rebuttal content to the final submission, including clarifications on the definition of coverage, analysis on the complexity and scalability of ODPP, related works, and comparisons with more advanced baselines.
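For readers unfamiliar with the expected-cardinality quantity used above as the per-option coverage measure $f(\tau)$: for an L-ensemble DPP, the expected sample size has a closed form in the kernel's eigenvalues, $\mathbb{E}[|Y|]=\sum_i \lambda_i/(1+\lambda_i)$. The sketch below illustrates this standard identity, not the paper's exact implementation.

```python
import numpy as np

def expected_dpp_cardinality(L):
    """E[|Y|] for Y ~ DPP with L-ensemble kernel L equals sum_i lam_i / (1 + lam_i)."""
    lam = np.clip(np.linalg.eigvalsh(L), 0.0, None)  # L is PSD; clip numerical noise
    return float(np.sum(lam / (1.0 + lam)))

# Sanity check: with L = I_n, each item is included independently with
# probability 1/2, so the expected set size is n / 2.
n = 6
size = expected_dpp_cardinality(np.eye(n))  # 3.0
```

Intuitively, a trajectory whose states form a kernel with many large eigenvalues (many well-separated landmark clusters) scores a larger expected cardinality, which is what the coverage objective rewards.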
Summary: This paper introduces a novel framework for unsupervised option discovery by utilizing a Determinantal Point Process (DPP) to quantify and optimize both the diversity and the coverage of the learned options. The proposed unified option discovery framework captures the advantages of both variational and Laplacian-based methods, which are the major tools for existing unsupervised option discovery approaches. The experimental results in both MuJoCo and Atari demonstrate the superiority of the proposed algorithm. Strengths: 1. The motivation to propose the unified algorithm framework is convincing and the paper is well-written and easy to follow. The authors illustrate the main idea of this paper well using a toy example. 2. The proposed option discovery framework unifies both variational and Laplacian-based methods and enables explicit maximization of diversity and coverage of the options. 3. Though DPP is widely used in methods to promote diversity, this is the first work to adopt it for option diversity. 4. Both the theoretical and experimental results show the superiority of the algorithm. Weaknesses: 1. My major concern is the practical application of the proposed framework. In Eq. (9) there are three hyper-parameters in the loss function. How should these hyper-parameters be tuned? In L586 you say you fine-tune important hyperparameters using a sequential, greedy method based on options’ visualization, such as in Fig. 2. Could you provide more details? 2. DPP is widely used in work to promote diversity. It would be better to discuss them in related work, e.g., in promoting diverse policies in population-based RL [1, 2], diverse policies in games [3], recommendation diversity [4], etc. 3. Some theoretical results could be highlighted (e.g., stated as Propositions) in the main text. 4. In Sec. 4, instead of some visualization results, are there some diversity metrics that could be proposed to measure the diversity?
It would be better to see these numerical results in the main text. [1] Jack Parker-Holder, Aldo Pacchiano, Krzysztof M Choromanski, and Stephen J Roberts. Effective diversity in population based reinforcement learning. Advances in Neural Information Processing Systems, 33:18050–18062, 2020. [2] Wu S, Yao J, Fu H, et al. Quality-Similar Diversity via Population Based Reinforcement Learning. In The Eleventh International Conference on Learning Representations, (2023). [3] Perez-Nieves, Nicolas, et al. "Modelling behavioural diversity for learning in open-ended games." International conference on machine learning. PMLR, 2021. [4]. Chen, L., G. Zhang, E. Zhou. Fast greedy MAP inference for determinantal point process to improve recommendation diversity. In Advances in Neural Information Processing Systems 31, NeurIPS 2018, pages 5627–5638. 2018. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Regarding fine-tuning the hyperparameters: As noted in the paper, the crucial hyperparameters are $\beta,\ \alpha_{1:3}$ in Eq. (4) and (9), which control the importance of each objective term, relating to diversity and coverage. Conducting a grid search over this set of parameters can be prohibitively expensive. Therefore, we follow the process of the ablation study shown in Figure 2, adding objective terms and adjusting their corresponding weights one by one. In particular, in Figure 2(e), we retain only the $\mathcal{L}^{IB}$ objective and select its weight $\beta=10^{-3}$ from five possible choices: $1, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}$, guided by the visualization results. Next, for Figure 2(f), we introduce $\mathcal{L}^{DPP}_1$ and fine-tune the corresponding weight $\alpha_1$ while keeping $\beta$ fixed at $10^{-3}$. Last, we incorporate $\mathcal{L}^{DPP}_2$ and $\mathcal{L}^{DPP}_3$ and adjust $\alpha_2$ and $\alpha_3$ accordingly, while keeping $\beta,\ \alpha_1$ fixed. Note that the final two terms must work in tandem to ensure that the discovered options exhibit diversity across different options and consistency for a specific option choice. After the fine-tuning, we set $\beta=10^{-3}, \alpha_1=10^{-4}, \alpha_2=10^{-2}, \alpha_3=10^{-2}$. It is worth noting that our evaluations across various challenging RL tasks utilize the same hyperparameter set, highlighting its robustness. This is because our proposed (DPP-based) coverage and diversity measures are task-agnostic and universally applicable to RL tasks. ## Regarding related works on DPP: We will add the following literature review on applying DPP to diversity enhancement: Determinantal Point Processes (DPPs) have found applications across a wide array of domains to promote diversity due to their unique ability to model diverse subsets.
Originating from quantum physics, DPPs were introduced to the machine learning community by Kulesza and Taskar [2], whose tutorial highlighted the potential of DPPs for promoting diversity. In information retrieval tasks, Wilhelm et al. [10] and Chen et al. [11] have demonstrated the utility of DPPs in diversifying the output of recommendation systems. In Computer Vision, Gong et al. [3] and Kim et al. [4] utilized DPPs in video summarization and object detection, respectively, to reduce outcome redundancy. In Natural Language Processing, Perez-Beltrachini et al. [5] exploited DPPs to select relevant and diverse content for neural abstractive summarisation, and Song et al. [6] applied them to model the query-level and system-level diversity in neural conversation systems. Expanding on this theme of diversity, DPPs have been extensively applied in reinforcement learning, particularly in promoting diverse policies in population-based RL [7, 8], and diverse policies in games [9]. ## Regarding the theoretical results: Thanks for your advice. We will highlight the variational lower bound of the Information Bottleneck objective (i.e., Eq. (5)) and the unbiased gradient estimators for the prior network and intra-option policy (i.e., Eq. (10)-(12)), and formalize them as propositions in the final version. ## Regarding the diversity metrics: In Figure 5(b), we utilize the standard deviation of trajectory rewards corresponding to different options as quantitative measures for the option diversity. This measure has been used in previous works, such as [1]. For more convincing ablation study, we provide new quantitative results in the global response as a PDF. We propose using the distribution of final locations within option trajectories to measure the diversity and coverage, as in [12]. ## References: [1] Eysenbach, Benjamin, Abhishek Gupta, Julian Ibarz, and Sergey Levine. "Diversity is All You Need: Learning Skills without a Reward Function." 
In International Conference on Learning Representations. 2018. [2] Kulesza, Alex, and Ben Taskar. "Determinantal point processes for machine learning." Foundations and Trends® in Machine Learning 5, no. 2–3 (2012): 123-286. [3] Gong, Boqing, Wei-Lun Chao, Kristen Grauman, and Fei Sha. "Diverse sequential subset selection for supervised video summarization." NeurIPS, 2014. [4] Kim, Nuri, Donghoon Lee, and Songhwai Oh. "Learning instance-aware object detection using determinantal point processes." Computer Vision and Image Understanding 201 (2020): 103061. [5] Perez-Beltrachini, Laura, and Mirella Lapata. "Multi-document summarization with determinantal point process attention." JAIR 71 (2021): 371-399. [6] Song, Yiping, Rui Yan, Yansong Feng, Yaoyuan Zhang, Dongyan Zhao, and Ming Zhang. "Towards a neural conversation model with diversity net using determinantal point processes." AAAI, 2018. [7] Parker-Holder, Jack, Aldo Pacchiano, Krzysztof M. Choromanski, and Stephen J. Roberts. "Effective diversity in population based reinforcement learning." NeurIPS, 2020. [8] Wu, Shuang, Jian Yao, Haobo Fu, Ye Tian, Chao Qian, Yaodong Yang, Qiang Fu, and Yang Wei. "Quality-Similar Diversity via Population Based Reinforcement Learning." ICLR, 2022. [9] Perez-Nieves, Nicolas, Yaodong Yang, Oliver Slumbers, David H. Mguni, Ying Wen, and Jun Wang. "Modelling behavioural diversity for learning in open-ended games." ICML, 2021. [10] Wilhelm, Mark, Ajith Ramanathan, Alexander Bonomo, Sagar Jain, Ed H. Chi, and Jennifer Gillenwater. "Practical diversified recommendations on youtube with determinantal point processes." CIKM, 2018. [11] Chen, Laming, Guoxin Zhang, and Eric Zhou. "Fast greedy map inference for determinantal point process to improve recommendation diversity." NeurIPS, 2018. [12] Achiam, Joshua, Harrison Edwards, Dario Amodei, and Pieter Abbeel. "Variational option discovery algorithms." arXiv:1807.10299 (2018). --- Rebuttal Comment 1.1: Comment: Thanks for your response. 
I am happy that the author solved most of my questions. It would be nice to add these details to the future version. I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for the raise. We greatly appreciate it. We will add our rebuttal content to our final submission, including details on the fine-tuning process, related works on DPP, new quantitative results on the ablation study, and more formalized theoretical results.
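The sequential, greedy protocol described in the rebuttal above (fix previously chosen weights, then sweep one weight at a time over a log-spaced grid) can be sketched as follows. The scoring function here is a synthetic, separable stand-in for the visualization-guided judgment the authors actually use, and the target values are hypothetical.

```python
import math

def greedy_tune(score_fn, grids, defaults):
    """Coordinate-wise greedy search: tune each weight in turn,
    keeping the weights chosen in earlier rounds fixed."""
    params = dict(defaults)
    for name, grid in grids.items():
        params[name] = max(grid, key=lambda v: score_fn({**params, name: v}))
    return params

# Hypothetical optimum and a separable toy score (distance in log10 space).
target = {"beta": 1e-3, "alpha1": 1e-4, "alpha2": 1e-2}
grid = [1.0, 1e-1, 1e-2, 1e-3, 1e-4]
score = lambda p: -sum(abs(math.log10(p[k]) - math.log10(target[k])) for k in target)

best = greedy_tune(score, {k: grid for k in target}, {k: 1.0 for k in target})
# best == target, since the toy score is separable across weights
```

With 3 weights and 5 grid points, this costs 15 evaluations instead of the $5^3 = 125$ a full grid search would require; the tradeoff is that greedy tuning can miss interactions between weights, which is why the rebuttal tunes $\alpha_2$ and $\alpha_3$ together.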
Summary: The paper introduces an unsupervised option discovery framework based on a combination of Determinantal Point Process and Laplacian spectral features. The main idea is to combine variational methods and Laplacian methods in order to control for coverage and diversity of the options. The proposed method has been validated experimentally on MuJoCo 3D navigation environments where it shows superior performance to other approaches. Further experiments have been done on Atari environments as well. Strengths: The proposed combination of DPP and Laplacian features is reasonable and shows promising results. The novelty stems mainly from the modified DPP kernel similarity matrix which makes use of the Laplacian spectrum. I also think that maximizing the cardinality of the landmark set as a diversity signal is a nice idea. All components that were introduced in the method are well argued, especially the overarching motivation of maximizing coverage and diversity at the same time (reaping the benefits of variational and Laplacian methods). Experiments are convincing. Weaknesses: At some points the paper lacks clarity; for example, in the exposition of the DPP, terms were introduced that were not properly explained and that I would not consider common knowledge. Some probabilistic framing that was used in the paper was off, but we can clarify this in the rebuttal phase hopefully. The Atari experiments should appear in the main paper in some form, since they are already announced in the abstract. As the authors have already noted in the paper, the method needs balancing of 3 terms in the objective which might make it impractical; however, the hyperparameters are stable across different environments (which shows robustness). No significant theoretical insights, rather a combination of existing works in order to devise an option discovery algorithm.
The writing in general could be improved; I found it a bit hard to keep track of all the introduced terms and the connection to the main idea of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: line 112: the explanation of this Gram matrix seems lacking; for instance, the quality measure is just introduced in the text here and used once more later, without any explanation of how it relates to the reward and the DPP probability of a subset. eq. 3: it is not clear to me from this equation that you are doing MAP inference; can you specify in clear terms what is the posterior here and what is the prior (also in the paper)? eq. 4: what is being maximized over here? And a follow-up: in which sense is beta a Lagrange multiplier? What is the constrained optimization problem that is being solved here, and can you actually write it this way? Please clarify this in your response and ideally in the main text. line 219: the notation here is confusing; is this to define a conditional random variable given s_0 and c (the trajectory)? Wouldn't it be better to just define this as sampled from a conditional distribution, and then take the expectation in eq. 7 over it? (might be less confusing) eq. 8: same comment as for line 219. eq. 9: perhaps you should consider putting the minus sign into the second loss, so that it remains a proper loss (something that you want to minimize), and adjust the explanation. figure 3 caption - "in the normal setting". The legends in figure 4 are not well visible and the fonts are off. I suggest you place the legends outside of the figure. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Regarding the Gram Matrix (Question #1): As introduced in Section 2.2, the Gram Matrix includes the quality measures $q$ and normalized vectors $\vec{b}$ for each element in the set. From Eq. (1), we can see that the sampling probability is proportional to the squared volume of the parallelepiped spanned by the columns $q_i\vec{b}_{i}$, for $i$ in the sampled subset. Thus, elements with more orthogonal feature vectors and higher quality values are more likely to be sampled as a subset. In RL tasks, we need to assign quality measures and features to each state. States with higher expected returns should be visited more frequently and thus be assigned higher quality values. However, in our reward-free setting, we do not have prior knowledge of the quality of states, so we assign each state an equal quality measure of 1, as mentioned in Section 3.2. ## Regarding Eq. (3) (Question #2): $P_{L(\mathcal{W})}(\mathbb{W}=W)=P(\mathbb{W}=W|\mathbb{L}=L(\mathcal{W}))$ denotes the probability of sampling the subset $W$ out of the universal set $\mathcal{W}$ given the definition of the kernel matrix $L(\mathcal{W})$. With such a conditional model, the problem of finding the set $W \subseteq \mathcal{W}$ with the highest probability, i.e., Eq. (3), is referred to as maximum a posteriori (MAP) inference, as defined in the first paragraph of Section 2.4.5 in [1]. We will clarify this in the main text during the final submission stage. ## Regarding Eq. (4) (Question #3): Our goal is to learn a policy $\pi_{\theta}$ and a prior $P_{\omega}$ which condition on the option choice $c$. $c$ should be maximally expressive about the landmark states $G$ induced by $\pi_{\theta}$ and $P_{\omega}$, while being compressive about the whole trajectory $\tau$ to eliminate redundant information.
According to the Information Bottleneck framework [2], this can be realized through: $\max_{\theta, \omega} \mathbb{E}_{s_0 \sim \mu(\cdot)} I(c, G|s_0;\theta, \omega)$ subject to $I(c, \tau|s_0;\theta, \omega) \leq I_{ct}$, where $I_{ct}$ is the information constraint. Equivalently, with the introduction of a Lagrange multiplier $\beta \geq 0$, we can optimize: $\max_{\theta, \omega} \mathbb{E}_{s_0 \sim \mu(\cdot)} \left[ I(c, G|s_0;\theta, \omega) - \beta I(c, \tau|s_0;\theta, \omega) \right]$. Thanks for the advice. We will add this to the main text during the final submission stage. ## Regarding Eq. (7)-(9): $\vec{\tau}_{(s_0, c)}$ denotes a set of $M$ sampled trajectories subject to the option choice $c$ and starting from $s_0$. We will adopt your suggested modifications to Eq. (7)-(9) for a clearer explanation. ## Regarding Figures 3 and 4: Thank you for your careful check. We will fix them during the final submission stage. ## References: [1] Kulesza, Alex, and Ben Taskar. "Determinantal point processes for machine learning." Foundations and Trends® in Machine Learning 5, no. 2–3 (2012): 123-286. [2] Alemi, Alexander A., Ian Fischer, Joshua V. Dillon, and Kevin Murphy. "Deep Variational Information Bottleneck." In International Conference on Learning Representations, 2017.
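To make the Gram-matrix construction in the rebuttal above concrete, here is a minimal numerical sketch (the feature vectors are hypothetical toy values, not from the paper): with $L_{ij} = q_i q_j \vec{b}_i \cdot \vec{b}_j$ and $P(\mathbb{W}=W) \propto \det(L_W)$, subsets whose feature vectors are closer to orthogonal are assigned higher probability.

```python
import numpy as np

# Hypothetical unit-norm feature vectors for three states (toy values).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.99, 0.14]])
B = B / np.linalg.norm(B, axis=1, keepdims=True)

q = np.ones(3)  # reward-free setting: equal quality measure of 1 per state

# DPP kernel: L_ij = q_i * q_j * <b_i, b_j>
L = np.outer(q, q) * (B @ B.T)

def subset_prob(L, W):
    """Unnormalized DPP probability of subset W: det of the principal minor."""
    return np.linalg.det(L[np.ix_(W, W)])

p_orth = subset_prob(L, [0, 1])  # orthogonal feature vectors
p_near = subset_prob(L, [0, 2])  # nearly parallel feature vectors
print(p_orth, p_near)            # the orthogonal pair is far more likely
```

With unit quality measures the determinant reduces to the squared volume spanned by the feature vectors, which is exactly the geometric picture described above.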
Summary: This paper addresses reward-free option discovery for RL. First, it notes that prior work would prioritize either state coverage or diversity in the option discovery procedure. Hence, it proposes a new loss function that fosters coverage and diversity simultaneously by exploiting DPPs on both the trajectories generated by the options and the states within a trajectory. Finally, it presents an algorithm to optimize this objective, which is evaluated in continuous control domains against standard baselines, such as VIC, VALOR, DIAYN, and Laplacian methods. Strengths: - Exploiting tools from DPPs for option discovery is interesting and, to the best of my knowledge, novel and original; - The experimental results are at least promising; - The paper includes a neat presentation of the previous works and approaches for unsupervised option discovery. Weaknesses: - The learning objective and the corresponding algorithm are quite convoluted. They require several layers of approximation to make the learning tractable, as well as a handful of hyper-parameters to be tuned and the definition of a suitable kernel over trajectories and states; - The experimental evaluation is not thorough. Most of all, the ablation study is limited to a single run in a single domain, and leaves the reader wondering whether all of the introduced ingredients would be needed in general; - The intuition behind the loss function is not always pristine. It is easy to lose focus going through Section 3 of the paper; - The visualization of the experimental results could be further polished. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The paper is an interesting contribution to a problem that is somehow far from being solved despite receiving considerable attention recently. While the main contribution of this paper is algorithmic, it is hard to assess its value without a stronger empirical study (or more theoretical corroboration).
Thus, I am currently providing a slightly negative score, even if a case for acceptance could be made, and I am open to changing my score after the authors' response. As is, the paper looks like a nice engineering feat to me, but lacks some convincing intuition and experiments on why all those tricks would work in general. **Algorithm** - There is one aspect that is not completely clear to me in how the loss is presented. The authors motivate the work as going beyond previous variational or Laplacian approaches with a new framework that comprises both, but then the main ingredient of the loss, i.e., $\mathcal{L}_{IB}$, looks like a standard variational loss with some variation (landmark states). Moreover, the components $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_3$ already provide incentives for coverage and diversity, so why should one also add the information bottleneck on top of them? Overall, I think that presenting this work as an evolution of variational option discovery would help its presentation. - From my understanding, to define the relevant DPPs a proper kernel on trajectories and states is needed. Can the authors discuss how this would be designed in the absence of any domain knowledge? - I am wondering whether the presented loss function induces a (hidden) RL problem or not, i.e., whether the loss function could be incorporated into a standard reward function in hindsight. Is the approach producing a set of deterministic options, or may the discovered options be stochastic? **Experiments** - The presented approach is fairly complicated and I think the ablation study in Section 4.1 is not enough to state that all of the introduced ingredients would benefit option discovery in general. It is somewhat underwhelming that the paper only presents one result, in a single domain with a single seed, through an illustration instead of some quantitative analysis and learning curves.
I believe this does not meet the bar to motivate the added complexity of the loss function, especially on top of $\mathcal{L}_{IB}$, which seems to lead to good option discovery already. - All of the approach is predicated on the need for coverage and diversity simultaneously, but it is hard to evaluate them beyond qualitative assessments. However, I think the paper could put more effort into designing quantitative measures to evaluate diversity and coverage. - I would suggest reporting the learning curves with averages and confidence intervals instead of one standard error, which is hardly meaningful. **Minor** - I would suggest reporting the pseudocode of Algorithm 1 in the main text. - The overall notation, in particular the one needed to define the loss function, is sometimes convoluted. - Where is the DIAYN learning curve in Fig. 4b, 4c? - Why are the Atari results relegated to the appendix? CartPole is not an Atari game, but a continuous control task. - In the downstream task setting, is only the option-selecting policy learned? Did the authors also consider fine-tuning of the learned options? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The paper explicitly addresses the limitations of the presented approach in the final paragraph. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Regarding loss terms (Question #1): As noted in the first paragraph of Section 3, we need to learn an intra-option policy $\pi_{\theta}(a|s,c)$ conditioned on the option choice $c$. Each option choice should correspond to a specific policy. As a common practice in variational methods, this type of option-policy mapping is established by maximizing the mutual information between them, i.e., $\mathcal{L}^{IB}$ in our case. The DPP-based loss terms, i.e., $\mathcal{L}^{DPP}_{1:3}$, cannot be harnessed directly to obtain such a mapping. Presenting our algorithm as an evolution of variational methods would be a good idea. We provide such a view in the second paragraph of Section 3 as an algorithm overview. The variational loss term, $\mathcal{L}^{IB}$, is used to establish the option-policy mapping, so we can learn multiple options simultaneously by introducing a conditional variable $c$. However, the mutual information objective only implicitly measures the diversity of options as the difficulty of distinguishing them via a variational decoder (i.e., $P_{\phi}$ in our paper), and does not model the coverage. As a novel extension, we propose to explicitly model and optimize the coverage and diversity of options based on DPPs. ## Regarding the DPP kernel (Question #2): To make our algorithm general, we do not rely on domain knowledge when designing the DPP kernel. For the kernel matrix on the set of states, as described in the second paragraph of Section 3.2, we need to specify the quality measure and feature vector for each state. Since this is a reward-free setting, all states are assigned the same quality. As for feature vectors, we use the Laplacian spectrum (i.e., the eigenvectors corresponding to the $D$ smallest eigenvalues) of the state-transition graph, which intuitively captures connectivity among states.
The same feature design has been used for spectral clustering [1], and we have shown analytically and empirically that we can generalize Laplacian option discovery with this feature embedding. This feature design is task-agnostic. In any RL task, we can compile state transitions in replay buffers, upon which we can estimate these features, as detailed in Appendix C.3. As for the kernel matrix on trajectories, we still assign an equal quality measure to each trajectory. As in the first paragraph of Page 6, the feature of each trajectory is computed as a sum of the feature vectors of the states within the trajectory. This follows the structured DPP framework [2] used for modeling sequential data. Again, the kernel matrix design does not require domain knowledge. ## Regarding the option learning (Question #3): The learning outcome is the intra-option policy $\pi_{\theta}(a|s,c)$. $c$ is a one-hot vector representing a discrete set of options. For each $c$, its policy is a mapping from the current state to the action, which is stochastic in continuous control tasks. $\pi_{\theta}(a|s,c)$ is modeled as a neural network and trained by applying the gradient $\nabla_{\theta}\mathcal{L}$ (i.e., Eq. (10)). The gradient form inspires us to update the policy with Actor-Critic methods, where $A^{\pi_{\theta}}_m$ defines the Q-function. When applied to downstream tasks, $\pi_{\theta}(a|s,c)$ is fixed and we only need to learn a high-level policy $P_{\psi}(c|s)$ to select among options. For each selected option, we execute its intra-option policy for a fixed number of steps (i.e., the option horizon) before sampling a new one. ## Regarding the experiments: For quantitative results, we have provided comparisons among multiple option discovery algorithms, including the learning performance on downstream Mujoco tasks (Figure 4), and the effectiveness of the discovered skills in 3D Mujoco Locomotion (Figure 5) and OpenAI Gym tasks (Figure 8 in Appendix D.3).
Besides, we provide a detailed analysis of the learned prior network (Figure 6 in Appendix D.1). The introduction of each objective term is intuitive: $\mathcal{L}^{IB}$ is for establishing the option-policy mapping and implicitly encouraging diversity, $\mathcal{L}^{DPP}_{1}$ is for improving the coverage of each option trajectory, while the other DPP terms work together to explicitly model and optimize the option diversity. For a more convincing ablation study, we provide new quantitative results in the global response as a PDF. We propose using the distribution of final locations within option trajectories to measure the diversity and coverage, as in [3]. ## Regarding the minor issues: We will move the pseudocode to the main text since we get an extra page in the final version. The learning curves of DIAYN in Figure 4(b)(c) are occluded by those of the other algorithms. The options learned with these algorithms can hardly lead the agent out of the inner room in Figure 4(a) to reach the goal area and get any reward signal. CartPole is not an Atari game like the other two in Figure 8. We will fix the benchmark name as OpenAI Gym and move Figure 8 to the main text. In the downstream task setting, only the option-selecting policy is learned. We do not fine-tune the options in this stage, because this stage is for evaluating the options discovered in the previous stage, and we need to keep them fixed in downstream tasks to keep comparisons fair. ## Regarding the complexity of the proposed algorithm (Weakness #1): We have provided a discussion on this in the global response. ## References: [1] Ng, Andrew, Michael Jordan, and Yair Weiss. "On spectral clustering: Analysis and an algorithm." Advances in neural information processing systems 14 (2001). [2] Kulesza, Alex, and Ben Taskar. "Structured determinantal point processes." Advances in neural information processing systems 23 (2010). [3] Achiam, Joshua, Harrison Edwards, Dario Amodei, and Pieter Abbeel.
"Variational option discovery algorithms." arXiv preprint arXiv:1807.10299 (2018). --- Rebuttal Comment 1.1: Title: After response Comment: I am very sorry for the late reply. First, I want to thank the authors for their detailed clarifications, which make me more confident in evaluating this paper. My feeling is that the empirical analysis could be further strengthened, but the overall contribution of this paper is original and interesting. I am raising my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score. We greatly appreciate it. We will bolster the analysis of empirical results and include the rebuttal content in our final submission, encompassing insights on $\mathcal{L}^{IB}$, clarifications for the DPP kernel and option learning outcomes, complexity analysis of ODPP, and quantitative results for the ablation study.
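The Laplacian spectral feature design described in the rebuttal above (state features from the eigenvectors of the state-transition graph Laplacian for the $D$ smallest eigenvalues, with a trajectory feature formed as the sum of its states' features, in the style of structured DPPs) can be sketched as follows. The 6-state chain graph and $D=2$ are hypothetical toy choices, not from the paper:

```python
import numpy as np

# Hypothetical state-transition graph: a 6-state chain (toy example).
n, D = 6, 2
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

deg = A.sum(axis=1)
# Symmetric normalized graph Laplacian: I - D^{-1/2} A D^{-1/2}
L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))

# Eigenvectors for the D smallest eigenvalues serve as state features.
vals, vecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
state_features = vecs[:, :D]     # shape (n, D): one D-dim feature per state

# Structured-DPP style trajectory feature: sum of its states' features.
trajectory = [0, 1, 2, 3]
traj_feature = state_features[trajectory].sum(axis=0)
```

For a connected graph the smallest eigenvalue of the normalized Laplacian is 0, and nearby states on the graph get similar feature vectors, which is the connectivity intuition mentioned in the rebuttal.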
Rebuttal 1: Rebuttal: ## Regarding the complexity of ODPP: The learning target of ODPP is an intra-option policy $\pi_{\theta}(a|s,c)$ conditioned on the option choice $c$. As in Section 3.3, this policy is learned with an Actor-Critic algorithm for which the Q-function is defined as Eq. (12). This Q-function contains variational and DPP-based objectives. Compared with previous variational option discovery algorithms (e.g., DIAYN, VIC, VALOR), we additionally need to (a) sample landmark states from each trajectory and (b) calculate the DPP-related terms: $f(\cdot),\ g(\cdot),\ h(\cdot)$. For (a), we adopt a fast greedy MAP inference algorithm for DPPs [1]. As mentioned in Appendix C.2, it takes $\mathcal{O}(S^2N)$ time to sample $S$ landmark states from an option trajectory of length $N$. In our setting, $N=50,\ S=10$, so the process can be done in real-time. For (b), we need to build the DPP kernel matrices, and then compute $f(\cdot),\ g(\cdot),\ h(\cdot)$ based on the eigenvalues of the corresponding kernel matrix as in Eq. (6)-(8). The time complexity of the eigendecomposition is $\mathcal{O}(N^3)$, where $N$ is the size of the matrix. For the state kernel matrix, $N$ is the number of states in an option trajectory (i.e., the option horizon), which we set to 50. For the trajectory kernel matrix, $N$ corresponds to the number of trajectories collected in each training iteration, which is set to 100. Thus, $f(\cdot),\ g(\cdot),\ h(\cdot)$ can be computed in real-time. To build the kernel matrix, we need a feature vector for each state. As introduced in Appendix C.3, the feature vector is the output of a pre-trained neural network which takes the state as input. The training of this feature function is based on state transitions in the replay buffer and only needs to be done once or twice in the whole option discovery process, which takes under 30 minutes.
To sum up, compared with previous variational methods, we additionally introduce three DPP terms to explicitly model the option diversity and coverage, whose involvement only slightly increases the time complexity. ## Regarding the scalability of ODPP: ODPP can indeed be adapted to more intricate setups encompassing longer option horizons, a greater number of skills, or visual domains. We elaborate on the scalability of ODPP as follows and will include these discussions and numerical examples in the revised paper. (a) The skill horizon is constrained by the MAP inference and eigendecomposition operations previously described. Given their time complexity, the skill horizon could readily be expanded from 50 to 100 or even 500. This augmentation would necessitate an additional $\mathcal{O}(10^{-3})$ or $\mathcal{O}(10^{-1})$ seconds per training iteration, compared with previous variational methods. These estimations are based on computations on a machine with a single Intel i7 CPU and four GeForce RTX 2060 GPUs. Note that a skill horizon larger than 100 is rarely necessary, as employing a skill with an excessively long horizon may compromise flexibility in decision-making. (b) Compared with variational methods, our algorithm does not introduce extra limitations on the number of learned skills. Moreover, in Figure 5(c), we show that even when learning a large number of options at the same time (as many as 60), we can still get options with high quality (mean) and diversity (standard deviation), both of which increase during the training process. (c) ODPP needs to learn the Laplacian feature embeddings. For visual domains, this process can incorporate a pretrained CNN model as a feature extractor, which serves to convert visual input into feature vectors. Subsequently, the original algorithm can be applied. Applications in visual domains pose a common challenge for all option discovery algorithms and present an exciting avenue for future research.
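The greedy MAP inference used for landmark sampling in the discussion above can be sketched as follows. This is a naive reference version that recomputes determinants from scratch at every step (the paper uses the much faster incremental $\mathcal{O}(S^2N)$ algorithm of Chen et al.); the toy kernel below is hypothetical:

```python
import numpy as np

def greedy_dpp_map(L, S):
    """Naive greedy MAP inference for a DPP: repeatedly add the item that
    most increases det(L_W). Recomputes each determinant from scratch;
    an incremental Cholesky update makes this O(S^2 N) in practice."""
    n = L.shape[0]
    selected = []
    for _ in range(S):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            W = selected + [i]
            gain = np.linalg.det(L[np.ix_(W, W)])
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

# Toy kernel: items 0 and 1 have identical features, item 2 is orthogonal.
B = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
L = B @ B.T
picks = greedy_dpp_map(L, 2)  # greedy avoids picking the duplicate pair
```

On this toy kernel the duplicate pair {0, 1} has determinant 0, so the greedy selection always mixes in the orthogonal item, which illustrates why MAP inference yields diverse landmark states.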
## Regarding new results: As requested by the reviewers, new empirical results are provided in the uploaded PDF, including quantitative ablation study results and comparisons with more advanced baselines on OpenAI Gym. ## References: [1] Chen, Laming, Guoxin Zhang, and Eric Zhou. "Fast greedy MAP inference for determinantal point process to improve recommendation diversity." Advances in Neural Information Processing Systems 31 (2018). Pdf: /pdf/bdf762140802db977840521cf193ae4168c6fd5e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes and evaluates an approach to option discovery in reinforcement learning. The aim is to autonomously identify a set of options that are diverse and that give good coverage of the state space. This aim is achieved by using the Determinantal Point Process (DPP). The proposed approach is evaluated in a range of domains. Strengths: The proposed approach is intuitive and principled. While the underlying notions of diversity and coverage have been used earlier in the literature for option discovery, their combined optimization through the Determinantal Point Process is novel and returns better results than related existing methods. The experimental evaluation is extensive and varied, showing not only learning curves but also option trajectories. Weaknesses: Only minor comments: An analysis of computational complexity is provided in the supplementary material. It would be useful to see a short summary of that in the main paper. In the learning curves, it would be useful to plot ceiling performance. Text in Figure 1d is too small. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation. We will fix the issues that you mentioned during the final submission stage, including moving analysis of the computational complexity to the main text, highlighting the ceiling performance in the learning curves, and adjusting the text size in Figure 1(d).
On Consistent Bayesian Inference from Synthetic Data
Reject
Summary: The authors consider the use of synthetic data $X^{sync}$ created from a model $p(X^{sync}|Z, I_S)$ in a further Bayesian analysis where the analyst has access to $p(Q|X^{sync})$ and to $p(X^{sync}|Z)$, where $Q$ is the parameter. The paper is based on equality (4), $p(Q|Z) = \int p(Q|Z, X^*)p(X^*|Z)dX^*$, where $X^*$ essentially represents $X^{sync}$ and $Z$ is either the real data or a differentially private version of the data, and the idea is to replace $\int p(Q|Z, X^*)p(X^*|Z)dX^*$ by $p_n(Q) = \int p(Q|X^*)p(X^*|Z)dX^*$, which is fully accessible by the Bayesian, and then show that the latter is close to the former. To do that the authors assume that $p(Q|Z,X^*_n)$ and $p(Q|X^*_n)$ are both close in total variation to the same sequence of distributions, say $D_n$, in probability when $X^*_n$ follows $p(X^*_n|Z,Q_0)$, and that, given $Q$, $X^*$ and $Z$ are independent. Then the authors treat two toy examples, a Gaussian and a logistic regression example, and run some simulations to illustrate. In the above presentation I am not mentioning the fact that the Bayesian models can be different from the generating model, which is treated but quickly pushed aside by assuming that this is not a problem. Strengths: The paper is well motivated and it is an important problem. If the results are correct then the paper is relevant and interesting. Weaknesses: I am not sure the results are correct. From the presentation I don't understand the authors' eq (4), or rather their comment which says that $p(Q|X^*, Z)$ is different from $p(Q|Z)$. The reason is that the authors do not explain what the generating model for $X^*$ is. (In their paper the authors sometimes use $X^{sync}$ and sometimes $X^*$ as if they were the same, so I gather that they represent the same thing, but in the DAG of fig 1, they are not the same at all.) The $X^* = X^{sync}$ is distributed from $p(X^*|Z)$, which does not depend on $Q$. Hence, unless the authors clarify this point, their results are not valid.
The authors consider two toy examples but in neither of them do they check that the theoretical setup considered before is valid. For instance, in the Gaussian example the generating model for the synthetic data is $X^* \sim p( \cdot | X) = \int p(\cdot | \mu) \pi(\mu| X) d\mu$, i.e. the posterior predictive density. The relation (4) writes as $\pi(\mu|X) = \int p(\mu | X, x^*)p(x^*|X)dx^*$, but the model $p(\mu | X, x^*)$ is not defined. The authors seem to consider that $x^*, X \mid \mu$ are iid, but this is not possible because $\mu$ is unknown, and it does not correspond to their Gaussian example. There are a number of other results which seem dubious to me. See below. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: 1. In condition 3.2 the authors write "for all $Q$" but do not mention $Z$, while the distribution depends on $Z$. Is the condition almost sure in $Z$? In probability? The same is true for condition 3.6. 2. Lemma 3.3: What does it mean "hold for the downstream analysis for all $Q_0$"? Condition (1) says given $Q$. Are $Q$ and $Q_0$ the same? 3. Eq. (17) says that they have the same mean and variance, but not that they have the same limiting distribution. Why don't the authors verify the assumptions for this toy example? Surely if these assumptions don't hold for that one they will never hold. 4. In the supplement, equation (204): in my version of Asymptotic Statistics there is no corollary 2.3, but a continuous mapping theorem. I imagine that the authors are referring to the Lehmann-Scheffé theorem, which states that if the sequence of probability densities $f_n$ converges pointwise to a probability density $f$ then it converges in $L_1$. However, here the sequence is also random and the convergence is pointwise almost surely, and I don't see the argument which allows one to glue all these sets (for each $Q$) of probability 1 to apply Lehmann-Scheffé. In other words, the sets of probability 1 may differ from one $Q$ to another. 5.
Minor comments: The authors recall fairly trivial probability results, which they should instead quote and possibly recall in the supplement, freeing this space to better explain their setup. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 1 poor Contribution: 3 good Limitations: The authors are conscious of some of the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I am not sure the results are correct. From the presentation I don't understand the authors' eq (4), or rather their comment which says that $p(Q | X^*, Z)$ is different from $p(Q | Z)$. The reason is that the authors do not explain what the generating model for $X^*$ is. (In their paper the authors sometimes use $X^{sync}$ and sometimes $X^*$ as if they were the same, so I gather that they represent the same thing, but in the DAG of fig 1, they are not the same at all.) > The $X^* = X^{sync}$ is distributed from $p(X^* | Z)$, which does not depend on $Q$. Hence, unless the authors clarify this point, their results are not valid. $X^*$ and $X^{Syn}$ are not the same thing, as we note on line 133. $X^*$ is a random variable representing a hypothetical dataset that could be obtained if more data was collected, so its generating model given $\theta$ is the same as for the real data $X$. The synthetic data $X^{Syn}$ is a sample from the posterior predictive distribution $p(X^* | Z) = \int p(X^* | \theta)p(\theta | Z)d \theta$. This distinction between $X^*$ in the posterior predictive distribution and a sample of the posterior predictive is not limited to synthetic data: it is present in all Bayesian analyses where the posterior predictive is involved. The distinction leads to the Bayesian network in Figure 1, where $X^*$ is not a child of $Z$, but $X^{Syn}$ is. The interpretation of $X^*$ as a hypothetical real dataset makes it clear why $p(Q | X^*, Z) \neq p(Q | Z)$, as the former conditions on more information than the latter. We will add additional clarification of this to Section 3, and will clarify some parts where $X^*$ was referred to as a synthetic dataset. > The authors consider two toy examples but in neither of them do they check that the theoretical setup considered before is valid.
For instance, in the Gaussian example the generating model for the synthetic data is $X^* \sim p(\cdot | X) = \int p(\cdot | \mu)\pi(\mu | X) d\mu$, i.e. the posterior predictive density. The relation (4) writes as $\pi(\mu | X) = \int p(\mu | X, x^*)p(x^*|X)d x^*$, but the model $p(\mu | X, x^*)$ is not defined. The authors seem to consider that $x^*, X \mid \mu$ are iid, but this is not possible because $\mu$ is unknown, and it does not correspond to their Gaussian example. Because $X^*$ represents a hypothetical real dataset, $p(\mu | X, X^*)$ is simply the posterior when concatenating $X$ and $X^*$ together into a single dataset. While $X^{Syn}$ and $X$ are obviously not independent given $\mu$, $X^*$ and $X$ are independent given $\mu$. We don't see why $\mu$ being unknown would affect these independencies. > 1. In condition 3.2 the authors write "for all $Q$" but do not mention $Z$, while the distribution depends on $Z$. Is the condition almost sure in $Z$? In probability? The same is true for condition 3.6. In both cases, the condition is for the specific value of $Z$ which is observed. We will add this to the condition statements. > 2. Lemma 3.3: What does it mean "hold for the downstream analysis for all $Q_0$"? Condition (1) says given $Q$. Are $Q$ and $Q_0$ the same? $Q_0$ refers to Theorem 2.2 and Condition A.4. We will reorder the words of the statement to make it less confusing. > 3. Eq. (17) says that they have the same mean and variance, but not that they have the same limiting distribution. Why don't the authors verify the assumptions for this toy example? Surely if these assumptions don't hold for that one they will never hold. We can verify the assumptions of Lemma 3.3 for this example. The only assumptions that are not obvious are (2-4) in Condition A.4. As the Gaussian likelihood has a score function, it is differentiable in quadratic mean (van der Vaart 1998), so (2) holds.
The Fisher information is straightforward to calculate from its definition, and is $\frac{1}{\hat{\sigma}^2_k} \neq 0$ in this case, so (3) holds. For (4), we can set $\phi_n$ to reject (output 1) when $|\frac{\bar{X}_n - \mu_0}{\sigma}| \geq \frac{1}{2}\beta$ and accept (output 0) otherwise. We also note that there is ample literature on the assumptions of the Bernstein-von Mises theorem, and our additional assumptions in Lemma 3.3 are much easier to check, which we discuss on lines 197-200. We can also see that $\mu^*$ has a Gaussian distribution, as the other distributions involved are Gaussian, which we will add to the paper. > 4. in the supplement equation (204): in my version of Asymptotic Statistics there is no corollary 2.3 but a continuous mapping theorem. I imagine that the authors are referring to the Lehmann-Scheffé theorem [...]. However here the sequence is also random and the convergence is pointwise almost surely and I don't see the argument which allows to glue all these sets (for each $Q$) of probability 1 to apply Lehmann-Scheffé. [...] We are indeed referring to a special case of the Lehmann-Scheffé theorem, which appears as Corollary 2.30 in van der Vaart (1998). It states that if $X_n$ and $X$ are random vectors with densities $p_n$ and $p$ with respect to a measure $\mu$, and if $p_n \to p$ pointwise $\mu$-almost everywhere, then $X_n$ converges to $X$ in total variation, meaning that $\lim_{n\to \infty} \mathrm{TV}(X_n, X) = 0$. Our argument then is the following: for any fixed $X^*_{i,n}$ such that (201) holds, (204) will hold due to the aforementioned corollary, as the sequence of densities is not random with fixed $X^*_{i,n}$. When $X^*_{i,n} \sim p(X^*_n|Z)$, (201) holds almost surely, so the previous argument yields (204) almost surely, which is what we claim. > 5. 
Minor comments: The authors recall in the paper fairly trivial probability results, which they should quote and recall possibly in the supplement, and free this space to better explain their setup. If this is referring to Section 2, we think the material there is important for the rest of the paper, especially for readers who are not experts on Bayesian inference or probability theory. --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors responded to a few of your questions/concerns. Can you take a look and say whether your perspective on the submission has changed? In particular, the authors say that $X^*$ and $X^{sync}$ are not equal. Did their response/clarification on this change your mind?
Summary: The paper works on performing consistent Bayesian inference from synthetic data under DP. The authors propose a solution that involves mixing posterior samples from multiple large synthetic datasets, proving that this technique converges to the posterior of downstream analysis under specific conditions. This was established through experimentation involving non-private Gaussian mean estimation and DP logistic regression. Strengths: The paper offers a unique and engaging exploration of Bayesian Inference in the context of Synthetic Data, providing a fresh perspective in a field predominantly characterized by frequentist analysis. Weaknesses: See questions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The motivation behind the research is not explicitly stated. Could you clarify the unique benefits that Bayesian Inference offers in this context? How does it enhance the study or application beyond the capabilities of other methodologies (frequentist)? The rationale for using Synthetic Data, specifically Synthetic Data without DP, is also vague. This choice seems to offer no additional solid protection under this setting. When applying DP, why choose to release Synthetic Data instead of the DP summary directly? The paper's primary theoretical contribution is not evidently defined. The claim that the distribution of Synthetic Data can be arbitrarily close to the original distribution as the sample size 'n' approaches infinity appears to be a trivial expectation. Could you elaborate on this aspect more? There has been prior discussion on the topic of Bayesian Inference from Synthetic Data, for example, as seen in reference [1]. Could you specify what new insights or advancements your study brings to the table, beyond the contributions of these previous works? [1] Wilde, H., Jewson, J., Vollmer, S., & Holmes, C. (2021, March). Foundations of Bayesian learning from synthetic data. 
In International Conference on Artificial Intelligence and Statistics (pp. 541-549). PMLR. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: While the study explores the concept of Synthetic Data, its impetus is not distinctly articulated, leading to ambiguity regarding the problem the authors aim to address. The theoretical contribution appears to be weak. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The motivation behind the research is not explicitly stated. Could you clarify the unique benefits that Bayesian Inference offers in this context? How does it enhance the study or application beyond the capabilities of other methodologies (frequentist)? Bayesian inference is widely used in numerous statistical applications because of the flexibility it allows for statistical modelling (multilevel models, partial pooling, incorporating prior knowledge). A popular Bayesian regression modelling package, brms, for example, has been cited more than 5500 times (Google Scholar) since its publication in 2017, with more than 1150 citations in 2023 by early August. In theory, our results could be used to easily translate any of these applications and many more to use (private) synthetic data. > The rationale for using Synthetic Data, specifically Synthetic Data without DP, is also vague. This choice seems to offer no additional solid protection under this setting. When applying DP, why choose to release Synthetic Data instead of the DP summary directly? We fully agree that DP is required for synthetic data to provide strong privacy protection. We included the non-DP case because our theory covers both cases in almost the same way, and this could be of independent theoretical interest. Releasing synthetic data instead of the summary makes the job of the analyst much easier, as they can directly reuse their existing analysis methods and code by just using synthetic data instead of real data. Using the summary directly would require the analyst to develop a new model based on observing the summary, which could take significant effort. > The paper's primary theoretical contribution is not evidently defined. The claim that the distribution of Synthetic Data can be arbitrarily close to the original distribution as the sample size 'n' approaches infinity appears to be a trivial expectation. Could you elaborate on this aspect more? 
Our primary theoretical contribution is studying whether multiple synthetic datasets could be used for consistent downstream Bayesian inference, finding that they can be, by mixing the downstream analysis posterior samples from multiple large synthetic datasets, and proving that the distribution obtained from this converges to the desired distribution under our assumptions. The size of the real dataset is fixed in our theory; only the synthetic dataset's size grows to infinity. While the result may seem trivial in retrospect, the requirement of increasing synthetic data sizes was a surprise to us, as that is not required in the frequentist setting. Furthermore, the formal proof of convergence is certainly a non-trivial undertaking. > There has been prior discussion on the topic of Bayesian Inference from Synthetic Data, for example, as seen in reference [1]. Could you specify what new insights or advancements your study brings to the table, beyond the contributions of these previous works? As we mention in the Related Work section, the paper by Wilde et al. (2021) calibrates a posterior from synthetic data using public data, which would lead to a significantly weaker privacy model than our fully DP model. They also target a generalised variant of the posterior, which could make applying their method harder. Our method targets the standard notion of posterior, and uses the multiple large synthetic datasets for calibration instead, so public data is not needed. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I still have a few concerns: 1: The authors assert that "Releasing synthetic data instead of the summary ... reuse their existing analysis methods and code". However, in the context of mechanisms based on data perturbation for Differential Privacy, conventional methods often require specialized treatment, such as debiasing or correction. 
The notion of directly applying existing analysis methods to synthetic data while disregarding the underlying data raises concerns. This approach may seem straightforward, but it could potentially lead to problems. Take, for instance, the task of computing the sample median under DP. Well-established techniques like those outlined in [1] have been developed. Nevertheless, as far as my understanding goes, achieving similar accuracy under DP with synthetic data insertion poses challenges. For the sample median task, the idea of "reusing the existing analysis method" might suggest generating a synthetic dataset and applying the sample median as the "existing analysis method." However, the results may not align meaningfully with those achieved in [1]. The examples presented by the authors rely on strong assumptions (such as Gaussian models). Moreover, in Gaussian settings, utilizing noised parameters instead of synthetic data could be more convenient. (Generating synthetic data from parameters is easy. Sending the parameters is easier than sending synthetic data.) 2: If I grasp the concept accurately, the authors contend that both the sizes of synthetic data and the number of synthetic datasets need to approach infinity, a requirement not present in the frequentist setting. This condition appears to be a notable disadvantage. Is there an underlying lower bound that demonstrates the inevitability? Additionally, why should we opt for the Bayesian setting if this challenge doesn't manifest in the frequentist context? [1] Smith, A. (2011, June). Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the forty-third annual ACM symposium on Theory of computing (pp. 813-822). --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal. 
> However, in the context of mechanisms based on data perturbation for Differential Privacy, conventional methods often require specialized treatment, such as debiasing or correction. The notion of directly applying existing analysis methods to synthetic data while disregarding the underlying data raises concerns. The point of our theory is to provide this kind of correction to many types of downstream analyses at the same time, while making the correction easy to use, as the non-DP analysis method can be reused. > Take, for instance, the task of computing the sample median under DP. Well-established techniques like those outlined in [1] have been developed. Nevertheless, as far as my understanding goes, achieving similar accuracy under DP with synthetic data insertion poses challenges. For the sample median task, the idea of "reusing the existing analysis method" might suggest generating a synthetic dataset and applying the sample median as the "existing analysis method." However, the results may not align meaningfully with those achieved in [1]. We agree that using a tailored DP method should beat using DP synthetic data and doing the downstream analysis on the synthetic data, for any single analysis task. However, once the allocated privacy budget has been exhausted with tailored DP analyses, the original data has to be thrown away, never to be used again, which severely limits the practical applicability of such methods. Synthetic data allows an arbitrary number of analyses to be done on the synthetic data due to the post-processing immunity of DP. It also cannot be taken for granted that a tailored DP method always beats synthetic data + non-DP method. In our UCI Adult experiment, mixing the synthetic data posteriors clearly beat the DPVI which doesn't use synthetic data, and the alternative method DP-GLM was not able to fully converge. > The examples presented by the authors rely on strong assumptions (such as Gaussian models). 
Moreover, in Gaussian settings, utilizing noised parameters instead of synthetic data could be more convenient. (Generating synthetic data from parameters is easy. Sending the parameters is easier than sending synthetic data.) The Gaussian example is only meant to serve as the simplest possible example, where the analytical tractability of the setting allows checking various properties of the mixture of synthetic datasets analytically, such as the effect of uncongeniality. Our theory applies to much more complex settings, where the downstream method may not have sufficient statistics that could be published instead of the synthetic data. In addition, just publishing sufficient statistics would limit the analyses that can be done, while synthetic data allows arbitrary analyses. > 2: If I grasp the concept accurately, the authors contend that both the sizes of synthetic data and the number of synthetic datasets need to approach infinity, a requirement not present in the frequentist setting. This condition appears to be a notable disadvantage. Is there an underlying lower bound that demonstrates the inevitability? We are not aware of any such lower bound, but it is clear from all our experiments that having synthetic datasets of the same size as the real data leads to overestimating posterior variances. We were able to derive an additional variance correction for the Gaussian mean estimation which allows approximating the effect of large synthetic datasets in Supplemental Section C.4, so it is possible to get around the requirement in at least that case. > Additionally, why should we opt for the Bayesian setting if this challenge doesn't manifest in the frequentist context? Bayesian inference has the advantages over frequentist inference we mentioned in our rebuttal (multilevel models, partial pooling, incorporating prior knowledge). 
As Bayesian inference is such a widely used paradigm, providing methods to use it with synthetic data and allowing practitioners to see what the tradeoffs are is useful, even if Bayesian inference doesn't end up being the right choice for every setting. Ultimately the aim of our paper is to increase theoretical understanding of what is and what is not possible for Bayesian inference with synthetic data. This understanding will help researchers make better choices between different approaches and develop new, even better ones. We believe that results showing what is not possible can be extremely useful in this sense. --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
Summary: Inspired by Bayesian approaches for performing multiple imputation of missing data, this paper investigates the applicability of similar strategies for the analysis of synthetic data. Namely, the paper proposes inferring the downstream posterior of a Bayesian analysis by: generating multiple synthetic datasets; inferring the analysis posterior for each synthetic dataset; and mixing the posteriors together. (Interestingly, the paper finds that, contrary to the missing data imputation context, in the synthetic data case this strategy requires the synthetic datasets to be larger than the original dataset.) The paper provides theory showing that under the regularity conditions of the Bernstein-von Mises theorem (augmented by the additional conditions presented in Lemma 3.3), and assuming the congenial conditions in Definition 3.1, then the proposed strategy will approximate the data provider posterior distribution as the number of synthetic datasets and the synthetic dataset sizes increase. (The paper also proves a convergence rate result under stronger assumptions.) The method is evaluated using two simple examples: (i) non-private univariate Gaussian mean estimation (when the variance is assumed to be known); and (ii) differentially private logistic regression. Strengths: This is an interesting paper. It addresses an important topic with an approach that appears to be novel and sound. Weaknesses: One limitation of the proposed approach appears to be its reliance on the congeniality assumption (which we should not expect to hold in general). While the paper uses a simple example to illustrate that the method was still able to recover the data provider’s posterior when congeniality was violated, the paper needs to provide more extensive evidence of the robustness of the proposed approach w.r.t. 
violations of this assumption (as, in practice, it seems that the usefulness of the proposed approach for data analysis will depend on how robust the method is to violations of congeniality). More specifically, the paper shows that for the toy problem of Gaussian mean estimation (with known variance) the mixture of posteriors converges to the data provider’s posterior even when the analyst’s variance is different from the data provider’s variance (right panel of Figure 2). However, for this example we have that the posterior distribution for the mean is already Gaussian in the finite sample setting to begin with. Providing additional examples where the posterior distribution of the quantity of interest is not Gaussian in the finite sample setting, but where the mixture of posteriors approximates the data provider’s posterior when congeniality is violated would provide more convincing illustrative examples. Perhaps, one simple example is the problem of Gaussian variance (or precision) estimation with known means. In this case, the paper could assess the robustness w.r.t. congeniality violations by choosing different means for the data provider and data analyst. The paper should provide additional examples along these lines. The paper might also want to include some discussion about some practically important settings where the Bernstein-von Mises theorem does not hold, and where the proposed approach might not be applicable (e.g., for models where the number of parameters increases with the sample size). Other minor suggestions: Line 69: change “which makes method their more” to “which makes their method more” Line 302: change “To recover the analyst’s posterior …” to “To recover the data provider’s posterior …” Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See suggestions above. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the paper addresses well the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > One limitation of the proposed approach appears to be its reliance on the congeniality assumption (which we should not expect to hold in general). While the paper uses a simple example to illustrate that the method was still able to recover the data provider’s posterior when congeniality was violated, the paper needs to provide more extensive evidence of the robustness of the proposed approach w.r.t. violations of this assumption (as, in practice, it seems that the usefulness of the proposed approach for data analysis will depend on how robust the method is to violations of congeniality). > More specifically, the paper shows that for the toy problem of Gaussian mean estimation (with known variance) the mixture of posteriors converges to the data provider’s posterior even when the analyst’s variance is different from the data provider’s variance (right panel of Figure 2). However, for this example we have that the posterior distribution for the mean is already Gaussian in the finite sample setting to begin with. Providing additional examples where the posterior distribution of the quantity of interest is not Gaussian in the finite sample setting, but where the mixture of posteriors approximates the data provider’s posterior when congeniality is violated would provide more convincing illustrative examples. Perhaps, one simple example is the problem of Gaussian variance (or precision) estimation with known means. In this case, the paper could assess the robustness w.r.t. congeniality violations by choosing different means for the data provider and data analyst. The paper should provide additional examples along these lines. We worked through the suggested example of Gaussian variance estimation with known mean. Specifically, the model assumes that the data $X = (x_1, \dotsc, x_n)$ is generated from a Gaussian distribution with some known mean $\mu_k$, and the task is to estimate the variance $\sigma^2$ of the Gaussian. 
In this example, the synthetic data provider knows the correct mean, but the analyst may use an incorrect known mean. It turns out that the mixture of synthetic data posteriors does not recover the data provider's posterior in this case. Specifically, the mixture's mean in the limit of infinite synthetic data is larger than the data provider's mean by the square of the difference between the known means of the parties: intuitively, for $x \sim \mathcal{N}(\mu_k, \sigma^2)$ and an analyst's assumed mean $\mu_A$, $\mathbb{E}[(x - \mu_A)^2] = \sigma^2 + (\mu_k - \mu_A)^2$. Interestingly, the shape and variance of the mixture empirically appear to converge to the data provider's posterior, so only the means are different, as seen in Figure R4 of the attached file. However, we haven't had time to verify this mathematically. While this example shows that the mixture of synthetic data posteriors is not always robust to congeniality violations, the experiment with Adult data presented in the general response provides evidence that congeniality violations may not be an issue in practice, as the mixture of synthetic data posteriors was still much closer to the real data non-DP posterior than the baseline DPVI. In any case, our main contribution is the theory studying how consistent Bayesian inference could be done from multiple synthetic datasets, which includes finding that congeniality is even an issue in this setting, and treating the congenial case. The uncongenial case is an important direction for future work, and we think that our paper can serve as a good starting point for it. > The paper might also want to include some discussion about some practically important settings where the Bernstein-von Mises theorem does not hold, and where the proposed approach might not be applicable (e.g., for models where the number of parameters increases with the sample size). 
We will add some examples to the limitations section where the Bernstein-von Mises theorem may not hold, including models with increasing numbers of parameters, infinite-dimensional models, and models with support that heavily depends on the parameters. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful and detailed responses to all my questions (i.e., robustness to violations of the congeniality assumption and examples where the Bernstein-von Mises theorem does not hold). I appreciate your response regarding violations of the congeniality assumption for the Gaussian variance example. The failure of the method on this simple example decreases its practical appeal. Also, while the real data illustration provides some evidence that these violations might not be an issue in practice, this is again just one example, and more extensive illustrations would be needed to assess this point. (So, I suggest the authors include a fairly nuanced discussion about the robustness of their method w.r.t. violations of this assumption in the final version of the paper.) That being said, I also feel the authors make a fair point when they argue that the paper’s main contribution is the theory. In this regard, I agree the paper already provides enough contributions for a first publication and represents a first step towards more complicated settings that can be addressed in future work. Overall, I am still leaning towards the acceptance of the paper (but might change my mind depending on the outcome of these final discussions with other reviewers). --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal. We are planning to include the Gaussian variance example in the revised version, with extended discussion on congeniality. --- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided detailed responses to your questions and comments. 
Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
Summary: The paper studies Bayesian inference based on synthetic datasets generated in a DP and non-DP setting. The paper suggests a specific sampling approach for downstream Bayesian inference using synthetic DP and non-DP datasets. It contributes theoretical results on the convergence of the inferred posterior (from the synthetic dataset) to the true posterior, showing that (under certain assumptions) it converges as the number of synthetic datasets and the size of the datasets increase. Additionally, a convergence rate is derived. Experimental results are provided for non-DP Bayesian mean inference and DP Bayesian logistic regression showing that the inference approach works, along with examples of the effect of parameters influencing the convergence (including number of observations, samples and level of congeniality). Strengths: - Very timely and interesting topic; I enjoyed learning about the specific Bayesian+DP setting. - The theory is mostly well presented in the main paper (see suggestion/questions below). The narrative is relatively easy to follow. - The theory appears sound. I have not found any obvious issues; however I would need to rely on other reviewers (and perhaps later the community as a whole) to validate the many proofs in the supplementary. Weaknesses: The following are questions and comments; not necessarily weaknesses per se: - The balance between theory and experiments is generally very good for my taste, but I feel the experimental part (in the main paper) let the theory part down a bit. The initial experiments focus on intuition and basic insights which I strongly support; however once the basics have been presented it would have been helpful with an experiment which covers many more scenarios providing summaries of the performance (using TV, coverage, means/modes and variance as metrics), along the most relevant dimensions such as number of observations, level of - ... 
it would also have been interesting with a more realistic example (high dimensional) using a more complicated model to better motivate the paper. Have the authors validated the results on such an example? - Figure 4 (right): It is not clear to me why the non-DP posterior does not manage to center its mode closer to the true parameter, is this an effect of the prior being centered on the true parameter combined with relatively few observations (the prior is not specified in the main text as far as I can tell?) - or other things? - Figure 1: I am slightly confused by the graphical model, probably because the nature and role of $\theta$ is never really explained in detail. I hope the authors can clarify this (perhaps along with a detailed explanation of the generative model in general)? For completeness, I would suggest including $I_a$ and $I_s$ in the figure as well. Overall, I am generally positive about the paper but the experimental part misses an opportunity to convince me. I will opt for a borderline score until I get a chance to see the other reviews and the authors' response. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Included above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Included above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The balance between theory and experiments is generally very good for my taste, but I feel the experimental part (in the main paper) let the theory part down a bit. The initial experiments focus on intuition and basic insights which I strongly support; however once the basics have been presented it would have been helpful with an experiment which covers many more scenarios providing summaries of the performance (using TV, coverage, means/modes and variance as metrics), along the most relevant dimensions such as number of observations, level of We have added a plot of the total variation distance between the mixture of synthetic data posteriors and the target posterior for different numbers and sizes of synthetic datasets in the toy data experiment. See the general response for more details. We will add more of the suggested experiments in the final version if space allows. > ... it would also have been interesting with a more realistic example (high dimensional) using a more complicated model to better motivate the paper. Have the authors validated the results on such an example? We have added an experiment on the UCI Adult dataset. See the general response for details. > Figure 4 (right): It is not clear to me why the non-DP posterior does not manage to center its mode closer to the true parameter, is this an effect of the prior being centered on the true parameter combined with relatively few observations (the prior is not specified in the main text as far as I can tell?) - or other things? The downstream prior for the logistic regression example was indeed missing from the paper. It's a centered Gaussian with standard deviation $\sqrt{10}$ with two independent components. We've added a mention of it to Supplemental Section C.5. The distance of the non-DP posterior from the true parameters in the toy data experiment, shown in Figure 4, is simply due to randomness in sampling the relatively small number of datapoints. 
Frequentist logistic regression (implemented by Statsmodels) gives almost identical coefficients to the mean of the non-DP posterior when run with the same input dataset. > Figure 1: I am slightly confused by the graphical model, probably because the nature and role of $\theta$ is never really explained in detail. I hope the authors can clarify this (perhaps along with a detailed explanation of the generative model in general)? For completeness, I would suggest including $I_a$ and $I_s$ in the figure as well. $\theta$ is the parameter(s) of the data generating model used by the data provider, so it appears in the posterior predictive $p(X^* | Z) = \int p(X^* | \theta)p(\theta | Z) d\theta$ that the data provider uses to sample the synthetic data. We will clarify this in the paper. $I_S$ and $I_A$ affect most of the nodes in the network, so adding the required edges would clutter the network. We will add a note to the caption that the whole network is conditioned on either one of them. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my questions (with an additional experiment and metrics) and what seems like a detailed rebuttal to other reviewers. I am struggling with my assessment of this paper, but I still find it interesting and have not seen substantiated arguments against accepting it, so I'll keep my score for now. For now, I note that the authors have made commitments to update the paper in several places; I feel it would be helpful with a concise list/summary of changes so we (the reviewers) can get an overview of the (key) updates suggested. Surprising results are the most interesting, and it would probably be helpful to the discussion if the authors could elaborate on the comment to reviewer 8977 "…the requirement of increasing synthetic data sizes was a surprise to us as that is not required in the frequentist setting". --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal. 
We have made a summary of the paper updates as a general comment at the top. We found the requirement of increasing synthetic dataset sizes surprising, as it is not required in the frequentist setting studied by Räisä et al. (2023), or the Bayesian inference from missing data setting detailed in Supplemental Section F of our paper, which served as an inspiration for our work. When we started working on the problem, we did not see any reason why the synthetic dataset size should have such a different effect from the other two settings. Even now, after writing the paper, the only indications of this difference we see are writing out Equation (4) and considering when $p(Q \mid Z, X^*, I_A)$ can be replaced with $p(Q \mid X^*, I_A)$, and the experiments which confirm this empirically.

--- Rebuttal 2: Title: what did you think of the authors' response? Comment: The authors have provided detailed responses to your questions and comments. Please revise the text and score of your review to reflect how their responses have changed your perspective on their submission, and please acknowledge that you have read the authors' carefully written response.
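The role of the synthetic dataset size discussed above can be illustrated with a minimal, self-contained sketch in the spirit of the non-private Gaussian toy example (this is our own reconstruction, not the authors' code; the model, constants, and function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate toy model: x ~ N(theta, SIGMA2) with known variance,
# prior theta ~ N(0, PRIOR_VAR).
SIGMA2, PRIOR_VAR = 1.0, 100.0

def posterior(data):
    """Mean and variance of p(theta | data) under the conjugate model."""
    var = 1.0 / (1.0 / PRIOR_VAR + len(data) / SIGMA2)
    return var * data.sum() / SIGMA2, var

# Data provider: real data Z, never released.
theta_true = 2.0
real = rng.normal(theta_true, np.sqrt(SIGMA2), size=100)
mu_z, var_z = posterior(real)

# Release m synthetic datasets from the posterior predictive:
# draw theta_j ~ p(theta | Z), then X*_j ~ p(X | theta_j), with n_syn >> len(Z).
m, n_syn = 50, 1000
syn_sets = [rng.normal(rng.normal(mu_z, np.sqrt(var_z)), np.sqrt(SIGMA2), size=n_syn)
            for _ in range(m)]

# Analyst: mix the per-dataset posteriors by pooling equally many samples from each.
mix = np.concatenate([rng.normal(mu_j, np.sqrt(var_j), size=2000)
                      for mu_j, var_j in map(posterior, syn_sets)])

# The mixture mean lands close to the real-data posterior mean mu_z; its spread
# is slightly inflated by the per-dataset posterior width, which shrinks as
# n_syn grows.
print(mix.mean(), mix.std())
```

Lowering `n_syn` to, say, 10 visibly widens the mixture relative to $p(\theta \mid Z)$, which matches the plateau behaviour described for the total variation distances: with a large `n_syn` each per-dataset posterior concentrates near its sampled $\theta_j$, so the mixture approximates the real-data posterior.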
Rebuttal 1: Rebuttal: ## Motivation and Contribution

Several reviewers wrote that the paper's contribution and motivation are unclear. Our motivation was investigating whether multiple synthetic datasets could be used for consistent downstream Bayesian inference when the real data is not available due to privacy concerns. Our main contributions are finding that this is possible by mixing the posteriors from multiple large synthetic datasets, and rigorously proving that this converges to the desired posterior under our assumptions. We note the Bayesian viewpoint on synthetic data use has very recently received more attention in the context of downstream prediction tasks. van Breugel et al. (2023) independently proposed aggregating downstream predictions from multiple synthetic datasets, and empirically observed that this improves generalisation performance and uncertainty quantification. We will add discussion on this to Related Work.

## Additional Experiments

Several reviewers asked for more experiments, especially on real data. We have conducted an experiment on the UCI Adult dataset, in the same setting Räisä et al. (2023) used to evaluate NAPSU-MQ, which is the algorithm we used to generate synthetic data in both this experiment and the toy data experiment. Specifically, synthetic data is generated under DP from a subset of the columns in the real data, with the downstream task being logistic regression on a further subset of the columns in the synthetic data. In our experiment, the logistic regression is Bayesian, and the posteriors from multiple synthetic datasets are mixed together. The ideal target posterior is intractable in this setting, so we compare against a non-DP posterior from the real data, and the DP variational inference (DPVI) algorithm of Jälkö et al. (2017). We also tried DP-GLM, which was used in the toy data experiment, but were not able to get useful posteriors out of it. We have included a subset of the results in the attached file.
In Figure R1, we plot the posteriors from one run of the experiment with $\epsilon=1$. The mixture of synthetic data posteriors $\bar{p}_n(Q)$ is fairly close to the real data non-DP posterior, with the exception of two coefficients that correspond to the races with the smallest number of people in the data. This is caused by the fact that NAPSU-MQ adds the same amount of noise to all categories, so the signal-to-noise ratio is smaller for underrepresented groups. The posteriors from DPVI are much less accurate. In Figure R2, we plot credible interval coverages from 20 repeats with $\epsilon=1$. The mean of the non-DP Laplace approximation is considered the true value for the coverage. $\bar{p}_n(Q)$ has much better coverage than DPVI. The small mismatch of coverage of $\bar{p}_n(Q)$ is likely due to the fact that running NAPSU-MQ with all possible queries would be computationally intractable in this setting, so the synthetic data has to lose some information. One reviewer asked for additional metrics, so we plotted the total variation distances between the mixture of synthetic data posteriors and the target posterior for different numbers and sizes of synthetic datasets in the toy data logistic regression experiment. These are shown in Figure R3 for $\epsilon = 1$. We computed the total variation distances separately for the 1D marginals, as computing the required integral over the 2D joint distribution took too long. The results in the top row panels show that as the size of the synthetic dataset increases, the total variation distance initially decreases at a steady rate, but stops decreasing at some point due to the finite number of synthetic datasets. As the number of synthetic datasets increases, this plateau moves further, and swapping the roles of the number and size of the synthetic datasets shows that adding more synthetic datasets also decreases the total variation distance, which is seen on the bottom row panels. 
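The marginal total variation distances described above can be computed by straightforward numerical integration on a grid; the following is a hedged sketch of that computation (illustrative Gaussian densities stand in for the actual posterior marginals, and the helper names are ours, not the paper's):

```python
import numpy as np

def tv_1d(pdf_p, pdf_q, lo, hi, num=20001):
    """TV(P, Q) = 0.5 * integral of |p(x) - q(x)| dx, via a Riemann sum on a grid."""
    xs = np.linspace(lo, hi, num)
    return 0.5 * np.abs(pdf_p(xs) - pdf_q(xs)).sum() * (xs[1] - xs[0])

def gauss_pdf(mu, sigma):
    """Density of N(mu, sigma^2), used here as a stand-in for a 1D marginal."""
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Identical marginals give 0; for unit-variance Gaussians with means d apart,
# the exact value is 2 * Phi(d / 2) - 1 (about 0.866 for d = 3).
print(tv_1d(gauss_pdf(0, 1), gauss_pdf(0, 1), -10, 10))
print(tv_1d(gauss_pdf(0, 1), gauss_pdf(3, 1), -10, 10))
```

Restricting to 1D marginals keeps the grid one-dimensional, which is why this is cheap where the corresponding 2D joint integral is not.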
We also plotted these with $\epsilon = 0.5$ and $\epsilon = 0.1$, and will include them in the paper. ### References - B. van Breugel, Z. Qian and M. van der Schaar. "Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data" ICML 2023 - J. Jälkö, O. Dikmen and A. Honkela. "Differentially Private Variational Inference for Non-conjugate Models" UAI 2017 Pdf: /pdf/c93447d7b2c6f6d4588efefddbc58cf812f480e4.pdf
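The credible-interval coverage evaluation described above (Figure R2) amounts to checking, across repeats, whether each run's central credible interval contains the reference value. A minimal sketch under illustrative assumptions (a synthetic, well-calibrated setup with Gaussian posteriors, not the experiment's actual posteriors or sample counts):

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage(sample_sets, true_value, level=0.95):
    """Fraction of runs whose central credible interval at `level` contains true_value."""
    tail = (1.0 - level) / 2.0
    hits = [np.quantile(s, tail) <= true_value <= np.quantile(s, 1.0 - tail)
            for s in sample_sets]
    return float(np.mean(hits))

# Calibrated toy check: each run's posterior is N(c_j, 1), where the center
# c_j ~ N(true_value, 1) mimics a noisy posterior-mean estimate; the empirical
# coverage of the 95% interval should then sit close to the nominal 95%.
runs = [rng.normal(rng.normal(0.0, 1.0), 1.0, size=5000) for _ in range(400)]
print(coverage(runs, 0.0, 0.95))
```

Miscalibrated posteriors (e.g. too narrow, as when the synthetic data has lost information) would show up as coverage systematically below the nominal level.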
NeurIPS_2023_submissions_huggingface
2023
Summary: This work solves an interesting task: inferring the downstream analysis posterior using synthetic data. The work proves that the Bernstein-von Mises theorem applies, so the method converges to the true posterior as the number of synthetic datasets grows. The experiments cover two examples, i.e. non-private univariate Gaussian mean estimation and differentially private Bayesian logistic regression. Strengths: 1. The work tackles an interesting task, which is inferring the downstream analysis posterior using synthetic data. 2. The paper is well-written and presented. 3. The code is provided, so it will be helpful for follow-up work. Weaknesses: 1. Since synthetic data is generated by models which are trained using real data, it is not clear why synthetic data can improve consistent Bayesian inference. I think the paper needs more discussion about differences between the real data and synthetic data. 2. Synthetic data is a big topic. In the work, for me, it is not clear which synthetic data methods are used and how the synthetic data method is trained using real data. 3. The applications are missing. Is it possible to extend the proposed method or theory to some kind of real application? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Since synthetic data is generated by models which are trained using real data, it is not clear why synthetic data can improve consistent Bayesian inference. I think the paper needs more discussion about differences between the real data and synthetic data.

The synthetic data does not improve the downstream Bayesian inference over using the real data, so the real data should be used if it is available. The paper examines what can be done when the real data is not available due to privacy concerns, but synthetic data is available. We will clarify this in the paper. Proper privacy protection requires DP synthetic data, which is our main focus, but we have included the non-private synthetic data setting as the same theory applies there as well.

> Synthetic data is a big topic. In the work, for me, it is not clear which synthetic data methods are used and how the synthetic data method is trained using real data.

Our main contribution is the theoretical result, which works for any synthetic data generation method capable of generating from the posterior predictive distribution, which is targeted for example by van Breugel et al. (ICML 2023). In the examples presented in the paper, the synthetic data in the Gaussian example is generated from the posterior predictive, as detailed in equations (14) and (15). In the logistic regression example, we used the NAPSU-MQ algorithm (Räisä et al., AISTATS 2023), which we've briefly described in Supplemental Section A.3.

> The applications are missing. Is it possible to extend the proposed method or theory to some kind of real application?

We have added an experiment on the UCI Adult dataset. See the general response for details. Our theory is not specific to any single synthetic data generation method, so it is potentially applicable to any release of privacy-sensitive synthetic data, such as medical data.

--- Rebuttal Comment 1.1: Comment: After checking the responses, my concerns have been addressed.
So I will increase my score.
Model Shapley: Equitable Model Valuation with Black-box Access
Accept (poster)
Summary: This paper studies an abstract problem of model valuation. They propose a notion of valuing models, called the *Model Shapley Value*, based on the classic notion of Shapley Value from the literature on cooperative games. Additionally, the work proposes an abstraction for models, a *Dirichlet abstraction*, that is meant to enable efficient comparison of different models. They propose a method of learning to valuate models, and evaluate their method empirically. Strengths: The problem of valuing different predictive models is well-motivated. This paper nicely motivates the study of valuation of models, rather than simply data, on a number of axes. The reliance on Shapley value is standard in the ML literature, but also well-motivated, due to the strong theoretical guarantees and study of Shapley. Weaknesses: Major concern: I find the presentation of the paper to be a substantial barrier to understanding and evaluating the work. Concretely, the task at hand of model valuation is never actually described formally. - What is the task? What is the input and output? - What is the trivial solution? - Why is the proposed solution better? Unfortunately, I do not understand the description of Dirichlet abstractions. - Again, what is the point of the abstraction? We are representing a model as a Dirichlet probability distribution? Why? - What is $\mathbf{M}_i(x)$? Specifically, what is $x$ here? Is this the induced distribution on predictions given a randomly sampled input $x_j$? - Why can this be viewed as a Dirichlet distribution? I see the citation, but it feels like a substantial enough point that it should be justified and motivated in the text of this paper. - "$\mathbb{Q}_i$ encodes the predictive accuracy and certainty of $\mathbf{M}_i$ through a theoretical connection..." << Is this a formal statement? How does this work? How is it justified?
My lack of understanding of the basic building blocks behind this paper makes it difficult for me to evaluate the theory or experiments appropriately. I've spent a decent amount of time trying to understand, and it is still not clear to me what is happening. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Many of my questions are listed above. - What is the task at hand? - What is the key goal of this paper? - What are the key contributions to achieving this goal? I believe that there is probably something interesting happening in this paper, but I am not able to evaluate due to my lack of clarity on the basic objects studied. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Coej for reviewing our paper and for finding our problem "well-motivated" and the usage of the Shapley value also "well-motivated". We wish to address the questions as follows. W1. > What is the task? What is the key goal of this paper? As pointed out by the reviewer: > This paper nicely motivates the study of valuation of models, __The task or key goal is to study the valuation of models and to propose a suitable method for model valuation__, requiring us to resolve three main technical challenges as in Sec. 1: - (i) how to represent a model (which we call an abstraction) such that it is amenable to valuation? (lines 44-54); - (ii) how to ensure equitability properties are satisfied, so as to ensure the fairness of a model marketplace? (lines 54-60); - (iii) in leveraging the Shapley value to satisfy these equitability properties, a computational challenge arises; how then to resolve this challenge? (lines 61-66). > What are the key contributions ... ? __Resolving the above three challenges constitutes the key contributions, as summarized in lines 67-86 of Sec. 1.__ To elaborate, as the reviewer pointed out, the Dirichlet abstraction is proposed to resolve (i) as a suitable model abstraction that is amenable to valuation. Then, the Shapley value formulation is adopted via designing a specific function $\nu$ (in Equ. (3) in line 195) to satisfy these equitability properties and resolve (ii); and a learning approach, by leveraging the analytic properties of the proposed function $\nu$ and Dirichlet abstractions, is proposed to resolve (iii). > What is the input and output? The formal problem is described in Sec. 3.1. The input is $N$ models $\\{ \mathbf{M}\_1,\ldots , \mathbf{M}\_i, \ldots , \mathbf{M}\_{N} \\}$ with only black-box access.
The desired output consists of __the definition of $\phi\_i$ for each model, which satisfies the equitability properties P1-P4__ (lines 181-184) and __a computationally feasible implementation to obtain $\phi\_i$__. > What is the trivial solution? __A trivial solution is to use the predictive accuracy of a model as its value (lines 47-48)__. > Why is the proposed solution better? __Lines 48-53 highlight why the trivial solution is too reductive and thus misses other key aspects of the model__, such as its predictive certainty. Our proposed solution is shown to be consistent with both the predictive accuracy (Figure 3 left) and certainty (Figure 3 right). Moreover, our proposed solution is extendable to more sophisticated evaluation criteria (Sec. 3.2 and Table 1). W2. > Again, what is the point of the abstraction? __The abstraction is meant to be a formal representation of the model__. Recall that we consider the black-box access setting (motivated in lines 33-38), which means only queries and the model predictions are observed. In this way, without a formal representation, it is difficult to provide a formal treatment of the problem of model valuation. > We are representing a model as a Dirichlet probability distribution? Why? As pointed out by the reviewer, "Additionally, the work proposes an abstraction for models, a Dirichlet abstraction, that is meant to enable efficient comparison of different models." There are two main reasons: (i) __to enable efficient comparison__ by using the closed-form expressions of the Hellinger distance between Dirichlet distributions; (ii) __to provide theoretical analysis__ by leveraging the analytic properties of the Hellinger distance between Dirichlet distributions, as in Proposition 2 and Theorem 1. > What is $\mathbf{M}_i(x)$? A model $\mathbf{M}\_i$ is a mapping, namely $\mathbf{M}\_i: \mathcal{X} \mapsto \triangle(C)$. > Specifically, what is $x$ here?
Is this the induced distribution on predictions given a randomly sampled input $x_j$? $x \in \mathcal{X}$ is a specific feature vector, viewed as a realization of a random variable $X \sim P_X$ whose $\text{supp}(X) = \mathcal{X}$ (and $P_X$ is represented empirically by the task). Then, $\mathbf{M}\_i(X)$ is an induced distribution (by $P_X$) over $\triangle(C)$. > Why can this be viewed as a Dirichlet distribution? ... Because (i) __the support of $\mathbf{M}\_i(X)$ matches _exactly_ that of a Dirichlet distribution__; (ii) from a statistical viewpoint, __the Dirichlet distribution is a suitable modeling choice for the distribution $\mathbf{M}\_i(X)$ [57]__. We will make this point explicit in our revision. > "$\mathbb{Q}_i$ encodes the predictive accuracy and certainty of $\mathbf{M}_i$ through a theoretical connection..." << Is this a formal statement? How does this work? How is it justified? __The formal result is Proposition 2__. The insight is that the cross entropy (CE) loss of a model encodes the predictive accuracy and certainty (see below). The CE loss is used to construct upper and lower bounds for our proposed method (i.e., $\nu$ in Equ. (3)). Hence, our proposed method also encodes the predictive accuracy and certainty. Recall the CE loss of a $C$-dimensional predicted probability vector $\hat{y}$ w.r.t. the one-hot encoded true label $y$: $-\sum_{k=1}^C y_k \times \ln(\hat{y}_k)$. W.l.o.g., assume that $y_{1}=1$ (i.e., the correct class is the first class). - For two predictions $[0.9, 0.1, 0,\ldots,0]$ vs. $[0.1, 0.9, 0, \ldots, 0]$, the CE losses are $0.105$ and $2.30$, respectively. Note that the first prediction is correct while the second is incorrect, and that both predictions are "equally certain". Hence, __a higher predictive accuracy implies a lower CE__. - For two predictions $[0.9, 0.1, 0,\ldots,0]$ vs. $[0.6, 0.4, 0, \ldots, 0]$, the CE losses are $0.105$ and $0.511$, respectively.
Note that both predictions are correct while the first prediction is "more certain". Hence, __a higher predictive certainty implies a lower CE__, if the prediction is correct. We hope our clarifications have addressed your questions and helped improve your opinion of our work. --- Rebuttal Comment 1.1: Title: Receipt of rebuttal Comment: I continue to find the presentation very confusing. I recognize that other reviewers seem to be more confident in their understanding of the paper, so I will downweight my confidence and make my score less extreme. Regardless of outcome, I recommend that the authors work on outlining the problem at hand precisely, formally, and to distinguish between the *problem* (i.e., given models, what are the values of them?) and solution concepts (i.e., equitable Shapley values, using ML to predict model values, etc). *** Based on more time with the manuscript and others' comments, I will update my review. It is now clear to me that there is something interesting happening here, with the representation of the models, and in particular, in connecting the idea of closeness in Hellinger distance with the ability to learn MSVs. For me, however, this took a lot of work to understand. I think there are a few things that were not obvious to me, and would be worth belaboring in the intro or early preliminaries. - I would lay out, a bit more directly, the tasks at hand: (a) Given n models, produce a valuation for each; (b) Given a sample of models, produce a valuation function (i.e., model appraiser) that, on an unseen model, returns a valuation. - I think some additional specificity about what type of "models" and "tasks" you're thinking about would be helpful orientation. For instance, I'm not sure I understand the conflation of "task" and "query set". - Finally, I think the paper is very notationally dense, but it is worth spelling out very clearly the notation and terminology being used, and to define it before you use it.
For instance, Q* is defined inline after it's already used. On a technical level, Theorem 1 is interesting. I think the presentation would improve even more if there were more exposition about how to connect the theoretical guarantees of Theorem 1 to the problem of learning MSVs. In what sense does Theorem 1 provide guarantees for the learning task? In what sense does it give a guiding principle that, in empirical evaluations, turns out to be wise? --- Reply to Comment 1.1.1: Title: Thank the reviewer for the additional time and raising the score Comment: We thank Reviewer Coej for taking the extra time and effort, we really appreciate it! We thank the reviewer for the suggestions on writing and, in our revision, we will - clearly lay out the formal problem statement in terms of the inputs (i.e., $N$ classification models) and desired outputs (i.e., an equitable valuation function $\phi_i$ and a computationally efficient way to obtain it, such as via a learnt appraiser); - include additional descriptions of the models and tasks w.r.t. our problem setting: the models are trained classification models with specified input and output spaces (without constraints on their architectures) and the tasks are the classification problems (with the same input and output specifications as the models) that these models are trained on; - spell out clearly the list of notations (in the Appendix) and ensure that $\mathcal{Q}^*$ is defined before it is formally used. [Regarding the "task" and "query set"]: The **"task"** (mentioned in line 44) **is a conceptual definition used to describe what the model is used for (by the user)**. An example would be the classification of MNIST digits. The **"query set"** (mentioned in line 39) **is meant to be how a task is formally represented**. In the example where the task is classification of MNIST digits, the corresponding query set can be a validation set containing the MNIST digit images and the labels.
Hence, a "task" is the conceptual definition whose formal representation is the "query set", so in our writing we use them interchangeably. Note, however, that in our formal treatment, the query set is made precise as $\mathcal{D}$ (or the class-specific $\mathcal{D}_k$). [Regarding Theorem 1] We thank the reviewer for finding it interesting and for the insightful questions regarding the implications of Theorem 1, and wish to provide the following clarifications. > In what sense does Theorem 1 provide guarantees for the learning task? The main point of Theorem 1 is to show that the **model Shapley value, as a function, is learnable due to its "Lipschitz continuity"**; and because it is learnable, we propose to learn it. To see the Lipschitz continuity of model Shapley as a function, recall that the input to this function is a model $\mathbf{M}\_i$, formally as its Dirichlet abstraction $\mathcal{Q}\_i$, and the corresponding output is its MSV $\phi_i$. Recall that the Lipschitz continuity of a function states that the difference in the outputs is bounded by (a Lipschitz constant times) the difference in the inputs, which is precisely what (the right hand side of the implication of) Theorem 1 aims to provide: $ |\phi\_i - \phi\_{i'}| \leq d\_{\text{H}}(\mathcal{Q}\_i, \mathcal{Q}\_{i'}) $ where _the absolute difference in the MSVs is the difference in outputs_ whilst _the Hellinger distance between the Dirichlet abstractions is the difference in the inputs_. > In what sense does it give a guiding principle that, in empirical evaluations, turns out to be wise? To exploit this Lipschitz continuity, we propose **a learning approach using Gaussian process regression (GPR) as the learner**, where the specific choice of GPR is due to **a uniform regression error bound for Lipschitz continuous functions** [40]. Then, in terms of implementation choice and empirical evaluations, GPR requires a kernel (between inputs) defined w.r.t.
some suitable distance between the inputs, so **the Hellinger distance** presents itself as a very natural choice (lines 890-896 in App. C2), which **proves to be an effective choice**, demonstrated via the low regression errors in Fig. 3. We hope our clarifications help provide additional exposition about how to connect the theoretical guarantees of Theorem 1 to the problem of learning MSVs. We are happy to clarify further questions. Again, we thank Reviewer Coej for the feedback and questions.
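The "closed-form expressions of the Hellinger distance between Dirichlet distributions" invoked in this thread can be sketched concretely. The sketch below uses the standard exponential-family closed form of the Bhattacharyya coefficient of two Dirichlets; it is our own illustration, not the paper's code, and the function names are hypothetical:

```python
from math import exp, lgamma, sqrt

def log_beta(alpha):
    """Log of the multivariate Beta function B(alpha) (requires all alpha > 0)."""
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

def hellinger_dirichlet(alpha, beta):
    """Closed-form Hellinger distance between Dir(alpha) and Dir(beta).

    Bhattacharyya coefficient of two Dirichlets:
        BC = B((alpha + beta) / 2) / sqrt(B(alpha) * B(beta)),
    and H = sqrt(1 - BC); computed in log space for numerical stability.
    """
    mid = [(a + b) / 2.0 for a, b in zip(alpha, beta)]
    log_bc = log_beta(mid) - 0.5 * (log_beta(alpha) + log_beta(beta))
    return sqrt(max(0.0, 1.0 - exp(log_bc)))

# Identical abstractions are at distance 0; the distance is symmetric and
# grows as the parameters move apart.
print(hellinger_dirichlet([2.0, 3.0, 4.0], [2.0, 3.0, 4.0]))  # 0.0
print(hellinger_dirichlet([2.0, 3.0, 4.0], [4.0, 3.0, 2.0]))
```

Because each evaluation costs only a handful of `lgamma` calls, a GPR kernel built on this distance stays cheap even for many model abstractions, which is the efficiency point made above.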
Summary: The authors represent the model's predictive accuracy and certainty with Dirichlet abstractions and formalize model Shapley values, which measure the value of models to a given task defined by a query set. They evaluate their work on MNIST, CIFAR-10, DrugRe, and MedNIST datasets. Strengths: - The authors lay out an interesting problem, with clear background and supporting statements. - The authors formulate a solid, well-thought-out solution to the problem. - The theoretical statements are sound, and the results of the empirical analyses are clearly explained. - The paper is well structured and written, thus facilitating a good read. Weaknesses: P4: Diminished marginal utility - While it might be the case that the marginal utility of duplicate models reduces, the model sellers might negatively affect the marketplace with untruthful model presentations. Say, for example, two models (A, B) each have values 0.5; then if one model seller decides to fraudulently duplicate model A, although the value depreciates, a high value might be assigned collectively to A (2/3). Query set - The authors say they have black-box access to the models. This is somewhat okay with image data, where one doesn't need explicit knowledge of the key features. For example, if I'm interested in testing the value of cat/dog classification models, I will bring cat/dog images in the query set. However, for things like loan worthiness, with black-box access, how does one construct the query set? - What are the upper and lower bounds on the query set size before one can reverse-engineer the model? P3 - Multiple complementary tasks might be a fair way to measure the value of models with the P3 assumption. How about if these tasks are tradeoffs of each other? Technical Quality: 3 good Clarity: 3 good Questions for Authors: For the rebuttal, the authors should respond to all the questions highlighted in the weaknesses subsection. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not mention any limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer bude for taking the time to review our paper, and for appreciating our studied problem ("lay out an interesting problem"), proposed approach ("a solid, well-thought-out solution"), presented results ("theoretical statements are sound, and the results of the empirical analyses are clearly explained") and our writing ("paper is well structured and written, thus facilitating a good read"). We wish to address the questions as follows. W1. > P4: Diminished marginal utility __Our proposed approach can be adapted to address this issue relatively easily__, by substituting our proposed $\nu$ in Equ. (3) into the variant of the Shapley value [Theorem 4.5, 21], which importantly continues to satisfy the properties P1, P2 and P3 [21]. However, we wish to highlight that (the robustness to) such duplication is beyond the scope of this work and we will include this discussion in our revision. W2. > However, for things like loan worthiness, with black box access, how does one construct the query set? __Formally, there is no difference between such different types of data__, as we have also considered non-image data, specifically tabular data (i.e., KDD99) and text-based data (i.e., DrugRe) in our experiments. The key requirement is that the __input and output specifications of the model have to match the input and output specifications of the collected data in the query set__. We believe this is a reasonable practical requirement because (i) without knowing the correct input of a model, a user cannot use the model for predictions; (ii) the output is $C$-dimensional for a $C$-way classification model. To illustrate, for an image task, the black-box model (which could be logistic regression or a convolutional neural network) takes as input an image and returns as output a predicted probability vector.
Similarly for other forms of data (e.g., tabular), the model takes as input a data point (i.e., a row in the tabular data) and returns as output a predicted probability vector. > What are the upper and lower bounds on the query set size before one can reverse-engineer the model? To the best of our knowledge, no known theoretical bounds are available to completely reverse-engineer the model based _only_ on queries; __our approach is empirically shown to work with query sets which are smaller than an existing reverse-engineering attack that requires additional assumptions _not_ satisfied in our setting__. Indeed, this is a main motivation for adopting the black-box (access) model in our setting, as in lines 36-37. To elaborate, (Oh et al., 2018) requires (i) knowing something about the model (to be reverse-engineered), such as "a diverse set of white-box models ... that are expected to be similar to the target black box at least to a certain extent"; (ii) hundreds to thousands of queries, to construct a model that _only predicts similarly_, __not__ the black-box model itself. In our setting, (i) is not satisfied; moreover, in our experiments, we find that query sets of size as small as $100$ are sufficient, which is much smaller than that used in (Oh et al., 2018). W3. > Multiple complementary tasks might be a fair way to measure the value of models with the P3 assumption. How about if these tasks are tradeoffs of each other? __If the user knows the importance of these tasks, then the user can specify the weights to achieve a desirable tradeoff.__ This is because different users might have different preferences and there is no one-size-fits-all solution. To elaborate, suppose the user _only_ cares about whether the model makes accurate predictions but not at all about adversarial robustness because the user intends to deploy it in a controlled and safe environment; then the task constructed for adversarial robustness is not very relevant to this user.
In contrast, if the user does care about the adversarial robustness (which is often traded off against pure predictive performance), then the user can set the weights between the two tasks according to their preferences. On the other hand, if the tradeoffs of the tasks are unknown, for instance when the objectives are very complex, then uncovering the relationship between tasks (which are potentially tradeoffs of each other) is useful. Specifically, the approach used to obtain the connections in Table 1 is useful here. For instance, predictive accuracy and adversarial robustness are tradeoffs of each other since the objective of adversarial robustness "balances" between the clean and adversarial cross entropy (CE) losses. Upon identifying this theoretical connection, the user can then specify the weight between the two accordingly. We will include this discussion in our revision. *References.* Oh, Seong Joon and Augustin, Max and Schiele, Bernt and Fritz, Mario. Towards Reverse-Engineering Black-Box Neural Networks. In ICLR, 2018. We thank Reviewer bude for the positive feedback and the comments. We hope that our response has clarified the questions, and has helped raise your opinion of our work. --- Rebuttal Comment 1.1: Comment: I thank the authors for investing lots of effort in answering our queries. In general, I find the responses reasonable and agree with most of them. Below are some responses I didn't fully agree with; Query sets and Blackbox access; - I still find the black-box access explanation for tabular datasets insufficient, especially in practical settings. Say, for example, a school admission system. While there is a noisy idea of what schools consider for admission, it's unclear what features are used in admissions model training. Additionally, these vary across schools. So if one wanted to valuate admission models with black-box access, query sets might be flawed or significantly vary across models, which beats the design.
- A classification model (e.g., CV model) focusing on the image background instead of a holistic image is a model limitation, not a characteristic. On the other hand, some tabular models leaving out some features is an actual design, not necessarily a flaw. A tabular model not considering causal relationships and other relationships between features and classes would be a model design flaw. Consideration of different features makes models characteristically different, not mere model variants. - In my opinion, letting 'outsiders' (e.g., not the target school admin, but say another school's admin) know what features are being used in a tabular model makes access "grey" (partial access) and not "black" (no access) or "white" (full access). Task tradeoffs - I agree that different users might have different preferences, and there is no one-size-fits-all solution. However, I think my question was more towards the divergence of this knowledge between model sellers and whether this could encourage gaming. I agree that some tasks might be niche, and therefore if a buyer needed models that do that particular task, they would seek out the valuation of models for that task. However, if one seller had prior knowledge that the buyer cared about task-x+accuracy and the other sellers didn't and focused only on accuracy, this asymmetry of information in the marketplace would cause instability. So it might be better to mention this as a design limitation. I appreciate the responses, and I agree with some of them. Even though I didn't fully agree with everything, I think the authors' responses are generally reasonable and well thought out. I will therefore raise my score to 6. --- Reply to Comment 1.1.1: Title: Thank the reviewer for raising the score Comment: We thank Reviewer bude for the quick response, providing constructive feedback, and raising the score.
We would like to provide some discussion as follows. [On Query sets and Blackbox access] We thank Reviewer bude for raising the questions (e.g., different feature spaces, and the access to the actual feature space) for the practical application scenarios of our studied problem setting. We believe these are interesting and very relevant questions worth careful exploration. In this regard, we wish to point out that our paper aims to take _a first step towards a theoretical framework in which these questions can receive a formal treatment_ and we definitely hope to inspire other works in this direction to tackle these interesting and relevant questions. We will highlight these questions as future work in our revision. [On task tradeoffs] We think that the asymmetry of information does exist in general marketplaces (i.e., not restricted to those for data or machine learning models) where the party (i.e., either buyer or seller) with more information would have an advantage. Nevertheless, we thank the reviewer for the astute observation on a concrete example of this in model marketplaces and will definitely include this point in our revision! Again, we really appreciate the reviewer's effort and thought in reviewing our work and responses, and in providing constructive feedback.
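As an illustration of the P3 (linearity) point in the task-tradeoff discussion above: the Shapley value of a weighted sum of task value functions equals the same weighted sum of the per-task Shapley values, so a user can encode task trade-offs directly as weights. A minimal sketch, assuming toy value functions and weights that are purely illustrative (not the paper's actual tasks):

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average marginal contributions over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: v / len(orders) for p, v in phi.items()}

players = [0, 1, 2]
v_accuracy = lambda S: 0.3 * len(S)        # toy "accuracy" task value
v_robust = lambda S: 0.1 * len(S) ** 2     # toy "robustness" task value
w_acc, w_rob = 0.7, 0.3                    # user-chosen trade-off weights

phi_combined = shapley(players, lambda S: w_acc * v_accuracy(S) + w_rob * v_robust(S))
phi_acc, phi_rob = shapley(players, v_accuracy), shapley(players, v_robust)
phi_sum = {p: w_acc * phi_acc[p] + w_rob * phi_rob[p] for p in players}
# linearity: valuing the weighted game = weighting the per-task valuations
assert all(abs(phi_combined[p] - phi_sum[p]) < 1e-9 for p in players)
```

This is why a user who knows their preferred trade-off can simply pick the weights: the resulting valuation is the correspondingly weighted combination of per-task valuations.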
Summary: This paper introduces a novel approach to model comparison and valuation, utilizing a method known as Dirichlet Abstraction. The fundamental idea is to abstract the predictive behavior of different models via a Dirichlet distribution. This abstraction allows the comparison of diverse models on an equal footing. Three main challenges in model valuation are identified in the paper: developing a suitable abstraction for model valuation relative to a task, satisfying equitability properties in model valuation, and exploiting the equitability properties of the Shapley value in a large marketplace. To address these challenges, the authors introduce an innovative approach to model valuation, utilizing a Dirichlet distribution to approximate a model’s predictive pattern or distribution with respect to a task. This abstraction, termed the Dirichlet abstraction, incorporates both the model's predictive accuracy and certainty. The authors then propose using the model Shapley value as an equitable valuation method, leveraging the Dirichlet abstractions' ability to preserve similarity between models. To address the computational challenge of Shapley value in a large marketplace, the paper suggests a learning approach for training a model appraiser. This model appraiser, trained on a small subset of models and their model Shapley values (MSVs), can predict other models’ MSVs, thus validating model Shapley’s practical feasibility in a large-scale marketplace. The paper's empirical validation, performed on real-world datasets and with up to 150 heterogeneous models, confirms that higher predictive accuracy, more suitable model types, and higher predictive certainty correlate with higher model Shapley values. The authors also provide a use case for identifying a valuable subset of models from the marketplace to construct a more complex learner. 
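The Dirichlet abstraction described in this summary can be illustrated with a small sketch: collect a model's predicted probability vectors on a query set and fit a Dirichlet distribution to them. The paper uses MLE; the simpler moment-matching estimator below is a hedged stand-in, and all names are illustrative.

```python
import numpy as np

def fit_dirichlet_moments(P):
    """Fit Dirichlet parameters to predicted probability vectors (rows of P)
    by moment matching -- a simple stand-in for the MLE used in the paper."""
    m = P.mean(axis=0)                              # per-class mean
    v = P.var(axis=0)                               # per-class variance
    # For Dir(alpha) with s = sum(alpha): Var[x_k] = m_k (1 - m_k) / (s + 1)
    s = np.median(m * (1.0 - m) / np.maximum(v, 1e-12) - 1.0)
    return m * max(s, 1e-6)

rng = np.random.default_rng(0)
alpha_true = np.array([5.0, 2.0, 1.0])
P = rng.dirichlet(alpha_true, size=5000)            # stand-in for model outputs on queries
alpha_hat = fit_dirichlet_moments(P)
print(alpha_hat / alpha_hat.sum())                  # roughly the true mean [0.625, 0.25, 0.125]
```

The fitted parameters capture both where the predictions concentrate (the mean) and how certain they are (the concentration `s`), matching the summary's point that the abstraction encodes predictive accuracy and certainty.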
Strengths: The paper presents a novel approach to model comparison and valuation using Dirichlet abstraction, offering a creative combination of existing statistical concepts applied in a new manner. The concept of utilizing Dirichlet distributions to abstract predictive behaviors of models is quite innovative. Furthermore, the introduction of class-specific Dirichlet abstraction is an original refinement to their method, preserving more detailed information about the model's behavior. The quality of the research seems to be high, given the soundness of the mathematical framework used in the paper and the logical structure of the presented methodology. The authors provide a theoretical background to support their claims and supplement it with visual aids, offering a detailed explanation of how their method works. The decision to use Hellinger distance to measure the dissimilarity between two probability distributions shows a thoughtful choice in ensuring computational efficiency. The ability to compare different types of machine learning models in a unified framework is an important advancement in the field. This has potential implications for model selection in various machine learning tasks, enabling a more flexible, efficient, and fair comparison of models. Moreover, the efficient computation of the Hellinger distance between Dirichlet distributions has implications in other fields where these statistical concepts are used. Overall, the paper presents a valuable addition to the machine learning and statistical literature. Weaknesses: While the paper does present a novel approach to model valuation, there are several areas where it could be improved. The paper assumes the availability of a large query set for accurate MLE estimation. It should address situations where the available data is sparse or not balanced across classes. A detailed discussion or a possible solution for handling these scenarios would strengthen the paper. 
While the authors discuss the trade-off between the abstraction level and the query set size, there is a lack of clear guidelines or a framework for determining the optimal trade-off. This could lead to difficulties in implementing the proposed method in real-world applications. Adding a more formal discussion or a proposed methodology to address this trade-off would be beneficial. The approach assumes homogeneity across models after the Dirichlet abstraction, which might not always be the case in practical scenarios. Some models might have specific characteristics that cannot be captured by the Dirichlet distribution. More discussion on how such cases can be handled would improve the paper. The paper lacks a comparison with other existing model valuation methods. It would be beneficial to include an evaluation of how the proposed method fares against existing methods in terms of accuracy, efficiency, and robustness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does the proposed technique compare with existing model valuation techniques, in terms of accuracy, computational efficiency, and other relevant metrics? It would be beneficial if the authors could provide a comparative study in this regard. How does the method perform when the available data is sparse or unbalanced across classes? Could the authors elaborate on how their approach handles diverse model architectures, especially those with characteristics that cannot be adequately captured by a Dirichlet distribution? Could the authors elaborate on how to choose an optimal trade-off between the level of Dirichlet abstraction and the size of the query set? A concrete set of guidelines or a decision-making framework would be helpful for practitioners seeking to apply this method. Some of the assumptions in the paper (like model homogeneity post-abstraction) might not hold in all practical scenarios.
Could the authors elaborate on the implications if these assumptions are violated and how such situations could be handled? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have not explicitly addressed the potential limitations and broader societal impacts of their work. The authors could provide more discussion on the potential limitations of their work. For instance, they might discuss the sensitivity of the model's valuation to the choice of the task or query set, and the limitations of the Dirichlet abstraction in approximating a model's predictive pattern or distribution. In terms of broader societal impacts, the authors might consider discussing how their proposed model valuation framework could potentially affect the machine learning marketplace. For instance, could this valuation approach inadvertently favor certain types of models or tasks over others, or influence the development and use of machine learning models in ways that could have unintended consequences? They could also consider the potential impacts on data privacy, given that model valuation, unlike data valuation, does not require centralization of potentially private data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
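For reference, the Hellinger distance between two Dirichlet distributions (the dissimilarity measure the paper adopts for its computational efficiency) admits a closed form via the Bhattacharyya coefficient, $H^2 = 1 - B((\alpha+\beta)/2)/\sqrt{B(\alpha)B(\beta)}$, where $B$ is the multivariate Beta function. A minimal sketch, using only the standard library (function names are ours, not the paper's):

```python
import math

def log_beta(alpha):
    """Log of the multivariate Beta function B(alpha)."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def hellinger_dirichlet(alpha, beta):
    """Closed-form Hellinger distance between Dir(alpha) and Dir(beta)."""
    mid = [(a + b) / 2.0 for a, b in zip(alpha, beta)]
    log_bc = log_beta(mid) - 0.5 * (log_beta(alpha) + log_beta(beta))
    bc = math.exp(log_bc)                 # Bhattacharyya coefficient, in (0, 1]
    return math.sqrt(max(0.0, 1.0 - bc))

print(hellinger_dirichlet([2, 3, 4], [2, 3, 4]))    # 0.0: identical distributions
print(hellinger_dirichlet([2, 3, 4], [20, 3, 4]))   # strictly between 0 and 1
```

Working in log space with `math.lgamma` keeps the computation stable even for large concentration parameters, which is presumably what makes the distance cheap to evaluate at scale.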
Rebuttal 1: Rebuttal: We thank Reviewer RTGV for taking the time to review our paper and providing very detailed feedback and comments, especially saying that our work presents "a novel approach", "is quite innovative" and is "a valuable addition to the machine learning and statistical literature". We wish to address the feedback and questions as follows. W1. > ... It should address situations where the available data is sparse or not balanced across classes. A detailed discussion or a possible solution ... . __We address a highly imbalanced case in our experiments, i.e., the KDD99 dataset, using the class-specific Dirichlet abstractions weighted by the size of each class-specific query set__ (lines 345-348). Our results show that this approach is _effective and necessary_: Figure 4 (left), under the non-class-specific approach, is unable to distinguish models trained on different amounts of training data (with a 100-fold difference in size), while Figure 4 (right), under the class-specific approach, is able to do so. We will make this discussion more explicit in our revision. W2. > ... there is a lack of clear guidelines or a framework for determining the optimal trade-off. ... Adding a more formal discussion or a proposed methodology to address this trade-off would be beneficial. __Adopting the highest level of abstraction can already be quite effective__ (shown in our experiments for MNIST, CIFAR-10, and two real-world datasets MedNIST and DrugRe) if the query set is _not highly imbalanced_. For a highly class-imbalanced query set (such as KDD99 in our experiments), __adopting the class-specific Dirichlet abstractions weighted by the size of the class-specific query set is effective__ (lines 345-348). We will include this in our revision and believe that a more extensive and formal framework is a very useful future direction. W3. > ... assumes homogeneity across models after the Dirichlet abstraction, ...
__All models (regardless of architectures) for the same learning task__ (i.e., same feature and label spaces) __can be represented via their corresponding Dirichlet abstractions, which are homogeneous by design, not by assumption__. As an example, the Dirichlet abstractions of a logistic regression and a CNN are both Dirichlet distributions, but with possibly different parameters. > Some models might have specific characteristics that cannot be captured by the Dirichlet distribution. [2] provides a polynomial sample complexity of the query set w.r.t. the error and a model's actual predictive distribution. It is an interesting future direction to study its implications for model valuation. W4. > ... a comparison with other existing model valuation methods __There are limited existing methods applicable for comparison__: The closest work is [55], but it is restricted to binary classification, and thus not applicable to $C$-way classification (lines 230-231 and 370-371). We include an additional discussion on other works from an economics viewpoint (without a machine learning focus) in App. B.2 (lines 844-851). Hence, __we investigate some intuitive baselines, where our proposed approach produces results consistent with these intuitive baselines__ (i.e., predictive accuracy in Figure 3, F$1$ score in Figure 4 right, training data sizes in Figure 4 left). A further discussion is included in App. B.2 (lines 833-843). Moreover, we have also __derived theoretical connections between our approach and some sophisticated criteria such as fairness and robustness__ (Sec. 3.2 and Table 1). Q1. > How does the method perform when the available data is sparse or unbalanced across classes? See response for W1. > ... how their approach handles diverse model architectures, especially those with characteristics that cannot be adequately captured by a Dirichlet distribution? See response for W3. > ...
how to choose an optimal trade-off between the level of Dirichlet abstraction and the size of the query set? See response for W2. > Some of the assumptions ... (like model homogeneity post-abstraction) might not hold ... . Could the authors elaborate on the implications if these assumptions are violated and how such situations could be handled? - On "model homogeneity", see response for W3. - A key assumption (for Theorem 1) is fusion-increases-similarity (lines 272-273). __We specifically derive a sufficient condition for identifying when this assumption holds__ (Prop. 4 in App. A.3) and provide an interpretation. L1. > ... the sensitivity of the model's valuation ..., and the limitations of the Dirichlet abstraction in approximating a model's predictive pattern or distribution. We thank the reviewer for the suggestion and will incorporate this in our revision. L2. > ... could this valuation approach inadvertently favor certain types of models or tasks over others, or influence the development and use of machine learning models in ways that could have unintended consequences? __This is an important motivation to consider a diverse range of evaluation criteria, from the more intuitive predictive accuracy to the more sophisticated ones such as fairness and robustness__. We are happy to provide further elaboration in the discussion, given the character limit of the rebuttal. L3. > They could also consider the potential impacts on data privacy, given that model valuation, unlike data valuation, does not require centralization of potentially private data. The __privacy regulation on data is an important motivation of our work__ (lines 26-30) and is further discussed in App. B.1. We thank the reviewer for the detailed feedback and the suggestions, and will incorporate these discussions in our revision. We hope our response has clarified the questions and helped improve your opinion of our work. We are happy to provide further clarifications or elaboration.
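The class-specific weighting described in the rebuttal above (fit one abstraction per class and weight each class's contribution by the size of its query set) can be sketched as follows. The mean-vector "abstraction" and L1 "distance" are toy stand-ins for the actual Dirichlet abstractions and Hellinger distance, and all names are illustrative:

```python
def fit_abstraction(preds):
    """Toy per-class abstraction: the mean predicted probability vector."""
    n = len(preds)
    return [sum(p[k] for p in preds) / n for k in range(len(preds[0]))]

def distance(qa, qb):
    """Toy dissimilarity between abstractions: L1 distance."""
    return sum(abs(a - b) for a, b in zip(qa, qb))

def weighted_class_distance(by_class_a, by_class_b):
    """Average per-class distances, weighted by class query-set sizes."""
    total = sum(len(p) for p in by_class_a.values())
    return sum((len(pa) / total) * distance(fit_abstraction(pa),
                                            fit_abstraction(by_class_b[c]))
               for c, pa in by_class_a.items())

# imbalanced query set: 90 queries of class 0, only 10 of class 1
a = {0: [[0.9, 0.1]] * 90, 1: [[0.2, 0.8]] * 10}
b = {0: [[0.8, 0.2]] * 90, 1: [[0.2, 0.8]] * 10}
print(weighted_class_distance(a, b))   # ~0.18 = 0.9 * (|0.9-0.8| + |0.1-0.2|)
```

The weighting keeps the dominant class from being drowned out while still letting rare classes contribute in proportion to the evidence available for them.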
--- Rebuttal 2: Title: Gentle reminder for response Comment: We wish to thank Reviewer RTGV for the positive feedback ("novel approach", "quality of research seems to be high") and the questions, and are keen on finding out whether our response has clarified your questions, since the discussion period is coming to an end soon (in less than 15 hours). We really appreciate your feedback and acknowledgement. Thank you.
Summary: The paper considers the problem of assigning a value to an ML model (e.g., in a marketplace with multiple models). The proposed idea is to estimate (via MLE) the Dirichlet abstraction of a model (potentially conditioned on the class) and to compute the Shapley value of a game where the value function is the Hellinger distance between the Dirichlet abstraction of a model that fuses the models in the coalition and an (almost) optimal model. The paper shows that this proposal is well-behaved and allows for learning the Shapley value to reduce the complexity (the number of Shapley values to compute). The paper also shows a number of experiments on standard datasets (MNIST, CIFAR et al.) and compares to other more standard quality metrics. Strengths: The problem of valuing a model with black-box access (i.e., we can access predictions but not the model itself) is interesting because of many considerations that restrict sharing data and model parameters. The paper proposes an original solution in considering the Dirichlet abstraction of the model to compute metrics. This indeed helps because then model fusion as well as the Hellinger distance can be computed easily. In a sense, the framework itself is the main contribution more than the theoretical results (which is totally fine). One of the advantages of the Dirichlet abstraction framework is that it handles multiple classes easily and naturally, as opposed to binary classification, which is much more standard in that type of literature. I find the paper well written and easy to follow (up to the issue of figures for understanding the numerical experiments, which I discuss later). Weaknesses: I had a doubt about the Shapley value: Shapley assumes an axiom of efficiency. Here, $Z$ seems free. So $\phi_i$ will be the Shapley value only for a particular value of $Z$. Does it have any impact? More generally, this should be clarified. For instance, Thm 1 clearly depends on $Z$ and I have not seen that said anywhere.
In the proof, it looks like $Z$ is taken to be $1$; is that the case? Does it correspond to Shapley then? The condition in Prop 4 (App A), which implies that fusion increases similarity, is not too explicit. Is there a nice intuitive interpretation of it? And does it hold (theoretically) for the models used in the experiments, for instance? The notation can be improved in some places. For instance, l. 208, $n$ is the size of a subset that depends on $\mathcal{C}$ whereas $n$ is almost always the size of the full set $N$. Also the fact that $C$ is the dimension of the label whereas $\mathcal{C}$ is a subset of $N$ is confusing. The definition of $\mathbb{Q}^*$ poses some questions. It is in a sense supposed to represent an optimal model, but then some noise is added with range $[0, 0.01]$ to avoid degeneracy in MLE. So it means that theoretically, there could be a better model with a non-zero distance, and hence strictly speaking the Hellinger distance is not decreasing with the model quality. Is it correct? If so, isn't there a nicer way to define $\mathbb{Q}^*$ theoretically? On the learning of the Shapley value: the proposal is to compute the Shapley value for a subset of models (say, 20 out of the 150 models) and to use that to infer the value of others because the Shapley value has some regularity wrt the Dirichlet abstraction. This is nice, but poses two questions: a- is it obvious that one can just compute the Shapley values of a subset of models? Normally, computing the Shapley value requires computing the marginal increment to some subcoalitions (even if not all, using MC). Here, we cannot do it for coalitions that aren't in the subset of 20, so how do we do it? This requires clarification. Perhaps the special form of the game allows that but it needs to be clarified. b- in the experiments (l. 305-312), my understanding is that the paper computes the Shapley value of all 150 models (used as ground-truth), and then uses only a subset of them to infer the others.
That by-passes the issue I mentioned just above... but then it does not allow one to obtain the reduced complexity in practice, so the issue stands completely. I think a better experiment would be to compute the Shapley value of the 20 models using whatever solution the authors came up with to my point a- above (which needs to be clarified), and then use it to infer the SV of the others, comparing it to the case where one can indeed compute all 150 SVs. Figures are really too small. I understand the page limit but it is excessive. Some of them are literally unreadable (like the plot being hidden behind the legend, e.g., Fig 3 left, for MLP). Also in many of them, I could not understand what the x-axis is (mostly because it was not written). UPDATE AFTER REBUTTAL: I raise my score to 7, due to a better understanding, in particular of the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: [Please also see the weaknesses part where I put questions as well (probably the most important); I am putting here whatever I have not already written above.] - The Dirichlet abstraction allows handling multiple classes nicely, but it was not clear to me what would simplify or become more standard if we had binary classification. - I understand why it helps (l. 210-220) with the Shapley computation, but it would be good to better explain why doing the fusion as it is done makes sense from the prediction perspective. - For clarification: the Dirichlet abstraction (e.g., l. 100) is not allowed to depend on x? - Is it possible to extend the framework to assign different weights to different types of errors? - It may be good to add a bit more background on the Dirichlet distribution and Hellinger distance (in App), just enough to help follow the rest of the paper for those who are not familiar with these. - Fig 1: although I understand it doesn't matter for the point the paper makes there, I find that plots 1-2 aren't similar to plots 3-4. Is this normal?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
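The point raised in W5a above — that Monte Carlo permutation sampling can target the Shapley values of a subset of models while the game is still played over the full set — can be sketched as follows. The toy additive value function is purely illustrative; in the paper the value function would instead involve fused Dirichlet abstractions.

```python
import random

def mc_shapley_subset(players, value, subset, n_perm=2000, seed=0):
    """Monte Carlo Shapley estimates for `subset` only, within the full game
    over `players`: sample random orderings of ALL players but accumulate
    marginal contributions just for the models of interest."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in subset}
    for _ in range(n_perm):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value(frozenset(coalition))
        for p in order:
            coalition.add(p)
            cur = value(frozenset(coalition))
            if p in phi:
                phi[p] += cur - prev
            prev = cur
    return {p: v / n_perm for p, v in phi.items()}

players = list(range(8))
value = lambda S: sum(0.1 * (p + 1) for p in S)   # toy additive game
est = mc_shapley_subset(players, value, subset={0, 7})
# additive game => the Shapley value of p is exactly 0.1 * (p + 1)
```

Each sampled permutation still queries the value function on coalitions drawn from all players, so only the bookkeeping (not the game itself) is restricted to the 20-model subset.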
Rebuttal 1: Rebuttal: We thank Reviewer tCDm for taking the time to review and providing such detailed feedback and comments, and for finding our paper "interesting" and "well written". We wish to address the feedback and questions as follows. W1. > So $\phi\_i$ will be the Shapley value only for a particular value of $Z$. Does it have any impact? More generally, this should be clarified. ... Thm 1 clearly depends on $Z$ and I have not seen that said anywhere. __Theorem 1 holds regardless of the value of $Z$__, by identifying the suitable constant $L$ in Lemma 6 in App. A.4, though the bound can become looser for a larger $Z$ (due to a large $L$ in Lemma 6). > ... it looks like $Z$ is taken to be $1$, is that the case? Does it correspond to Shapley then? W.l.o.g., we set $Z=1$ and it does recover the original Shapley value. W2. > The condition in Prop 4 (App A), ... Is there a nice intuitive interpretation of it? __If the "shapes" of $\mathcal{Q}\_i$ and $\mathcal{Q}\_{i'}$ are very different, then fusing each into a common $\mathcal{Q}\_{\mathcal{C}}$ increases the resulting similarity as it "evens out" their difference (lines 732-736).__ This is further elaborated in lines 724-731 and 732-736 of App. A.3. > And does it hold (theoretically) ... ? Our empirical verification of Theorem 1 (which requires Prop. 4) observes that models which are similar do lead to similar MSVs (lines 278-299, Figure 2 and Table 2). Theoretical verification is an interesting future exploration. W3. > notation can be improved in some places We clarify that $N$ denotes an integer, while $[N]$ is the set $\{1,\ldots, N\}$. We thank the reviewer for raising this and will revise our notations. W4. > there could be a better model ... Is it correct? If so, isn't there a nicer way to define theoretically $\mathcal{Q}^\*$? We first highlight that __our theoretical results do _not_ require this definition of $\mathcal{Q}^\*$__.
Then, though there is a possibly better hypothetical model with a non-zero distance, empirically we find our definition effective, possibly because __such hypothetical models are very rare__. Nevertheless, exploring alternative definitions of $\mathcal{Q}^\*$ is an interesting direction. W5. > a- is it obvious that one can just compute the Shapley values of a subset of models? __We are considering a game of $150$ models to only obtain the MSVs of $20$ models__, instead of a game of only $20$ models. This is elaborated further in App. C.2 (lines 881-888). > b- ... my understanding is that the paper computes the Shapley value of all 150 models (used as ground-truth), and then uses only a subset of them to infer the others. This is the correct understanding of the learning approach, which only sees a subset of the ground-truths as the training data. The __remaining unseen subset is used only for evaluation__. > a better experiment would be to compute the Shapley value of the 20 models ... compute all 150 SVs. __This is indeed our experiment setting__. Table 3 shows that the comparison between the predicted MSVs and the obtained ground-truths yields low prediction errors; the saving in computational cost comes from __only needing to obtain MSVs for a subset of models__, as further elaborated in App. C.4 (lines 869-880). W6. > I could not understand what the x-axis is We will increase our figure sizes in our revision, and clarify here: the x-axis (and y-axis) in Fig. 2 is the model index $i$; the x-axis in Fig. 3 (left) is the model type, the x-axis in Fig. 3 (right) is the predictive certainty (see lines 337-339); the x-axis in Fig. 4 is the size of the training data (see its caption, and lines 340-341). Q1. > but it was not clear to me what would simplify or get more standard if we have binary classification. Theoretically, $C>2$ or $C=2$ does _not_ make a significant difference, which is the intended design.
In contrast, [55] has specifically exploited the simpler structure of $C=2$ and thus it seems difficult to extend their method to $C>2$. In practice, a smaller $C$ means fewer class-specific query sets to collect (lines 357-358). Q2. > ... why doing the fusion as it is done makes sense from the prediction perspective. We wish to highlight that the __fused Dirichlet abstraction $\mathcal{Q}\_{\mathcal{C}}$ is _not_ a predictive model itself__ (i.e., it cannot be used to produce predictions by taking queries as input), and it is __primarily designed to be amenable to valuation.__ Q3. > ... the Dirichlet abstraction (e.g., l. 100) is not allowed to depend on $x$? **A Dirichlet abstraction is a Dirichlet distribution, so it does not depend on $x$**. For a model $\mathbf{M}\_i: \mathcal{X} \mapsto \triangle(C)$ and a random variable $X \sim P_X$ with $\text{supp}(X) = \mathcal{X}$ (where $P_X$ represents the task), $\mathbf{M}\_i(X)$ is a distribution over $\triangle(C)$, and this is represented using the Dirichlet abstraction (lines 101-102). Q4. > ... extend the framework to assign different weights to different types of errors? We consider the "types of errors" to be different model evaluation criteria, and a user __can easily linearly combine different model evaluation criteria__ (Sec. 3.2 and Table 1), by exploiting P3 (linearity) and implementing the selected criteria from Table 1. Q5. > ... a bit more background on the Dirichlet distribution and Hellinger distance. App. A.2 includes this background on the Dirichlet distribution and Hellinger distance, and we will incorporate the suggestion in our revision. Q6. > Fig 1: ... I find that plots 1-2 aren't similar to plots 3-4. Is this normal? Yes, plots 1-2 show the probability vectors while plots 3-4 show the densities of the learnt Dirichlet abstractions from these probability vectors.
We thank Reviewer tCDm for the detailed review and questions, and hope our response has helped clarify the questions and raised your opinion of our work. We are happy to provide further elaboration in the discussion (due to the character limit of the rebuttal). --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. It helps understand the paper better. I still find it a pity that we do not have a way to theoretically show that the models used satisfy the assumption of Prop 4, but ok. Overall, I find this to be an interesting contribution and will raise my score to 7. That said, I do see the criticisms made by other reviewers. I do not disagree with them, I simply feel that the paper is acceptable despite those---but of course this is only one option amongst several. --- Reply to Comment 1.1.1: Title: Thank the reviewer for the positive feedback Comment: We thank Reviewer tCDm for the quick response and in particular for appreciating our contribution and raising the score.
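The learning approach discussed in the W5 exchange above — computing exact MSVs for a small training subset of models and training a "model appraiser" to predict the rest from their abstractions — can be sketched with a toy linear appraiser. The synthetic MSV rule and the linear regressor below are illustrative stand-ins, not the paper's actual learner.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_classes = 150, 3
alphas = rng.uniform(0.5, 10.0, size=(n_models, n_classes))    # per-model abstractions
msv = alphas @ np.array([0.2, 0.5, 0.3]) + 0.1                 # pretend ground-truth MSVs

train = slice(0, 20)                                           # appraise only 20 exactly
X_train = np.hstack([alphas[train], np.ones((20, 1))])
w, *_ = np.linalg.lstsq(X_train, msv[train], rcond=None)       # fit the toy appraiser

X_all = np.hstack([alphas, np.ones((n_models, 1))])
pred = X_all @ w                                               # predict the other 130
print(np.max(np.abs(pred - msv)))                              # near zero for this linear toy
```

The computational saving is exactly the one the rebuttal describes: only 20 of the 150 MSVs require the expensive game computation; the remaining 130 come from a cheap forward pass of the appraiser.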
Rebuttal 1: Rebuttal: We thank all the reviewers for taking the time to review our paper and providing the detailed comments and positive feedback: - Our studied problem is interesting and well-motivated (Reviewers tCDm, bude & Coej); - Our approach is novel and creative, and the quality of our research is high (Reviewer RTGV); our solution is solid and well-thought-out, and our theoretical statements are sound (Reviewer bude); our approach using the Shapley value is well-motivated (Reviewer Coej); - Our paper is well structured and written (Reviewers tCDm & bude); - Our paper presents a valuable addition to the machine learning and statistical literature (Reviewer RTGV). In our prepared response to your feedback and questions, with main points summarized below, we have - Provided additional clarifications on some theoretical results: - On the effect of $Z$ in Equ.(2), the condition in Prop. 4, the definition of $\mathcal{Q}^*$ and our proposed learning approach (Reviewer tCDm); - On the homogeneity of Dirichlet abstractions and other assumptions (Reviewer RTGV); - On the implications of P4 and P3 (Reviewer bude); - On the formal problem setting, theoretical motivation and justification of the proposed Dirichlet abstraction, to highlight our main contributions (Reviewer Coej). - Provided discussion from a practical viewpoint: - On the application to the simpler binary classification, and extension to combining different errors (Reviewer tCDm); - On the preparation of the query set (Reviewers RTGV & bude); - On the trade-off between query set size and abstraction level, and dealing with an unbalanced query set with very sparse data for some classes (Reviewer RTGV); - On the bounds of query set size for potential reverse-engineering (Reviewer bude). We thank all the reviewers for reviewing our paper and for their reviews, and hope that our response has clarified your questions and helped raise your opinions of our work.
We are happy to provide further clarifications during the discussion period.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Quantum Bayesian Optimization
Accept (poster)
Summary: The paper studies quantum kernelized bandits or Bayesian optimization (BO). Classically, in every iteration $t=1,2,\ldots,T$, a BO algorithm chooses an arm $x_t$ and then queries the reward function $f$ for a noisy observation $y_t=f(x_t)+\zeta_t$, where $f$ can be non-linear and $\zeta_t$ is a sub-Gaussian noise. The goal is to minimize the cumulative regret $R_T=\sum_{t=1}^{T}[f(x^*)-f(x_t)]$. In the quantum setting, every query to the reward function $f$ at the arm $x_t$ is replaced by a chance to access a quantum oracle or its inverse, which encodes the reward distribution for the arm $x_t$. In addition, bounded noise or noise with bounded variance is considered. The paper introduces the Q-GP-UCB algorithm, which is the first BO algorithm able to achieve a regret bound of $\mathcal{O}(\text{poly}\log T)$, significantly smaller than the classical lower bound of $\Omega(\sqrt{T})$. Strengths: 1. The paper provides the first quantum BO algorithm, which achieves a $\text{poly}\log(T)$ regret, beats the classical lower bound of $\Omega(\sqrt{T})$, and offers more evidence of quantum advantages over classical computers. 2. The result generalizes the previous quantum speedup for multi-armed bandits (MAB) and stochastic linear bandits (SLB) [32]. Besides, the paper improves the regret bound of SLB in [32] by improving the analysis of the confidence ellipsoid. 3. The paper is overall structured very well. The ingredients of the analysis for the Q-GP-UCB algorithm are listed in a readable way, so I can quickly get the ideas behind it. I enjoyed reading this paper very much. Weaknesses: 1. I'm a bit doubtful about the technical contributions of this paper. Apparently, the basic framework of Q-GP-UCB follows from the weighted least squares estimator and the doubling trick in [32]. The key difference from [32] is the design of the weighted GP posterior distribution (see Eq. (3)), which looks very similar to the classical one (see Eq. (1)).
Such a combination, of course, can be regarded as the main technical novelty, but I have no idea whether it raises inherent difficulties in the analysis. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Please explain more about the technical difficulties in analyzing the Q-GP-UCB algorithm. 2. I notice that the authors of [32] showed their Q-LinUCB algorithm has a regret of $\mathcal{O}(\log^{5/2}T)$ for bounded noise, but this paper says the regret is $\mathcal{O}(\log^{3/2}T)$ (see line 350) when citing [32]. Is this because the analysis of [32] is not tight? Please confirm this, since one of the contributions of this paper is an improvement over Q-LinUCB. If you indeed improve the regret from $\mathcal{O}(\log^{5/2}T)$ to $\mathcal{O}(\log^{3/2}T)$, this would increase the contributions of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
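To make the algorithmic setting discussed above concrete, here is a minimal classical GP-UCB sketch in Python. This is illustrative only: the SE kernel lengthscale (0.2), observation noise (0.01), toy reward function, and fixed exploration coefficient $\beta_t = 2$ are all arbitrary choices, and the sketch deliberately omits the paper's observation weighting (Eq. (3)) and the QMC oracle.

```python
import numpy as np

def se_kernel(A, B, ls=0.2):
    # squared exponential kernel matrix between row-sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xq, noise=1e-2):
    # exact (unweighted, Eq. (1)-style) GP posterior mean/variance at Xq
    K = se_kernel(X, X) + noise * np.eye(len(X))
    Ks = se_kernel(X, Xq)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(se_kernel(Xq, Xq) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x[:, 0])          # toy non-linear reward (hypothetical)
arms = rng.uniform(0, 1, size=(50, 1))      # finite discretized arm set
X = arms[:1].copy()
y = f(arms[:1]) + 0.1 * rng.standard_normal(1)
for t in range(30):
    mu, var = gp_posterior(X, y, arms)
    xt = arms[[int(np.argmax(mu + 2.0 * np.sqrt(var)))]]  # UCB arm selection
    X = np.vstack([X, xt])
    y = np.append(y, f(xt) + 0.1 * rng.standard_normal(1))
```

After the initial exploration rounds the UCB rule concentrates queries on high-reward arms, which is the behavior the regret analysis formalizes.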
Rebuttal 1: Rebuttal: We'd like to thank the reviewer for your insightful comments. --- > 1. I'm a bit doubtful about the technical contributions of this paper... Such a combination, of course, can be regarded as the main technical novelty, but the question is, I have no idea whether it raises inherent difficulties in the analysis. > 1. Please explain more about the technical difficulties in analyzing the Q-GP-UCB algorithm. The analyses of our algorithm indeed posed non-trivial difficulties. We have clarified and highlighted our technical novelty (including the difficulties) and contributions in our global response above (point 1). Here we give a brief summary: - The analysis of our weighted information gain (Theorem 1) is non-trivial, and is a novel contribution especially compared to quantum linear bandits [32] whose analysis didn't involve information gain. It may also be **of broader independent interest**. - The proof of our confidence ellipsoid (Theorem 3) also requires non-trivial techniques and insights, such as the recognition to apply the concentration inequality for 1-sub-Gaussian noise. It is also **novel because it is closely tied to our algorithmic design**. - We have used the techniques and insights from the proof of our Theorem 3 to **improve the confidence ellipsoid and hence the regret of quantum linear bandits [32]**, which is another important contribution of ours. - We've also made important empirical contributions by conducting a more realistic AutoML experiment and **an experiment using a real quantum computer**, both of which were not done by previous works on quantum bandits [32]. We also achieved significantly better performances than quantum linear bandits [32] and classical GP-UCB [28]. 
In addition to the individual novel contributions listed above, another major aspect of our novelty and technical difficulty lies in **identifying the different required techniques** (e.g., weighted information gain, concentration for 1-sub-Gaussian noise) and **integrating them into a coherent analytical framework**, which we think is highly non-trivial. Therefore, we think that our work has made important novel technical contributions, which required overcoming non-trivial technical difficulties. Please refer to our global response above (point 1) for more details. --- > 2. I notice that the authors of [32] showed their Q-LinUCB algorithm has a regret of $\mathcal{O}(\log^{5/2}T)$ for bounded noise, but this paper said the regret is $\mathcal{O}(\log^{3/2}T)$ (see line 350) when citing [32]. Is this because the analysis of [32] is not tight? Please confirm it since one of the contributions of this paper is an improvement over Q-LinUCB. If you indeed improve the regret from $\mathcal{O}(\log^{5/2}T)$ to $\mathcal{O}(\log^{3/2}T)$, this would increase the contributions of the paper. If we understand correctly, you are referring to the upper bound on the **expected regret** of Q-LinUCB [32] (reported in Table 1 of [32]), which is indeed of the order $\mathcal{O}(\log^{5/2}T)$. To clarify, in our paper, we have only focused on **high-probability regret** (instead of expected regret), and the high-probability regret of Q-LinUCB [32] is of the order $\mathcal{O}(\log^{3/2}T)$ (Theorem 3 of [32], equation 37) which is consistent with what we have reported in our paper (line 350). The additional factor of $\mathcal{O}(\log T)$ in the expected regret results from the term $\log(\frac{m}{\delta})$ in the high-probability regret, because [32] has set $\delta=\frac{m}{T}$ to convert the high-probability regret to expected regret. 
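Schematically, the high-probability-to-expectation conversion described above can be written as follows (suppressing constants and the exact $d$-dependence; $m$ denotes the number of stages in [32], which is itself polylogarithmic in $T$, so the residual $\delta\cdot\mathcal{O}(T)=\mathcal{O}(m)$ term is dominated):

```latex
\mathbb{E}[R_T]
\;\le\;
\underbrace{\mathcal{O}\!\Big(\log^{3/2} T \cdot \log\tfrac{m}{\delta}\Big)}_{\text{high-probability bound, w.p. } 1-\delta}
\;+\; \delta \cdot \mathcal{O}(T)
\;\overset{\delta = m/T}{=}\;
\mathcal{O}\big(\log^{5/2} T\big),
```

so the extra $\mathcal{O}(\log T)$ factor in the expected regret arises purely from substituting $\delta = m/T$ into the $\log\frac{m}{\delta}$ term.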
In fact, the expected regret of our Q-GP-UCB with the linear kernel can also be easily derived, and it is also of the order $\mathcal{O}(\log^{5/2}T)$ which matches the expected regret of Q-LinUCB [32]. More importantly, **for both high-probability regret and expected regret**, **our Q-GP-UCB (with the linear kernel) indeed improves over Q-LinUCB [32]** by a factor of $\mathcal{O}(\sqrt{d})$ (lines 348-350). --- Thank you again for your comments. We hope our additional clarifications could improve your opinion of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My doubts are well resolved. --- Reply to Comment 1.1.1: Title: Thank You for Your Response Comment: Thank you for your response. We are glad to learn that your doubts are well resolved. We'll also add what we included in our response to the paper after revision to further improve our paper. Thanks again.
Summary: The paper studies the regret attainable for multi-armed bandits with non-linear reward functions when having access to a quantum oracle. For this setting they introduce the Quantum Gaussian Process Upper Confidence Bound (Q-GP-UCB) algorithm that with probability at least $1-\delta$ achieves regret: - $\mathcal{O}((d\log{T})^{3/2}\log{d\log{T}})$ using the linear kernel, - $\mathcal{O}((\log{T})^{3/2\cdot(d+1)}\cdot (d+1) \cdot \log{\log{T}})$ using the squared exponential (SE) kernel, when the noise is bounded and $d$ is the dimensionality of the input space. Similar rates are derived for noise with bounded variance. This notable improvement over the classical fundamental limit is mostly attributed to the use of Quantum Monte Carlo (QMC), and the improvement over the work of Wan et al. (2022) (when instantiated with the linear kernel) is due to the paper's novel and tighter analysis of the confidence ellipsoid. They actually modify the proof of this prior work to attain the same rate as well. Finally, they run experiments using the Qiskit package that demonstrate their algorithm's superiority over the classical variant, as well as the benefits of using the SE kernel in the more practical setting of AutoML. Strengths: - Intuitive and well explained use of staging to manage the growing number of samples fed to the QMC subroutine. - Their weighting technique is very intuitive, and by making the noise 1-sub-Gaussian it allows them to use a self-normalized concentration inequality that improves the confidence ellipsoid and allows for better rates (it also has a nice intuitive meaning, enabling a measure of weighted information gain). - I particularly like that they plug their improved analysis into the work of Wan et al. to make that original algorithm match the rate the authors attain with the linear kernel.
This is a nice way both to showcase the strengthened analysis and to clarify to the community which things are key algorithmic behavior versus analysis artifacts. Weaknesses: - No major novel core algorithmic design / concept definition, i.e., the results are due to elegant combinations of techniques and careful analyses rather than groundbreaking new concepts/approaches. - Minor typo: "encode" -> "encodes" (line 144). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could you expand more on the empirical behavior of Q-GP-UCB in the initial stage? It is a bit hard to see given the large setting of total time-steps. For example, could you comment on when this may be an issue and when not? - Do you have intuition for what would happen with other kernels being used? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - As per above, maybe more limitations regarding trade-offs of the initial phase (and others) could be discussed in the experimental section? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank the reviewer for your valuable comments. --- > Could you expand more on the empirical behavior of Q-GP-UCB in the initial stage? It is a bit hard to see given the large setting of total time-steps. For example, could you comment on when this may be an issue and when not? The empirical behavior of our Q-GP-UCB in the initial stage (i.e., its regrets in the initial stage are relatively larger compared with the classical GP-UCB) is reasonable because our Q-GP-UCB explores a smaller number of unique arms than the classical GP-UCB initially (because we query every selected arm for multiple rounds). However, after the initial exploration, our Q-GP-UCB can quickly start performing reliable exploitation, because the accurate reward observations achieved thanks to our QMC subroutine allow us to quickly learn the reward function and hence find the optimal arm. Of note, the same behavior (i.e., relatively larger regrets in the initial stage) is also observed in [32] regarding the quantum linear bandit algorithm, which is similar to our algorithm using the linear kernel. So, we think that this behavior is not kernel-specific. Regarding your second question, we think this may be an issue when the problem is easy, i.e., when the reward function is easy to optimize (e.g., when the noise is too small or when the required number of iterations to find the optimal arm is too small). Specifically, when the problem is easy, classical BO algorithms such as GP-UCB may also be able to quickly find the optimal arm, which makes it difficult for the advantage of our Q-GP-UCB algorithm to manifest. On the other hand, this is unlikely to be an issue when the problem is relatively difficult, in which case we expect our Q-GP-UCB algorithm to consistently perform better, as we have demonstrated in our experiments. We'll follow your suggestion and add our discussions here to the experimental section.
> Do you have intuition for what would happen with other kernels being used? As you suggested, we have additionally analyzed the regret of our algorithm for the Matern kernel, and included a detailed discussion in our global response above (point 2). Intuitively, for the Matern kernel, our regret bound also improves over the state-of-the-art regret in the classical setting when the reward function $f$ is sufficiently smooth. This in fact brings about an analogy between our work (as the first work on quantum BO) and classical GP-UCB [28]: GP-UCB also requires the function $f$ to be sufficiently smooth in order to achieve a sub-linear regret bound. Also, similar to how GP-UCB was extended by later works, we leave it to future works to further improve our regret bound for the Matern kernel. We'll add the results and discussions about the Matern kernel (more details in the global response above, point 2) to the paper after revision, which we think will further improve the significance of our contributions. --- Thank you again for your feedback. We'll also follow your suggestion to correct the typo, and do a thorough check for other potential typos.
Summary: This paper considers kernelized bandits, also known as Bayesian optimization (BO), under a particular feedback model inspired by quantum computing. Under this model, sampling from the same point N times reduces the noise to the level of $1/N$. That is significantly tighter than the classic setting, where sampling the same point N times reduces the noise to the level of $1/\sqrt{N}$. The paper introduces a UCB-based algorithm that is very similar to mini-META proposed in [6]. The proposed algorithm is referred to as Q-GP-UCB and, in the case of the SE kernel, obtains a polylogarithmic regret bound in the time horizon $T$, which is a significant improvement compared to the $\sqrt{T}$ regret bound in the classic setting. Strengths: The problem is inspired by quantum computing and may be of broader interest. Overall this is an interesting formulation of kernel bandits. Weaknesses: The formulation and analytical techniques are similar to those of [32] in the case of linear bandits. That to some extent limits the novelty and contributions. In terms of complexity, the regret bounds seem to scale as $\tilde{\gamma}_m^{1.5}$, where the number of unique points is bounded by $\tilde{\gamma}_m$ and an additional $\tilde{\gamma}_m^{0.5}$ is contributed by the regret on each unique point, due to the confidence ellipsoid. Theorem 1 bounds this quantity by $\gamma_{T^2}$. For example, in the case of Matern kernels, that leads to a regret bound of $T^{(3d)/(2\nu+d)}$, where $\nu$ is the smoothness of the kernel. This regret bound may be worse than the classic result of $T^{(\nu+d)/(2\nu+d)}$ when $d$ is large. I think this is an indication that the regret bounds presented in this work are sub-optimal and can be further improved. Given that there is no lower bound under this setting, that raises some doubts about the tightness of the analysis. The results should be seen as an initial attempt on the problem.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Considering the example above about the Matern kernel, could the authors comment on the tightness of the bounds? The results seem to be an improvement only in the case of the SE kernel. Why do they not necessarily improve the regret bounds in the case of other kernels? - If the standard mean and variance are used rather than the weighted ones, which step in the proof would fail? The number of unique points or the regret on each unique point? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Although the formulation seems interesting, the results do not seem tight. To some extent they follow the case of linear bandits. When it comes to more general kernels the regret bounds seem to even fail to be sublinear in some cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
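The classical half of the $1/N$-vs-$1/\sqrt{N}$ comparison central to this review is easy to check numerically. The sketch below (with arbitrary values for the true reward, noise scale, and sample sizes) verifies the $1/\sqrt{N}$ averaging rate; the $1/N$ quantum rate relies on the QMC oracle and has no simple classical simulation, so it is not reproduced here.

```python
import numpy as np

# Classical averaging of N i.i.d. noisy reward queries: the error of the
# sample mean shrinks at the 1/sqrt(N) rate.
rng = np.random.default_rng(1)
true_reward, sigma, trials = 0.5, 1.0, 500   # arbitrary toy values

def mean_abs_error(N):
    # average |sample mean - true mean| over many independent trials
    samples = true_reward + sigma * rng.standard_normal((trials, N))
    return float(np.abs(samples.mean(axis=1) - true_reward).mean())

ratio = mean_abs_error(100) / mean_abs_error(10000)
# under the 1/sqrt(N) rate, this ratio should be close to sqrt(10000/100) = 10
```

A quantum oracle achieving the $1/N$ rate would instead yield a ratio near 100 for the same pair of sample sizes, which is the gap the paper's regret bounds exploit.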
Rebuttal 1: Rebuttal: We'd like to thank the reviewer for your constructive feedback. --- > The formulation and analytical techniques are similar to those of [32] in the case of linear bandits. That to some extent limits the novelty and contributions. We have clarified our technical novelty and contributions (especially compared with [32]) in our global response above (point 1). Here we give a brief summary: - The analysis of our weighted information gain (Theorem 1) is non-trivial, and is a novel contribution especially compared to quantum linear bandits [32] whose analysis didn't involve information gain. It may also be **of broader independent interest**. - The proof of our confidence ellipsoid (Theorem 3) also requires non-trivial techniques and insights, such as the recognition to apply the concentration inequality for 1-sub-Gaussian noise. It is also **novel because it is closely tied to our algorithmic design**. - We have used the techniques and insights from the proof of our Theorem 3 to **improve the confidence ellipsoid and hence the regret of quantum linear bandits [32]**, which is another important contribution of ours. - We've also made important empirical contributions by conducting a more realistic AutoML experiment and **an experiment using a real quantum computer**, both of which were not done by previous works on quantum bandits [32]. We also achieved significantly better performances than quantum linear bandits [32] and classical GP-UCB [28]. In addition to the individual novel contributions listed above, another major aspect of our novelty and technical difficulty lies in **identifying the different required techniques** (e.g., weighted information gain, concentration for 1-sub-Gaussian noise) and **integrating them into a coherent analytical framework**, which we think is highly non-trivial. Therefore, we think our work has made important contributions especially compared with quantum linear bandits [32]. 
Please refer to our global response above (point 1) for more details. --- > ... For example, in the case of Matern kernels, that leads to a regret bound of $T^{(3d)/(2\nu+d)}$ where $\nu$ is the smoothness of the kernel. This regret bound may be worse than the classic results of $T^{(\nu+d)/(2\nu+d)}$ when $d$ is large. I think this is an indication that the regret bounds presented in this work are sub-optimal and can be further improved. Given that there is no lower bound under this setting, that raises some doubts about the tightness of the analysis. The results should be seen as some initial attempt on the problem. > ...could the authors comment on the tightness of the bounds? The results seem to be an improvement only in the case of the SE kernel. Why do they not necessarily improve the regret bounds in the case of other kernels? As you suggested, we've added a discussion of our regret bound for the Matern kernel in our global response above (point 2). Below we also give a (self-contained) discussion, with additional discussions on the tightness and improvement (over the classical setting) of our regret bound. For the Matern kernel, our regret bound of $\widetilde{\mathcal{O}}(T^{(3d)/(2\nu+d)})$ also improves over the state-of-the-art regret in the classical setting $\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$ **when the reward function $f$ is sufficiently smooth** (i.e., when $\nu > 2d$). So, even when $d$ is large, we still achieve an improvement if $\nu$ is large enough. We think this smoothness condition for the Matern kernel is reasonable, which can be justified by drawing an analogy between our work (as the first work on quantum BO) and the classical GP-UCB [28]: For the Matern kernel, **the classical GP-UCB** [28] (with a regret bound of $\mathcal{O}(\gamma_T\sqrt{T})=\widetilde{\mathcal{O}}(T^{(\nu+3d/2)/(2\nu+d)})$) **also requires the function $f$ to be sufficiently smooth** (i.e., $\nu>d/2$) to achieve a sub-linear regret.
This requirement for smooth functions was only removed in later works (e.g., [19,25,31]), which used sophisticated algorithmic designs and analyses to improve the regret of classical GP-UCB to $\mathcal{O}(\sqrt{\gamma_T T})=\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$. Therefore, we agree that for the Matern kernel, our regret bound may not be tight. However, given our discussions here, our regret bound for the Matern kernel is still a reasonable and important contribution (analogous to the regret of GP-UCB for the Matern kernel which also requires smooth functions), and we hope our work could also inspire future works to further improve the regret of quantum BO for the Matern kernel (similar to how the regret of GP-UCB was improved by later works). Lastly, we briefly summarize the contributions and **improvements** of our regret bound for different kernels: For the *SE kernel*, our regret bound significantly improves over the classical regret; for the *Matern kernel*, our regret bound improves over the classical regret when the reward function $f$ is sufficiently smooth; for the *linear kernel*, our regret bound improves over that of the quantum linear bandits [32]. --- > If the standard mean and variance are used rather than the weighted ones, which step in the proof would fail? The number of unique points or the regret on each unique point? If the standard GP posterior mean and variance are used instead of the weighted ones, the proof of the number of unique points (i.e., the total number of stages, Sec. 5.2) would not be valid. Specifically, the proof of Equation (25) in Appendix D wouldn't go through, and hence we could no longer derive the upper bound on the total number of stages given in Theorem 2. Therefore, the weighted GP posterior regression (Sec. 4.1) is indispensable for deriving our theoretical results. --- Thank you again for your comments. We hope our additional clarifications could improve your evaluation of our paper.
--- Rebuttal Comment 1.1: Title: Suboptimality of the achieved regret bounds Comment: Thanks for your response. In the case of Matern kernel, the regret bounds proven in this paper are in order of $\mathcal{O} (T^{3d/(2\nu+d)})$. This is worse than the optimal regret bound $\mathcal{O} (T^{(\nu+d)/(2\nu+d)})$ for standard kernel bandits when $\nu<2d$. Even when compared with the suboptimal $\mathcal{O} (T^{(\nu+3d/2)/(2\nu+d)})$ regret bound of GP-UCB, the regret bounds proven in this paper are worse when $\nu<1.5 d$. This is despite the observation that the noise concentrates faster, at a 1/N rate, in the quantum setting, in contrast to the $1/\sqrt{N}$ rate in the classic setting. It thus seems clear that the regret bounds proven in this paper are suboptimal in general. In addition, the reason for this suboptimality is not clear (where the difficulty comes from, what are the best regret bounds we hope for). I suggest the authors make this point clear in the paper to encourage future work on the topic. --- Reply to Comment 1.1.1: Title: Thank You for Your Reply Comment: We agree that our regret bound for the Matern kernel implies that in general, our regret upper bound does not match the (unknown) lower bound and is hence not optimal. We'll revise the paper to make this clear. We'd like to add that the tightness of our regret upper bound is kernel-dependent, and we think that **our gap with the lower bound is much smaller for the SE kernel**. This is in fact also in a similar spirit to the classical GP-UCB [28]: its regret upper bound of $\mathcal{O}(\gamma_T\sqrt{T})$ is suboptimal for both the SE kernel and Matern kernel (compared with the known classical lower bound in [26]). However, for the classical GP-UCB, **the gap between the upper and lower bounds is much smaller for the SE kernel** (i.e., logarithmic gap) than for the Matern kernel (i.e., polynomial gap). 
Similarly, for our Q-GP-UCB, we agree that for the Matern kernel, there is likely a large gap (e.g., polynomial in $T$) between our upper bound and the (unknown) lower bound. However, we think that for the SE kernel, our gap is much smaller. This can also be supported by the fact that for the SE kernel, our regret upper bound is only $\mathcal{O}(\text{poly}\log T)$, which is significantly smaller than the classical regret lower bound of $\Omega(\sqrt{T})$. Therefore, we think that our significantly improved regret bound for the SE kernel over the classical setting, which is our main contribution, is an important step forward for the community. Regarding the difficulty of achieving a tighter regret upper bound, we think the challenge lies in the need to come up with novel (likely more sophisticated) algorithmic designs. This is also analogous to the classical GP-UCB, whose regret bound was improved by later works via more sophisticated algorithmic designs and analyses (e.g., [19,25,31]). These works have improved the regret upper bound of the classical GP-UCB from $\mathcal{O}(\gamma_T\sqrt{T})$ to $\mathcal{O}(\sqrt{\gamma_T T})$, and it is interesting to explore whether the techniques they adopted can also be applied to our algorithm to attain an improvement similar to $\mathcal{O}(\sqrt{\gamma_T})$. As you have also suggested, we'll add the discussions here to the paper after revision, in the hope that our paper could inspire future works aimed at improving our regret upper bound especially for the Matern kernel.
Summary: This paper studies Bayesian optimization with quantum reward oracles where the reward function $f$ lies in an RKHS space with the squared exponential kernel, and at every iteration after an input is selected, we can access a quantum unitary oracle and its inverse that encode the noisy reward distribution. In such a setting, the authors introduce the quantum Gaussian process upper confidence bound (Q-GP-UCB) algorithm, which achieves a regret upper bound of $\mathcal{O}(\text{poly}\log T)$. To do so, they introduce a weighted GP regression and then analyze the growth rate of the weighted information gain. Next, they derive a tight confidence ellipsoid which gives a guarantee of the concentration of the reward function and the weighted GP posterior mean. They also show that their bound on the confidence ellipsoid improves that of the quantum linear UCB (Q-LinUCB) algorithm [32] by a factor of $\sqrt{d}$, where $d$ is the input dimension. Finally, they show the performance of their proposed algorithm over the classical GP-UCB and Q-LinUCB, through a synthetic experiment and an experiment on automated machine learning. Strengths: - The paper is well-organized and easy to read. The arguments and comparisons with related works are clear and well-supported. - The paper introduces the first quantum Bayesian optimization algorithm, which enjoys a regret upper bound of $\mathcal{O}(\text{poly}\log T)$. Weaknesses: However, I am concerned about the novelty of the used techniques in this paper. They seem to be an unsophisticated combination of common techniques from classical Bayesian optimization (e.g., see [9], [28], [30]), the classical bandits [9], and quantum bandits [32]. It follows that the regret upper bound of $\mathcal{O}(\text{poly}\log T)$ for their proposed BO algorithm has the same order as that of [32] which is designed for linear bandits.
The authors claim that the recent quantum works on multi-armed or linear bandits are not able to solve sophisticated real-world problems with non-linear reward functions like their setting. However, from the technical point of view, Bayesian optimization under the setting that the reward function $f$ lies in an RKHS space is in fact a kind of linear bandit problem except that the input space is continuous. Hence, I think that this paper is OK but not good enough from a technical point of view. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The authors seem not to consider the case of the Matern kernel. Is there any improvement in the regret bound with the aid of quantum computing in this case? - It would be interesting if the authors could provide a discussion on the lower bound of the regret bound of BO in the setting of quantum computing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to thank the reviewer for your insightful comments. --- > However, I am concerned about the novelty of the used techniques in this paper... the regret upper bound of $\mathcal{O}(\text{poly}\log T)$ for their proposed BO algorithm has the same order as that of [32] which is designed for linear bandits. We have clarified and highlighted our technical novelty and contributions (especially compared with [32]) in our global response above (point 1). Here we give a brief summary: - The analysis of our weighted information gain (Theorem 1) is non-trivial, and is a novel contribution especially compared to quantum linear bandits [32] whose analysis didn't involve information gain. It may also be **of broader independent interest**. - The proof of our confidence ellipsoid (Theorem 3) also requires non-trivial techniques and insights, such as the recognition to apply the concentration inequality for 1-sub-Gaussian noise. It is also **novel because it is closely tied to our algorithmic design**. - We have used the techniques and insights from the proof of our Theorem 3 to **improve the confidence ellipsoid and hence the regret of quantum linear bandits [32]**, which is another important contribution of ours. - We've also made important empirical contributions by conducting a more realistic AutoML experiment and **an experiment using a real quantum computer**, both of which were not done by previous works on quantum bandits [32]. We also achieved significantly better performances than quantum linear bandits [32] and classical GP-UCB [28]. In addition to the individual novel contributions listed above, another major aspect of our novelty and technical difficulty lies in **identifying the different required techniques** (e.g., weighted information gain, concentration for 1-sub-Gaussian noise) and **integrating them into a coherent analytical framework**, which we think is highly non-trivial. 
Therefore, we think that despite achieving the same regret order of $\mathcal{O}(\text{poly}\log T)$ as quantum linear bandits [32], our work has overcome non-trivial additional challenges and made important novel technical contributions, which go beyond an unsophisticated combination of common techniques. Please refer to our global response above (point 1) for more details. --- > The authors claim that the recent quantum works on multi-armed or linear bandits are not able to solve sophisticated real-world problems with non-linear reward functions like their setting. However, from the technical point of view, Bayesian optimization under the setting that the reward function $f$ lies in an RKHS space is in fact a kind of linear bandit problem except that the input space is continuous. We'd like to clarify that when the reward function $f$ lies in an RKHS space, it is a linear function w.r.t. the RKHS feature mapping $\phi(x)$, but **not a linear function w.r.t. the original input $x$**. In contrast, the quantum linear bandit algorithm [32] does assume that $f$ is a linear function w.r.t. the original input $x$. Therefore, our algorithm is able to model non-linear reward functions, which the quantum linear bandit algorithm [32] is incapable of. Thanks for pointing this out; we'll revise the paper to make this point clearer. --- > The authors seem not to consider the case of the Matern kernel. Is there any improvement in the regret bound with the aid of quantum computing in this case? As you suggested, we have additionally analyzed our algorithm for the Matern kernel (with smoothness parameter $\nu$), and included a detailed discussion in our global response above (point 2). Briefly, for the Matern kernel, our regret bound $\widetilde{\mathcal{O}}(T^{(3d)/(2\nu+d)})$ also improves over the state-of-the-art regret in the classical setting $\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$ when the reward function $f$ is sufficiently smooth (i.e., when $\nu > 2d$). 
We think that this smoothness condition is reasonable for our work (as the first work on quantum BO) because it is analogous to the classical GP-UCB [28]: GP-UCB also requires the function $f$ to be sufficiently smooth (i.e., $\nu>d/2$) in order to achieve a sub-linear regret bound. This requirement of GP-UCB for smooth functions (for the Matern kernel) was only removed by later works (e.g., [19,25,31]) which improved the regret bound of GP-UCB through sophisticated algorithmic designs and analyses; similarly, we also leave it to future works to further improve the regret bound of our quantum BO for the Matern kernel. Thank you for pointing this out. We'll add the results and discussions about the Matern kernel (more details in the global response above, point 2) to the paper after revision. We think that these additional results, given our already significantly improved regret for the SE kernel, will further improve the significance of our contributions. --- > It would be interesting if the authors can provide a discussion on the lower bound of the regret bound of BO in the setting of quantum computing. To the best of our knowledge, deriving a regret lower bound for quantum multi-armed bandits and quantum linear bandits is also still an open problem [32]. We agree that obtaining a regret lower bound in the quantum bandit setting is an interesting and important research problem, and we aim to explore this in future works. --- Thank you again for your feedback. We hope our additional clarifications could improve your opinion of our paper. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks to the authors to address my concerns, especially the regret bound for the Matern kernel. I have no further questions.
Rebuttal 1: Rebuttal: We'd like to sincerely thank all reviewers for your constructive feedback and for appreciating our contributions. For example, Reviewer Hp2A and Reviewer 5kMD have acknowledged that our paper "introduces the first quantum BO algorithm", Reviewer pGF9 has commented that our work "may be of broader interest", Reviewer n5g2 "particularly likes" the way we "showcase our **strengthened analysis**" and acknowledges our "**elegant combinations of techniques and careful analyses**", Reviewer 5kMD has commented that our work "offers more evidence of quantum advantages over classical computers" and "**enjoys reading this paper very much**". We are deeply encouraged by these comments. We have provided individual responses to your questions below, and have also highlighted two points here. --- # 1. Technical novelty and contributions. Here we clarify and highlight some of our major technical novelty and contributions: - Weighted information gain (Theorem 1): Our proof of Theorem 1, despite following the analysis of [30], is non-trivial since it requires carefully tracking the impact of the weights $1/\epsilon_s^2$ throughout the analysis (lines 257-258). Our proof of weighted information gain, as well as its adoption in the analysis of quantum bandits, is also novel to the best of our knowledge, especially when compared with the analysis of quantum linear bandits [32] which did not involve information gain at all. More importantly, our Theorem 1 may be **of broader independent interest** for future works using weighted kernel ridge regression (Sec. 4.1). - Confidence ellipsoid (Theorem 3): Despite following the analysis of [9], our proof of Theorem 3 also requires non-trivial analyses and insights, such as **the recognition to apply the concentration inequality for 1-sub-Gaussian noise**. 
To the best of our knowledge, our technical proof and insights here are **novel since they are closely tied to our algorithmic design** (specifically, our QMC subroutine which is required to guarantee 1-sub-Gaussian noise, lines 297-307), and they are the core reasons why we can improve over the regret bound of quantum linear bandits [32] (see below). - Improvement over quantum linear bandits [32]: Our regret upper bound (when using the linear kernel) is tighter than the regret of quantum linear bandits [32] (Sec. 5.6). Additionally, by adopting our proof techniques and insights for our Theorem 3, we have **improved the confidence ellipsoid and hence the regret bound of quantum linear bandits [32]** (Theorem 6) to match our result. This is another important contribution of ours. - Empirical contributions: Our empirical experiments also represent important contributions, as they may be significant steps towards assessing the real-world potential of quantum bandit algorithms. Specifically, to the best of our knowledge, our paper is the first work on quantum bandits to include a non-synthetic experiment (AutoML experiment, Fig. 1 c-d) and **an experiment using a real quantum computer** (Fig. 4 on page 27). Moreover, in our experiments, we've shown that our quantum BO significantly outperforms quantum linear bandits [32] and classical GP-UCB [28]. In addition to the individual novel contributions listed above, another major aspect of our novelty and technical difficulty lies in **identifying the different required techniques** (e.g., weighted information gain, concentration for 1-sub-Gaussian noise) and **integrating them into a coherent analytical framework**, which we think is highly non-trivial. So, we think that our work has made important novel technical contributions compared with the previous works. We'll also revise our paper to further clarify our technical novelty and contributions. --- # 2. Other kernels such as the Matern kernel. 
In this work, we've focused on the commonly used squared exponential (SE) kernel, and significantly improved the regret bound over the classical setting. We've also analyzed the regret of our algorithm with the linear kernel, and shown that it achieves a better regret bound than quantum linear bandits [32]. For the Matern kernel with smoothness parameter $\nu$, we can in fact also derive a regret bound of the order $\widetilde{\mathcal{O}}(T^{(3d)/(2\nu+d)})$ (ignoring log factors, we'll add more details on this after revision). This **improves over the state-of-the-art classical regret upper bound of $\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$ when the reward function $f$ is sufficiently smooth** (when $\nu > 2d$). We think this smoothness condition for the Matern kernel is reasonable, which can be justified by drawing analogy between our work (as the first work on quantum BO) and the classical GP-UCB [28]: For the Matern kernel, **the classical GP-UCB** [28] (with a regret bound of $\mathcal{O}(\gamma_T\sqrt{T})=\widetilde{\mathcal{O}}(T^{(\nu+3d/2)/(2\nu+d)})$) **also requires the function $f$ to be sufficiently smooth** (i.e., $\nu>d/2$) to achieve a sub-linear regret. This requirement for smooth functions was only removed in later works (e.g., [19,25,31]), which used sophisticated algorithmic designs and analyses to improve the regret of classical GP-UCB to $\mathcal{O}(\sqrt{\gamma_T T})=\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$. Similarly, because our Q-GP-UCB is the first BO algorithm in the quantum setting (analogous to GP-UCB [28] in the classical setting), we think it's expected and reasonable that our Q-GP-UCB **also** requires the function $f$ to be smooth for the Matern kernel (in order to improve over the regret in the classical setting), and we **also** leave it to future works to further improve the regret bound of our Q-GP-UCB for the Matern kernel. 
We'll add the discussions here (as well as the associated technical details) about the Matern kernel to the paper after revision. We think that these additional results, given our already significantly improved regret for the SE kernel, will further improve the significance of our contributions.
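As a quick sanity check on the stated crossover condition (our own verification, not part of the rebuttal), comparing the exponents of the quantum and classical bounds gives:

```latex
\frac{3d}{2\nu + d} \;<\; \frac{\nu + d}{2\nu + d}
\quad\Longleftrightarrow\quad 3d \;<\; \nu + d
\quad\Longleftrightarrow\quad \nu \;>\; 2d ,
```

so the quantum bound $\widetilde{\mathcal{O}}(T^{(3d)/(2\nu+d)})$ is tighter than the classical $\widetilde{\mathcal{O}}(T^{(\nu+d)/(2\nu+d)})$ exactly when $\nu > 2d$, matching the smoothness condition stated above.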
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
PTQD: Accurate Post-Training Quantization for Diffusion Models
Accept (poster)
Summary: The paper introduces PTQD, a Post-Training Quantization framework for Diffusion models. PTQD analyzes the influence of quantization noise on diffusion noise. The method suggests separating the quantization noise into noise correlated and uncorrelated with the full-precision reverse diffusion. The correlated part is easily fixed by estimating the correlation coefficient. The residual noise is corrected by modifying the stochastic variance of the DM in reverse diffusion. Finally, a time-step-aware mixed-precision scheme is proposed based on gathered statistics. Strengths: The paper is clearly written. The analysis of DM quantization challenges is interesting, well-described, and mathematically well-defined and analyzed. The framework, while very simple (mainly based on gathered statistics), can still outperform previous work. Weaknesses: The main weaknesses are as follows: 1) The practical ability of the method to deal with the residual noise. (see question 1)) 2) The weak comparisons. (see questions 3-4)) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) As far as I know, most DDPMs make no use of the stochastic reverse diffusion ($\sigma_t = 0$). In that case, the contribution and impact are certainly reduced. 2) A visualization of the generated images should be provided in order to assess the impact on both the generation and reconstruction quality. 3) Comparison with regular PTQ should be performed at different bit ranges (not only TensorRT's naive uniform quantization but also others such as MMSE, etc.) 4) Ablations: The ablation is good but should also be performed at a constant precision (not MP) in order to assess the real impact of each suggestion, independently of the mixed-precision correction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are ok. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the valuable comments. **Q1: Contribution and impact in deterministic sampling.** 1) We acknowledge that the efficacy of our contributions encounters constraints for deterministic sampling, as pointed out in line 205 of the paper. However, in the deterministic case, we can still **correct the correlated quantization noise** and the **biases stemming from uncorrelated components**. This corrective capability assumes particular significance in instances involving low-bit quantization, as referred to the experimental result of LSUN-Churches in Table 3 in the paper and Figure J in the supplementary material. 2) While deterministic sampling has gained widespread adoption, it tends to **result in lower output quality** compared to stochastic sampling [i, ii]. This proposition is also substantiated by empirical observations, as referred to Table C in the rebuttal PDF. Specifically, when generating samples on the FFHQ dataset with the deterministic DDIM sampler, introducing stochastic perturbations lowers both the FID and sFID metrics. For experiments on the ImageNet dataset, it greatly improves the IS with little increase in FID and sFID. In the case of stochastic sampling, our method can achieve better performance by calibrating the variance schedule. **Q2: A visualization of the generated images should be provided.** As referred to Figures H, I, J in the supplementary material, we have provided visualization results on three datasets to substantiate the effects of our proposed method. **Q3: Comparison with regular PTQ should be performed at different bit ranges.** As referred to Tables A and E in the rebuttal PDF, we have conducted experiments on more bitwidth settings and compared the results with PTQ4DM. Due to the limited time slot of the rebuttal, we will conduct experiments on more PTQ methods and bitwidths and add the results to the revised version. 
**Q4: The ablation should also be performed at a constant precision.** As referred to lines 219-220 in the paper, the proposed mixed-precision (MP) scheme allows the utilization of low-bit diffusion models during the sampling process, resulting in a greater speedup in generation. Specifically, we introduce W4A4 in the MP experiment, a **more intricate task** in comparison to the fixed W4A8 quantization due to the larger quantization noise. Moreover, we conduct additional ablation experiments with constant precision, which are outlined in Table A of the attached rebuttal PDF. The experimental results consistently demonstrate performance improvements brought by each component of our method under constant precision settings. Notably, our method exhibits more significant improvements at lower bitwidths (W3A8) due to the inherent presence of greater quantization noise at these levels. [i] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." NeurIPS 2022. [ii] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." ICLR 2021.
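As an illustration of the correction pipeline discussed in this rebuttal, the correlated/uncorrelated decomposition (Eqs. (5)-(10) of the paper) amounts to a least-squares fit of the quantization noise against the full-precision network output. This is a minimal sketch under assumed names and flattened 1-D shapes, not the authors' implementation:

```python
import numpy as np

def decompose_quant_noise(eps_fp, eps_q):
    """Split the quantization noise into a part correlated with the
    full-precision network output and an uncorrelated residual.

    eps_fp: full-precision noise-prediction outputs, shape (N,)
    eps_q:  quantized-model outputs for the same inputs, shape (N,)
    Returns the correlation coefficient k and the mean/variance of the
    residual, which the paper's bias correction and variance schedule
    calibration later absorb.
    """
    delta = eps_q - eps_fp                       # total quantization noise
    # Least-squares fit delta ~ k * eps_fp gives the correlation coefficient.
    k = np.dot(delta, eps_fp) / np.dot(eps_fp, eps_fp)
    residual = delta - k * eps_fp                # uncorrelated part
    return k, residual.mean(), residual.var()

# Toy check: synthetic noise equal to 0.1 * eps_fp plus white noise
# should recover k close to 0.1 and a residual variance close to 0.05**2.
rng = np.random.default_rng(0)
eps_fp = rng.normal(size=100_000)
eps_q = eps_fp + 0.1 * eps_fp + rng.normal(scale=0.05, size=eps_fp.size)
k, bias, var = decompose_quant_noise(eps_fp, eps_q)
```

In the rebuttal's setting, the fit is performed before test time on statistics collected from 1024 generated samples; here the data are synthetic purely to exercise the estimator.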
Summary: The paper suggests a method for post-training quantization of diffusion models. The method consists of a factorization of the quantization noise into a correlated and an uncorrelated part, and then addressing each component separately, either by linearly regressing for the correlation coefficient, or incorporating the quantization noise into the diffusion noise level. The authors conduct extensive experiments on multiple datasets, and obtain generation quality comparable to that of the full-precision model in most cases. Strengths: - The characterization of the quantization noise, and incorporating its uncorrelated part into the diffusion noise is a nice idea. - The obtained results are impressive. Weaknesses: - Related work lacks a specific description of methodical/result-based differences between this work and Q-Diffusion. "Analyze the quantization effect" and "unified framework" are very broad terms that give little to no context. - Lines 170-171: When does this linear regression happen? At training time or test time? What data do you use for this? Details lacking. Edit: Data is provided at the end of page 7, just please mention that details will be presented later from lines 170-171. - What is $SNR^F$? What is the motivation behind it? Why is it defined this way? What is its purpose? These details should not be delegated to a citation. - The reason not to compare with PTQ4DM sounds unreasonable. We can still compare results even if they do not specify the PTQ method used. Moreover, it seems like PTQ4DM has a public git repo with their code. The PTQ method can be inferred from that. Moreover, Q-Diffusion seems to have a curious failure mode for multi-precision. When considering a single precision level, it seems like the new method is not very different from Q-Diffusion in terms of results. - No wall clock time comparison is given, even though slow runtime of diffusion models is touted as one of the main drawbacks of these models. 
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - What does Figure 2 plot? Axis titles represent vectors. If the numbers plotted are entries in these vectors, it should be noted in the figure caption or in the text. - Equation 12 and its following text gloss over the case "otherwise". Mathematically, $\sigma_t^2 = 0$ is *not* a solution for Eq. 11 in this case, and should not be presented as such. When quantization noise becomes larger than $\sigma_t^2$, especially in the mentioned deterministic case, the proposed method cannot deal with the noise. If it does, a thorough explanation is needed here. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Potential negative impact: Yes. Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the valuable comments. **Q1: Methodical/result-based differences between this work and Q-Diffusion.** In terms of methodology, Q-Diffusion designed a calibration data collection method and applied the PTQ method BRECQ [i] to the diffusion model. In sharp contrast, we introduce a unified formulation for quantization noise and diffusion perturbed noise, as referred to Eq. (5) in the paper. We argue that quantization noise alters the mean and variance of the predicted noise in each sampling step, resulting in poor sample quality of the quantized diffusion model. Our approach corrects both correlated and residual quantization noises at every step to mitigate these adverse effects. It can be seamlessly integrated with Q-Diffusion or **any other PTQ method** to consistently enhance their performance. **Q2: Mention the details of linear regression in lines 170-171.** Linear regression is performed before test time. As referred to lines 258-262 of the paper, we generate 1024 samples using both quantized and full-precision diffusion models to collect the data (the quantization noise and the output of the full-precision noise prediction network) for performing linear regression. We will add the details in front in the revised version. **Q3: Details of $\rm{SNR}^F$ should not be delegated to a citation.** $\rm{SNR}$ is a widely adopted notation of signal-to-noise ratio. As referred to lines 226-229 in the paper and Eqs. (1)-(2) in [i], $\rm{SNR}^F$ is defined by the hyperparameters of the diffusion forward process, where $\alpha$ is the coefficient for data and $\sigma$ is the coefficient for noise. It is first introduced by [i] to note the degree of noise of data at each step. Moreover, as referred to lines 212-213 in the paper, $\rm{SNR}^Q$ is defined as the SNR of the quantized noise prediction network. 
We compare these two metrics to select the optimal bitwidth that satisfies the $\rm{SNR}$ requirement for effective denoising. **Q4: The reason not to compare with PTQ4DM sounds unreasonable.** Our initial attention was directed towards PTQ4DM [ii] upon its initial publication on arXiv, which did not release its code. The CVPR camera-ready paper was made public after the NeurIPS submission deadline. Additionally, PTQ4DM's experimental scope was confined to lower-resolution datasets, whereas our study encompasses datasets with higher resolutions. Nonetheless, we evaluate PTQ4DM on LSUN-Bedrooms dataset as shown in Table E in the rebuttal PDF. Our method outperforms it under both W4A8 and W3A8 bitwidths. Full results will be included in the revised version. **Q5: Q-Diffusion seems to have a curious failure mode for multi-precision.** This can be attributed to the substantial quantization noise inherent in the W4A4 bitwidth. In the absence of our correction method, the excessive noise becomes a hindrance, preventing Q-Diffusion from generating samples of desirable quality. **Q6: Method is not very different from Q-Diffusion in terms of results.** 1) It is essential to consider that the absolute performance **improvement is closely related to the precision** of the model. When higher bitwidths are employed, the absolute performance gains may appear relatively small, because the model's performance is already in close proximity to the full-precision counterpart. **As the bitwidth decreases, the efficacy of our approach becomes more noticeable**, particularly in the scenarios where W4A4 bitwidth is utilized, as referred to the results of mixed precision in Tables 1-3 in the paper. Notably, our method substantially reduces the FID score from 218.59 to 17.99 on the LSUN-Churches dataset. In addition, we conducted experiments using lower bitwidth W3A8 to demonstrate the extent of our improvement, as shown in Table A in the attached rebuttal PDF. 
The experimental results show that our method can bring greater improvement at W3A8 bitwidth on LSUN-Bedrooms dataset, resulting in a noticeable reduction of $1.85$ and $4.02$ in FID and sFID, respectively. 2) It is worth noting that while FID provides an informative metric, it might not **holistically capture the improved image quality**. In the supplementary material, we have provided visualizations in Figures H-J that convincingly showcase the superiority of results produced by PTQD. These visualizations underscore higher image quality and a closer resemblance to samples generated by the full-precision model, in stark contrast to Q-Diffusion outputs. **Q7: No wall clock time comparison is given.** Please refer to Q3 in the general response. **Q8: What does Figure 2 plot?** Figure 2 illustrates the correlation between the quantization noise (Y-axis) and the output of the full-precision noise prediction network (X-axis). Each data point on the plot corresponds to specific entries within these vectors. We will clarify this point in the figure caption of the revised version. **Q9: The proposed method cannot deal with the noise in the deterministic case.** Please refer to Q4 in the general response. **Q10: Mathematically, Equation 12 is not a solution for Equation 11.** When quantization noise becomes larger than ${\sigma_{t}^{2}}$, ${\sigma_{t}^{'2}}=0$ is not an analytical solution of Eq. (11), but an optimal solution in this case. We will clarify this in the revised version. [i] Kingma, Diederik, et al. "Variational diffusion models." NeurIPS 2021. [ii] Shang, Yuzhang, et al. "Post-training quantization on diffusion models." CVPR 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The added results and explanations following my fellow reviewers' and my suggestions indeed enrich the paper. Therefore, I change my recommendation to acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer MVe5, Thank you for your feedback. 
We truly appreciate your careful consideration of our responses to your and the other reviewers' suggestions. Best regards, Authors of #2641.
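The clamping behavior the authors describe for their variance schedule calibration (Q10 above: the "optimal, not analytical" zero solution of Eq. (11) when the residual noise exceeds $\sigma_t^2$) can be sketched as follows; the function name and signature are illustrative assumptions, not the paper's code:

```python
def calibrate_variance(sigma_t_sq: float, residual_var: float) -> float:
    """Variance schedule calibration (sketch of Eqs. (11)-(12)).

    The calibrated stochastic variance absorbs the variance of the
    residual (uncorrelated) quantization noise. When that residual
    variance exceeds sigma_t^2, no non-negative analytical solution
    exists, so the variance is clamped to 0 -- the "optimal, not
    analytical" solution described in the rebuttal.
    """
    return max(sigma_t_sq - residual_var, 0.0)
```

In the fully deterministic case ($\sigma_t = 0$) the clamp is always active, which is consistent with the authors' concession that only correlated-noise correction and bias correction remain effective there.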
Summary: The authors propose a new post-training quantization method for diffusion models titled PTQD that disentangles the quantization noise into correlated and uncorrelated parts with respect to its full-precision counterpart, and demonstrate that PTQD generates samples of as high quality as its full-precision counterpart. Strengths: (1) The idea of disentangling the quantization noise into correlated and uncorrelated parts with respect to its full-precision counterpart seems to be novel. (2) PTQD shows better performance than previous methods in both class-conditional and unconditional image generation. Weaknesses: (1) Although PTQD reduces BOPs greatly, it is doubtful whether PTQD really generates samples faster than its full-precision counterpart in real time. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the valuable comments. **Q1: The speed of PTQD in real time.** We have measured the latency of matrix multiplication and convolution operations in quantized and full-precision diffusion models using an RTX3090 GPU, as presented below. Both floating-point and quantized operations are implemented with CUTLASS. When both weights and activations are quantized to 8-bit, we observe a **2.03$\times$** reduction in latency compared to its full-precision counterpart over LDM-4. Moreover, when weights and activations are quantized to 4-bit, the speedup further increases to **3.34$\times$**. The mixed-precision settings explored in our experiments strike a good balance between latency and model performance. **Comparisons of time cost across various bitwidth configurations on ImageNet 256$\times$256.** Due to the current lack of a fast implementation for W4A8, we implement MP scheme with W8A8 and W4A4 kernels. | Model | Bitwidth (W/A) | Model Size (MB) | FID | sFID | Time (s) | |--------------------|----------------|------------------|-------|-------|----------| | LDM-4 (steps=250, eta=1.0, scale=1.5) | 32/32 | 1603.35 | 5.05 | 7.10 | 5.46 | | | 8/8 | 430.06 | 4.02 | 5.81 | 2.68 | | | MP | 234.51 | 6.44 | 8.43 | 2.45 | | | 4/4 | 234.51 | - | - | 1.63 | --- Rebuttal Comment 1.1: Comment: Thanks for your response. I keep my original score.
Summary: The paper proposed a quantization scheme for diffusion models, where they disentangled the quantization noise into correlated and uncorrelated parts; then, they incorporated the correlated part into the diffusion-perturbed noise and calibrated the denoising variance schedule to absorb the additional variance into the diffusion noise. They also suggested a mixed-precision quantization scheme for handling the difference in activation variance at each time step. Strengths: It is a novel point of view to incorporate quantization noise into diffusion noise. Utilizing the bias correction proposed in DFQ to handle the uncorrelated part also sounds reasonable to me. Weaknesses: It does not seem to show noticeable improvement compared to the existing work Q-Diffusion. More experiments seem necessary to demonstrate its performance. The ablation study was performed only when quantizing with mixed precision. The mixed-precision scheme does not seem to offer anything special; its effect seems insignificant according to Table 2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It would be helpful if you matched the proposed techniques with the descriptions in the method sections. The terms CNC, VSC, and BC suddenly appear in the results section. 2. It would make your work easier to follow if you provided your algorithm in the manuscript. 3. Could you provide the bitwidth allocated to each step for a specific model? 4. Could you provide the result when setting $k$ arbitrarily? 5. Why did you do the ablation study with mixed precision? 6. In your results, the mixed-precision scheme seems not to work well for Q-Diffusion. Could you provide the reasons? 7. In Section 4.1, did you want to explain that the quantization noise is divided into correlated and uncorrelated parts due to normalization layers? Or is it just an example? 8. In line 180, what are the bias and additional variance? 
Please denote them with symbols for clarity. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: There still seems to be room for improving the readability and clarity of the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the valuable comments. **Q1: It does not seem to show noticeable improvement compared to existing works.** 1) It is essential to consider that the absolute performance **improvement is closely related to the precision** of the model. When higher bitwidths are employed, the absolute performance gains may appear relatively small, because the model's performance is already in close proximity to the full-precision counterpart. **As the bitwidth decreases, the efficacy of our approach becomes more noticeable**, particularly in the scenarios where W4A4 bitwidth is utilized, as referred to the results of mixed precision in Tables 1-3 in the paper. Notably, our method substantially reduces the FID score from 218.59 to 17.99 on the LSUN-Churches dataset. In addition, we conducted experiments using lower bitwidth W3A8 to demonstrate the extent of our improvement, as shown in Table A in the attached rebuttal PDF. The experimental results show that our method can bring greater improvement at W3A8 bitwidth on LSUN-Bedrooms dataset, resulting in a noticeable reduction of $1.85$ and $4.02$ in FID and sFID, respectively. 2) It is worth noting that while FID provides an informative metric, it might not **holistically capture the improved image quality**. In the supplementary material, we have provided visualizations in Figures H-J that convincingly showcase the superiority of results produced by PTQD. These visualizations underscore higher image quality and a closer resemblance to samples generated by the full-precision model, in stark contrast to Q-Diffusion outputs. **Q2: Match the proposed techniques with the description of the method sections.** Thanks for the valuable comments. Correlated Noise Correction (CNC) is proposed in Section 4.2.1 of the paper, which corrects the correlated part of the quantization noise. 
Both Bias Correction (BC) and Variance Schedule Calibration (VSC) are proposed in Section 4.2.2 to correct the uncorrelated quantization noise, as referred to lines 194-201 of the paper. We will incorporate your suggestions in the revised version. **Q3: Provide the algorithm in the manuscript.** The algorithm is briefly summarized below and will be included in the revised version. **Before sampling:** | Algorithm Step | Description | |------|-----------------------------------------------------------------------------------------------------| | 1 | Quantize diffusion models with BRECQ [i] (or another PTQ method). | | 2 | Generate samples with both the quantized and FP models and collect quantization noise. | | 3 | Calculate the correlation coefficient $k$ based on Eq. (7), and the mean and variance of the residual quantization noise as per Eq. (10). | **For each sampling step:** | Algorithm Step | Description | |------|-----------------------------------------------------------------------------------------------------| | 4 | Correct the correlated part of the quantization noise by dividing the output of the noise prediction network by $1+k$. | | 5 | Calibrate the variance schedule by Eq. (12) and subtract the channel-wise biases from the output of the quantized noise prediction network. | **Q4: What bitwidth is allocated to each step for a specific model?** As referred to lines 229-234 in the paper, the bitwidth for each step is determined by comparing $\rm{SNR}^Q$ with $\rm{SNR}^F$ as per Eq. (15). The results of bitwidth allocation for each dataset are presented below and will be included in the final version. 
| Dataset | W4A4 Step Range | W4A8 Step Range |
|-----------------------|-----------------|-----------------|
| ImageNet (250 steps) | 249 to 202 | 201 to 0 |
| ImageNet (20 steps) | 19 to 15 | 14 to 0 |
| LSUN-Bedrooms | 199 to 155 | 154 to 0 |
| LSUN-Churches | 199 to 146 | 145 to 0 |

**Q5: Provide the result when setting $k$ arbitrarily.**

As shown in the table below, setting $k$ arbitrarily can greatly impair the quality of generated samples. As described in lines 168-171 and Eqs. (7)-(9) of the paper, if $k$ is set arbitrarily, the correction of the correlated quantization noise can be inaccurate, and there will still be correlation between the remaining quantization noise and the output of the noise prediction network.

| Model | Method | FID | sFID |
|------------|-------------|--------|--------|
| LDM-4 | Q-Diffusion | 6.72 | 18.80 |
| | Random $k$ | 18.98 | 46.52 |
| | Ours | **5.94** | **15.16** |

**Q6: Why is the ablation study conducted with mixed precision?**

Please refer to Q2 in the general response.

**Q7: Why did the mixed-precision scheme not work well for Q-Diffusion?**

This can be attributed to the substantial quantization noise inherent in the W4A4 bitwidth. In the absence of our correction method, the excessive noise becomes a hindrance, preventing Q-Diffusion from generating samples of desirable quality.

**Q8: In Section 4.1, is the quantization noise divided into the correlated part and the uncorrelated part due to normalization layers?**

As described in lines 149-163 of the paper, we prove that at least a portion of the correlation comes from the normalization layer. It is plausible that other nonlinear layers within the model could also contribute to this correlation.

**Q9: In line 180, what are the bias and additional variance?**

The bias and additional variance refer to the mean and variance of the residual quantization noise, which are denoted in Eq. (10) of the paper. We will make the descriptions consistent in the revised version.
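For context on the Q5 discussion of why an arbitrary $k$ fails: one standard way to obtain such a coefficient (a hedged sketch, since Eq. (7) itself is not reproduced in this rebuttal) is a least-squares fit under the model $\epsilon_q \approx (1+k)\,\epsilon_{fp} + n$, with $n$ uncorrelated with $\epsilon_{fp}$.

```python
import numpy as np

def estimate_k(eps_quant, eps_fp):
    """Least-squares slope minus one: fits eps_quant ≈ (1 + k) * eps_fp + n.

    After dividing eps_quant by (1 + k), the remaining noise n is
    decorrelated from the full-precision output, which is exactly the
    property an arbitrarily chosen k fails to provide.
    """
    q = np.ravel(eps_quant)
    f = np.ravel(eps_fp)
    slope = np.dot(q, f) / np.dot(f, f)
    return slope - 1.0
```

With samples collected from both the quantized and full-precision models (algorithm steps 2-3 above), this reduces to a single dot-product ratio per tensor, or per channel if $k$ is channel-wise.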
[i] Li, Yuhang, et al. "BRECQ: Pushing the limit of post-training quantization by block reconstruction." ICLR 2021. --- Rebuttal 2: Title: Follow-Up on Rebuttal Comment: Dear Reviewer LoUy, We greatly appreciate your time and effort in reviewing our work. We have carefully considered your comments and suggestions and have made significant revisions to address the concerns you raised. We are eager to ensure that our paper meets the high standards of our respected reviewers. Please don't hesitate to let us know if there is any additional feedback you might have at this stage. Best regards, Authors of #2641. --- Rebuttal Comment 2.1: Title: dpm++ Comment: Thank you for your effort to address my concern. I have an additional suggestion. As in Q-Diffusion, could you provide the results of applying the latest solvers, such as DPM++, to your quantized models? --- Reply to Comment 2.1.1: Comment: Dear Reviewer LoUy, Thanks for your kind suggestion. As shown in Table 4 of the rebuttal PDF and our response to Reviewer i5nq, we have conducted experiments with the recent PLMS solver [i], demonstrating the strong performance of PTQD under this solver. Additionally, we present the results of our PTQD with the latest DPM++ solver [ii] on the LSUN-Churches dataset, as shown below. Notably, our PTQD with W3A8 bitwidth achieves an sFID result comparable to that of W4A8 Q-Diffusion.

| Model | Method | Bitwidth (W/A) | FID | sFID |
|-------------|-------------|----------------|--------|--------|
| LDM-8 (steps=50, eta=0.0) | FP | 32/32 | 5.97 | 21.50 |
| | Q-Diffusion | 4/8 | 7.80 | 23.24 |
| | Ours | 4/8 | **7.45** | **22.74** |
| | Q-Diffusion | 3/8 | 11.44 | 24.67 |
| | Ours | 3/8 | **10.72** | **23.36** |

Once again, thank you for your time and commitment in reviewing our work. Best regards, Authors of #2641. [i] Liu, Luping, et al. "Pseudo numerical methods for diffusion models on manifolds." ICLR 2022. [ii] Lu, Cheng, et al.
"DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models." arXiv 2022.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback. Overall, our work has been well recognized as it "is well-organized and clearly presented" (Reviewer i5nq), "presents a novel idea" (Reviewer oS4i), and "obtains impressive results" (Reviewer MVe5). We have summarized and addressed the main concerns as follows: **Q1: Improvement is not noticeable compared to Q-Diffusion.** 1) It is essential to consider that the absolute performance **improvement is closely related to the precision** of the model. When higher bitwidths are employed, the absolute performance gains may appear relatively small, because the model's performance is already in close proximity to the full-precision counterpart. **As the bitwidth decreases, the efficacy of our approach becomes more noticeable**, particularly in scenarios where the W4A4 bitwidth is utilized, as shown by the mixed-precision results in Tables 1-3 of the paper. Notably, our method substantially reduces the FID score from 218.59 to 17.99 on the LSUN-Churches dataset. In addition, we conducted experiments using the lower W3A8 bitwidth to demonstrate the extent of our improvement, as shown in Table A in the attached rebuttal PDF. The experimental results show that our method brings greater improvement at the W3A8 bitwidth on the LSUN-Bedrooms dataset, resulting in a noticeable reduction of $1.85$ and $4.02$ in FID and sFID, respectively. 2) It is worth noting that while FID provides an informative metric, it might not **holistically capture the improved image quality**. In the supplementary material, we have provided visualizations in Figures H-J that convincingly showcase the superiority of results produced by PTQD. These visualizations underscore higher image quality and a closer resemblance to samples generated by the full-precision model, in stark contrast to Q-Diffusion outputs.
**Q2: Why conduct the ablation study with mixed precision?** As described in lines 219-220 of the paper, the proposed mixed-precision (MP) scheme allows the utilization of low-bit diffusion models during the sampling process, resulting in a greater speedup in generation. Specifically, we introduce W4A4 in the MP experiment, a **more intricate task** in comparison to the fixed W4A8 quantization due to the larger quantization noise. Moreover, we conduct additional ablation experiments with constant precision, which are outlined in Table A of the attached rebuttal PDF. The experimental results consistently demonstrate the performance improvements brought by each component of our method under constant-precision settings. Notably, our method exhibits more significant improvements at lower bitwidths (W3A8) due to the inherent presence of greater quantization noise at these levels. **Q3: Real-time speedup of PTQD.** We have measured the latency of matrix multiplication and convolution operations in quantized and full-precision diffusion models using an RTX 3090 GPU, as shown in Table B in the rebuttal PDF. Both floating-point and quantized operations are implemented with CUTLASS. When both weights and activations are quantized to 8-bit, we observe a **$2.03\times$** reduction in latency compared to the full-precision counterpart over LDM-4. Moreover, when weights and activations are quantized to 4-bit, the speedup further increases to **$3.34\times$**. The mixed-precision settings explored in our experiments strike a good balance between latency and model performance. **Q4: The contributions are reduced for deterministic sampling ($\sigma_t=0$).** 1) We acknowledge that the efficacy of our contributions encounters constraints for deterministic sampling, as pointed out in line 205 of the paper. However, in the deterministic case, we can still **correct the correlated quantization noise** and the **biases stemming from uncorrelated components**.
This corrective capability assumes particular significance in instances involving low-bit quantization, as shown by the experimental results for LSUN-Churches in Table 3 of the paper and Figure J in the supplementary material. 2) While deterministic sampling has gained widespread adoption, it tends to **result in lower output quality** compared to stochastic sampling [i, ii]. This proposition is also substantiated by empirical observations, as shown in Table C of the rebuttal PDF. Specifically, when generating samples on the FFHQ dataset with the deterministic DDIM sampler, introducing stochastic perturbations lowers both the FID and sFID metrics. For experiments on the ImageNet dataset, it greatly improves the IS with little increase in FID and sFID. In the case of stochastic sampling, our method can achieve better performance by calibrating the variance schedule. [i] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." NeurIPS 2022. [ii] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." ICLR 2021. Pdf: /pdf/ad2499611e4cdc0d56b39afa079518e38b8afc84.pdf
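To make the role of $\sigma_t$ in the Q4 discussion concrete, here is a schematic generalized DDIM-style update (a textbook sketch, not the paper's code): with `sigma_t = 0` the update is deterministic, and with `sigma_t > 0` it injects the Gaussian noise term whose schedule the variance calibration adjusts.

```python
import numpy as np

def ddim_step(x_t, eps, alpha_t, alpha_prev, sigma_t, rng):
    """One generalized DDIM-style update (schematic).

    sigma_t = 0 gives deterministic sampling; sigma_t > 0 injects the
    Gaussian noise term that variance-schedule calibration can use to
    absorb residual quantization noise.
    """
    # Predicted clean sample from the current noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
    # Direction pointing back toward x_t, shrunk to leave room for noise.
    dir_xt = np.sqrt(1.0 - alpha_prev - sigma_t**2) * eps
    noise = sigma_t * rng.standard_normal(np.shape(x_t))
    return np.sqrt(alpha_prev) * x0_pred + dir_xt + noise
```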
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces PTQD, a novel method designed to tackle issues arising when applying existing post-training quantization techniques directly to low-bit diffusion models. The proposed approach disentangles quantization noise into its correlated and residual uncorrelated components with respect to the full-precision counterpart, enabling separate correction for each part. Moreover, the authors present Step-aware Mixed Precision, a scheme that dynamically selects optimal bitwidths for individual denoising steps. Extensive experiments on three image datasets demonstrate significant improvements in image quality compared to the baseline. Strengths: 1. PTQD presents a unique method that disentangles quantization noise and addresses it separately, while Step-aware Mixed Precision dynamically optimizes bitwidths for individual denoising steps, demonstrating a comprehensive approach to quantization in diffusion models. 2. The experimental results provide strong evidence of PTQD's effectiveness in enhancing image quality, validating its practical value. 3. The paper is well-organized and clearly presented, with the supplementary file extending PTQD to DDIM and including a statistical analysis of residual quantization noise, which bolsters the credibility of their work. Weaknesses: 1. To further validate PTQD's performance, it would be beneficial to include comparisons with more competitive methods, such as recent DDPM variants, in the experiments. 2. Evaluating PTQD with other post-training quantization methods on diverse image datasets would enhance its applicability and demonstrate its effectiveness across various tasks, making the findings more robust. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the valuable comments. **Q1: Include comparisons with recent DDPM variants and evaluate PTQD with other post-training quantization methods on diverse image datasets.** Table D in the rebuttal PDF presents the results on a **new dataset, CelebA-HQ**, with the recent **DDPM variant PLMS [i]**, demonstrating the strong performance of PTQD under this configuration. Notably, the proposed PTQD reduces the FID and sFID by a considerable margin of $3.23$ and $4.73$ in comparison to Q-Diffusion, respectively. Additionally, we include a comparison with the PTQ method PTQ4DM [ii] on the LSUN-Bedrooms dataset, as shown in Table E in the rebuttal PDF. Remarkably, our proposed approach outperforms PTQ4DM in both W4A8 and W3A8 bitwidth scenarios. The full results will be included in the revised version. [i] Liu, Luping, et al. "Pseudo numerical methods for diffusion models on manifolds." ICLR 2022. [ii] Shang, Yuzhang, et al. "Post-training quantization on diffusion models." CVPR 2023.
Detection Based Part-level Articulated Object Reconstruction from Single RGBD Image
Accept (poster)
Summary: The paper presents a novel task focused on the reconstruction of multiple articulated objects, considering part-level shape, pose, joint parameters, and part-instance association, using only a single RGBD image. The authors propose an effective detect-and-group strategy that harnesses the part-level representations to detect, reconstruct, and predict parameters for articulated objects with diverse structures. Additionally, to enhance the detection performance, the paper introduces an oversampling and fusion strategy during inference. Moreover, the authors incorporate anisotropic size normalization and a refinement module to improve reconstruction quality and enhance part pose/motion prediction. Strengths: * The proposed task of reconstructing any number of articulated objects is both novel and valuable for exploring potential downstream robotics applications. * Utilizing part-level representations and employing the detect-then-group strategy are intuitive and effective when handling articulated objects with diverse structures. * The proposed end-to-end method builds upon 3DETR by incorporating part-level detection and pose/motion prediction. Furthermore, the authors introduce an instance loss to aid in grouping within the part latent space. * The experimental settings are reasonable and allow for meaningful comparisons with previous work in the fields of articulated object reconstruction and motion prediction. The results demonstrate the effectiveness of the proposed method. Weaknesses: * Due to the task's focus on reconstructing part shapes, it's crucial to show some novel viewpoints of the reconstructed shapes. For example, for the reconstructed shapes of some base parts, it is hard to see the improvement with only quantitative results. Additionally, all visualizations in the paper appear to align with the input RGBD viewpoint, limiting the comprehensive understanding of the reconstructed shapes.
* If I understand correctly, for the statistics in Table 2, the instance number includes the same articulated objects with various part states; otherwise, it's hard to explain why there are so many instances in each category (many more than the object number in the SAPIEN dataset). * For the evaluation, it seems that in the test set, the part states are randomly initialized. Then most doors or drawers are actually open. However, in the real case, most of the time, the movable parts are closed. It would be better to have separate evaluations for models with different motion states. * The paper lacks statistics on the number of articulated objects present in the input image, and it would be intriguing to include evaluation results that consider the number of objects to better understand the performance of part-instance association. * The quantitative improvements achieved with QO, PF, and kIoU metrics do not appear to be substantial. Are there any qualitative results available to provide additional insights or demonstrate the effectiveness of the proposed module? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Why is the number of images in the validation set much larger than the number of images in the train set and test set? * For the number of objects in each image, what are the statistics in the train set and test set? * For the model, why does "Ours-BG" sometimes outperform "Ours"? "Ours" should have extra information from the foreground mask, right? Is this caused by the imperfect performance of the foreground mask? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mention the limitation in the supplement.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and questions! ## Novel viewpoints of the reconstructed shapes We promise to add novel views in the camera-ready version. For reference, we show novel viewpoints of the reconstructed shapes from Fig. 5 and Fig. 7 of the main paper in Fig. 4 of the attached material. Our method qualitatively better reconstructs the shape of occluded regions than the baseline. ## The instance number in Table 2 We augment each CAD model from the SAPIEN dataset in terms of three side lengths and randomize the part poses, as explained in Section 4, L230. Randomizing the three side lengths changes the original shape significantly. Thus we count it as a unique instance in Table 2. We list the number of original CAD models for each category per split in Table 1 of the appendix. We also show the size distribution after the augmentation in Fig. 1 of the appendix. ## Evaluation for closed parts We follow the standard experimental settings of relevant previous works [1,2,3,4,5], with uniformly randomized articulation for the test set. We qualitatively confirmed that our approach works reasonably for closed or nearly closed parts, as visualized in Fig. 7 (right) of the main paper, Fig. 6 (top row) of the appendix, and Fig. 3 (bottom, three stacked drawers on the left) of the attached material. We will add further analysis regarding the closed parts in the camera-ready version. ## Evaluation results that consider the number of instances We have evaluated shape mAP in terms of different numbers of instances in a scene, as shown in Fig. 5 of the attached material. As the number of instances in a scene grows, the F-Score decreases. We attribute this to the camera often being distant from the instances when a view contains multiple instances, which makes it harder to accurately capture the fine details of the part geometry. ## Qualitative results for the KPF module Please refer to the global comment for details.
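Since the instance-count analysis above is reported in F-Score, a minimal sketch of the usual point-cloud F-Score may help readers follow the metric. The threshold `tau` is a free parameter, and this mirrors common practice rather than necessarily the paper's exact evaluation protocol.

```python
import numpy as np

def f_score(pred, gt, tau):
    """Point-cloud F-Score at distance threshold tau (common definition):
    harmonic mean of precision (fraction of predicted points within tau
    of the ground truth) and recall (the reverse direction).

    pred, gt: (N, 3) and (M, 3) arrays of surface points.
    """
    # Pairwise distances via broadcasting: shape (N, M).
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = float((d.min(axis=1) < tau).mean())
    recall = float((d.min(axis=0) < tau).mean())
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

This brute-force version is O(NM) in memory; for dense clouds a KD-tree nearest-neighbor query would typically replace the pairwise distance matrix.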
## Larger validation set than the training set Please refer to the global comment for details as well. ## Statistics of the number of objects in each image The average number of instances is 2.154 for the training set and 2.158 for the test set. ## Why does "Ours-BG" sometimes outperform "Ours"? As pointed out, the imperfect performance of the foreground mask can be one reason. Another reason could be that the background context near an instance sometimes helps detect the instance and estimate the part pose. In some cases, the background context of the floor might be informative for estimating the rotation of the part pose based on the tilt of the floor, and the position of the floor might help to estimate the center of the part. That being said, "Ours" without background outperforms "Ours-BG" with background in the majority of the metrics, since in the majority of cases when there is a background, fewer queries cover the foreground objects, which is disadvantageous for detection and a detailed understanding of part shapes and poses. [1] Heppert et al. CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects, CVPR 2023. [2] Liu et al. AKB-48: A Real-World Articulated Object Knowledge Base, CVPR 2022. [3] Kawana et al. Unsupervised Pose-aware Part Decomposition for Man-made Articulated Objects, ECCV 2022. [4] Jiang et al. OPD: Single-view 3D Openable Part Detection, ECCV 2022. [5] Mu et al. A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation, ICCV 2021. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. The rebuttal has resolved my questions.
Summary: This paper presents a new method for 3D semantic instance reconstruction at the part level from a single RGB-D image. This method follows a top-down manner by first detecting object parts using 3DETR, where each instance part's bounding box will be predicted. The point cloud located inside each bounding box contains the part surface geometry, which will be normalized into a uniformly-scaled canonical system for the following part shape reconstruction. The key contribution of this paper is the kinematics-aware part fusion module, which follows a bottom-up manner to constitute complete instance shapes from parts. The paper writing could be improved, e.g., the first paragraph in the introduction section is actually elaborating on related work. An illustration in Sec 3.6 would be more accessible for the audience to follow. The experiment metrics and designs are extensive but lack some comparisons with the state-of-the-art. Strengths: In my view, the major strengths can be concluded as follows: 1. Kinematics-aware part fusion (KPF). The authors follow a top-down and bottom-up manner to first detect part-level geometries from the point cloud. Then they use KPF to construct instance shapes from parts. These two processes are trained end-to-end jointly to improve the entire performance. 2. It shows good qualitative and quantitative performance on both synthetic and real data, with an acceptable generalization ability, even though they only train their model with synthetic data. Weaknesses: The weaknesses of this paper are also obvious. 1. This paper follows a similar top-down pipeline as previous works (e.g., RfD-Net). In L34, the authors argued that previous methods "are either limited to a single instance or require a separate instance detector, making them not end-to-end trainable". I do not think this makes sense, as this paper also requires an instance detector.
Besides, relying on a separate instance detector does not mean they cannot be trained end-to-end. 2. As this task is not new and this paper follows a similar pipeline, the contribution of this paper only comes from the module design. To me, the major contribution lies in the KPF module. But from the ablation study, it seems KPF does not contribute the most. 3. In Section 4.2, comparing with OPD by adjusting the IoU threshold is not rigorous. There is no theoretical guarantee that this comparison is fair. Since OPD is an RGB-based method while this paper uses RGB-D information, a fair comparison would be to add depth information into OPD or to remove the depth channel in your method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Here are some suggestions to improve this work, with which I would consider raising my score. 1. A comprehensive analysis and comparison between the key module of this method (e.g., KPF) and the state-of-the-art (A-SDF is a bit old). 2. The introduction should be about telling the story of the paper's motivation, key observations, and contributions. Listing many related works there made it difficult to follow. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper discusses the limitations and failure cases well in the supplemental. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed review! ## Effectiveness of the KPF module and comprehensive analysis The primary source of improvement in shape reconstruction accuracy is our part-level reconstruction approach enabling the reconstruction of articulated objects with various part counts, which is the main contribution of this paper. In addition, we also observed qualitative improvement using the KPF module for challenging targets like distant and small part shapes, which we detailed in the global comment. ## Comparison with a more recent state-of-the-art baseline We have added another baseline based on AKBNet from CVPR 2022 as a more recent state-of-the-art method besides A-SDF, as detailed in the global comment. ## Improving the introduction We show the revised introduction idea below and promise to revise the introduction further in the camera-ready version. ``` Estimating object shape, pose, size, and kinematics from a single frame of partial observation is a fundamental challenge in computer vision. Understanding such properties of daily articulated objects like refrigerators and drawers has various applications in robotics and AR/VR. Shape reconstruction of daily articulated objects is a challenging task. First, the objects have various shapes resulting from different local part poses. More importantly, they have intra- and inter-category diverse part configurations regarding part counts and structures. A combination of those factors results in exponentially increasing shape variation. Previous works handle only a fixed part count with a single model [1] or use multiple category-level models for different part counts after instance-level detection [2], modeling the target shape in an instance-level latent space. Handling those varieties with a single model is a complex and unsolved task. In this paper, we address this complexity through a novel detect-then-group approach. Our key observation is that daily articulated objects consist of similar part shapes.
For example, regardless of the number of refrigerator doors, each door can have a similar shape, and the base part may have similar shapes to those from other categories, such as storage. Detecting each part and then grouping them into multiple instances is a scalable and generalizable approach for the diverse part configurations of daily articulated objects in a scene. Based on this idea, we propose an end-to-end detection-based approach for part-level shape reconstruction. <the rest of the text will follow the second and the third paragraphs of the introduction> ``` ## Improving the figure for the KPF module We promise to revise the current figure in Section 3.6 in the camera-ready version for better accessibility. ## Using an instance detector does not mean not end-to-end trainable We agree that some methods, like RfD-Net, operate end-to-end from detection to reconstruction. Therefore, we promise to rephrase the original sentence so as not to imply that being detection-based necessarily means not being end-to-end trainable up to shape reconstruction. ## Contribution of the paper As a high-level idea, we base our approach on detector-based reconstruction approaches. However, we have developed these ideas and made novel progress. As reviewers **mcMG** and **MJHC** agree, introducing a detect-then-group approach that can handle daily articulated objects with an arbitrary number of parts, a major limitation of previous works, is novel. This presents a new setting for shape reconstruction of daily articulated objects and stands as the significant contribution of this paper over recent previous works. ## Fairness of comparison against OPD As written in L275 of the main paper, we have included depth alongside RGB as an additional channel for OPD's input, ensuring a fair comparison. Addressing concerns about the fairness of the IoU threshold, we've also conducted an additional evaluation with a favorable setting for OPD, which is presented in the table below.
While we maintained the same evaluation settings as in the main paper, this time we selected the pair of IoU threshold values for OPD and ours from all possible combinations (50% to 90% in 10% steps for both OPD and ours). We chose the pair of IoU thresholds that gave the best metric values for OPD. In the table below, the IoU thresholds for OPD and ours are 70% and 80% for the prismatic joint state, respectively, and 90% for the rest of the joint parameters for both OPD and ours. Despite these adjustments, our method still outperforms OPD.

### Additional joint state evaluation

| | State (revolute/prismatic) | OE | MD |
|-------|---------------------------|---------------|--------------|
| OPD | 16.52°/16.46cm | 10.81° | 34.68cm |
| Ours-BG | **3.34°**/**5.45cm** | **1.96°** | **5.15cm** |

[1] Heppert et al. CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects, CVPR 2023. [2] Liu et al. AKB-48: A Real-World Articulated Object Knowledge Base, CVPR 2022. --- Rebuttal Comment 1.1: Title: Post rebuttal comments Comment: Thanks to the authors for their comprehensive rebuttal. I believe the authors addressed my concerns, and would like to raise my score to weak accept.
Summary: The paper presents a detection-based reconstruction method for articulated objects along with estimating part-level 6D object poses, sizes, and joint parameters. The paper uses 3DETR as the backbone to predict all of the above quantities while treating this problem as a supervised learning approach given labeled synthetic data. The approach trains only on synthetic data and transfers to a real-world dataset. Strengths: The paper effectively uses a detection backbone, i.e., 3DETR, to reconstruct multiple articulated objects from a single-view RGB-D observation. Sim-to-real transfer using only synthetic-data pretraining is a strong result. The use of set matching (from the instance segmentation literature) for articulated part reconstruction is interesting. Joint estimation and shape reconstruction quantitative results are significantly better than baselines. I believe breaking the problem down in this way, i.e., part-level reconstruction, omits the need for model selection (i.e., training one model per category, such as different models for glasses (2 joints) and refrigerators (1 joint)), which is a major limitation in previous works. Weaknesses: 1. The paper mentions CARTO, which is a super relevant and recent CVPR'23 work, but doesn't directly compare to it in terms of both detection and reconstruction. Is there a reason for it? Rather, the paper takes an individual joint-parameter baseline, i.e., OPD, and a shape-reconstruction baseline, i.e., A-SDF, and compares to them separately. While these are relevant, CARTO is the most relevant to this work in terms of system-level joint detection and shape reconstruction. 2. While the work offers a solution for handling an arbitrary number of parts in the image, the qualitative results show simple examples, i.e., the same types of joints and mostly 2 joints. Did the authors test their approach on more complicated geometries, i.e., a varying number of joints or articulated objects with more than 5 joints? 3.
I didn't see a discussion of the model's runtime speed. The most relevant works which the author discusses, i.e., CARTO (which builds upon CenterSnap and ShAPO), are all fast approaches. Is 3DETR equally fast, and did the authors consider adding a backbone, i.e., the CenterSnap backbone, which offers a faster solution and an equally good model in terms of accuracy? This is crucial for real-time applications like robotics or grasping. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the questions raised above in the weakness section. To summarize, a comparison to strong and relevant system-level baselines, more qualitative results, and a discussion/comparison of the speed vs. accuracy tradeoff would be useful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors can only handle cases where instance boundaries are defined, so it would fail in scenarios where a door is attached to a room, let's say. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! ## Targeting articulated objects with more than 5 joints The trained model works reasonably well for complex target instances with more than 5 joints, as shown in Fig. 2 of the attached material, especially when all parts are clearly visible from the given view. However, from certain viewpoints, we also observe that a single instance is reconstructed as two separate instances (Fig. 3 of the attached material, first row), with fewer joints for each instance. We attribute this to the current dataset consisting of a small number of CAD models with more than 5 joints, while the majority of instances have fewer joints. Also, when some parts are only partially visible, our method tends to make inaccurate pose estimations for such parts (Fig. 3 of the attached material, second row, the right stacked three drawers indicated by the red box). We will add these cases as limitations in the camera-ready version. ## Comparison with CARTO Please refer to the global comment on the comparison with the state of the art. ## Real-time application Improving detection-to-reconstruction speed relative to state-of-the-art methods is not the focus of this paper. Possible extensions for speed improvement include combining real-time DETR-based 3D detectors like MonoDTR [1], tuning the number of queries based on the speed-vs-accuracy trade-off, employing hierarchical isosurface sampling for fast meshing as in CARTO, and re-implementing the current single-process, CPU-based implementation of the KPF module with a multiprocess or GPU-based implementation; we leave these for future work. We will include quantitative speed measurements in the camera-ready version. [1] Huang et al. MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer, CVPR 2022. --- Rebuttal Comment 1.1: Title: Post rebuttal comments Comment: The rebuttal has resolved my questions. 
Adding limitation examples and quantitative comparisons to existing state-of-the-art fast reconstruction techniques like CARTO/ShAPO (and mentioning this as a potential limitation as well) would strengthen the arguments in the paper, and I look forward to seeing that in the final version. I am happy with the novelty of the paper, i.e., part-based reconstruction to handle multiple joint types, and retain my rating.
Summary: The paper proposes a method for reconstructing man-made articulated objects from a single RGB-D image across different object categories. The method is based on a detect-then-group pipeline, using kinematics-aware fusion to address false negatives. Strengths: The proposed method addresses the challenging problem of man-made articulated object reconstruction from a single RGB-D image, estimating each part's pose at the same time. The overall architecture has a somewhat complicated structure with numerous encoder and decoder modules employed. Nevertheless, the design choices are sufficiently motivated in the text, and the loss function is quite straightforward given the overall architecture. The experimental evaluation considers both a synthetic (SAPIEN) and a real-world (BMVC) dataset. The proposed method achieves improved performance with respect to the state of the art on SAPIEN. The ablation study shows the relative contribution of important components. Regarding reproducibility, sufficient details are provided, making it easier to reproduce the results (considering also the supplemental material). Weaknesses: The method seems to be closely related to [20]. I think a more detailed discussion of the relation between the two methods should be included in the Related work section. Related to this, it is not clear why the experimental evaluation does not also consider a comparison with [20]. This is important, considering the few baselines available for this task. Alternatively, methods considering multi-view reconstruction on SAPIEN could also be considered to give additional insight into the performance of the proposed method. On a side note, the considered metrics could also include the joint state error. ### Minor comments - L.45: define acronym NMS - L.127: not clear - Fig. 2 is not referenced in text - Although prior work regarding human subjects is briefly visited in the related work, similar work for animals is briefly mentioned but not specified. 
Works like [R1], [R2], and [R3] could be included to make this aspect of the related work stronger. [R1] Ntouskos, V., Sanzari, M., et al. (2015). Component-wise modeling of articulated objects. ICCV [R2] Zuffi, S., Kanazawa, A., et al. (2017). 3D menagerie: Modeling the 3D shape and pose of animals. CVPR [R3] Jiang, L., Lee, C., Teotia, D., & Ostadabbas, S. (2022). Animal pose estimation: A closer look at the state-of-the-art, existing gaps and opportunities. Computer Vision and Image Understanding, 103483. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - L.126: why does P_O both condition O and also appear as an argument? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the text Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and questions! ## Joint state error We show the joint state error compared with OPD in Table 3 of the main paper, denoted as “State.” We have also added the joint state error comparison against A-SDF-GT-2 in the table below. Note that A-SDF-GT-2 can only be evaluated against GT instances with its learned part counts. Therefore, we evaluated A-SDF-GT-2 and ours only on the GT instances with the part counts that A-SDF-GT-2 learned. ### Joint state error evaluation | | Revolute (deg) | Prismatic (cm) | |---------------|----------------|----------------| | A-SDF-GT-2 | 25.49 | 13.97 | | Ours | **4.62** | **4.80** | ## Why does $P_\mathcal{O}$ both condition $\mathcal{O}$ and also appear as an argument? We appreciate the reviewer pointing this out. Actually, $P_\mathcal{O}$ is in the right place, but $\mathbf{p}$ should be $\mathbf{x}$. The correct equation is as follows: $o_{\mathbf{x}} = {\mathcal{O}}((\mathbf{R}\mathbf{S})^{-1}(\mathbf{x}-\mathbf{c}) \mid P_{\mathcal{O}},\mathbf{h})$ $\mathbf{x}$ is in world coordinates, and $(\mathbf{R}\mathbf{S})^{-1}(\mathbf{x}-\mathbf{c})$ projects it into the local coordinates of the part to sample the occupancy value at world coordinates $\mathbf{x}$. We will fix the equation in the camera-ready version. ## Comparison with PPD [1] Please refer to the global comment on the comparison with the state of the art. ## Additional related works We appreciate the ideas for improving our related work section. We will include them in the camera-ready version. ## Writing errors As stated in the global comment, we promise to fix these in the camera-ready version. [1] Kawana et al. Unsupervised Pose-aware Part Decomposition for Man-made Articulated Objects, ECCV 2022. --- Rebuttal Comment 1.1: Comment: I thank the reviewers for the detailed reply and the clarifications provided in their rebuttal. I have no further questions or comments at this time.
Rebuttal 1: Rebuttal: # Global comment We thank all the reviewers for their thoughtful feedback. We are encouraged that the reviewers identified our paper as making a good contribution (**mcMG**, **Y464**, **MJHC**, **VtGS**), and found the proposed method to be interesting (**MJHC**), intuitive and effective (**mcMG**), working on a novel task (**mcMG**), and addressing major limitations of previous works (**MJHC**). We are also glad that all reviewers agree that the experiments demonstrate effectiveness and good performance on both synthetic and real data, significantly better than the baseline (**MJHC**), with reasonable experimental settings for meaningful comparisons with previous works (**mcMG**). Below, we respond to the concerns common to several reviewers. ## Comparison to the state of the art (**Y464**, **MJHC**, **HLDh**): To the best of our knowledge, no prior work operates in exactly the same setting as ours, which handles cross-category, multiple articulated objects with various part counts and structures. The closest work is the system-level (handling multiple articulated objects) reconstruction approach CARTO [1], which targets cross-category, multiple articulated objects with a **single** articulated part. We were unable to directly compare against it due to the lack of official code at the time of submission; it was made publicly available only several days ago. To alleviate this problem, we used the SOTA system-level baseline setting from the paper [1]: A-SDF with the ground truth (denoted as A-SDF-GT in our paper) for detection. This baseline provides an upper bound on system-level detection performance and is used as a comparable baseline against CARTO in the paper [1]. 
To address concerns from fellow reviewers to consider another baseline (**Y464**) and to compare against a more recent approach than A-SDF from ICCV 2021 (**HLDh**), we added a new baseline based on the idea from AKBNet [2] (CVPR 2022), denoted as AKBNet-GT in the table below. AKBNet uses A-SDF for shape reconstruction and improves its accuracy by using the motion amount estimated by an additional, improved pose encoder during shape reconstruction. As the official AKBNet code for the encoder has not been released, we use the ground-truth motion amount during evaluation instead. As it uses a category-level shape decoder, we use the same setting as A-SDF-GT-2 in the main paper, which trains on up to the two most frequent part counts per category. Our approach still outperforms AKBNet-GT on the majority of metrics. To avoid an unfair comparison, we did not include another recent work, PPD [3], suggested by fellow reviewer **Y464**, as a baseline. This is because PPD explicitly focuses on shape **abstraction** with unsupervised learning, not accurate shape reconstruction. In contrast, our approach is fully supervised and targets shape reconstruction. ### Shape mAP evaluation | | Fscore@80% | Fscore@90% | CD1@5% | CD1@1% | IoU@25% | IoU@50% | |---------------|------------|------------|---------|---------|--------|--------| | AKBNet-GT | 72.67 | 58.73 | **79.17** | 49.92 | **41.61** | 11.26 | | Ours | **74.77** | **68.38** | 77.39 | **56.53** | 41.35 | **11.63** | ## Effectiveness of the KPF module (**mcMG**, **HLDh**): We have detailed the effectiveness of the KPF module qualitatively through additional ablation studies in Appendix H. The KPF module improves detection and pose estimation, especially for small and distant parts, as visualized in Fig. 3 in the appendix, while suppressing false positives, as shown by the precision score in Table 4 of the main paper. 
As explained in L159-161 in Section 3.6 of the main paper, the reason for this improvement is that query oversampling allows more queries to better cover small parts, which are represented by a small number of points in the input point cloud. We added a qualitative visualization of the KPF module in Fig. 1 of the attached material. We observe that the KPF module also improves detection for occluded parts that have a small number of points in the input point cloud due to occlusion (indicated by the bottom red box in the figure). Furthermore, we have qualitatively demonstrated the effectiveness of kIoU in the KPF module in Fig. 4 of the appendix. Query oversampling (QO) and part fusion (PF) alone with standard 3D box IoU result in false positives for thin parts like doors due to the small overlap between parts, but kIoU effectively suppresses these false positives by considering their overlapping trajectories, as explained in L171 in Section 3.6. ## Larger validation set than training set (**mcMG**, **VtGS**): We generated 188,726 images for training and validation purposes. Due to our limited computational resources and the time budget for running experiments in parallel, we used only 20,000 images for training. One can move the remaining validation images into the training set to make the sizes of the validation and training sets comparable, depending on the available computational resources and time budget. ## Writing improvement: typos, grammatical errors, missing reference to the figure, unclear sentence (**VtGS**, **Y464**): We appreciate the reviewers pointing out the areas to improve in the manuscript. We promise to fix those issues in the camera-ready version. [1] Heppert et al. CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects, CVPR 2023. [2] Liu et al. AKB-48: A Real-World Articulated Object Knowledge Base, CVPR 2022. [3] Kawana et al. Unsupervised Pose-aware Part Decomposition for Man-made Articulated Objects, ECCV 2022. 
Pdf: /pdf/b59a6fe0d35f0a7951af341bbee59c8a93e2e463.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes an end-to-end trainable method for reconstructing multiple articulated objects from a single RGB-D image, consisting of detecting parts, reconstructing part-level shapes, and estimating poses, bounding boxes, and kinematic parameters. The parts are later grouped into instances. The authors propose anisotropic scale normalization for shape reconstruction to accommodate various part sizes and scales. Besides, the authors also propose test-time kinematics-aware part fusion (similar to non-maximum suppression) to reduce false positives when multiple detected results are generated and need to be merged. Evaluation on both synthetic and real data demonstrates the effectiveness of the proposed method. Strengths: - The paper is clearly written and easy to follow. - The authors propose a clean pipeline for reconstructing articulated objects that can be trained end-to-end and inferred straightforwardly without composing multiple networks. Weaknesses: Minor typos: - L38: "a end-to-end" -> "an end-to-end" - L219-L220: "and use cosine scheduler to learning rate of 1e-6" sounds a little strange to me. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. How do the authors choose the revolute origin? It seems that any point on the revolute axis can be chosen as the origin. I think this question also applies to prismatic joints. 2. Is there any explanation why anisotropic scaling is better than the isotropic one? 3. L235-L236: Do you actually mean 20,000 validation images and 168,726 training images? Otherwise, it is strange why more images are used for validation instead of training. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing our paper! ## How to choose the revolute origin? As pointed out, any point on the line formed by the GT revolute origin and the GT joint axis is a correct revolute origin. Thus, for evaluation, we measure the minimum distance (MD) between the GT axis line and the predicted revolute origin. During training, we simply minimized the L2 distance between the GT revolute origin and the prediction, following OPD. As a prismatic joint does not have a GT origin, we only optimize the predicted revolute origin if the matched GT part's kinematic class is the revolute type. ## Why is anisotropic scaling better than isotropic scaling? Although we have explained the reason in L308 of Section 4.3 and L121-122 of Section 3.3 in the main paper, let us recap here for better understanding. With isotropic scaling, the shape decoder needs to learn variations in shapes with different width, height, and depth ratios. Anisotropic scaling normalizes all three sides to unit length, reducing the variation in shapes that the shape decoder needs to learn and making the shape decoder easier to optimize during training. ## Validation data size Please refer to the global comment for a detailed response. ## Writing errors We appreciate the reviewer pointing this out. As stated in the global comment, we promise to fix this in the camera-ready version. --- Rebuttal Comment 1.1: Comment: The rebuttal has resolved my questions. I would like to keep my rating.
Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time
Accept (poster)
Summary: This paper studies the global convergence of gradient descent for training two-layer networks to learn a high-dimensional quartic function. The authors show that GD converges when the sample size is $n = O(d^{3.1})$ and the width of the neural network grows at most polynomially in $d$. The authors also show that any kernel method with $n \ll d^4$ cannot achieve the same order of accuracy. Strengths: First of all, this is an excellent paper on the non-asymptotic mean-field analysis of GD training for two-layer neural networks and its separation from kernel methods. Despite the simplicity of the target function, the convergence analysis requires new ideas and tools. One crucial ingredient of the proof lies in showing that the projected (population) gradient flow dynamics can escape saddle points. The other key ingredient is bounding the coupling error between the empirical particle dynamics and the population dynamics. For the latter, the authors tame the usual exponential growth rate of the coupling error with a delicate investigation of the relationship between the growth of the error and that of the signal. Weaknesses: I do not have many comments on weaknesses, but I have a few questions, mostly relevant to future considerations. See below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The authors established the results for learning a quartic function. Does the proof generalize to a more general class of functions? If so, can the authors highlight the further work to be done? If not, it would be good if the authors could comment on the major challenges. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review, and for mentioning that “this is an excellent paper on the non-asymptotic mean field analysis of GD for training two-layer neural networks.” # Extending to more general activations The framework that we use for analyzing the population gradient flow can potentially be generalized to higher-degree activations. We divide the population gradient flow into three phases, as described in Section 4. Phases 1 and 2 would likely have very similar analyses even if the activations/target function had higher degree. Analyzing Phase 3 with higher-degree activations/target functions would be more challenging, since the velocity function would have more roots. Separately from our techniques for analyzing the population dynamics, we also believe our techniques for analyzing the finite-width and finite-sample setting, by coupling the infinite-width/population loss and finite-width/empirical loss trajectories, can generalize to higher-degree polynomial activations with relatively modest effort, to obtain sample complexity better than NTK. --- Rebuttal Comment 1.1: Title: Keep my rating unchanged Comment: I have read the rebuttal. Thank the authors for addressing my comments. I keep my rating unchanged. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you very much for your reply!
Summary: This paper studies the projected gradient flow for two-layer neural networks in the mean-field regime with polynomial width and a quartic activation function. With data sampled uniformly on the sphere, the authors prove that, to learn a single-index model with an even quartic link function, the neural network needs $n=O(d^{3.1})$ training samples. By contrast, we know that any inner-product kernel ridge regression requires $n\gg d^4$ to learn a quartic target function. This provides a concrete example of practical feature learning in which neural networks outperform kernel methods and the NTK regime when the target function has a low-rank structure. Strengths: Overall, I found the paper well written and easy to follow. The paper presents interesting insights into more accurate models of neural network training. The results obtained by the authors are, in certain cases, as sharp as can reasonably be expected given the problem. Unlike previous analyses of feature learning with two-stage training procedures, this work shows that neural networks can learn both the feature direction and the nonlinear link function when trained in a more practical way with the projected gradient flow. The proof methods may provide further insights toward more general results on feature learning. Weaknesses: 1. The main limitation is that the activation function and the target link function are special: only quartic functions. It would be nice to check whether the proof techniques can handle more general activation and target functions. Another thing that can be improved is the precise order of the width: how wide a neural network is sufficient for this feature learning. 2. There should be a comparison between the results of the current submission and [10]. [10] uses the information exponent to determine the sample complexity for online SGD to learn a single-index target. 
When $\gamma_2=0$ in Assumption 3.2, [10] implies that the sample size should be $n\gg d^{3}\log n$, which is similar to the results in the current submission. Further explanation is needed. 3. It would also be more convincing to visualize Phases 1-3 of the training dynamics in Section 5 with simulations on synthetic or real-world data for the finite-width training dynamics. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. [Ba et al. 2022] also study feature learning and the single-index model when $n\asymp d \asymp m$ with a two-stage training process. [Arnaboldi et al. 2023] and [Berthier et al. 2023] also studied the gradient flow dynamics of two-layer neural networks in high dimensions for a single-index model. Some comparisons with the current paper could be made. 2. In line 88, you mentioned that a similar coupling error was shown in [49], but [49] is for random feature regression in the proportional limit. Can you explain more about this? 3. Line 147 typo. 4. In Theorem 3.4, the sample complexity is $n\ge d^\mu$. Is there an explicit formula or upper bound for the power $\mu$? 5. In Theorem 3.5, you considered the limitation of the inner-product kernel with data points drawn from the unit sphere. Does the data distribution have to be the uniform distribution on the unit sphere? Can Theorem 3.5 be extended to a general rotationally invariant kernel or a Euclidean distance kernel? 6. Line 233: double "Appendix". 7. Line 274: $w$ should be $w_t$? 8. In the analysis, Phases 1 and 2 are controlled by the quadratic components of the activation and target functions, and the dynamics are analogous to a power-method update. Is there any relation between this and the PCA warmup proposed by [Chen and Meka 2020] before SGD training? 9. In Assumption 3.2, we need $\gamma_4\ge 1.1\gamma_2^2$. Why do we need this assumption and the coefficient $1.1$? What if $\gamma_2=\gamma_4=1$? 
=============================================================================================== - Chen, S. and Meka, R., 2020, July. Learning polynomials in few relevant dimensions. In Conference on Learning Theory (pp. 1161-1227). PMLR. - Ba, J., Erdogdu, M.A., Suzuki, T., Wang, Z., Wu, D. and Yang, G., 2022. High-dimensional asymptotics of feature learning: How one gradient step improves the representation. Advances in Neural Information Processing Systems, 35, pp.37932-37946. - Arnaboldi, L., Stephan, L., Krzakala, F. and Loureiro, B., 2023. From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks. arXiv preprint arXiv:2302.05882. - Berthier, R., Montanari, A. and Zhou, K., 2023. Learning time-scales in two-layers neural networks. arXiv preprint arXiv:2303.00055. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors have adequately addressed the potential negative social impact of their work. The conclusion lays out some suggested next steps and in doing so highlights certain current limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and for noting that our work “presents interesting insights into more accurate models of neural network training.” We now address the reviewer’s questions. We will incorporate presentation-related comments in the revision and include a simulation to illustrate Phases 1-3. # Generality of Activation Function We agree that our activation/target function deviates from more realistic settings. We keep the activations and target functions simple to not distract from our main objective of studying unmodified gradient descent in a non-trivial setting which requires avoiding bad stationary points. We note that while many works such as Abbe et al. (2022) study more general activation functions, they use modified algorithms which bypass the question of whether gradient descent (GD) can avoid bad stationary points. To our knowledge, our work is the first to study unmodified GD in the mean-field regime that goes beyond linear/quadratic activations. Also, the framework that we use for analyzing the population gradient flow can potentially be generalized to higher-degree activations. We divide the population gradient flow into three phases, as described in Section 4. Phases 1 and 2 would likely have very similar analyses even if the activations/target function had higher degree. Analyzing Phase 3 with higher-degree activations would be more challenging, since the velocity function would have more roots. Separately from our techniques for analyzing the population dynamics, we also believe our techniques for analyzing the finite-width and finite-sample setting, by coupling the infinite-width/population loss and finite-width/empirical loss trajectories, can generalize to higher-degree polynomial activations with relatively modest effort, to obtain sample complexity better than NTK. 
# Precise Width Up to logarithmic factors and other factors depending on the Legendre coefficients of the activation, the width m can be $d^{3 + \gamma}$, where $\gamma$ can be any arbitrarily small, but positive, universal constant. # Comparison with Ben Arous et al. (2021) Ben Arous et al. (2021) consider a single-neuron student network - this is significantly simpler than our setting where we consider a neural network with $\text{poly}(d)$ width. Still, it is plausible that the sample complexity of GD in our setting is also $d (\log d)^{O(1)}$, since the lowest order term in the Legendre decomposition (which is comparable to the information exponent for the case of spherical data) of our activation and target functions has degree 2. We leave the tight sample complexity for GD as an open question for future work. Our Assumption 3.2 excludes the case $\gamma_2 = 0$, due to the condition that $\gamma_4 \leq c_1 \gamma_2^2$. Our analysis of Phases 1 and 2 makes use of the fact that $\gamma_2$ is nonzero because the particles grow uniformly in magnitude while the second-order term is dominant. # Other Related Works Ba et al. (2022) perform one step of gradient descent on the hidden layer, followed by linear regression to fit the second layer. Meanwhile, we train the hidden layer for a long period of time. Arnaboldi et al. (2023) bound the error between dimension-free dynamics and the true dynamics. We also obtain one-dimensional dynamics, but we do not claim that it is novel. Rather, our main contributions are our convergence analysis of population dynamics (Section 4.2) and of coupling error between population and empirical dynamics (Section 5). Berthier et al. (2023) obtain a super-exponential coupling error, which gives a super-polynomial coupling error in our setting. # Exponent of sample complexity In the sample complexity $d^\mu$, we can set $\mu = 3 + \gamma$ where $\gamma$ is a positive, but arbitrarily small, universal constant. 
# Data Distribution for Kernel Lower Bound While our current proof requires a uniform distribution over the unit sphere, we believe that our proof technique is quite general and can be used to prove similar results for other data distributions. Essentially we only require that the minimum eigenvalue of the matrix $(K(x_i, x_j))_{i,j\in [n]}$ concentrates to its mean with constant probability, and $K(x_i,z)^2$ concentrates to its mean for a fixed $z$. Thus, we leave the technical details as future work. # PCA Warmup Our phases 1 and 2 are not directly related to the PCA warmup of Chen and Meka (2020) - they explicitly threshold points $x$ based on $y(x)$ to identify the low-rank subspace, while we show the neurons automatically achieve high correlation with $e_1$ in Phase 1 and 2. Additionally, in the rank-1 case, Chen and Meka (2020) perform Riemannian GD on a teacher-student setting where both the teacher and student have only one neuron and the same activation, while our model has poly(d) neurons. # Loosening assumption that $\gamma_4 \geq 1.1 \gamma_2^2$ We make use of this assumption in the analysis of Phase 3, Case 2 (omitted from main body due to space constraints). In Phase 3, Case 2, we have $D_{4, t} > 0$ and $D_{2, t} < 0$, which by the assumption $\gamma_4 \geq 1.1 \gamma_2^2$, implies that the distribution $\rho_t$ cannot have all of its mass at a root of the velocity function. Thus the population dynamics will continue to make progress. More generally, we could instead have assumed that $\gamma_4 \geq c \gamma_2^2$ for any constant $c > 1$. Our proof could likely extend to the case $\gamma_2 = \gamma_4 = 1$ (and more generally $\gamma_4 = \gamma_2^2$), using a similar proof by contradiction to show that if all the particles are close to the root of the velocity, then this root must be $\sqrt{\gamma_2}$, or else a contradiction is obtained. 
We exclude this case to simplify the analysis while still having a nontrivial setting where unmodified GD achieves good sample complexity. # Citation to [49] This was a typo - we intended to cite Mei et al. (2019). # References (Mei et al. 2019) arxiv ID: 1902.06015 --- Rebuttal Comment 1.1: Title: Minor Clarification on Precise Width Comment: We wish to add a minor clarification to the "Precise Width" section of our rebuttal - this is not related to any of the other points that we make in the rebuttal. We just wanted to clarify that the width $m$ does not depend on the Legendre coefficients of the activations - we incorrectly said in the rebuttal that it has some factors depending on the Legendre coefficients. So the width $m$, up to $(\log d)^{O(1)}$ factors, is $d^{3 + \gamma}$ where $\gamma > 0$ is an arbitrarily small constant. We note that this change does not affect our submitted manuscript - in our actual submitted manuscript, we correctly calculated that the width only depends on $d$ and does not depend on the Legendre coefficients of the activation. --- Rebuttal Comment 1.2: Title: Thanks for the rebuttal Comment: Thanks for the authors' detailed response. I thank the authors for their clarifications in the rebuttal and appreciate the theoretical contributions of this work. But considering the model assumptions and the lack of a tight sample complexity for GD, I will keep my score. --- Rebuttal 2: Title: Please let us know if you have any questions! Comment: Dear reviewer, since the end of the discussion period is approaching, we just wanted to check if we have addressed your questions, or if you have any additional questions. Thank you very much!
Summary: This paper studies the statistical efficiency of the projected gradient dynamics on the sphere for (polynomial-width) two-layer neural networks in the mean-field regime. In particular, this work proves a sample complexity of $O(d^{3.1})$ for learning a single-index model with an unknown quartic link function. Strengths: The results can be of interest to readers and are technically sound. The notable contributions compared to most related works are that this work - studies the standard projected gradient flow on the sphere rather than unnatural modified dynamics, - shows a separation between two-layer neural networks in the mean-field regime and kernel methods by comparing their sample complexities. Technically, the improvement in particle complexity is also worth noting. Naively, the accuracy with which finite particles approximate the distribution deteriorates exponentially in time, but this study prevents such deterioration by exploiting the problem and model structures. Hence, I think this work certainly contributes to the literature. Weaknesses: - The theory deals not with standard activations such as ReLU, sigmoid, and tanh, but with a special type of activation function (i.e., a fourth-order polynomial). - In my understanding, Abbe et al. (2023) [2] also studies standard optimization dynamics and shows the superiority of the mean-field regime over linear models. A detailed discussion of the relationship to [2] would help clarify the position of the work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could you specify the dependence of $m$ stated in the sentence following Lemma 5.1: ``network width $m$ is a sufficiently large polynomial of $d$''? What does the highest degree of this polynomial depend on? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitation of the paper is well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and for noting that our “results can be of interest to the readers and are technically sound.” Below we address the reviewer’s questions and concerns. # Generality of Activation Function We agree with the reviewer that our activation/target function deviates from realistic settings. We chose to keep the activations and target functions simple to not distract from our main objective of studying unmodified gradient descent in a non-trivial setting which requires avoiding bad stationary points. We note that while many works such as Abbe et al. (2022) study more general activation functions, they use modified algorithms (such as the layerwise/two-stage algorithm used by Abbe et al. (2022)) which avoid the question of whether gradient descent can avoid bad stationary points. To our knowledge, our work is the first to study unmodified gradient descent that goes beyond linear/quadratic activations. Also, the framework that we use for analyzing the population gradient flow can potentially be generalized to higher-degree activations. We divide the population gradient flow into three phases, as described in Section 4. Phases 1 and 2 would likely have very similar analyses even if the activations/target function had higher degree. Analyzing Phase 3 with higher-degree activations/target functions would be more challenging, since the velocity function would have more roots. Separately from our techniques for analyzing the population dynamics, we also believe our techniques for analyzing the finite-width and finite-sample setting, by coupling the infinite-width/population loss and finite-width/empirical loss trajectories, can generalize to higher-degree polynomial activations with relatively modest effort, to obtain sample complexity better than NTK. # Comparison with Abbe et al. (2022) ### We study how gradient descent can escape bad stationary points - Abbe et al. 
(2022) use a two-stage algorithm, only training the hidden layer for O(1) time, which bypasses this challenge. Abbe et al. (2022) use a two-stage algorithm for the dimension-free population dynamics. They train the first layer for O(1) time, and show no guarantees on the error obtained by training the first layer. They then train the second layer weights, which is a kernel regression problem. ### We have a much tighter bound on the coupling error which allows us to analyze the training for O(log d) steps, while Abbe et al. (2022) have a much looser error bound which is super-exponential in the number of steps and could only work with O(1) training time. Technically speaking, in Abbe et al. (2022) the upper bound on coupling error is based on a loose Lipschitzness bound (also see Theorem 1B from Mei et al. (2019)), while we tighten the bound by comparing the growth of the error with the growth of the signal. ### We do not use fresh samples in every iteration, unlike Abbe et al. (2022) - ours is a more realistic setting. This setting causes the weights to be correlated with the samples, thus requiring a more intricate induction on the coupling error together with a uniform concentration bound from Adamczak et al. (2010). ### Our target function does not satisfy the merged-staircase property (MSP). The lowest degree term in our target function has degree 2. The more recent work of Abbe et al. (2023) generalizes the MSP to functions with more “leaps,” but they use a layerwise training algorithm, and apply non-standard projection steps while training the first layer (separating the coordinates into large and small coordinates and applying different projections to each subset). Thus, Abbe et al. (2023) does not study unmodified gradient descent. 
# Precise Order of Width Up to logarithmic factors and other factors depending on the Legendre coefficients of the activation, the width m can be $d^{3 + \gamma}$, where $\gamma$ can be any arbitrarily small, but positive, universal constant. We will explicitly mention it in the revised version. # References (Adamczak et al. 2010) Quantitative estimates of the convergence of the empirical covariance matrix in Log-concave Ensembles (Mei et al. 2019) Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit (Abbe et al. 2023) SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics --- Rebuttal Comment 1.1: Title: reply to authors Comment: I appreciate the author's response. The authors have adequately addressed my concerns. I will keep the evaluation. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you very much for your reply and for your review! --- Rebuttal Comment 1.2: Title: Minor Clarification Comment: We wish to add a minor clarification to the "Precise Width" section of our rebuttal - this is not related to any of the other points that we make in the rebuttal. We just wanted to clarify that the width $m$ does not depend on the Legendre coefficients of the activations - we incorrectly said in the rebuttal that it has some factors depending on the Legendre coefficients. So the width $m$, up to $(\log d)^{O(1)}$ factors, is $d^{3 + \gamma}$ where $\gamma > 0$ is an arbitrarily small constant. We note that this change does not affect our submitted manuscript - in our actual submitted manuscript, we correctly calculated that the width only depends on $d$ and does not depend on the Legendre coefficients of the activation.
Summary: Analyzed the (projected) gradient flow dynamics of a two-layer neural network in the mean-field regime in learning a specific degree-4 single-index target function. The main contribution is a polynomial-time convergence guarantee and a sample complexity that outperforms kernel methods. This differs from the naive mean-field analysis where the network width can grow exponentially. Strengths: Most existing mean-field analyses of two-layer neural network focus on the optimization aspect, with the goal of showing global convergence of the training loss, but the generalization properties and sample complexity in learning certain class of target functions are largely unknown. The only exception that I know of is (Abbe et al. 2022). So this submission definitely tackles a challenging and interesting problem. On the technical side, the analysis differs from many previous works on learning single-index model with two-layer neural network, where the link function mismatch is typically handled by training the second-layer parameters. Weaknesses: I have the following concerns. 1. The problem setting is restrictive and convoluted, which is rather underwhelming given the promise of a *clean mean-field analysis* with no unnatural modifications. (i) The training algorithm is the unmodified projected gradient flow, but time discretization is not studied. (ii) The target function is assumed to be an even degree-4 function which is quite restrictive. Moreover, the trained neural network also uses a somewhat unnatural degree-4 activation function, and the Legendre coefficients are restricted by Assumption 3.2. The fact that this submission is lengthy and technically demanding despite these strong assumptions raises the question of whether the same analysis can be extended to more general problem settings. 2. The comparison against prior works is not sufficient, and consequently, the significance of the results cannot be easily evaluated. 
* In the context of learning single-index models, the sample complexity of gradient descent is determined by the information exponent from (Ben Arous et al. 2021). This quantity was originally defined in the analysis of online SGD, but the authors should discuss whether the same mechanism is present in the ERM setting. Specifically, my current reading is that this submission assumes an information exponent of 2, due to the factor of $\sigma_2^2\gamma_2$ in the denominator of the stopping time $T_*$ in Theorem 3.3. Please clarify if this is the case. (Ben Arous et al. 2021) *Online stochastic gradient descent on non-convex losses from high-dimensional inference* * Related to the previous point, it is also well-known that trained two-layer neural networks can outperform kernel models when the link function is a high-degree polynomial with low information exponent. For example, the staircase functions in (Abbe et al. 2022) can be learned with linear sample complexity. While these prior results typically employ a layer-wise training procedure, the similarity in the main message may undermine the significance of the theoretical results in this submission. Therefore, I feel that the authors should use more space to highlight the technical differences and challenges (no bias units and retraining of second layer, etc.). (Abbe et al. 2022) *The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks* * The simplification of population dynamics into some low-dimensional object via symmetry has appeared in many prior works, such as the dimension-free PDE in (Abbe et al. 2022) and (Hajjar and Chizat 2022). The authors need to explain the difference and new ingredients in Section 4.1.
(Hajjar and Chizat 2022) *Symmetries in the dynamics of wide two-layer neural networks* * The claim that prior mean-field analysis only proved "generic exponential growth rate in the coupling error" is not entirely accurate. For the noisy gradient descent setting, recent papers have provided uniform-in-time propagation of chaos estimates, for example see (Chen et al. 2022) (Suzuki et al. 2023). These results hold for finite-width network and discrete-time algorithm, but the current submission only handles the finite-width error. It would be a good idea to comment on the possibility/difficulty of handling the discrete projected gradient descent algorithm. (Chen et al. 2022) *Uniform-in-time propagation of chaos for mean field Langevin dynamics* (Suzuki et al. 2023) *Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction* Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would be happy to update my evaluation if the authors could address the concerns and questions in the Weaknesses section. #####################**Post-rebuttal Update**##################### The authors addressed some of my concerns; I have therefore increased my score to 5. In the revised manuscript, please include a detailed comparison against relevant prior works, as done in the rebuttal. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their very helpful comments, and for noting that our submission “definitely tackles a challenging and interesting problem.” We now respond to each of the reviewer’s concerns. # Discrete Time We were able to extend our results to time discretization, and we will include this result upon revision. Specifically, we can prove that projected gradient descent with 1/poly(d) step size and a poly(d) width neural network can achieve low population loss in poly(d) iterations as a relatively direct extension of our existing results (please see the next paragraph). To extend to discrete time, we emulate standard bounds on the discretization error of Euler’s method. To bound the error per time step, we bound the smoothness of the empirical gradient, using concentration bounds similar to those we used in the analysis of the empirical dynamics. # Generality of Activations We agree with the reviewer that our activation/target function deviates from realistic settings. We chose to keep the activations and target functions simple to not distract from our main objective of studying unmodified gradient descent in a non-trivial setting which requires avoiding bad stationary points. We note that while many works such as Abbe et al. (2022) study more general activation functions, they use modified algorithms (such as the layerwise/two-stage algorithm used by Abbe et al. (2022)) which avoid the question of whether gradient descent (GD) can avoid bad stationary points. To our knowledge, our work is the first to study unmodified GD in the mean-field regime beyond linear/quadratic activations. Also, the framework that we use for analyzing the population gradient flow can potentially be generalized to higher-degree activations. We divide the population gradient flow into three phases, as described in Section 4. Phases 1 and 2 would likely have very similar analyses even if the activations/target function had higher degree. 
Analyzing Phase 3 would be more challenging, since the velocity function would have more roots. Separately, we believe our techniques for coupling error of the finite-width/finite-sample trajectory can generalize to higher-degree polynomial activations with relatively modest effort, to obtain sample complexity better than NTK. # Comparison to Ben Arous et al. (2021) In the setting of Ben Arous et al. (2021), the goal is to learn a student neural network which has a single neuron - this is significantly simpler than our setting where we consider a neural network with $\text{poly}(d)$ width. It is plausible that the sample complexity of GD in our setting is also $d (\log d)^{O(1)}$, since the lowest order term in the Legendre decomposition (which is comparable to the information exponent for the case of spherical data) of our activation and target functions has degree 2. We leave the tight sample complexity for GD as an open question for future work. # Comparison to Abbe et al. (2022) ### We study how gradient descent can escape bad stationary points - Abbe et al. (2022) use a two-stage algorithm, only training the hidden layer for O(1) time, which bypasses this challenge. Abbe et al. (2022) use a two-stage algorithm for the dimension-free population dynamics. They train the first layer for O(1) time, and show no guarantees on the error obtained by training the first layer. They then train the second layer weights, which is a kernel regression problem. ### We have a much tighter bound on the coupling error which allows us to analyze the training for O(log d) steps, while Abbe et al. (2022) have a much looser error bound which is super-exponential in the number of steps and could only work with O(1) training time. Technically speaking, in Abbe et al. (2022) the upper bound on coupling error is based on a loose Lipschitzness bound (also see Theorem 1B from Mei et al. (2019)), while we tighten the bound by comparing the growth of the error with the growth of the signal. 
### We do not use fresh samples in every iteration, unlike Abbe et al. (2022) - ours is a more realistic setting. This setting causes the weights to be correlated with the samples, thus requiring a more intricate induction on the coupling error together with a uniform concentration bound from Adamczak et al. (2010). ### Our target function does not satisfy the merged-staircase property (MSP). The lowest degree term in our target function has degree 2. The more recent work of Abbe et al. (2023) generalizes the MSP to functions with more “leaps,” but they apply non-standard projection steps while training the first layer (separating the coordinates into large and small coordinates and applying different projections to each subset), rather than studying unmodified GD. # Uniform-in-time propagation of chaos We will cite Suzuki et al. (2023) and Chen et al. (2022) (though the work of Suzuki et al. (2023) was posted after the NeurIPS deadline). **However, it is likely highly non-trivial to apply current analyses of mean-field Langevin dynamics to obtain good test error and sample complexity.** Assuming Theorem 4 of Mei et al. (2018) is tight, the inverse temperature $\lambda$ has to be at least proportional to the dimension $D$ for Langevin dynamics to achieve good test error. However, this causes the log-Sobolev constant in Suzuki et al. (2023) to be $e^{-D}$, and hence their runtime is $e^D$. In comparison, **we are able to extend our techniques to discrete-time projected gradient descent with $\text{poly}(d)$ iterations.** Also, Chen et al. (2022) do not study finite data or discrete time. # Dimension-free dynamics We do not claim that our reduction to one-dimensional dynamics is novel. The main novelty of our work is in our convergence analysis of the 1-dimensional dynamics and the coupling error between the population and empirical dynamics. # References (Adamczak et al. 2010) arXiv ID: 0903.2323 (Mei et al. 2018) arXiv ID: 1804.06561 (Mei et al. 2019) arXiv ID: 1902.06015 (Abbe et al. 2023) arXiv ID: 2302.11055
NeurIPS_2023_submissions_huggingface
2023
Expert load matters: operating networks at high accuracy and low manual effort
Accept (poster)
Summary: This work considers a real-world setting where misclassified examples are reviewed post-hoc by human experts, and offers a near-optimal trade-off between such examples (the expert load) and classifier accuracy. The authors propose to use a curve of confidence versus expert load, using the latter as a sliding scale to arrive at that optimum. They propose a no-binning method that kernelises the said (COC) curve, and arrive at a formulation to minimize the expert load value irrespective of the network parameters so that the formulation may be added to the error minimisation loop. Extensive experiments are performed, such as those on OOD detection where overconfidence is significant, and imbalance, where traditional OC curves are less meaningful because the underlying metrics are too. Strengths: The idea has a high likelihood of originality and novelty, to the extent that benchmarking has had to be done with the COC-derived-loss-augmented CE and all sorts of CE-derived losses, and not competing methods. The idea is developed in a simple manner that persuades the reader's logic. Experimentation is fairly complete, even though a discussion of mixture ratios of CE and AUCOC losses was expected too. Weaknesses: Studying how mixing two losses with such different gradient-magnitude properties affects the rate and precision of convergence is something I'd have done. The $\tau$, being the bandwidth of the kernel, seems to be extraordinarily large at both levels in Table 1. I'm not sure if I'd be offloading up to a third of testing examples to the expert. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Equation 3 does not mention how AUCOC is determined; it just computes the AUCOC loss from it. Am I right in assuming it is determined in the same way as ECE, i.e., accuracy vs. confidence, as is strongly suggested by the presence of bins? How do you define confidence?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the proposed method, and we are happy to answer the raised questions. AUCOCLoss (Equation 3) exploits the definition of AUCOC (equation in Section 3.2.2). Specifically, to build a differentiable loss out of this definition, we employ KDE to define E[c|r]p(r) and tau0, as explained in Section 3.3.1. The metric AUCOC, by contrast, as it does not need to be differentiable, is built in the same fashion as classic AUROC, i.e., as explained in lines 215-224. Regarding the study of mixing the two losses, it is definitely an interesting point. We tried various weighting factors and found that favourable weighting factors for AUCOCLoss all fell between 1 and 10, with no single preferred value in this range, but with consistent improvement over the baselines. --- Rebuttal Comment 1.1: Comment: Thank you for the response; we will take the additional explanations into consideration for further discussions. Best regards, AC
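To make the construction discussed above concrete, here is a minimal hedged sketch of how a COC curve and its area could be computed from held-out predictions, in the same fashion as classic AUROC. This is an illustration, not the authors' implementation: confidence is assumed to be the maximum softmax probability, the lowest-confidence samples are assumed to be deferred first, and the function names are hypothetical.

```python
import numpy as np

def coc_curve(confidences, correct):
    """Confidence operating characteristic: accuracy on the retained
    samples (y-axis) vs. the fraction of samples deferred to the human
    expert, i.e. the expert load (x-axis). Least confident samples are
    deferred first."""
    order = np.argsort(confidences)              # ascending: deferred first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    loads, accs = [0.0], [correct.mean()]
    for k in range(1, n + 1):                    # defer the k least confident
        kept = correct[k:]
        loads.append(k / n)
        # convention: if everything is deferred, the expert is assumed perfect
        accs.append(kept.mean() if kept.size else 1.0)
    return np.array(loads), np.array(accs)

def aucoc(confidences, correct):
    """Area under the COC curve via the trapezoidal rule, as for AUROC."""
    r, c = coc_curve(confidences, correct)
    return float(np.sum((r[1:] - r[:-1]) * (c[1:] + c[:-1]) / 2))
```

For example, with confidences [0.9, 0.8, 0.6, 0.4] and correctness [1, 1, 0, 0], deferring the two least confident samples already yields accuracy 1.0 on the rest, and the area works out to 41/48 ≈ 0.854. The paper's contribution is a smoothed (KDE-based), differentiable version of this quantity that can be trained against; the sketch above covers only the non-differentiable evaluation metric.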
Summary: The authors present a "confidence operating characteristic" curve to represent tradeoff between accuracy and numbers of samples delegated to human experts. To maximize the area under this curve, the authors propose a new loss. The authors run classification experiments on computer vision and medical image datasets. Strengths: The paper tackles a problem that is clearly important -- namely, minimizing the human effort needed in a human-in-the-loop system and quantifying the "human effort" portion. However, see weaknesses section below. Weaknesses: - The references list looks reasonable (35 references), but for whatever reason, the related works section seems very sparse and lacking in relevant bodies of work. Upon closer inspection, it looks like a significant portion of those references (10+) are not related, just part of the motivation. This work is effectively "active learning," which is conspicuously absent from the paper entirely. I'm not super familiar with active learning, but no doubt there is previous work that already tackles the problems introduced here -- and it's not clear how this paper is different. For example, here's a highly cited paper in the area: "Cost-Effective Active Learning for Deep Image Classification" 2017, https://arxiv.org/abs/1701.03551. The paper here proposes a metric that minimizes annotation cost -- in particular, they ask humans to annotate highly-unconfident samples. This seems like a reasonable metric and it's not clear why end-to-end minimizing cost is better, for example. (I'm sure it could be justified) Granted, the setup is slightly different, but I'm sure a more thorough search will yield more relevant papers. - The paper doesn't cite previous metrics for "human effort", how they are fallible, and how the proposed metric fixes that problem. As a result, it's not clear why this metric is preferred to other variants. 
- Experiments are performed on MNIST variants, CIFAR100 and TinyImageNet, but do these results extend to ImageNet for example? A full ImageNet run isn't needed; just a few epochs showing that results are trending positively for your method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Comment about active learning and related work. We believe there is a misunderstanding, as the proposed method does not perform active learning. Active learning and our work have two inherently different goals. Active learning aims to include the human expert in the loop **during training**. In contrast, our work focuses on reducing the load on the human expert **at deployment**, i.e., when the model is used for inference. In many applications, e.g., medical imaging, to meet high accuracy requirements, samples for which a model is not certain are delegated to experts. Our goal is to find training losses that take this delegation into account. Consequently, our study has primarily concentrated on a different literature than active learning, namely confidence calibration, which also explicitly focuses on predictive confidences and ultimately aims to enhance human-AI interaction. We outlined the most recent and relevant methodologies in this realm. Moreover, our investigation explores the allocation of samples for human analysis during deployment. This quantity is a recognised metric, and curves analogous to COC have been employed before, as in [1]. In response to the reviewer's suggestion, to further improve our work, we will incorporate additional references to example applications that use similar metrics [2, 3, 4]. To avoid any future misunderstandings or doubts, we will also add a discussion to the main paper where we explicitly state the differences with active learning. # Comment about "human effort" metrics. To the best of our understanding of this comment, we believe this is related to the misunderstanding about the relation of this work to active learning. In this work we are only interested in the number of samples delegated to a human expert for analysis **at deployment time**. This number is already an established metric and curves similar to COC have been used before [1].
Following the reviewer's suggestion, we can add additional citations to example applications using similar quantities [2,3,4]. However, to the best of our knowledge, this is the first work that incorporates AUCOC in a loss function in a differentiable way. We acknowledge that in active learning, "human effort" may be quantified in different ways, but this work is fundamentally different than active learning, as we discuss in detail as response to the previous concern of the reviewer. [1] Gorski, N., V. Anisimov, E. Augustin, O. Baret, and S. Maximov (2001). Industrial bank check processing: the a2ia checkreadertm. International Journal on Document Analysis and Recognition 3(4), 196–206.\ [2] Dvijotham, K.(., Winkens, J., Barsbey, M. et al. Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians. Nat Med 29, 1814–1820 (2023). \ [3] Leibig, C. et al. Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis. Lancet Digit. Health 4, e507–e519 (2022).\ [4] Hendrycks, D. & Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In Proceedings of International Conference on Learning Representations (ICLR) (OpenReview.net, 2017). # ImageNet preliminary results. Larger-scale datasets are indeed very interesting. However, resource and energy consumption of experimenting with such datasets at the breadth we present - which requires training from scratch and assessing multiple aspects of the model with different data sets - is extremely high. Following the reviewer's suggestion, we report preliminary results on ImageNet in Table 3 of the PDF uploaded in the "global" rebuttal response, running AUCOCLoss and the cross-entropy baseline for 15 epochs with ResNet-50. Noticeably, compared to the baseline, AUCOCLoss shows favourable preliminary results. 
--- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: Thanks to the authors for a thorough explanation: - The distinction between train-time and deployment-time makes a lot of sense. I now see why the related works section mentions the fields that it does. - It's also helpful to know these metrics are already in use and established; I missed that in the paper before. - I also appreciate the ImageNet run. The preliminary results look convincing to me. I've bumped my rating from Reject to Borderline accept, as my old rating was based on a misunderstanding of the paper's contributions.
Summary: The paper aims to address the trade-off between model accuracy and model confidence results. The authors propose a novel loss function called AUCOC, which maximizes the area under the confidence operating characteristic curve. They evaluate the performance of their approach on various image classification and medical image classification datasets. Strengths: The problem being addressed is of significant importance and has been widely studied. Weaknesses: 1. In terms of empirical evaluation, the authors only assess the performance on image classification datasets. As this is primarily an empirical paper, it would be beneficial to see results on different types of datasets, including text data, tabular data, and other diverse datasets. Moreover, I am curious to know how the proposed model performs on larger-scale datasets, as the ones evaluated in the paper (e.g., CIFAR100 and Tiny-ImageNet) may not be sufficiently large-scale. 2. Furthermore, when it comes to out-of-distribution (OOD) detection results, it would be valuable for the authors to compare their proposed methods with commonly used approaches such as MSP, MaxLogit, and others. The proposed method is commendably simple and straightforward. However, I believe that a more comprehensive evaluation is necessary to demonstrate the effectiveness of the proposed approach. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See cons above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
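For reference, the detection scores the reviewer names here (MSP, MaxLogit) and the energy-based score discussed in the rebuttal below are all simple functions of a classifier's logits. A minimal sketch of the standard formulas, not code from the paper (higher score = more confidently in-distribution):

```python
import numpy as np

def ood_scores(logits):
    """Three standard OOD-detection scores from a single logit vector:
    MSP (max softmax probability), MaxLogit, and the energy-based score
    (logsumexp of the logits). Higher means more in-distribution."""
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max()                    # stabilise the exponentials
    probs = np.exp(z) / np.exp(z).sum()
    return {
        "msp": float(probs.max()),
        "max_logit": float(logits.max()),
        "energy": float(logits.max() + np.log(np.exp(z).sum())),  # logsumexp
    }
```

An OOD detector then thresholds one of these scores (possibly after temperature scaling, as in ODIN) and is evaluated via AUROC between in- and out-of-distribution samples.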
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the importance of the task and appreciating the simplicity of the proposed method. We are happy to address the raised concerns. # Comment about additional datasets. We would like to point out that the presented article is not primarily an empirical study, as the reviewer suggests. Considerable modelling effort is required to formulate the loss, which is a novel contribution that is appreciated by R1 and R4. We aimed for a clear and easy explanation, and, to this end, we placed all lengthy derivations in the Appendix. While we appreciate that demonstrations on diverse types of data can be beneficial, there are two main reasons why we refrain from this: (i) the aim here is to present a novel loss and evaluate it consistently with experiments from the baselines [1, 2], and (ii) imaging data is one of the most challenging data types used in many different ML articles. Therefore, we chose to focus on explaining the loss well and demonstrating its different aspects on well-established datasets. Larger-scale datasets are indeed very interesting; however, the resource and energy consumption of experimenting with such datasets at the breadth we present - which requires training from scratch and assessing multiple aspects of the model with different datasets - is extremely high. We report some initial results on ImageNet here to address the reviewer's concerns. Results are summarised in Table 3 of the PDF uploaded in the "global" rebuttal response, running AUCOCLoss and the cross-entropy baseline for 15 epochs with ResNet-50. Noticeably, compared to the baseline, AUCOCLoss shows favourable preliminary results in all the metrics. [1] Karandikar, A., N. Cain, D. Tran, B. Lakshminarayanan, J. Shlens, M. C. Mozer, and B. Roelofs (2021). Soft calibration objectives for neural networks. In NeurIPS.\ [2] Mukhoti, J., V. Kulharia, A. Sanyal, S. Golodetz, P. Torr, and P. Dokania (2020). 
Calibrating deep neural networks using focal loss. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Advances in Neural Information Processing Systems, Volume 33, pp. 15288–15299. Curran Associates, Inc. # Comment about additional OOD experiments. We would like to highlight that the detectors provided in the main paper are already the commonly used approaches: AUROC MSP [3] (pre temperature scaling) and ODIN (post temperature scaling) [4], even though we did not mention the original names explicitly, but only described them in lines 275-277. This was also adopted by the baseline [5]. We thank the reviewer for pointing this out, and we will add the names to the paper to make it clearer and more explicit. Following the reviewer's suggestion, in addition we report the results with two other OOD detectors, MaxLogit [6] and EBM [7], for SVHN, CIFAR-C under Gaussian noise, and additionally CIFAR-C under all the 15 corruptions from [8]. Results are summarised in Table 2 of the PDF uploaded in the "global" rebuttal response. The new results are consistent with those presented in the paper, i.e., for every detector and dataset, AUCOCLoss provides the best OOD detection performance and, in almost all the cases, also the second-best. [3] Hendrycks, D., & Gimpel, K. (2016). A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR.\ [4] Hsu, Yen-Chang, Yilin Shen, Hongxia Jin and Zsolt Kira. “Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data.” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): 10948-10957.\ [5] Mukhoti, J., V. Kulharia, A. Sanyal, S. Golodetz, P. Torr, and P. Dokania (2020). Calibrating deep neural networks using focal loss. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Advances in Neural Information Processing Systems, Volume 33, pp. 15288–15299. 
Curran Associates, Inc.\ [6] Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.\ [7] Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.\ [8] Hendrycks, D. and T. Dietterich (2019). Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations --- Rebuttal 2: Title: Approaching end of the author-reviewer discussion phase Comment: We would like to kindly draw the Reviewer's attention to the fact that the discussion phase is soon reaching its conclusion. We did our best to thoroughly address the raised concerns and we are eager to engage further, should any additional clarification be beneficial. --- Rebuttal Comment 2.1: Comment: Thank you for the response; we will take the additional explanations into consideration for further discussions. Best regards, AC
Summary: This paper proposes a new loss, AUCOC loss, to improve the network's accuracy and prediction confidence. The loss aims to reduce the number of errors made by the algorithm and thus also the number of delegated samples to domain experts. The proposed loss focuses on maximizing the area under the COC curve during training in a differentiable manner. The AUCOC loss is used as a complement to the original network's training loss, resulting in increased classification accuracy, better OOD sample detection, and on-par calibration performance. Strengths: The usage of the ''area under the confidence operating characteristics'' curve as an additional loss (being differentiable) is a novel idea. The paper is well written, the core method is well explained, and the experimental results back the claims of the paper. The proposed AUCOC loss outperforms other loss functions. The possible weakness of only using the AUCOC loss is clearly described. Weaknesses: The title is somewhat misleading. A clear reference to which CNNs are used for the experiments is needed, especially since different CNNs were used for the experiments. How does ResNet-50 behave for CIFAR100 using the AUCOC loss and how does Wide-Resnet-28-10 behave on TinyImageNet? Was the latter Wide-Resnet also used for Table 4? Why was only the corruption type Gaussian noise evaluated for the CIFAR 100-C experiments? The paper wants to minimize the expert load and to be able to predict OOD samples correctly. So far, the paper shows that the accuracy increases and the amount of delegated samples decreases in the OOD setting. While in its current state, the paper has important contributions, it would be interesting to explore if the proposed method also influences the prediction behavior of the CNN. 
[1] analyses the prediction behavior of CNNs and finds a shape-bias cue conflict: CNNs tend to recognize texture rather than shape, which is in contrast to human vision behavior, which recognizes objects based on their shapes. Therefore, an obvious step would be to investigate how the networks predict objects (i.e., shape or texture), and if the AUCOC can also influence that behavior. Line 149-153: This or/and is confusing. When does a higher AUCOC indicate lower number of samples delegated to humans *and* also a higher accuracy, and when is it one of these results? Typos: References to Eq. 3 are sometimes misleading, while a reference to the Eq. in line 153 would sometimes be more appropriate. Line 173: “estimate”: choose Line 178: Leibniz integral rule [1] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. ICLR 2019 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. It would be interesting to see if the proposed AUCOC loss can influence the CNN prediction behavior to be more in line with humans resulting in a lower need for experts. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper addressed the limitations of why the AUCOC loss is rather used in conjunction with classical CNN loss functions, like cross-entropy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
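For readers unfamiliar with the COC curve discussed in this review, here is a minimal sketch of one plausible variant of the metric: delegate the least-confident samples to experts and measure accuracy on the retained ones. The function names, the delegation convention, and the toy data are illustrative assumptions, not the paper's exact formulation; in particular, the paper's AUCOC loss is additionally made differentiable for training, which this sketch does not attempt.

```python
import numpy as np

def coc_curve(confidences, correct):
    """One plausible reading of the confidence operating characteristic:
    delegate the k least-confident samples to experts and measure accuracy
    on the retained samples; a fully delegated batch scores 1.0."""
    order = np.argsort(confidences)                  # least confident first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    taus = [k / n for k in range(n)] + [1.0]         # fraction delegated
    accs = [correct[k:].mean() for k in range(n)] + [1.0]
    return np.array(taus), np.array(accs)

def aucoc(confidences, correct):
    """Area under the COC curve via the trapezoidal rule."""
    taus, accs = coc_curve(confidences, correct)
    return float(0.5 * np.sum((accs[1:] + accs[:-1]) * np.diff(taus)))

# Toy data: higher confidence is more likely to be correct
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
corr = rng.uniform(size=1000) < conf
score = aucoc(conf, corr)  # higher is better; 1.0 means no errors at any budget
```

Under this convention, a "shift up" of the curve (better accuracy at the same delegation budget) and a "shift left" (fewer delegated samples at the same accuracy) both increase the area, matching the or/and discussion in the rebuttal below.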
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the proposed work and helping us improve the explanations. We are happy to address the raised doubts and questions. # Comment about employed architectures and additional experiments on CIFAR100 and Tiny-ImageNet. We thank the reviewer for pointing this out. Due to space reasons, we provided information on which architectures have been used in Appendix 7. To be consistent with the baseline [1], we used Wide-Resnet-28-10 for CIFAR100 (OOD in Table 4 as well) and Resnet-50 for Tiny-Imagenet. Following the suggestion of the reviewer, we report results on CIFAR100 with Resnet-50 and on Tiny-Imagenet with Wide-Resnet-28-10 in Table 1 of the PDF uploaded in the "global" rebuttal response. Consistently with the results in the paper, AUCOCLoss is better than all the baselines in terms of accuracy and AUCOC, and it delegates fewer samples. [1] Karandikar, A., N. Cain, D. Tran, B. Lakshminarayanan, J. Shlens, M. C. Mozer, and B. Roelofs (2021). Soft calibration objectives for neural networks. In NeurIPS. # Comment about OOD setup. We evaluated the proposed method on OOD, among the various tasks. Consistently with our baseline [2], we provided an example of stronger dataset shift with SVHN and a weaker one with CIFAR-C under Gaussian noise. Following the reviewer's suggestion, we report the average AUROC(%) on all the 15 corruptions provided in [3] on CIFAR-C. Results are presented in Table 2 of the PDF uploaded in the "general" rebuttal response. Please note that we explicitly referred to the two detectors already employed in the main paper with their commonly used names, i.e., MSP and ODIN. Following the suggestion of R3, we added MaxLogit and EBM to further consolidate the presented findings. The new results are consistent with those presented in the paper, i.e. AUCOCLoss provides the best OOD detection performance for all the experiments and, in almost all the cases, also the second-best. 
We would like to highlight that the current method does not explicitly enforce networks to focus on "human-like" learning, e.g., focusing more on shapes rather than textures, as we are neither directly acting on representation learning, nor providing human feedback to the network during training; therefore we do not, at the moment, expect an improvement towards this end. However, it is definitely an interesting direction for future investigation, to reduce the need for expert analysis in a human-AI system. [2] Mukhoti, J., V. Kulharia, A. Sanyal, S. Golodetz, P. Torr, and P. Dokania (2020). Calibrating deep neural networks using focal loss. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Advances in Neural Information Processing Systems, Volume 33, pp. 15288–15299. Curran Associates, Inc.\ [3] Hendrycks, D. and T. Dietterich (2019). Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations # Comment about AUCOC improvement explanation (lines 149-153). We thank the reviewer for drawing attention to it, thus helping clarify the explanation. In lines 149-153 we aim to explain what an improvement, i.e. increase, in AUCOC could practically correspond to. In that paragraph, we wanted to clarify that there are two factors which contribute to an increase in AUCOC: a decrease in the number of samples delegated to human experts (given the same network accuracy) and an increase in the accuracy for the samples that are not delegated but analysed only by the network (given the same human workload). These two aspects could manifest either individually, if the AUCOC improvement is generated by just a shift "up" or "left" of COC, or in a combined way. Hence the "or/and". The example provided in Figure 1a of the paper shows an improvement in both axes ("and" case) and the proposed loss function does not favour one specific behaviour. 
Figure 1 in the PDF uploaded in the "global" rebuttal response provides an example of shifts "up" and "left" ("or" cases). From the AUCOC metric alone, it is not possible to infer which mechanism is taking place. --- Rebuttal 2: Title: Approaching end of the author-reviewer discussion phase Comment: We would like to kindly draw the Reviewer's attention to the fact that the discussion phase is soon reaching its conclusion. We tried to thoroughly address the raised concerns and we are eager to engage further, should any additional clarification be beneficial. --- Rebuttal Comment 2.1: Comment: I appreciate the authors' response and their additional evaluations. Thus, I will increase my score.
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to read our paper and for providing insightful feedback and input. In each rebuttal we addressed the individual concerns of the reviewers and we are happy to respond to any additional doubts or questions. In the PDF uploaded in this "general" response, we provide the following additional results and clarifying figures, following reviewers' suggestions: - Additional experiments on CIFAR100 and Tiny-Imagenet; - Additional OOD experiments, with new OOD detectors and more CIFAR-C corruptions; - Preliminary ImageNet results; - A toy example of AUCOC improvement. Pdf: /pdf/0f471e2d50e240c643d7ed3a828377a4490ddf95.pdf
NeurIPS_2023_submissions_huggingface
2023
A Unified Approach for Maximizing Continuous DR-submodular Functions
Accept (poster)
Summary: In this work, the authors present a framework for maximizing continuous DR-submodular functions over a range of settings and oracle access types. To achieve this, they employ a variant of the Frank-Wolfe algorithm which yields the first guarantees in some cases and results comparable to the SOTA in others. The paper is generally well-written, and I have reviewed and found most of the proofs to be accurate. However, I do have a few significant concerns that I would like to highlight as weaknesses. Strengths: **Major:** 1. This paper presents the first regret analysis with bandit feedback for stochastic DR-submodular function maximization. **Minor:** 1. Reducing computational complexity by avoiding projections in two cases. 2. Obtaining the first results on offline DR-submodular maximization over general convex sets and down-closed convex sets. Weaknesses: **Major concerns:** 1. This paper lacks empirical evaluation, which is crucial since the main contribution of this work is on avoiding computationally expensive projections. 2. Motivation. In my opinion, this work lacks a very important paragraph/subsection on the importance of the setting where the oracle provides access to only the function value rather than the gradient of the function, since the main results in this work are in this setting. For example, provide some applications in this setting. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Can you please describe the main novelty of this framework? 2. Besides the Online Stochastic DR-submodular maximization with Bandit Feedback result, the other results of this paper seem to be a straightforward extension of prior works. The authors need to highlight the challenges faced when proving these more general results and specify how they managed to overcome the challenges. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
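As background for the Frank-Wolfe meta-algorithm discussed throughout this review and the rebuttals, here is a minimal sketch of the classic continuous-greedy variant with an exact gradient oracle, i.e., the well-known baseline that the paper generalizes, not the paper's unified algorithm. The quadratic toy objective, the box constraint, and all names are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_dr_submodular(grad, lmo, d, iters=100):
    """Classic continuous-greedy / Frank-Wolfe meta-algorithm for monotone
    DR-submodular maximization over a down-closed convex set K containing 0.
    grad(x): gradient oracle; lmo(g): linear maximization oracle returning
    argmax_{v in K} <g, v>.  Achieves a (1 - 1/e)-approximation up to
    O(1/iters) error for smooth monotone DR-submodular F."""
    x = np.zeros(d)                       # start at the origin
    for _ in range(iters):
        v = lmo(grad(x))                  # best feasible ascent direction
        x = x + v / iters                 # small step toward v; x stays in K
    return x

# Toy instance: F(x) = <a, x> - 0.5 x^T H x with H >= 0 entrywise has a
# nonpositive Hessian (-H), so F is DR-submodular; a is chosen large enough
# that F is monotone on the box K = [0, 1]^2.
a = np.array([2.0, 2.0])
H = np.array([[1.0, 0.5], [0.5, 1.0]])
grad = lambda x: a - H @ x
lmo = lambda g: (g > 0).astype(float)     # LMO for the box constraint
x = frank_wolfe_dr_submodular(grad, lmo, d=2)   # converges to [1, 1]
```

Note how every iterate is a convex combination of feasible directions, so no projection is ever needed; this is the projection-free property the review and rebuttal debate.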
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our paper. **Weaknesses** 1. For this concern, there are two points we would like to highlight. (1) First, we would like to gently push back on the statement mentioned in the concern that "the main contribution of this work is on avoiding computationally expensive projections." Avoiding potentially computationally expensive projections did inspire our choice of using Frank-Wolfe type methods, similar to most SOTA works for offline and online constrained DR-submodular optimization. However, for only 3 of the 16 offline problems was a SOTA achieved by a gradient method. Specifically, for monotone functions over general convex sets with a stochastic/deterministic gradient oracle, in [16], and monotone functions over a convex set containing the origin with access to a stochastic gradient oracle, in "Stochastic continuous submodular maximization: Boosting via non-oblivious function" by Zhang et al., ICML 2022, mentioned by Reviewer 6fHN. Note that in one of these cases, namely the one with a deterministic gradient oracle in [16], the complexity is the same as the Frank-Wolfe methods up to logarithmic factors. Likewise, for just 2 of the online stochastic problem settings (one row in Table 2 and the Zhang et al. paper mentioned above) was the SOTA a gradient-based method. For those problem settings in the offline and online case, the sample complexity (respectively regret bound) is worse for our Frank-Wolfe type algorithm compared to the gradient-based method. That was why we did not highlight our method in the corresponding rows. (2) Second, we note that in all cases where our result matches SOTA, our unifying framework is a strict generalization of the prior works which focused on specific cases (namely, those with a (*) sign in Table 1). Hence, in all those cases, our algorithm reduces to the SOTA algorithm and therefore all of the experiments done in those papers apply to our algorithm as well. 2. 
We agree that the introduction should be strengthened as you suggest. **Motivation for (feasible) value oracle queries:** We will revise the introduction to better motivate the importance of developing optimization methods for value oracle queries, including just over the feasible region. We first highlight two points and then discuss application motivations. (1) *Offline-to-online adaptations* For online optimization problems, when only bandit feedback is available (it is typically a strong assumption that semi-bandit or full-information feedback is available), the agent must be able to learn from stochastic value oracle queries over the feasible actions. By designing offline algorithms that only query feasible points, we made it possible to convert those offline algorithms into online algorithms. In fact, because of how we designed the offline algorithms, we are able to access them in a black-box fashion for online problems when only bandit feedback is available. (2) *More precise characterizations of inherent challenges underlying approximation guarantees* As noted above, in developing a unifying framework we took care to characterize how powerful the oracles were, and thereby identified the underlying causes of the approximation gap between gradient ascent and Frank-Wolfe methods. **Applications:** We will revise the paper by discussing "classic" example applications that prior works (like [arXiv:2006.13474]) have shown to be instances of constrained DR-submodular maximization, such as influence/revenue maximization, facility location, and non-convex/non-concave quadratic programming, as well as more recently identified applications like serving heterogeneous learners under networking constraints [arXiv:2201.04830] and joint optimization of routing and caching in networks [arXiv:2302.02508]. We will comment on how strong/mild assuming availability of anything more powerful than a value oracle over the feasible region is. 
For many problems, the ability to evaluate gradients directly requires strong assumptions about problem-specific parameters. We will briefly mention the application examples in the introduction and elaborate in the appendices. For example: Influence maximization and profit maximization form a family of problems that model choosing advertising resource allocations to maximize the expected number of customers, where there is an underlying diffusion model for how advertising resources spent (stochastically) activate customers over a social network. For common diffusion models, the objective function is known to be DR-submodular (see for instance [arXiv:2006.13474] or [arXiv:2212.06646]). The revenue (expected number of activated customers) is a monotone objective function; total profit (revenue from activated customers minus advertising costs) is a non-monotone objective. Budget limits are typically modeled as linear constraints. One significant challenge with these problems is that the objective function (and the gradients) cannot be analytically evaluated for general (non-bipartite) networks, *even if all the underlying diffusion model parameters are known exactly*. The mildest assumptions on knowledge/observability of the network diffusions for offline variants (respectively actions for online variants), especially fitting for user privacy and/or third-party access, lead to instantiations of queries as the agent selecting an advertising allocation within the budget (i.e., a feasible point) and observing a (stochastic) count of activated customers. This corresponds to stochastic value oracle queries over the feasible region (respectively bandit feedback for online variants). **Questions** We have included a discussion on novelty and significance in a general response box above. 
While the online stochastic setting is a novelty of our work, we have not listed it as one of the main technical novelties here or in the submitted article, since it is a relatively simple extension of the main results. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Dxab, We wanted to ensure that our comments have adequately addressed your concerns. If you require further clarification or have any additional questions, please don't hesitate to reach out. We appreciate your time and effort in reviewing our paper. Thank you once again. Sincerely, Authors.
Summary: This paper studies offline constrained DR-submodular maximization in 16 different settings (monotone/non-monotone, down-closed/general convex constraint, gradient/value oracle access, and exact/stochastic oracle) and provides a unified approach to solve all 16 cases with the same Frank-Wolfe algorithmic framework. Moreover, the authors extend their tools to study the online stochastic DR-submodular maximization problem under bandit and semi-bandit feedback models. The provided offline and online algorithms either match or improve the state-of-the-art results for each of the various settings. Strengths: - Constrained DR-submodular maximization has been studied in numerous works under different assumptions and settings and a number of algorithms have been proposed. In contrast, the algorithmic framework proposed in this paper is general and could be applied to any of the possible settings. - The paper is well-written and while the proofs are moved to the appendix, the main concepts and ideas are highlighted clearly in the paper. - Algorithm 1 (BBGE) for gradient estimation contains some novel ideas. Weaknesses: - While the unifying approach of the paper is interesting, most of the tools and techniques for gradient estimation and the Frank-Wolfe-type algorithm used here have been previously introduced and are not novel. - The main contributions of this work are the improved complexity (offline setting) and regret bounds (online setting) and these improvements could be highlighted via numerical examples, however, the paper lacks any experiments. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - I noticed that the following paper is missing from your references. In this work, the authors propose a boosting gradient ascent method that improves the $\frac{1}{2}$ approximation ratio of gradient ascent to $1-\frac{1}{e}$ for the monotone setting. How do their techniques and results compare to yours? Zhang, Q., Deng, Z., Chen, Z., Hu, H. 
and Yang, Y., 2022, June. Stochastic continuous submodular maximization: Boosting via non-oblivious function. In International Conference on Machine Learning (pp. 26116-26134). PMLR. - Is it possible to use your unified approach and provide an analysis for the setting with bounded curvature (similar to what the following paper has done)? Fazel, Maryam, and Omid Sadeghi. "Fast First-Order Methods for Monotone Strongly DR-Submodular Maximization." In SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23), pp. 169-179. Society for Industrial and Applied Mathematics, 2023. - In the online setting with bandit feedback, how many value oracle queries are necessary per iteration to run Algorithm 3? -------------------------------- I've read the authors' rebuttal, thanks for addressing my questions and concerns. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This is a theoretical work and a discussion of potential negative societal impact is not necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our submission. We reply to your questions in order below. We also include a discussion on novelty and significance in a general rebuttal box above. 1. Thank you for pointing this out. We first discuss the technique. The technique used in that paper is a combination of a novel line integral, referred to as boosting, and the projected gradient ascent. The boosting method uses a line integral over the line segment connecting the origin to any point $z$ in the constraint set. (see Theorem 2 part ii) In other words, it is working with the assumption that we are allowed to query the oracle on the convex hull of $\mathcal{K} \cup \{0\}$. (outside $\mathcal{K}$) Hence, technically speaking, they are not improving $1/2$ to $1-1/e$ within the same problem space, since $1/2$ solves monotone submodular maximization over a general convex set $\mathcal{K}$ where we are only allowed to sample within $\mathcal{K}$. Next, we comment on the results. Briefly, they would beat our method and the SOTA included in Table 1 for one of the sixteen settings -- optimizing a monotone function $F$ over convex sets that contain the origin $(0\in \mathcal{K})$, using a stochastic gradient oracle $\nabla F$. For that problem, Zhang et al. achieves an approximation of $1-1/e$ with $O(1/\epsilon^2)$ sample complexity. Our method and the prior SOTA [22] achieved an approximation of $1-1/e$ with $O(1/\epsilon^3)$ sample complexity. We remark that our method, [4], and [22] are Frank-Wolfe type methods and use a linear maximization oracle as a subroutine while Zhang et al. use a quadratic maximization oracle as a subroutine, which for some problems could have high computational complexity. Similarly, in Table 2, the results of Zhang et al. should be included in the same category as [7] and [29] and it will be the SOTA with ($1-1/e$)-regret of $O(T^{1/2})$. 
However, we again note that they use a projection based method, so our result would be SOTA among projection-free algorithms. 2. Thank you for pointing this out. This is an interesting direction. It is not immediately clear how the update rule, specifically the value of $v_k$ in Algorithm 3.1. SDRFW in Fazel et al., should change to adapt to other settings as we have studied in order to exploit curvature and strong DR-submodularity to obtain better guarantees. However, we would not be surprised if a unified approach similar in spirit to ours could be applied in the bounded curvature setting and believe it is worth future investigation. 3. For the online setting with bandit feedback, for each time step in the online problem only a single value oracle query is performed (corresponding to a single action taken at each time step and only the stochastic reward being revealed to the agent). The exploration horizon $T_0$ in Algorithm 3 (for bandit feedback we have $T_0 \gets \lceil T^{5/6} \rceil$) is input as the total number of iterations $N$ in Algorithm 2 when the latter is invoked as a black-box subroutine for exploration (so $N \gets \lceil T^{5/6} \rceil$). --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. Regarding question 2, if we put aside the idea of strong DR submodularity (and focus only on bounded curvature), the following work provides an optimal and efficient algorithm for submodular maximization with bounded curvature. Is it possible to extend your unified analysis using this work? What are the challenges for this extension? Feldman, Moran. "Guess free maximization of submodular and linear sums." Algorithmica 83.3 (2021): 853-878. --- Reply to Comment 1.1.1: Comment: Thank you for following up and pointing out that work by Feldman. To answer the comment, we note the following: 1. 
For the DR-submodular setting, we begin by noting that Algorithm 1 in the previous paper you mentioned [Fazel et al.] may be considered an adaptation of Algorithm 1 of the Feldman paper (which considers set submodular functions) to the monotone DR-submodular setting where the constraint set contains the origin $0 \in \mathcal{K}$. As we mentioned in our previous reply, any extension of the Fazel paper (with curvature and strong DR-submodularity) to settings with non-monotone functions or when the constraint set does not contain the origin would require a non-trivial change in the update rule. However, we believe such an extension to be possible. A natural starting point for monotone DR-submodular functions with (just) bounded curvature would be to use the Fazel et al. paper by "removing" the strong DR-submodularity. However, doing so is not trivial (if even possible). Setting $\mu \gets 0$ simplifies the update formula $v_k$ in the SDRFW algorithm; however, their main result is only applicable when $\mu > 0$. More precisely, we note that the approximation bounds depend on a number of iterations $K \propto 1/\mu$, which is meaningless for $\mu=0$, so we would need to modify the proof, and possibly the update rule, to obtain a result using this approach. Another significant issue is with the algorithm design, as the Fazel et al. algorithm requires as input a particular linear function $\ell_i = \min_x \nabla_i f(x)$ which in general may be challenging to compute. The function essentially finds, for each coordinate, worst-case marginal gains achieved in the feasible region. It is not clear to us how we could compute that efficiently (even with exact gradient oracles) to begin with, in addition to the FW steps that incorporate it. Further, with stochastic gradient estimates (via a stochastic gradient oracle or using samples from a value oracle), it is not immediately clear how robust the algorithm would be to an inexact $\ell_i$. 
We remark that Fazel et al.'s gradient ascent algorithm (Algorithm 2) interestingly does not require knowledge of $\mu$ or require as input the same linear function (constructed from $F$) as their FW type algorithm SDRFW did. That result looks more promising to generalize to different feasible regions and objective oracle types. The approximation coefficient obtained is $1/(1+c_f)$. Note that this result is for monotone DR-submodular functions over general convex sets and therefore their approximation coefficient is $1/2$ rather than $1-1/e$ when $c_f = 1$. 2. The mentioned paper (Feldman, "Guess free$\dots$") does not discuss maximization of submodular functions with bounded curvature. It considers the closely related problem of optimizing the sum of a submodular function and a linear function, which is used as a subroutine for bounded curvature (DR-)submodular maximization as in [Sviridenko et al., Mathematics of Operations Research 42.4 (2017): 1197-1218. https://pubsonline.informs.org/doi/abs/10.1287/moor.2016.0842 ] or [Fazel et al.], and will be discussed in (3) below. For a given known linear function (i.e., separate value/gradient oracles for the DR-submodular and the linear functions), we think it is plausible that one could extend our results to the sum of a DR-submodular function and a linear function. However, a detailed investigation is needed, since it remains to be seen whether the approximation ratios still hold (where the approximation ratio applies only to the DR-submodular part). Further, the update rules would need modifications. We believe that a careful analysis might work out, and this is a good topic for future work. 3. As mentioned above, optimizing the sum of a submodular function and a linear function is used as a subroutine for bounded curvature (DR-)submodular maximization. In other words, the first step for maximizing a bounded curvature (DR-)submodular function is to decompose it as a sum of a linear and a (DR-)submodular function. 
[Sviridenko et al., 2017] constructs a linear function for the discrete case, which can be evaluated using multiple oracle calls. However, such an approach is shown only for monotone (set) functions. Further, in order to obtain this linear function, we may need to query outside the feasible set. For the DR-submodular case, a linear function construction is given in Fazel et al., as mentioned in point (1) above, whose computation can be challenging; it is not evident how to compute it efficiently with only oracle calls. In summary, we believe that extending our work in the direction of bounded curvature is an interesting problem. However, we do not believe this to be an easy direction. Nevertheless, we will point out this possible extension in the future work section of the final version.
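To make the computational concern concrete, here is a hypothetical brute-force sketch (our own toy example, not from either paper) of the linear function $\ell_i = \min_x \nabla_i f(x)$ for a separable concave objective on the unit box, assuming an exact gradient oracle; the grid search illustrates why evaluating this quantity over a general feasible region is costly.

```python
import numpy as np

# Toy separable concave objective f(x) = a.x - ||x||^2 / 2 on the box [0,1]^2,
# so grad f(x) = a - x. We approximate l_i = min_{x in K} grad_i f(x) by
# enumerating a grid over the feasible region (exponential in the dimension).
a = np.array([2.0, 3.0])
grad = lambda x: a - x

grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 101)] * 2), axis=-1)
pts = grid.reshape(-1, 2)               # 101^2 feasible grid points
ell = grad(pts).min(axis=0)             # per-coordinate worst-case gain

# For this toy f the per-coordinate minimizer is x = (1, 1), so l = a - 1.
assert np.allclose(ell, a - 1.0)
```

The grid has $101^d$ points, so even this simple construction scales exponentially with the dimension, which is the difficulty alluded to above.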
Summary: The paper studies maximizing stochastic DR-submodular functions under 16 different settings, which depend on 1) whether the function is monotone or not, 2) whether the feasible region is a downward-closed or a general convex set, 3) whether gradient or value oracle access is available, and 4) whether the oracle is exact or stochastic. The authors present a unified approach based on the Frank-Wolfe meta-algorithm, which 1) provides the first oracle complexity guarantees in 9 settings, 2) reduces the computational complexity by avoiding projections in two settings, and 3) matches guarantees in the remaining 5 settings. The paper also considers online versions of the problem with bandit feedback and semi-bandit feedback and presents online algorithms with improved regret bounds. Strengths: 1. The paper provides a unified approach for stochastic DR-submodular maximization, which encompasses a range of settings and beats or matches the SOTA results. 2. The paper presents the first regret analysis for online stochastic DR-submodular maximization with bandit feedback. 3. Technically, a novel construction procedure of a shrunk constraint set is invented that allows us to work with lower-dimensional feasible sets when given a value oracle. 4. The paper is organized very well, making it read clearly although there are so many settings. Besides, no typo was found after I read it. Weaknesses: 1. I notice that new guarantees that beat the SOTA are provided only in the case where a value oracle is available. So I consider it the main part of the paper's contributions. In this case, the paper uses many standard techniques like the smoothing trick, the two-point estimator (for estimating gradients), and the momentum technique (for stochastic oracles). And to me what makes the paper different is the construction of a shrunk constraint set. But without reading the appendix, I cannot figure out why such a construction is introduced and how it can achieve the claimed guarantees. 
So, I cannot determine whether the paper enjoys excellent technical novelty or just consists of a refined combination of known techniques. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. If possible, please provide more intuition in the main text about what difficulties the construction of a shrunk constraint set is used to solve and how it solves them. 2. Very recently, there is a paper titled "Bandit Multi-linear DR-Submodular Maximization and Its Applications on Adversarial Submodular Bandits" [arXiv:2305.12402, ICML23], which presents an $\tilde{O}(T^{3/4})$ regret for monotone submodular maximization with adversarial bandit feedback. This partially beats your results for bandit submodular maximization. As the approaches are different and the arXiv paper was unavailable during the NeurIPS submission, I do not think it is a "weakness". But it is suggested to include the paper in the Related Work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, as stated in the paper, any non-trivial lower bound for the problem would be exciting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
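As context for the Frank-Wolfe meta-algorithm discussed in this review, below is a minimal sketch (our own toy instance and function names, not the paper's implementation) of the classic continuous-greedy-style update for monotone DR-submodular maximization over a set containing the origin, assuming an exact gradient oracle and a box constraint.

```python
import numpy as np

def frank_wolfe_dr(grad, lmo, d, K=100):
    """Frank-Wolfe meta-update for monotone DR-submodular maximization
    when 0 is feasible: z_{k+1} = z_k + (1/K) * argmax_{v in K} <v, grad F(z_k)>."""
    z = np.zeros(d)
    for _ in range(K):
        v = lmo(grad(z))   # linear maximization oracle over the feasible set
        z = z + v / K      # small step towards the maximizing vertex
    return z

# Toy monotone DR-submodular objective on [0,1]^d: separable concave
# f(z) = sum_i (a_i z_i - z_i^2 / 2) with a_i >= 1, so grad f = a - z >= 0
# on the domain (monotone) and the Hessian is diagonal non-positive
# (DR-submodular).
d = 5
a = np.full(d, 2.0)
f = lambda z: np.sum(a * z - 0.5 * z**2)
grad = lambda z: a - z
lmo = lambda g: (g > 0).astype(float)  # LMO over the box [0,1]^d

z = frank_wolfe_dr(grad, lmo, d)
opt = f(np.ones(d))  # by monotonicity, the all-ones point is optimal here
assert f(z) >= (1 - 1 / np.e) * opt    # the (1 - 1/e) guarantee holds
```

On this easy instance the iterate actually reaches the optimum; the assertion only checks the weaker $(1-1/e)$ guarantee that the meta-algorithm is designed to satisfy in general.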
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our submission. Below we first provide a response to the points you raised in the "Weaknesses" section and then reply to your questions in order. **Weaknesses** 1. In the "global rebuttal" above, we explain the novelty of our work in more detail, and we reply to your question about the shrunken set construction below. **Questions** 1. The original version of the one-point gradient estimator introduced in [14] is a technique that is used extensively [17, 1, 26, 29, 9, 30] in the literature to estimate the gradient of a function using a single sample. To use this technique for estimating $\nabla F(z)$, one needs to sample points in a sphere centered at $z$. Therefore, it is only applicable when the function can be queried over an open set in $\mathbb{R}^d$. In that case, the original constraint set $\mathcal{K}$ should be shrunk to get $\mathcal{K}'$ so that an open neighbourhood of $\mathcal{K}'$ is contained in $\mathcal{K}$. An example of such a construction is given in Appendix D of [29], where $\mathcal{K}$ is an open downward-closed convex set. When the constraint set is a lower-dimensional convex set in $\mathbb{R}^d$, the technique needs to be modified, since no open neighbourhood of $\mathcal{K}'$ is contained in $\mathcal{K}$. In Lemma 5, we generalize the one-point gradient estimator from [14] so that it does not require sampling from a full-dimensional sphere. This allows us to propose a new construction that is much simpler than that of [29] and works for general convex sets. 2. Thank you for bringing this new paper to our attention. It is exciting to see how active this area is. We will discuss this paper in the related work section. In Table 3 (line 553), we include results for stochastic and adversarial online DR-submodular maximization under bandit feedback. This new paper by Wan et al. belongs to the same category as the reference [29], i.e. 
in the second row for monotone objective functions with a constraint set that contains the origin ($0 \in \mathcal{K}$), which improves the adversarial-setting regret bound from $O(T^{8/9})$ to $O(T^{3/4})$. However, it should be noted that their algorithm relies on a ``self-concordant barrier'' $\Phi(x)$ of the constraint set, which increases the computational complexity of the algorithm. More precisely, in Algorithm 1, line 13, of that paper, we see that in each iteration a constrained optimization problem needs to be solved as a subroutine, for which the objective function is convex but is neither linear (as in our Frank-Wolfe-type algorithm) nor even quadratic (as in projected-gradient-ascent works such as [16]). Thus, for some problems, the computational complexity could be significantly higher than even gradient ascent. It is also important to note that the guarantees depend on the existence of a self-concordant barrier. In Appendix D of their paper, they discuss the construction of self-concordant barriers when the constraint set is a product of simplices. It is unclear to us how computationally expensive it is to construct a self-concordant barrier for a general convex set containing the origin. We note that the ICML paper references Niazadeh et al. 2021 ([24] in our paper) as the previous SOTA for the monotone online adversarial setting with bandit feedback, with $O(T^{5/6})$ regret. However, in our submission in Appendix B (lines 618-627), we point out an error in their analysis (due to using stochastic gradient samples for a subroutine designed for exact gradients, without controlling for the subsequent variance, e.g. with momentum). Hence, in the paper we will state that the SOTA using only a linear maximization oracle as a subroutine is $O(T^{8/9})$ and the SOTA without being restricted to linear oracles is $O(T^{3/4})$. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. 
I read all reviews, rebuttals, and comments and now I have a better understanding of the paper's contributions and technical novelties. I'm looking forward to reading the final version of this paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your time and your comments!
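For readers following the exchange above, the one-point (sphere-sampling) gradient estimator of [14] can be sketched as below; the toy linear objective, sample size, and function names are our own choices (chosen so the Monte Carlo average is visibly close to the true gradient), not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 3, 0.25
c = np.array([1.0, 2.0, 3.0])
F = lambda x: c @ x                  # toy smooth objective; grad F = c
z = np.full(d, 0.3)

def one_point_grad(F, z, delta, n, rng):
    """Average of (d / delta) * F(z + delta * u) * u over u drawn uniformly
    from the unit sphere; unbiased for the gradient of the delta-smoothed
    surrogate of F, which is why F must be queryable in a full neighbourhood
    of z -- the issue the shrunk constraint set addresses."""
    u = rng.standard_normal((n, len(z)))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform on the sphere
    vals = np.apply_along_axis(F, 1, z + delta * u)
    return (len(z) / delta) * (vals[:, None] * u).mean(axis=0)

g = one_point_grad(F, z, delta, n=200_000, rng=rng)
assert np.allclose(g, c, atol=0.2)   # Monte Carlo average near true gradient
```

For a linear $F$ the estimator is exactly unbiased, which makes the Monte Carlo check above meaningful; for general smooth $F$ it is unbiased only for the $\delta$-smoothed surrogate.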
Summary: This paper considers the problem of maximizing a continuous DR-submodular function over a convex set, under various settings for the objective (monotone/non-monotone), constraint set (downward-closed/includes the origin/general set), and oracle access types (deterministic/stochastic gradient/value oracle) -- 16 settings in total. For any type of oracle, the oracle is only allowed to query within the feasible set. The authors propose a Frank-Wolfe meta-algorithm and its different variants for each of the considered settings, which achieve the best known approximation guarantee in each case. These approximation guarantees are the first ones in nine of the considered settings, reduce the computational complexity in two cases, and match existing guarantees in the remaining five cases. The authors also extend these results to the online stochastic setting with bandit and semi-bandit feedback. They provide the first regret bounds for the bandit setting, and in the semi-bandit setting, they provide the first regret bound in one case, improve over existing bounds in two cases, and improve the computational complexity in one case. Strengths: - The paper provides the first approximation guarantees for various settings of DR-submodular maximization where the oracle is only allowed to query within the feasible set, and improves over existing results in a few other settings. - The paper covers a wide range of settings under a unified algorithm and analysis. - The presentation of the main paper is very good, especially taking into consideration the amount of material covered. - The relation to related work is clear: the authors discuss how their results compare to existing ones, and which ideas/proofs are based on existing work. Weaknesses: - Potential error in Lemma 5 (see questions), which affects all results with the function value oracle. If incorrect, this is easily fixable, but the algorithm would require knowing the dimension of the affine hull of the constraint set. 
- The presentation in the appendix can be improved to make it easier to read the proofs (see suggestions in questions). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - In Lemma 5, shouldn't d be replaced by k=dim(A) inside the expectation? Applying Lemma 4 with restriction to the first k coordinates is equivalent to applying it with sampling from a sphere of dimension k no? - Why is the claim on lines 671-672 in Lemma 5 true? Please explain or provide a reference. - The notations $\hat{G}(z)|\_\mathcal{L}$ and $\tilde{F}|\_\mathcal{L}$ are not defined, which makes it hard to understand some of the proofs. Are these the projection of $\hat{G}$ on $\mathcal{L}$ and the restriction of the domain of $\tilde{F}$ to $\mathcal{L}$? - $\mathcal{L}$ is defined as $\mathrm{aff}(K)$ in Lemma 9 and the proof of Theorem 1, but in Algorithm 2 it is defined as $\mathrm{aff}(K) - z_1$. - I recommend motivating why it is important to restrict the oracle to query only feasible points. - It would be good to also include known lower bounds for settings where they are available. - h is not defined in Table 1. - In the appendix, it is helpful to remind the reader of what different terms refer to or to have a table listing them. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and helpful suggestions. Below we reply to your questions in order. 1. You are correct. In Lemma 5, the ratio $d/2\delta$ in the expectation should be replaced by $k/2\delta$ where $k = \dim(A)$. Similarly, in Algorithm 1, the ratio $d/2\delta$ used in line 4 should be replaced with $k/2\delta$. With these modifications, the algorithms and proofs are correct as is. 2. For reference, that line is *Note that the function $F$ is defined only on $\mathcal{D}$ and therefore the gradient $\nabla F$ lives within the linear space $A$.* Preceding that, we defined $A$ as the affine hull of $\mathcal{D}$, i.e. $A := \text{aff}(D).$ We will include the following explanation in a footnote to clarify the statement. Let $f : M \to \mathbb{R}$ be a differentiable function where $M$ is a manifold in $\mathbb{R}^n$ for some choice of $n \geq 1$. For each $z \in M$, the total derivative of $f$ is a linear function $D f(z) : T_z(M) \to \mathbb{R}$ from the tangent space of $M$ at the point $z$ to $\mathbb{R}$. The gradient of $f$, i.e. $\nabla f$, is the vector field on $M$ for which we have $\langle \nabla f(z), v \rangle = (D f(z))(v)$, for all $z \in M$ and $v \in T_z(M)$. In particular, since it is a vector field, at each point $z \in M$, the value of $\nabla f(z)$ is a vector in the tangent space $T_z(M) \subseteq \mathbb{R}^n$. As a special case, if $M \subseteq A$ where $A$ is an affine space, then the tangent space $T_z(M) \subseteq A$ for all $z \in M$ and therefore $\nabla F(z) \in A$ for all $z \in M$. 3. For reference, that notation appears in Lemma 9 (starting on line 716). We apologize for the confusion. You are correct. $\tilde{F}|\_{\mathcal{L}}$ is the restriction of the domain of $\tilde{F}$ to $\mathcal{L}$ and $\hat{G}(z)|\_{\mathcal{L}}$ is the projection of $\hat{G}$ onto $\mathcal{L}$. 
Before Lemma 9, we will define the notation $\tilde{F}|\_{\mathcal{L}}$ as the function restriction, define $P\_{\mathcal{L}}$ as the projection operator, and replace $\hat{G}(z)|\_{\mathcal{L}}$ with $P\_{\mathcal{L}}(\hat{G}(z))$ to clarify. We will also include this notation in a notation table in the appendix. 4. We apologize for the confusion this caused. We re-used the notation "$\mathcal{L}$" in those places for different affine spaces. First, we remark that each definition is correct within its respective local scope (Algorithm 2, Lemma 9, Theorem 1). Second, we will revise the notation, fixing $\mathcal{L} = \operatorname{aff}(\mathcal{K})$ and defining $\mathcal{L}_0 = \operatorname{aff}(\mathcal{K}) - z$ (for some $z \in \mathcal{K}$) to distinguish between the two affine spaces. We will also include this notation in a notation table in the appendix. 5. Thank you for the suggestion. We agree this point should be more clearly motivated. Please refer to the "global rebuttal" above for a detailed explanation. 6. We agree. Including lower bounds (when known) would provide important context for the sample complexity and regret bounds (for offline and online settings respectively) that we and the SOTA have obtained. We will mention lower bounds in the introduction and include a discussion of those results in Appendix A: Details of Related Works. (In Appendix A.1, we pointed out the optimality of the approximation ratios for two cases (mentioned in the following), but will make clear there and in the introduction that those are the only cases with tight bounds.) For reference, we briefly summarize lower bounds and hardness results. The approximation coefficients are optimal for the case (i) where the function is monotone and the constraint set contains the origin and the case (ii) where the function is non-monotone and the constraint set is a general convex set. 
For monotone functions over general convex sets, we conjecture that $1/2$ is the optimal coefficient (see the "global rebuttal" above for more details). For non-monotone functions over downward-closed convex sets, the $1/e$ coefficient is known to be sub-optimal (see the recent paper "Continuous Non-monotone DR-submodular Maximization with Down-closed Convex Constraint" by Chen et al., arXiv:2307.09616, which obtains a 0.385 coefficient), and the best known upper bound so far is 0.491 (see "Submodular maximization by simulated annealing" by Gharan et al., 22nd ACM-SIAM SODA, 2011). With a stochastic gradient oracle, when maximizing a monotone function over a downward-closed convex set, the lower bound on oracle complexity is $1/\varepsilon^2$ (see "Stochastic Continuous Greedy++: When Upper and Lower Bounds Match" by Karbasi et al. in NeurIPS 2019). We expect the algorithm to be efficient in the deterministic gradient oracle case in all settings where the approximation coefficient is optimal. This intuition is based on the fact that $O(1/\varepsilon)$ is a fundamental barrier of linear-programming-based methods in general within the context of convex optimization (see "Conditional Gradient Methods", Braun et al., arXiv:2211.14103v2, Section 2.1.2). 7. We will revise the caption for Table 1 to include $h = \min_{z \in \mathcal{K}} \|z\|_\infty$. 8. We will follow your suggestions, adding a notation table to the appendix and adding verbal reminders of what notation refers to throughout the paper (esp. in the appendices). --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the detailed answer. I have some follow-up questions/comments: 1. Given these modifications, the algorithm requires knowing the dimension of the affine hull of the constraint set, which might not be easy to compute for complicated constraint sets. This limitation should be stated clearly. 2. So this claim requires D (which corresponds to the constraint set K) to be a manifold? 
This should also be stated clearly then. 5. I did not see any discussion about the motivation for restricting the oracle to query only feasible points in the "global rebuttal" above. One other clarity issue I noticed: $\hat{F}$ is defined as a function of two variables $z, x$ in Section 3.1, but later on it is used as a function of a single variable (for example in Algorithm 1), without clarification of how it relates to the original definition. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed review. 1. We agree, and will add that in the final version. 2. We note that lines 671-672 did not play any role in the proof and were meant for clarification. However, we acknowledge that they are not as clarifying as we had hoped and we will remove them. To answer your question, lines 671-672 should be revised to say "$\nabla F(z)$ lives within the linear space $A_0$ for all $z \in D$ where $\mathbb{B}\_\delta^A(z) \subseteq D$." Please note that the last line in our previous response to Q.2 should be: "As a special case, if $M \subseteq A$ where $A$ is an affine space, then for all $z \in M$ the tangent space $T_z(M) \subseteq A_0$, where $A_0 = A - x$ for some $x \in A$. Therefore we have $\nabla F(z) \in A_0$ for all $z \in M$." Also note that we do not require $D$ to be a manifold. However, the set $\bigcup\_{\\{z \in D \mid \mathbb{B}\_\delta^A(z) \subseteq D\\}} \mathbb{B}\_\delta^A(z) \subseteq D$ is a union of $k$-dimensional spheres, where $k = \dim(\operatorname{aff}(D))$, and so it is a $k$-dimensional manifold. 3. Sorry, due to character limits, we moved that to the response to Weakness 2 of Reviewer Dxab - though we write it again here. **Motivation for (feasible) value oracle queries:** We will revise the introduction to better motivate the importance of developing optimization methods for value oracle queries, including those restricted to the feasible region. We first highlight two points and then discuss application motivations. 
(1) *Offline-to-online adaptations* For online optimization problems, when only bandit feedback is available (it is typically a strong assumption that semi-bandit or full-information feedback is available), the agent must be able to learn from stochastic value oracle queries over the feasible actions. By designing offline algorithms that only query feasible points, we made it possible to convert those offline algorithms into online algorithms. In fact, because of how we designed the offline algorithms, we are able to access them in a black-box fashion for online problems when only bandit feedback is available. *Note that previous works on DR-submodular maximization with bandit feedback in monotone settings (i.e. [29] and arXiv:2305.12402) explicitly assume that the convex set contains the origin.* (2) *More precise characterizations of inherent challenges underlying approximation guarantees* As noted in the "global rebuttal" above, in developing a unifying framework where we took care to characterize how powerful the oracles were, we identified the underlying causes of the approximation gap between gradient ascent and Frank-Wolfe methods. **Applications:** We will revise the paper by discussing "classic" example applications that prior works (like [arXiv:2006.13474]) have shown to be instances of constrained DR-submodular maximization, such as influence/revenue maximization, facility location, and non-convex/non-concave quadratic programming, as well as more recently identified applications like serving heterogeneous learners under networking constraints [arXiv:2201.04830] and joint optimization of routing and caching in networks [arXiv:2302.02508]. We will comment on how strong/mild an assumption the availability of anything more powerful than a value oracle over the feasible region is. For many problems, the ability to evaluate gradients directly requires strong assumptions about problem-specific parameters. 
Influence maximization and profit maximization form a family of problems that model choosing advertising resource allocations to maximize the expected number of customers, where there is an underlying diffusion model for how advertising resources spent (stochastically) activate customers over a social network. For common diffusion models, the objective function is known to be DR-submodular (see for instance [arXiv:2006.13474] or [arXiv:2212.06646]). The revenue (expected number of activated customers) is a monotone objective function; total profit (revenue from activated customers minus advertising costs) is a non-monotone objective. One significant challenge with these problems is that the objective function (and its gradients) cannot be analytically evaluated for general (non-bipartite) networks, even if all the underlying diffusion model parameters are known exactly. The mildest assumptions on knowledge/observability of the network diffusions for offline variants (respectively, actions for online variants), especially fitting for user privacy and/or third-party access, lead to instantiations of queries in which the agent selects an advertising allocation within the budget (i.e., a feasible point) and observes a (stochastic) count of activated customers. This corresponds to stochastic value oracle queries over the feasible region (respectively, bandit feedback for online variants). 4. We will clarify in Section 3.1 that when $\hat{F}$ takes one variable, we are treating it as a random variable.
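As a supplement to the affine-hull discussion in the exchange above, here is a small self-contained sketch of projecting an ambient-space gradient estimate onto the translated affine hull of a lower-dimensional feasible set (the $P\_{\mathcal{L}}$ operation); the subspace, dimensions, and helper name are our own hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose K lives in a 2-d affine subspace of R^4: points z0 + span{b1, b2}.
z0 = rng.standard_normal(4)
B = rng.standard_normal((4, 2))        # columns span aff(K) - z0

def project_onto_span(B, v):
    """Orthogonal projection of v onto the column span of B
    (an orthonormal basis is obtained via a thin QR factorization)."""
    Q, _ = np.linalg.qr(B)
    return Q @ (Q.T @ v)

g_ambient = rng.standard_normal(4)      # noisy ambient-space estimate
g_proj = project_onto_span(B, g_ambient)

# A vector already in the subspace is left unchanged...
w = B @ np.array([1.5, -0.5])
assert np.allclose(project_onto_span(B, w), w)
# ...and the projection residual is orthogonal to the subspace.
assert np.allclose(B.T @ (g_ambient - g_proj), 0.0, atol=1e-10)
```

This mirrors the point that, for a function defined only on a lower-dimensional set, the meaningful gradient information lies in the linear space $\operatorname{aff}(\mathcal{K}) - z$.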
Rebuttal 1: Rebuttal: We highlight our technical contributions (in addition to contributions in obtaining new guarantees for numerous offline and online settings, as well as unifying algorithm design and analysis among several prior works). 1. **Our procedure is the first Frank-Wolfe type algorithm for analyzing monotone functions over a general convex set when the oracle is only allowed to query within the feasible set, for any type of oracle for the objective function (exact/stochastic value/gradient).** Note that the algorithm in this case is the same as the algorithm for the non-monotone general convex case, only with a different step size. The main challenge here was recognizing that the analysis for the non-monotone general convex setting and the monotone setting where the convex set contains the origin could be combined to prove regret bounds for this algorithm. 2. **A new construction procedure of a shrunk constraint set that allows us to work with lower-dimensional feasible sets when given a value oracle, resulting in the first results on general lower-dimensional feasible sets given a value oracle.** *Please refer to Q.1 in the response to Reviewer L9XA for details about differences with prior works.* 3. **Our work sheds light on a previously unexplained gap in approximation guarantees for monotone DR-submodular maximization.** (We briefly mentioned some of the following in our paper, but will revise the main section and appendices to make the following clear.) Specifically, some prior works (enumerated below) studying monotone DR-submodular maximization over general convex sets obtained guarantees of $1/2$ while others obtained $1-1/e$. In [16], a $1/2$ guarantee was obtained by a projected gradient ascent method; this was shown by proving that the algorithm tends to a stationary point and that any stationary point is at least $1/2$ as good as the optimal point. 
Moreover, they construct examples with stationary points that are no better than $1/2$ of the optimal point. The $1-1/e$ guarantee was reported for Frank-Wolfe methods, which (superficially) suggests that the gap may be due to algorithm or analysis differences. However, in carrying out our work on developing a unified framework, we identified that the gap was not attributable to algorithm or analysis differences, but instead to queries to infeasible points by the Frank-Wolfe methods (i.e. they were solving different problems). A key ingredient to obtain $1-1/e$ was the ability to query the (gradient) oracle within the convex hull of $\mathcal{K}\cup\{0\}$. (Note that this is true both for Frank-Wolfe based methods and for projection based methods. Please refer to the response to Question 1 of reviewer f6HN for more details about the projection based methods.) For monotone submodular maximization over general convex sets (not necessarily containing the origin), we can only guarantee a coefficient of $1/2$, both for Frank-Wolfe type methods (our work) and projection based methods (i.e. [16]). Moreover, it is evident from our proofs that the case of maximizing a monotone function over a general convex region with the origin infeasible ($0 \not \in \mathcal{K}$) is the only case where the starting point ($\mathbf{z}_1$ in our paper) of the algorithm does not matter. To the best of our knowledge, in every paper where the $1/2$ approximation coefficient and the $1-1/e$ approximation coefficient in the monotone setting are compared, the comparison was unwittingly between problems that are inherently mathematically different: [16] and [8] in experiments and main text; [7] and [9] in experiments; [30], [23], and [13] in the related work section; [22] in the introduction and Table 2; "Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function" by Zhang et al. 
ICML 2022 in the main claim; and "Fast First-Order Methods for Monotone Strongly DR-Submodular Maximization" by Fazel et al. (ACDA23), 2023 in the main claims. In other words, the $1/2$ approximation could very well be optimal in its own setting. We will add this explanation and the following conjecture to the final version. **Conjecture**: *The problem of maximizing a monotone DR-submodular continuous function subject to a general convex constraint, where the oracle access is limited to the feasible region, is NP-hard. For any $\epsilon > 0$, it cannot be approximated in polynomial time within a ratio of $1/2 + \epsilon$ (up to low-order terms), unless $RP = NP$.* **Minor correction to Tables** In Table 2, we cite 4 papers, but 2 of them ([29] and [30]) refer to a method known as Mono-Frank-Wolfe, which is not for the online stochastic setting, since it chooses one point but queries feedback at another point. The feedback model Mono-Frank-Wolfe relies on is more informative than the semi-bandit gradient $\nabla F$ feedback included in Table 2. Moreover, the paper [8] mentioned in Table 2 should be replaced with [16], since in the special case considered in Table 2 both algorithms are the same and the algorithm was first proposed by [16]. As we have discussed in the rebuttals below, the papers mentioned by reviewers L9XA and f6HN should also be added.
NeurIPS_2023_submissions_huggingface
2023
Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds
Accept (spotlight)
Summary: The paper concerns Gaussian processes on manifolds. The authors present theorems for contraction rates for Matern Gaussian processes defined intrinsically, and extrinsically through an embedding in a higher-dimensional Euclidean space. The authors show that the rates are asymptotically equal in the two settings. Additionally, they treat the case of finitely truncated series expansions of the kernels to get a similar rate. Finally, it is shown experimentally that it can still be beneficial to work with the intrinsic geometry in the small-sample domain. Strengths: - extremely well-written paper. I found the exposition very clear - novel theoretical results - interesting findings (in line with what the authors state, I would not have expected the manifold dimension to show up in the extrinsic case) - empirical study to cover the small-sample case Weaknesses: - since the theorems are not in the main paper, one could perhaps consider whether a longer format than NeurIPS (a journal paper) would be more suitable for the paper from a presentation point of view Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: no questions Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
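A minimal sketch of the truncated-series construction this summary refers to, assuming the Whittle-Matern spectral form of an intrinsic Matern kernel on the circle $S^1$ (our own toy instance; the parameters $\nu$, $\kappa$, the truncation level, and the function name are illustrative, not the paper's code).

```python
import numpy as np

def matern_kernel_circle(x, y, nu=1.5, kappa=1.0, n_terms=50):
    """Truncated-series intrinsic Matern kernel on the circle S^1 (d = 1).
    The Laplacian eigenfunctions are 1, cos(n t), sin(n t) with eigenvalues
    n^2, and each is weighted by the Whittle-Matern spectrum
    (2 nu / kappa^2 + lambda_n)^(-(nu + d/2))."""
    k = np.full((len(x), len(y)), (2 * nu / kappa**2) ** (-(nu + 0.5)))
    for n in range(1, n_terms + 1):
        w = (2 * nu / kappa**2 + n**2) ** (-(nu + 0.5))
        # cos(n(x - y)) = cos(nx)cos(ny) + sin(nx)sin(ny), so each term
        # adds a positive-semidefinite component with weight w.
        k += 2 * w * np.cos(n * np.subtract.outer(x, y))
    return k

t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
K = matern_kernel_circle(t, t)
assert np.allclose(K, K.T)                       # symmetric
assert np.min(np.linalg.eigvalsh(K)) > -1e-8     # positive semidefinite
```

Since each retained eigenpair contributes a positive-semidefinite rank-one (or rank-two) term, the truncated kernel remains a valid covariance at any truncation level, which is what makes the finitely truncated expansion usable in practice.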
Rebuttal 1: Rebuttal: Thank you very much for your review, and especially for the comment that you found our exposition "very clear" in spite of the technical nature of the subject! Below we comment on one of the points. --- *"since the theorems are not in the main paper, one could perhaps consider if a longer format than NeurIPS (a journal paper) would be more suitable for the paper from a presentation point of view"* * While it certainly would have also been possible for us to write up our work in a longer format than NeurIPS, such as JMLR, we believe that the NeurIPS format is also appropriate here, because the relatively short page limit means that a paper like ours needs to focus its main body on summarizing the main theoretical results, rather than on technical details in proofs. It is valuable to have papers like this, because they provide a picture of the state of affairs which is also readable by non-experts - we think this is particularly important for posterior contraction analysis, which tends to involve harder-to-parse mathematics than other areas. Therefore, we believe the NeurIPS format used by this work effectively complements other, much longer and more technical-detail-focused papers on related questions which can be found in the literature. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My scoring has not changed.
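The reviewer's remark above about the manifold dimension appearing in the rates can be made concrete with a small numeric check; the arithmetic below is our own, using the rate-exponent form $2\min(\beta,\nu)/(2\nu+d)$ as stated in the reviews, and contrasts the exponent computed with the manifold dimension $d$ against the ambient dimension $D$.

```python
# Hedged illustration (our own arithmetic, not from the paper): the
# contraction-rate exponent 2*min(beta, nu) / (2*nu + dim), evaluated
# with the manifold dimension d versus the ambient dimension D, showing
# why it matters which dimension enters the rate.
def rate_exponent(beta, nu, dim):
    return 2 * min(beta, nu) / (2 * nu + dim)

beta = nu = 1.5                 # matched smoothness
d, D = 2, 3                     # e.g. a surface embedded in R^3

# The rate governed by the intrinsic dimension is strictly faster...
assert rate_exponent(beta, nu, d) > rate_exponent(beta, nu, D)
# ...and with beta = nu it reduces to the familiar 2*nu / (2*nu + d).
assert rate_exponent(beta, nu, d) == 2 * nu / (2 * nu + d)
```

This is why obtaining the intrinsic dimension $d$ (rather than the ambient $D$) in the extrinsic-model rate is a meaningful finding.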
Summary: This paper studies the contraction rate(s) for both the intrinsic and extrinsic Mat\'ern Gaussian processes on a compact Riemannian manifold. The authors prove that the (optimal) rate in both cases is $\frac{2 \min(\beta, \nu)}{2 \nu +d}$, where $\nu$ is the smoothness parameter of the Mat\'ern process, $\beta$ is the smoothness class of the regression function, and $d$ is the dimension. The authors also showed with examples that the geometric models outperform the non-geometric ones through empirical error analysis. Strengths: The results in this paper are novel and enlightening. Up to minor typos, I enjoyed reading the paper. It is concerned with the fine topic of Gaussian processes on manifolds, and with why (and how) this kind of modeling is valuable, through quantitative posterior contraction analysis. The main contribution is the (optimal) contraction rate of both intrinsic and ambient Mat\'ern processes on compact manifolds in the nonparametric setting, the analysis of which differs from that in the Euclidean setting as the definition of the Mat\'ern processes on manifolds is subtle (though the rates are the same). It is also shown through numerical experiments how the underlying geometric analysis outperforms. Weaknesses: As for every (good) paper, there are always plenty of things that remain to be done. For instance, (1) In the current analysis it is assumed that the "nugget" $\sigma_\epsilon$ is given. In the Bayesian setting, it would be interesting to know what happens if we put a prior on it (and more importantly, what prior to put). (2) The results, implicitly, compute the interpolation errors (as $L^2$ error based on $p_0$). What about extrapolation errors (that is, predicting at points "outside" the domain)? This is also important in most geostatistics problems. (3) For the numerical experiments, the authors only consider synthetic examples, e.g. dragon, sphere.
It is interesting to see if the intrinsic and extrinsic modeling can bring a big difference for some real data. One such example is provided in [14]. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: While the paper is generally well written, I feel the authors need to address the following points: (1) I catch the idea of the intrinsic vs extrinsic modeling quickly. But the authors may point out a reference for this terminology (is this inspired from [15])? (2) p.3, line 80: I guess the authors mean $f \sim GP(m, k)$ instead of $GP(0,k)$. (3) p.3, line 115-116: the notation $C^{\beta}$ for the H\"older class is nonstandard. Does this mean $\mathcal{C}^{0, \beta}$? I guess not since $\beta > \frac{d}{2}$ meaning that it can be larger than $1$. The authors may need to clarify this. (4) In Result 1, it is assumed that $f$ is mean zero. Is this also assumed in Theorems 5 and 6? It is known that for the Mat\'ern processes in the Euclidean space, the mean function may raise the identifiability issue (see Stein's Interpolation of Spatial Data, or Tang, Zhang and Banerjee's paper On identifiability and consistency of the nugget in Gaussian spatial process models). There are also related discussions in [24]. The authors may need to clarify, and add a few sentences in the paper. (5) p.4, Assumption 3: the authors may "indicate" $\sigma_\epsilon$ is known earlier, as this is important for the Bayesian workers. (6) Regarding the use of Theorems 5 and 6: it is clear from the rates that one should take $\nu = \beta$ (i.e. if one can identify the function class, and set the same smoothness parameter in the Mat\'ern process). However, we never know exactly $\beta$. The question is whether there is an adaptive way to select $\nu$. (Of course, this may be discussed in another paper. I only want to bring this question to the authors.) (7) Regarding Theorem 8: the rate is $\frac{2 \min(\beta , \nu)}{2 \nu +d} = \frac{2 \nu}{2 \nu +d}$ if $\beta = \nu$. 
This rate also appears in the prediction based on BLUE (best linear unbiased predictor) in the context of MLE. See e.g. Tang, Zhang and Banerjee's paper On identifiability and consistency of the nugget in Gaussian spatial process models, JRSS-B, 2021, page 1055. The authors may want to mention this as well. (8) There are a few more references that the authors may want to add. (a) Stein's book Interpolation of Spatial Data is one of the main references on the Mat\'ern Gaussian processes. (b) Tang, Zhang and Banerjee's paper On identifiability and consistency of the nugget in Gaussian spatial process models studies the Mat\'ern process (with nugget) in the Euclidean space, where the identifiability issue occurs as pointed out in (7). This is related to the setting of Theorem 8. Arafat, Porcu, Bevilacqua and Mateu's paper Equivalence and orthogonality of Gaussian measures on spheres, Journal of Multivariate Analysis, 2018 is also relevant. (c) Regarding the prior on $\sigma^2_\epsilon$, one such model is the conjugate Bayesian linear model in Banerjee's paper Modeling massive spatial datasets using a conjugate Bayesian linear modeling framework, Spatial Statistics, 2020; this model was analyzed in Zhang, Tang and Banerjee's Exact Bayesian geostatistics using predictive stacking, arXiv:2304.12414, Section 2. I believe it should be possible to generalize the results in this paper to the conjugate Bayesian linear model. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thorough review of our work and for the very encouraging comments! Below we address key questions: **Further work:** (1) *Nugget and prior on $\sigma_\epsilon$* * Thank you for this question! In our work, the main reason we assume $\sigma_\epsilon$ is fixed is that in the Euclidean case other work has studied conditions on priors that ensure minimax optimality - at least up to a $\ln n$ factor, as is done in [17] - **we expect these to be similar in both the Euclidean and geometric settings**. We therefore opted not to do this to avoid the paper becoming too long, but will amend the manuscript to add references so that readers interested in this case can find the relevant papers. (2) *Interpolation vs. extrapolation, $p_0$-norm vs empirical norm* * Thank you for the comment! We would argue that **our results do, in an appropriate technical sense, extrapolate** outside of the design points, since $p_0$ is an absolutely continuous measure - as opposed, for instance, to the empirical measure of the data. Our proof technique uses the assumption that $p_0$ is lower-bounded, which implies that we can control convergence over the whole manifold, by moving between different distributions by changing the constants in the bound. Note that, if one considers regions where $p_0$ takes small values, the constant may degenerate, so this assumption does have limitations - but we expect these properties are similar to the Euclidean case and not particularly specific to the manifold setting. (3) *Real data examples* * Thanks - this is a great point! We opted to focus on synthetic examples for simplicity, since this allows us to better control the moving parts in the experiment. However, we also wanted to note that in addition to [14], a similar performance difference was also observed in [12] in the context of medical data with a slightly different prior.
We will add a few remarks mentioning this to the experimental section to point readers toward these papers. **Questions:** (1) *Intrinsic vs. extrinsic terminology* * Thanks for this question! We chose these names because the concepts "extrinsic" and "intrinsic" mirror the distinction between intrinsic and extrinsic properties/quantities **in the sense of differential geometry** (and mathematics more generally), where the former refers to concepts not needing any kind of embedding to be defined, while the latter refers to ones that need to be expressed through an embedding in a higher-dimensional space/object. This is exactly the difference in how the two Matérn processes are constructed. (2) *Thanks for spotting the typo!* (3) *Notation for Hölder spaces* * This is a great point - thank you for spotting this! For $\gamma = k + \alpha$ with integer $k \geq 0$ and $0 < \alpha \leq 1$, define $CH^\gamma$ to be the space of $k$ times differentiable functions whose $k$th derivative is $\alpha$-Hölder. We've **changed the notation to $CH$** to avoid confusion with ordinary smooth functions. (4) *Prior mean and identifiability* * Thank you for this observation! The mean of the prior processes is indeed kept fixed at 0, as is commonly done when proving contraction rates for Gaussian processes. This choice actually leads to optimal contraction rates, but could more generally be relaxed. Regarding identifiability, note that we are not trying to identify covariance parameters given a sample of a Matérn process in an infill asymptotic regime, but rather to show contraction of the posterior towards a fixed regression function. **Our nonparametric regression model's parameter is therefore identifiable**: the probability distribution is only indexed by the regression function, and our results imply the existence of a consistent estimator for it, by taking for instance the posterior mean of the process. We will add a few sentences and references on this particular point.
(5) *Emphasis on $\sigma_\epsilon$* * This is a good idea - we'll add a sentence on this! (6) *Adaptive selection of $\nu$* * This is an *excellent* point! We agree adaptivity to the smoothness of f_0 is a natural next step. For the intrinsic Matérn process adaptivity can be achieved by standard techniques that are not specific to the geometric setting. One way would be to follow the approach of Kirichenko & Van Zanten in “Estimating a smooth function on a large graph by Bayesian Laplacian regularization” where a multiplicative scaling parameter is introduced in the definition, and on which another prior is placed. Another solution could be to consider a truncated intrinsic Matérn process with a prior on the truncation, similar to Waaij & Van Zanten in “Full adaptation to smoothness using randomly truncated series priors with Gaussian coefficients and inverse gamma scaling”. For the extrinsic Matérn process, the problem is more complicated, but is tackled for the RBF kernel by Yang & Dunson in “Bayesian Manifold Regression” where they show that they can achieve adaptivity by placing a prior on the length scale of the prior process: **we believe a result of the same flavor could be shown in our context, although our proof technique is fundamentally different**. Studying adaptivity is, however, difficult and we believe should be the focus of a follow-up paper, but we are happy to add more comments on this in the revised version. (7) *Connection with other asymptotic rates* * For $\beta = \nu$, namely the less-realistic case where the right smoothness has been chosen in advance, we indeed recover the same contraction rate as the one you mention. This makes sense, as this is the minimax optimal rate of estimation of $f_0$ in this model. **It's a great idea to point this out with appropriate references** - thanks for bringing this to our attention! (8) *References* * Thank you very much for bringing these references to our attention! 
We will add them to help the reader frame our results with respect to identifiability and current research on Bayesian models. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanations. The score remains unchanged.
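The Hölder-type class $CH^\gamma$ described in point (3) of the rebuttal above admits a standard explicit formulation; the display below is our own rendering of that verbal definition, and the exact normalization conventions may differ from the paper's:

```latex
% CH^{\gamma} with \gamma = k + \alpha, integer k \geq 0 and 0 < \alpha \leq 1:
% all derivatives up to order k are bounded, and the k-th derivatives are \alpha-Hölder.
\|f\|_{CH^{\gamma}}
  \;=\; \sum_{|j| \leq k} \sup_{x} \bigl| D^{j} f(x) \bigr|
  \;+\; \sum_{|j| = k} \sup_{x \neq y}
        \frac{\bigl| D^{j} f(x) - D^{j} f(y) \bigr|}{\lvert x - y \rvert^{\alpha}}
  \;<\; \infty .
```

For $\gamma \leq 1$ (so $k = 0$) this reduces to the ordinary $\alpha$-Hölder condition, which is why the reviewer's guess $\mathcal{C}^{0,\beta}$ is correct only in that regime.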
Summary: This paper establishes bounds on the contraction rate of Matérn processes on Riemannian manifolds. The authors study three variants: 1) the intrinsic Matérn process 2) the truncated intrinsic Matérn process 3) the extrinsic Matérn process, and show that in each case the optimal contraction rate can be achieved, which matches the Euclidean case. Strengths: This is a fundamental problem, and it is remarkable that the authors are able to prove the same optimal contraction rates for both the intrinsic and extrinsic Riemannian Matérn processes. Furthermore, the manifold hypothesis has been receiving increasing attention recently, and so the analysis of Gaussian processes on manifolds is well motivated. I also appreciate the examples that the authors provided for illustrating the difference between intrinsic and extrinsic processes. Weaknesses: see questions below Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Regarding the smoothness parameter nu: 1) Can the authors provide intuition for why nu (as used in (7) and (10)) is the smoothness parameter (i.e. smoothness in what sense)? 2) The Riemannian Matérn process (7) and the Euclidean Matérn process (10) appear quite different; can the authors explain the analogy between these two processes, and specifically, why is nu, as used in (7), comparable to nu as used in (10)? In particular, can the authors explain the comment from line 275 "the intrinsic Matérn process, its truncated version and the extrinsic Matérn process all possess the same posterior contraction rates", and how these two rates are comparable under the different assumptions? 3) All theorems assume nu > d/2. Is this a standard assumption? How necessary is this assumption, and do the theorems not work when nu is small? (and I have a similar question for the beta > d/2 assumption in Assumption 3 as well) Regarding the intrinsic Matérn process: 4) expression (7) involves a sum over eigenfunctions of the Laplace-Beltrami operator.
The authors do mention that this sum can be truncated, but for a general manifold, it seems like even computing a single eigenfunction can be quite expensive. Can the authors comment on how this is done computationally (and what is the cost), when the manifold is an arbitrary one, e.g. the dragon? 5) In figure 2, the authors give a dumbbell example which highlights the difference between intrinsic and extrinsic kernels -- one important difference seems to be that two points can be far away in manifold distance, but close in Euclidean distance (under the embedding). Consequently, a function may have a small Lipschitz constant wrt manifold distance, but a huge Lipschitz constant wrt Euclidean embedding distance. Intuitively, why is this not reflected in the contraction rates? Is it because the assumptions made do not care about things like Lipschitz smoothness? (related to my earlier question of what is the meaning of nu?) Regarding the bound: 6) What is contained in the constant C in the theorems; is this a universal constant? Or does it depend polynomially/exponentially on any problem parameters? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our work! Thank you for the encouraging comment that our work addresses a fundamental research problem! Below we address the questions, some of which we had to partially quote due to character limits: --- *".. intuition for why nu is the smoothness parameter .."* * Thank you for the question! Indeed, it may not appear obvious that $\nu$ in the definitions of the priors can be interpreted as a smoothness parameter. This stems from the sample path regularity of the processes: it is shown in [44] that the samples of the Euclidean Matérn process (and therefore also those of the extrinsic process) are $\alpha$-Hölder for every $\alpha<\nu$, whereas this property for the intrinsic Matérn process is **precisely the content of our Lemma 27**. *".. explain the analogy between these two processes (Riemannian vs. Euclidean) ..", ".. why is nu, as used in (7), comparable to nu as used in (10)"* * Thank you for bringing this important point of confusion, which is likely to be shared by other readers, to our attention. To compare the two processes, note that both can be represented as **(weak) solutions of the same stochastic partial differential equation**, defined over Euclidean space and the manifold, respectively. This is the viewpoint from which the intrinsic Matérn process was studied in [8] and other prior works, and is why both are called Matérn processes. * Regarding $\nu$: see the remark above, but also note that the Euclidean Matérn process on $\mathbb{R}^d$ has an RKHS norm equivalent to the Sobolev space $H^{\nu+d/2}(\mathbb{R}^d)$, and the same is true for the intrinsic Matérn by our Lemma 23, where the Sobolev space $H^{\nu+d/2}(M)$ is defined using the Bessel potential space formulation. *".. how these two (contraction) rates are comparable under the different assumptions?"* * Thank you for the comment!
Let us clarify this: we fix a **single, common data generating process**, namely nonparametric regression with random design and a fixed unknown regression function $f_0$, and, for **three different Bayesian models**, we compare the asymptotic contraction/convergence rate of the posterior distribution towards the true regression function $f_0$. The rates all depend on the intrinsic dimensionality of the data $d$, the smoothness parameter of our prior processes $nu$, and the smoothness of the true regression function $f_0$, and for the models considered end up equal up to constants. To improve clarity, we will update the manuscript to further emphasize this. *".. nu > d/2. Is this a standard assumption? .. * * This is a good question! **Yes, this is relatively standard:** $\nu>d/2$ is also used in the prior work [44], where the Euclidean counterparts of our results are proved. From a technical standpoint, this is needed in order to get a convergence rate under the $L^2(p_0)$ norm from a convergence rate at the input locations. Removing this assumption would be an interesting research question, but we suspect that it does not particularly involve geometry specific to the manifold setting which is our focus here. *".. comment on how this (obtaining Laplace-Beltrami eigenfunctions) is done computationally .."* * Thank you for the comment - this is actually the main computational challenge in working with kernels on manifolds. In practice one can rely on two different sets of techniques. The first one is to **discretize/mesh the manifold and solve for the eigenpairs of a large sparse matrix** as is done in [8], although this inevitably introduces numerical errors on top of the asymptotic contraction rates that we present here. The second way is to rely on **algebraic techniques based on symmetries which makes an exact computation possible for a large class of manifolds** of interest - see [2,3]. *".. 
a function may have a small Lipschitz constant wrt manifold distance, but a huge Lipschitz constant wrt Euclidean embedding distance. Intuitively, why is this not reflected in the contraction rates? .."* * This is an *extremely relevant point*: curvature and more general distortions between geodesic and Euclidean distances introduce undesirable behaviors of the extrinsic processes when compared to the intrinsic ones. In our analysis we are however incorporating the **same smoothness assumptions** on the regression function in both the intrinsic and the extrinsic case, and nonetheless find **equivalent contraction rates** for both priors: the crucial fact is that the contraction rates presented here are all asymptotic. Hence, we strongly suspect a **very bad dependence of the constant on the curvature and embedding** in the extrinsic case, which would explain the differences in performance that we observe in the small-data regime - in fact, we believe our work provides an **excellent motivation to initiate future work on non-asymptotic contraction analysis**, which we believe may be needed to capture these differences. *".. constant C in the theorems, is this a universal constant? Or does it depend polynomially/exponentially on any problem parameters?"* * Thank you for pointing that out! As discussed above, the value of $C$ is precisely what makes the extrinsic and intrinsic processes non-equivalent in practice when it comes to performance. An inspection of the proof shows that $C$ depends on (1) $d,D$ (for the extrinsic prior), (2) the prior process hyperparameters, (3) $\beta$, (4) $M$, (5) the distribution $p_0$ over $x$-values, (6) $\sigma_\epsilon^2$, and (7) the Sobolev/Hölder norms of the true regression function $f_0$. As mentioned in line 295, the constant in the case of the extrinsic process is expected to be bigger than the one for the intrinsic process, especially when the distortion between geodesic and Euclidean distance is high.
We suspect in particular that the operator norms of the trace and extension operators between Sobolev spaces are large in this case. We will add further clarifications on the constant in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. My questions are adequately addressed, and I have increased my score to 7.
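The spectral construction discussed in this exchange (truncating a sum over Laplace-Beltrami eigenpairs) can be sketched in a case where the eigenpairs are known in closed form: the circle $S^1$, whose Laplacian eigenvalues are $n^2$ with cosine/sine eigenfunctions. The Matérn-type spectral weight used below is one common convention and may differ from the paper's exact normalization; this is an illustrative toy, not the paper's implementation.

```python
import numpy as np

def truncated_matern_kernel_circle(x, y, nu=1.5, kappa=1.0, n_terms=50):
    """Truncated Matern-style kernel on the circle S^1.

    The Laplace-Beltrami eigenpairs on S^1 are known exactly: eigenvalue n^2
    with eigenfunctions cos(n.) and sin(n.). Each pair is weighted by a
    Matern-type spectral density (2*nu/kappa^2 + lambda)^-(nu + d/2).
    Illustrative sketch only -- normalization conventions vary.
    """
    d = 1  # intrinsic dimension of the circle

    def phi(lam):
        return (2.0 * nu / kappa**2 + lam) ** (-(nu + d / 2))

    k = phi(0.0)  # constant eigenfunction term
    for n in range(1, n_terms + 1):
        # cos(n x) cos(n y) + sin(n x) sin(n y) = cos(n (x - y))
        k += 2.0 * phi(float(n * n)) * np.cos(n * (x - y))
    return float(k)
```

Because every retained term contributes a positive multiple of an outer product of eigenfunctions, the resulting kernel matrix is positive semi-definite by construction, which is one reason truncation is a safe approximation; on a general mesh (e.g. the dragon), the closed-form eigenpairs above would be replaced by numerically computed eigenpairs of a sparse Laplacian matrix.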
Summary: This paper investigates the theoretical properties and performance of Gaussian processes in machine learning, particularly when applied in geometric settings such as Riemannian manifolds. It compares intrinsic and extrinsic methods, with the former directly formulated on the manifold of interest and the latter requiring a higher-dimensional Euclidean space embedding. The research derives posterior contraction rates for three primary geometric model classes and shows that all three can lead to optimal procedures, given certain conditions. Empirical experiments support these theoretical findings, demonstrating better performance by intrinsic models in small-data regimes. Strengths: NA Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the accurate summary of our work and for your review! If you have any further questions or comments, please feel free to post them - we are happy to provide further information or clarification where needed. If not, thank you very much for your time in reading our work!
Rebuttal 1: Rebuttal: We would like to thank the referees for their summaries and pertinent observations that will help us to improve the final version of the paper. Most of the comments were about the format, additional references, identifiability and insights regarding the definitions of the different processes; we have responded to each reviewer in detail.
NeurIPS_2023_submissions_huggingface
2023
Towards Higher Ranks via Adversarial Weight Pruning
Accept (poster)
Summary: This paper proposes a novel Rank-based PruninG (RPG) method for network pruning that maintains the ranks of sparse weights in an adversarial manner, leading to high-rank topology and improved performance. The proposed method is evaluated on various datasets and tasks, including image classification, object detection, and semantic segmentation, and compared to state-of-the-art pruning methods. The experimental results show that the proposed RPG method outperforms the existing methods in terms of accuracy and efficiency. The paper also provides insights into the importance of rank preservation in network pruning and the potential of adversarial training for improving the performance of pruned networks. Strengths: + This paper is well-written and organized, with clear explanations of the proposed method and experimental results. The authors provide detailed insights into the importance of rank preservation in network pruning and the potential of adversarial training for improving the performance of pruned networks. + This paper proposes a novel Rank-based PruninG (RPG) method that maintains the ranks of sparse weights in an adversarial manner, leading to high-rank topology and improved performance. This approach is original and creative, as it combines the ideas of rank preservation and adversarial training to address the limitations of existing pruning methods. + This paper provides a comprehensive evaluation of the proposed RPG method on various datasets and tasks, including image classification, object detection, and semantic segmentation. The experimental results show that the proposed method outperforms the existing methods in terms of accuracy and efficiency, which demonstrates the quality and effectiveness of the proposed approach. Overall, this paper makes a high-quality and significant contribution to the field of network pruning, with original ideas, rigorous evaluation, clear explanations, and practical implications.
Weaknesses: - One potential weakness of the proposed Rank-based Pruning (RPG) method is that it may only achieve better performance than existing methods when the sparsity rate is high. As mentioned in the paper, the RPG method outperforms existing methods such as WoodFisher, PowerPropagation, and AC/DC at sparsity rates of 90%, 95%, and 98%. However, at lower sparsity rates such as 80%, the RPG method is slightly lower than WoodFisher in terms of ImageNet accuracy. This suggests that the RPG method may not be as effective at lower sparsity rates, and may require higher sparsity rates to achieve better performance than existing methods. This could be a limitation for some applications where lower sparsity rates are preferred due to memory or computational constraints. - The proposed method can only be applied to weight pruning, which is slower than filter pruning when their pruning rates are the same. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: This paper mentions that the proposed method is based on the rank of the weights, but it would be helpful to have more details on how exactly the rank is determined and how it affects the pruning process. Additionally, are there any limitations or potential drawbacks to using rank-based pruning compared to other pruning methods that use different criteria for selecting weights to prune? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer GHQR, Thank you very much for your review. Here are our responses to the weaknesses and questions you raised: 1. *The RPG method is not performant at low sparsities, impairing low-sparsity applications*: We admit the limitation of the RPG pruning method under low-sparsity regimes, because the rank-collapse effect is not manifested on low-sparsity networks. But we hold that highly-sparse networks are more valuable in terms of application. As shown in the ResNet-50 example (cf. Table 3), intermediate-sparse models achieve little CPU acceleration; the speedup is significant only for models at high sparsities. Hence, we can find a better accuracy-speed trade-off on highly-sparse networks, which demonstrates the application value of the proposed RPG method. 2. *About how the rank is determined*: Due to page limits, we put details of rank determination in Appendix B.2. Considering that weights of different layers vary significantly in shape, we designed an automatic selection mechanism for ranks. 3. *Any limitations or potential drawbacks to using rank-based pruning*: Our method is targeted at the "rank collapse" effect that only occurs on highly-sparse networks. It has limited effect on models with relatively low sparsity, because the "rank collapse" effect is not so severe on low-sparsity networks. Sincerely, Authors
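The "rank collapse" effect discussed in this exchange can be made concrete with a small toy experiment (ours, not the paper's RPG procedure): the numerical rank of a weight matrix is the count of singular values above a relative threshold, and a structured zeroing pattern collapses it in a way an unstructured mask need not.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # dense weights: full rank almost surely

def numerical_rank(A, tol=1e-8):
    """Count singular values above a relative threshold."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Unstructured magnitude pruning: zero the 90% smallest-magnitude entries.
thresh = np.quantile(np.abs(W), 0.90)
W_unstructured = np.where(np.abs(W) >= thresh, W, 0.0)

# Extreme structured pattern: zero out half the rows entirely.
W_structured = W.copy()
W_structured[:32, :] = 0.0

print(numerical_rank(W), numerical_rank(W_unstructured), numerical_rank(W_structured))
```

Zeroing 32 of 64 rows provably caps the rank at 32, whereas the unstructured mask spreads surviving entries across all rows and columns; the paper's observation is that at extreme sparsities even unstructured criteria drift toward such rank-destroying patterns.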
Summary: This work proposes a novel objective for performing element-wise pruning of DNN models. The work identifies the loss of weight rank as the key factor influencing the performance of DNNs when pruned to high sparsity. This issue is tackled by including a rank loss in the pruning criteria, so that weight elements contributing to the weight matrix rank are preserved. Experimental results show the proposed method outperforms previous work under high sparsity. Strengths: 1. The motivation of preserving high rank in element-wise pruning is novel, and it motivates the proposed method well 2. The proposed method is well formulated and is technically sound 3. Adequate ablation study and extensive experiments are conducted, showing promising results Weaknesses: One major concern of this paper is whether aiming for higher element-wise sparsity is useful for the deployment of efficient DNN models. As mentioned in the limitation, element-wise sparsity is not well supported on GPU. Even on CPU, as shown in Tab. 3, only a 2x speedup can be achieved with 95% sparsity, at the cost of a 3% accuracy drop. Meanwhile, a 50% structural sparsity can lead to a 4x FLOPs reduction, potentially a 3x speedup, with less accuracy drop. This would indicate that aiming towards high-rank element-wise sparsity is not as useful as directly having structural sparsity. I would suggest the authors provide a comparison of the accuracy-speed tradeoff of the proposed method and previous structural pruning results, to show the importance of the proposed method. Minor issue: It would be better to have pseudocode for the pruning procedure performed in Sec. 2.5 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weakness Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitation is adequately discussed.
No potential negative social impact is observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer P9XJ, Thank you very much for your review. Here are our responses: Q1. *The author should provide a comparison of accuracy-speed tradeoff of the proposed method and previous structural pruning results.* A1: Thanks for your suggestion. Here we attach a table comparing the CPU speedup of one of the latest structural baselines, TPP [Wang et al.], with RPG. The table shows that the gap between the two methods is narrow.

| Method | Accuracy | Speed |
| ------ | -------- | ----- |
| RPG    | 76.58    | 1.08x |
| TPP    | 76.44    | 1.13x |
| RPG    | 75.63    | 1.56x |
| TPP    | 75.60    | 1.45x |
| TPP    | 74.51    | 1.86x |
| RPG    | 73.89    | 1.99x |

Besides, while we admit structural baselines achieve good acceleration on GPU (while unstructured sparsity has little effect there), unstructured-sparse networks are more compact in terms of size. The size advantage of unstructured sparsity allows applications in storage- or bandwidth-limited scenarios. Q2. *It would be better to have pseudocode for the pruning procedure performed in Sec. 2.5.* A2: Thanks for your advice. The pseudocode is included in Appendix B.1, p. 4 due to page limits. We will consider moving it to a prominent place in a later revision. Sincerely, Authors References: [Wang et al.] Trainability Preserving Neural Pruning. ICLR 2023. --- Rebuttal 2: Title: Thanks for the response Comment: I would like to thank the authors for the response. I would suggest the authors include this result in the revision as a discussion of the limitation of the proposed method. Meanwhile, I agree with the authors' motivation that unstructured sparsity is still useful in memory-bounded scenarios, and the proposed method serves as a good approach to achieve extreme sparsity. One suggestion I would have for the authors is to include additional experiments performing RPG on top of a structurally-sparse model.
As structural pruning has already (presumably) removed unnecessary structures, RPG will be much more effective, as it can preserve the remaining important structure. I assume RPG should outperform other unstructured pruning methods when applied to already-compressed models. --- Rebuttal Comment 2.1: Title: Additional Experiments on Structurally-Sparse Model Pruning Comment: Thank you very much for your suggestions; we tried to conduct the experiments as suggested despite the limited time left for discussion. We conducted 90\%-sparse pruning experiments on TPP [Wang et al.] structurally-sparse ResNet-50 models, and we compared our RPG pruning method with AC/DC [Peste et al.], a competitive pruning baseline. Experimental settings are kept the same as our ImageNet experiments, and the results are shown in the table below. RPG outperforms AC/DC on a structurally-pruned model.

| Methods | Accuracy |
| ---------------------------- | -------- |
| TPP [Wang et al.] (Baseline) | 74.51 |
| AC/DC [Peste et al.] | 71.33 |
| RPG (Ours) | **71.77** |

[Wang et al.] Trainability Preserving Neural Pruning. ICLR 2023. [Peste et al.] AC/DC: alternating compressed/decompressed training of deep neural networks. NeurIPS 2021.
Summary: This paper proposes a new weight pruning method for compressing Convolutional Neural Networks (CNNs) called Rank-based PruninG (RPG). The RPG method consists of two steps: first, the low-rank approximation error for the weight matrices is minimized using singular value decomposition, and second, the weight matrices are pushed away from their low-rank approximation to maximize their distance. The authors demonstrate that the RPG method outperforms other state-of-the-art methods in terms of accuracy and compression rate on various datasets and tasks. Strengths: Originality: Pruning weights while maintaining the ranks of sparse weights in an adversarial manner is original and has not been proposed before, which is different from other pruning methods that focus on removing individual weights or neurons. Quality: The experimental results show that the RPG method achieves higher accuracy and compression rate than the previous baselines. The authors also provide insights into the mechanism of the RPG method and its impact on the network structure, which enhances the quality of the paper. Clarity: The paper is well-written and easy to understand, which enhances its clarity. The authors provide clear explanations of the proposed method and its implementation. They also provide detailed experimental results and analysis. Significance: When the pruning rate is high, traditional pruning methods can lead to a structured pattern in the remaining weights, which limits their performance. The proposed rank-based pruning method maintains the ranks of sparse weights in an adversarial manner, which ensures that the pruned network retains its structure and performance even at high pruning rates. Weaknesses: 1) The adversarial optimization involves a min-max problem that requires additional computation and optimization steps, which can increase the training time and complexity. 
Similarly, SVD is a computationally expensive matrix-factorization operation, which can also increase the training cost. Moreover, the proposed method requires the computation of low-rank approximations and the search for the best rank-k approximation, which can further increase the training cost. These additional computations and operations can make the proposed method less practical for large-scale networks or real-time applications. 2) Singular value decomposition (SVD) is used in the proposed method to compute the rank and low-rank approximations of weight matrices. In contrast, Canonical Polyadic Decomposition (CPD) and Tucker Decomposition are other methods for decomposing tensors into lower-rank components. It would be better to explain why SVD was chosen to estimate the rank of the weights. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer jsvG, Thank you very much for your review. Here are our responses: Q1. *Additional computations and operations can make the proposed method less practical for large-scale networks or real-time applications.* A1: In fact, these extra overheads only account for a small proportion of the training cost. Firstly, the extra cost is amortized because the procedures are carried out once every one hundred iterations; secondly, weight SVD, which is much more costly than the adversarial loss calculation and the best rank-k search, accounts for less than 1\% of the whole pruning cost both in terms of time and FLOPs (according to Sec. 3.6). In a nutshell, the additional computations are minimal and won't impact the applicability of RPG to large models and other applications. Q2. *Why choose SVD instead of CPD or Tucker Decomposition?* A2: Unlike Singular Value Decomposition of matrices, Canonical Polyadic Decomposition and Tucker Decomposition are decomposition methods for high-order tensors (in fact, CPD can be viewed as a high-order extension of SVD). Matrix-form (2-dimensional) weights (rather than high-order tensors) are the most general and widely-applied form in all sorts of neural networks. Hence, we adopt SVD as the decomposition method. Sincerely, Authors
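The matrix-form view in A2 can be sketched in a few lines: a 4-D convolution kernel is reshaped into a matrix, and its best rank-k approximation is obtained via SVD (Eckart-Young). Shapes, names, and the rank k below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "weight": a conv kernel reshaped to 2-D, since SVD (unlike CPD or
# Tucker decomposition) operates on matrices rather than high-order tensors.
kernel = rng.standard_normal((64, 32, 3, 3))
W = kernel.reshape(64, -1)   # (out_channels, in_channels * kh * kw)

def best_rank_k(W, k):
    """Best rank-k approximation of W in Frobenius norm (Eckart-Young)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * S[:k]) @ Vt[:k]

# Distance from W to the nearest rank-8 matrix; a sparsity pattern that
# drives this distance toward zero makes the weights effectively low-rank,
# i.e. structured.
dist = np.linalg.norm(W - best_rank_k(W, k=8))
```

By Eckart-Young, `dist` equals the square root of the sum of squared singular values beyond the k-th, which is how the truncation error is typically computed.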
Summary: This paper proposes a novel unstructured pruning method, trying to maximize the matrix rank while removing as many model weights as possible. The paper first demonstrates the phenomenon that unstructured pruning may degrade to structured pruning at large sparsity ratios, which is closely related to the fact that the pruned weight matrices become low-rank matrices after many weights are set to zero. Thus, the objective of the proposed method is, on the one hand, to minimize the task-related loss, and on the other, to maximize the rank of the pruned weight matrices, which forms a min-max problem. This min-max problem is then integrated into model pruning via a matrix rank-boosting regularization term. With the gradual pruning framework, the proposed method (RPG) is examined empirically on the CIFAR-10, ImageNet, and COCO datasets with CNN and ViT model architectures. The results show the effectiveness of RPG. Strengths: This paper studies unstructured pruning from the perspective of rank maintenance, which is very novel. The authors made very informative and helpful illustrations to help the readers understand this paper without difficulty. Therefore, the presentation is also very good. Extensive experiments were conducted on different tasks, and the results look good. In summary, this work is a very good attempt to connect model pruning with weight ranks from a novel perspective. Weaknesses: I list some weaknesses of this work from different perspectives. I will consider raising my score if they are addressed properly. 1. [Motivations] The motivation of this paper of "removing the structuring patterns in the unstructured pruning" does not seem very direct to me. In general, the results of unstructured pruning currently cannot be directly used for hardware acceleration.
Therefore, one important research direction is to generate structured masks from unstructured pruning results, such as [ICML22] Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets. Therefore, I wonder if the motivation of this paper will make unstructured pruning less adaptable to hardware? 2. [Presentations] I would suggest the authors remove some discussions on SVD, which are very basic knowledge in linear algebra. In contrast, more important information in the Appendix can be brought back to the main paper. 3. [Method] It is a bit vague to me how the gradients are dealt with when Eq. (2.6) is treated as the regularization term. Is $\text{Trunc}(U\Sigma V^T)$ treated as a constant value during back-propagation? Also, I suggest the authors explicitly write down the $\mathcal{L}(\text{task})$ as well, which shows how the sparsity of the model weights is imposed. 4. [Method] In Eq. (2.6), the model weights are denoted by one variable $\mathbf{W}$. However, as we know, the model weights may contain many layers and many types (convolutional kernels, fully connected layers, etc.). Do the authors sum them up in implementation using the same coefficient? Or are there different weights assigned to different layers? It is not clear in the paper regarding this point. 5. [Experiments] The baselines used in Tab. 1 and Tab. 2 are not consistent. Are there any specific reasons? The same issue is also spotted in Tab. 5, where in the last row block the method "SViTE" is missing, but the authors stated "For fair comparison, all pruning experiments follow the setting of SViTE." (Line 351). 6. [Experiments] It would help the readers to better compare the results if the authors can include the computational (training) time of different methods. 7. [Minor] There are unnecessary margins under Tab. 2 and Fig. 3. Please consider removing them and improving the layout of the figures. 8.
[Minor] Line "is illustrated in" -> "**are** illustrated in" Technical Quality: 3 good Clarity: 3 good Questions for Authors: Below is a summary of my comments in the section "Weaknesses". 1. What is the relationship between the rank of the pruned weight matrix and hardware acceleration? Will unstructured pruning with higher rank impair further acceleration on hardware? 2. How are the SVD-reconstructed terms dealt with during back-propagation? 3. What is the formulation of $\mathcal{L}(\text{task})$? 4. How are the weights of multiple layers processed to compute Eq. (2.6)? 5. Why are the baselines not consistent within one table/across different tables on the same tasks? 6. How does the training efficiency of different methods compare? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not have additional comments on the limitation of this work. Please refer to the "Weaknesses" and "Questions" sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer yERC, Thank you very much for your suggestions. Here are the answers to the questions you raised: Q1. *RPG will make unstructured pruning less adaptable to hardware.* A1: Sorry for the ambiguity in the paper. "Structured pattern" does not necessarily mean acceleration-friendly structural sparsity (which is only a special case of a structured pattern); in the context of RPG, "structured pattern" is reflected by the low-rank characteristics of the weights. The RPG method mainly focuses on improving weight ranks instead of the actual removal of structural sparsity. We conducted speed-test experiments and compared RPG with the competitive pruning baseline AC/DC [Peste et al.]. Results show that, despite higher ranks, RPG won't affect the hardware adaptability of unstructured-sparse networks.

| ResNet-50 | Acc. | Speedup | Rank |
| --------- | ----- | ------- | ----- |
| Dense | 76.80 | 1.00x | 263.5 |
| AC/DC 95% | 73.14 | 1.66x | 234.2 |
| RPG 95% | 73.89 | 1.99x | 262.2 |

Q2. *How are the SVD-reconstructed terms dealt with during back-propagation?* A2: The term $\text{Trunc}(U\Sigma V^T)$ is treated as a constant value. The SVD-reconstructed terms are detached during loss calculation. Q3. *What is the formulation of $\mathcal{L}(task)$?* A3: $\mathcal{L}(task)$ is exactly the training loss for the original dense network. For instance, $\mathcal{L}(task)$ is the cross-entropy loss for ResNet-50/DeiT ImageNet classification. Q4. *How are the weights of multiple layers processed to compute Eq. (2.6)?* A4: The losses for each layer are summed up together. Q5. *Inconsistency of baselines in Tab. 1 and Tab. 2; missing SViTE in Tab. 5.* A5: The inconsistency between Tables 1 and 2 is due to the lack of official reports or code. Sparse ResNet-50 for ImageNet classification is a commonly-used weight pruning benchmark (Table 2), but only a handful of papers report CIFAR results (Table 1). Experiment settings (e.g.
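Answers A2-A4 suggest a loss of roughly the following shape: the ordinary task loss plus a rank-boosting term summed over layers, with the SVD reconstruction held constant. A minimal NumPy sketch under those stated assumptions (the coefficient `lam` and the function names are hypothetical, not the authors' code):

```python
import numpy as np

def rank_boost_reg(W, k):
    """Negative distance from W to its best rank-k approximation.

    Minimizing this term pushes W *away* from low-rank matrices.  Per A2,
    the reconstruction Trunc(U @ diag(S) @ Vt) is detached, i.e. treated
    as a constant during back-propagation in an autograd framework.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_k = (U[:, :k] * S[:k]) @ Vt[:k]   # would be detached in practice
    return -np.linalg.norm(W - W_k)

def total_loss(task_loss, weights, lam=1e-3, k=8):
    # Per A3, task_loss is the dense network's ordinary training loss
    # (e.g. cross entropy); per A4, the regularizer is summed over layers
    # with a single coefficient.
    return task_loss + lam * sum(rank_boost_reg(W, k) for W in weights)
```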
models, training epochs) also vary among papers on CIFAR. Hence, we have to follow the setting of one reliable and competitive baseline, namely ProbMask [Zhou et al.], for the CIFAR-10 experiments to guarantee fair comparison. In spite of this, we still tried to include competitive pruning baselines; e.g., we re-implemented AC/DC [Peste et al.] on CIFAR. We attempted to re-implement some other strong baselines but encountered problems (e.g. incomplete open-sourcing; closed-source codebases even though we requested them via email; failure of replication). The method SViTE is missing in the last row block because the SViTE paper [Chen et al.] only reports values at relatively low sparsities (50/60\% sparsity). We compared our RPG method to the more powerful baseline AC/DC to demonstrate the strong capability of our method in the high-sparsity regime. For the 80\% sparsity experiments, we still follow the same setting as SViTE except for the pruning rate. Q6. *Authors should include the computational (training) time of different methods.* A6: We provide the running time of some pruning algorithms in the table below:

| Baseline | Acc. | Time |
| --------- | ----- | ------- |
| Dense | 76.80 | 821 min |
| AC/DC 90% | 75.03 | 861 min |
| RPG 90% (Ours) | 75.63 | 866 min |

Notably, we remark that the above statistics can only give a rough estimate, because the actual running time depends on many factors unrelated to the algorithm itself, including how DataParallel is implemented, what supporting packages the codebase uses, et cetera. The statistics show that the RPG method is not significantly more time-costly. Additionally, thank you for your advice on layouts and typos. We will amend them in later revisions. Sincerely, Authors References: [Chen et al.] Chasing sparsity in vision transformers: An end-to-end exploration. NeurIPS 2021. [Peste et al.] AC/DC: alternating compressed/decompressed training of deep neural networks. NeurIPS 2021. [Zhou et al.]
Effective sparsification of neural networks with global sparsity constraint. CVPR 2021.
NeurIPS_2023_submissions_huggingface
2023
Minimax Risks and Optimal Procedures for Estimation under Functional Local Differential Privacy
Accept (poster)
Summary: The authors consider a local version of functional DP / Gaussian DP, and establish minimax rates of convergence for mean estimation and density estimation under this privacy paradigm. They highlight how functional/Gaussian DP is more conducive to LDP than approximate DP, given its tight composition properties. They demonstrate how properties of the tradeoff function play a critical role in the minimax risk. Strengths: This is a timely work given the recent attention GDP has received. Understanding minimax rates for LDP in such a setting seems like an important contribution. The mathematical results are thorough, and translating properties of the tradeoff function into minimax rates is interesting. Weaknesses: A seemingly glaring problem with results such as Corollary 2 is that the privacy level, $\mu$, is not made explicit in the rate. However, the constants in the theorem do depend on $\mu$. The results of Duchi et al. make the dependence on epsilon explicit, and it is quite substantial (i.e. taking substantially smaller values of $\epsilon$ will substantially reduce the effective sample size). The authors then discuss how surprising it is that $\epsilon$-LDP and $\mu$-GLDP would have the same minimax rate, but the authors didn’t show this since their rate does not include $\mu$. I find this unfortunate, as otherwise the paper is quite interesting, but the final comparison with $\epsilon$-LDP falls flat and unfinished. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: A seemingly glaring problem with results such as Corollary 2 is the privacy level, $\mu$, is not made explicit in the rate. However, the constants in the theorem do depend on $\mu$. The results of Duchi et al. make the dependence on epsilon explicit, and it is quite substantial (i.e. taking substantially smaller values of epsilon will substantially reduce the effective sample size). The authors then discuss how surprising it is that LDP and GLDP would have the same minimax rate, but the authors didn’t show this since their rate does not include $\mu$. I find this unfortunate, as otherwise the paper is quite interesting, but the final comparison with LDP falls flat and unfinished. **A**: We appreciate the reviewer's thoughtful comments and insightful feedback. As demonstrated in the paper, both LDP and GLDP share the same minimax rate concerning $n$. However, their equivalence does not extend to privacy constraints in our theoretical results. In response to the reviewer's suggestion, we have undertaken additional calculations to determine the minimax rate while incorporating privacy constraints. The derived bounds are as follows: \begin{equation*} O\left(\left(ne^{\mu^2}\right)^{-\frac{2(k-1)}{2k}\,\text{or}\,-\frac{2\beta}{2\beta+2}}\right)\leq\mathcal{R}\leq O\left(\left(n\mu^2\right)^{-\frac{2(k-1)}{2k}\,\text{or}\,-\frac{2\beta}{2\beta+2}}\right) \end{equation*} for univariate mean estimation and nonparametric density estimation, respectively. This analysis highlights that our approach does not yield a unified minimax rate in terms of $\mu$. Furthermore, the observation that $e^{\mu^2}\longrightarrow1$ as $\mu\longrightarrow0$ contrasts with the behavior of the lower bounds presented in Duchi et al. (2018), which tend towards infinity as $\epsilon\longrightarrow0$. Hence, further efforts are required to establish a rigorous minimax rate in terms of privacy constraints for local FDP.
--- Rebuttal Comment 1.1: Comment: Thank you for the reply, I think this is an interesting point.
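For readers wanting a concrete GLDP mechanism: Dong et al.'s Gaussian mechanism with noise scale sensitivity/$\mu$ satisfies $\mu$-GDP, applied here per record for a local protocol. A minimal sketch for univariate mean estimation (the clipping bound, seeds, and data distribution are illustrative assumptions; this is not the paper's minimax-optimal procedure):

```python
import random
import statistics

NOISE_RNG = random.Random(0)  # fixed seed so the sketch is reproducible

def privatize(x, B=1.0, mu=1.0):
    """One user's mu-GDP report: clip to [-B, B] (sensitivity 2B), then
    add Gaussian noise with sigma = 2B / mu (the Gaussian mechanism)."""
    clipped = max(-B, min(B, x))
    return clipped + NOISE_RNG.gauss(0.0, 2.0 * B / mu)

# Server side: average the privatized reports to estimate the mean.
data_rng = random.Random(1)
data = [data_rng.gauss(0.3, 0.2) for _ in range(20000)]
est = statistics.fmean(privatize(x) for x in data)
# est concentrates around the true mean 0.3 as n grows.
```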
Summary: The authors study the problems of mean estimation and density estimation under functional local differential privacy (FLDP). In particular, they are interested in deriving minimax (rate-) optimal estimation procedures and privacy mechanisms for these problems. Their results include analogues to Le Cam's bound and Assouad's bound for the FLDP setting. They then specialize these results to the previously mentioned problems and derive minimax lower bounds as corollaries. They also provide algorithms which match the rates specified by their lower bounds. Finally, numerical experiments confirm empirically that their methods achieve better privacy/utility tradeoffs than existing methods which enforce LDP. Strengths: This work is the first to address minimax estimation rates for functional local DP. Functional DP was introduced recently and has received a great deal of attention, and understanding its properties and advantages is of great interest to the community. As such, this paper offers a relevant and novel contribution. I did not check all of the proofs, but those I did were technically sound. The results in the paper are quite extensive. The authors not only derive the minimax rate for two fundamental estimation problems (univariate mean estimation and density estimation), but also give algorithms which match the minimax rate. In addition to these strong theoretical results, their algorithms also offer practical improvement over existing alternatives, as can be seen from their empirical evaluation. The paper is also very well written. I found the background section to be a very helpful introduction to both functional local DP and private minimax estimators for a non-expert. The authors also give helpful intuition and interpretations for many of their technical results, which make this highly technical paper much easier to parse. 
Weaknesses: While the authors did a good job providing interpretation for many of their results to make the paper accessible to non-experts, some results still seemed opaque to me. For instance, the instantiation of the Le Cam's and Assouad's bounds provided some intuition for how these results could be useful, but I did not grasp the intuition for either of the more general results (Theorems 1 & 3). In particular, it seems like for large n, the lower bound in both of these theorems will actually be negative and therefore vacuous. Adding some further interpretation of the general bounds would be helpful. The authors also make a point that their results smoothly interpolate between private and non-private regimes, a feature which is lacking from existing analyses. This is of course a strength of the paper (mentioned above), but I did not understand the connection with $\kappa$ in the general lower bounds. In particular, I see that the minimax optimal rates for mean estimation are recovered for $\kappa = 0,1$, but it was not clear to me why these values of $\kappa$ correspond to private/non-private settings. A more thorough explanation of this (probably in terms of the relationship between $\kappa$ and the tradeoff function $f$) would be helpful. Lastly, the experimental section is fairly sparse. Since this is a primarily theoretical paper, I think this is a fairly minor point. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Can the authors provide more interpretation of the general versions of Le Cam's and Assouad's bounds (Theorems 1 & 3)? Even some simple intuition, such as when we would expect the lower bound to be non-negative, would be helpful. 2. Can you provide more context for the parameter $\kappa$ and how it interpolates between the non-private and $\epsilon$-LDP settings? 3. Can you provide some intuition as to why your techniques for univariate mean estimation can't be easily extended to higher dimensions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have a nice discussion of the paper’s limitations (as well as possible directions for future work) at the end of their conclusion section. The main limitations include some restrictive technical conditions, as well as the fact that their mechanisms only achieve the minimax rate in limited settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and insightful feedback. **Q1:** Can the authors provide more interpretation of the general versions of Le Cam's and Assouad's bounds (Theorems 1 & 3)? Even some simple intuition such as, when would we expect the lower bound to be non-negative, etc. would be helpful. **A1:** The general form of Le Cam's and Assouad's inequalities could indeed become trivial, i.e., the risk bound can be negative. Taking Le Cam's method as an example, a negative bound can be prevented by carefully choosing the two distributions $P_1$ and $P_2$ with "similar" densities (i.e., smaller total variation) for a fixed value of $\eta$. A proper choice of these distributions will adequately represent the innate challenge of the given estimation problem quantified by the minimax risk. The choice of $\eta$ in relation to $n$ is made deliberately to ensure that the lower bound remains positive, and it consequently influences the rate of the lower bound. The lower bound itself can be interpreted as follows. For a given parameter difference $\eta$, as the dissimilarity between the densities (TV) decreases, the problem becomes difficult so the lower bound increases, and vice versa. **Q2:** Can you add more context for the parameter $\kappa$ and how it interpolates between the non-private and $\epsilon$-LDP settings? **A2:** The role of the parameter $\kappa$ can be understood in the context of the contraction coefficient $c_{f,\kappa}=(1-\kappa)^{1-\kappa}\kappa^\kappa\int (\kappa+1)t^{\kappa-1}\delta_f(t)dt$, which involves $\delta_f(t)$ and $\kappa$. (Recall: $f$-FLDP is equivalent to $(\epsilon,\delta_f(e^\epsilon))$-LDP.) Our minimax bounds are non-trivial only when $c_{f,\kappa}$ is finite, and this necessitates $\delta_f(t)$ to decrease faster than $t^{-\kappa}$. Consequently, a larger value of $\kappa$ (close to 1) requires a smaller value of $\delta_f(t)$, aligning with a more private setting.
On the contrary, in a less private setting, $\delta_f(t)$ grows, compelling $\kappa$ to decrease in order to uphold the condition $c_{f,\kappa}<\infty$. **Q3:** Can you provide some intuition as to why your techniques for univariate mean estimation can't be easily extended to higher dimensions? **A3:** The challenge in extending to the high-dimensional mean estimation problem is that we rely on the *probability* that the likelihood ratio is bounded, while in the case of $\epsilon$-LDP, it regards the pointwise-bounded nature of the likelihood ratio between outputs of mechanisms. This discrepancy introduces a challenging technical aspect, as the meticulous selection of distributions highlighted in A1 must be carried out for each component. Consequently, this entails managing multiple distributions in high-dimensional spaces. We suggest this problem as future research. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their thorough response, I believe it has improved my understanding on several points. I maintain my score and hope to see this paper accepted.
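The loss of distinguishability described in A1 — privatization shrinks the total-variation term in Le Cam-type bounds — can be illustrated numerically with randomized response, the canonical $\epsilon$-LDP mechanism. This standard example only illustrates the contraction phenomenon, not the paper's FDP mechanisms:

```python
import math

def rr_output_prob(p, eps):
    """P(report = 1) when a Bernoulli(p) bit passes through eps-LDP
    randomized response (truthful with probability e^eps / (1 + e^eps))."""
    q = math.exp(eps) / (1.0 + math.exp(eps))
    return p * q + (1.0 - p) * (1.0 - q)

# Total variation between two Bernoulli laws is |p1 - p2|.
p1, p2, eps = 0.6, 0.4, 1.0
tv_in = abs(p1 - p2)
tv_out = abs(rr_output_prob(p1, eps) - rr_output_prob(p2, eps))

# The outputs are strictly less distinguishable: TV contracts by exactly
# (e^eps - 1) / (e^eps + 1) < 1, which loosens Le Cam-type lower bounds.
contraction = (math.exp(eps) - 1.0) / (math.exp(eps) + 1.0)
```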
Summary: The paper proposes a local version of functional differential privacy (FDP), and finds the minimax rate of mean estimation and non-parametric density estimation under local FDP. This paper amounts to an extension of the main results of Duchi et al. (2018) from epsilon-LDP to local FDP. Strengths: Originality: the paper is the first to consider extending functional DP to the local setting, to the best of my knowledge. Quality and clarity: the technical claims are carefully proved and concisely explained. Significance: because FDP offers many advantages over (epsilon, delta)-DP in the central setting, it is useful to understand whether some advantage also exists in the local setting. The comparison between mu-GDP and epsilon-LDP in this paper provides some (perhaps negative?) insight in this regard. Weaknesses: * Key concepts such as epsilon-LDP and local FDP are never formally defined in the main paper. As the definition of LDP is not entirely a trivial extension of central DP to begin with (for example, see Duchi et al. (2018)'s treatment of this concept), it is a risky choice to expect readers to extrapolate the (central) FDP definition to local FDP, as understanding the definition of local FDP is essential for reading the rest of the paper. * High-level contribution of this paper. While I appreciate the neat mathematical arguments and results on local FDP, this paper leaves me wondering why local FDP is a worthwhile notion of privacy to consider. The original FDP paper by Dong et al. provides some convincing arguments for FDP, but this paper seems to be much more interested in deriving minimax rates as a mathematical exercise, without critically thinking about the underlying notion of privacy. I wish the paper had devoted some discussion to this question either before or after its mathematical investigations. From a mathematical point of view, the generalizations of epsilon-LDP Le Cam's and Assouad's inequalities by Duchi et al. 
can be interesting in their own right, but these generalizations do not suggest any qualitative difference between epsilon-LDP and local FDP beyond different contraction constants. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * The role of lower bounds varying continuously with $\kappa$. If ultimately the matching upper and lower bounds only needed the $\kappa = 1$ case, what is the purpose of considering this continuous family of lower bounds? For those trade-off functions satisfying Lemma 1, does $\kappa = 1$ always imply the best minimax rate? Do you know of any trade-off function $f$, possibly violating Lemma 1, such that choosing a $\kappa$ strictly between 0 and 1 is necessary? * What technical innovations are needed to extend the epsilon-LDP versions of Le Cam's inequality and Assouad's inequality to local FDP? * From a DP practitioner's point of view, why might one want to consider local FDP? As the paper reveals, there is hardly evidence that local FDP offers better statistical utility, and the cognitive burden of describing/understanding local FDP is certainly higher than the epsilon-LDP case. Additionally, for non-Gaussian tradeoff functions, it may not be easy to come up with an appropriate privatization method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Section 5 has comprehensively discussed the limitations from a technical point of view. Discussions on more foundational questions, such as the usefulness of extending FDP to the local setting, or even the technical innovations required, if any, for extending epsilon-LDP theory to local FDP, would be much appreciated.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and insightful feedback. **Answer to Q1 on $\kappa$:** (i) We examined a continuous range of lower bounds relative to $\kappa$ with the purpose of investigating the *gradual shift* in optimal utility associated with privacy constraints. It was motivated by the connection between the trade-off function and the contraction coefficient $c_{f,\kappa}$, as well as the exponent of the minimax rate. While our findings do not yield an explicit result for $\kappa<1$, we note that there has not been any research on private nonparametric density estimation in this particular $\kappa$ regime. Therefore, our work can provide indirect insights into private estimation with privacy constraints $\kappa<1$. (ii) It is true that $\kappa=1$ is optimal with respect to minimax rates for trade-off functions satisfying Lemma 1. Another way to appreciate this is the following: For a trade-off function $f$, there exists a trade-off function $f_{\epsilon}$ such that $f$-FDP is equivalent to $\epsilon$-DP for some $\epsilon>0$ except for the case of $f(x)=1-x$. This allows us to leverage optimal mechanisms designed for $\epsilon$-LDP, which achieve the minimax rates of $O\left(n^{-\frac{2k-2}{2k}}\right)$ for univariate mean estimation and $O\left(n^{-\frac{2\beta}{2\beta+2}}\right)$ for nonparametric density estimation. When $\kappa=1$, the corresponding lower bound retains the same rate, meaning that the theoretical lower bounds align with the optimal minimax rates. (iii) There are trade-off functions that contradict Lemma 1 and need $\kappa< 1$. For example, when $f(x)=1-x^{\frac{\kappa}{1+\kappa}}$ we have $\delta_f(y)=\sup_{x\in [0,1]}1-yx-f(x)=\sup_{x\in[0,1]}x^{\frac{\kappa}{1+\kappa}}-yx=\frac{\kappa^{\kappa}}{(1+\kappa)^{1+\kappa}}y^{-\kappa}$ for large $y>y_0$ for some $y_0>0$. Also, $1\geq \delta_f(y)\geq 1-y\cdot 0-f(0)\geq 0$. Thus, $\int t^{\kappa-1} \delta_f(t)dt$ diverges.
Therefore, for every $\kappa_0\in (0,1)$, there exists a trade-off function requiring $\kappa$ to be less than $\kappa_0$. **Answer to Q2 on technical innovation:** Both Le Cam's and Assouad's inequalities establish minimax lower bounds by considering distributions that are similar but differ in target parameters. When these similar distributions possess substantially distinct parameters, the estimation task becomes challenging and the minimax risk tends to increase. Introducing a privacy constraint to the estimation problem means that the distributions of observables—outputs of the privacy mechanism—become less distinguishable from one another compared to the original distributions, while the true value of the target parameter stays the same. To extend the Le Cam and Assouad methods from non-private settings to private estimation, one needs to establish a connection between the difference in distributions of mechanism outputs, $M(P_1)$ and $M(P_2)$, and the divergence between the original input distributions $P_1$ and $P_2$. Prior approaches achieved this via a uniform bound on the output distribution difference (expressed as the likelihood ratio) over the sample space, under the $\epsilon$-LDP assumption. In our work, we found that most local DP mechanisms tend to exhibit properties akin to $\epsilon$-LDP for certain values of $\epsilon$, even if they do not satisfy strict $\epsilon$-LDP. In order to extend the $\epsilon$-LDP versions of Le Cam's and Assouad's inequalities to the local version of $f$-DP, we determined bounds for the probability that an $f$-DP mechanism behaves like $\epsilon$-DP. In other words, we derived an inequality that governs the likelihood ratio of the outputs of mechanisms, $\mathbb{P}\left(\frac{f_{M(P_1)}(Z)}{f_{M(P_2)}(Z)}>e^\epsilon|Z\sim M(P_2)\right)$, for distributions $P_1,P_2\in\mathcal{P}(\mathcal{X})$. 
Bounding this probability was critical for treating local FDP mechanisms as if they were $\epsilon$-LDP mechanisms, and it can be considered a key technical innovation of our paper. **Answer to Q3 on local FDP:** We think it may be possible to achieve a better minimax rate through local FDP compared to $\epsilon$-LDP. Specifically, in our response to the first question from Reviewer 6GNC, we theoretically demonstrate that our FLDP algorithm has an improved constant compared to the LDP algorithm of Duchi et al. (2018) in nonparametric density estimation (even though the orders of the minimax rates are the same), which implies a practical advantage of our method. You can find more details regarding this discussion in that response. In addition, implementing privacy mechanisms under $\epsilon$-LDP presents a challenge in managing the privacy budget, particularly when composing multiple mechanisms. Effective composition rules are essential to strike the right balance between privacy protection and estimation accuracy. Inefficient composition rules tend to overestimate privacy leakage, leading to excessive perturbation of estimates. Even when minimax rates remain identical across different privacy schemes, an inefficient composition rule can practically undermine overall performance. The shortcomings of the composition rule for $(\epsilon,\delta)$-DP are widely acknowledged, and other relaxations of DP have also faced criticism for their inefficient composition rules. In that regard, we believe that FDP possesses an effective composition rule. Lastly, we note that Awan et al. (2023) have introduced an additive mechanism that achieves $f$-FDP across a wide array of trade-off functions $f$, and Awan et al. (2022) have developed multivariate $f$-FDP mechanisms as well as log-concave $f$-FDP mechanisms tailored to specific $f$ functions. 
While we cannot definitively ascertain their suitability as effective privatization methods without further investigation, these approaches offer potential avenues for applying non-Gaussian trade-off functions to achieve privacy objectives. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers. They have certainly improved my understanding of the technical results. I will maintain my score. After reading the reviews and rebuttals, I believe that a more thorough comparison with local $\varepsilon$-DP could help improve both the clarity and the impact of the paper.
Summary: This paper investigates the minimax risk achieved under functional local differential privacy (FLDP) constraints, and particularly under Gaussian local differential privacy (GLDP). The authors first introduce lower bounds for univariate mean estimation under FLDP using Le Cam’s method. Under certain assumptions on the trade-off function (satisfied by GLDP), this rate matches the one with $\epsilon$-LDP introduced by Duchi et al. (2018). They then use this result to derive upper and lower bounds for GLDP and a mechanism achieving this rate. Similarly, they use Assouad’s method to derive lower and upper bounds for non-parametric density estimation, and the corresponding mechanism achieving optimality. The authors empirically show their private mean estimation algorithm outperforms the one introduced in Duchi et al. (2018). Strengths: - The paper is very well written and clear. It introduces simple mechanisms to achieve optimal minimax rates for two important problems, namely (1) univariate mean estimation and (2) non-parametric density estimation. - Further, their theoretical results provide an understanding of the tradeoff between utility and privacy for mean estimation. By analyzing the minimax risk through the lens of FLDP, they introduce a continuous measure between privacy and utility, parametrized by a constant $\kappa$, such that $\kappa=0$ corresponds to non-privacy and $\kappa=1$ corresponds to pure local DP. Weaknesses: 1. Constants matter in differential privacy. In the current state of the paper, it is clear that asymptotically the rates are the same; however, it is hard to get an intuition for the constants. The plots, however, provide evidence that the suggested approach does provide better results. 2. After Duchi et al. (2018)’s paper there have been other papers introducing local DP algorithms; however, the paper only compares theoretically and empirically to Duchi et al. (2018). Due to constants, these other algorithms could perform better under certain regimes. 
- https://ieeexplore.ieee.org/document/8006630 - https://proceedings.mlr.press/v162/asi22b/asi22b.pdf 3. Algorithm complexity is only qualified as “less complex” or “more straightforward”. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Could the authors discuss the three main points above? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: - The authors clearly describe some of the limitations of their work in the conclusion. First, the optimality is only achieved for $\kappa=1$, which corresponds to pure local DP. Second, results only hold for a certain class of trade-off functions. And finally, mean estimation is only analyzed in one dimension. The high-dimensional case remains an open problem under FLDP. - Besides these limitations, the condition on trade-off functions does not seem easy to verify. Further, it remains unclear if in practice the mechanisms provided will actually have better performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and insightful feedback. **Q1:** Constants matter in differential privacy. In the current state of the paper it is clear that asymptotically the rates are the same, however it is hard to get an intuition of the constants. The plots however provide evidence that the suggested approach does provide better results. **A1:** We agree that constants can have substantial meaning in differential privacy, especially in practice. We have attempted to derive the optimal constants for the results in Duchi et al. (2018) to compare with ours. Their result implies that \begin{equation*} \mathcal{R}\leq(\beta+1)\left(\frac{\beta}{\sqrt{2\pi e}}n\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^{-2}\right)^{-\frac{2\beta}{2\beta+2}}r^{\frac{2}{\beta+1}}, \end{equation*} while our risk bound from a similar calculation becomes: \begin{equation*} \mathcal{R}\leq(\beta+1)\left(0.5\beta n\mu^2\right)^{-\frac{2\beta}{2\beta+2}}r^{\frac{2}{\beta+1}}+O\left(n^{-\frac{2\beta+1}{2\beta+2}}\right). \end{equation*} Comparing the coefficients of $n^{-\frac{2\beta}{2\beta+2}}$, we have \begin{equation*} \frac{c_{ours}}{c_{Duchi}}=\left(\frac{0.5\mu^2}{\frac{1}{\sqrt{2\pi e}}\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^{-2}}\right)^{-\frac{2\beta}{2\beta+2}}=\left(\sqrt{\frac{\pi e}{2}}\mu^2\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^2\right)^{-\frac{2\beta}{2\beta+2}}. \end{equation*} If $\mu=1$, as in the experiment, $c_{ours}<c_{Duchi}$ holds for every $\epsilon>0$, suggesting that our algorithm can potentially achieve a smaller risk, which is also empirically demonstrated in our paper. However, we note that these constants could only reflect the upper bounds of potential risks, rather than the risks themselves. **Q2:** After Duchi et al. (2018)’s paper, there have been other papers introducing local DP algorithms, however, the paper only compares theoretically and empirically to Duchi et al. (2018). 
Due to constants, these other algorithms could perform better under certain regimes. Ye and Barg (2017) (https://ieeexplore.ieee.org/document/8006630) Asi et al. (2022) (https://proceedings.mlr.press/v162/asi22b/asi22b.pdf) **A2:** We agree that other algorithms could perform better under some regimes. However, we want to point out that our settings for the considered estimation problems are different from those works. First, the distribution family considered for nonparametric density estimation in both our work and Duchi et al. (2018) is assumed to have a density with smoothness parameter $\beta>1/2$. However, Ye and Barg (2017) and most other related works deal with either discrete distributions or a discretized version of a continuous density. Second, for univariate mean estimation, we assume that the random variable is unbounded but has bounded moments. The majority of related studies, including Asi et al. (2022), focus on high-dimensional, bounded variables. This disparity makes a direct comparison uninformative. Specifically, when privately estimating the mean of an unbounded random variable, a cut-off procedure is crucial to ensure privacy, and determining the cut-off point plays a significant role in achieving the optimal risk. In contrast, private mechanisms for bounded high-dimensional mean estimation do not need a cut-off, as their data is inherently bounded. The assumed boundedness yields minimax rates that are comparable (with respect to $n$) to non-private estimation, which is not possible in our setting. **Q3:** Algorithm complexity is only qualified as “less complex” or “more straightforward”. **A3:** The computational complexity of both Duchi et al. (2018)'s algorithm and ours is $O(n^{(\beta+2)/(\beta+1)})$. However, our algorithm is considered canonical and offers a more straightforward implementation. --- Rebuttal Comment 1.1: Title: Thanks! 
Comment: Thanks for all the clarifications. I think adding a clarification on (1) could reinforce the merits of this paper. I maintain my score and look forward to discussing it with the other reviewers.
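To make the constant comparison in **A1** above concrete, the ratio $c_{ours}/c_{Duchi}$ can be evaluated numerically (a hypothetical sketch; the function name and the $\beta$, $\epsilon$ values are illustrative, not from the paper):

```python
import math

def constant_ratio(beta, eps, mu=1.0):
    # c_ours / c_Duchi = (sqrt(pi*e/2) * mu^2 * ((e^eps+1)/(e^eps-1))^2)^(-2*beta/(2*beta+2))
    inner = (math.sqrt(math.pi * math.e / 2.0) * mu**2
             * ((math.exp(eps) + 1.0) / (math.exp(eps) - 1.0)) ** 2)
    return inner ** (-2.0 * beta / (2.0 * beta + 2.0))

for eps in (0.5, 1.0, 2.0, 5.0):
    r = constant_ratio(beta=1.0, eps=eps)
    print(f"eps={eps}: c_ours/c_Duchi = {r:.3f}")  # < 1 for mu = 1
```

For $\mu=1$ the base of the exponent is at least $\sqrt{\pi e/2}\approx 2.07 > 1$ and the exponent is negative, so the ratio stays below 1 for every $\epsilon>0$, matching the claim in A1.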
NeurIPS_2023_submissions_huggingface
2023
Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos
Accept (poster)
Summary: The paper addresses challenges in using egocentric videos for robotics tasks, specifically the issues of occlusion and visual mismatch between the human hand and a robot end effector. To address these problems, this work proposes a factored representation of the scene that separates the agent (human hand) from the environment. This factorization is achieved via the proposed Video Inpainting via Diffusion Model (VIDM). Experiments demonstrate the effectiveness of VIDM in improving inpainting quality and highlight the power of the factored representation for various downstream robotics tasks. Strengths: 1) The paper is well-written and well-structured; the descriptions of the experimental protocols for each of the applications were very helpful. 2) Experiments are very comprehensive; I was happy to see evaluation of both the inpainting quality achieved by the proposed model and the utility of its associated representations for various downstream robotics tasks. Weaknesses: 1) The main weakness that stands out to me is that the current set of downstream robotics applications is not as convincing as it needs to be. Specifically, in many of the tasks, it seems the inpainted environment $I_{t}^{\text{env}}$ is the key component, and not the factorized representation in its entirety. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) It seems mostly $I_{t}^{\text{env}}$ is being used; where does the agent representation come into play? In application 3, I assume you're using $I_{t}^{\text{env}}$ to predict the GT hand pixels, so $I_{t}^{\text{agent}}$ is unused? And in applications 4 and 5, the function $g$ which abstracts $I_{t}^{\text{agent}}$ simply returns a green dot representing the position of the end effector/hand. 
This is a reasonable exploration of the concept, but are there more concrete use cases where $I_{t}^{\text{agent}}$ obtained from segmentation models (as described in L101-103) actually interplays equally with the inpainted $I_{t}^{\text{env}}$? 2) It seems like in application 2, what the experiment actually proves is that ground-truth environment information without the agent improves 3D reconstruction (unsurprising), and not "the effectiveness of our proposal of using factorized agent-environment representations" (L228-229). In other words, there's no connection at all between this experiment and the factorized representation that you learn from VIDM. 3) In applications 4 and 5, neither of these tasks lends itself to needing this agent-environment factorized representation. In particular, I feel that in opening a drawer, cupboard, or fridge, the occlusion by the hand/end-effector does not significantly hamper accomplishing the task. Even in Figure 5, I see that the (salient) occlusion by the green dot in the agent-agnostic representation is almost more extreme than the occlusion by the robotic end-effector in the raw image. What's the intuition here, or could it simply be the case that the baselines were not tuned sufficiently? I ask because in Table 4, the "inpainted only" performance is much better than the "raw image" performance, but in Figure 5 for the real-world experiments, "inpainted only" actually fails entirely compared to "raw image." Can the authors provide some explanation for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are addressed well in the discussion section of the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and thoughtful feedback. Please see our response below and refer to the supporting figures in the rebuttal PDF. > It seems mostly Ienv is being used; where does the agent representation come into play? In application 3, I assume you're using Ienv to predict the GT hand pixels, so Iagent is unused? And in applications 4 and 5, the function g which abstracts Iagent simply returns a green dot representing the position of the end effector/hand. This is a reasonable exploration of the concept, but are there more concrete use cases where Iagent obtained from segmentation models (as described in L101-103) actually interplays equally with the inpainted env? * To clarify: when making predictions for application 3, $I_{agent}$ is not used as an input to the model, $f$. However, in the parlance of Figure 2, aspects of $I_{agent}$ (the pose of the hand) are used to provide the supervision to train $f$ to predict grasps afforded by objects (GAO task) in Table 3 and Section 5.4. More generally, many past works re-target human hand pose to robot pose (e.g., [DexMV], [DexVIP], and [Robotic Telekinesis], among others) for tele-operation and imitation learning. These applications involve using $I_{agent}$ to infer hand pose (and not just grasp type, as in our work). One could also imagine a content creation application where $I_{agent}$ is used to predict aspects of $I_{env}$ (what object would fit into an animated character’s hand, for instance). Thus, we believe there are many other applications of our proposed factorization (and specifically $I_{agent}$) than what we were able to experiment with in our paper. > It seems like in application 2, what the experiment is actually proving is that ground truth environment information without the agent improves 3D reconstruction (unsurprising), and not "the effectiveness of our proposal of using factorized agent-environment representations" (L228-229). 
In other words, there's no connection at all between this experiment and the factorized representation that you learn from VIDM. * Your understanding of application 2 is correct. However, the ground truth here is exactly the ground-truth version of $I_{env}$ in our proposed agent-environment factorization that our model tries to approximate. This experiment was designed to test the efficacy of AEF independently of VIDM; as such, we use ground-truth factorized images. > In applications 4 and 5, neither of these tasks lends itself to needing this agent-environment factorized representation. In particular, I feel that in opening a drawer, cupboard, or fridge, the occlusion by the hand/end-effector does not significantly hamper accomplishing the task. Even in Figure 5, I see that the (salient) occlusion by the green dot in the agent-agnostic representation is almost more extreme than the occlusion by the robotic end-effector in the raw image. What's the intuition here, or could it simply be the case that the baselines were not tuned sufficiently? I ask because in Table 4, the "inpainted only" performance is much better than the "raw image" performance, but in Figure 5 for the real-world experiments, "inpainted only" actually fails entirely compared to "raw image." Can the authors provide some explanation for this? * AEF offers a solution for both occlusion and the domain gap between human hands and the robot end-effector. In applications 4 and 5, the difficulty is not so much about occlusion, but rather about the domain gap between human hands and the robot end effector. This domain gap is mitigated by the use of the same green dot across embodiments. Excellent observation about Table 4 and Figure 5! All methods learn to correctly score frames after the objects have been manipulated. But they behave differently while the hand/robot end-effector is approaching the object. 
In Table 4, the camera motion in the egocentric claw videos provides a good signal for approaching the goal (thus the high Spearman’s rho), while the robot experiment in Figure 5 has a fixed camera. The fixed camera in the robot experiment means that the reward function in the “Inpainted only” baseline doesn’t provide any feedback on the task progress, whereas our method, with its green dot, does. Hopefully, this clarifies your concern. We believe we have adequately addressed the concerns raised in this review. We would love to hear what you think and will be happy to offer further clarifications or respond to any other concerns. We hope our response helps improve the impression of our work. **References:** [DexMV] DexMV: Imitation Learning for Dexterous Manipulation from Human Videos. ECCV 2022. [DexVIP] DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video. CoRL 2021. [Robotic Telekinesis] Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on YouTube. RSS 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response; my concerns have been sufficiently addressed, and I have raised my rating to reflect this.
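For reference, Spearman's rank correlation, the metric behind the Table 4 discussion above, can be computed with a small pure-Python sketch (a generic implementation for illustration, not the authors' evaluation code):

```python
def rankdata(values):
    # Average ranks (1-based), handling ties by assigning the mean rank of the tie run.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

print(spearman_rho([0.1, 0.5, 0.4, 0.9], [1, 3, 2, 4]))  # close to 1.0: rankings agree
```

Because it only compares rankings, any monotone transformation of either input leaves the score unchanged, which is why it suits reward functions that need only order frames by task progress.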
Summary: This paper proposes to use agent-environment factorization of egocentric videos to facilitate various downstream tasks (e.g., object detection, 3D reconstruction, affordance prediction, etc.). The authors leverage a pipeline to achieve agent-environment factorization. It consists of, first, a segmentation model that segments hands in egocentric videos, and second, a diffusion-based inpainting model for filling the hand area. The benefit of the proposed pipeline is supported by improvements over baselines on various downstream tasks. Strengths: 1. The idea of agent-environment factorization is interesting and, as shown by the experiments, it does facilitate downstream tasks in various scenarios. 2. I do appreciate the sufficient number of downstream tasks evaluated; it proves the soundness of the factorization. Weaknesses: 1. One major concern on this paper is its technical contribution. The proposed VIDM contains limited technical novelty, as it is a basic segment-then-inpaint pipeline and does not propose any new modules. 2. There is no video-inpainting method compared in comparative experiments (e.g., [1]). As diffusion-based models usually perform poorly in inference speed, the current comparison with image-based inpainting models does not show whether VIDM's inference speed is enough for videos. 3. In the reward learning task (Sec. 5.5), the experiments are only conducted on 3 tasks, and all of them are open-action tasks. Would the results and analysis still hold for more complex tasks? Or is it limited to the currently selected domain? 4. In the real-world policy learning task (Sec. 5.6), the action space of the robot is very limited (1D) and has been placed in a very task-specific position. It might be too simple for making a point. Additionally, these experiments were only conducted on one task (still open-action) under one scenario, which makes it difficult to assess the generalizability and effectiveness. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the Weakness section. The authors could focus on: 1. Identifying the uniqueness of the proposed VIDM and showing its superiority for the current task, rather than presenting a plain pipeline. 2. Showing that agent-environment factorization could be beneficial for interaction tasks. As this could be the most important factor for the proposed pipeline, the currently evaluated domain and task might be too limited. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and thoughtful feedback. Please see our response below and refer to the supporting figures in the rebuttal PDF. >One major concern on this paper is its technical contribution. The proposed VIDM contains limited technical novelty, as it is a basic segment-then-inpaint pipeline and does not propose any new modules. * VIDM does introduce new modules; it is both more than and better than a basic segment-then-inpaint pipeline. Furthermore, our paper is more than just VIDM. Specifically, VIDM uses cross-frame attention layers to transform a pre-trained image diffusion model into a video in-painter, as described in Section 4 and Figure 3. We evaluate the effectiveness of this proposed architectural contribution on the video-inpainting task and observe improvements over a basic segment-then-inpaint pipeline as well as the current state of the art for video inpainting. Furthermore, a major component of the paper is not just the performance of VIDM, but how we use VIDM. We propose a novel agent-environment factored representation for egocentric videos and show its effectiveness in extensive experimental evaluation across 5 benchmarks spanning 2D/3D perception to robot learning. >There is no video-inpainting method compared in comparative experiments (e.g., [1]). As diffusion-based models usually perform poorly in inference speed, the current comparison with image-based inpainting models does not show whether VIDM's inference speed is enough for videos. * Our paper already includes comparisons to DLFormer, the state of the art for video inpainting, in Table 1 and Section 5.1. We note a large improvement in metrics over this prior state of the art (PSNR of 26.98 for DLFormer vs. 32.26 for ours). Table 1 in the DLFormer paper already reports a comparison to the cut-and-paste in-painter used in [1] and reports very large improvements over cut-and-paste; thus we didn’t include a direct comparison to cut-and-paste. 
Furthermore, we also report the inference speed in Table 1. VIDM needs 13.6 s/image and is much faster than the 106.4 s/image of DLFormer. This is not real-time, but our applications don’t require real-time inference. > In the reward learning task (Sec. 5.5), the experiments are only conducted on 3 tasks, and all of them are open-action tasks. Would the results and analysis still hold for more complex tasks? Or is it limited to the currently selected domain? * To demonstrate that our method can work on tasks beyond opening, we followed the same protocol as in Table 4 for a fourth task of picking up a plate. In EPIC-KITCHENS, there are fewer than ⅓ as many sequences for this task as for opening drawers, and the quality is worse (clips having the plate out of frame, annotation timing being off, etc.). This lack of data and quality hurts generalization for all methods, but we still see a positive trend where using VIDM-inpainted images with factorization gives a Spearman’s correlation of 0.139, while raw images and non-factorized inpainting give 0.118 and 0.083, respectively. We note that many other cross-embodiment learning techniques may be used with our factored representation to explore more complex or multi-stage tasks (e.g., [1,7]), which we leave to future work. > In the real-world policy learning task (Sec. 5.6), the action space of the robot is very limited (1D) and has been placed in a very task-specific position. It might be too simple for making a point. Additionally, these experiments were only conducted on one task (still open-action) under one scenario, which makes it difficult to assess the generalizability and effectiveness. * While this experiment may be simple, it still makes a point. Past work in this setting [1] (represented by the orange line in Figure 5, right) doesn’t work because information about where the end-effector is relative to the object of interaction is lost. This slows down learning. 
AEF factorization retains the end-effector position while minimizing the domain gap and consequently learns faster. The lack of cheap dexterous manipulators limits the tasks we can tackle in the real world. In addition to improvements over baselines in 4 other applications (2D object detection, 3D shape prediction, affordance prediction, and offline reward learning), this real-world experiment demonstrates the real-world feasibility of AEF+VIDM. We believe we have adequately addressed the concerns raised in this review. We would love to hear what you think and will be happy to offer further clarifications or respond to any other concerns. We hope our response helps improve the impression of our work. --- Rebuttal Comment 1.1: Title: Post-rebuttal response Comment: I thank the authors for the clarification; the rebuttal has addressed most of my concerns. Therefore, I'm willing to increase my original rating to 5.
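As background for the cross-frame attention idea discussed in this rebuttal (queries from the target frame; keys and values drawn from all context frames), here is a minimal single-head NumPy sketch. The shapes, weights, and function name are made up for illustration and do not reflect the authors' actual architecture:

```python
import numpy as np

def cross_frame_attention(target, context, W_q, W_k, W_v):
    # target:  (N, d)    -- N spatial tokens of the frame being inpainted
    # context: (T, N, d) -- tokens from T context frames
    q = target @ W_q                              # queries from the target frame
    kv = context.reshape(-1, context.shape[-1])   # (T*N, d): flatten so attention spans all frames
    k, v = kv @ W_k, kv @ W_v
    logits = q @ k.T / np.sqrt(q.shape[-1])       # scaled dot-product scores
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over all context tokens
    return attn @ v                               # (N, d): context-informed target features

rng = np.random.default_rng(0)
d, N, T = 8, 16, 3
out = cross_frame_attention(rng.normal(size=(N, d)),
                            rng.normal(size=(T, N, d)),
                            *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)
```

The key point is that each target-frame token can pull information from every context-frame token, which is how an image in-painter gains access to pixels that are occluded in the current frame but visible in others.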
Summary: This work proposes the use of a factored agent and environment representation to handle two egocentric-video problems introduced by human hands: 1. They occlude objects of interaction and induce a domain gap between the data available for learning (egocentric videos) and the data seen by the robot at execution time; 2. Removing hands from the scene by masking or inpainting discards information about object affordances. The paper demonstrates the ability of the factored representation across tasks spanning 2D/3D visual perception to robot learning. They also show how selectively choosing and modifying aspects of the factored representation improves performance across all of these tasks compared to existing approaches. Strengths: 1. The paper is clearly written and easy to follow. The related work provides enough information for the reviewers to get familiar with the background of egocentric video and related tasks. 2. The proposed diffusion model, VIDM, is effective while efficient. The performance looks amazing compared to previous work. 3. The experiment part is sound and shows the effectiveness of the proposed VIDM across many benchmarks. 4. The motivation is intuitive. Combining hand pose with inpainting techniques can provide more information than before, and thus the improvement is plausible. Weaknesses: 1. In Figure 2, what is the meaning of the big f and g? Do they stand for different functions? If so, what is the purpose of drawing them in that way? The idea of this figure is not that clear. 2. Which part of VIDM contributes the most to the performance improvement across those benchmarks? There seems to be no related ablation study on this. 3. In Table 1, how is the stable diffusion (fine-tuned) baseline obtained? It would be great if the authors could provide more detail on this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comment in weakness. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see my comment in weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and thoughtful feedback. Please see our response below and refer to the supporting figures in the rebuttal PDF. > In Figure 2, what is the meaning of the big f and g? Do they stand for different functions? If so, what is the purpose of drawing them in that way? The idea of this figure is not that clear. * Figure 2 is meant to illustrate a few of the ways one could use an agent-environment factorization. For example, in section a (left) we show $I_{env}$ and $I_{agent}$ being passed into a single function that optimizes some perception task. This mirrors application 1, which uses both elements of the factorization to improve object detection. Similarly, in section c (right) we show that one can independently transform the agent representation (with some function $g$) before passing both elements into another function ($f$). This is particularly useful when working with multiple embodiments. We demonstrate this in applications 4 and 5, where we process $I_{agent}$ with a function ($g$ in the language of Figure 2) that maps both robot end-effectors and human hands to the same visual representation (a green dot). This processing removes the visual domain gap, improving generalization across embodiments. We will update the caption of this figure to be clearer. > Which part of VIDM contributes the most to the performance improvement across those benchmarks? There seems to be no related ablation study on this. * In the paper we have already included an ablation, and we have added another for the rebuttal, along with a diagnostic visualization, as described below. * In the paper we compare VIDM vs. Latent Diffusion finetuned on our data. This shows that there is a clear benefit to the multi-frame nature of our model, and that gains aren’t just due to training on in-domain data. 
* As for ablations on the nature of our training: we are currently running an ablation on the hand-exclusion aspect of our training protocol (filtering out Ego4D frames with hands and not propagating loss on pixels with hands). Unfortunately, it won't finish within the rebuttal period (the model takes about 12 days to train). Preliminary quantitative results at 3 days of training indicate that this choice is indeed effective across all metrics (PSNR of 31.14 vs 32.17 for our original model at 3 days of training; SSIM: 0.950 vs 0.955; FID: 12.10 vs 10.57). However, qualitative visualizations exhibit the error mode that we saw during our development. Because this ablated model has to output hands some of the time, it sometimes paints hand-like pixels back into the image. See examples in the PDF (Figure B2) attached with the main response. * Furthermore, in order to give some insight into how our model uses information from prior frames, we visualized how our method responds to corruptions in context frames at test time. See examples in the PDF (Figure B1) attached with the main response. > In table 1, how is the stable diffusion (fine-tuned) baseline done? It would be great if the authors could provide more detail on this. * We took the single-frame model that we extended to multiple frames (Latent Diffusion inpainting pre-trained on Places) and simply finetuned it on the exact same data that we used for finetuning VIDM. Since the single-frame model takes in no context frames, it ignored the extra frames and was finetuned to inpaint the masked region in the target frame. The same hand-exclusion techniques and other training choices were used for this experiment. We believe we have adequately addressed the concerns raised in this review. We would love to hear what you think and will be happy to offer further clarifications or respond to any other concerns. We hope our response helps improve the impression of our work.
--- Rebuttal Comment 1.1: Comment: The rebuttal resolves my concerns well, and I wish to keep my rating.
Summary: The following work presents a factorized approach for video-based egocentric tasks. Specifically, they propose to break down the video feed into separate environment-only and hands-only feeds. The intuition behind this formulation is that changes in the appearance of the hands may constitute a domain gap when the source of the video perspective changes (e.g., person to person, person to robot manipulator). Furthermore, explicit factorization provides the model with additional supervision on the breakdown between what is the environment and what is the manipulator. The hands are removed from the video feed using a video-inpainting model based on the latent diffusion architecture with attention-based extensions to attend to multiple past frames. Results demonstrate that their video-inpainting formulation outperforms DLFormer both in the video inpainting task (limited to their use case) and in improvements to downstream applications (object detection, affordance prediction, etc.). Strengths: - Simple idea with extensive demonstration of improvements in multiple downstream tasks. - Strong results for video inpainting in the egocentric setting with similarly simple but easy-to-justify design choices (optical-flow-like attention for past frames, exclusion of hands from training data) Weaknesses: - While the superiority of their video inpainting formulation is demonstrated only within the hand-removal task of egocentric videos, the language used to describe the method can often be misinterpreted as a broader claim for outperforming existing state-of-the-art models in a general sense. - The authors found it helpful to exclude hands as prediction targets during training. This seems like a significant design decision that should have a corresponding ablation study. - Additional training details for comparison against DLFormer are missing: - Was DLFormer also trained with the hand-exclusion technique?
- On what training data were the visual codebooks used by the LDM formulation and DLFormer derived from? - While I don't necessarily doubt the idea that the factorized formulation improves object detection, I do not see average recall as an appropriate replacement for average precision. I would much rather see average precision measured on a limited set of categories where all instances of the object category in question are annotated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Technical limitations discussed. I'm not sure it's sufficient to simply say that this work inherits uncertain societal implications from other generative modeling works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and thoughtful feedback. Please see our response below and refer to the supporting figures in the rebuttal PDF. > While the superiority of their video inpainting formulation is demonstrated only within the hand-removal task of ego-centric videos, the language used to describe the method can often be misinterpreted as a broader claim for outperforming existing state of the art models in a general sense. * Thanks for the feedback. We will take a pass and further qualify our claims to be more about the hand-inpainting task in egocentric videos. > The authors found it helpful to exclude hands as prediction targets during training. This seems like a significant design decision that should have a corresponding ablation study. * We are running this ablation but, unfortunately, it won't finish within the rebuttal period (the model takes about 12 days to train). Preliminary quantitative results at 3 days of training indicate that this choice is indeed effective across all metrics (PSNR of 31.14 vs 32.17 for our original model at 3 days of training; SSIM: 0.950 vs 0.955; FID: 12.10 vs 10.57). However, qualitative visualizations exhibit the error mode that we saw during our development. Because this ablated model has to output hands some of the time, it sometimes paints hand-like pixels back into the image. See examples in the PDF (Figure B2) attached with the main response. > Additional training details for comparison against DLFormer are missing: Was DLFormer also trained with the hand-exclusion technique? * Yes. DLFormer is a per-clip method. It fits a unique set of model weights for each clip at test time. Since all hands are masked out during test-time inpainting, DLFormer never sees any hands during test-time finetuning. It uses no additional pre-training step beyond having a pre-trained visual codebook. This codebook can easily reconstruct images from EPIC without hands.
> On what training data were the visual codebooks used by the LDM formulation and DLFormer derived from? * Because we are finetuning pre-trained models, we had to use the same codebooks that were used for the released LDM and DLFormer models: Places for LDM and COCO for DLFormer. We verified that both codebooks did a comparable job at reconstructing frames from the EPIC dataset. > While I don't necessarily doubt the idea that the factorized formulation improves object detection, I do not see average recall as an appropriate replacement for average precision. I would much rather see average precision measured on a limited set of categories where all instances of the object category in question are annotated. * This is a good experiment to run. To this end, we took the class with the fewest false positives (which happened to be 'scissors') when using raw images, and manually labeled all instances which were indeed true positives (adding missing detections to the ground truth labels). For this class with labels updated, using raw images only achieves an AP of 0.559, while using images inpainted with VIDM achieves an AP of 0.584. We believe we have adequately addressed the concerns raised in this review. We would love to hear what you think and will be happy to offer further clarifications or respond to any other concerns. We hope our response helps improve the impression of our work. --- Rebuttal Comment 1.1: Title: Concerns addressed Comment: The authors did a great job of addressing all my concerns, as well as the concerns of many other reviewers. I am increasing my rating to accept with the expectation that all changes are appropriately incorporated into the final draft.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their valuable and insightful comments. Attached to this post is a single-page PDF containing 3 figures: B1, B2, and B3. B1 is a diagnostic visualization showcasing VIDM's ability to intelligently copy pixels from previous frames. B2 visualizes failure modes of an ablation of our method that allows loss to propagate to pixels containing hands. B3 compares reconstructions from VIDM against those from NeuralDiff, demonstrating VIDM's superior ability to recover occluded pixels. We have posted replies to each reviewer's individual comments to address their specific concerns. Pdf: /pdf/a54e373a284447624f02fc1eee743f98dd041121.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes agent-environment factorization (AEF) as a representation for egocentric videos. AEF consists of 2 parts: the hand segmentation as the agent part, and the video-inpainted environment part. The former uses an off-the-shelf hand segmentation model, while the latter comes from finetuning an inpainting diffusion model. The authors show several downstream applications to demonstrate that AEF can improve recognition, reconstruction, and robotic tasks. Strengths: + The key idea is agent-environment factorization, which is shown to be beneficial to multiple applications: 2D recognition, 3D reconstruction, and robotic tasks. The baselines are carefully designed such that they are directly comparable and showcase where and why AEF helps. + In the design of the video inpainting model, they use the pretrained image inpainting model to get the spatial prior while using nearby frames to aggregate more context. Weaknesses: 1. While the factorization is shown to be effective, the improvement over the video inpainting model itself appears a bit incremental. This is based on 1) qualitative results in Fig S3 and 2) some domain-specific knowledge being used (e.g., no loss on the hand pixels, copy-paste hand-shaped masks, etc.). It is not clear how well the proposed architecture can generalize to other well-adopted video inpainting benchmarks like DAVIS or YouTube-VOS. 1.1 The proposed architecture may not be as critical as the paper claims (like in Table 1, see 1.2). It seems finetuning a generative video model, e.g., MAGVit or a video diffusion model, may lead to results as good as the current method shows. I wonder if the authors agree with my conjecture. 1.2 The numbers in Table 1 indicate a significant gap over current SoTA video inpainting methods. Improvement over Latent Diffusion is sensible since the model sees more context. But DLFormer is a completely different method – a per-clip model that operates in pixel space.
All of the metrics would favor sharp high-frequency signals, which the latent space of latent diffusion models is good at producing but which DLFormer lacks. 1.3 Visualizing attention may be helpful to understand how context in nearby frames helps inpainting. 2. The paper is very related to “Neural Feature Fusion Fields”, which factorizes videos into background, agent, and, in addition, moving objects. Although they optimize a per-clip representation, the differences with this line of work should be discussed. 3. There are some improvements across multiple downstream applications but the improvements are not surprising. For example, seeing the unoccluded objects / environment improves 3D reconstruction; merging predictions from both unoccluded and original images boosts object detection; seeing the hand location indicates frame order better. 3.1 It is a minor point, but in Table 2 it would be fairer to compare if the proposed region is also doubled for the baselines, since the proposed method is evaluated with twice the predictions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall I think it is a sound paper. The novelty is fair and mainly empirical: showing that this factorization can help several downstream tasks. See weaknesses. My main concerns are 1.1 and 2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The author discussed the limitation explicitly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and thoughtful feedback. Please see our response below and refer to the supporting figures in the rebuttal PDF. **Clarifications about improvement in the video inpainting model** First, to clarify, VIDM includes a) an architectural modification on top of an image-based diffusion model to use information from past frames via cross-frame attention, and b) domain-specific insights to train VIDM for the application we are interested in (agent-environment factorization). In our view, improvements over the baselines are quite salient, both qualitatively and quantitatively. Our output in Figure S3 suffers from fewer artifacts (rows 1, 2, 5, and 6) and better completes the objects (rows 3, 4, and 7). Quantitative improvements in Table 1 are also quite solid: PSNR improves from 28.27 to 32.26, and FID decreases from 27.50 to 10.37. Note that the baselines were retrained on the same dataset that we use, so this represents a solid improvement. In the setting that we care about (egocentric videos), the major challenges are large camera motion and dynamic occlusion (hands holding objects and moving through the scene). In this setting we are able to outperform the current state of the art (DLFormer). Since our downstream applications involve hand occlusion, we focus on the task of inpainting hands in egocentric video. For this use case, our model outperforms existing models and lets us build effective agent-environment factorizations. **Discussion about finetuning a video generative model** The reviewer's suggested approach is feasible, but it is not trivial to apply in our setting. For starters, published video diffusion models at the time of submission [19, 26, 75] could not be directly applied to our video inpainting task (they either operate at lower resolution or do not incorporate masks for inpainting).
The reviewer's suggested approach (taking a video prediction model and adapting it to do inpainting) is similar to the approach we took (taking an inpainting model and adapting it to video). We started from an image-based generative model and made the necessary modifications to get it to work for video inpainting. We had our reasons: LDM released code, code for MAGVit wasn't available, and other video generative models weren't applicable for the reasons above. So, yes, we agree with your conjecture, and such alternatives are interesting avenues for future work. But that doesn't nullify our contributions in this paper, namely AEF combined with VIDM. **Comparison with SoTA video inpainting methods.** Independent of how DLFormer works, we compared against DLFormer as it is, to the best of our knowledge, the state of the art at the video inpainting task. PSNR and SSIM are standard metrics; DLFormer itself uses these metrics, for which it reports state-of-the-art results. We also report FID, which does not directly compare any two images, but rather compares image statistics computed across all inpainted test clips versus ground truth. We outperform the current state of the art for video inpainting in our setting (hand removal in egocentric videos), and our contribution (a video inpainting extension of latent diffusion models) leads to improvements over just image inpainting with latent diffusion. **Visualization to understand how context in nearby frames helps inpainting.** Excellent suggestion! Figure B1 in the main response PDF visualizes how our method responds to corruptions in context frames at test time. This suggests VIDM does use information from context frames when necessary. **Discussion about Neural Feature Fusion Fields and NeuralDiff.** Thanks for this pointer. This is a relevant reference that we will cite and discuss.
Neural Feature Fusion Fields (like NeuralDiff) embeds inductive bias into NeRF to obtain a decomposition into a static background, transient foreground objects, and the agent. Using NeRF lets them infer a 3D factorization, but it comes with its own limitations: a) it requires many (100s of) viewpoints to work, and b) there are no priors that can be used to complete objects that are never observed. In contrast, our method pursues a factorization in 2D, can inpaint reasonably with just 4 frames of context, and can also use priors on the appearance of objects from large-scale pre-trained diffusion models. We compare to NeuralDiff (NFFF didn't release models for EPIC videos) on the P05_01 sequence since it is the only one that is common with our test set. We focus on frames that include a hand, and use their static and transient reconstruction as the prediction for $I_{env}$. We contrast it with the prediction for $I_{env}$ from our model. Figure B3 shows qualitative comparisons. On these images, our model achieves superior FID scores - 186.79 for VIDM vs 215.90 for NeuralDiff. Note that FIDs are overall higher than usual, but for good reason. There is no hand-removed image set (i.e., objects floating in air) to use as reference to compute FID. As a proxy reference set, we use images that don't contain hands, and thus FID scores for both models are higher than usual. **Clarification about experiments in Table 2.** First, to clarify, all methods in Table 2 return the same number of proposals. The last row, which runs the detector twice, pools together the detections from the two runs but returns the same number of detections as the baselines. Thus, in our view, the comparison, as is, is fair. One could argue that we use 2x the compute time of the baselines. For this, the second-to-last row presents a direct comparison where we run the detector just once (but on $I_{env}$) and still see improvements over the raw image and other baselines across most metrics.
We believe we have adequately addressed the concerns raised in this review. We would love to hear what you think and will be happy to offer further clarifications or respond to any other concerns. We hope our response helps improve the impression of our work.
null
null
null
null
null
null
An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations
Accept (poster)
Summary: The paper presents an empirical study on the application of prompt-tuning for graph contrastive pre-training in recommendation systems. The authors propose a method that combines graph neural networks (GNNs) and contrastive learning to enhance the performance of recommendation models. The key idea is to leverage prompt engineering techniques, where carefully crafted prompts are used to guide the recommendation process. The authors conduct extensive experiments on several real-world datasets, comparing their proposed method with various baselines. They evaluate the performance in terms of recommendation accuracy, coverage, and diversity. The results demonstrate the effectiveness of the proposed approach, showing significant improvements over the baselines in terms of recommendation quality. The contributions of the paper include the introduction of a novel method that combines graph contrastive learning with prompt-tuning for recommendation systems. The authors provide insights into the design choices and hyper-parameter settings of the proposed method. They also conduct ablation studies to analyze the impact of different components and variations in the training process. Overall, the paper highlights the potential of prompt-tuning for graph contrastive pre-training in recommendation systems. The empirical results support the effectiveness of the proposed approach and provide valuable insights for researchers and practitioners in the field of recommendation systems. Strengths: 1. The paper explores the application of prompt-tuning techniques in the context of graph contrastive pre-training for recommendation systems. This novel research direction expands the understanding of prompt-based methods in the field of recommendation systems. The paper also offers insights into prompt design for graph contrastive pre-training in recommendation systems. 
The authors discuss the importance of considering domain knowledge and tailoring prompts to specific tasks, providing practical guidance for researchers and practitioners. 2. The authors conduct systematic evaluations on different prompt strategies, considering both template-based prompts and prompt engineering. The evaluation process is well-designed, ensuring a comprehensive analysis of the effectiveness of prompt-tuning methods. Also, the paper provides a detailed comparative analysis of different prompt-tuning methods, allowing readers to understand the relative performance of each technique. This analysis helps researchers and practitioners make informed decisions regarding the choice of prompt strategy for graph contrastive pre-training. 3. The experiments are conducted on various recommendation datasets, enhancing the generalizability of the findings. This inclusion of diverse datasets strengthens the validity of the conclusions drawn from the study. The authors also pay attention to hyperparameter tuning and conduct experiments to find optimal settings. This consideration enhances the reliability of the results and ensures that the performance improvements observed are not solely due to arbitrary hyperparameter choices. 4. The paper includes detailed information about the code implementation and data availability, facilitating reproducibility and promoting further research in the field. This transparency enhances the credibility of the study. 5. The findings of the paper have practical implications for the development of recommendation systems. The demonstrated performance improvements through prompt-tuning techniques can guide practitioners in enhancing the effectiveness of graph contrastive pre-training models for recommendation tasks. The paper's contributions have the potential to positively impact the field of recommendation systems. 
By introducing and validating the effectiveness of prompt-tuning for graph contrastive pre-training, the authors provide valuable insights that can guide future research and the development of improved recommendation algorithms. Weaknesses: 1. The paper assumes readers have a strong understanding of graph contrastive learning and prompt-tuning, potentially alienating some readers unfamiliar with these topics. 2. While the authors show the effectiveness of prompt-tuning, it would be interesting to compare their approach with prompt-less models to understand the benefits fully. 3. The paper could discuss the generalizability of the prompt engineering techniques used in the study to other recommendation tasks or different prompt-based methods. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How sensitive is the proposed method to the choice of hyper-parameters, such as the prompt size or learning rates? Have you conducted a sensitivity analysis to understand the impact of these hyper-parameters on the performance? 2. How generalizable are the prompt engineering techniques used in this study? Can they be applied to other recommendation tasks or different prompt-based methods beyond the proposed approach? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Could you discuss potential challenges and limitations of prompt-tuning in recommendation systems? Are there any scenarios where prompt-tuning might not be suitable or effective?
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comprehensive review and for recognising our contributions. We sincerely appreciate your valuable comments and suggestions. We respond to your comments and concerns below. W1. The paper assumes readers have a strong understanding of graph contrastive learning and prompt-tuning, potentially alienating some readers unfamiliar with these topics. Thanks for your reminder. We agree that our writing neglects readers unfamiliar with the related topics and has caused misunderstandings in other reviews. We will add some content as preliminaries to briefly introduce the background knowledge of GCL and prompt learning. W2. While the authors show the effectiveness of prompt-tuning, it would be interesting to compare their approach with prompt-less models to understand the benefits fully. Yes, we agree with that. Reviewer PJh3 also has the same concern. It is important to show the improvement brought by personalised prompts explicitly. Considering that the base model we adopt is SGL, we will conduct experiments on SGL to compare it with our proposed method to verify the advantages of personalised prompt generation. W3. The paper could discuss the generalizability of the prompt engineering techniques used in the study to other recommendation tasks or different prompt-based methods. Thanks for your suggestions. We will collect more literature to discuss the application of prompt learning in other recommendation tasks. As our research scope focuses on graph learning, we apologise for limiting the discussion of it. Q1. How sensitive is the proposed method to the choice of hyper-parameters, such as the prompt size or learning rates? Have you conducted a sensitivity analysis to understand the impact of these hyper-parameters on the performance? To better understand the properties of our proposed CPTPP, we conduct hyper-parameter studies on an important hyper-parameter, the dimension size of the personalised prompt.
By fixing all the other hyper-parameters, we comprehensively examine the performance of three versions of the proposed CPTPP on all the datasets with different prompts. Specifically, the size of the personalised prompt is selected from $\{8, 16, 32, 64, 128, 256\}$. We choose two metrics, Precision@5 and NDCG@5, to demonstrate CPTPP's performance variations with different prompt sizes. All the experiment results are shown in Figure 3 and Figure 5 in the Appendix. (i) The first thing we can observe is that, in most cases, CPTPP has the best performance when the prompt size is not larger than the dimensionality of the user embeddings. A potential reason is that sizeable prompt dimensions would introduce more noise into the pre-trained user embeddings, disturbing the structural semantics extracted from the user-item interaction graph by graph contrastive learning. (ii) We also notice a significant performance improvement when the prompt size is 256 in several cases, such as CPTPP-M on dataset ML-1M and CPTPP-R on dataset Gowalla. However, they still fail to significantly outperform the CPTPP model with a much smaller prompt size. Therefore, a small prompt size is the better option for prompt-tuning in practice, as it achieves relatively good recommendation quality with higher efficiency. Q2. How generalizable are the prompt engineering techniques used in this study? Can they be applied to other recommendation tasks or different prompt-based methods beyond the proposed approach? The proposed method can be applied to various graph-based recommendation tasks and can enhance recommendation performance when no side information is available. Please note that the final outputs of our proposed method are the user embeddings enriched by the personalised prompts and the item embeddings. The acquired user and item representations can be fed into any downstream task that accepts such inputs. Therefore, the generalizability of our proposed method is promising. L1.
Could you discuss potential challenges and limitations of prompt-tuning in recommendation systems? Are there any scenarios where prompt-tuning might not be suitable or effective? The challenges of applying prompt learning are that designing prompts is time-consuming and requires expert knowledge. Though a novel paradigm, the soft prompt, has been proposed to address this limitation, it requires auxiliary information like user profiles and extra computation resources to generate prompts according to that auxiliary information. The prompt-tuning paradigm may not work when computation resources are limited (e.g., on edge devices). Moreover, prompt-tuning is tailored for pre-trained models. If no pre-trained model is involved, prompt-tuning cannot be applied. --- Rebuttal Comment 1.1: Comment: I appreciate the response from the authors, which has addressed most of my previous concerns. Generally, I think the topic studied in this paper is timely and interesting, while some details can be improved. I would like to update my scores accordingly.
Summary: This paper proposes a prompt-enhanced framework for GCL-based recommenders named CPTPP. At the core of CPTPP is a personalized user prompt generation framework that summarizes user profiles in graph recommender systems. The generated user prompts are then integrated with pre-trained user embeddings when applied to downstream tasks. Empirical results on three benchmark datasets show that CPTPP is able to outperform state-of-the-art baseline alternatives such as SimGCL. Strengths: - CPTPP achieves strong empirical performance, outperforming strong baseline approaches such as SimGCL. - Graph contrastive pre-training is useful in the recommendation scenario with many potential downstream applications. - Code is available, which improves reproducibility. Weaknesses: - CPTPP relies on exploiting historical interaction records, the adjacency matrix, and high-order user relations for generating personalized user prompts. The personalized user prompts can thus be viewed as features engineered from user-item interactions. When incorporating feature engineering into the model, it is not surprising that there will be an improvement in terms of performance. - Performance is not much better even with the auxiliary information used. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Why does combining personalized user prompts with pre-trained user embeddings help narrow the distinct targets between pre-training and downstream tasks? The paper explains in detail how the personalized user prompts are created, but does not explain why this results in a narrowed distinction. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: CPTPP relies on mining historical interaction records, adjacency matrix, and high-order user relations for generating personalized user prompts, and therefore it is possible that the improved performance primarily comes from the extra features used. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and questions. We will revise our paper according to them. Now, we would like to respond to your suggestions and questions. W1. CPTPP relies on exploiting historical interaction records, adjacency matrix, and high-order user relations for generating personalized user prompts. The personalized user prompts can thus be viewed as features engineered from user-item interactions. When incorporating feature engineering into the model, it is not surprising that there will be an improvement in terms of performance. Our method is inspired by a novel paradigm, the soft prompt. The original goal of prompt learning is to elicit the pre-trained model. However, designing hard prompts is time-consuming and requires expert knowledge. The soft prompt paradigm addresses this limitation by utilising side information like user profiles to adaptively generate soft prompts. The generated prompts help the proposed method achieve better performance. Please note that the baselines also process the user-item interaction graphs but fail to outperform our method, which shows the advantage of how our method processes the interaction graph, namely soft prompting. W2. Performance is not much better even with the auxiliary information used. Please note that there is no auxiliary information available. The setting of our research is graph-based recommendation without side information. Only the user-item interaction graph is available for each method. We first propose acquiring various user profiles based on the user-item interaction graph and generating soft prompts based on the acquired user profiles. No side information is used in our method. Q1. Why does combining personalized user prompts with pre-trained user embedding help narrow the distinct targets between pre-training and downstream tasks? The paper explains in detail how the personalized user prompts are created, but does not explain why this results in narrowed distinction. 
One of the advantages of prompt learning is to narrow the distinct targets between pre-training and downstream tasks, which has been verified by many research works in the prompt learning community. If the reviewer wants to know more about the mechanism and advantages of prompt learning, please refer to the related work section, where we list some critical literature that forms the foundation of prompt learning. L1. CPTPP relies on mining historical interaction records, adjacency matrix, and high-order user relations for generating personalized user prompts, and therefore it is possible that the improved performance primarily comes from the extra features used. Thanks for acknowledging that our proposed personalised prompts improve the performance. Please note that no extra features are used in our method; the personalised prompts are generated solely based on the user-item interaction graph. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response to my questions and concerns. The authors' rebuttal has addressed most of my concerns, although I still have a concern about the limited performance improvement. I have therefore raised my score accordingly.
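To make the mechanism discussed in this thread concrete, here is a minimal numpy sketch of the pipeline the rebuttal describes: a user profile derived only from the user-item interaction graph, a soft-prompt generator over that profile, and fusion of the generated prompt with a pre-trained embedding for the downstream task. All shapes, the profile choice (mean of interacted item embeddings), and the linear generator/fusion operators are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 8, 12, 16

# User-item interaction matrix (the only input; no side information).
interactions = (rng.random((n_users, n_items)) < 0.3).astype(np.float32)

# Stand-ins for pre-trained embeddings (e.g., the output of a GCL encoder).
pretrained_user_emb = rng.normal(size=(n_users, d)).astype(np.float32)
item_emb = rng.normal(size=(n_items, d)).astype(np.float32)

# "Historical interaction" user profile: mean embedding of interacted items.
counts = interactions.sum(axis=1, keepdims=True).clip(min=1.0)
user_profile = (interactions @ item_emb) / counts          # (n_users, d)

# Soft-prompt generator: a single trainable linear map over the profile.
W_prompt = rng.normal(scale=0.1, size=(d, d)).astype(np.float32)
prompts = np.tanh(user_profile @ W_prompt)                 # (n_users, d)

# Fuse prompt with the pre-trained embedding for the downstream task.
W_fuse = rng.normal(scale=0.1, size=(2 * d, d)).astype(np.float32)
fused = np.concatenate([prompts, pretrained_user_emb], axis=1) @ W_fuse

scores = fused @ item_emb.T                                # recommendation scores
print(fused.shape, scores.shape)
```

In a real system `W_prompt` and `W_fuse` would be trained with the downstream recommendation loss while the pre-trained embeddings stay fixed (or are lightly tuned), which is the sense in which the prompt adapts the pre-trained knowledge to the downstream objective.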
Summary: This paper proposes a prompt-tuning approach for GCL-based recommender systems. A framework consisting of a GCL module, a prompt generation module and a recommendation module is developed. Both an ablation study and a hyper-parameter study are conducted. Strengths: - This paper studies an interesting research problem, prompt tuning for GCL-based recommendation, which avoids combining two quite different training targets. - The paper is easy to follow. - Both an ablation study and a hyper-parameter study are conducted. Weaknesses: - The improvements over existing baselines are not significant. For example, SimGCL outperforms the proposed CPTPP approach on the Gowalla dataset w.r.t. NDCG@20. - The visualization of Figure 2 is not convincing to me. - There is no doc or README in the released code. - The authors did not provide the reasons for selecting the adopted baselines. - What is the computational complexity of the proposed method compared with existing approaches? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. We would like to respond to your comments and questions in the following. W1. The improvements over existing baselines are not significant. For example, SimGCL outperforms the proposed CPTPP approach on Gowalla dataset w.r.t NDCG@20. Thanks for your conscientious and detailed review. We will keep tuning the proposed model. Currently, CPTPP outperforms the baselines in most cases and has a very close performance to SimGCL on the Gowalla dataset regarding the metric NDCG@20. W2. The visualization of Figure 2 is not convincing to me. Reviewer PJh3 has the same concern regarding the visualisation results. Please refer to Q3 in the response to reviewer PJh3. W3. There is no doc or README in the released codes. Thanks for your reminder. We will add the related documentation later. The documentation is also available in the SelfRec project on GitHub; as we stated, our implementation is based on the source code of SelfRec. W4. The authors did not provide the reasons for selecting the adopted baselines. There are three types of baselines we selected. BPR-MF is a conventional recommendation method. BUIR and SelfCF are both contrastive learning-based recommendation systems. NCL and SimGCL are two representative GCL-based recommendation systems. As for our proposed method, we take SGL, the predecessor of SimGCL, as the backbone. We will add the related information to the main content later. W5. What is the computational complexity of the proposed method compared with existing approaches? Our proposed method is a framework that addresses the limitations of current GCL-based recommendation methods. The complexity of the proposed method depends on the complexity of the backbone GCL method and the user profile generation method. Therefore, there is no fixed complexity for our proposed method.
Summary: This paper proposes a prompt-enhanced framework for GCL-based recommender systems, called CPTPP. CPTPP reforms the existing GCL-based recommendation methods with the prompt tuning mechanism to fully exploit the advantages of GCL in the pre-training phase instead of combining the contrastive loss with downstream objectives. The authors summarize three user profiles derived from the user-item interaction graph as the inputs for the prompt generator, without requiring extra side information. Extensive experiments on real-world datasets demonstrate its effectiveness. Strengths: 1. The proposed CPTPP reforms the existing GCL-based recommendation methods by separating the GCL pre-training and the downstream recommendation task using the prompt tuning mechanism. The idea is innovative for GCL-based recommendation. 2. Integrating prompts could better elicit the knowledge within the pre-trained user and item embeddings. The authors propose three different prompt generation methods, which can be applied to situations where users’ side information is not available. 3. The writing is good, well organized and easy to read. Most experimental details are provided. Weaknesses: 1. Generating prompts for users in recommender systems has been proposed in existing work. The difference mainly lies in that this paper addresses the GCL-based recommendation situation, where no side information of users is available. 2. As an empirical study, some important details are not fully explained. For example, in section 2.2, the authors stated that “we can adopt various GCL learning methods …, to obtain high-quality user and item embeddings.”. Which GCL method did the authors actually adopt in their experiments? This is not claimed in the paper. I have some other questions about the experiments. Please refer to the “Questions” section. 3. 
The Ablation Studies (section 3.2.3) actually provide horizontal comparisons among CPTPP-M, CPTPP-H, and CPTPP-R, instead of standard ablation studies. It is nice to provide some insights into the experiments, but I wonder to what extent the proposed methods improve the performance compared with models without the personalized prompts. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What are the backbone GCLs of these models in the experiments? Do the generation modules perform differently on different GCLs? 2. The original goal of prompt designing is to better elicit the knowledge contained in the pre-trained model for downstream applications. The second proposed method (CPTPP-M) for prompt generation actually obtains user and item embeddings using adjacency matrix factorization separately, which is independent of GCL. Moreover, CPTPP-M performs the best half of the time. Can the authors explain the reason? 3. Section 3.2.1, “As suggested in [15], the more uniform the embedding distribution is, the more capability to model the diverse preferences of users the method has”. Actually, this is a “speculation” in the paper [15] instead of a conclusion. Moreover, “uniform” distribution is quite ill-defined here. I find it hard to differentiate which one of the figures is more uniformly distributed. Are there any quantitative measurement methods? 4. It seems that the proposed methods are not limited to user embeddings. Can the personalized prompt generation methods be symmetrically applied to items? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for recognising the contribution and novelty of our research work. We appreciate your suggestions and questions, and we will revise the manuscript according to your comments. The following is our response to your comments. W1. Generating prompts for users in recommender systems has been proposed in existing work. The difference mainly lies in that this paper addresses the GCL-based recommendation situation, where no side information of users is available. We totally agree with you. Generating prompts in conventional recommender systems is a widely explored topic, focusing on utilising various side information of users and items. However, in our setting, we focus on the graph-based recommender system where only the user-item interaction graph is available and no side information is provided. The scope of our work is to adaptively construct prompts from the interaction graph to enhance the recommendation performance. W2. As an empirical study, some important details are not fully explained. For example, in section 2.2, the authors stated that “we can adopt various GCL learning methods …, to obtain high-quality user and item embeddings.”. Which GCL method did the authors actually adopt in their experiments? This is not claimed in the paper. We apologise for neglecting the details about the base GCL model we adopt. To ensure that the performance improvement is not caused by a sophisticated GCL model, we take SGL, a vanilla GCL model without delicate components, as the base model. SGL is the predecessor of the baseline SimGCL. We also strictly follow the evaluation protocols in GraphCL and InfoGraph to ensure a fair comparison. We will add the related descriptions to the manuscript later. Thanks for your reminder. W3. The Ablation Studies (section 3.2.3) actually provides horizontal comparisons among CPTPP-M, CPTPP-H, and CPTPP-R, instead of standard ablation studies. It is nice to provide some insights into the experiments. 
But I wonder to what extent the proposed methods improve the performance compared with models without the personalized prompts. Thanks for your suggestions. It is important to explicitly show the improvement brought by personalised prompts. Considering that the base model we adopt is SGL, we will conduct experiments on SGL to compare it with our proposed method to verify the advantages of personalised prompt generation. Q1. There are three types of baselines we selected. BPR-MF is a conventional recommendation method. BUIR and SelfCF are both contrastive learning-based recommendation systems. NCL and SimGCL are two representative GCL-based recommendation systems. As for our proposed method, we take SGL, the predecessor of SimGCL, as the backbone. The generation modules in our proposed method have the same operations as our framework and can take various GCL-based recommendation methods as the backbone. The generation modules are independent of the GCL module. We can combine the generated prompts with the outputs of the GCL module to help improve the recommendation performance. Q2. We agree that the original goal of prompt learning is to elicit the pre-trained model. However, manually designing prompts is time-consuming and requires expert knowledge. Several research works have proposed a soft prompt paradigm [1][2] to address such limitations. This novel paradigm inspires our work. The fundamental procedure of soft prompting is to adaptively generate prompts based on side information like user profiles. Three user profiles are proposed in our work for graph-based recommendation scenarios without side information. All of them, including adjacency matrix factorisation (MF), are highly related to recommendation tasks. MF is a conventional recommendation method. It can produce high-quality low-dimensional embeddings for users and items, reflecting the preferences of users and item features. 
So we can utilise the outputs of MF as the user profile for the downstream personalised prompt generation. Please note that user profile acquisition does not need to be closely related to GCL. Historical interaction records and high-order user relations are simple aggregation methods, combining user embeddings and items related to the target users. However, adjacency matrix factorisation is a machine learning method that can help embed initial user and item representations into a low-dimensional latent space. A trainable embedding method can produce better user profiles than the two simple aggregation methods. That is a potential reason why CPTPP-M has better performance. [1] Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. [2] Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xu Zhang, Leyu Lin, and Qing He. Personalized prompts for sequential recommendation. Q3. Thanks for your reminder. We acknowledge the limitations of our visualisations. Determining which distribution is more uniform from the visualisation results alone is not straightforward. There is a potential alternative measurement: we can first generate a 2-dimensional uniform distribution, then use metrics like the Kullback-Leibler divergence or the Wasserstein distance to measure the difference between the generated uniform distribution and the distribution of the obtained user embeddings. A small divergence or distance indicates a minor difference. In this way, we can offer a quantitative method to measure the "uniformity". We will add quantitative results together with the visualisations to demonstrate the quality of the obtained user embeddings. Q4. Yes, it can be symmetrically applied to items. If a scenario requires recommending users for a specific item, e.g., to construct a list of potential clients, the method can be applied symmetrically. 
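The quantitative measurement proposed in Q3 can be sketched as follows. Since an exact 2-D Wasserstein distance is not a single library call, this sketch uses the sliced approximation (averaging 1-D Wasserstein distances over random projection directions, via `scipy.stats.wasserstein_distance`); the reference distribution, sample sizes, and projection count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Approximate the 2-D Wasserstein distance between sample sets x and y
    by averaging 1-D distances over random projection directions."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_proj):
        v = rng.normal(size=x.shape[1])
        v /= np.linalg.norm(v)
        dists.append(wasserstein_distance(x @ v, y @ v))
    return float(np.mean(dists))

rng = np.random.default_rng(1)
reference = rng.uniform(-1, 1, size=(2000, 2))       # 2-D uniform reference
spread_emb = rng.uniform(-1, 1, size=(2000, 2))      # well-spread embeddings
clustered_emb = rng.normal(0, 0.05, size=(2000, 2))  # collapsed embeddings

d_spread = sliced_wasserstein(spread_emb, reference)
d_clustered = sliced_wasserstein(clustered_emb, reference)
print(d_spread, d_clustered)  # the more uniform embedding scores lower
```

In practice the "embeddings" would be the 2-D t-SNE/UMAP projections of the learned user embeddings, and the smaller distance to the uniform reference would indicate the more uniform distribution.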
--- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I appreciate the authors for their efforts in addressing my concerns. My questions have been addressed. Generally, this is a satisfactory paper, and the experiments could be further improved as we discussed. I am still positive about this paper.
NeurIPS_2023_submissions_huggingface
2023
CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models
Accept (poster)
Summary: This paper studies the unstructured pruning problem for vision transformer models. The Correlation Aware Pruner (CAP) is proposed. CAP takes into account weight correlations and achieves a new state-of-the-art result. Strengths: - The proposed method achieves strong experimental results. For the first time, ViT models can attain high sparsity levels (75-80%) without significant accuracy loss (<1%). Note that previous methods reached at most 50% sparsity. - CAP is reasonable and well-motivated. This paper points out an important problem: the removed weights may themselves be correlated. It also highlights the importance of the learning rate schedule. Overall, this paper provides lots of useful information for unstructured pruning of ViTs. - The whole paper is organized and written well. Experiments are sufficient and sound. Weaknesses: - The biggest problem is that unstructured pruning struggles on GPU devices. The authors test models’ latency on a sparsity-aware CPU inference engine, but GPUs are not considered. Overall, unstructured pruning performs badly on GPUs, which limits its practical applicability. - Some results of SViTE are missing in Tab. 2. It would be better to change the order to put CAP at the bottom of each section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback!

**1. Applicability to GPUs:** In Appendix H.3, we provided results for pruning with the hardware-supported 2:4 sparsity pattern [1]. This semi-structured sparsity pattern can lead to speedups of tensor operations on the Ampere and Hopper GPU architectures. One can see that our method allows us to compress the models without a significant drop in performance in this setup. For convenience, we present a sample of the results in **Table 1** below.

**Table 1. Semi-structured 2:4 pruning of ViT models followed by 10 epochs of finetuning.**

| Model | Method | Top1-Accuracy (%) |
|:-----------:|:------:|:------------------:|
| DeiT-Tiny | - | 72.2 |
| | GM | 68.8 |
| | CAP | 71.5 |
| DeiT-Small | - | 79.8 |
| | GM | 77.9 |
| | CAP | 79.0 |
| DeiT-Base | - | 81.8 |
| | GM | 81.2 |
| | CAP | 81.3 |

**2. Comparison with SViTE:** We included all the results from the original paper (where there was a result for a specific sparsity level) as well as additional experiments we performed ourselves (in case there was no experiment with the given sparsity in the paper). The training procedure in SViTE requires 2x more epochs than the full CAP pipeline, and has to be rerun for every sparsity level. (Recall that CAP produces all sparse models in a single run.) Due to the high computational cost of retraining ViT models, we were unable to perform experiments for some sparsity levels. We would have liked to run these additional experiments for SViTE for the rebuttal phase, but unfortunately the experiments wouldn't finish by the end of the rebuttal period, given the large cost of running 600 epochs of training for large models on our computational resources. We plan to add a full comparison, across all sparsity levels, in the next revision. At the same time, we note that currently CAP appears to outperform SViTE across all sparsity levels. (Please see Figure 4 for an illustration across the entire DeiT family of models.) 
[1] https://developer.nvidia.com/blog/exploiting-ampere-structured-sparsity-with-cusparselt/

--- Rebuttal Comment 1.1: Comment: We wish to make an addition to our earlier response: As requested, we ran S-ViTE with 80% sparsity for DeiT-Small. Consistent with the previous results, the proposed CAP method significantly outperforms S-ViTE.

| Sparsity | Method | Top-1 Accuracy (%) |
|:--------:|:------:|:------------------:|
| 80 | S-ViTE | 75.8 |
| | CAP | 78.0 |

In addition, we note that S-ViTE is quite computationally expensive. Running the algorithm for 600 epochs with the same hyperparameters as in the original paper requires 5 days on 4 RTX 3090 GPUs, whereas our approach produces a more accurate model using only half of S-ViTE's computational cost.
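For context on the 2:4 pattern referenced in this thread: the hardware constraint is that every contiguous group of four weights contains at most two nonzeros. A minimal numpy sketch is below; note it uses plain magnitude selection for illustration, whereas CAP would choose the pattern using its correlation-aware saliency.

```python
import numpy as np

def prune_2_4(weights):
    """Zero out the 2 smallest-magnitude entries in every contiguous group
    of 4 weights (the Ampere 2:4 semi-structured sparsity pattern).
    Magnitude selection only -- a stand-in for a real saliency criterion."""
    w = np.asarray(weights, dtype=np.float64)
    assert w.size % 4 == 0, "weight count must be a multiple of 4"
    groups = w.reshape(-1, 4)
    # indices of the 2 smallest |w| in each group of 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (groups * mask).reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.8, 0.7, 0.02, -0.03])
sparse = prune_2_4(w)
print(sparse)  # each group of 4 keeps exactly its 2 largest-magnitude weights
```

The fixed 50% structured layout is what lets Ampere/Hopper tensor cores (via cuSPARSELt) skip the zeroed operands, which is why this pattern speeds up GPU inference while fully unstructured sparsity generally does not.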
Summary: The submission proposes a second-order method for unstructured pruning of neural net parameters, to leverage the efficiency gains of sparsity. It uses an optimization based on the empirical Fisher matrix to find saliency scores that are used for the order in which weights are pruned. They present an algorithm based on solving the problem on individual sub-blocks, and then using that within a global pruning. They also present a schedule, across different hyperparameters, for training with pruning. This is demonstrated on ConvNeXt and DeiT models, along with other CNN and ViT models in the supplement. Strengths: **i)** Achieves better accuracy & sparsity. The proposed pruning gets Pareto-dominant results relative to some similar prior work. The evaluation is done across multiple choices of the sparsity vs accuracy tradeoff. Not all experiments consider some of the most similar previous methods, though, such as WoodFisher in the "Gradual Pruning" results, though the original work did include gradual pruning during training. (See section 5.2 of [38]) **ii)** Considers a good selection of recent models. Experiments demonstrate the pruning on both up-to-date ConvNet and vision transformer models. In the supplement and code, experiments on ResNet-50 and EfficientNet are also provided. **iii)** Includes code. The submission includes the code used to run the experiments, based on SparseML, along with YAML files specifying the hyperparameters for a number of the experiments. Weaknesses: **iv)** Largely superseded by Optimal BERT Surgeon in a lot of settings. OBS is a more scalable algorithm based on the same underlying principles. The core contribution seems to be a different method of approximating the solution to the original 2nd-order problem, plus better hyperparameter tuning. It does seem like a reasonable approach for relatively smaller models, to seek a tighter approximation to the original optimization. 
**v)** Source of improvement in end metrics remains unclear. At its core, the starting point of the construction of the method in the submission is similar to [38]: namely, second-order pruning with the empirical Fisher matrix in place of the Hessian. The differences from [38] then include a) a different method of approximating the pruning on the full matrix, described toward the end of Section 3.1, and b) improved hyperparameters during gradual pruning. The paper states that ablation studies are given in appendices B, I, and J. These appendices specify different sets of hyperparameters, and include results of sparsity vs accuracy for multiple different choices of some of these hyperparameters. I didn't find any "ablation" studies that evaluate parts of the proposed method with other parts "ablated" or removed: for example, the results of using Algorithm 1 *without* the changes to data augmentation, the sparsity and learning rate schedule, etc. Experiments also don't compare against [38] for the gradual pruning case, so the effect of the hyperparameter tuning and schedules isn't measured by a comparison to baseline. It's possible that the improved choices in which weights to prune, as seen in the one-shot case, could also affect the gradual pruning case, but it's also possible that the training can close a lot of this gap. The experiments also use very recent ViT-based models, showing some more up-to-date results than prior work. It is plausible that better pruning is necessary to get good results for these models, as argued, but this is incompletely supported. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Code**: Some miscellaneous minor comments from trying the included code. README.md marks the step to install wandb as "optional." 
Not entirely optional, though, given that it does import it:

```
  File "CAP-code/research/one_shot_pruning.py", line 4, in <module>
    import wandb
ModuleNotFoundError: No module named 'wandb'
```

Commenting out the import, then see:

```
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```

within the import of onnx inside sparseml:

```
    from sparseml.pytorch.utils import LOGGING_LEVELS, BaseLogger, LoggerManager
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/sparseml_nightly-0.13.0.20230704-py3.9.egg/sparseml/pytorch/utils/__init__.py", line 23, in <module>
    from .exporter import *
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/sparseml_nightly-0.13.0.20230704-py3.9.egg/sparseml/pytorch/utils/exporter.py", line 26, in <module>
    import onnx
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/onnx-1.10.1-py3.9-linux-x86_64.egg/onnx/__init__.py", line 20, in <module>
    import onnx.helper  # noqa
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/onnx-1.10.1-py3.9-linux-x86_64.egg/onnx/helper.py", line 17, in <module>
    from onnx import mapping
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/onnx-1.10.1-py3.9-linux-x86_64.egg/onnx/mapping.py", line 27, in <module>
    int(TensorProto.STRING): np.dtype(np.object)
  File "${HOME}/anaconda3/envs/CAP/lib/python3.9/site-packages/numpy/__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
```

Following the instructions in the README had given me `numpy==1.24.3` and `onnx==1.10.1`. It's likely that one needs to hold back numpy; likely an underlying issue with the requirements specified for SparseML. 
Presumably this codebase is using an old version of sparseml whose requires.txt specifies versions with '>=' that no longer really work with the latest versions now available. Authors should check & perhaps update the install commands in their README accordingly, or update their sparseml version. I was able to resolve this with instead:

```
conda install numpy==1.20.3
```

Generally recommended practice for distributing this kind of research code is to explicitly specify versions of the requisite packages with `==`, so we know exactly how the authors originally did all this. The sample command for `one_shot_pruning.py` in the README also includes the command-line arguments `--data-dir` and `--sparseml-recipe`, when the arguments actually added to argparse are `--data_dir` and `--sparseml_recipe`, and arguments `-b` and `--experiment` that don't seem to exist. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors cite computation cost (at training/pruning, not inference/deployment time) as the limitation. The Fisher matrix will still grow quadratically in the number of parameters. This is unfortunate given that the largest models would likely benefit the most from pruning, in absolute terms. (And it is an oft-observed principle in ML that simple methods that scale better are usually more impactful.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback, which we address in detail below. > iv) Largely superseded by Optimal BERT Surgeon in a lot of settings In short, we emphasize that the scalability of both methods you reference (Optimal BERT Surgeon (OBS) and CAP) is essentially the same, while CAP is significantly more accurate across all the settings we tried. Both OBS and CAP use the same 2nd order approximation (the block-wise empirical Fisher). The computational and storage costs of both OBS and CAP are dominated by the estimate of the Fisher inverse. Specifically, most of the practical runtime is taken by the forward-backward passes required for estimating the empirical Fisher at a pruning step. Then, the Optimal BERT Surgeon selects a large group of weights at a single step and prunes it. By contrast, our algorithm eliminates weights one-by-one and accounts for the change of correlations and importance of a particular weight after each previous removal. The key theoretical contribution of our work is a new algorithm for efficiently resolving these inter-weight correlations which arise during pruning: essentially, CAP allows us to efficiently “simulate” OBS weight removal one-weight-at-a-time (which would take massive time to run iteratively). In turn, taking correlations into account leads to much more accurate pruning, as we illustrated in Figures 2 and 3. If we remove this correlation-solving component, our approach becomes identical to OBS. In practice, the runtime and scalability cost of the CAP component is negligible relative to the cost of obtaining the Fisher estimate. We have mentioned this in the main text (l.352), and detailed it in Appendix G: our method has essentially the same runtime and memory costs compared to WoodFisher/Optimal BERT Surgeon, while being significantly more accurate. 
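To make the OBS mechanics under discussion concrete, here is a minimal numpy sketch of one-weight-at-a-time removal on a toy quadratic loss, using the classic saliency w_q^2 / (2 [H^-1]_qq), the optimal compensating update, and the inverse-Hessian downdate. This is the generic textbook OBS step, not CAP's correlation solver or its block-wise empirical Fisher estimation; the matrix sizes and random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
A = rng.normal(size=(d, d))
H = A @ A.T + np.eye(d)          # positive-definite stand-in for the Hessian/Fisher
H_inv = np.linalg.inv(H)
w_star = rng.normal(size=d)      # current (locally optimal) dense weights

def obs_step(w, H_inv, pruned):
    """Remove the single weight with the smallest OBS saliency
    w_q^2 / (2 [H^-1]_qq), apply the optimal compensating update to the
    remaining weights, and downdate the inverse Hessian."""
    scores = w ** 2 / (2.0 * np.diag(H_inv))
    scores[list(pruned)] = np.inf                 # never re-select pruned weights
    q = int(np.argmin(scores))
    s = scores[q]
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]    # compensating update
    w[q] = 0.0
    H_inv = H_inv - np.outer(H_inv[:, q], H_inv[q, :]) / H_inv[q, q]
    H_inv[q, q] = 1.0                             # keep the diagonal usable
    return w, H_inv, q, s

pruned = set()
w_cur, Hi = w_star.copy(), H_inv.copy()
w_cur, Hi, q, s = obs_step(w_cur, Hi, pruned)
pruned.add(q)

# Starting from the optimum, the loss increase equals the saliency exactly.
loss = 0.5 * (w_cur - w_star) @ H @ (w_cur - w_star)
print(q, loss, s)
```

Iterating this step is what makes naive one-at-a-time OBS expensive at scale (each removal touches the running inverse), which is the cost that a batched selection like Optimal BERT Surgeon avoids and that an efficient correlation solver aims to recover cheaply.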
On methodology, we would like to emphasize that we have considered the best available implementation of the Optimal Brain Surgeon framework, which is the Optimal BERT Surgeon. The hyperparameter tuning procedure was the same for all methods: we tuned each method independently, and always selected the best hyperparameter configuration for each method in turn. Details on this are provided in the Appendix. Hence, we believe that our comparisons are fair. > v) Source of improvement in end metrics remains unclear As noted above, our contribution is an efficient exact weight correlation solver during pruning: if we remove the correlation solver from CAP, we obtain an instance of OBS using a block-wise Fisher approximation, which is essentially WoodFisher/Optimal BERT Surgeon. Thus, our one-shot pruning comparisons essentially provide one of the key ablation studies that the reviewer is asking for: WoodFisher/OBS is CAP without the correlation solving. So, in the comparisons with WoodFisher/OBS, we are ablating the key component of our method. We have provided this comparison, showing the impact of correlation solving, in Figure 2 and Table 4 in the main text, and Figure 6, Table 6, Figure 7, Table 7, and Figure 8 in the Appendix, across several models and tasks. Further, we have shown that correlation solving (our method) has a significant effect, beyond just accuracy: in Figure 3 (main text), we illustrated that the weight configurations chosen by CAP and WoodFisher/OBS are significantly different: the sparse weights obtained after one-shot pruning land at different points in the loss basin. In Appendix Figure 7, we show exactly the same one-shot effects for a ResNet50-D model, whereas Table 6 presents results for a ConvNext model. This validates the fact that our results hold across very different architectures. Moreover, we have also shown results on different tasks (DeTR). 
Appendix C performs ablations over the fine-tuning parameters (learning rate schedule and augmentation procedure), validating our choice of schedule. The one-shot + fine-tuning results in Appendix Tables 6 and 7 confirm the effect that the reviewer is referring to: fine-tuning usually reduces the gap between methods, but the differences remain significant, especially at large sparsities. (E.g., CAP produces a 90%-sparse model that is more accurate by 5 Top-1 points relative to WF, on ConvNext-Small.) In sum, we believe that all these results support our claim that correlation solving can have a consistent, significant impact on pruning accuracy, across model scales, model types, and tasks. (For reference, WoodFisher and Optimal BERT Surgeon implement the same empirical Fisher approximation, but Optimal BERT Surgeon presents a more efficient implementation, with parameters specifically chosen for accuracy and scalability on Transformer models.) > Code: Some miscellaneous minor comments from trying the included code We would like to sincerely thank you for your effort in trying out our code, and we apologize for the dependency issues you encountered. We have taken all your suggestions into account to produce an improved version of the code package. > Limitations Indeed, computational cost is an issue for all approximate second-order methods in deep learning, which we see as an intriguing challenge to address. Please note, however, that the blockwise Fisher matrix approximation used in our work does not scale quadratically with the dimension: it is linear in the dimension times the block size: the algorithm requires $O(d B)$ memory and has $O(d B^2)$ runtime, as mentioned in Section 3.1. Larger models benefit more from pruning, but compression of moderate-size models (e.g., with 1-50M parameters) is still of great practical use for inference on edge devices, which are constrained in compute power.
Our work shows significant practical speed-ups for unstructured sparsity on commodity CPUs, and can therefore have real-world impact. To further address this concern about scalability, in the response to Reviewer JVcP and the general response we present a much more efficient approximate version of CAP (FastCAP). --- Rebuttal Comment 1.1: Comment: > Thus, our one-shot pruning comparisons essentially provide one of the key ablation studies that the reviewer is asking for: WoodFisher/OBS is CAP without the correlation solving This seems correct, and a good experiment without confounds (since it's comparing under the simple & shared one-shot procedure) in the case of Figure 2 & similar experiments in the supplement. Though I'm perhaps looking in the wrong place for Table 4 in the main text? > general response we present a much more efficient approximate version of CAP (FastCAP). Where ought we look for more details on FastCAP? The description given in the top-level comment seems to elide a lot that might be needed for a full review. > compression of moderate-size models (e.g. with 1-50M parameters) is still of great practical use for inference on edge devices Definitely true! Perhaps this is scalable enough. --- Reply to Comment 1.1.1: Comment: Thank you for your response! Please see replies inline: > This seems correct, and a good experiment without confounds (since it's comparing under the simple & shared one-shot procedure) in the case of Figure 2 & similar experiments in the supplement. Though I'm perhaps looking in the wrong place for Table 4 in the main text? We are very glad that the reviewer found this part of the response clarifying. We will also highlight this point in the next revision. We apologize for the confusing table reference: we meant Table 1, not Table 4. More precisely, Table 1 shows the comparison of one-shot pruning performance of various methods on large (CLIP-sized) models. > Where ought we look for more details on FastCAP?
> The description given in the top-level comment seems to elide a lot that might be needed for a full review. To address this, we provide the full pseudocode for FastCAP below; for simplicity, we provide the code for a single weight matrix. The same procedure is applied to all layers. The main two components of the CAP/FastCAP algorithm are: 1. **Empirical Fisher estimate.** The original CAP has a specific Fisher block for each output channel, whereas the FastCAP version averages blocks across the output-channel dimension. (This reduces storage complexity.) 2. **Weight elimination and update iteration.** Whereas the original CAP eliminates weights in greedy-optimal order, FastCAP prunes weights in a fixed order. (This reduces computational complexity.) Notation: * `B` - Fisher block size * `b` - block size in the iterative pruning algorithm * `N` - number of gradients * `N_B` - number of blocks per parameter 1) **Fisher accumulation** `F ← 0_{B x B}` **for** i=1:N **do** &ensp; reshape `dL_i / dw` to `(N_B, B)` &ensp; `F += (1 / N) (dL_i / dw)^T (dL_i / dw)` // average Fisher blocks across the layer output dimension **end for** 2) **Iterative pruning** `M ← 1_{N_B x B}` // binary pruning mask `E ← 0_{N_B x b}` // block pruning errors `F ← Cholesky(F^{-1})` // efficient form of the Fisher inverse **for** `i = 0, b, 2b, …` **do** &ensp; // select elements to keep &ensp; `M_{:,i:(i+b)}` ← mask of the `(1 − p)`% of weights `w_k` in `W_{:,i:(i+b)}` with largest `w_k^2 / [F^{-1}]_{kk}` &ensp; **for** `j = i, …, i+b-1` **do** &ensp; &ensp; `E_{:,j−i} ← W_{:,j} / [F^{-1}]_{jj}` // pruning error &ensp; &ensp; `E_{:,j−i} ← (1 − M_{:,j}) E_{:,j−i}` // keep the error only for pruned weights &ensp; &ensp; `W_{:,j:(i+b)} ← W_{:,j:(i+b)} − E_{:,j−i} [F^{-1}]_{j,j:(i+b)}` // update remaining weights to correct the output &ensp; **end for** **end for** `W ← W * M` // set pruned weights to 0 As mentioned in the earlier response, the Fisher block is shared across the output dimension, reducing space complexity to $O(B^2)$.
We adapted the trick of maintaining the Fisher inverse in Cholesky form from the SparseGPT paper, to account for iterative elimination of the weights in a fixed order. This allows running the iterative process efficiently.
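As a runnable companion to the Fisher-accumulation step of the pseudocode, here is a minimal NumPy sketch. The flat per-sample gradients are hypothetical inputs, and as a simplifying assumption we average over the `N_B` blocks as well as over the `N` samples; the function name is ours.

```python
import numpy as np

def averaged_fisher_blocks(grads, block_size):
    """Accumulate a B x B empirical Fisher block that is averaged across
    the output dimension, as in the FastCAP relaxation sketched above.
    `grads` is a list of flat per-sample gradient vectors (hypothetical
    data); each is reshaped to (N_B, B) and the resulting B x B blocks
    are averaged over both samples and blocks."""
    B = block_size
    F = np.zeros((B, B))
    for g in grads:
        G = g.reshape(-1, B)                      # (N_B, B)
        F += G.T @ G / (len(grads) * G.shape[0])  # average over N and N_B
    return F
```

Because a single averaged block stands in for all per-channel blocks, storage drops to $O(B^2)$, matching the complexity claim above.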
Summary: This paper proposes to account for the correlation between pruned elements when pruning deep neural network models. The paper provides an efficient algorithm to disentangle the correlation into a sparse regression problem, and proposes a fast solver to find the solution. Further exploration is performed on the learning rate scheduling and data augmentation of the pruning and finetuning process. The final method results in a better model size-accuracy tradeoff compared to previous methods, with less training cost. Strengths: 1. This paper provides a well-motivated method for model pruning with solid theoretical justification 2. The proposed reformulation of sparse minimization and the fast solver for the optimization are novel and have significant impact on model compression 3. Adequate experiments are provided to show the effectiveness of the proposed method, including pruning advanced, highly accurate models 4. The paper is overall well written and easy to follow. Weaknesses: As discussed in the limitations, the cost of considering the full correlation is still prohibitive on large models. It would be interesting to see some relaxation of the proposed method that has lower cost while maintaining most of the good performance. More discussion is encouraged on the potential relaxation strategy, and on the tradeoff between optimization cost and performance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations and social impacts have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very insightful feedback! Regarding scaling: currently, CAP can scale easily to models with hundreds of millions of parameters, with reasonable block sizes and reasonable runtime (< 30 minutes per pruning step on 1 GPU). Yet, you are right that it would be challenging to scale CAP to billion-parameter models, because of the necessity of storing and processing blocks of the Fisher inverse. Motivated in part by your suggestion, we observed that there is a relaxation of CAP which can easily scale to massive models, via two steps: * First, one can average Fisher blocks across the layer output dimension and use these blocks as a proxy for the Hessian, where each Hessian matrix is the same for all output channels. (This step is similar to the Layer-Wise Optimal Brain Surgeon paper [1].) * Secondly, following the recent SparseGPT paper [2], instead of eliminating the weights in the greedy-optimal order, i.e., always selecting the weight whose elimination leads to the smallest increase in loss, one can prune them in a fixed order, shared across all output dimensions. Specifically, we adapted the iterative removal process from SparseGPT, which is more GPU-friendly and scales better with model size. The resulting modification is an approximation of CAP: the main constituents (the block-wise Fisher approximation and the resolution of inter-weight correlations) are the same as in the original algorithm. The two approximations above reduce the memory footprint and runtime to linear in the embedding dimension: in practice, this is a 100x-1000x reduction compared to the original CAP. Thus, FastCAP is easily scalable to billion-parameter models at reasonable computational cost. To validate the performance of FastCAP, we applied it to prune large ViT models in a one-shot setting, specifically `eva_giant_patch14_224.clip_ft_in1k` from the `timm` library. This model has **>1B** parameters.
We compared the top-1 accuracy on ImageNet of FastCAP vs. the magnitude pruner. One can see that FastCAP significantly outperforms the baseline. At the same time, FastCAP has a reasonable runtime and memory footprint at this scale: a pruning step takes ~400 seconds on a single A100 GPU. Table 1. Performance of large ViT after 1-shot pruning. | Model | Sparsity | Method | Accuracy (%) | |:---------:|:--------:|:---------:|:------------:| | EVA ViT-G | 0 | - | 88.7 | | | 50 | Magnitude | 87.9 | | | | FastCAP | 88.1 | | | 60 | Magnitude | 85.5 | | | | FastCAP | 86.3 | | | 70 | Magnitude | 64.3 | | | | FastCAP | 76.1 | [1] Dong, Xin, Shangyu Chen, and Sinno Pan. "Learning to prune deep neural networks via layer-wise optimal brain surgeon." Advances in Neural Information Processing Systems 30 (2017). [2] Frantar, Elias, and Dan Alistarh. "Massive language models can be accurately pruned in one-shot." arXiv preprint arXiv:2301.00774 (2023). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response. I'm satisfied with the response and will keep my score.
Summary: The paper proposes the Correlation Aware Pruner (CAP), a new unstructured pruning framework capable of pruning models to high sparsity by taking weight correlations into account. To do this, the paper reformulates the OBS multi-weight pruning problem: when using the empirical Fisher approximation, the problem of finding the optimal set of weights to be removed, while taking correlations into account, is equivalent to the problem of finding the set of sparse weights which best preserve the original correlation between the dense weights and the gradients on a fixed set of samples. On top of that, the paper also applies a series of training techniques to improve training: Learning Rate Schedule, Regularization and Augmentation, Efficient Sparsity Sweeps. Results show better performance compared to existing works, especially at high compression ratios. Strengths: The paper is well written and has good logic. A great amount of experiments and ablation discussion strengthens the proposed method. Weaknesses: It is hard for unstructured pruning to achieve actual hardware speedups due to the irregular sparsity. Although the proposed method can maintain high accuracy at extreme prune ratios, it may be only theoretically more efficient. The authors are encouraged to add inference latency or throughput in results Tables 1 & 2. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please address the limitations and potential negative societal impact in the revision.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments! It is true that, traditionally, sparsity has been harder to leverage for computational speedups. However, unstructured sparsity is now supported with speedups on CPU, and 2:4 semi-structured sparsity is supported with speedups on NVIDIA GPUs. Our method can create accurate models targeting both sparsity types: * Figure 4 shows end-to-end throughput speedups of more than 2x for CPU deployments when running CAP models using the DeepSparse inference engine (which supports unstructured sparsity). * In Appendix H.3, we provide accuracy results for several models with 2:4 semi-structured sparsity, with minor or no performance drop compared to the original dense model. This format is natively supported with speedups by all modern NVIDIA GPUs (Ampere, Hopper) [1]. Following your suggestion, we will add the corresponding inference throughput speedups in Tables 1 and 2. The numbers can currently be read from Figure 4, which provides the desired data in speedup-vs-accuracy format. Please note that, e.g., for DeiT-Tiny, we obtain a real-world speedup of > 2x at negligible accuracy loss. The main limitation of our method is scalability, since the memory footprint and runtime become prohibitively expensive for models with 1B parameters and more. However, one can propose a relaxed version with some additional approximations that can be scaled to large models; we describe it in more detail in the general response. Sparse models are not expected to show more malicious behavior than dense models. However, as with any other kind of technology, they can be applied for both good and bad purposes. The main potential outcome of our work is the speedup and more widespread adoption of modern computer vision architectures in resource-constrained setups, such as inference on edge devices.
[1] https://developer.nvidia.com/blog/exploiting-ampere-structured-sparsity-with-cusparselt/ --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. I would like to raise my score to weak accept (6) --- Reply to Comment 1.1.1: Title: Thank you! Comment: We would like to thank the reviewer for their response, and for raising their score! P.S.: As far as we can tell, the score has remained the same in the Author Console.
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback and comments on our work. Below is a summary of the main concerns and questions addressed in our rebuttal: **1. Difference between WoodFisher (Optimal Brain Surgeon) and CAP.** In the response, we emphasized that our contribution is not another approximation of the Fisher matrix, but a new algorithm resolving the correlations between the weights during the pruning process. Specifically, CAP allows the user to efficiently emulate the ideal Optimal Brain Surgeon (OBS) process of pruning one weight at a time, adjusting the remaining weights after each removal. As such, it is the first implementation of "true" OBS at scale. If we completely remove correlation solving from CAP and perform one-shot pruning, we simply obtain a vanilla instance of OBS, essentially WoodFisher. Thus, our comparisons with regular instances of OBS such as WoodFisher/Optimal BERT Surgeon, which do not solve for correlations, essentially showcase the power of correlation solving. These results, shown in Figures 2 and 3 and Table 4 in the main text and in Figure 6, Table 6, Figure 7, Table 7, and Figure 8 in the Appendix, show that resolving correlations accurately can have a major impact on the accuracy of the pruned model. Appendix Figure 12 performs a very fine-grained analysis of the impact of correlation solving on the accuracy of pruned models. **2. CAP Scalability.** The version of CAP we presented in our submission can scale to models with < 1B parameters, covering most standard vision models. Its key computational and memory cost comes from computing and maintaining the inverse Fisher approximation. This is the same as for other methods, such as WoodFisher/Optimal BERT Surgeon, and the running time of CAP is essentially the same as that of these prior methods, while CAP is significantly more accurate.
To fully address this scalability concern, in response to Reviewer jVCP we present a much more scalable approximate variant of CAP, called FastCAP, which is inspired by the recent SparseGPT scalable pruner (arXiv:2301.00774). Specifically, FastCAP reduces compute and memory cost by ~100x by leveraging a cheaper Fisher approximation and relaxing the optimal greedy order of pruning. The other method components stay the same. FastCAP can be scaled to billion-parameter vision models, i.e., we can prune ViT-Giant in a few minutes, while achieving reasonable performance in the one-shot setting for 50-70% sparsity. We present a simple description of FastCAP in the responses to Reviewers jVCP and kMpj, and plan to provide the full results in the next revision. The results for pruning a large ViT model are presented in **Table 1**. Table 1. Performance of 1-shot pruning on ViT-Giant. | Model | Sparsity | Method | Accuracy (%) | |:---------:|:--------:|:---------:|:------------:| | EVA ViT-G | 0 | - | 88.7 | | | 50 | Magnitude | 87.9 | | | | FastCAP | 88.1 | | | 60 | Magnitude | 85.5 | | | | FastCAP | 86.3 | | | 70 | Magnitude | 64.3 | | | | FastCAP | 76.1 | **3. Practical Speedups.** We presented end-to-end speedup results on CPUs in the original submission: for instance, we obtain speedups of 1.5--2x across ViT-family models, with negligible accuracy impact. Although general sparsity is not supported on GPU hardware, 2:4 structured sparsity is natively supported in modern (Ampere and Hopper) GPU architectures, and leads to speedups of tensor multiplication operations. Our method can be applied to create 2:4-sparse models as well, with a minor modification to the sparsity mask constraint. Accuracy results for the resulting 2:4-sparse models were presented in Appendix H.3. For convenience, we present them in **Table 2** below.
The resulting models can be executed with speedups via the NVIDIA TensorRT GPU inference engine, at relatively minor accuracy drops (0.1-0.8%) relative to the dense baselines. Table 2. Semi-structured 2:4 pruning of ViT models followed by 10 epochs of finetuning | Model | Method | Top-1 Accuracy (%) | |:-----------:|:------:|:------------------:| | DeiT-Tiny | - | 72.2 | | | GM | 68.8 | | | CAP | 71.5 | | DeiT-Small | - | 79.8 | | | GM | 77.9 | | | CAP | 79.0 | | DeiT-Base | - | 81.8 | | | GM | 81.2 | | | CAP | 81.3 | We look forward to an engaging discussion!
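For readers unfamiliar with the 2:4 semi-structured format, building such a mask can be sketched in a few lines of NumPy. This toy uses plain magnitude selection as a stand-in for CAP's saliency criterion, and the function name is ours; only the mask format itself (2 nonzeros per group of 4) matches what the GPU kernels require.

```python
import numpy as np

def mask_2_to_4(W):
    """Build a 2:4 semi-structured sparsity mask: in every group of 4
    consecutive weights along the last axis, keep the 2 of largest
    magnitude. Magnitude is an illustrative stand-in for a saliency
    score; the 2-of-4 pattern is what Ampere/Hopper GPUs accelerate."""
    rows, cols = W.shape
    assert cols % 4 == 0
    groups = np.abs(W).reshape(rows, cols // 4, 4)
    # indices of the 2 smallest-magnitude entries in each group of 4
    drop = np.argsort(groups, axis=-1)[..., :2]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=-1)
    return mask.reshape(rows, cols)
```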
NeurIPS_2023_submissions_huggingface
2023
ReSync: Riemannian Subgradient-based Robust Rotation Synchronization
Accept (poster)
Summary: The paper concerns synchronization of observed rotations with incomplete and corrupted observations. The authors construct the method ReSync, a subgradient-based algorithm for solving the problem. The paper describes the context, prior results on the synchronization problem, and the algorithm, and presents a thorough convergence analysis together with experimental evaluation. Strengths: - well-written and clearly presented paper - the presented method deals with an important problem - the presentation of the algorithm is followed by a thorough convergence analysis - the method performs well in the experimental validation Weaknesses: - constructing a gradient descent algorithm is not a big contribution in itself. However, I believe the geometric setting and the connection to the theoretical analysis, which is not trivial, makes the contribution important Technical Quality: 3 good Clarity: 3 good Questions for Authors: no questions Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the supportive and valuable comments. Should the reviewer have any further concerns, please inform us during the reviewer-author discussion period so that we can respond timely. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. My scoring has not changed. --- Reply to Comment 1.1.1: Comment: Thank you very much for reading our rebuttal and for your response. We will be closely following the Reviewer-Author discussion period in case the reviewer has any additional concerns or questions. Title: Thank you for your response
Summary: This paper presents a theoretical study of robust rotation synchronization with a least-unsquared minimization formulation over the rotation group. In particular, this paper proposes a two-step algorithm called ReSync, where the first step uses spectral initialization to generate an initial guess and the second step performs Riemannian subgradient descent from the initial guess. The paper proves that, under suitable conditions on the random corruption model, this algorithm converges linearly to the ground-truth rotations. The paper presents numerical experiments that verify the correctness of the theorem and compare the performance of ReSync with other state-of-the-art algorithms. Strengths: - The theoretical contribution of this paper advances the previous state of the art. - Paper is well written and easy to follow, despite being a theory paper. Weaknesses: - I am curious if similar guarantees could be made in the case where the inlier measurements are corrupted by small (and bounded) noise? Could you guarantee the algorithm converges to a solution that has bounded error from the ground-truth rotations? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
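(For context on the second step summarized above: Riemannian iterations over the rotation group typically map each updated matrix back onto SO(d) with one SVD. The following is a minimal NumPy sketch of that standard projection, not code from the paper; the function name is ours.)

```python
import numpy as np

def project_to_SO(A):
    """Project a d x d matrix onto SO(d) via SVD: the nearest rotation
    in Frobenius norm is U diag(1, ..., 1, det(U V^T)) V^T, where the
    last diagonal entry enforces determinant +1. Standard construction,
    shown here only to illustrate the retraction step."""
    U, _, Vt = np.linalg.svd(A)
    D = np.eye(A.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # enforce det = +1
    return U @ D @ Vt
```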
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the supportive and valuable comments. We address the concern below. **A. Guarantees with additive noise.** Yes. It is possible to show convergence results under additive noise, in which case the algorithm converges to a neighborhood of the ground-truth rotations up to the scale of the noise. The initialization analysis should not change much, as it fits well with additive noise. Then, a crucial step involves providing a noise-perturbed version of the weak sharpness property, i.e., $$ f(\boldsymbol{X}) - f^\star \geq \mathcal{O}(npq) \operatorname{dist}_1(\boldsymbol{X},\boldsymbol{X}^\star) - \nu, $$ where the perturbation $\nu$ is caused by the additive noise. The main difficulty lies in conducting the contraction analysis, as the presence of additive noise brings additional challenges for controlling the infinity-norm-induced distance. We believe it is possible to circumvent this technical difficulty by studying the properties of the noise more carefully. This line of research is our ongoing work. We hope that our response is satisfactory to the reviewer and that the concern has been addressed appropriately. Should the reviewer have any further concerns, please inform us during the reviewer-author discussion period so that we can respond timely. --- Rebuttal Comment 1.1: Comment: Thanks for the response, I maintain my original score. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you very much for reading our rebuttal and for your response. We will be closely following the Reviewer-Author discussion period in case the reviewer has any additional concerns or questions.
Summary: This work proposes to solve the rotation synchronization problem using a Riemannian subgradient method with spectral initialization. The proposed formulation is a sum of absolute deviations, which is robust to outliers. Exact recovery guarantees are provided under the uniform corruption model (the graph is Erdos-Renyi with probability of corruption 1-p). Numerical results show competitive performance of the proposed method compared to other state-of-the-art methods. Strengths: 1. The theoretical result in the noiseless case (corruption only) is quite strong. That is, it shows linear convergence to the ground-truth rotations whenever n > 1/(p^7 q^2) up to a log factor, where p is the probability of being a clean edge (conditioned on being an edge), and q is the probability of being an edge. 2. The numerical experiments show advantages over previous state-of-the-art methods in the presence of both corruption and noise. 3. The proofs look correct. Overall, I enjoyed reading the paper. Weaknesses: 1. This is not necessarily a weakness, but it would be even nicer if the authors could comment on the stability of the algorithm to noise (would it be possible to show approximate recovery in this case)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I wonder how sensitive your method is to initialization? For example, given random initialization, what is the typical behavior of your algorithm in numerical experiments, and what about the theoretical results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the supportive and valuable comments. We address the concerns in a point-by-point manner below. **A. Stability to noise.** Yes. It is possible to show stability and convergence results under additive noise, which is our ongoing work. The initialization analysis should not change much, as it fits well with additive noise. Then, a crucial step involves providing a noise-perturbed version of the weak sharpness property, i.e., $$ f(\boldsymbol{X}) - f^\star \geq \mathcal{O}(npq) \operatorname{dist}_1(\boldsymbol{X},\boldsymbol{X}^\star) - \nu, $$ where the perturbation $\nu$ is caused by the additive noise. The main difficulty lies in conducting the contraction analysis, as the presence of additive noise brings additional challenges for controlling the infinity-norm-induced distance. We believe it is possible to circumvent this technical difficulty by studying the properties of the noise more carefully. The final result would be convergence to a neighborhood of the ground-truth rotations up to the scale of the noise. **B. Sensitivity to initialization.** We test our algorithm with random initialization using the setting of Fig. 2(a) in our manuscript; see Fig. 2 in the one-page supplementary PDF of the rebuttal. It can be observed that our algorithm continues to work. However, it requires a much larger diminishing factor $\gamma$ for the step size, resulting in notably slower convergence. Theoretically, we do not yet see how to prove convergence guarantees with random initialization, given that the geometric property (e.g., weak sharpness) only holds locally. We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. Should the reviewer have any further concerns, please inform us during the reviewer-author discussion period so that we can respond timely.
--- Rebuttal Comment 1.1: Comment: I thank the authors for the response; it addressed all my questions. After reading all the reviews and comments, my opinion that this is a solid theoretical paper has not changed. Therefore, I prefer not to change my score. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you so much for reading all the reviews and our rebuttals and for your response.
Summary: The paper proposes a Riemannian subgradient-based algorithm for the robust rotation synchronization (RRS) problem. RRS involves recovering the absolute rotations of objects from the possibly corrupted/noisy relative rotations between pairs of objects. The problem setting involves two ratios: q \in [0,1] denotes the observation ratio and p \in [0,1] denotes the true observation ratio. The paper poses the problem as a (non-convex and non-smooth) least-unsquared minimization formulation over the rotation group. The proposed method ReSync has a spectral-relaxation-based initialization procedure, which is followed by Riemannian subgradient iterations. The main contribution of the paper is to show that under the random corruption model (RCM) setting: (a) the proposed initialization (X^0) can be relatively close to the true solution (X^*), depending on p and q, and (b) given the initialization guarantee, the Riemannian subgradient descent shows a local linear rate of convergence. Overall, the paper shows that ReSync converges linearly to the ground-truth rotations when p^7 q^2 = \Omega(log n / n). Towards the end of the draft, the paper has a few experimental results that compare the proposed algorithm against the state of the art. Strengths: The paper presents an interesting approach for recovering the ground truth (X^*) in the RRS problem. The key theoretical guarantees for ReSync come from a) an initialization procedure SpectrIn, which ensures that the initialization X^0 is close to X^*, b) the weak sharpness property of the least-unsquared formulation, which is being solved via ReSync, and c) a local linear convergence analysis for ReSync based on the initialization and the weak sharpness property. I have, however, not verified the correctness of the theoretical results. Weaknesses: Concerns regarding theory: 1. The paper assumes missing observations are the zero matrix, which does not lie on the SO(d) manifold. Hence, Y_{ij} \in SO(d) only if (i,j) belongs to the available observations.
The paper does not provide any justification for this choice. A more suitable choice seems to be the identity matrix, as it lies on SO(d). 2. In lines 178-180, it is stated that E[Y_ij] = pq X_i^*(X_j^*)^\top for all (i,j). This does not seem correct, as it is not clear how the outlier terms O_ij \in SO(d) are handled while computing this expectation. 3. While the paper cites and discusses its differences with [27] in lines 171-175, it seems that [27] should be discussed in more detail. While [27] focuses on the orthogonal group with additive Gaussian noise and the permutation group with outliers, it should be noted that the permutation group is a special subset of the orthogonal group. Interestingly, [27] states that "though it is not analyzed in our manuscript, the proof technique for the permutation group synchronization under uniform corruption could be directly modified to tackle this O(d) synchronization under uniform multiplicative corruption" (in the paragraph before Section 3.2). The proof via the leave-one-out technique seems to be adopted from [27]. Hence, while [27] has been cited in Section 3.1 of the paper, this part does not seem to be the main contribution and could have been discussed in the supplementary material. Overall, [27] deserves more discussion, especially w.r.t. the above quoted statement, and in this regard the contribution of the paper should be clearly highlighted. 4. A discussion on the computational cost of the proposed algorithm is missing. Concerns regarding experiments: 1. The paper shows only a few empirical results on synthetic datasets. While this gives some insight into how the algorithm works in a lab environment, performance in a real-world setting gives an idea of how the algorithm will perform in real applications. If space was a factor, the paper could have moved some of the proofs/proof outlines to the supplementary section. 2.
While the paper discusses [40] and states that it "introduces a least-unsquared formulation and applies the SDR method to tackle it", the paper should have mentioned more directly that the main formulation (2), which the paper tries to solve, was originally proposed in [40]. Hence, while the theoretical results of [40] are in the q=1 setting, the paper should have empirically compared with [40] as well. Similarly, paper [30] should also be compared with empirically. 3. Experiments are done in two settings: with and without additive noise. The setting without additive noise has been theoretically analyzed in the paper. In this setting, the DESC method seems to be better than or similar to ReSync. Any insights as to why it does better (where it does)? Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please look in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
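For readers less familiar with this class of methods, the spectral initialization idea summarized in the review (take the top-d eigenvectors of the block matrix of relative rotations, then project each d x d block onto SO(d)) can be sketched in a few lines. This is a hypothetical minimal version for the clean, fully observed case, not the paper's SpectrIn; all function names and the eigenspace sign fix are illustrative.

```python
import numpy as np

def project_to_SO(M):
    """Project a square matrix onto SO(d) via SVD (sign-corrected Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    d = M.shape[0]
    S = np.diag([1.0] * (d - 1) + [float(np.sign(np.linalg.det(U @ Vt)))])
    return U @ S @ Vt

def spectral_init(Y, n, d):
    """Estimate n rotations from the (n*d) x (n*d) block matrix of relative
    rotations: take the top-d eigenvectors, rescale, fix the O(d)-vs-SO(d)
    ambiguity of the eigenspace, and project each d x d block onto SO(d)."""
    _, V = np.linalg.eigh(Y)
    U = V[:, -d:] * np.sqrt(n)          # blocks are now (near-)orthogonal matrices
    if np.linalg.det(U[:d]) < 0:        # make each block a proper rotation
        U[:, -1] *= -1
    return [project_to_SO(U[i * d:(i + 1) * d]) for i in range(n)]

# Toy check in the clean, fully observed case: Y_ij = X_i X_j^T recovers
# the X_i up to a single global rotation.
rng = np.random.default_rng(0)
n, d = 20, 3
X = [project_to_SO(rng.standard_normal((d, d))) for _ in range(n)]
Y = np.block([[X[i] @ X[j].T for j in range(n)] for i in range(n)])
est = spectral_init(Y, n, d)
G = est[0].T @ X[0]                     # global alignment from the first block
err = max(np.linalg.norm(est[i] @ G - X[i]) for i in range(n))
```

In the corrupted/incomplete setting analyzed in the paper, the top eigenvectors only approximate the ground truth, which is exactly why the subsequent Riemannian subgradient refinement is needed.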
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments. We address the concerns in a point-by-point manner below. **A. Missing observations (Q1 in concerns regarding theory).** Since formulation (2) only relies on available observations, we are allowed to assign the missing entries as $\mathbf{0}$. In theory, this setting ensures that the $d$ leading eigenvectors of $\mathbb{E}(\boldsymbol{Y})$ are $\boldsymbol{X}^\star$. However, assigning missing observations as identity matrices may not necessarily achieve this outcome when $q$ is small. This particular choice has also been utilized in previous works; see the overview in the fourth paragraph of [24, Section 2.1] (here, [24] refers to reference [24] in our manuscript). In experiments, we use the missing entries only in the spectral initialization (Algorithm 2). We have also conducted a simulation illustrating that assigning the missing observations as identity matrices decreases the performance; see Fig. 1 in the one-page supplementary PDF of the rebuttal. We shall add the following sentence at line 29 in the revised version: *"The missing observations are set to be $\mathbf{0}$ by convention; see, e.g., [24, Section 2.1]."* **B. Computation of the expectation (Q2 in concerns regarding theory).** We are sorry for the confusion caused. We have used the fact $\mathbb{E} (\boldsymbol{O}_{ij}) = \mathbf{0}$, since outliers are assumed to be independently and uniformly distributed on $\operatorname{SO}(d)$ in the RCM. In the revised version, we shall replace "our random graph setup" in line 178 with *"the RCM"* to enhance clarity. **C. Connections to [27] and highlighting our contributions (Q3 in concerns regarding theory).** We have two major differences from [27]: 1) Nontrivial modifications due to the specific structure of $\operatorname{SO}(d)$.
Our approach follows the standard leave-one-out analysis based on the standard "Dist" (up to $\operatorname{O}(d)$ invariance) defined above Lemma 3 in the Appendix. Nonetheless, we have to transfer the results to "dist" due to the structure of $\operatorname{SO}(d)$ in Lemma 5, which is new. 2) Handling incomplete observations ($q<1$). In the case of incomplete observations, the constructed $\boldsymbol{W}$ in (17) in the Appendix becomes more intricate; it has an additional third column, rendering the analysis of our Lemma 2 more involved. We shall elaborate on these discussions in the revised version. Concerning our contributions, in addition to the above nontrivial modifications in the initialization analysis, our more important contributions are found in Sections 3.2 and 3.3. These sections focus on the geometric and contraction analyses, respectively, which were not present in [27]. **D. Computational cost (Q4 in concerns regarding theory).** Algorithm 2 has computational cost $\mathcal{O}(n^3)$. The per-iteration complexity of the Riemannian subgradient procedure is $\mathcal{O}(n^2 q)$. We shall add these discussions to Section 2.1 in the revised version. **E. Lack of real-world experimental results (Q1 in concerns regarding experiments).** Since the primary focus of this work is theory, we originally conducted only synthetic experiments to corroborate our theoretical findings. Per the reviewer's request, we have now implemented the experiment in [40, Fig. 7] on the real-world "Lucy" dataset (here, [40] refers to reference [40] in our manuscript); see Fig. 5 in the one-page supplementary PDF of the rebuttal. It can be observed that our method outperforms DESC and LUD (here, LUD refers to the algorithm in [40], and we use their implementation and default parameters). We shall add this experimental result to our revised version. Moreover, we are studying the application of our algorithm to Cryo-EM imaging settings (based on common lines), which is our ongoing work.
**F. References [40] and [30] (Q2 in concerns regarding experiments).** We shall add the following sentence at the beginning of line 63 in the revised version: *"This formulation was introduced in [40] as the initial step for applying the SDR method."* We have compared our algorithm with LUD in [40] (using their implementation and default parameters) in the setting of Fig. 2 of our manuscript; see Fig. 3 in the one-page supplementary PDF of the rebuttal. LUD has competitive performance when additive noise is present, which is reasonable since LUD attempts to solve a convex relaxation of problem (2). We shall add these comparisons in the revised version. We do not compare with [30], as we observed that it has slightly suboptimal performance compared to CEMP and MPLS (which we included in our experiments) in its own simulation results; see [30, Fig. 4]. **G. Discussion on DESC (Q3 in concerns regarding experiments).** In the absence of additive noise, the good-cycle condition in DESC is likely fulfilled. This, together with its carefully designed post-processing procedure for recovering the ground-truth rotations from the estimated corruption level, leads to highly competitive experimental performance. In fact, our method performs better if we use larger $n$; see Fig. 4 in the one-page supplementary PDF of the rebuttal, where we use the setting of Fig. 3(a) and 3(c) in our manuscript with the only difference being $n = 500$ rather than $n = 200$. We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. Should the reviewer have any further concerns, please inform us during the reviewer-author discussion period so that we can respond in a timely manner. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the detailed response. Regarding point A: The following point is not clear. It would be nice if the authors could elaborate on this.
>In theory, this setting ensures that the leading eigenvectors of $\mathbb{E}[\mathbf{Y}]$ are $\mathbf{X^{*}}$. Regarding point B: It would be nice if the authors could explain why independently and uniformly distributed outliers on SO(d) will have zero mean. >We have used the fact $\mathbb{E}[\mathbf{O}_{ij}]=0$ since outliers are assumed to be independently and uniformly distributed on SO(d) in the RCM. The authors have answered my other questions. I have accordingly changed my score
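On the second question above: for any fixed $Q \in \operatorname{SO}(d)$, left-invariance of the Haar (uniform) measure gives $\mathbb{E}[R] = \mathbb{E}[QR] = Q\,\mathbb{E}[R]$, which forces $\mathbb{E}[R] = \mathbf{0}$ for $d \geq 2$. This zero-mean property is also easy to check numerically; a quick Monte Carlo sketch using SciPy's Haar sampler (illustrative, not the authors' code):

```python
import numpy as np
from scipy.stats import special_ortho_group

# Sample Haar-uniform rotations on SO(3); their entrywise sample mean
# should shrink toward the zero matrix at the usual O(1/sqrt(N)) rate.
N = 20000
R = special_ortho_group.rvs(dim=3, size=N, random_state=0)  # shape (N, 3, 3)
frob = np.linalg.norm(R.mean(axis=0))                       # near 0 for large N
```

With N = 20000 samples the Frobenius norm of the sample mean is on the order of 10^-2, consistent with the Monte Carlo rate.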
Rebuttal 1: Rebuttal: Dear ACs and Reviewers, This global response contains our one-page supplementary PDF of the rebuttal. All additional figures are included in this file. Please find it in the attachment. Best regards, Authors. Pdf: /pdf/9f8df648769c053a74671597593fa33f1893e658.pdf
NeurIPS_2023_submissions_huggingface
2023
Non-Rigid Shape Registration via Deep Functional Maps Prior
Accept (poster)
Summary: The method proposes an unsupervised pipeline to solve 3D shape-to-shape registration. First, the two shapes are aligned by a pre-trained orientation regressor. Then, a soft correspondence is obtained by a point feature extractor, optimized using a deep functional maps scheme and several unsupervised regularizations (e.g., bijectivity, orthogonality). Finally, the source shape is iteratively deformed using a deformation graph (using Chamfer distance, the learned correspondence, and as-rigid-as-possible regularization), and the correspondence is updated every 100 iterations. The method is tested on humanoid datasets (FAUST_r, SCAPE_r, SHREC19_r, SHREC07-H, DT4D-H) and extensively compared against many other approaches, both supervised and unsupervised, obtaining interesting results. == FINAL RATING == In the discussion phase, the authors provided much further evidence and clarified my concerns. I notice, however, that while a majority is leaning toward acceptance, there is no consensus among the reviewers. Going through the reviews that assigned negative scores, I do not find significant ground to change my rating. In particular, Reviewer xFDL mainly criticizes the novelty of the work but does not point to other works to support this claim. While I can understand that the work does not seem particularly novel in its components (the general principle of registration + correspondence in a feature embedding can be found in previous works, e.g., closer to SmoothShells and DeepShells than to DPC, although both of those are designed for meshes), I do not find this in itself a proper ground for rejection (as also reported in the Reviewing Guidelines, combinations of existing techniques are valuable). The experimental evaluation is appreciable, the performance convincing, and I do not see this as incremental w.r.t. any previous work. I find this an interesting contribution to a research field that counts only a limited amount of work.
On the other hand, I see the detailed review and discussion of Reviewer iB8H, and I think it contains many valuable observations that could improve the paper. However, the reported criticisms can be summarized as: A) novelty w.r.t. DPC, which in the last comment looks solved, or at least significantly toned down; B) the general motivation of the work; while I see that the underlying principle and the positioning of the work in the literature can be improved, overall I do not see other works that perform similarly, and I struggle to see this as a follow-up of a specific methodology or an incremental contribution to something in particular. Also, the obtained results already seem to be a reasonable justification for the proposed approach (since I think we all agree that the paper's effort is beyond engineering work, and hence it communicates a promising research direction); C) other details (e.g., clarifying the role of hyperparameters, missing citations, rephrasing), which I think make sense, but they can be easily addressed in the camera-ready, and I do not consider them sufficient ground for rejection. For these reasons, I lean toward acceptance. I suggest the authors incorporate the suggestions (especially the suggested experiments and the paper's positioning in the literature), and I wish them the best of luck with their work. Strengths: 1) Not many methods are available to solve shape-to-shape correspondence in an unsupervised way; the proposed approach smartly combines existing techniques and well-established methods to obtain good results. 2) The proposed approach outperforms existing direct competitors, and even some supervised approaches. I am sure it would be of interest to the shape-matching community. 3) The paper is well presented, clear, and direct; I enjoyed the reading, and I had no problem understanding the main components. I am sure the method can be re-implemented with limited effort. Weaknesses: 1) The general applicative context of the method is unclear.
From the introduction and the pipeline figure, I had the impression that the method is designed to obtain correspondences between point clouds. However, the method is about registering a mesh to a point cloud, which can be extended to the point-cloud-to-point-cloud case using the source mesh as a bridge. This, however, is not properly tested: while other approaches also show noisy point clouds with different levels of noise (e.g., [30]), here only point clouds sampled from meshes are reported, with a Gaussian noise that is detailed in neither the main manuscript nor the supplementary material. Given the nature of the used feature extractor (which I expect to be quite sensitive to real noise and clutter), I suggest including more details on the considered Gaussian noise, and testing on real raw point clouds (which could also reveal another failure case, but would point the reader to interesting future directions). 2) While I appreciate the simplicity of the proposed approach and the combination, I struggle to see a general message or insight. Also, I do not see the method as directly applicable, since it is trained only on a few shapes; we do not know how it would scale on larger datasets (e.g., AMASS), and I guess it would fail to solve registrations in real contexts (e.g., noisy scans), since the noise would disrupt the learning of the underlying surface. It is also tested only on a domain (humans) with a significant amount of labelled data. So, without these elements, I wonder if the paper could be of significant impact, or if it is just a carefully designed pipeline that will be just another matching method without much influence in the field.
I suggest discussing the main message, analysing the scalability of the approach, and testing on domains in which labelled data are much more complicated to obtain (e.g., chairs, which would also strengthen the claim about topological noise in the supplementary). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) In Table 1, some results are not reported. Why? 2) The orientation regressor module resembles the input transform module of PointNet. Wouldn't rotation augmentation for the feature extractor work (and remove the need for a further module)? Another possible alternative is given by the method proposed in [A]. 3) Given that overall the pipeline assumes some degree of bijectivity, how would it perform in the presence of a significantly different number of vertices between the two shapes (e.g., by a factor of 100x)? Even considering two complete shapes that have a bijectivity on the surface, the losses are defined vertex-wise and might lead to a degradation of the regularization impact. Minor: 1) I see that reference [30] is incorrect in the bibliography, and [31] has names compressed. I suggest double-checking the references. 2) Relevant references not discussed: [B], [C], [D]. [A]: Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes, Zhou et al., 3DV 2022 \ [B]: NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go, Eisenberger et al., ECCV 2021 \ [C]: NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching, Attaiki et al., NeurIPS 2022 \ [D]: 3D-CODED: 3D Correspondences by Deep Deformation, Groueix et al., ECCV 2018 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and the recognition of our contributions. Below we address the comments: **Applicative context of the proposed method:** We follow the same scheme of adding noise perturbation as DiffFmaps [30]. We highlight our new results reported in the Rebuttal Mat., which demonstrate that our method can be *directly* applied to matching large-scale real scans, as well as be extended to perform matching and registration on partial, even noisy, point clouds. **Scalability in terms of the training set:** First of all, we consider the ability of our method to learn efficiently and effectively from a small-scale training set to be an advantage. That being said, we have followed the setting of SSMSM [8] to train our feature extractor on the SURREAL 5k dataset [f] and to test it on the FAUST_r, SCAPE_r, and SHREC19_r datasets. We select one training shape from SURREAL 5k as the template in registration. As shown in Table 2 of the rebuttal material, our method outperforms the competing non-rigid point cloud matching methods by a noticeable margin. We also refer the readers to Table 3 in [8] for a complete table including methods utilizing mesh input, and we highlight that our method is even comparable with the latter. In particular, ours vs. the SOTA of mesh-based techniques over the three test datasets: 3.2 vs. 2.3, **2.5** vs. 3.3, **4.6** vs. 4.7. That is, we achieve the best *overall* result on two of the three test datasets. **Extension to more challenging datasets, such as chairs:** We highlight that our unsupervised matching network is essentially trained with the prior that the underlying maps among training shapes are isometric and bijective. In particular, the former plays a pivotal role in the development of DFM frameworks. In the case of chairs, such a prior can be violated significantly (e.g., chairs can be with or without arms, backs, etc.), making direct extension challenging.
We believe that exploring effective matching priors for such data is an interesting future direction to investigate. **Missing scores in Table 1:** There are three axiomatic (optimization-based) methods in Table 1 -- Smooth Shells [12], NDP [27], and AMM [45]. Obviously, they do not have generalization results. Regarding the remaining missing scores, we apologize for not being able to perform all the tests by the deadline. On the other hand, we would like to point out that the corresponding methods do not perform strongly in the completed, typically simpler, tests, so this has little impact on the overall comparison and analysis. **Alternatives to the orientation regressor:** We emphasize that our current pipeline can in general take arbitrarily rotated point clouds as input (see Fig. 3 in the Rebuttal Mat.); performing data augmentation in SO(3) can be quite heavy and might not be optimal (see Tables 1 and 2 in [g] for reference). **Dependence on bijectivity:** First of all, one simple solution is to down-sample the target point clouds. Note that our pipeline is primarily designed for matching organic shapes like humans and animals (see the latter in the rebuttal material); down-sampling the underlying smooth surfaces in general would not lose many high-frequency signals. Once the template mesh is deformed to fit the down-sampled point clouds, one can easily infer the maps for the full-resolution point cloud, since all the shapes are explicitly, non-rigidly aligned. In practice, we also observe the robustness of our pipeline with respect to the number of vertices. For instance, the number of vertices of the shapes in SHREC07 ranges from 2,000 to 16,000, while the template shape has around 5,000 points. Our method achieves decent registration results (see Fig. 1) and outperforms the baselines by a large margin (see Table 2) under such perturbation, beyond the heterogeneity.
**Reference issues:** We thank you for pointing out the relevant prior works and would be happy to include and discuss them in the future revision. Of course, we will fix the typos in the current references as well. [f] Learning from synthetic humans, G. Varol, et al., CVPR 2017. [g] Vector Neurons: A General Framework for SO(3)-Equivariant Networks, C. Deng, et al., ICCV 2021. --- Rebuttal Comment 1.1: Title: Post-Rebuttal Comment: I thank the authors for their reply to my concerns. I appreciate the number of experiments and different settings provided in the rebuttal. I would just say that the dependence on bijectivity can probably be elaborated on in the limitation section. This is because even if it is true that "down-sampling the underlying smooth surfaces, in general, would not lose many high-frequency signals", ideally some applications would require the highest possible precision. Matching can have multiple "scales" (i.e., we seek global but also local coherence), and at higher frequencies the details are (often) more prone to vary across different instances of the same class (and bijectivity does not hold as well). Of course, this is not a request to solve this as well, and I will not consider it a weakness, but rather a discussion point to enhance the conclusion/limitation discussion. I do not have other questions, and I maintain my positive opinion. Looking forward to hearing the other reviewers' feedback. --- Reply to Comment 1.1.1: Comment: We thank you for the reply and the positive opinion. Regarding the problem of bijectivity dependency, we think that the challenging cases you mention are beyond the scope of the current submission and can indeed be an interesting future direction (e.g., matching high-resolution human faces with the proposed pipeline). We will add the respective discussion in the future revision.
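The map-transfer step described in the "Dependence on bijectivity" reply (inferring correspondences for the full-resolution point cloud after registering against a down-sampled version) amounts to a closest-point lookup against the deformed template. A minimal sketch of this idea, with all names hypothetical and not taken from the paper's code:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_correspondence(deformed_template, full_target):
    """After registration, assign each full-resolution target point to its
    nearest deformed-template vertex (a simple closest-point transfer)."""
    _, idx = cKDTree(deformed_template).query(full_target)
    return idx  # idx[i]: template vertex matched to target point i

# Toy example: a dense target sampled near a sparse deformed template.
rng = np.random.default_rng(0)
template = rng.standard_normal((500, 3))
src = rng.integers(0, 500, size=5000)
target = template[src] + 1e-3 * rng.standard_normal((5000, 3))
idx = transfer_correspondence(template, target)
```

As the reviewer's post-rebuttal comment notes, this closest-point transfer recovers only what the down-sampled resolution preserves; fine-scale detail beyond that resolution is not restored.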
Summary: In this paper, an unsupervised non-rigid shape registration method is proposed. The proposed method combines intrinsic spectral mapping (i.e., based on the deep functional map framework) with extrinsic deformable shape registration (i.e., a deformation graph) to enable unsupervised 3D deformable shape matching. On many challenging benchmark datasets, the proposed method demonstrates competitive matching performance, better cross-dataset generalisation ability, and robustness against noise and rotation of the input shapes. Strengths: 1. The paper is well-written and easy to follow. The main contribution and methodology are well illustrated. 2. The paper integrates shape matching and shape registration into the same framework. The shape matching part is based on the deep functional map framework to obtain point-wise correspondences. The shape registration part is based on a deformation graph that non-rigidly aligns the two shapes to refine the final correspondences. 3. In order to enable matching shapes with different orientations, the paper proposes an orientation regressor to align shapes into a canonical frame. Weaknesses: 1. The novelty of the method is limited. The proposed method consists of three components (rotation regressor, feature extractor, shape registration), and each component is derived from prior works without large modifications. 2. The interconnection of the components in the method is missing. The first stage trains a rotation regressor to align shapes into the canonical frame. The second stage trains a feature extractor to obtain correspondences. The third stage optimises the deformation graph to align the two shapes. The stages are somewhat separated during training, while it would be desirable to see a connection between shape matching and shape registration as in Deep Shells or NeuroMorph. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1.
Since the proposed method is based on test-time optimisation, what is the convergence speed of the proposed method? 2. Since the deep functional map can be fully intrinsic (i.e., with intrinsic input features), what if we use it to obtain the initial correspondences and rigidly align the shapes into the canonical frame without using the rotation regressor? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: 1. The proposed method is tailored to complete shape matching, so it cannot achieve desirable matching results for partial shapes. 2. The proposed method is based on iterative optimisation to align two shapes, so the runtime is slower than that of other learning-based methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and the recognition of our contributions. Below we address the comments: **Modules are separated in the pipeline and lack connection:** Thank you for the constructive comment. We agree that integrating shape matching and shape registration in a more associated manner can be desirable. However, we would like to highlight that Deep Shells and NeuroMorph fully leverage the structural information within the meshes. More specifically, Deep Shells extensively computes the first 500 Laplacian eigenpairs, and NeuroMorph uses mesh connectivity in graph neural network training and dense geodesic distance matrices on both input shapes for the training loss. Such intrinsic geometric information is critical for the respective non-rigid shape registration. In contrast, our method expects nothing but raw point clouds during inference (the template mesh is fixed), from which extracting intrinsic information can be challenging (see, e.g., the shapes in Fig. 3 of the supplemental material). To overcome this difficulty, we propose to learn a teacher network (DiffusionNet) on a small number of training meshes (80 in all of our experiments), and then train a student network (DGCNN) that consumes points but infers intrinsic features by mimicking the teacher network. Finally, the student network is frozen and used to estimate correspondences dynamically during the final registration part. If we were to integrate shape matching and registration into one associated pipeline, it would make more sense to make the teacher network, the student network, and the registration component all trainable. However, this would violate our main goal -- performing shape registration on raw point clouds. As a proof of concept, we have tried to unfreeze and fine-tune the point feature extractor during the registration process on the SCAPE_r dataset, in the following two ways: 1) Updating point-wise correspondences every 100 iterations as in Alg.
1, leading to a slight performance drop (from 2.6 to 2.8); 2) Updating per iteration, leading to a failure of convergence. **Convergence speed:** We have reported the average number of convergence steps as well as a full running-time decomposition in our Supp. Mat. (see Table 2 and Fig. 6 therein). In particular, our method converges within 1274 iterations (1130+144) on average, tested on the SCAPE test set. **Using intrinsic methods to obtain rigid alignment:** We emphasize again that our main target is to perform shape matching/registration *directly* on raw point clouds. In particular, our pipeline, once trained, can deform a given template mesh to target point clouds without any pre-processing of the latter. Note that pre-processing may be slow and parameter-sensitive, and is beyond the scope of this paper. **Applicability to partial point clouds:** In Fig. 1 and Fig. 2 of the Rebuttal Mat., we demonstrate some preliminary results on extending our pipeline to matching partial, even noisy, point clouds. Essentially, we train a DFM tailored for partial-view point clouds generated from the SCAPE_r dataset, and replace the two-way Chamfer distance with a one-way one. Note that in this experiment we assume the partial point clouds are rigidly aligned. Nevertheless, we believe the results have sufficiently shown the potential of our general scheme. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. I will keep my initial rating.
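The one-way versus two-way Chamfer distance mentioned in the partial-cloud extension can be sketched as follows (an illustrative numpy version, not the authors' implementation; with a partial scan, keeping only the scan-to-template term avoids penalizing template regions that the scan simply does not cover):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(scan, template, one_way=False):
    """Chamfer distance between two point sets.

    With one_way=True, only the scan -> template term is kept, so regions of
    the (complete) template that are missing from a partial scan incur no cost.
    """
    d_st = cKDTree(template).query(scan)[0]   # each scan point to nearest template point
    if one_way:
        return float(np.mean(d_st ** 2))
    d_ts = cKDTree(scan).query(template)[0]   # each template point to nearest scan point
    return float(np.mean(d_st ** 2) + np.mean(d_ts ** 2))

# Toy check: a partial view taken directly from the full cloud has zero
# one-way cost, while the two-way distance penalizes the missing regions.
rng = np.random.default_rng(0)
full = rng.standard_normal((1000, 3))
partial = full[:300]
```

This is exactly why the two-way term is problematic for partial scans: its template-to-scan half pulls unobserved template regions toward the observed points.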
Summary: The paper describes a method for corresponding a 3D triangle mesh to a point cloud of a similar (possibly articulated) shape. The two-stage process first corresponds the two shapes in a high-dimensional feature space, then corresponds them again using geometric features while deforming the source closer to the target. Strengths: The approach makes sense, and the results seem strong compared to previous work, both qualitatively and quantitatively. Weaknesses: In general, the presentation could be improved. There should be a figure analogous to Figure 1, but showing qualitative correspondences for cases where the train/test datasets do match. Only showing severe failures of competing methods on very disparate shapes does not convey the full picture. The supplementary video showing the progress of the correspondence algorithm is very helpful. It would be even more informative if the mesh were texture-mapped with the checkerboard as in Figure 1. Equations, figures, and tables need more thorough descriptions. Equation 1: Even though the text says that one can optimize the feature function F when it introduces equation 1, it is unclear how the function F is optimized. If not including an equation involving F, you could at least point to a specific equation number in [11]. Equation 2: Parameter alpha should be defined after this equation, rather than after equation 4. Equation 3: What does the cross symbol represent? Equation 4: Parameter n2 is not used in the expression. Should the denominator summation go up to n2? Tables 1 & 2: What are the two sections separated by a horizontal line (each has a bold set of numbers)? The Train/Test column contains no values. Table 2: What is "Ours-CRYPTO"? This acronym is not discussed/referenced in the text. Table 3: What is "Ideal PC"? The algorithm should also initialize "Flag" (presumably to Stage-I). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could this approach be implemented without learned functions?
The intro mentions that spectral mapping can be geometric (rather than learned), and the second stage is akin to non-rigid ICP. Would it make sense to run Stage-I again after Stage-II, and repeat both stages a few times? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The method requires a triangle mesh for the source shape, which is not discussed in the limitations section. The exposition would be stronger if the authors took the source shape as a point cloud and ran some automatic meshing on it for their experiments. Another limitation that the authors mention is that the method only works on "full shapes". I assume this means that the whole source and target surfaces are adequately sampled. It would help if the authors suggested some ideas on how one would go about lifting this requirement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and the recognition of our contributions. Below we address the comments: **Improve paper presentation:** Thank you for the suggestions on improving the presentation of the paper; we would be happy to incorporate all of them in the future revision. Regarding Fig. 1, our intention is to illustrate the heterogeneity present in the SHREC07 dataset, as well as our stronger generalization capacity compared with the competing baselines. More thorough evaluation results are reported in Table 2, which also agree with the qualitative ones in Fig. 1. Below are responses to the minor comments: 1. We would be happy to clarify Eqn. (1)-(4) in the future revision. In the meantime, we would like to refer the readers to Sec. 1 of the Supp. Mat. for a more self-contained description; 2. The cross symbol (\dagger) in Eqn. (3) indicates the pseudo-inverse of a matrix; 3. The PointInfoNCE loss is introduced to enforce the output features of DGCNN (for point clouds) to be *point-wise* close to those of DiffusionNet (for meshes). The features are computed on the same shape (without and with mesh connectivity, respectively). A similar loss regarding S_2 is used as well; 4. In Tables 1 and 2, the methods above the horizontal lines are *designed to* take meshes as input. The remaining methods take point clouds directly; 5. Ours-CRYPTO in Table 2 indicates that a shape belonging to the category 'Crypto' of the DT4D-H dataset is used as the template; 6. Ideal PC in Table 3 means the clean, aligned point clouds. **Implementing our approach with non-learned functions:** We highlight that our pipeline requires no pre-processing or pre-alignment of the input point clouds. It is possible to implement it with an axiomatic spectral method. However, this would require building the graph Laplacian on the fly, since the template is dynamically deformed during registration. Such an approach can be less efficient, and may suffer from scalability issues.
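The pseudo-inverse mentioned in point 2 above is the standard least-squares device in functional-map pipelines: it projects a point-to-point map onto a small spectral basis. A generic numpy sketch with random stand-in bases (not necessarily the paper's exact Eqn. (3)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 6
phi_src = rng.normal(size=(n, k))   # stand-in basis on the source shape
phi_tgt = rng.normal(size=(n, k))   # stand-in basis on the target shape
Pi = np.eye(n)[rng.permutation(n)]  # a point-to-point map as a permutation matrix

# The dagger (pseudo-inverse) performs the least-squares projection onto
# the target basis, yielding a compact k x k functional map.
C = np.linalg.pinv(phi_tgt) @ Pi @ phi_src

# C transports basis coefficients: if f = phi_src @ a is a function on the
# source, then C @ a gives the least-squares coefficients of the mapped
# function Pi @ f in the target basis.
a = rng.normal(size=(k,))
f = phi_src @ a
print(np.allclose(C @ a, np.linalg.pinv(phi_tgt) @ (Pi @ f)))  # True
```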
**Repeating Stage-I and -II multiple times:** Thank you for the suggestion. We have performed the whole registration procedure twice on the SCAPE dataset. The registration error decreased from 2.57 to 2.49, i.e., an improvement of 3.1%. It is worth noting, though, that such an approach introduces over 60% computational overhead on average (convergence steps: 1274 vs. 1931). **Performing meshing on the source point cloud:** Thank you for the suggestion for improving the utility of our approach, which we believe is feasible. We think exploring it in a principled way can help lift the need for a template mesh in the future. **Applicability to partial point clouds:** In Fig. 1 and Fig. 2 of the Rebuttal Mat., we demonstrate some preliminary results on extending our pipeline to matching partial, and even noisy, point clouds. Essentially, we train a DFM tailored for partial-view point clouds generated on the SCAPE_r dataset, and replace the two-way Chamfer distance with a one-way one. Note that in this experiment we assume the partial point clouds are rigidly aligned. Nevertheless, we believe the results sufficiently show the potential of our general scheme. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: My rating remains. --- Reply to Comment 1.1.1: Comment: Thank you for the reply and for the positive feedback on our work.
Summary: This paper proposes an unsupervised framework for non-rigid shape registration. The proposed method deforms a source mesh towards the target point cloud, guided by correspondences induced by high-dimensional embeddings learned from deep functional maps. Empirical results show that the proposed method achieves state-of-the-art results on several benchmarks for non-rigid point cloud matching. Strengths: 1. The overall writing is fluent; 2. The proposed method outperforms state-of-the-art approaches on several benchmarks; Weaknesses: 1. The organization can be further improved (the organization of Methodology does not follow Fig. 1), which makes the paper sometimes hard to follow (probably due to the complexity introduced by assembling 3 stages for tackling the problem); 2. The novelty of this paper would be a major concern. It seems the proposed algorithm simply assembles three stages, each with an existing method. The orientation regressor is from [9]. The feature extractor is the modified DGCNN proposed in [22]. The non-rigid registration mainly follows [18]. 3. The submission and main paper have different titles. Please fix it. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Why is the proposed method unsupervised? It seems the orientation regressor requires the ground-truth pose to the canonical space. And the PointInfoNCE loss (Eq. 4) also requires correspondence labels? Or do the authors call the generalization from one dataset to other datasets unsupervised learning? 2. How good is the orientation regressor? Can it deal with large translations, e.g., the translation introduced by a walking human? From my experience, this kind of pose regressor can only overfit to a specific model and is hard to generalize. Also, it is usually not very accurate (correct me if I was wrong); 3. Why are all the models called "pre-trained"? Should it be generalizing a model trained on the same task to new data?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations have been discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for all the insightful comments. First of all, we would be happy to improve the presentation as suggested, as well as to fix the inconsistent titles in the future revision. Below we address the comments: **Novelty:** We acknowledge that many components in our pipeline are inspired by existing works. However, we would like to emphasize that the way they are integrated is novel, which leads to a simple yet effective solution to the challenging problem of matching **unstructured** point clouds undergoing significant deformations. Beyond that, to the best of our knowledge, our scheme for training a point-based feature extractor that respects intrinsic geometry, without ground-truth correspondence labels, is novel. We kindly refer the readers to the Author Rebuttal for a more detailed response. **Why is the method unsupervised?** A more precise description of our supervision could be **correspondence-label-free**. Given the opportunity, we would be happy to clarify this in the future revision. We acknowledge that the orientation regressor indeed relies on the ground-truth pose, which is obtained by simply fixing certain parameters in the SMPL generative model. On the other hand, we emphasize the fact that our method requires **no** ground-truth correspondences, either sparse or dense, throughout the pipeline. Correspondence labels are obviously much more difficult to obtain in practice. **PointInfoNCE requires labels:** The PointInfoNCE loss is introduced to enforce the output features of DGCNN (for point clouds) to be *point-wise* close to those of DiffusionNet (for meshes). The features are computed on the same shape (without and with mesh connectivity, respectively). Therefore, we simply use the identity map between the same set of vertices (regarded as mesh vertices and point clouds, respectively) on each shape to formulate Eqn. (4), which does *not* require any non-trivial correspondence label.
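The identity-positive contrastive setup described above can be illustrated with a small numpy sketch of an InfoNCE-style loss (hypothetical feature matrices and temperature; the real loss operates on DGCNN/DiffusionNet outputs). The positive pair for vertex i is simply feature i of the other branch, so no correspondence labels enter the loss:

```python
import numpy as np

def info_nce_identity(student, teacher, tau=0.07):
    """InfoNCE where the positive for student feature i is teacher feature i,
    i.e. the identity map over a shape's own vertices."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / tau                       # (n, n) cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()            # cross-entropy vs. identity targets

rng = np.random.default_rng(0)
teacher_feats = rng.normal(size=(32, 16))        # e.g. mesh-based (teacher) features
student_feats = teacher_feats + 0.01 * rng.normal(size=(32, 16))  # point-based student
print(info_nce_identity(student_feats, teacher_feats))  # small when features agree point-wise
```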
**Performance of the orientation regressor and its robustness regarding translation:** Since we assume the completeness of input point clouds in the submission, deviations induced by large translations can be resolved by moving the mass centers of all point clouds to a fixed point, which is also a standard practice in shape registration. Regarding the quality of the orientation regressor, as shown in Fig. 3 of the Rebuttal Mat., though it is trained on a set of generated shapes sharing the same mesh connectivity, it can handle heterogeneous shapes of varying numbers of vertices (2,000~20,000) well and delivers reasonable rigid alignments. Moreover, we empirically observe that our registration component enjoys a certain robustness to imperfect initial orientation. In particular, we *turn off* the orientation regressor and *directly* perform registration on the rotated point clouds (see experiments reported in Table 3 of the main submission), obtaining a mean (std) geodesic error of **4.7 (1.03)**. These new results show that our method still produces better and more stable results than the baselines. We attribute such robustness to the fact that non-rigid shape registration also involves deformations with respect to extrinsic orientation deviations. **Why are all models called "pre-trained"?** We call the orientation regressor and the point-based DGCNN *pre-trained* to emphasize that our key module, the final registration component, is optimization-based, leading to a geometrically meaningful procedure (see, for example, the video clip in the Supp. Mat.). The learning-based models are pre-trained and frozen during registration. --- Rebuttal Comment 1.1: Title: Reviewer Feedback Comment: Thanks for the feedback! That helps me understand your work better. As pointed out by all the reviewers, the major flaw of this paper is the limited novelty.
I think this is the most important part in evaluating a paper, and based on that, I will keep my initial rating, although it is a little bit negative. Moreover, I hope the authors will carefully revise their paper afterwards, as some places in the current version are unclear and misleading. Cheers! --- Reply to Comment 1.1.1: Comment: Thank you for the reply, and we are glad that our rebuttal helps you understand our framework better. We would be happy to revise our paper for a better and clearer presentation. **Novelty discussion:** We have devoted considerable length to the discussion of our main contributions and novelty in the Author Rebuttal (see the top of the page) and would be more than happy to address your *further comments/questions* regarding it. Our key novelties are two-fold, as re-iterated below for ease of discussion: 1. To the best of our knowledge, we are the first to address the problem of matching **unstructured** point clouds undergoing significant deformation via a hybrid approach. The relevant prior works depend heavily on either mesh structure (NeuroMorph, Deep Shells) or correspondence supervision (TransMatch). In contrast, our method can deform a template to a raw point cloud in a *direct and unsupervised* manner. 2. We propose a novel self-supervised learning scheme to infer intrinsic-aware features from unstructured point clouds effectively and efficiently. Compared with the relevant and concurrent work [8], our design is more general (not tied to the DiffusionNet architecture, not based on graph Laplacian construction) and flexible (it can be extended to more powerful and/or more tailored backbones).
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time, effort, and insightful comments on our manuscript. We are glad that all the reviewers recognize our promising results on various benchmarks. We are encouraged by the recognition that our formulation is reasonable and that our writing is clear and easy to follow (Reviewers xFDL, CNuX, GUi9). We appreciate the recognition of our method's motivation and design by Reviewers CNuX and GUi9. We extend our gratitude to Reviewer GUi9 for acknowledging the difficulty of our task of interest and recognizing the contributions of our work to the shape matching community. Before we clarify our main contributions and address some common concerns, we would like to kindly refer all the reviewers to our **Supp. Mat.**, in which we provide a substantial amount of extra experimental results and analysis, as well as a video clip that intuitively demonstrates our pipeline. We also report several experimental results per the reviewers' requests in the attached document, which we refer to as the **Rebuttal Mat.** in the following. ### **Problem statement and key challenges:** In this paper, we propose a hybrid pipeline for computing dense correspondences between a pair of unstructured point clouds, which can undergo significant non-rigid deformations. In particular, we encounter the following challenges: 1. It is difficult to directly infer intrinsic structure (i.e., geometric structure regarding the underlying surface) from raw point clouds. A common practice is to perform meshing or to build a certain graph Laplacian on top of them [8], which can be inefficient and unscalable; 2. Non-rigid shape registration techniques [27, 45] can handle small to moderate non-rigid deformations via proximities in the ambient space R^3. However, in the presence of large deformations, they can fail significantly, since the intermediate maps induced by Euclidean proximities do not necessarily respect the intrinsic, non-rigid deformations; 3.
Except for the purely intrinsic methods, most of the existing works either require the input point clouds to be extrinsically aligned [8, 22] or depend on extensive correspondence labels during training [30]. ### **Novelty:** To address the above challenges, we propose a systematic pipeline, which combines learning-based shape matching and optimization-based shape registration techniques. We summarize our key novelties as follows: 1. To the best of our knowledge, we are the first to address the problem of **unstructured** point cloud matching via a hybrid approach. Relevant works such as Deep Shells and NeuroMorph strictly require meshes as input. We believe that lifting this assumption is novel and non-trivial. On the other hand, mesh-based methods can be sensitive to missing parts and topological noise (see Fig. 2, 3 & Table 1 in the Supp. Mat.); 2. Our solution to the first challenge above is novel. We consider the most relevant work to be SSMSM [8], which also follows a self-supervised approach. However, [8] essentially depends on the fact that DiffusionNet can be trained jointly over meshes and their vertices. In fact, [8] explicitly constructs a graph Laplacian during training and inference, which is less efficient. For instance, it takes more than one minute to pre-process an input of 40,000 points with a V100 GPU. In contrast, our self-supervised scheme requires no pre-processing and can in principle take any pair of backbones tailored for meshes and point clouds, respectively. The natural and simple teacher-student learning scheme effectively learns a point-based feature extractor by mimicking the mesh-based counterpart. Unlike [8], which is tied to DiffusionNet, our scheme is more general and flexible, and can be extended in the future (e.g., replacing DGCNN with a rotation-invariant point feature extractor).
### **Potential contributions to the community:** We sincerely thank all the reviewers for their constructive comments, which help us re-think and position our approach more clearly. In particular, we would like to highlight the following features: 1. The idea of combining a DFM prior and optimization-based shape registration techniques is simple and of great potential. Our approach is simply a first step along this line of exploration, which can undoubtedly benefit from the rapid advances in both directions; 2. Our approach is **scalable** w.r.t. input size. Thanks to the fact that we non-rigidly align shapes in R^3, we can in theory freely down- and up-sample both the template mesh and the target point clouds. Note this is non-trivial for methods based on meshes or graph Laplacians, as deducing dense maps from landmark correspondences over graph structures is a difficult task on its own [a]. In Fig. 4 of our Rebuttal Mat., we show the matching results on the real scans from the FAUST challenge [b], each of which consists of around 160,000 vertices. In contrast, [8] can handle at most 50,000 vertices without running out of 32G memory on a V100 GPU. We visualize its matching results on the subsampled (to 40,000 vertices) point clouds for comparison; 3. As a natural follow-up, we have in fact managed to get some preliminary results on extending our pipeline to matching **partial** point clouds (see Fig. 1 in the Rebuttal Mat.). Essentially, we train a DFM tailored for partial-view point clouds generated on the SCAPE_r dataset, and replace the two-way Chamfer distance with a one-way one. We also test our pipeline with noisy partial scans from [c] (see Fig. 2 in the Rebuttal Mat.). Though it currently remains a challenge to deal with partial point clouds of arbitrary orientation and position, we believe the results sufficiently show the potential of our general scheme. [a]: Weighted averages on surfaces, D. Panozzo, I. Baran, O. Diamanti, O.
Sorkine-Hornung, SIGGRAPH 2013. [b]: https://faust-leaderboard.is.tuebingen.mpg.de [c]: http://domedb.perception.cs.cmu.edu
Dataset source: NeurIPS 2023 submissions (Hugging Face). Conference year: 2023.
Summary: This paper studies the problem of non-rigid shape matching. The problem is first decomposed into learning the (rigid) orientation of shapes and then learning shape matching on aligned shapes. For the latter, it proposes a sequential pipeline, consisting of various modules, that is optimized in a two-stage manner. The main idea is to facilitate learning by similarity in both the ambient space (R^3) and a high-dimensional learned feature space. To this end, it trains a DGCNN feature extractor based on a combination of several loss functions (DFM prior loss, ARAP loss, Chamfer/cosine loss). The method is validated on isometric as well as non-isometric benchmarks. Post Rebuttal: The rebuttal provided additional experiments to support some of its claims, as mentioned below. I lean towards rejection since the work is a sequential concatenation of existing individual modules (also noted by reviewers CNuX and xFDL) without any conceptual justification of why those modules need to be combined, e.g., why do we need to combine a DFM and an ARAP loss. The rebuttal also contains several unsubstantiated statements detailed below. Combining two lines of research is not a technical contribution, or a contribution to the community, if no conceptual justification is given. The majority of the introduction section needs to be rewritten and contextualised *correctly* wrt prior work. Besides, the submission had several hyper-parameters whose values were missing, and there is no mention of model sensitivity towards these hyper-parameters even though the approach is unsupervised. Strengths: - Experimental results on various near-isometric benchmarks look promising (Table 1), and the paper compares extensively with existing work. - The paper identifies the generalization problem of some embedding-based approaches [30, 8] to unseen shape data. Weaknesses: - Presentation: Section 1 contains several factually incorrect or unsupported claims.
Moreover, the motivation of the work is misplaced, since there is no validation of it later in the experimental section. Please support all the intuitive claims either with a prior reference or by explicitly mentioning results from this paper. a) Why do we need to decompose the shape matching problem into learning an alignment first and then matching aligned shapes? Is it because [8], [22] do so, or is there a scientific justification behind it, as shown empirically in rigid shape matching or non-rigid shape matching with DFM? E.g., it is shown to provide an extrinsic supervision that helps to disambiguate symmetry issues in DFM. Please motivate the problem/solution accordingly. b) Line 46 <prior work with resulting high dimensional embedding lack intuitive geometric meaning>: Since this is used as one of the 3 reasons behind this paper/formulation, please demonstrate this on an example where, in contrast, this paper obtains an intuitive embedding with geometric meaning. c) Line 54-57: Please provide a reference to support these claims or prove them later with visualizations and examples in the experimental section. d) [30] requires shapes to be pre-aligned at train or test time. This is not true. You can train and test [30] without such alignment, and it makes no such assumption on input requirements in the paper [30]. - Novelty: The main conceptual idea/key insight (Line 52) in this work is to enforce similarity in both the ambient space and the learned feature space. DPC [b] proposed the exact same idea with DGCNN and cosine/chamfer losses, in the simplest possible way, for non-rigid shape matching. The idea is already well known, and others have even built upon it for non-rigid shape matching, e.g., [a]. So claiming this as a key insight is not justified IMO. - Missing references and comparison with a very similar work [b]: The authors should also compare their work with DPC on their benchmarks. This will show what gains (if any) are brought by the DFM prior or ARAP loss. a.
Learning Canonical Embeddings for Unsupervised Shape Correspondence with Locally Linear Transformations b. DPC: Unsupervised deep point correspondence via cross and self construction, 3DV 2021. - Formulation without a principled approach: The network (consisting of DGCNN, DFM prior, ARAP loss, chamfer loss, pointinfo loss, etc.) combines modules from different frameworks without any principled reason, e.g., the DFM prior from the DFM literature, the DGCNN feature extractor, and chamfer & cosine losses from DPC, in a two-stage optimization procedure. Why do we need a DFM prior when we are deforming a source and a target shape with an ARAP loss? - Too many hyperparameters in an unsupervised approach: The paper should mention in the main body how many hyperparameters overall this approach has and how their values were chosen. I count more than 10. Moreover, how is it justified to choose different values for the same hyperparameter (weighting scalars) in an algorithm and call the resulting approach *unsupervised*? There is a two-order-of-magnitude difference in hyperparameter values ($\lambda_{cd}$ and $\lambda_{corr}$) between different runs of the algorithm (Stage 1 and Stage 2). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - For non-isometric matching, SMAL is considered the main benchmark, and there is already a large literature that benchmarks results on SMAL. Why ignore such a standard benchmark? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the constructive comments on our motivation and novelty. Below we address the comments. **Novelty, especially compared with DPC:** We argue that our approach is *fundamentally* different from DPC in the following respects: 1. Our feature extractor for point clouds is learned in an intrinsic-aware way, while such information is absent when learning the counterpart in DPC; 2. Our registration component explicitly deforms the template towards the target point cloud. In DPC, the cross-reconstruction is essentially a re-indexing of the target point cloud via soft maps induced by latent proximities. For instance, Eqn. (3) of DPC amounts to a point permutation when the weights deduced in Eqn. (2) approximate a delta distribution; 3. Finally, like the DFM-based approaches [8, 22], DPC infers correspondences via high-dimensional embeddings, while our approach does so in the ambient space, which is more intuitive and easier to analyze. Moreover, we compare our method with DPC on the near-isometric benchmarks. As shown in Tables 1 & 2 of the Rebuttal Mat., our method outperforms DPC by a *significant* margin when trained on both small-scale and large-scale datasets. We refer the readers to Table 3 of [8] for more detailed results. **Principle of the proposed formulation:** We refer the readers to the Author Rebuttal for the detailed motivation of our framework design. **Motivation of alignment and matching:** We emphasize that our approach performs extrinsic shape registration on *raw* point clouds, in a direct fashion without any explicit mesh/graph construction. It is then clear that our approach is sensitive to the position and orientation of the input. Many relevant works either implicitly (NDP [27], AMM [45]) or explicitly (SSMSM [8], NIE [22]) require rigidly aligned shapes as input/initialization. Another way out is to leverage dense correspondence labels (DiffFMaps [30]).
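The soft-map cross-reconstruction attributed to DPC in point 2 above can be sketched as follows (our paraphrase with toy data, not DPC's actual code): each source point is reconstructed as a similarity-weighted average of target points, and as the softmax sharpens toward a delta distribution the reconstruction reduces to a pure re-indexing (permutation) of the target:

```python
import numpy as np

def soft_cross_reconstruction(feat_src, feat_tgt, pts_tgt, tau=1.0):
    """Reconstruct each source point as a softmax-weighted average of target
    points, with weights given by feature similarity (cf. DPC's Eqns. (2)-(3))."""
    sim = feat_src @ feat_tgt.T                               # (n_src, n_tgt)
    w = np.exp((sim - sim.max(axis=1, keepdims=True)) / tau)  # stable softmax
    w /= w.sum(axis=1, keepdims=True)                         # soft correspondences
    return w @ pts_tgt

rng = np.random.default_rng(0)
feat = np.eye(5, 8)                 # toy per-point features (orthonormal rows)
pts_tgt = rng.normal(size=(5, 3))
perm = rng.permutation(5)

# As tau -> 0 the weights approach a delta distribution, so the
# "reconstruction" becomes an exact permutation of the target points.
recon = soft_cross_reconstruction(feat[perm], feat, pts_tgt, tau=1e-3)
print(np.allclose(recon, pts_tgt[perm]))  # True
```

With a large temperature the weights stay diffuse and the output is a blend of target points rather than a re-indexing, which is the distinction the rebuttal draws against explicit template deformation.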
We remark that the latter is not robust by construction w.r.t. SO(3) perturbations. As shown in Table 3 of our main submission, it is sensitive to rotations when trained on aligned point clouds without the effective data augmentation of the original implementation. In contrast, we leverage the generative model and propose a principled solution to this challenge, which lifts the need for rigid alignment at inference or dense correspondence labels during training. **Geometrically meaningful embeddings:** We clarify that our high-dimensional embeddings, similar to those of [8, 22], are obtained by an uninterpretable learned network, and thus also lack geometric meaning. In general, correspondences induced by such embeddings are difficult to evaluate and analyze without ground-truth maps. In contrast, our formulation leverages the learned embeddings to deform a template shape explicitly towards a given target point cloud, which provides a more geometrically intuitive mapping/registration procedure. We particularly refer the readers to the video in the Supp. Mat. As a result, one can perform both qualitative (by visually comparing the deformed template and the target) and quantitative (by Chamfer distance or RMSE) analysis of the output maps, even without ground-truth maps. **Claim made in Line 54-57:** The claim in question has been justified in Fig. 3 and Table 1 of the main submission. In particular, we compare our method with NDP [27] and AMM [45], which depend on proximities in the ambient space to iteratively estimate correspondences and can fail significantly in the presence of large deformations. In Fig. 3, we deform the FAUST template (the 3rd shape from the left) to the right-most shape. It is obvious that ambient proximities would lead to erroneous maps in the beginning, making it difficult to guide the right deformation (raising the arms significantly). The quantitative results in Table 1 also validate this observation.
Our method achieves at least a 78% matching error reduction compared to them on the two standard benchmarks. **Motivation for using a DFM prior when the ARAP loss is used:** The ARAP loss in general serves as a regularizer for shape registration, which prevents the deformed template from being overly distorted. Solely using the ARAP loss does not lead to successful registration, as it provides no cue for matching. We refer the readers to Table 4 of our main submission: comparing *w/o Stage-I* and *Full*, we can see that the DFM prior (i.e., Stage-I) contributes significantly. **Too many hyper-parameters:** The hyper-parameters used in the pre-trained models (orientation regressor and DFM) follow the respective prior works. As for those of the registration optimization, we search for the optimal hyper-parameters with *2 pairs of training shapes* of SCAPE w.r.t. the registration loss, which does not depend on any correspondence labels. We remark that our hyper-parameters are fixed across different template meshes, training sets, and test sets. **Experiments on SMAL:** As requested, we have performed experiments on the *remeshed* SMAL dataset [d]. We first randomly generate 5000 shapes with the SMAL model [e] to train the alignment module. Then we train a DFM with the remeshed SMAL dataset [d]. The template shape is 'dog_06' from the training set. The quantitative results are reported in Table 3 of the Rebuttal Mat. Remarkably, our method achieves more than a 40% performance improvement over the second-best baselines. [d] Complex functional maps: a conformal link between tangent bundles, N. Donati, E. Corman, S. Melzi, M. Ovsjanikov, CGF 2022. [e] https://smal.is.tue.mpg.de --- Rebuttal Comment 1.1: Title: follow up on rebuttal Comment: Thank you for your time and effort. The reviewer has gone through the supplement and main submission multiple times, as suggested in the rebuttal.
Comments below: \ - Algorithm 1 (pseudo-code) not the same in the main submission and supplement: There is an additional early-stopping count condition in Lines 14-17 of Algorithm 1 in the supplement; no such thing exists in Algorithm 1 of the main submission. I assume the results shown in the main submission are based on the algorithm from the supplement? If so, please explain what this early-stopping (count < 15) criterion is. Why do we need it, especially if the algorithm has another stopping criterion (max_iteration = 100) in the underlying optimization (Line 183 in the main submission)? Why would $E_i - E_{i-1} < \epsilon$ not suffice for termination of any algorithm? \ - Claims made in Line 54-57: The rebuttal repeatedly points to Table 1 and the texture-transfer Figure 1 to justify these intuitive claims. These tables and figures contain the end result of this approach (a concatenation of 5 full-fledged SIGGRAPH/TOG/CVPR papers (DGCNN, DiffusionNet, orientation regressor, modified DFM, ARAP) along with geodesics to tackle a single problem). The comparison with NDP/AMM to justify these claims is unfair, since this submission takes a SOTA non-rigid shape matching method (DiffusionNet + modified DFM) as a head start. A fair way to justify these intuitive claims of tackling large deformation with superiority over NDP/AMM or any other baseline would be to also initialize them with the same SOTA non-rigid shape matching (e.g., the w/o registration baseline in the ablation table).
\ - Too many hyperparameters and heuristics for an unsupervised approach: In addition to the count and iteration parameters outlined above, the rebuttal/submission also misses critical hyper-parameters and their values: -- The threshold parameter in correspondence filtering, its value, and how it is chosen. -- The rebuttal mentions $\lambda_{cd}, \lambda_{corr}, \lambda_{arap}$ were chosen based on 2 training shape pairs but does not mention how, even though their values are critical to understanding what contributes to overall performance in the shape registration pipeline (details below). Since these parameters are different for different stages, they should also be indexed accordingly to distinguish them better. \ - Stage 1 and Stage 2 in shape registration: Based on the $\lambda_{cd}, \lambda_{corr}, \lambda_{arap}$ values in the two different stages (and the two orders of magnitude of difference between them), Stage 1 does not rely on chamfer distance, whereas Stage 2 does not rely on correspondence filtering. Does it not imply the network absolutely needs geodesic-based filtering in Stage 1? And that chamfer distance based on feature similarity (& NN) is not effective/needed in Stage 1? This is speculative, since the submission/rebuttal does not show the relative strength of the different loss terms to gauge individual contributions within these individual stages. \ - Alignment and matching: Robustness is different from a requirement of pre-aligned shapes. The submission should replace [30] with [38] (the first work to show results with this requirement) or change the text accordingly. - Geometrically meaningful embeddings: Please clarify the same in the Introduction. --- Reply to Comment 1.1.1: Title: Responses to further comments Comment: Thank you for the detailed reply. Below we address your new comments: **Inconsistencies in algorithm descriptions in the main submission and Supp. Mat.:** Due to the lack of space, we defer some algorithmic details to the more complete version in the Supp. Mat.
The algorithms are essentially the same. First, the 'converged' condition in Lines 10-11 of Alg. 1 in the main submission is described in detail in Lines 12-16 of the version in the Supp. Mat., which is the only stopping criterion in our algorithm. Second, Line 183 in the main submission is not an early-stopping criterion; it says that we update the point-wise correspondences between the deforming template and the target with the learned embedding every 100 iterations during stage I. **Ablation study on utilizing the DFM output as initialization for NDP and AMM:** Following your suggestion, we have performed ablation studies comparing our pipeline with NDP and AMM based on the same initial correspondences. In particular, we train DFM on the training set of SCAPE_r and use the SCAPE template (see Fig. 3 in the main submission) in all experiments. In the following table, we report the average errors of the initial maps computed by DFM, and those of the output maps of Ours, NDP, and AMM, which are all based on the same initial maps. It is evident that, across three different test sets, our method consistently improves the initial maps (at least 37% error reduction), while NDP and AMM can even produce worse maps than the initial input. These results show the advantage of our proposed pipeline, especially the registration part.

| Test set | Ini. | Ours | NDP | AMM |
|----------|------|------|-----|-----|
| SCAPE_r | 5.5 | **2.6 (-52%)** | 5.4 (-2%) | 11.4 (+107%) |
| SHREC19_r | 8.1 | **5.1 (-37%)** | 11.4 (+40%) | 10.7 (+32%) |
| SHREC07-H | 11.5 | **5.9 (-48%)** | 8.9 (-22%) | 8.8 (-23%) |

**Discussion on hyper-parameters:** Thank you for the suggestions on clarifying the roles and choices of the hyper-parameters. We would be happy to include a detailed discussion of them in the future revision. 
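As a concrete reading of the stopping rule discussed above, the following sketch combines the energy-decrease test $E_i - E_{i-1} < \epsilon$ with a patience counter (the count < 15 condition from the Supp. Mat. version of Alg. 1). All names, defaults, and the return value here are our illustrative assumptions, not the submission's actual code:

```python
# Hypothetical sketch of a patience-based convergence test: a single small
# energy decrease may be a temporary plateau, so convergence is declared
# only after `patience` consecutive near-zero decreases.
def optimize(energy_step, eps=1e-6, patience=15, max_iters=1000):
    prev_e = float("inf")
    count = 0
    for i in range(max_iters):
        e = energy_step(i)              # one optimization step -> new energy
        if abs(prev_e - e) < eps:
            count += 1                  # plateau candidate
            if count >= patience:
                break                   # sustained plateau: converged
        else:
            count = 0                   # progress resumed: reset the counter
        prev_e = e
    return prev_e
```

Under this reading, $E_i - E_{i-1} < \epsilon$ alone could terminate prematurely on a temporary plateau, which is presumably what the patience counter guards against.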
**How exactly the hyper-parameters (e.g., the threshold in correspondence filtering, $\lambda_{cd}$, $\lambda_{corr}$, $\lambda_{arap}$) are chosen:** As we mentioned in the rebuttal, performing shape registration allows us to evaluate the resulting registrations/maps without ground-truth correspondences. In particular, we perform a grid search over the weights used in the final optimization to seek the combination that leads to the best registration results (quantitatively in terms of Chamfer distance and qualitatively by visual inspection). We choose the threshold in correspondence filtering in a similar way, and it is set to 0.01 across all experiments. We emphasize again that (a) the above hyper-parameter selection is quite loose and may be *suboptimal*, since only two pairs of training shapes (in fact, the template source shape is the same) are involved; and (b) we use the same hyper-parameters for *all* of our experiments. **The roles of $\lambda_{cd}$, $\lambda_{corr}$ at different stages:** Essentially, the registration procedures in stage I and stage II are guided by proximities in the high-dimensional embedded space and the ambient space, respectively. The losses $E_{corr}$ (measuring how well the deformation agrees with the maps induced by the learned embeddings) and $E_{cd}$ (measuring the discrepancy between the deformed template and the target in $R^3$) quantify these two proximities, respectively. Therefore, it is natural to put more weight on the respective loss in the corresponding stage. The exact ratios, 100:1 and 1:100, are determined as described above. In fact, we had not tried turning off either $E_{corr}$ or $E_{cd}$ in any stage before, following our initial motivation: to enforce the deformed shape to be close to the target in *both* the high-dimensional embedded space and the ambient space. 
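The stage-dependent weighting described above can be made concrete with a small sketch; the fixed 100:1 / 1:100 ratios follow the rebuttal text, while everything else (function name, signature, the default $\lambda_{arap}$) is our assumption:

```python
# Illustrative two-stage weighted registration loss (not the authors' code):
# stage I emphasizes the embedding-space term E_corr, stage II the
# ambient-space Chamfer term E_cd, with a 100:1 / 1:100 weight ratio.
def registration_loss(e_cd, e_corr, e_arap, stage, lam_arap=1.0):
    if stage == 1:
        lam_cd, lam_corr = 1.0, 100.0   # embedding-space proximity dominates
    else:
        lam_cd, lam_corr = 100.0, 1.0   # ambient-space proximity dominates
    return lam_cd * e_cd + lam_corr * e_corr + lam_arap * e_arap
```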
Following your suggestion, we have also performed an ablation study setting $\lambda_{cd}$ and $\lambda_{corr}$ to zero independently in the two stages. The following table reports the scores. We find that, most of the time, turning off either leads to worse results, especially when the test shapes are heterogeneous (see, e.g., SHREC07-H).

| Test set | Ours | $\lambda_{cd} = 0$ in Stage-I | $\lambda_{corr} = 0$ in Stage-II |
|----------|------|-------------------------------|----------------------------------|
| SCAPE_r | 2.6 | **2.5 (-4%)** | 2.7 (+4%) |
| SHREC19_r | **5.1** | 5.5 (+8%) | 5.6 (+10%) |
| SHREC07-H | **5.9** | 6.9 (+17%) | 6.8 (+15%) |

**Revising the arguments on alignment & matching and geometrically meaningful embeddings:** We would be happy to revise these in the future version.
Self-supervised Graph Neural Networks via Low-Rank Decomposition
Accept (poster)
Summary: The paper argues that, when dealing with self-supervised learning tasks, utilizing propagation-based GNNs as the encoder inevitably encounters two serious issues, i.e., failing to capture local properties due to global parameters and lacking the ability to handle heterophilic networks without label information. To this end, the authors introduce a different perspective by replacing propagation-based GNNs with low-rank decomposition-based GNNs. The authors formulate the low-rank decomposition-based GNNs by applying low-rank decomposition to the attribute matrix. Besides, considering that networks require long-distance information for representation, the authors introduce a tensor-based formulation and construct the node attribute tensor from selected similar ego-networks. Strengths: - The paper provides a novel perspective on the GNN encoder for self-supervised learning. - The proposed low-rank decomposition-based GNNs effectively solve the serious issues introduced by employing propagation-based GNNs. - The consideration of long-distance information successfully captures the long-distance relationships between the original and the selected similar ego-networks. - The experiments show the superiority and generalization of the proposed low-rank tensor decomposition-based GNNs. The ablation study and experimental analysis give a complete view of its success. Weaknesses: - The reason why the low-rank decomposition-based GNNs preserve local information is not presented clearly. - Can the authors clarify the differences between networks beyond homophily and networks with heterophily? - How can the proposed model be enhanced in semi-supervised tasks by using the node labels? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. The reason why the low-rank decomposition-based GNNs preserve local information is not presented clearly.** R1. The characteristic of local information preservation comes from both the matrix/tensor construction and the low-rank decomposition. Firstly, both the matrix and the slices of the tensor, which will be decomposed, are constructed from the attributes of nodes in the local ego-network. This leads the obtained representation to learn from local information. Secondly, the low-rank property is a requirement for local propagation among nodes with the same labels. Thus, the low-rank decomposition, which seeks a low-rank representation, essentially performs local propagation. Therefore, the low-rank decomposition-based GNNs possess the potential of preserving local information. --- **Q2. Can the authors clarify the differences between networks beyond homophily and networks with heterophily?** R2. The homophily rate is a measurement describing the proportion of linked nodes belonging to the same classes. Vanilla GNNs perform well on networks with high homophily rates, i.e., homophilic networks. Networks with heterophily are networks with low homophily rates, while networks beyond homophily are networks whose homophily rates are not high. Therefore, networks with heterophily belong to networks beyond homophily. --- **Q3. How can the proposed model be enhanced in semi-supervised tasks by using the node labels?** R3. The proposed model can be enhanced by constructing the matrix and tensor with the help of node labels. The current self-supervised model constructs the matrix and tensor, which will be decomposed, using the ego-network and similar ego-networks without any supervision. When node labels are available, the matrix and tensor constructions can be improved according to the predicted node labels. Specifically, the labels of the unlabelled nodes can be predicted using existing GNNs. 
Then, the matrix can be constructed using the nodes in the ego-network that share the same predicted labels as the center nodes, and the tensor can be constructed using the ego-networks whose center nodes share the same label. By employing the predicted node labels, the nodes in the constructed matrix and tensor are more likely to belong to the same classes, and thus their ranks tend to be much lower. This facilitates the following decomposition. Therefore, the model can be enhanced with the help of node labels.
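The low-rank intuition in R1 above can be illustrated with a toy example. The sketch below uses a plain truncated SVD as a stand-in for the paper's decomposition (the actual method uses RPCA); the function name and the synthetic data are our assumptions:

```python
import numpy as np

def low_rank_local_repr(ego_attrs, rank):
    """Best rank-`rank` approximation (Eckart-Young) of an ego-network
    attribute matrix of shape (num_ego_nodes, feat_dim)."""
    U, s, Vt = np.linalg.svd(ego_attrs, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Six same-class nodes sharing one attribute pattern form a rank-1 matrix;
# the truncated SVD strips most of the added noise without labels or
# global parameters, mimicking propagation within the local ego-network.
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 16))          # shared class signal
ego = np.repeat(base, 6, axis=0)         # rank-1 local attribute matrix
noisy = ego + 0.01 * rng.normal(size=ego.shape)
recon = low_rank_local_repr(noisy, rank=1)
```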
Summary: The encoder for self-supervised graph neural networks is investigated. The authors identify the weaknesses of capturing local properties and handling heterophilic networks in the existing propagation-based encoders. They observe that the obtained node representations possess low-rank characteristics and enforce this using low-rank matrix factorization. Besides, to incorporate global information, an attribute tensor is constructed and low-rank tensor factorization is applied. Experiments show its effectiveness and robustness to noise, especially on networks with heterophily. Strengths: • The investigated encoder issues are significant for self-supervised graph neural networks. In particular, the global parameters and the lack of supervision information are critical for the encoder choice. • The employed low-rank decomposition algorithms are novel to GNNs. The decomposition of the local attribute matrix facilitates capturing local structure and relaxes the requirement for labels. • The performance enhancement on heterophilic networks is remarkable. Experiments on robustness and preventing over-smoothing are convincing. • The presentation is clear. The figure of the proposed framework elaborately shows the process, and the visualization figure demonstrates the discriminability of the node representations. Weaknesses: • The relationship between the proposed method and existing self-supervised GNNs is not clear. Existing self-supervised learning relies on an encoder and an objective function. However, the proposed method does not need an objective function. Therefore, it is important to find their connections. • It is difficult for readers who are not familiar with low-rank methods to get the key points of the motivations and details of the proposed methods. It would be better to review the concepts of low-rank decomposition and self-supervision in a preliminaries section or the appendix. • There are some typos. For example, there is a nuclear norm missing in the formula between lines 162 and 163. 
• The performance on preventing the over-smoothing issue is compared against semi-supervised GNNs, i.e., GCN and GAT. Self-supervised GNNs should be compared with as well. • The proposed LRD-GNN is a parameter-free model, which may benefit the task of self-supervised learning. However, how can it be improved if labels are available? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. The relationship between the proposed method and existing self-supervised GNNs is not clear. Existing self-supervised learning relies on an encoder and an objective function. However, the proposed method does not need an objective function. Therefore, it is important to find their connections.*** R1. Thanks for your advice. Unfortunately, it is difficult to point out their detailed connections, since they differ greatly in design principles. Existing self-supervised GNNs focus on global characteristics, and thus they employ encoders with global parameters and global objective functions. On the contrary, the proposed LRD-GNN pays much attention to local properties, and thus a local encoder with a local objective function (low-rank decomposition) is utilized. The local encoder and objective function are essentially different from the global ones, and thus it is hard to point out their connections. Extensive experiments demonstrate the effectiveness and superiority of the local encoder and objective function. We plan to conduct in-depth research on their theoretical connections in the future. --- ***Q2. It is difficult for readers who are not familiar with low-rank methods to get the key points of the motivations and details of the proposed methods. It would be better to review the concepts of low-rank decomposition and self-supervision in a preliminaries section or the appendix.*** R2. Thanks for your suggestion. We will provide the basic concepts of low-rank decomposition and self-supervised GNNs in the appendix to improve the readability and completeness of the paper. --- ***Q3. There are some typos. For example, there is a nuclear norm missing in the formula between lines 162 and 163.*** R3. Thanks for your suggestion. We will fully polish this paper. The formula between lines 162 and 163 can be corrected as $ \frac{1}{n_3} \sum_{i=1}^{n_3} ||\bar{A}^{(i)}||_*$. --- ***Q4. 
The performance on preventing the over-smoothing issue is compared against semi-supervised GNNs, i.e., GCN and GAT. Self-supervised GNNs should be compared with as well.*** R4. Following your suggestion, the ability of our proposed LRD-GNN to prevent the over-smoothing issue is compared with DGI and MVGRL, which are representative self-supervised GNNs. The node classification results with various model depths on Cora, Citeseer, and Wiki-CS are as follows, together with the visualization in Fig. 1 of the global response PDF. Therefore, compared to both semi-supervised and self-supervised GNNs, the proposed LRD-GNN can prevent the over-smoothing issue.

***Table 1. The results on Cora.***

| Model | 2-layers | 4-layers | 6-layers | 8-layers | 10-layers |
|---------|----------|----------|----------|----------|-----------|
| DGI | 82.15 | 74.47 | 54.25 | 31.22 | 27.63 |
| MVGRL | 83.11 | 80.09 | 61.26 | 39.89 | 36.62 |
| LRD-GNN | 84.74 | 81.77 | 81.62 | 80.53 | 80.42 |

***Table 2. The results on Citeseer.***

| Model | 2-layers | 4-layers | 6-layers | 8-layers | 10-layers |
|---------|----------|----------|----------|----------|-----------|
| DGI | 70.72 | 64.02 | 44.78 | 25.47 | 24.16 |
| MVGRL | 71.51 | 65.43 | 62.24 | 57.7 | 47.59 |
| LRD-GNN | 71.94 | 71.82 | 70.8 | 69.32 | 68.36 |

***Table 3. The results on Wiki-CS.***

| Model | 2-layers | 4-layers | 6-layers | 8-layers | 10-layers |
|---------|----------|----------|----------|----------|-----------|
| DGI | 73.62 | 68.07 | 64.37 | 55.36 | 51.64 |
| MVGRL | 76.53 | 72.66 | 68.27 | 60.36 | 52.54 |
| LRD-GNN | 81.55 | 81.17 | 80.62 | 79.53 | 79.42 |

--- ***Q5. The proposed LRD-GNN is a parameter-free model, which may benefit the task of self-supervised learning. However, how can it be improved if labels are available?*** R5. The proposed model can be improved by constructing the matrix and tensor with the help of node labels. 
The current self-supervised model constructs the matrix and tensor, which will be decomposed, using the ego-network and similar ego-networks without any supervision. When node labels are available, the matrix and tensor constructions can be improved according to the predicted node labels. Specifically, the labels of the unlabelled nodes can be predicted using existing GNNs. Then, the matrix can be constructed using the nodes in the ego-network that share the same predicted labels as the center nodes, and the tensor can be constructed using the ego-networks whose center nodes share the same label. By employing the predicted node labels, the nodes in the constructed matrix and tensor are more likely to belong to the same classes, and thus their ranks tend to be much lower. This facilitates the following decomposition. Therefore, the model can be improved with the help of node labels. --- Rebuttal 2: Title: Response to the rebuttal Comment: Thanks to the efforts made by the authors, all my concerns have been addressed. Therefore, I will maintain my acceptance of this paper.
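The corrected nuclear-norm formula from R3 above, $\frac{1}{n_3} \sum_{i=1}^{n_3} ||\bar{A}^{(i)}||_*$, can be checked numerically. The sketch below is our own t-SVD-style reading, assuming $\bar{A}^{(i)}$ denotes the $i$-th frontal slice of the tensor after a DFT along the third mode:

```python
import numpy as np

def tensor_nuclear_norm(A):
    """(1/n3) * sum of nuclear norms of the frontal slices of the tensor
    after a DFT along the third mode; A has shape (n1, n2, n3)."""
    Abar = np.fft.fft(A, axis=2)            # transform along mode 3
    n3 = A.shape[2]
    return sum(np.linalg.norm(Abar[:, :, i], ord="nuc")
               for i in range(n3)) / n3
```

For $n_3 = 1$ this reduces to the ordinary matrix nuclear norm, which is a quick sanity check on the formula.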
Summary: This paper proposes to alleviate the issues in propagation-based self-supervised GNNs using low-rank matrix/tensor decompositions. Firstly, it points out that existing self-supervised GNNs cannot capture local information or handle networks with heterophily, due to the global learnable parameters and the lack of supervision information. Secondly, it investigates the low-rank property of the local representation matrix. Thirdly, to meet these requirements, it presents low-rank matrix/tensor decompositions, which possess attractive characteristics. Experiments on real networks verify the superior performance and robustness to noise. Strengths: - The motivation makes sense. It is interesting and meaningful to reveal the issues in existing self-supervised GNNs. Due to the shared global parameters and the lack of label information, propagation-based encoders tend to have some drawbacks. This seems to be the first attempt to seek a specific encoder for self-supervised GNNs. - The technology is solid and novel. The observation that the local representation matrix should be low-rank is insightful and interesting. The employed low-rank matrix/tensor decompositions are novel to the field of graph neural networks. In particular, the tensor decomposition seems technically solid. Besides, the proposed methods have some attractive properties. - The evaluations are sufficient. Both the employed datasets and the baselines are representative and adequate. The performance improvements are acceptable. The visualization and ablation study are convincing. Weaknesses: - Some descriptions are not clear. For example, it is confusing to put the overviews of both the matrix decomposition and the tensor one in the same figure. Besides, the figure is too small and compact. See the questions below. - It is difficult to understand why the proposed methods are better than propagation-based ones for reviewers who are not familiar with low-rank decomposition. It is not as intuitive as in computer vision. 
Therefore, some preliminaries should be given. - A section on related work should be given so that readers can judge the contributions of this paper. - The presentation and writing should be carefully checked. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What are the differences between unsupervised learning and self-supervised learning? Although I like the idea of seeking representations using low-rank decomposition, I think it is unsupervised learning, which may be different from self-supervised learning. Could you explain their differences? - The procedure of tensor construction is not clear. How are the similar ego-networks selected? Besides, it is not clear why the similar ego-networks are concatenated into a tensor instead of a larger matrix. - It is not clear why the proposed methods are robust to topology and attribute noises. Although the experiments verify this characteristic, a formal explanation should be given. - What do the white boxes mean in Fig 1(d)? === I have read the rebuttal and would like to keep my rating. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. What are the differences between unsupervised learning and self-supervised learning? Although I like the idea of seeking representations using low-rank decomposition, I think it is unsupervised learning, which may be different from self-supervised learning. Could you explain their differences?*** R1. In our opinion, self-supervised learning belongs to unsupervised learning. Unsupervised learning is a broad class of tasks where supervision information, such as node labels in a graph, is unavailable. It includes many tasks, such as clustering, dimension reduction, probabilistic density estimation, etc. The term self-supervised learning is often employed to denote unsupervised representation learning with neural networks. Thus, self-supervised learning is a specific class of unsupervised learning. From this perspective, although low-rank decomposition is an unsupervised learning method, the proposed LRD-GNN is a self-supervised method for graph representation learning. --- ***Q2. The procedure of tensor construction is not clear. How are the similar ego-networks selected? Besides, it is not clear why the similar ego-networks are concatenated into a tensor instead of a larger matrix.*** R2. Due to limited space, the details of similar ego-network selection are elaborated in the supplementary material as follows. We construct the attribute tensor by selecting similar ego-networks and splicing the attribute matrices of similar ego-networks into a 3-way tensor. We evaluate similarity from two perspectives. (1) We select nodes that have similar attributes to the target node. For instance, cosine similarity is used to measure the similarity of attributes between nodes. Then, the ego-networks of these nodes are selected to construct a tensor. (2) We select nodes that have similar local structures to the target node. The Shannon entropy value of the ego-network is used to measure local structure similarity. 
For the ego-network $G_i = (V_i, E_i)$ around $v_i$, the Shannon entropy value $H(G_i)$ is defined as $H(G_i) = -\sum_{v \in V_i} P(v)\log P(v)$, where $P(v)$ is the probability of a random walk visiting $v$ in the ego-network. Then, we choose the ego-networks of several nodes whose Shannon entropy values are close to that of the central node to construct a tensor. The reason for employing a tensor instead of a large matrix is that a tensor can preserve more structural information than a matrix. According to the construction of the tensor, nodes belonging to the same ego-network are on the same slice of the tensor, while nodes in different ego-networks are on different slices. On the contrary, a large matrix cannot distinguish whether nodes are from the same ego-network, since all nodes are concatenated along the same dimension. Therefore, the tensor is superior to the matrix. ***Q3. It is not clear why the proposed methods are robust to topology and attribute noises. Although the experiments verify this characteristic, a formal explanation should be given.*** R3. The robustness to topology and attribute noises can be ascribed to the employed low-rank decomposition. On one hand, low-rank decomposition avoids propagation over the topology, which is sensitive to topology noise. Existing propagation-based GNNs perform message passing according to the links in the topology; therefore, noisy links inevitably introduce noise into the representations. On the contrary, the representation learning in LRD-GNN does not strictly rely on the topology. The topology is only utilized to construct the local information matrix in LRD-GNN, while the low-rank decomposition does not use the topology. Therefore, even though a noisy topology introduces noise into the local information matrix, the subsequent low-rank decomposition possesses the ability to denoise. On the other hand, attribute noise can be removed via low-rank decomposition. 
The low-rank decomposition, which decomposes the attribute matrix into a low-rank matrix and a noise matrix, is significantly robust to noise. Therefore, the low-rank decomposition contributes to the robustness to topology and attribute noises. --- ***Q4. What do the white boxes mean in Fig 1(d)?*** R4. The white boxes stand for elements with zero values. In the Noise part, the discrete white boxes denote the elements without noise, compared to the gray boxes, which represent the noise. In the other parts, including Feature Matrices Extraction, Node Attribute, and Low-rank Matrix Representation, the row-wise white boxes stand for the padding vectors in the slices with fewer nodes. Specifically, the selected similar ego-networks contain different numbers of nodes, so the constructed matrices have different sizes. To concatenate these matrices into a tensor, the matrices with fewer rows, i.e., the ego-networks with fewer nodes, are padded with zero rows. We will add this explanation to the caption of the figure in the final version. --- ***Weaknesses: 1) The figure is too small and compact. 2) Some preliminaries on low-rank decomposition should be given. 3) A section on related work should be given. 4) The presentation and writing should be carefully checked.*** R5. Thanks for your suggestions on improving the paper. We will add preliminaries on the low-rank problem and a related work section, fully polish the paper, and separate Figure 1 into an LRD-GNN-Matrix figure and an LRD-GNN-Tensor one in the appendix for better illustration.
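The Shannon-entropy measure $H(G_i) = -\sum_{v \in V_i} P(v)\log P(v)$ from R2 above can be sketched as follows; as an assumption of ours, $P(v)$ is taken to be the stationary visiting probability of a simple random walk, i.e., proportional to node degree:

```python
import numpy as np

def ego_entropy(adj):
    """Shannon entropy of an ego-network given its (n, n) 0/1 adjacency
    matrix, with P(v) approximated by the degree distribution."""
    deg = adj.sum(axis=1).astype(float)
    p = deg / deg.sum()          # random-walk visiting probability
    p = p[p > 0]                 # drop isolated nodes (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())
```

For example, a triangle has all degrees equal, giving the maximal entropy $\log 3$ for three nodes.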
Summary: This paper studies self-supervised graph learning for node classification tasks. The authors observe that traditional propagation-based GNNs lose discriminative information via node property averaging. To address this issue, the authors propose a novel LRD-GNN method that encourages the low-rank property of the representation matrix. The extended version, LRD-GNN-Tensor, allows node feature propagation between long-distance nodes and improves the method's ability to capture long-distance relationships. Empirical results show the proposed method outperforms baseline methods and improves model robustness to noisy edges. Strengths: * The authors make an interesting observation that, in order to propagate only among same-class nodes, the property matrix needs to be low-rank. However, instead of enforcing a low-rank decomposition of the node property matrix directly, the method adopts the Robust PCA relaxation of the rank function/L0 norm and therefore does not need heuristic hyper-parameters or label supervision. * Empirical results show the proposed method performs better than various baseline methods in both homophilic and heterophilic settings. The ablation study also shows the proposed method can alleviate the over-smoothing issue. Weaknesses: * LRD-GNN-Tensor considers long-distance correlations in the graph by grouping similar ego-networks together. However, it is not clear from the paper how these similar ego-networks are selected. * It is claimed in the paper that the proposed method is scalable, yet only small datasets are used in the empirical studies. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Justification for selecting exactly M similar ego-networks for every node? * How are the similar ego-networks selected in practice? * Computation complexity and overall method scalability w.r.t. graph size? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation of this work is not discussed in the paper. I don't see any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Q1. Justification for selecting exactly M similar ego-networks for every node?*** R1. Thanks for your insightful question. Selecting exactly M similar ego-networks is a compromise between the model's expressive power and generalization ability. It is natural that different nodes should select different numbers of similar ego-networks, since different nodes possess different topological structures. Unfortunately, it would require additional components and parameters to determine this number for every node. Although a different M for different nodes can improve the model's expressive power, it may induce overfitting and reduce the generalization ability, especially in self-supervised learning tasks. Besides, we did not find a practical relationship between the number of similar ego-networks and a node's topological structure, which makes it difficult to model the number of similar ego-networks. From these perspectives, the strategy of exactly M similar ego-networks for every node is employed in our model. --- ***Q2. How are the similar ego-networks selected in practice?*** R2. In practice, the cosine similarity of the center nodes' attributes is utilized to select the similar ego-networks. In the supplementary material, two similar ego-network selection strategies, i.e., an attribute-based strategy and a topology-based one, are provided. The attribute-based strategy employs the cosine similarity of the center nodes' attributes as the measurement, while the topology-based strategy uses the Shannon entropy value. The ablation study on the strategies is given in Section 4.1.3 and Table 4. It shows that the attribute-based strategy consistently outperforms the topology-based one. Therefore, the cosine similarity of the center nodes' attributes can be employed in practice. --- ***Q3. Computation complexity and overall method scalability w.r.t. graph size?*** R3. The overall complexity is linear in the graph size, so the model is scalable. 
The main component of the proposed LRD-GNN is the low-rank matrix/tensor decompositions, which are implemented via RPCA and tensor RPCA. The complexity of RPCA on a matrix of size $n_1 \times n_2$, where $n_1 > n_2$, is $O(n_1 n_2^2)$, while that of tensor RPCA on a tensor of size $n_1 \times n_2 \times n_3$, where $n_1 > n_2$ and $n_3$ is the third dimension, is $O(n_1 n_2^2 n_3 + n_1 n_2 n_3 \log(n_3))$. In the case of LRD-GNN, $n_3 = M$ is the number of selected similar ego-networks. Since the dimension of the node attributes $F$ is often larger than the size of the ego-network $d_i$, we set $n_1 = F$ and $n_2 = \bar{d}$, where $\bar{d}$ is the average node degree. Therefore, the per-node complexities of LRD-GNN-Matrix and LRD-GNN-Tensor are $O(F \bar{d}^2)$ and $O(F \bar{d}^2 M + F \bar{d} M \log(M))$, respectively. Since every node separately performs LRD-GNN-Matrix or LRD-GNN-Tensor, the overall complexities are $O(N F \bar{d}^2)$ and $O(N F \bar{d}^2 M + N F \bar{d} M \log(M))$, respectively, where $N$ is the number of nodes in the graph. Therefore, the overall complexity is linear in the graph size, and the model is scalable. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed rebuttal. All of my questions are answered. However, as the weaknesses I listed above still hold, I will keep my score. --- Reply to Comment 1.1.1: Title: Response to the Weaknesses Comment: Thanks for your feedback. Regarding the two weaknesses, we would like to clarify as follows. **W1. Similar ego-network selection.** R1. The procedure of similar ego-network selection is in the submitted appendix, and the corresponding ablation study has been performed in the experiment section. Besides, this procedure has also been clarified in the response to Q2. --- **W2. Only small datasets are employed.** R2. To the best of our knowledge, almost all datasets for node-level self-supervised learning have been used in this paper. 
Thus, according to the complexity analysis in the response to Q3, the proposed method is scalable to large graphs. **We hope this clarifies the listed weaknesses.**
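As an illustration of the attribute-based selection described in R2, the top-M most similar center nodes by cosine similarity can be computed as follows (a minimal sketch, not the authors' code; the function name and the dense `attrs` matrix are our assumptions):

```python
import numpy as np

def select_similar_ego_networks(attrs: np.ndarray, M: int) -> np.ndarray:
    """For each node, return indices of the M most similar other nodes,
    measured by cosine similarity of center-node attributes (attrs: N x F)."""
    norms = np.linalg.norm(attrs, axis=1, keepdims=True)
    normed = attrs / np.clip(norms, 1e-12, None)   # L2-normalize rows
    sim = normed @ normed.T                        # N x N cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # a node never selects itself
    return np.argsort(-sim, axis=1)[:, :M]         # top-M indices per node
```

For a graph with N nodes this costs $O(N^2 F)$ as written; in practice a sparse or approximate nearest-neighbor search would be used for large graphs.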
Rebuttal 1: Rebuttal: We would like to express our sincere appreciation to the reviewers for their insightful comments and compliments on our paper. The PDF contains the figure of results on preventing the over-smoothing issue. Pdf: /pdf/2ca8f4a676785b7d0c3cf91049f71adad35293d9.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
LayoutGPT: Compositional Visual Planning and Generation with Large Language Models
Accept (poster)
Summary: The paper highlights the challenges faced by existing models in generating objects with specified counts, positions, attributes, and sizes, and emphasizes the need for compositional skills that can effectively arrange components coherently, accurately reflecting object specifications and interactions. The authors propose LayoutGPT, which aims to improve the utility of the visual planning skills of Large Language Models (LLMs) via in-context learning. The method shows promising results in generating plausible layouts in multiple domains, including 2D images and 3D indoor scenes. Experiments show that LayoutGPT outperforms text-to-image models and achieves comparable performance to human users in designing visual layouts for numerical and spatial reasoning. In addition, LayoutGPT also shows comparable performance to supervised methods in 3D indoor scene synthesis. The paper also proposes a new benchmark called NSR-1K for evaluating generations in terms of specified counts and spatial locations. Strengths: 1. The authors propose a novel solution for handling the issue that current visual generation models lack visual arrangement abilities. They adopt LLMs as visual planners to generate layout information of objects in the target 2D images or 3D scenes, with the help of in-context visual demonstrations in a style-sheet language. 2. LayoutGPT generates reasonable layouts in multiple domains, including 2D images and 3D indoor scenes. 3. They build a challenging benchmark that characterizes counting and positional relations for text-to-image generation. 4. Experiments show the effectiveness of the proposed method. 5. The paper presents promising results and provides insights into the application of LLMs for visual planning and generation, both in terms of methodology and empirical findings. Weaknesses: 1. The motivation is not clearly written due to a number of unclear expressions. 
- The claim that existing visual generative models are not equipped with various reasoning skills that exist in LLMs is dubious, given the disparate nature of the reasoning skills required for visual generative models compared to LLMs. Good reasoning skills are not equal to good generative effects. Therefore, the fact that T2I models fail to generate objects with specific counts, positions, and attributes does not indicate a lack of reasoning skills. - In the sentence 'But unlike LLMs, ... discrete categories ...', do the authors mean LLMs are already capable of generating layouts? If yes, there should be some citations, and there is no need to further introduce the LLMs as a centralized model because there is no logical correlation with the previous text. In addition, why did the authors emphasize discrete categories? Could LLMs generate continuous categories? - Unclear description of the drawbacks of existing LLM-centered systems. - The motivations behind constructing layouts with structured programs are not sufficiently persuasive, considering that LLMs are also trained on plain text, and alternative methods exist for representing image layouts. In fact, the strict imposition of a structured format may pose challenges for LLMs. - To my knowledge, visual inputs generally refer to images or videos, and LLMs are only able to handle textual inputs. Moreover, both tasks in this work involve textual information as input, and there is no experimental evidence supporting the claim that LLMs have the potential to handle complex visual inputs. - The meaning of "addressing the inherent multimodal reasoning skills of LLMs" requires clarification. To my understanding, LLMs already possess visual planning capabilities, and the authors' objective is to effectively leverage these skills rather than fundamentally modify or address them. 2. Inconsistent notations. - $o_j$ vs $\mathbf{o}_j$ in Section 3.1. 
- $o_j$ represents the layout of an object, so the sequence should be (c_j, x_j, y_j, w_j, h_j). 3. The utilization of CSS as a structured format for representing layouts is a bit far-fetched and unnecessary. Firstly, LLMs are capable of understanding the meaning of each element in a plain sequence (o) if a detailed task instruction is provided. Secondly, structured representation is not solely limited to the CSS format; alternative formats exist, such as 'layout: c_1(x_1, y_1, w_1, h_1), c_2(x_2, y_2, w_2, h_2), ...'. 4. The reasons behind the lower performance observed when GLIGEN generates images based on ground-truth (GT) layouts during image evaluation, as well as when GLIGEN synthesizes images using layouts provided by humans, remain unexplained in Table 2. 5. It is important to note that LayoutGPT is specifically designed for generating layouts, encompassing object categories and corresponding bounding boxes. Therefore, it is unreasonable to conclude that LayoutGPT can accurately perform attribute binding. Correct attribute binding is only achieved when the attributes of objects in the generated images align with the textual descriptions. However, wrong attribute bindings between the generated images and sentences are apparent, such as the closed car window in the top row of Figure 4 and the absence of a handle on the basket in the bottom row of Figure 4. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The authors mention that the in-context examples are provided in reverse order based on similarity. Does the order have an impact on the effectiveness of the model? 2. How are the colored sentences in Figure 4 generated? 3. For 3D layout planning, how do the furniture frequencies in the instruction impact the performance? 4. Why does the CSS structure have more impact on task performance compared to the instruction? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I do not see any limitations to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out ambiguities in the manuscript, and we'd like to clarify a few misunderstandings below. >* Weakness 1-1: Reasoning ability of current T2I models We did not claim that existing visual generative models are not equipped with various reasoning skills. It is a fact that “text-to-image generation (T2I) models suffer from generating objects with specified counts, positions, and attributes” (line 22), which has been discovered in previous work like Stable Diffusion/DALLE/Imagen, etc. >* Weakness 1-2: Meaning of discrete categories Ours is one of the first works to use LLMs to generate layouts for both 2D and 3D scenes. The sentence refers to previous layout generation models (not LLMs) that are transformer-based and trained from scratch. These models predict a “class” for each bounding box, where the class label is represented as a fixed-length one-hot encoding vector instead of free-form text. Thus, these vectors represent “discrete categories”. LLMs are much more flexible and can generate both class labels and detailed descriptions in free-form text. >* Weakness 1-3: Drawbacks of existing LLM-centered systems Line 37-38 refers to previous work like Visual ChatGPT [48] or MM-React [52] that runs image generation models as APIs and directly feeds user prompts to the APIs. We will revise the sentence for clarification. >* Weakness 1-4: Motivation for using a structured output format during LLM layout planning We'd like to point out that all GPT models used in this study (Codex, GPT-3.5, GPT-4) are trained on both plain text and code snippets, which helps them adhere to structured output formats smoothly. Please find more discussion below in the response to your “Weakness 3”. >* Weakness 1-5: Regarding LLMs and visual input We have never claimed that our work enables LLMs to take visual inputs. 
Our experimental results support this potential because LayoutGPT can understand the spatial concepts (left, right, …) behind coordinate values in 2D or 3D spaces. We also show that LayoutGPT has the potential to handle complicated skeleton sequences with a few examples (Fig. 6 & Table 5 in the appendix). >* Weakness 1-6: The meaning of "addressing the inherent multimodal reasoning skills of LLMs" We focus on designing a method to elicit the visual planning skills of LLMs. Table 3 proves that LLMs cannot effectively use their inherent reasoning skills without structured representations. Fig. 3 in the additional rebuttal PDF materials also substantiates the claim. >* Weakness 2: Regarding notations Thanks for pointing this out; we will unify $o_j$ and $\textbf{o}_j$ in the revision. $\textbf{o}_j$ refers to the object layout, and our definition of $\textbf{o}_j$ (line 106) includes the bbox location ($x_j, y_j$) and bbox size ($w_j, h_j$) through $\textbf{t}_j$ and $\textbf{s}_j$. For 3D objects, $\textbf{o}_j$ also includes the orientation $\textbf{r}_j$ (line 108). >* Weakness 3: LayoutGPT with other structured formats In Section 4.4 (page 7, line 201-208), we conducted an ablation study to check the effect of the CSS structure and compare it with a plain-text structure. We compare prompts w/ CSS structures (e.g., “teddy bear {width: 32px; height: 45px; left: 31px; top: 9px; }”) and w/o CSS structures (e.g., “teddy bear: 32, 45, 31, 9”). (See Table 2 in the appendix for detailed examples.) Results in Table 3 show that wrapping layouts w/ CSS structures surpasses the plain-text structure, which verifies the effectiveness of our method. LayoutGPT's superior performance w/ CSS structures may be due to the fact that OpenAI GPT models have read large amounts of HTML/CSS code during their pretraining, and have thus acquired numerical & spatial concepts for webpage planning in a similar format. 
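To make the w/ vs. w/o CSS comparison concrete, the two prompt serializations from the ablation can be sketched as follows (hypothetical helper names; the output formats follow the examples quoted above):

```python
def to_css(name: str, w: int, h: int, left: int, top: int) -> str:
    """Serialize one layout element in the CSS-style prompt format."""
    return f"{name} {{width: {w}px; height: {h}px; left: {left}px; top: {top}px; }}"

def to_plain(name: str, w: int, h: int, left: int, top: int) -> str:
    """Plain-text alternative without CSS property names."""
    return f"{name}: {w}, {h}, {left}, {top}"
```

For instance, `to_css("teddy bear", 32, 45, 31, 9)` reproduces the w/ CSS example above, while `to_plain` reproduces the w/o CSS one; the CSS form names each value explicitly, which is the hypothesized advantage.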
>* Weakness 4: GLIGEN's performance on ground-truth layouts There are still bottlenecks in existing layout-to-image models. The reason for GLIGEN's image evaluation scores is two-fold: (1) GLIGEN has its shortcomings in layout-to-image generation. Even though it is conditioned on the ground-truth layout, it may still fail to render images with perfect quality. (2) The object detection model GLIP also has its shortcomings in detecting objects. This might explain why the generated images have a much lower image-level object accuracy than the layout-level object accuracy. We mentioned the above reasons in line 186-188, and will include more discussion in the revision. >* Weakness 5: Regarding attribute binding Please see the general response. >* Question 1: Providing in-context exemplars in reverse order We follow previous work [1, 51] in using reverse order. The impact of exemplar order is not the main focus of this work. >* Question 2: How are the colored sentences in Figure 4 generated? We append the following additional paragraph to the instruction, with one simple example and 0 ICL exemplars: *IMPORTANT: apart from generating name and bounding box location for each object, you should also write a detailed description of the object, and add the description to each CSS line. For example, dog {width: 20px; height: 19px; left: 42px; top: 25px; description: a white dog with many black dots}* >* Question 3: Influence of furniture frequencies in 3D layout planning We observe that instructions without the furniture frequency yield similar KL Div., FID, and OOB rates. We include the information for completeness, yet finding a proper method to input the distribution remains future work. >* Question 4: Impact of CSS structure vs. impact of instruction The CSS structure explicitly clarifies the spatial meaning of each number with the dense format of “Property name: Property value;”. 
While the instruction also explains the meaning of each value in (x,y,w,h), we hypothesize that it is a weaker cue for LayoutGPT to understand the values. >* Limitations: Limitation section We have included a section on LayoutGPT's limitations in the supplementary material (Section F, line 210-225). --- Rebuttal Comment 1.1: Comment: Thanks for the response, which has addressed most of my concerns. However, I remain uncertain about the concept of attribute binding. Could you provide further explanation on how to understand that 'attribute binding' refers to binding attributes to the generated grounding box instead of the generated images? Take Fig. 4 in the paper for example: if the colors of the bounding boxes of the two objects, i.e., 'a brown horse' and 'a white truck', are swapped, I believe that another reasonable image could be generated based on the layout information via GLIGEN. Therefore, it is hard to distinguish between correct and incorrect attribute binding. In addition, regarding the quantitative results on attribute binding, why does A&E achieve better attribute binding performance than LayoutGPT+GLIGEN? Overall, the proposed method effectively explores the capability of LLMs in layout planning for 2D images or 3D scenes and has achieved some promising results. --- Reply to Comment 1.1.1: Title: Official Response by Authors Comment: Dear Reviewer uKyZ, Thank you for your kind response and follow-up questions. 
Using Fig.4 as an example, our explanation of “binding attributes to generated grounding box” refers to the outcome that LayoutGPT outputs > *“**horse** {width: 40px; height: 40px; left: 12px; top: 12px; description: **a brown horse standing still**}* > ***truck** {width: 40px; height: 20px; left: 24px; top: 22px; description: **a white truck with four black wheels**}”* instead of > *(incorrect, attribute swapped)* > *“horse {width: …; description: a **white** horse standing still}* > *truck {width: …; description: a **brown** truck with four black wheels}”* or > *(incorrect, whole description swapped)* > *“horse {width: …; description: **a white truck with four black wheels**}* > *truck {width: …; description: **a brown horse standing still**}”* The word(s) (e.g. horse/truck) ahead of the left curly bracket “{“ defines the category or high-level description of the box. **We interpret the first box as a “horse” box and the second one as a “truck” box.** The “description” property between the curly brackets provides low-level attribute descriptions. **Therefore, LayoutGPT correctly binds the attribute “brown” to the “horse” box, and the attribute “white” to the “truck” box.** As indicated in the general response, LayoutGPT binds attributes to the correct box with 100% accuracy on HRS prompts. We will revise the paper accordingly to avoid further confusion. With GLIGEN/ReCo as the downstream model, the system ideally ends up with “a brown horse” and “a white truck” in the images. However, we observe that the description for each box has a weaker influence on the generated object in GLIGEN compared to ReCo. For instance, even though “a brown horse” is provided along with the “horse” box coordinates, GLIGEN fails to generate the correct color more often than ReCo or A&E. 
**In short, GLIGEN might be weaker than ReCo in controlling local attributes for each object.** We conjecture that the differences originate from the differences in training data: **GLIGEN is trained on boxes associated with short class names without attribute words (e.g. bride, groom in Fig. 2(a) in GLIGEN paper); while ReCo is trained on boxes associated with dense descriptions (e.g. a long white and red bus in Fig. 1(a) in ReCo paper).** As the major bottleneck lies in the downstream models, we believe that the whole framework would be improved with stronger layout-to-image models in the future. Please let us know if the explanation is now clear enough. We are always delighted to engage in further discussion and offer responses to your uncertainties. Thank you again for your appreciation of the overall contribution and experimental results of our work. Regards, Authors of #43
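The CSS-style output with a `description` property, as in the horse/truck example above, can be parsed back into a category, box, and attribute description with a small sketch like the following (the regex and function name are our illustrative assumptions, not the paper's code):

```python
import re

# Matches lines like:
#   horse {width: 40px; height: 40px; left: 12px; top: 12px; description: a brown horse standing still}
LAYOUT_LINE = re.compile(
    r"(?P<name>[\w ]+?)\s*\{\s*width:\s*(?P<w>\d+)px;\s*height:\s*(?P<h>\d+)px;\s*"
    r"left:\s*(?P<left>\d+)px;\s*top:\s*(?P<top>\d+)px;\s*"
    r"(?:description:\s*(?P<desc>[^}]*?)\s*)?\}"
)

def parse_layout_line(line: str) -> dict:
    """Parse one CSS-style layout line into its category, box, and description."""
    m = LAYOUT_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"unparsable layout line: {line!r}")
    return {
        "category": m.group("name").strip(),
        "box": tuple(int(m.group(k)) for k in ("w", "h", "left", "top")),
        "description": (m.group("desc") or "").strip(),
    }
```

Under this reading, attribute binding is correct when each parsed `description` carries the attributes the prompt assigns to that `category`, regardless of how the downstream image model renders them.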
Summary: The paper proposes LayoutGPT, which can generate visual arrangements of objects from input prompts, providing a way to collaborate with visual generative models for compositional layout-based image generation in both 2D and 3D. Experimental results show that such a method can largely improve layout-based generation using the reasoning ability of large language models. Strengths: 1. The paper is well written and easy to follow. 2. The authors have conducted comprehensive evaluations/ablations of the method. 3. The paper provides an interesting way to connect large language models with visual generative models for image generation without any additional training. Weaknesses: 1. **The paper simply proposes a module using LLMs to conduct visual planning**, i.e., extract/reason about object relations and the number of objects given input text descriptions. For image generation, extracted bounding boxes are simply used as inputs to existing layout-based methods, thus there are no technical contributions to visual generative models. 2. **The paper over-claimed more or less.** For example, in Line 196, the authors claimed "LayoutGPT can perform accurate attribute binding". However, in the text-based inpainting section of Figure 4, the purple suitcase doesn't have the specified design "a blue, yellow and white flower" and the cat isn't black and white. Similarly, the spatial relationships extracted by the method seem to be off. In the first example of text-based inpainting, the cat should be sitting under a bench, thus it seems to me that the method isn't that reliable. 3. **Lacking some baselines**. For 2D image generation, there are a few works that do layout-based image generation, specifically using bounding boxes. It would be good to compare with these methods. One example I can think of is [1], which also doesn't require additional training. 4. **Related Works**. 
It would be quite relevant to include compositional image generation, where images are generated conditioned on multiple specifications or objects. [1] Chen et al., Training-free layout control with cross-attention guidance (CVPR 2023) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 4, when generating images, is the main prompt used as the input, or are those colored sentences used as inputs? 2. It seems to me that LayoutGPT can do a lot of guessing instead of reasoning. What if you use counter-factual examples or simple examples that rarely appear in real life? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors don't include a limitation section. One issue is that LLMs can be biased, such that generated imagery can also be biased. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing suggestions for improvement and would like to clarify a few misunderstandings in the response. > * Weakness 1: Technical contributions of LayoutGPT We respectfully disagree that our contribution is limited. We have never claimed that LayoutGPT contributes to visual generative models. Our main contributions are: 1) combining program synthesis with ICL for LLMs to achieve layout generation; 2) the NSR-1K benchmark; 3) proving the effectiveness of facilitating visual planning & generation using LLMs. We are also among the first to show LLMs' potential in understanding 3D concepts and 3D planning. As far as we know, layout generation alone has been extensively studied in 2D [17, 21, 23, 24] or 3D [31, 35, 44]. Moreover, LayoutGPT shows strong performance in both domains. > * Weakness 2: The paper over-claimed more or less. While we respectfully disagree, we do notice the confusion about attribute binding. Please refer to the general response for clarification. The inaccurate “purple suitcase” and “black and white cat” are attributable to the bottleneck of the layout-to-image model. As stated above, the downstream layout-to-image model is NOT the main focus of our work. The examples of text-based inpainting aim to show the flexibility and potential of LayoutGPT in “imagining” intricate object-level language descriptions for creative image generation. Fig. 1 in the additional rebuttal PDF material shows that ReCo [53] can accurately represent each object with output from LayoutGPT. As for the mistake in spatial relation in Fig. 4 (bottom-left), these examples are generated by using just an instruction and no in-context exemplars. We only append the following text to the instruction to enable text-based inpainting: *IMPORTANT: apart from generating name and bounding box location for each object, you should also write a detailed description of the object, and add the description to each CSS line. 
For example, dog {width: 20px; height: 19px; left: 42px; top: 25px; description: a white dog with many black dots}* Hence, the exception in Fig. 4 doesn't represent the overall performance. Please refer to Tables 2 & 3 for a systematic evaluation. > * Weakness 3: Lacking baselines like “Training-free layout control with cross-attention guidance (CVPR 2023)” As stated above, our main focus is layout generation, not layout-guided image generation. Yet, in fact, we already included the indicated work at submission. In Table 3, lines 9-10 show the results of the listed work (which we refer to as “Layout-Guidance”) conditioned on the layouts predicted by LayoutGPT. The results verify that LayoutGPT is a model-agnostic approach, and can be applied to various layout-to-image generation models (line 209-214). > * Weakness 4: Add related work discussion on compositional image generation We will add a separate subsection on compositional image generation for completeness. We appreciate any specific references you would like to provide. > * Question 1: Input for image generation in Figure 4 For images in Figure 4, we input the main prompt, all the colored sentences, and corresponding bounding boxes to GLIGEN. GLIGEN conditions on the colored sentence when rendering the object for each bounding box. Similarly for ReCo in Fig. 1 in the additional rebuttal PDF materials. > * Question 2: It seems to me that LayoutGPT does a lot of guessing instead of reasoning. Counterfactual / rarely seen prompts We respectfully disagree that LayoutGPT does more guessing than reasoning. We conducted multiple experiments to show the reasoning behind it. First, in our appendix, lines 98-105 and Table 3 show that LayoutGPT achieves strong performance with random exemplars. LayoutGPT elicits reasoning abilities instead of copying from exemplars. Second, Figs. 2 & 4 show that LLMs generate novel sizes/locations instead of copying from the exemplars. Lastly, Table 3 in the main paper and Fig. 
3 (top) in the additional rebuttal PDF materials show that all components are essential for correct spatial relations. Please also kindly refer to the general response. >* Limitations: Limitation section We would like to clarify an important misunderstanding regarding the limitation section. We have included a limitation section in the supplementary material (Section F, line 210-225), where we discussed multiple points ranging from layout domains to knowledge distillation. We are happy to include the bias discussion in the next revision. --- Rebuttal Comment 1.1: Comment: In terms of compositional image generation, there have been many works, for example: 1. Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis (Feng et al.) 2. Exploring Compositional Visual Generation with Latent Classifier Guidance (Shi et al.) 3. Compositional Visual Generation with Composable Diffusion Models (Liu et al.) Thanks to the authors for addressing some of my questions. I will keep my rating as it is (borderline accept). --- Reply to Comment 1.1.1: Title: Thank you for your comments Comment: Dear Reviewer HVHZ, Thank you for kindly pointing to these related papers. We will add these references in the next revision. Please kindly note that our work mainly focuses on layout planning and generation in 2D & 3D spaces rather than downstream image generation methods. While these studies are highly relevant, it may not be suitable to directly compare LayoutGPT with them. Meanwhile, please kindly let us know if any of your questions or uncertainties remain unresolved. We are always delighted to engage in further discussion and offer responses to your remaining questions. We sincerely appreciate your active participation. Regards, Authors of #43
Summary: This paper introduces LayoutGPT, a training-free approach that injects visual commonsense into LLMs and enables generating plausible 2D images and 3D scenes via layouts derived from text conditions. Specifically, the authors experiment with four variants of GPT models: Codex, GPT-3.5, GPT-3.5-chat, and GPT-4, and showcase that LLMs can produce meaningful 2D/3D layouts using a CSS (Cascading Style Sheets) format, where every object is modelled as a labelled bounding box, parametrized with three random variables indicating its category, size, and location. For the case of 3D indoor scene synthesis, they parse the layouts into 3D scenes by simply replacing the bounding boxes with 3D objects from a library of assets. For the case of 2D image synthesis, they rely on GLIGEN, a layout-to-image model, to convert the generated layout to a 2D scene. For the 2D image synthesis evaluation, the authors compare their model to Stable Diffusion and Attend-and-Excite and evaluate the generated layouts based on the precision, recall, and accuracy of the generated bounding boxes. To measure whether the generated image matches the provided text description, the authors report the CLIP/GLIP cosine similarity between text prompts and the generated images. For the task of 3D indoor scene synthesis, the authors compare their model to ATISS and measure the generation quality by reporting the KL-divergence between the object category distributions in the ground-truth and the generated scenes. For both tasks, LayoutGPT outperforms most baselines on most metrics, and from the qualitative results, it seems that LayoutGPT faithfully generates layouts that match the input conditioning. Overall, I think this is a nice work that introduces an elegant way of using LLMs for 2D and 3D layout synthesis. I think that the idea of representing scene layouts in the CSS format is very intuitive and greatly simplifies the task of layout synthesis. 
My main concern, as also discussed in the Weaknesses section, is related to whether the proposed model can robustly generate (i) layouts with more complex text conditioning for the case of image synthesis, and (ii) 3D scenes conditioned on detailed scene descriptions clearly describing how many and what objects should be placed in the scene. Strengths: 1. To the best of my knowledge, the idea of using LLMs for 2D and 3D layout synthesis is novel, and the authors clearly demonstrate that LLMs can produce complex 2D images and 3D scenes in a CSS format. Unlike other concurrent works that try to use LLMs for similar tasks, I think this model is simpler and more generic; hence it can be applied to both 2D scene synthesis and 3D scene synthesis. 2. I particularly liked that the authors provide results on both a 2D and a 3D task. They compare their model with several strong baselines and showcase that the proposed model can consistently produce 2D images/3D scenes that match the input conditioning. Moreover, from the quantitative evaluation, we note that the proposed training-free method achieves state-of-the-art performance w.r.t. most metrics for both tasks. 3. I appreciated the additional ablations as well as the various implementation details provided in the supplementary. In addition, I think that the proposed NSR-1K benchmark can potentially be very useful for various tasks. Although the authors don't mention whether they plan on releasing this benchmark, I would like to encourage them to do so, as I think such benchmarks can greatly benefit the research community. Weaknesses: 1. For the case of the 3D indoor synthesis task, I am wondering whether the authors tried to condition the scene generation on more detailed text descriptions that go beyond simply specifying the room type and the size of the room. 
For example, given a description like "a bedroom with one double bed, two nightstands and 1 wardrobe", would LayoutGPT be able to generate a layout that matches this description? I assume that to some extent this should work quite well, so I am not sure why the authors did not provide these types of results. I think it would be beneficial to include them in the final version of the paper. 2. For the experimental evaluation of the 2D scene synthesis, the authors report precision, recall, and accuracy. Is there a reason not to also report MeanIoU on the bounding box parameters? I am not sure whether I am missing something, but I think this metric is very important, as the model generates bounding boxes in practice. In addition, I think that Section B2 in the supplementary, which discusses the evaluation metrics and in particular the accuracy computation, is not very clear. I think it is good to polish this section a bit. 3. Although I appreciate that the authors proposed a new benchmark for their image synthesis experiment, I think they should have also evaluated their model on the 2017 Panoptic version of the COCO dataset, which has been previously used by several generative models that perform layout generation. From the description in L152-161, it is not clear to me whether the panoptic version of the COCO dataset is included in the proposed benchmark. Can the authors please clarify this? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I am wondering whether the LLMs can produce diverse layouts conditioned on the same text conditioning. I believe that this is an interesting experiment that the authors should include in their evaluation. In addition, another valuable analysis would be to compare existing GPT models w.r.t. their capabilities for generating diverse layouts conditioned on the same input prompt. 
Being able to generate diverse layouts is a very important trait of existing models, hence I think the authors should provide additional experiments that demonstrate whether this is possible or not. 2. Looking at the experimental evaluation and in particular the image synthesis results, I am wondering how robust the proposed model is if the input text prompt contains longer descriptions. Looking at all the results both in the main paper and in the supplement, I think that the authors show layout generations with at most 5 objects (see Fig. 3, top row, right column). Have the authors tried to condition the layout generation on more detailed text prompts with more objects? How well would their model work? 3. In Table 1, the authors mention that their proposed NSR-1K benchmark contains several text descriptions that have comparisons, e.g. "A picture of three cars with a few fire hydrants, the number of cars is more than that of fire hydrants". I checked both the main paper and the supplementary, but I was not able to find any conditioning like this. I think it would be great if the authors could provide some examples that show that their model works well with this more challenging text conditioning. 4. Can the authors clarify why LayoutGPT cannot work with floor plans of various shapes? In L219-220 of the main paper they state that it is not compatible with irregular floor plans, but I am not sure why this is really an issue. 5. For the quantitative evaluation in Section 5.1, the authors should also mention the image resolution of the rendered images used to compute the FID scores. Moreover, they mention that to compute the FID score they render scene images from four camera angles (L224-225); are these angles random per scene? I think it is important that the authors clarify this for reproducibility purposes. 6. 
Some missing references that I think the authors should include in the final version of the paper are listed below: * Variational Transformer Networks for Layout Generation, CVPR 2021 * BLT: Bidirectional Layout Transformer for Controllable Layout Generation * LayoutDM: Discrete Diffusion Model for Controllable Layout Generation, CVPR 2023 (this is concurrent work, but it might still be good to add it to the reference list) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations of their work and show several failure cases in their supplementary material. In addition, they also discuss potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and insightful comments. > * Weakness 1: 3D layout planning with detailed descriptions Thanks for your suggestions. Please note that 3D-FRONT does not have ground-truth captions for the rooms. Besides, existing work mostly conditions on room types or floor plans instead of using language descriptions. As a reference, we generate template-based captions (e.g. “bedroom with a double bed, two wardrobes, and a pendant lamp.”) and use them as conditions. As shown in the table below, the near-zero KL Div. value indicates that LayoutGPT faithfully follows the descriptions to generate the type and amount of objects. However, captions do not provide additional information to improve the out-of-bounds or FID scores. Please see Fig. 2 in the additional materials for visualization.

| | Bedroom | | | Living room | | |
|---|---|---|---|---|---|---|
| | Out of bounds | KL Div. | FID | Out of bounds | KL Div. | FID |
| LayoutGPT (GPT-3.5, w/ caption) | 54.27 | 3.21e-7 | 27.68 | 77.36 | 3.54e-5 | 76.87 |

> * Weakness 2: MeanIoU for 2D layout evaluation We did not use MeanIoU to evaluate layout performance because the nature of the task is generation instead of prediction. There may be innumerable valid layouts for the same prompt. Besides, the prompt does not describe object sizes or specific locations. Therefore, the MeanIoU between the prediction and the reference image may not adequately evaluate the “correctness” of the prediction in terms of numerical or spatial reasoning. > * Weakness 2: Metric for 2D layout evaluation Thanks for your suggestion. We will revise the metric section and provide some visualization examples. The accuracy is essentially the percentage of test samples that end up with the correct numerical or spatial relations based on the layout (Layout Acc.) or the detected layout (GLIP Acc.). 
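For context on the MeanIoU metric debated in this exchange (which the paper does not use), a minimal bounding-box IoU and a greedy mean-IoU over a layout can be sketched as follows; the `(x1, y1, x2, y2)` box format and the greedy best-match scheme are our assumptions, not the paper's:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes, each given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(pred_boxes, ref_boxes):
    """Average each predicted box's best IoU against the reference set
    (a greedy matching; stricter Hungarian matching is also common)."""
    if not pred_boxes:
        return 0.0
    return sum(max(box_iou(p, r) for r in ref_boxes) for p in pred_boxes) / len(pred_boxes)
```

As the rebuttal argues, a high score under such a metric presumes a single reference layout, which is why it fits prediction better than open-ended generation.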
> * Weakness 3: Regarding the COCO2017 panoptic subset Our NSR-1K benchmark is built on COCO2014 with 80 categories. Thank you for your suggestion; we have evaluated LayoutGPT on the Panoptic benchmark. Please see our response to your “Question 2” below for the detailed setup and experimental results. > * Question 1: Diversity of the layouts planned by LayoutGPT Beyond our observations of diversity, we also generate five different layouts for each prompt and compute the standard deviation (std) of bounding box sizes and locations. The normalized bounding box sizes have a std of ±0.151 and the normalized locations have a std of 0.083. Please also note that users can adjust the diversity by setting different temperatures for the LLM decoding stage, which further guarantees the diversity of layouts. > * Question 2: Prompting LayoutGPT with more objects We test LayoutGPT on 500 examples randomly sampled from the validation set of COCO2017 Panoptic with 6–15 annotated bounding boxes (in consideration of the GPT length limit). We retrieve 8 supportive examples from the train set during in-context learning. To adapt to the nature of the Panoptic task, we add the following additional instruction when prompting LayoutGPT: “The objects layout might be dense, and objects may overlap with each other. Some objects might not have been mentioned in the prompt, but are very likely to appear in the described scenario.” We generated images using GLIGEN on LayoutGPT's output and achieve an FID of 86, compared to 90 for LayoutTransformer [17]+GLIGEN. In Fig. 4 of the additional rebuttal PDF material, we show the dense layouts predicted by LayoutGPT together with GLIGEN’s visualizations. Results show that LayoutGPT can be smoothly applied to more complicated scenarios with more objects (middle: loads of donuts) and with more categories (left: indoor scene; right: outdoor street view). 
It is worth noting that, even though the prompt may only mention a few objects, LayoutGPT is able to predict the layout of the whole scene (as requested in the instruction above), including objects that commonly appear in each scenario (e.g., left: towel by the sink, mirror over the sink, vase on the counter, …). This further demonstrates LayoutGPT’s powerful reasoning ability with commonsense knowledge. > * Question 3: Demonstrative examples for numerical reasoning with comparative terms Thanks for pointing this out. We will provide more demonstrative examples for numerical reasoning with comparison terms in the next revision; they are omitted here due to space constraints. Meanwhile, please refer to Figs. 4 & 5 in the additional rebuttal PDF for examples with more than 5 objects per image. > * Question 4: Regarding floor plans with various shapes Non-rectangular floor plans from 3D-FRONT are represented as a long list of vertices or as binary images. At this time, LLMs cannot understand the meaning behind the list or take images as inputs. Therefore, we leave improving LLMs' compatibility or multimodal skills as future work. > * Question 5: Image resolution for 3D scene FID Thanks for pointing out the issue. We render 256x256 images for each scene from four fixed camera positions (0,0,2), (0,2,0), (2,0,0), and (2,2,2) (unit: meters). Cameras always point towards the origin (0,0,0). > * Question 6: Adding references We will add these references in the next revision for completeness. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: As the end of the discussion period is approaching, we are wondering if you have read our rebuttal and whether you have any remaining concerns. We are happy to clarify further before the discussion period ends. --- Rebuttal 2: Comment: Dear reviewer, Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period.
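For reproducibility of the four-camera FID setup described under Question 5 above, the camera orientation can be sketched as a look-at basis computation; the handedness, up vector, and degenerate-case fallback are our assumptions, not the authors' actual renderer:

```python
import math

def look_at(eye, target=(0.0, 0.0, 0.0), up=(0.0, 1.0, 0.0)):
    """Orthonormal camera basis for a camera at `eye` aimed at `target`.
    The renderer's exact convention is our assumption."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    forward = norm(sub(target, eye))
    if abs(sum(f * u for f, u in zip(forward, up))) > 0.999:
        up = (0.0, 0.0, 1.0)  # fallback when the view direction is ~parallel to up
    right = norm(cross(forward, up))
    true_up = cross(right, forward)
    return forward, right, true_up

# The four fixed viewpoints from the rebuttal (meters), all aimed at the origin:
cameras = [(0, 0, 2), (0, 2, 0), (2, 0, 0), (2, 2, 2)]
bases = [look_at(c) for c in cameras]
```

Note that the top-down camera at (0, 2, 0) is the degenerate case the fallback handles.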
Summary: This paper proposes LayoutGPT, a method to compose in-context visual demonstrations in a style-sheet language to enhance the visual planning skills of LLMs. As the first work to use LLMs to generate layouts from text conditions, LayoutGPT can generate plausible layouts for 2D images and 3D indoor scenes, including challenging language concepts like numerical and spatial relations. The generated layouts can be further used for image generation. When combined with a region-controlled image generation model, LayoutGPT outperforms existing text-to-image generation methods by 20-40% and achieves performance comparable to human users in generating plausible image layouts and obtaining images with the correct object counts or spatial relations. Strengths: 1. This is the first work to explore the ability of LLMs for layout generation. It reveals the spatial reasoning ability of LLMs and might inspire future explorations in this direction. 2. The training-free approach is easy to adopt for various applications. 3. The presentation is clear and easy to follow. Weaknesses: 1. Although there was no such exploration before, the proposed approach is a straightforward application of LLMs. It would be better and more inspiring if the authors could provide some in-depth analysis of the spatial reasoning abilities of LLMs. 2. It is unclear whether LayoutGPT is robust to the selection of in-context exemplars. What is the size of the reference set? What if the text condition describes a rare scenario that does not appear in the reference set? Experiments in the supplementary material show that the performance is sensitive to the number of selected in-context exemplars. 3. The evaluation is conducted only on NSR-1K for numerical reasoning and spatial reasoning in text-to-image synthesis and on ATISS for indoor scene synthesis. 
Although the authors claim that LayoutGPT can be used for accurate attribute binding and text-based inpainting, only visual results (Fig. 4) are shown, and there are no quantitative experiments on such datasets to demonstrate these abilities and applications. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the questions in the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors mention the limitations in the supplementary materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. > * Weakness 1: straightforward application and in-depth analysis of spatial reasoning ability One of our main contributions is to combine style-sheet program synthesis with LLMs and in-context learning, as the CSS style language inherently shares similarities with bbox-based planning for visual generation. Different from previous work [48,52] that uses a simple representation (x1,y1,x2,y2), our ablation study shows that the CSS structure elicits much stronger spatial reasoning skills from LLMs (see Table 3). This finding requires awareness of the shared properties between LLMs’ pretraining data (CSS-involved programs) and image/3D layout representations. While “w/ instruction” can achieve considerable spatial accuracy (lines 2&6 in Table 3), the CSS structure is more important, as can be seen by comparing lines 6-8; this is analyzed in Sec. 4.4. Therefore, we respectfully disagree that LayoutGPT is a straightforward application of LLMs. In addition, we substantiate the claim based on Figure 3 in the attached PDF response. Fig. 3 (top) shows that CSS enables LayoutGPT not only to avoid unreasonable overlap but also to accurately generate spatial relations involving multiple objects. We hypothesize that LLMs understand the integer values in the 2D space much better because CSS explicitly declares the property names for these values. Fig. 3 (bottom) shows the spatial reasoning ability across different versions of GPT beyond quantitative differences. It is surprising that GPT-4 can generate the shape and position of “a straw” precisely, given that *no straw box examples are provided in the in-context demonstrations*. We hypothesize that either GPT-4 is pre-trained on a more comprehensive collection of layout data, or that training its image branch benefits layout generation expressed as language tokens. 
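To make the contrast with a flat (x1, y1, x2, y2) representation concrete, a minimal sketch of serializing boxes into a CSS-style string follows; the property names and pixel units are our guess at the general format, not the paper's verbatim prompt template:

```python
def layout_to_css(objects):
    """Serialize (category, x, y, w, h) boxes into a CSS-like layout string.
    The exact property names and units are illustrative assumptions."""
    lines = []
    for name, x, y, w, h in objects:
        lines.append(
            f"{name} {{ left: {x}px; top: {y}px; width: {w}px; height: {h}px; }}"
        )
    return "\n".join(lines)

css = layout_to_css([("bed", 40, 80, 160, 120), ("nightstand", 10, 90, 28, 40)])
```

The named properties (`left`, `width`, …) are what the rebuttal credits for eliciting stronger spatial reasoning than an unlabeled tuple of integers.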
> * Weakness 2: Size of ICL and robustness to exemplar selection For the evaluation results in the main paper, we used eight (k=8) in-context demonstrations for image layouts and eight/four (k=8/4) for 3D bedroom/living room synthesis. In our supplementary Table 3, we show that even with random ICL exemplars or fewer exemplars (k<8), LayoutGPT can achieve comparable layout accuracy and GLIP accuracy for both numerical and spatial prompts. LayoutGPT tends to generate extra objects with random exemplars due to hallucination, yet this does not harm the overall accuracy of counting and spatial reasoning. Besides, “train platform” and “straw” in Fig. 3 of the additional rebuttal PDF material never appear in the exemplars and are not part of the COCO annotations. However, LayoutGPT still manages to elicit the ability of LLMs to generate their 2D boxes. Therefore, we believe that our method is robust to rare scenarios and various numbers of exemplars. > * Weakness 2: LayoutGPT’s performance on rare scenarios Please refer to the general response. > * Weakness 3: Attribute binding & text-based inpainting Please refer to the general response for the attribute binding clarification and evaluation. Text-based inpainting is a creative application scenario and has no standard benchmark/evaluation yet. It shows the flexibility and potential of LayoutGPT in “imagining” intricate object-level descriptions for efficient and creative image generation. > * Weakness 3: More quantitative evaluation of LayoutGPT’s reasoning ability Please refer to the general response regarding LayoutGPT’s counterfactual prompts. We also report additional quantitative results on LayoutGPT’s size reasoning performance. --- Rebuttal 2: Title: Updated review after rebuttal Comment: I thank the authors for the rebuttal. The authors have addressed most of my concerns, so I have changed my rating to weak accept. 
--- Rebuttal Comment 2.1: Title: Thanks for your response Comment: Dear Reviewer ykMz, Thank you for your kind response and support. We are glad to have addressed most of your concerns. Please let us know if you have additional comments or suggestions. Regards, Authors of #43
Rebuttal 1: Rebuttal: # General Response We thank all reviewers for their constructive feedback and comments. We would like to address the reviewers’ common concerns in the following general response: > * Quantitative and qualitative results regarding attribute binding **(Reviewer ykMz & HVHZ & uKyZ)** We would like to clarify that “accurate attribute binding” in line 196 & Fig. 4 refers to binding attributes to generated bounding boxes instead of to objects in the generated images. We will revise the writing to avoid future confusion. Here, we would like to show more quantitative and qualitative results on attribute binding. LayoutGPT binds attributes to each object’s bounding box with 100% accuracy on HRS [1] color prompts (e.g. “a green car and a blue chair”). On top of that, we evaluate the attribute correctness rate (accuracy) on the final generated images when LayoutGPT is combined with downstream image generation models. The table below indicates that the major bottleneck lies in the layout-guided generation part of the system. Fig. 1 in the additional rebuttal PDF material shows that LayoutGPT+ReCo ends up with more faithful object attributes.

| | Attribute binding accuracy using HRS eval metric on generated images | | | |
|---|---|---|---|---|
| | Overall | Prompts w/ 2 objects | Prompts w/ 3 objects | Prompts w/ 4 objects |
| SD1.4 | 12.84 | 18.57 | 10.10 | 11.36 |
| A&E | 22.96 | 31.43 | 19.19 | 20.45 |
| LayoutGPT+GLIGEN | 18.68 | 22.86 | 19.19 | 14.77 |
| LayoutGPT+ReCo | **36.96** | **40.00** | **37.37** | **34.09** |

> * LayoutGPT’s performance on rare scenarios / counterfactual prompts **(Reviewer ykMZ & HVHZ)** We provide more discussion of LayoutGPT’s performance on rare scenarios and counterfactual prompts. * Quantitative Results We first evaluate LayoutGPT’s reasoning ability regarding object size. 
We use the standard HRS benchmark [1], which is designed for benchmarking compositional text-to-image models. HRS prompts for size reasoning contain comparison terms between randomly sampled common objects. The size relations described in HRS size prompts are often counterfactual and rarely seen (e.g., “a person which is smaller than a chair and larger than horse”, “a car which is smaller than a banana and chair and bigger than airplane”). LayoutGPT achieves an accuracy of 98.0% / 93.1% / 92.1% when the prompt involves size comparisons between 2/3/4 objects. Meanwhile, the best size reasoning performance among the 9 text-to-image models reported by the HRS benchmark is only 31.1% / 0.2% / 0%. The results verify that LayoutGPT acquires decent reasoning ability on rare scenarios / counterfactual prompts. * Qualitative Results In addition, we ask GPT-4 to write a few counterfactual prompts with the following instructions: *“Please provide a few counterfactual prompts that depict rarely seen the spatial relationship between the 80 MSCOCO object categories. An example would be "a monkey riding on top of a bird"”.* We test LayoutGPT on these counterfactual prompts with 8-shot in-context learning. The supportive examples for in-context learning are from the MSCOCO2017 train set, which depicts everyday scenarios that are very different from the GPT-4-generated counterfactual prompts used for testing. We show an illustrative demo of LayoutGPT’s predictions in Fig. 5 of the additional rebuttal PDF material. LayoutGPT demonstrates competent layout planning ability on these challenging counterfactual prompts and handles the relationships between objects well. [1] HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image Models (ICCV’23). Pdf: /pdf/bdc18b901e9ab6e5189e2aafa630fb440ac40eba.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Mildly Constrained Evaluation Policy for Offline Reinforcement Learning
Reject
Summary: The paper investigates the problem of policy constraints in offline reinforcement learning (RL) settings, and finds that milder constraints on policies during training can lead to better performance at inference time. The proposed component MCEP can be added on top of existing algorithms, including TD3BC and AWAC. Experiments on the D4RL dataset show improved performance over vanilla TD3BC and AWAC. Strengths: The strength of policy constraints in offline RL is an important problem. It is a novel perspective to separate the policies for value estimation and inference with different constraint levels. I would say the proposition that milder constraints can improve policy inference performance is an interesting one. The experiments are thorough with the necessary ablation studies, and the results indeed show some improvement from using the proposed constraining method. Weaknesses: I think one major critique of the paper is: the most essential discovery, that milder constraints may be required for test-time inference, comes mostly from experimental evaluations. The observations are not even consistent, in that only 6 out of 9 tasks show this pattern. This is not strong evidence showing that milder constraints are necessarily always better. Some theoretical analysis, or at least insights about this observation, should be provided to make it more convincing. Another critique is that although the experiments show some improvement from using MCEP on TD3BC and AWAC and over some baselines like CQL and IQL, these are not the SOTA results on these offline datasets; there exist better algorithms, proposed by the time of the NeurIPS submission, that the authors should be aware of: [1] Hansen-Estruch, Philippe, et al. "Idql: Implicit q-learning as an actor-critic method with diffusion policies." arXiv preprint arXiv:2304.10573 (2023). [2] Garg, Divyansh, et al. "Extreme Q-Learning: MaxEnt RL without Entropy." arXiv preprint arXiv:2301.02328 (2023). [3] Wang, Zhendong, Jonathan J. 
Hunt, and Mingyuan Zhou. "Diffusion policies as an expressive policy class for offline reinforcement learning." arXiv preprint arXiv:2208.06193 (2022). The paper writing can be further improved. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How can one evaluate the strength of the restriction in Eq. (8), given that it has an additional $Q$ term? I think it is not fair to simply say Eq. (8) is less restrictive than Eq. (6), since after taking an exponential function it is similar to Eq. (6) but with the advantage $A$ replaced by $Q$. For Fig. 5, I think it would be better to just give the $\alpha$ values in a table. For Fig. 4, are the results averaged across different seeds? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The limitations of the current method are discussed: the evaluation policies in MCEP may not be consistent with the value function and can lead to unstable value estimation if used in policy evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
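For readers following the Eq. (6)/(8) question, the exponential weighting at issue can be sketched as a generic AWAC-style advantage weight (a sketch of the standard advantage-weighted regression form, not the paper's exact objective):

```python
import math

def awr_weight(advantage, lam=1.0):
    """Advantage-weighted regression weight, w = exp(A / lambda): actions with
    higher estimated advantage receive exponentially larger behavior-cloning
    weight. Note that replacing A with Q only rescales all weights for a given
    state by the state-dependent factor exp(V(s) / lambda), since A = Q - V."""
    return math.exp(advantage / lam)
```

The comment makes the reviewer's point quantitative: per state, exp(Q/λ) and exp(A/λ) differ by a constant factor, so the induced action preferences coincide while the effective temperature λ governs the constraint strength.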
Rebuttal 1: Rebuttal: > I think one major critique of the paper …. > We thank reviewer nJEQ for their kind comments and review of our manuscript. We apologize for the misunderstanding around “not even consistent”, which was probably caused by our presentation. We argue that the experimental results of “6 out of 9” are **consistent** with our claim, even with a “5 out of 9” after we correct the mismatched axis (see Figure 3 in the submitted .pdf file). The “6 out of 9” (now 5 out of 9) comes from Section 5.2, and in Figure 3 (pdf) it refers to the number of tasks. Figure 3 (pdf) visualizes two areas. The orange area represents the constraint strengths that enable the evaluation policy of **TD3BC-MCEP** to outperform its target policy (which has a fixed constraint strength $\alpha=2.5$ for all tasks). In other words, in 7 out of 9 tasks, milder constraints enable the evaluation policy to outperform its target policy, which supports our claim. The blue area covers the constraint strengths of the actor in **TD3BC** that cause Q-value explosion during training. We visualize both areas in one figure to illustrate the benefit of the wider policy space brought by MCEP (reaching the area beyond the blue area). Also, note that “Q does not explode” is not an indicator of a stable Q-estimate; i.e., in tasks where the orange area is covered by the blue area, an unstable Q-estimate or a high Q-estimate error could still exist. We now introduce a new experiment to illustrate this clearly. We deploy a finer-grained hyperparameter search for TD3BC ($\alpha = [1.0, 2.0, 2.5, 3.0, 4.0, \ldots, 10.0]$) and for TD3BC-MCEP ($\alpha^E = [1.0, 2.0, 2.5, 3.0, 4.0, \ldots, 10.0]$ with a fixed $\alpha=2.5$ for the target policy). The goal is to find the optimal constraint strengths for both methods in each task, as well as the corresponding inference performance. In Figure 1 Left (`.pdf`), we list the corresponding constraint strengths ($\alpha^E$ for TD3BC-MCEP and $\alpha$ for TD3BC). 
In “medium-replay” tasks, $\alpha^E > \alpha$ indicates that a milder constraint improves performance. In the “walker2d-medium-replay” task, the optimal $\alpha^E$ is found in the Q-explosion area, which supports the discovery of Figure 3 (`.pdf`). In “medium” tasks, the optimal $\alpha^E = \alpha$, while MCEP still outperforms TD3BC (see Figure 3 Right (`.pdf`) for the performance difference). This observation emphasizes that the Q-estimate error brought by milder constraints on the actor of TD3BC degrades the inference performance, even though these errors do not cause Q-value explosion. The insight here is that milder constraints for the target policy incur larger Q-estimate errors, and these errors degrade the accuracy of the estimated Q function. The inaccurate Q function then misleads the optimization of the policy. In conclusion, in 8 of the 9 tasks, the optimal constraint strengths for TD3-MCEP are higher than its target policy ($\alpha=2.5$), and in 7 of the 9 tasks, the optimal policies found by TD3-MCEP outperform the optimal policies found by TD3. These results support our claim and confirm the effectiveness of the proposed MCEP approach. > Another critique is that although the experiments show some improvement by … there exists better algorithms proposed by the time of NeurIPS submission that should be aware of: [1] Hansen-Estruch, … [2] Garg, Divyansh, et al. … [3] Wang, Zhendong, … > Thanks for your kind suggestion. We are aware of the comparison to all the cited methods. To make fair comparisons, we consider the one-hyperparameter setting (same values for all tasks) and MuJoCo locomotion tasks. We rerun EQL [2] using the official implementation with the recommended hyperparameters. We rerun DQL [3] by sweeping the constraint $\eta=[1.0, 2.0, 2.5]$ (following the sweeping strategy in [1]) and found that the paper-recommended $\eta=1.0$ performs the best. Due to the time limit, we did not rerun IDQL [1], so we use the reported results. 
Among these methods, DQL performs the best. Full results are shown in the table in the General Rebuttal (we were unable to post it in this thread due to word limits). We found that our methods TD3BC-MCEP and AWAC-MCEP outperform [1] and [2]. [3] shows superior results among all methods. In addition, we integrated the proposed MCEP into the most performant method [3], using the constraint hyperparameter $\tilde{\eta} = 1.0$ and a milder constraint strength $\eta^e=2.5$ for the evaluation policy. The resulting DQL-MCEP obtains improved performance over DQL, which further verifies the effectiveness of the proposed approach. > The paper writing can be further improved. > All authors promise to carefully proofread the paper to improve the writing. > How to evaluate the strength of restriction for Eq. (8) given the fact that it has an additional Q term? I think it’s not fair to just say Eq. (8) is less restrictive than Eq. (6) since after taking an exponential function it is similar as Eq. (6) but with advantage A replaced by Q. > Thanks for your kind comments. To make a fair comparison of constraint strengths, we redesign Eq. (8) by replacing the Q term with the advantage A. Using the advantage still obtains a significant performance improvement over AWAC. The results are shown in the table above. $$ \mathcal{L}_{\pi^e}(\phi) = \mathbb{E}_{s, a \sim \mathcal{D},\ \hat{a} \sim \pi^e(\cdot|s)}\left[ -A(s, \hat{a}) - \lambda \log \pi^e_{\phi}(a|s) \right] $$ > For Fig. 5, ... > The new experiment in Figure 3 Left (`.pdf`) makes a clear comparison of the $\alpha$ values. We will also provide tables in the Appendix. > Fig. 4, … across different seeds? > The TD3BC results (left and middle columns) are single-seed, while AWAC is averaged over 5 seeds. We will replace this figure with Figure 2 (`.pdf`), where all curves are averaged over 5 seeds. 
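To make the target/evaluation-policy separation at the heart of this discussion concrete, here is a scalar toy sketch of a TD3+BC-style objective evaluated under two constraint strengths; the Q-scale normalization follows the usual TD3+BC form, but the numbers and the scalar setting are our illustrative assumptions, not the paper's implementation:

```python
def td3bc_actor_loss(q, pi_a, data_a, alpha):
    """Scalar toy of a TD3+BC-style actor loss:
    minimize -(alpha / |Q|) * Q + (pi(s) - a)^2.
    A larger alpha up-weights the Q term, i.e. a *milder* BC constraint."""
    lam = alpha / (abs(q) + 1e-8)  # Q-scale normalization, as in TD3+BC
    return -lam * q + (pi_a - data_a) ** 2

# MCEP's separation: the target policy keeps the restrictive alpha = 2.5 used
# inside the critic's Bellman backup; a separate evaluation policy trains on
# the same critic with a milder constraint and is the one deployed at test time.
target_loss = td3bc_actor_loss(q=1.0, pi_a=0.4, data_a=0.0, alpha=2.5)
eval_loss = td3bc_actor_loss(q=1.0, pi_a=0.4, data_a=0.0, alpha=10.0)
```

The point of the separation is visible in the two losses: the milder constraint rewards high-Q actions more strongly, but because the evaluation policy never feeds the Bellman backup, its larger search space cannot destabilize the Q-estimate.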
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I would like to thank the authors for providing rebuttals with additional experiments. For the sentence "in 8 of the 9 tasks, the optimal constraint strengths for TD3-MCEP are higher than its target policy", how can one see that 8 tasks show higher constraint values? Also, should it be milder constraint strengths but higher values? Overall, this value-to-strength mapping is a bit odd; why not use a $1/\alpha$ value to directly indicate the constraint strength? The other remaining weaknesses of the paper are: 1. lack of theoretical insight and support; 2. writing of the paper, which I hope will be improved. It's impressive to see this method also applied to DQL, improving its performance. Thanks for adding these results. --- Reply to Comment 1.1.1: Title: Reply to Reviewer nJEQ Comment: Thank you for your reply. > For the sentence "in 8 of the 9 tasks, the optimal constraint strengths for TD3-MCEP are higher than its target policy", how can one see that 8 tasks show higher constraint values? In the Left of Figure 1 (submitted `.pdf`), there are 8 orange triangles with $\alpha^E>2.5$, while their target policies have $\tilde{\alpha}=2.5$ (not visualized). Hence we say they have higher $\alpha$ values than the target policies, indicating that they have milder constraint strengths than the target policies. > Also, should it be milder constraint strengths but higher values? Yes, higher $\alpha$ values indicate milder constraint strengths. > Overall, this value-to-strength mapping is a bit odd; why not use a $\frac{1}{\alpha}$ value to directly indicate the constraint strength? We agree that representing it with $\frac{1}{\alpha}$ is clearer. We will replace the y-axis with $\frac{1}{\alpha}$ values. > 1. 
lack of theoretical insight and support; While we did not provide formal theoretical results, we think the intuition behind the empirical findings is clear: OOD constraints can reduce value-estimation error while restricting the policy search space. The target policy needs restrictive constraints to reduce value-estimate errors that could be amplified by bootstrapping. An evaluation policy searches over policies without influencing the value estimate; its milder constraint provides a larger policy space in which to search for a performant policy. > 2. writing of the paper, which I hope will be improved. We appreciate the comment. We will try our best to improve the writing in the next version of the paper.
Summary: This paper proposes Mildly Constrained Evaluation Policy (MCEP) for offline reinforcement learning to address the issue of excessively restrictive constraints on action selection during test-time inference. MCEP uses a more constrained target policy for value estimation and another, less restrictive policy for performance evaluation. Empirical results demonstrate the effectiveness of MCEP. Strengths: MCEP is easy to implement and can be plugged into many policy constraint offline RL methods. Weaknesses: Since $\pi_e$ does not participate in the policy evaluation, I think line 7 of Algorithm 1 can be removed and $\pi_e$ can be extracted from Q after actor-critic learning to save computational cost. The contribution of MCEP is only to extract a less restrictive policy after RL learning, which is somewhat limited. The overall idea of the paper is quite simple. However, the notations and descriptions are a bit confusing. For example, the notations in Algorithm 1 lack a clear definition ($\psi, \phi, \tilde \pi, \pi^e, \tilde w, w^e, \mathcal L(\cdot,\cdot)$). And $\psi$ and $\phi$ in lines 6 and 7 of Algorithm 1 are reversed, since the Q evaluation in Equation 2 is associated with $\phi$. No theory supports MCEP in the paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I do not have additional questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: More hyperparameters need to be tuned for MCEP compared with the original algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer FqHz for their feedback and comments on our manuscript. > Since $\pi_e$ does not participate in the policy evaluation, I think line 7 of Algorithm 1 can be removed and $\pi_e$ can be extracted from Q after actor critic learning to save computational cost. The contribution of MCEP is only to extract a less restrictive policy after RL learning, which is somewhat limited. > We thank the reviewer for the suggestion of extracting the evaluation policy after training the critic network to optimize the computational cost. Different approaches to evaluation policy optimization may influence efficiency. However, in our experiments, we found that iterative updating is stable for learning the evaluation policy, and the significant performance improvements confirm this stability. This work investigates the role of policy constraints in policy constraint offline RL methods. This is an important fundamental problem for offline RL research. Furthermore, our empirical analysis provides insights into constraint strengths for stable Q estimates and inference-time performance. This insight explains the mediocre performance of policy constraint methods. The proposed MCEP differs from existing approaches as it circumvents solving the trade-off between stable Q estimates and test-time inference. MCEP enables mitigating the Q-estimate error and achieving milder constraints for better inference performance at the same time. MCEP is a simple yet effective and general approach for offline RL. It enables conventional policy constraint methods (e.g. TD3BC and AWAC) to achieve SOTA-level performance and enables SOTA policy constraint methods (e.g. Diffusion-QL) to obtain further performance improvements. The generality, simplicity, and strong empirical performance are the main strengths of our paper. > The overall idea of the paper is quite simple. However, the notations and descriptions are a bit confusing. 
For example, the notations in Algorithm 1 lack a clear definition ($\psi, \phi, \tilde{\pi}, \pi^e, \tilde{w}, w^e, \mathcal{L}(.,.)$). And $\psi$ and $\phi$ in lines 6 and 7 of Algorithm 1 are reversed, since Q evaluation in Equation 2 is associated with $\phi$. > 1. We will carefully revise the text to make the notation definitions clearer. We use these notations to distinguish different components of the proposed approach. As introduced in Section 4.1, for the proposed MCEP, $\psi$ and $\phi$ are parameters of the policies to optimize (e.g. neural network weights). $\psi$ corresponds to the target policy $\tilde{\pi}$ (i.e. the actor in actor-critic). $\phi$ corresponds to the evaluation policy $\pi^E$ that the algorithm returns. $\tilde{w}$ and $w^E$ are policy constraint hyper-parameters w.r.t. the target policy and the evaluation policy. $\mathcal{L}(.., ..)$ denotes the loss function, a notation widely used in RL and ML papers. We hope that Figure 1 helps illustrate these components and their notations. 2. Equation 2 is shown in the Background section, the introduction to policy constraint methods. As we mentioned above, in MCEP, $\phi$ refers to the evaluation policy that the algorithm returns. Therefore, we use $\phi$ in line 7 of Algorithm 1. We use $\phi$ in Equation 2 as this policy is also returned by the algorithm. In policy constraint methods, the target policy and evaluation policy refer to the same policy. This dual identity of the actor actually motivates the proposed approach to separate the actor into a target policy and an evaluation policy. We will add content to detail the notation definitions and to improve clarity. > No theory supports MCEP in the paper. > This work provides empirical analysis and insights instead of providing theoretical analysis. 
We provide a range of experimental results that 1) show the problems of overly restrictive constraints for the target policy (actor), 2) reveal the relation between constraint strengths for stable Q estimates and inference-time performance, 3) implement instances of the proposed general approach on conventional and SOTA policy constraint methods and make fair comparisons to SOTA offline RL methods, and 4) verify the effectiveness of the milder constraints and the extra evaluation policy in our ablation study. The theory behind the empirical analysis of this work is an interesting direction to explore. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal, it has been noted.
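The two-policy scheme discussed in the rebuttal above (a strongly constrained target policy used for learning Q, a mildly constrained evaluation policy returned at the end) can be illustrated with a deliberately tiny sketch. This is a toy 1-D linear setting with a stand-in critic and finite-difference gradients, not the paper's implementation; all names and constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline data: 1-D states, actions from a mediocre behavior policy a = 0.2 s.
states = rng.normal(size=(256, 1))
data_actions = 0.2 * states + 0.05 * rng.normal(size=(256, 1))

def policy_loss(w, alpha):
    """TD3+BC-style actor loss: -Q(s, pi(s)) + alpha * BC term.
    The 'critic' is a stand-in that peaks at the optimal action a = s."""
    actions = w * states
    q_values = -(actions - states) ** 2
    bc_term = (actions - data_actions) ** 2
    return float(np.mean(-q_values + alpha * bc_term))

def gradient_step(w, alpha, lr=0.05, eps=1e-4):
    # Finite-difference gradient keeps the sketch dependency-free.
    grad = (policy_loss(w + eps, alpha) - policy_loss(w - eps, alpha)) / (2 * eps)
    return w - lr * grad

target_w = eval_w = 0.2  # both policies start at the behavior policy
for _ in range(300):
    target_w = gradient_step(target_w, alpha=10.0)  # strong constraint: stays near data
    eval_w = gradient_step(eval_w, alpha=0.5)       # mild constraint: returned policy

# The mildly constrained evaluation policy ends up closer to the critic's
# optimum (w = 1) than the strongly constrained target policy.
print(target_w, eval_w)
```

In the full method the target policy's actions (not the evaluation policy's) would feed the Bellman targets, which is why its stronger constraint keeps Q estimates stable while the evaluation policy is free to exploit Q more aggressively.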
Summary: This work addresses the issue of excessive policy constraints in stabilizing value estimation within the offline RL paradigm. A separate target policy, which is more constrained than the "evaluation policy," is used solely for policy evaluation and stabilizing value estimation. The evaluation policy does not participate in policy evaluation and is improved by the value function estimates, with the level of constraint adjusted by the weight of the constraint term. Strengths: Major points: - This procedure can be easily integrated into offline RL algorithms that utilize policy constraints, and empirical results apply it to TD3+BC and AWAC. The empirical findings demonstrate promising improvements, with baselines encompassing the standard suite of state-of-the-art offline RL algorithms. - The paper is well-written, easy to comprehend, and thoughtfully structured. - The idea itself is intuitive, and the toy experiments convincingly demonstrate that over-constraint poses a significant issue. Figure 2 clearly illustrates the adverse effects of over-constraint, with the policy performing poorly in low state value regions of the maze. - The ablation studies are extensive and demonstrate the method's effectiveness. I believe this simple yet intuitive method is worth presenting to the broader offline RL community. I believe this work should be accepted. Weaknesses: Major points: - While the results show promise, they do not indicate substantial improvements across many environments, and there is some inconsistency observed. The method shows a decrease in performance in the medium-expert D4RL tasks and the pen task. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you explain the inconsistencies in the results, especially in the pen task? What could be a possible reason? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate reviewer sDxq for their review and kind comments on our manuscript. > While the results show promise, they do not indicate substantial improvements across many environments, and there is some inconsistency observed. The method shows a decrease in performance in the medium-expert D4RL tasks and the pen task. Can you explain the inconsistencies in the results, especially in the pen task? What could be a possible reason? > In our experimental results: 1. TD3BC-MCEP shows weaker performance than TD3BC on "halfcheetah-m-e" and "hopper-m-e" tasks. 2. The results of the -MCEP variants are slightly weaker than their baselines (TD3BC and AWAC) in Adroit tasks. For all the abovementioned cases, we find that the behavior cloning loss is the main contributor to the final performance. To show this, we present the results of a Top-10% behavior cloning agent (behavior cloning using the 10% of the data with the highest returns).

| Dataset | Top10 BC | TD3BC-MCEP | AWAC-MCEP | EQL | IDQL | DQL | DQL-MCEP |
| ------------ | -------- | ---------- | --------- | --- | ---- | --- | -------- |
| halfcheetah-m | $43.1\pm0.3$ | $55.5\pm0.4$ | $46.9\pm0.0$ | $46.5\pm0.1$ | $49.7$ | $49.8\pm0.2$ | $53.2\pm0.2$ |
| hopper-m | $56.9\pm1.6$ | $91.8\pm0.9$ | $98.1\pm0.6$ | $67\pm1.3$ | $63.1$ | $81.7\pm6.6$ | $95.5\pm2.2$ |
| walker2d-m | $73.3\pm2.5$ | $88.8\pm0.5$ | $81.4\pm1.6$ | $81.8\pm1.1$ | $80.2$ | $85.5\pm0.8$ | $75.3\pm3.6$ |
| halfcheetah-mr | $39.9\pm0.8$ | $50.6\pm0.2$ | $44.9\pm0.1$ | $43.1\pm0.5$ | $45.1$ | $47\pm0.2$ | $47.8\pm0.1$ |
| hopper-mr | $72\pm2.1$ | $100.9\pm0.4$ | $101.1\pm0.2$ | $97.3\pm3.3$ | $82.4$ | $100.6\pm0.2$ | $100.9\pm0.1$ |
| walker2d-mr | $56.6\pm3.3$ | $86.3\pm3.2$ | $83.4\pm0.8$ | $71.4\pm4.7$ | $79.8$ | $93.6\pm2.5$ | $92.6\pm2.1$ |
| halfcheetah-me | $93.5\pm0$ | $71.5\pm3.7$ | $69.5\pm3.8$ | $89.4\pm1.6$ | $94.4$ | $95.7\pm0.4$ | $93.4\pm0.8$ |
| hopper-me | $108.9\pm0.0$ | $80.1\pm12.7$ | $84.3\pm16.4$ | $97.3\pm3.3$ | $105.3$ | $102.1\pm3.0$ | $107.7\pm1.5$ |
| walker2d-me | $111.1\pm0.5$ | $111.7\pm0.3$ | $110.1\pm0.2$ | $109.8\pm0.0$ | $111.6$ | $109.5\pm0.1$ | $109.7\pm0.0$ |
| Average | $72.8$ | $81.9$ | $79.9$ | $78.1$ | $79.0$ | $85.0$ | $86.2$ |

We observe that the Top-10% BC agent shows superior performance on "medium-expert" tasks. These results are higher than or similar to those of RL methods. In Adroit tasks, behavior cloning also shows superior performance, as does TD3BC with a high coefficient for the BC loss (see Table 1 in the paper). In these tasks, high-quality data (e.g. expert data) exists and optimal actions can be inferred within the data distribution. As analyzed in [1-3], the estimated Q values for OOD actions could diverge and become arbitrarily high (possibly much higher than the accurate estimate for optimal actions inside the dataset). In this case, a mild policy constraint could let the policy exploit these high but incorrect Q values, resulting in poor evaluation policies. In the other datasets, expert data does not exist and the policy is required to improve **over** the dataset. Therefore, the problems of estimating the value of state-action pairs (possibly absent from the dataset) and exploring the critic network become important. In other words, balancing the trade-off between a stable Q estimate and test-time inference is key to obtaining performant policies. The proposed approach is effective for these tasks as the overly constrained target policy mitigates the Q-estimate error and the MCEP achieves better test-time inference. [1] Fujimoto, S., Meger, D. and Precup, D., 2019, May. Off-policy deep reinforcement learning without exploration. In International conference on machine learning (pp. 2052-2062). PMLR. [2] Kumar, A., Fu, J., Soh, M., Tucker, G. and Levine, S., 2019. Stabilizing off-policy q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32. [3] Kumar, A., Zhou, A., Tucker, G. and Levine, S., 2020. Conservative q-learning for offline reinforcement learning. 
Advances in Neural Information Processing Systems, 33, pp.1179-1191. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I thank the authors for the new results. In light of the theoretical concerns also mentioned by other reviewers, I keep my original rating of leaning towards acceptance.
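The Top-10% behavior cloning baseline discussed in the rebuttal above (keep only the highest-return trajectories, then behavior-clone on them) can be sketched as follows. The trajectory format is made up and a linear least-squares fit stands in for a neural policy; this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy offline dataset: trajectories with a scalar return and per-step (obs, act).
n_traj = 100
returns = rng.normal(size=n_traj)
trajectories = [{"return": r,
                 "obs": rng.normal(size=(10, 3)),
                 "act": rng.normal(size=(10, 2))} for r in returns]

# Keep only the top 10% of trajectories by return.
cutoff = np.quantile(returns, 0.9)
elite = [t for t in trajectories if t["return"] >= cutoff]

# Behavior-clone on the elite transitions; least squares stands in for SGD
# on a neural network with an L2 action-matching loss.
obs = np.concatenate([t["obs"] for t in elite])
act = np.concatenate([t["act"] for t in elite])
W, *_ = np.linalg.lstsq(obs, act, rcond=None)

print(len(elite), W.shape)
```

With 100 distinct returns, exactly 10 trajectories exceed the 0.9 quantile, so the cloned policy sees only the best decile of the data, which is why this baseline is strong on "medium-expert" datasets where near-optimal demonstrations already exist.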
Summary: Offline reinforcement learning (RL) methods frequently involve a policy constraint to mitigate error propagation when learning the Q function. Generally, a single constraint strength is used throughout training. This paper proposes instead to use different constraint strengths for learning the target policy, which is only used for learning the Q function, and the evaluation policy, which is the final policy returned by the algorithm. In particular, a stronger constraint is needed to ensure stability when training the target policy, but weakening the constraint for the evaluation policy can lead to better performance. Strengths: * The idea is fairly general and can be instantiated with various RL algorithms, as shown in the paper. * The experimental results provide insight into the role of the constraint and the tradeoff between stability and performance. * Conceptually, the approach allows for a continuum of algorithms between one-step RL and standard actor-critic methods. * The paper is clearly written and understandable. Weaknesses: * The algorithm introduces an additional hyperparameter that requires tuning, which is already a challenge in offline RL. * The paper found that “in 6 out of the 9 tasks, the $\alpha$ for better inference performance is higher than the $\alpha$ that enables safe Q estimates”. While 6/9 is a majority, this is not convincing evidence that weakening the constraints is always helpful. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In Section 5.2, how exactly do you determine "the lowest $\alpha$ value that causes Q value explosion"? (In particular, how is "explosion" defined?) Could this lead to a hyperparameter tuning procedure that involves only looking at the Q values and does not require off-policy evaluation or sample collection? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes, limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate reviewer wQCk for their review of our manuscript. > The algorithm introduces an additional hyperparameter that requires tuning, which is already a challenge in offline RL. > Compared to policy constraint methods, the proposed MCEP has one extra hyperparameter for the constraint strength of the evaluation policy. Separate constraint strengths for the target policy and the evaluation policy are important for finding optimal values for stabilizing the Q estimate and for obtaining better inference performance, respectively. In practice, we found that the performance is not sensitive to the value of this extra hyperparameter. For example, the orange region in Figure 5 indicates values of this hyperparameter that enable the evaluation policy to outperform the target policy, and it widely covers the hyperparameter space. In addition, we use a simple hyperparameter search strategy and find it works effectively: we use paper-recommended constraint strengths for the target policy and tune the evaluation policy toward milder strengths. > The paper found that "in 6 out of the 9 tasks, the $\alpha$ for better inference performance is higher than the $\alpha$ that enables safe Q estimates". While 6/9 is a majority, this is not convincing evidence that weakening the constraints is always helpful. > Figure 5 shows that in 6 out of the 9 tasks (5 out of the 9 tasks in the corrected version, Figure 3 of the submitted `.pdf` file), the policy constraint strengths that enable the evaluation policy to outperform the target policy (which has a fixed $\alpha=2.5$) may fall in the range where the same values would cause Q-value explosion if assigned to the target policy. The results of these 5 tasks present one aspect of unstable Q estimates, i.e., high Q-estimate error that causes Q-value explosion. 
In the remaining $9-5=4$ tasks, where the Q-value does not explode, the evaluation policy with milder constraints still outperforms its target policy in 2 of the 4 tasks. In total, 7 out of the 9 tasks consistently support our claim. We also provide an additional experiment (see Figure 1 in the submitted `.pdf` file) that investigates the optimal constraint strengths of TD3BC-MCEP and TD3BC. The results provide the further insight that Q-estimate errors brought by a mildly constrained target policy may still degrade inference performance even when the Q-value does not explode. > In Section 5.2, how exactly do you determine "the lowest alpha value that causes Q value explosion"? (In particular, how is "explosion" defined?) Could this lead to a hyperparameter tuning procedure that involves only looking at the Q values and does not require off-policy evaluation or sample collection? > The blue area indicates $\alpha$ values of the TD3BC method. Under each $\alpha$ value, the training is run with 5 seeds and the Q values $Q(s, \pi(s))$ during training are visualized. If any one of these 5 runs shows Q-value explosion (the Q value diverges and the policy performance is largely degraded), we consider this $\alpha$ value to cause explosion, and it will not be included in the blue area. Finally, we take the lowest value among those that caused Q explosion as the edge of the blue area. > Could this lead to a hyperparameter tuning procedure that involves only looking at the Q values and does not require off-policy evaluation or sample collection? > To investigate this approach, we present the full results of the hyperparameter search in Figure 1 in the submitted .pdf file. In the case of TD3BC, we found that the optimal strengths are not always the mildest ones that avoid Q explosion. A milder constraint introduces large Q-estimate errors and harms inference-time performance even when the Q values do not explode (Q values explode only when this error is high enough). 
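The seed-wise explosion check described in the answer above could look roughly like the following. The blowup threshold, the curve shapes, and the function names are all hypothetical; they only illustrate "flag an $\alpha$ if any of its 5 seeds diverges, then take the lowest flagged $\alpha$":

```python
import numpy as np

def q_exploded(q_curve, blowup_factor=100.0):
    """Hypothetical divergence check: flag a Q(s, pi(s)) training curve as
    exploded if it grows far beyond its early-training scale."""
    early_scale = np.mean(np.abs(q_curve[:50])) + 1e-8
    return np.max(np.abs(q_curve)) > blowup_factor * early_scale

def lowest_exploding_alpha(runs_by_alpha):
    """Smallest alpha for which any seed's Q curve exploded (None if none did)."""
    exploding = [a for a, runs in runs_by_alpha.items()
                 if any(q_exploded(r) for r in runs)]
    return min(exploding) if exploding else None

# Simulated sweep: weak constraints (small alpha) let Q diverge on every seed,
# strong constraints keep Q bounded around a stable value.
rng = np.random.default_rng(0)
stable = [rng.normal(100, 5, size=500) for _ in range(5)]
diverging = [np.exp(np.linspace(0, 15, 500)) for _ in range(5)]
runs = {0.5: diverging, 1.0: diverging, 2.5: stable, 5.0: stable}

print(lowest_exploding_alpha(runs))  # 0.5
```

The rebuttal's caveat applies directly here: a curve can carry large Q-estimate error without tripping any divergence threshold, so a check like this bounds the search space but cannot replace performance evaluation for picking the optimal $\alpha$.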
--- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and providing additional experimental results. My concerns are largely addressed, and I am still in favor of acceptance. While the proposed method is simple, simplicity is not inherently bad IMO – indeed, simplicity has benefits as well – and the fact that the idea works with several algorithms is evidence that it is general and will be useful to the research community.
Rebuttal 1: Rebuttal: ## Summary of Positive Feedback We would like to thank the reviewers for their thorough reviews and detailed feedback. It appears the reviewers have perceived various aspects of this work's contribution. - Reviewer nJEQ commented that the studied problem *"is an important problem"* and that the proposition of our main claim is *"an interesting problem"*. - Reviewer wQCk commented that our approach is *"fairly general"* and *"provides insight into the role of the constraint"*. Reviewer sDxq commented that *"The idea itself is intuitive"* and that they *"believe this simple yet intuitive method is worth presenting to the broader offline RL community"*. - Reviewer wQCk commented that the paper is *"clearly written"* and Reviewer sDxq commented that it is *"well-written, easy to comprehend, and thoughtfully structured."* In addition, we are glad to see all the reviewers agreed that the proposed approach is simple yet general. ## Novelty **1. Methodology:** We present the Mildly Constrained Evaluation Policy (MCEP), which derives an extra policy from the critic for evaluation while an overly constrained target policy mitigates the Q-estimate error caused by OOD actions. Our approach circumvents a well-known trade-off in policy constraint offline RL methods: stable Q estimates versus policy inference performance. **2. Conceptually:** We study the roles of the policy constraint: stabilizing the Q estimate and test-time inference. Our approach separates these roles between two policies: a target policy and an evaluation policy. **3. Empirically:** Despite its simplicity, the proposed general method enables conventional policy constraint methods (TD3BC and AWAC) to achieve SOTA-level performance and enables SOTA policy constraint methods (Diffusion-QL) to obtain further performance improvements. ## Empirical analysis 1. 
We argue that the experimental results shown in Figure 5 from Section 5.2 are **consistent** with our claim that a milder policy constraint is required for test-time inference than for stable Q estimation. Reviewer wQCk and Reviewer nJEQ commented *"Results in 6 out of 9 environments support our claim but others do not"*. The 6 tasks refer to those constraint strengths that may cause Q-value explosion if assigned to the target policy. But in 7 out of 9 environments (the range of the orange area is larger than 0 on 7 axes), a wide range of milder constraint strengths enables the evaluation policy to outperform the target policy. 2. To further clarify this insight, we introduce a fine-grained hyper-parameter search to compare the optimal constraint strengths for the evaluation policy/actor (Figure 1 in the submitted `.pdf` file). In 8 out of 9 tasks, the evaluation policy's optimal constraint is milder than its target policy's. The results also show the performance degradation caused by Q-estimate error even when the Q-value does not explode. 3. As commented by Reviewer nJEQ, *"there exists better algorithms"*, so we update our performance evaluation by introducing the following agents: 1) a behavior cloning agent trained on the $10\%$ of the data with the highest returns (Top10BC), 2) a comparison to 3 SOTA methods mentioned by Reviewer nJEQ (EQL, IDQL, DQL), and 3) DQL-MCEP, obtained by applying the proposed MCEP to the SOTA policy constraint method DQL. 
| Dataset | Top10 BC | TD3BC-MCEP | AWAC-MCEP | EQL | IDQL | DQL | DQL-MCEP |
| ------------ | -------- | ---------- | --------- | --- | ---- | --- | -------- |
| halfcheetah-m | $43.1\pm0.3$ | $55.5\pm0.4$ | $46.9\pm0.0$ | $46.5\pm0.1$ | $49.7$ | $49.8\pm0.2$ | $53.2\pm0.2$ |
| hopper-m | $56.9\pm1.6$ | $91.8\pm0.9$ | $98.1\pm0.6$ | $67\pm1.3$ | $63.1$ | $81.7\pm6.6$ | $95.5\pm2.2$ |
| walker2d-m | $73.3\pm2.5$ | $88.8\pm0.5$ | $81.4\pm1.6$ | $81.8\pm1.1$ | $80.2$ | $85.5\pm0.8$ | $75.3\pm3.6$ |
| halfcheetah-mr | $39.9\pm0.8$ | $50.6\pm0.2$ | $44.9\pm0.1$ | $43.1\pm0.5$ | $45.1$ | $47\pm0.2$ | $47.8\pm0.1$ |
| hopper-mr | $72\pm2.1$ | $100.9\pm0.4$ | $101.1\pm0.2$ | $97.3\pm3.3$ | $82.4$ | $100.6\pm0.2$ | $100.9\pm0.1$ |
| walker2d-mr | $56.6\pm3.3$ | $86.3\pm3.2$ | $83.4\pm0.8$ | $71.4\pm4.7$ | $79.8$ | $93.6\pm2.5$ | $92.6\pm2.1$ |
| halfcheetah-me | $93.5\pm0$ | $71.5\pm3.7$ | $69.5\pm3.8$ | $89.4\pm1.6$ | $94.4$ | $95.7\pm0.4$ | $93.4\pm0.8$ |
| hopper-me | $108.9\pm0.0$ | $80.1\pm12.7$ | $84.3\pm16.4$ | $97.3\pm3.3$ | $105.3$ | $102.1\pm3.0$ | $107.7\pm1.5$ |
| walker2d-me | $111.1\pm0.5$ | $111.7\pm0.3$ | $110.1\pm0.2$ | $109.8\pm0.0$ | $111.6$ | $109.5\pm0.1$ | $109.7\pm0.0$ |
| Average | $72.8$ | $81.9$ | $79.9$ | $78.1$ | $79.0$ | $85.0$ | $86.2$ |

A minor issue fixed: we corrected the task-name mismatch for the orange area in Figure 5, resulting in Figure 3 in the submitted `.pdf` file. We will address individual comments from the reviewers by replying to separate threads below. Pdf: /pdf/9ed9990aeb4574de1ebb7c31db6a66fda603a087.pdf
NeurIPS_2023_submissions_huggingface
2023
Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics
Accept (poster)
Summary: Text-to-image generation models offer fine-grained control over synthesized images, but fast adaptation to smaller datasets or new concepts remains a challenge. Existing efficient adaptation methods suffer from long training times, hindering practical applications and resource usage. This work addresses the issue by studying the training dynamics of popular text-to-image personalization methods and proposes a drop-in early stopping criterion that significantly speeds up adaptation (up to 8 times faster) without compromising quality, as demonstrated through experiments on Stable Diffusion and various personalization methods. Strengths: 1. This paper proposes a simple but effective method to accelerate text-to-image customization. 2. The proposed method is well-motivated and easy to understand. Weaknesses: 1. It is imperative to provide supporting evidence to justify the necessity of adaptive step choices. Can we simply set a fixed step number (e.g. reduce to 1/3) without losing much performance? For instance, analyzing the outcomes and plotting the distribution of selected step numbers can demonstrate the potential reduction in unnecessary iterations. This approach would enhance the validity of the proposed method. 2. The authors should engage in a thorough discussion of pertinent literature concerning the acceleration of generative models. It is crucial to acknowledge and reference closely related research in this domain. --- Having read the authors' rebuttal, I've chosen not to alter my score. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please address the issues highlighted in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please address the issues highlighted in the Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful suggestions! Allow us to address your concerns in the response below: >It is imperative to provide supporting evidence to justify the necessity of adaptive step choices. For instance, analyzing the outcomes and plotting the distribution of selected step numbers can demonstrate the potential reduction in unnecessary iterations. We agree that justifying the adaptive number of steps is essential. This is why in **L262-265** we mention that the non-adaptive Few Iters baseline has a **greater variance of almost all reported metrics**, which indicates a need for an adaptive early stopping method. However, plotting the distribution of selected steps might indeed be even more illustrative. Thank you for this suggestion: we include such a plot in the PDF of the general response. >The authors should engage in a thorough discussion of pertinent literature concerning the acceleration of generative models. Acceleration of generative models is indeed closely related to our work's area of research. However, since DVAR changes neither the optimization objective nor the sampling procedure, **all advancements in this area are complementary to our research**. In other words, one can use any such technique to accelerate the training process while using $L_{det}$ to track the convergence of the model. Keeping our paper succinct yet thorough required us to be selective about the related works that we cover. Nevertheless, we agree with your suggestion, and we are happy to discuss ways of accelerating generative models in an additional paragraph of Appendix A. We would be especially grateful for your suggestions of particular research areas that we should cover.
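The idea behind tracking $L_{det}$ discussed above (evaluate the diffusion loss on a batch with fixed noise and timesteps so the curve becomes smooth, then stop when it flattens) can be sketched as follows. The exact DVAR criterion is not reproduced here; the rolling-variance rule, window size, and threshold are hypothetical stand-ins:

```python
import numpy as np

def should_stop(loss_history, window=50, rel_threshold=0.05):
    """Hypothetical flatness test on the deterministic loss L_det:
    stop once the variance over the last `window` values is small
    relative to the variance over the full history so far."""
    if len(loss_history) < 2 * window:
        return False  # not enough history to judge convergence
    recent = np.var(loss_history[-window:])
    overall = np.var(loss_history)
    return bool(recent < rel_threshold * overall)

# Simulated L_det curve: because noise/timesteps are fixed, the curve is a
# smooth decay to a plateau rather than stationary training noise.
steps = np.arange(400)
l_det = np.exp(-steps / 60.0) + 0.02

stop_step = next(t for t in range(len(l_det)) if should_stop(l_det[: t + 1]))
print(stop_step)  # fires well before the fixed 400-step budget
```

The point the rebuttal makes carries over: nothing here touches the training objective or the sampler, so any orthogonal speedup for generative models could be combined with this kind of convergence tracking.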
Summary: This paper studies the training dynamics of a few state-of-the-art text-to-image personalization methods and proposes an early stopping criterion to apply while fine-tuning the base models to speed up their customization. They evaluate this criterion on Dreambooth, Textual Inversion, and Custom Diffusion on 18 concepts, showing close or slightly better image-text CLIP similarity scores on the validation prompts while degrading the similarity to the input image. Strengths: 1. This paper presents a comprehensive study of the randomness factors during training of the customization models and their training curves, and proposes a simple stopping criterion based on a deterministic variance evaluation of the diffusion loss. 2. This early stopping results in a 2x speedup on Custom Diffusion as well as Dreambooth-LoRA and a 12x speedup on Textual Inversion. Weaknesses: 1. The whole idea of the paper is not very novel, and it looks more like an analysis paper. 2. There is a trade-off between the samples' similarity to the source image during both training and validation and the number of fine-tuning steps. Although the authors have shown more generalization ability to validation prompts, it looks like identity preservation has been degraded. 3. The evaluations are only done on 18 concept examples. A larger set would be needed to make the results more convincing. 4. The scores reported in Table 2 only measure the similarity of the generated image with the validation text prompt. It would be important to see how much of the identity is preserved in these generations using the proposed early stopping criterion. Identity preservation is an important goal in personalization methods and is not studied well in this paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In addition to the questions mentioned above, it was not clear to me if the models are trained on all concepts together or if they are trained on the samples of each concept. 
If it is on each concept, where are 8 images per batch coming from? If that has been done on multi-concepts, then I'd expect to see some evaluations on multi-concept customization. 2. Very few qualitative examples are shown in the paper/supplemental. It would be interesting to add more examples comparing the results with the baseline. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - The proposed technique causes a natural trade-off between the number of fine-tuning iterations and identity preservation of the objects in the source images. While the latter is quite important in image personalization, it is degraded in this paper (based on Table 1) and is not studied well in human evaluations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to provide detailed feedback on our work. Please find our responses to your concerns and questions below: >The whole idea of the paper is not very novel, and looks more like an analysis paper. To the best of our knowledge, **we are the first to study the optimization process of text-to-image customization** in detail and to propose a method to **make the loss of diffusion models informative**. If there are **specific studies** that you can direct us to where such an analysis has been conducted and a solution has been proposed, we would be grateful to learn about them. Moreover, we politely disagree that positioning our work as an analysis paper should necessarily be viewed as a weakness. We believe that a comprehensive analysis presents its own set of equally important contributions to the scientific community. >There is a trade-off in the samples' similarity to the source image during both training and validation versus the number of fine-tuning steps. You are correct: **we explicitly indicate in L269-270** that this trade-off exists and is one of the problems of the baseline methods we employ in our paper. However, **this is not a disadvantage of our method**. On the contrary, Table 2 reveals that although the baseline method often outperforms DVAR in terms of reconstruction quality, it considerably lags behind our method in terms of **customization** ability. Hence, our method allows us to identify an **optimal iteration to stop training** from the perspective of this trade-off. As evidenced in Table 1, early stopping by DVAR results in a negligible decrease in reconstruction quality on average across all concepts while preserving high customization quality. >The evaluations are only done on 18 concept examples. A larger set would be needed to make the results more convincing. We agree and conduct additional experiments on **30 new concepts** from the DreamBooth paper. 
Please see the second paragraph of our general response for the updated results. >The scores which are reported in Table 2, only measure the similarity of the generated image with the validation text prompt. It would be important to see how much of the identity is preserved in these generations using the proposed early stopping criteria. Identity preservation is an important goal in personalization methods and is not studied well in this paper. We kindly disagree, because identity preservation is studied throughout the work. First, identity preservation is quantitatively evaluated by the **Val CLIP img** metric in Table 1. Similarly to the explanation of the Train CLIP img score provided in L229-231, Val CLIP img score measures identity preservation on **images generated from novel prompts**: we will emphasize that in the setup description to avoid reader confusion in the future. Judging by this metric, we see that DVAR is comparable to the baseline on DreamBooth-LoRA, slightly surpasses the baseline on Custom Diffusion, and slightly underperforms on Textual Inversion. Importantly, the possibility of identity preservation depends on the method and not just on the early stopping technique: training for more iterations leads to better identity preservation, but also to overfitting. Second, in Appendix F, Figure 13b provides instructions according to which the annotators determine the degree of customization. The third point of the instruction asks annotators to answer the question "Which image has a **<reference-image-object> incorporated into the scene?**" and the fifth point asks to determine **which image preserves the identity better**. Therefore, **identity preservation is covered by the Customization metric in Table 2** as well. >it was not clear to me if the models are trained on all concepts together or if they are trained on the samples of each concept. If it is on each concept, where are 8 images per batch coming from? 
The models were trained on the samples of each concept separately; we leave multi-concept personalization out of the scope of our work. Following Equations (2) and (3) and L100-102, the training batches consist of images (x ~ X, where X is the 3 – 5 images for each concept) with minimal augmentations (central/horizontal crop), randomly sampled captions (y ~ Y), timesteps (t ~ U[0, T]), and random noise from the multivariate Gaussian distribution. Therefore, each batch is generated from **a small set of original images with different random inputs**: note that DVAR does not fix them for the training objective, only for evaluation. Thanks for highlighting this point of confusion, we will clarify the training process in the camera-ready version. >Very few qualitative examples are shown in the paper/supplemental. Besides the side-by-side comparisons in Figures 4 and 13 of our paper, we provide additional qualitative comparison in the PDF attached to our general response. --- Rebuttal Comment 1.1: Title: Revised rating Comment: Thanks for the rebuttal. Since most of my concerns are addressed in this rebuttal, I'd be happy to increase my rating.
Summary: This paper argues that customization techniques for diffusion-based text-to-image generation models train for longer than is needed. This is because the training loss of diffusion models is often not informative -- i.e., often looks like stationary noise -- so practitioners tend to use a fixed (often excessive) number of training steps. This paper identifies and analyzes the sources of stochasticity in the training loss, and proposes simple ways to eliminate them to make the loss more informative. It also introduces a simple early stopping criterion based on this interpretable loss. Strengths: The paper addresses a key issue many people training diffusion models face: the loss is not informative. That is, it often behaves like stationary noise despite the model continuing to improve on auxiliary metrics of interest (FID, human evals). The authors do a very principled analysis of the sources of stochasticity in this loss and identify the sampled time-step to be the main driver. Eliminating this randomness leads to a loss that better corresponds to model performance. I could see this becoming common practice -- not just in the model customization/fine-tuning regime but also in the training of the base diffusion model. Weaknesses: The main weakness of the paper is that I'm not sure how useful the DVAR early stopping criterion is. See question below. However for this paper, this is less of an issue, since its main contribution is a careful analysis of a problem people who train diffusion models face: uninformative loss and where it comes from. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How does L_det vary across concepts/models/methods? One plot that could help motivate the DVAR is a plot that shows how L_det varies across concepts (and models/methods). If it's the case that L_det doesn't vary very much then maybe picking a fixed (but smaller) number of fine-tuning steps is sufficient.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We would like to address your concern in the following response:

>How does L_det vary across concepts/models/methods?

Different behavior of $L_{det}$ for different methods can be observed in Figures 2, 10 – 12 of our work. In Figure 2 of the PDF attached to our general response, we depict the behavior of normalized $L_{det}$ for 4 concepts across all three personalization methods. Normalization was necessary because the scale of the loss varies widely across concepts. From this plot, we can conclude that the dynamics of $L_{det}$ differ moderately from one concept to another. For example, the objective function exhibits earlier saturation on some concepts and methods and demonstrates more unstable behavior on others. We will include this illustration in the next revision of our paper — thank you for this suggestion!
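As a rough illustration of how such an $L_{det}$ series can drive early stopping, a DVAR-style criterion checks whether the loss variance over a trailing window has collapsed. This is a minimal sketch; the window size, threshold, and normalization choice are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def dvar_should_stop(losses, window=50, threshold=1e-3):
    """Signal early stopping once the variance of the deterministic loss over
    the last `window` iterations becomes negligible compared with the variance
    over the whole training history."""
    losses = np.asarray(losses, dtype=float)
    if losses.size < 2 * window:
        return False  # not enough history to judge convergence
    recent_var = losses[-window:].var()
    total_var = losses.var()
    if total_var == 0.0:
        return True  # perfectly flat loss: certainly saturated
    return bool(recent_var / total_var < threshold)
```

On a loss curve that decays and then flattens, this fires shortly after saturation; on a still-noisy curve, training continues.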
Summary: This paper studies the training dynamics of popular text-to-image personalization methods (such as Textual Inversion, DreamBooth, and Custom Diffusion), aiming to speed them up with an early stopping approach that allows the model to optimize or fine-tune for fewer iterations. A key observation is that most concepts are learned at early stages and do not improve in quality later, but standard model convergence metrics fail to indicate that. Based on this observation, the authors propose a simple drop-in early stopping criterion that only requires computing the regular training objective on a fixed set of inputs for all training iterations. Experiments are conducted on Textual Inversion, DreamBooth, and Custom Diffusion. Strengths: 1. The observation that most concepts are learned at early stages and do not improve in quality later, but standard model convergence metrics fail to indicate that, is interesting and inspiring. 2. The proposed approach is well-motivated with key observations and in-depth analysis before deriving the method. 3. The proposed approach improves the efficiency of personalized text-to-image models by a simple but effective early stopping scheme. Weaknesses: 1. Although the visual results and CLIP scores indicate no further improvement after optimization for a certain number of steps, it is still not clear whether the observation is solid. The CLIP score can be biased because of the limited ability of CLIP in understanding complex and detailed information. Good visual results in generating an image similar to the input image do not mean perfect identity and detail preservation for personalized text-to-image generation. 2. The writing and logic can be improved. For example, how 3.2 (investigating the sources of randomness) relates to other sections is not well demonstrated. 3. The evaluation is limited. Firstly, only 18 concepts are used for evaluation. Prior work such as DreamBooth actually used more concepts and prompts.
Secondly, the CLIP image-image and CLIP image-text similarities are not enough to evaluate the concept preservation and image quality of personalized diffusion models. Prior work DreamBooth used several other evaluation metrics to reflect the ability of the models. Thirdly, the visual results in Figure 4 are not impressive, and there are no examples for personalized generation where the user provides a different text prompt with the same visual concept from the original image but in a different background or scene. The different application scenarios of DreamBooth, Textual Inversion, and Custom Diffusion are not extensively experimented with. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for a detailed review! We would like to answer your concerns about our work below:

>Good visual results in generating an image similar to the input image do not mean perfect identity and detail preservation for personalized text-to-image generation.

We agree that the CLIP image score may not fully reflect the process of the model learning complex details. However, as indicated in Table 2 of the paper, we also conducted **human evaluation** to compare our method with the baseline in two directions: concept **reconstruction** and **customization**. Our results demonstrate that 1) our method often stops at an iteration **that is better or equal in terms of reconstruction** quality; 2) this iteration also proves to be **the best for image customization**, indicating the overfitting of the original methods, which can be overcome by DVAR. Additionally, the correlation between the CLIP image score and the visual quality of reconstruction can be partially confirmed using Figures 2, 10 – 12.

>The writing and logic can be improved. For example, how does 3.2 (investigating the sources of randomness) relate to other sections is not well demonstrated.

Section 3.2 is necessary to **motivate and introduce the new objective** $L_{det}$ (a key factor in our work) and to explain how it differs from the original training objective. $L_{det}$ is used for the DVAR early stopping criterion, which is one of the primary contributions of this paper.

>The evaluation is limited. Firstly, only 18 concepts are used for evaluation.

We have conducted additional experiments on the DreamBooth dataset with **30 new concepts**; see the general response for details.

>Prior work DreamBooth used several other evaluation metrics to reflect the ability of the models.

Thank you for this suggestion! We have updated the table with main results from our paper with **two more metrics** used in the DreamBooth paper (DINO and DIV).
Please refer to the table with updated results in our general response.

>there are no examples for personalized generation where the user provides a different text prompt with the same visual concept from the original image but in a different background or scene.

Thank you for pointing that out. Actually, this setting is shown in Figure 4 of the submission, but it might be difficult to interpret because we did not specify the validation prompt. For example, in the row “Textual Inversion val”, the prompt “A photo of <clock> *on the beach*” is used to check if the learned concept can be used in a different scene. Both the baseline and CLIP-s methods **fail to depict** the learned concept in the desired background, whereas **DVAR succeeds**. Other qualitative examples are provided in Figure 13 of Appendix J. An improved version of Figure 4 and other qualitative side-by-side comparisons can be found in the PDF attached to our general response.

---

Rebuttal Comment 1.1: Title: revised rating Comment: Thank the authors for the response. The authors have addressed my concerns about the limited concepts and metrics used for evaluation by adding more concepts and evaluation metrics in the rebuttal. The authors also explained that they showed the personalized generation results in the paper although it was not illustrated properly (prompts are not shown so it was difficult to understand what the images mean). This can be improved in the final version of the paper. My remaining concern is that the proposed approach does not seem to demonstrate good identity and detail preservation as shown in the examples in the paper and the additional rebuttal page. Therefore, I increase my rating to borderline reject.
Rebuttal 1: Rebuttal: Dear reviewers, we deeply appreciate the time and effort you devoted to the review of our paper. We are glad that multiple reviewers recognize the motivation that drives our work (**5cki**, **DXLz**, **rpQb**), the depth of our analysis (Gk9z, DXLz), and the simplicity of DVAR combined with its effectiveness (**5cki**, **rpQb**). We have carefully considered your insightful feedback and have addressed the concerns raised by each reviewer in their respective responses. In this response, we would like to address your collective questions and comments.

First, we would like to address a common concern regarding the evaluation of the studied methods on only 18 concepts. In the submitted version of our work, we used all the concepts and prompts provided by the authors of the two methods that we test: Custom Diffusion and Textual Inversion. The table below presents an extended evaluation on 30 additional concepts that were recently released by the DreamBooth paper authors (resulting in **48 concepts overall**). The evaluation is additionally augmented with two more metrics from the latest revision of the DreamBooth paper (following the suggestion from Reviewer **5cki**): DINO measures the preservation of subject details in the images generated from train prompts, and DIV measures the diversity of the samples generated from the same prompt.
**Textual Inversion**

| Method | Train CLIP img | Val CLIP img | Val CLIP txt | DINO | DIV | Iterations | Time, min |
|---|---|---|---|---|---|---|---|
| Baseline | $0.840_{±0.051}$ | $0.786_{±0.075}$ | $0.209_{±0.021}$ | $0.635_{±0.094}$ | $0.742_{±0.078}$ | $6100.0_{±0.0}$ | $27.0_{±0.3}$ |
| CLIP-s | $0.824_{±0.053}$ | $0.757_{±0.067}$ | $0.233_{±0.024}$ | $0.590_{±0.110}$ | $0.769_{±0.097}$ | $666.7_{±174.5}$ | $9.6_{±2.5}$ |
| Few Iters (mean) | $0.796_{±0.069}$ | $0.744_{±0.073}$ | $0.232_{±0.023}$ | $0.559_{±0.133}$ | $0.768_{±0.099}$ | $475.0_{±0.0}$ | $1.6_{±0.0}$ |
| Few Iters (max) | $0.806_{±0.066}$ | $0.767_{±0.071}$ | $0.219_{±0.022}$ | $0.591_{±0.103}$ | $0.757_{±0.092}$ | $850.0_{±0.0}$ | $2.8_{±0.0}$ |
| DVAR | $0.795_{±0.067}$ | $0.748_{±0.068}$ | $0.227_{±0.024}$ | $0.566_{±0.119}$ | $0.777_{±0.091}$ | $566.0_{±141.5}$ | $3.1_{±0.8}$ |

**DreamBooth-LoRA**

| Method | Train CLIP img | Val CLIP img | Val CLIP txt | DINO | DIV | Iterations | Time, min |
|---|---|---|---|---|---|---|---|
| Baseline | $0.857_{±0.061}$ | $0.824_{±0.079}$ | $0.205_{±0.021}$ | $0.721_{±0.103}$ | $0.565_{±0.154}$ | $1000.0_{±0.0}$ | $8.1_{±2.2}$ |
| CLIP-s | $0.862_{±0.045}$ | $0.788_{±0.075}$ | $0.225_{±0.022}$ | $0.709_{±0.093}$ | $0.630_{±0.100}$ | $353.2_{±88.1}$ | $6.1_{±1.5}$ |
| Few Iters (mean) | $0.855_{±0.052}$ | $0.806_{±0.085}$ | $0.219_{±0.023}$ | $0.711_{±0.111}$ | $0.632_{±0.086}$ | $367.0_{±0.0}$ | $1.9_{±0.0}$ |
| Few Iters (max) | $0.851_{±0.053}$ | $0.800_{±0.097}$ | $0.214_{±0.019}$ | $0.704_{±0.125}$ | $0.592_{±0.127}$ | $500.0_{±0.0}$ | $2.6_{±0.1}$ |
| DVAR | $0.784_{±0.106}$ | $0.687_{±0.140}$ | $0.238_{±0.034}$ | $0.577_{±0.206}$ | $0.585_{±0.114}$ | $665.3_{±94.9}$ | $4.9_{±0.7}$ |

**Custom Diffusion**

| Method | Train CLIP img | Val CLIP img | Val CLIP txt | DINO | DIV | Iterations | Time, min |
|---|---|---|---|---|---|---|---|
| Baseline | $0.755_{±0.077}$ | $0.695_{±0.069}$ | $0.258_{±0.021}$ | $0.475_{±0.139}$ | $0.753_{±0.061}$ | $500.4_{±0.5}$ | $6.5_{±0.9}$ |
| CLIP-s | $0.757_{±0.076}$ | $0.691_{±0.069}$ | $0.258_{±0.023}$ | $0.471_{±0.140}$ | $0.748_{±0.063}$ | $510.4_{±134.1}$ | $9.7_{±3.0}$ |
| Few Iters (mean) | $0.751_{±0.078}$ | $0.691_{±0.070}$ | $0.259_{±0.023}$ | $0.475_{±0.143}$ | $0.753_{±0.069}$ | $450.0_{±0.0}$ | $3.4_{±0.9}$ |
| Few Iters (max) | $0.754_{±0.078}$ | $0.691_{±0.073}$ | $0.257_{±0.022}$ | $0.488_{±0.145}$ | $0.756_{±0.068}$ | $700.0_{±0.0}$ | $5.3_{±1.4}$ |
| DVAR | $0.742_{±0.074}$ | $0.693_{±0.066}$ | $0.259_{±0.022}$ | $0.454_{±0.136}$ | $0.740_{±0.055}$ | $348.1_{±46.6}$ | $3.4_{±1.0}$ |

Noticeably, adding 30 novel concepts does not change the relative ranking of early stopping methods: DVAR still allows for early stopping **without sacrificing the reconstruction quality** (judged by the Train/Val CLIP img and DINO metrics) for two out of three evaluated customization methods. Moreover, models trained with DVAR demonstrate **higher customization quality** than the baseline (demonstrated by the Val CLIP txt metric), while being **adaptive** and **not relying on costly intermediate sampling**. Additionally, DVAR increases the DIV metric for two personalization techniques, signifying **reduced overfitting**.

Moreover, we provide additional plots and illustrations in the attached PDF. Specifically, the attachment contains more qualitative side-by-side comparisons (**Gk9z**), a corrected version of Figure 4 with validation prompts specified (**5cki**), the distribution of final step numbers (**rpQb**), and the $L_{det}$ behavior on various concepts (**DXLz**). We hope that a more extensive evaluation of our method addresses your concerns and that the additional illustrations further confirm our findings. If you have any additional questions, we would be happy to respond to them during the discussion. Pdf: /pdf/9282ee06c6df833d2aa2f92b5e467e416dc82b17.pdf
NeurIPS_2023_submissions_huggingface
2023
A Randomized Approach to Tight Privacy Accounting
Accept (poster)
Summary: The authors consider the problem of bounding the privacy loss for a composition of DP mechanisms. This problem is well-studied in the literature and the particular setting here is when the mechanisms are Gaussian or have Gaussian sub-routines but the privacy loss is measured through the original (approximate) DP definition. This is particularly difficult for Gaussians because of the sub-exponential tail and other work has created new definitions better suited to the Gaussian mechanism. While the privacy loss random variable may be challenging to work with under the original DP definition, it can still be efficiently sampled from, so an estimate of the expected privacy loss can be computed efficiently. The authors show that these estimates are quite accurate with high probability and leverage this guarantee to turn the estimate into a formal privacy bound (or failure to run with tiny probability). As a result, for certain settings the authors give improved privacy composition bounds over the previous work. Strengths: 1. Clever new technique to use composition privacy loss estimates instead by rejecting the estimate with low probability. 2. Significant technical work that combines a variety of techniques. 3. Their work extends nicely to the subsampled Gaussian mechanism for DP-SGD. 4. Empirical testing run in a variety of settings for both Gaussian mechanism and DP-SGD. 5. Improves upon previous work for DP composition in a reasonably important setting. Weaknesses: Remark 6 feels exaggerated. For the settings considered in this paper, the delta parameter is just a result of concentration bounds and sub-exponential tails of Gaussians, not the probability of a catastrophic data breach. Unless the authors know of examples with delta <= 10^{-10}, I've only seen practical applications with delta at minimum 10^{-7} and 10^{-6} is most common. 
Of course there are privacy advocates that will always push for more privacy, but they are also likely to further push for "pure" differential privacy in which composition is easy and Laplace mechanism must be used instead. $\textit{The authors pointed to the discussion here I missed, so strike this comment}$ There are privacy definitions better suited to Gaussians that have become very common both in practice and in the literature that allow for easier analysis and make this more difficult accounting less necessary (though still interesting). $\textit{The authors adequately addressed this in the rebuttal}$ The improvement in epsilon for composition of the Gaussian mechanism over the previous work is quite small (<1%) for their empirical study (figure 5). Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: $\textit{All questions were answered by the authors}$ What's the runtime for the 'Exact' method compared to FFT or EVR? It was a bit confusing why that couldn't be used given that it's the ideal privacy parameter to output. From the experimental result with DP-SGD, it looks like MC beats FFT even with more standard delta settings. Does this hold more generally? Why was this not the result emphasized in the intro instead of the gaussian mechanism composition? Are the curves in figure 4 identical but shifted? So basically using MC accounting you compute a better epsilon at each epoch? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The limitations were appropriately discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments! **Q1 [The Importance of Privacy Accounting in the Regime of Small $\delta$]** **A:** **Regarding the common rule of setting $\delta = 1/n$:** Setting $\delta$ around $1/n$ implies a maximum $1/n$ likelihood of the privacy guarantee failing for an individual. By a basic union bound, this offers no assurance that privacy holds simultaneously for all individuals in the dataset. Mainstream DP textbooks advocate for a "cryptographically small" $\delta$, specifically, $\delta < n^{-\omega(1)}$ [[1]'s page 25, [2]'s page 9]. The current literature tends to choose a higher value of $\delta$ for decent utility. Our Remark 6 is intended to remind the community about the importance of using a smaller $\delta$. While it still requires a lot of effort to achieve a good privacy-utility tradeoff even for the current choice of $\delta$, it is important to keep such a goal in mind. Additionally, recent literature [3] in DP foundation models uses a $\delta$ around $2*10^{-9}$. We anticipate $\delta<10^{-10}$ will become necessary given the trend of using large-scale datasets. **Our EVR algorithm yields an improvement when $\delta=10^{-5}$ too**. While the improvement for bounding $\delta$ is barely visible when $\delta$ is around $10^{-5}$, the corresponding improvement for $\varepsilon$ can in fact be significant if we align $\delta$ to be the same for the EVR and the original FFT accountant. For example, in Figure 4 we obtain around 0.1 improvements in $\varepsilon$, which lead to around 0.8% accuracy improvement in the privacy-utility tradeoff. Figure 10 in the Appendix shows $\varepsilon$-$k$ curves when fixing $\delta = 10^{-5}$. We can see that there’s a non-trivial improvement in $\varepsilon$. [1] Dwork and Roth. "The algorithmic foundations of differential privacy." [2] Vadhan, Salil. "The complexity of differential privacy." [3] Yu et al.
“ViP: A Differentially Private Foundation Model for Computer Vision” **Q2.** *“There are privacy definitions better suited to Gaussians ...”* **A:** May we ask which specific privacy definition *“suited to Gaussians”* the reviewer is referring to? If the reviewer is referring to GDP or RDP (zCDP), we have already shown the disadvantage of the privacy accounting techniques based on these two alternative privacy notions in Figure 1: **GDP-based accountant:** We can see that the GDP accountant completely fails due to the relatively small number of composed mechanisms (1200). The original paper [1]’s Figure 3 also shows that GDP's bound is very loose when $m$ is relatively small. **RDP-based accountant (Moment accountant):** We can see that the Moment Accountant (MA) in Figure 1 is sub-optimal for the entire regime of $\delta$. The Moment Accountant is sub-optimal even for the Gaussian mechanism due to the lossy RDP-DP conversion [2]. [1] Analytical composition of differential privacy via the edgeworth accountant. arXiv 2022. [2] Optimal accounting of differential privacy via characteristic function. AISTATS 2022. **Q3** “The improvement in epsilon for composition of the Gaussian mechanism over the previous work is quite small (<1%) for their empirical study (figure 5).” **A:** While we agree the absolute improvement is small, we would like to stress that **(1)** Even a slightly more accurate estimation of $\varepsilon$ could result in hundreds of additional training iterations in DP-SGD, which can lead to higher utility; **(2)** Figure 5 aims to illustrate that the MC accountant is **both** more accurate and efficient. In other words, the MC accountant can achieve a better performance while being around 5 times faster than FFT. For completeness, we also tune the hyperparameter ($\varepsilon_{error}$) of the FFT accountant to bring the runtimes of FFT and the MC accountant closer (by setting $\varepsilon_{error}=0.4$ while keeping all other hyperparameters the same).
The result is shown in **Q3 in global response & Figure 14 in rebuttal’s PDF**. As we can see, the error of FFT is larger in this case. **Q4 [How is the ‘exact’ curve computed in Figures 1 and 3(a)?]** **A:** In Figures 1 and 3(a), the “exact” curve can be analytically derived since it’s the composition of pure Gaussian mechanisms (e.g., see [1]’s Equation (10)). This means that the exact $(\varepsilon, \delta)$ curve can be analytically computed, and the composition of pure Gaussian mechanisms mainly served as a toy example for easier performance comparison between different DP accountants. This is a common experiment strategy in the prior literature (e.g., [1]’s Figure 2, [2]’s Figure 2). [1] Numerical composition of differential privacy. NeurIPS 2021 [2] Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions. PETS 2022. **Q5 [EVR also has improvement in the standard $\delta$ setting?]** *“From the experimental ... at each epoch?”* **A:** In Figure 4, we are comparing the FFT accountant and the FFT accountant augmented with the EVR paradigm. That is, we are essentially comparing the FFT accountant’s "upper bound" and "estimate" for epsilon by aligning $\delta$ to be the same, which makes the curves look identical but shifted. In Figure 1, while the improvement for bounding $\delta$ can be barely visible when $\delta$ is around $10^{-5}$, the corresponding improvement for $\varepsilon$ can in fact be significant if we align $\delta$ to be the same for the EVR and the original FFT accountant. Hence, in Figure 4 we obtain around 0.1 improvements in $\varepsilon$, which lead to around $0.8\%$ accuracy improvement in the privacy-utility tradeoff curve. Additionally, Figure 10 in the Appendix shows $\varepsilon$-$k$ curves when fixing $\delta = 10^{-5}$. We can see that there’s a non-trivial improvement in $\varepsilon$.
We thought Figure 1 is better for illustrating the failure case of “strict upper bound”; for revision, we have further emphasized the improvement of EVR at the regime of $\delta=10^{-5}$ in the Introduction. --- Rebuttal Comment 1.1: Comment: I appreciate the authors thoroughly addressing all comments and questions. Also my apologies for fixating a bit too much on the small delta remark and not taking as much time to better understand other aspects that the authors explained well in the rebuttal. I still feel that in practice setting delta that small will be quite uncommon, and in my opinion that general rule of thumb has become rather antiquated in the same way that experts early on advocated for $\epsilon << 1$ which is generally far too impractical for industry use-cases. But I appreciate the authors clarifying the other points and emphasizing the improvement in larger delta regimes for future versions of their work. --- Reply to Comment 1.1.1: Title: Thanks for the prompt response! Comment: We sincerely thank the reviewer for the prompt response and for raising the score! We will incorporate your comments about the consideration of $\delta$ for practical industry use-cases into the revision!
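The analytic curve discussed in Q4 above can be sketched as follows, under the standard assumptions of sensitivity-1 Gaussian mechanisms: the $k$-fold composition is equivalent to a single Gaussian mechanism with noise scale $\sigma/\sqrt{k}$, for which the exact $\delta(\varepsilon)$ has a closed form (this is an illustrative sketch using the well-known analytic Gaussian-mechanism formula, not the authors' code):

```python
import math

def exact_delta_composed_gaussian(eps, sigma, k):
    """Exact delta(eps) for the k-fold composition of sensitivity-1 Gaussian
    mechanisms with noise scale sigma, via the equivalent single mechanism
    with mu = sqrt(k) / sigma."""
    mu = math.sqrt(k) / sigma
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return Phi(mu / 2.0 - eps / mu) - math.exp(eps) * Phi(-mu / 2.0 - eps / mu)
```

For example, `exact_delta_composed_gaussian(1.0, 1.0, 1)` gives roughly 0.127, and the value decreases monotonically as `eps` grows.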
Summary: Authors introduce a new privacy accounting method to characterize the privacy loss random variable. The work reduces the classical privacy accounting problem to a mean estimation problem, following previous work, and gives a Monte Carlo solution. The work provides a detailed analysis of the proposed method and its utility, and also gives examples with common distributions to show the effectiveness and performance theoretically. The numerical studies also show the correctness and practicability of the method in real privacy accounting tasks. Strengths: Pros: 1. The analysis for the proposed method is detailed and the writing is friendly to follow with a detailed preliminary. 2. The proposed tool works better than the compared existing accounting tools like CLT and FFT-based methods. 3. The fast speed and the online implementation show some potential for the method to be used in privacy accounting applications. Weaknesses: Actually, I have reviewed the paper in ICML. I think the paper has fixed almost all the problems from the former cycle. The only problem I still keep is about whether the method will suffer from the dimension curse when deriving the prv samples for a general distribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: / Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer (again) for the very positive feedback! **Q** *“The only problem I still keep is about whether the method will suffer from the dimension curse when deriving the prv samples for a general distribution.”* **A:** This is a great comment that actually points to one of the important benefits of our approach. The cost of our approach would not increase (in terms of number of samples) even if the dominating pair is supported in a high dimensional space. For Monte Carlo estimate $\delta = 1/m \sum_{i=1}^m (1-e^{\varepsilon - y_i})_+$, $y_i$ is the sampled log density ratio, and hence it is a scalar value. We can now see from Hoeffding’s inequality that the expected error rate of estimation is $O(1/\sqrt{m})$ which is independent of the dimension of the support set of dominating distribution pairs. This means that the number of samples we need to ensure a certain confidence interval is independent of the dimension. However, we should also note that although the number of samples does not change, the sampling process itself might be more costly for higher dimensional spaces. But one would expect that to grow at most polynomially. We will clarify this in the paper.
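To make the dimension-independence point concrete, here is a minimal sketch (our wording, not the authors' implementation) of the scalar Monte Carlo estimate for the sensitivity-1 Gaussian mechanism, whose privacy loss is a one-dimensional log-density ratio regardless of the mechanism's output dimension; the sample count and seed are illustrative:

```python
import numpy as np

def mc_delta_gaussian(eps, sigma, m=200_000, seed=0):
    """Monte Carlo estimate of delta(eps) for the sensitivity-1 Gaussian
    mechanism with dominating pair P = N(1, sigma^2), Q = N(0, sigma^2):
    delta = E_{t~P}[(1 - e^{eps - Y})_+], where Y = log dP/dQ (t) is scalar."""
    rng = np.random.default_rng(seed)
    t = rng.normal(loc=1.0, scale=sigma, size=m)   # t ~ P
    y = (2.0 * t - 1.0) / (2.0 * sigma ** 2)       # log-density ratio at t
    return float(np.maximum(0.0, 1.0 - np.exp(eps - y)).mean())
```

With m = 200,000 samples, the Hoeffding-style O(1/sqrt(m)) error is on the order of 10^-3 here, independent of how high-dimensional the mechanism's output is; for eps = 1, sigma = 1, the analytic value is about 0.1269.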
Summary: The paper proposes a privacy accounting method called estimate-verify-release (EVR), whose basic principle is to convert an estimate of a privacy parameter into a formal privacy guarantee. The mechanism works by verifying whether the estimated privacy guarantee holds, and then releasing the query output depending on the verification result. The paper develops a Monte-Carlo-based verifier for this paradigm. The overall accountant is broadly applicable and is shown to give a tighter privacy-utility tradeoff than existing baselines. Strengths: The mechanism is applicable broadly, in particular to important DP algorithms such as DP-SGD (that is, the subsampled Gaussian mechanism). It exploits the fact that existing work provides good privacy loss estimates and converts these estimates into a formal paradigm for ensuring differential privacy. The method is shown to beat a strong baseline, namely the FFT-based accountant from [19]. Weaknesses: The paper places a lot of emphasis on the fact that we can't naively use an estimated privacy parameter as the truth, because DP is a strict guarantee, and this makes perfect sense. But then in the analysis and implementation of the accountant there are some steps of the new accountant that are not made completely rigorous, such as the number of Monte Carlo samples. Or, in Theorem 13 there is a nu parameter that is not known. So my suggestion is to write a fully formal version of the accountant for DP-SGD in the main body to show that the paradigm can indeed be applied fully rigorously. In addition, it would be good to see at least one more experiment showing the same comparison as Figure 5. In Figure 5 the gains of MC over FFT are not clear for a small number of compositions. So it would be good to see if the comparison of Figure 5 is robust and generalizes to different problem settings. A few minor comments: - In the Conclusion, you say "allowing safe privacy parameter estimates *without* provable assurance"? 
- The Figures don't seem to be in vector format and are blurry if zoomed in. - Typo in Line 132, "privacy loss random variable *is* ..." (right now there's no verb in the main clause) - In Line 180, say where rho takes values. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Nothing at the moment. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive feedback! **Q1 [Fully formal version of DP-SGD with EVR paradigm]** *“The paper places a lot of emphasis on the fact that we can't naively use an estimated privacy parameter as the truth, because DP is a strict guarantee, and this makes perfect sense. But then in the analysis and implementation of the accountant there are some steps of the new accountant that are not made completely rigorous, such as the number of Monte Carlo samples. Or, in Theorem 13 there is a nu parameter that is not known. So my suggestion is to write a fully formal version of the accountant for DP-SGD in the main body to show that the paradigm can indeed be applied fully rigorously.”* **A:** We thank the reviewer for the very concrete and useful suggestion. In the updated paper, we have included pseudo-code outlining the full steps of DP-SGD under the EVR paradigm, as well as a theorem stating that the algorithm satisfies the DP guarantee (**see Q1 in the global response and Algorithm 4 in the rebuttal’s PDF**). Regarding ***“in Theorem 13 there is a nu parameter that is not known”:*** $\nu$ is the upper bound on the second moment of the MC estimator; we discuss how to bound it (and the bound can be computed explicitly) for $\delta_{SMC}$ and $\delta_{IS}$ in the last paragraph of Section 4.3 and Appendix E. **Q2 [More experiments showing the same comparison as Figure 5]** *“In addition, it would be good to see at least one more experiment showing the same comparison as Figure 5. In Figure 5 the gains of MC over FFT are not clear for a small number of compositions. So it would be good to see if the comparison of Figure 5 is robust and generalizes to different problem settings.”* **A:** Thanks for the suggestion! We have conducted an additional experiment in Appendix G.3.2, and the results are shown in **Figures 12 and 13 in the rebuttal’s PDF (see Q2 in the global response)**. The experimental settings are exactly the same as those of Figure 5.
Figures 12 and 13 show the online accounting results for $(\sigma, \delta, q) = (0.5, 10^{-5}, 10^{-3})$ and $(\sigma, \delta, q) = (0.5, 10^{-13}, 10^{-3})$, respectively. For the setting of $(\sigma, \delta, q) = (0.5, 10^{-5}, 10^{-3})$, the MC accountant achieves comparable performance with a shorter runtime. For the setting of $(\sigma, \delta, q) = (0.5, 10^{-13}, 10^{-3})$, the MC accountant achieves significantly better performance than the state-of-the-art FFT accountant (again, with a shorter runtime). This further showcases the MC accountant's efficiency and accuracy in the online setting. **Q3 [Typos & Grammar & Blurred Images]** **A:** Thanks a lot for the catch! We have fixed the typos and grammatical errors in the paper. Regarding the blurred images, we checked all of the images, and they all look clear even after zooming in on our side. Could you kindly point out the specific figure you are referring to? We are more than happy to change it! --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the response; it was helpful and I am happy to see that all my concerns are addressed. Regarding the figures, if you zoom in all the way, the figures look pixelated because they are raster (such as png, jpg) and not vector (such as pdf). In my opinion it does not look professional to have raster figures in a paper. --- Reply to Comment 1.1.1: Title: Thanks for the response! Comment: We sincerely thank the reviewer for the positive feedback! We will definitely change the figures' format according to your suggestion for the revision!
Summary: The authors propose the EVR framework for privacy accounting. The core idea is to estimate the privacy budget, verify whether the budget is approximately met, and then decide whether to release the result or halt. The workhorse is a Monte-Carlo verifier (also used as an accountant through binary search). The empirical evaluation shows significant improvement over existing techniques, especially in the large-epsilon/small-delta regime. Strengths: 1. The authors propose a novel framework for privacy accounting: EVR. By trading off a small probability that the program halts against the privacy budget, the authors manage to get a tighter privacy profile curve in the larger-epsilon/smaller-delta regime. 2. The authors verify the proposed framework empirically and showcase the advantage of EVR. Weaknesses: 1. One downside of the proposed approach is that there is a probability that the mechanism halts, at a privacy budget cost. To get a tighter epsilon, you need to accept the risk that you get nothing. Although this seems to be a reasonable trade-off, it can be a problem in practical use cases. More discussion should be devoted to this. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Any idea on how to deal with halting in practical usage? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive feedback! **Q [How to deal with halting in practical usage?]** **A:** In Section 4.4, we developed techniques for ensuring that the false negative rate (i.e., the rejection probability) is around $O(\delta)$ when the proposed privacy parameter $\delta^{est}$ is close to the true $\delta$. In Section 6.1’s experiment, we use the state-of-the-art FFT accountant to produce $\delta^{est}$, which is very accurate, as we can see from Figure 1. Hence, the rejection probability in the experiment is around $O(\delta)$, which means the probability of rejection is close to the probability of catastrophic failure for privacy. In addition, if one is still concerned that the rejection probability is too large, we can further reduce it as follows: we run two instances of the EVR paradigm simultaneously; if both instances pass, we randomly pick one and release its output; if exactly one passes, we release the output of the passed instance; the procedure fails only when both instances fail. By running two instances of the EVR paradigm in parallel, the false positive rate (i.e., the final $\delta$) is only doubled, but the probability of rejection is squared. We can also introduce a variant of our EVR paradigm that better deals with the failure case: whenever a rejection occurs, we run a different mechanism $M'$ that is guaranteed to be $(\epsilon, \delta^{(est)})$-DP (e.g., by adjusting the subsampling rate and/or noise multiplier in DP-SGD). Moreover, we use the FFT accountant to obtain a strict privacy guarantee upper bound $(\epsilon, \delta^*)$ for the original mechanism $M$, where $\delta^{(est)} < \delta^*$. We use $p_{fp}$ and $p_{fn}$ to denote the false positive and false negative rates of the underlying DP verifier.
- If the original mechanism $M$ is indeed $(\epsilon, \delta^{(est)})$-DP, then for any subset $S$ we have
$\Pr[EVR(D) \in S] = p_{fn} \Pr[M(D) \in S] + (1-p_{fn}) \Pr[M'(D) \in S]$
$\qquad \le p_{fn} \left(e^\epsilon \Pr[M(D') \in S] + \delta^{(est)}\right) + (1-p_{fn}) \left(e^\epsilon \Pr[M'(D') \in S] + \delta^{(est)}\right)$
$\qquad \le e^\epsilon \Pr[EVR(D') \in S] + \delta^{(est)}$
- If the original mechanism $M$ is not $(\epsilon, \delta^{(est)})$-DP, then we have
$\Pr[EVR(D) \in S] = p_{fp} \Pr[M(D) \in S] + (1-p_{fp}) \Pr[M'(D) \in S]$
$\qquad \le p_{fp} \left(e^\epsilon \Pr[M(D') \in S] + \delta^*\right) + (1-p_{fp}) \left(e^\epsilon \Pr[M'(D') \in S] + \delta^{(est)}\right)$
$\qquad \le e^\epsilon \Pr[EVR(D') \in S] + \delta^{(est)} + p_{fp} (\delta^* - \delta^{(est)})$

Hence, this augmented EVR algorithm is $(\epsilon, \delta^{(est)} + p_{fp} (\delta^*-\delta^{(est)}))$-DP, and if $p_{fp}$ is around $\delta^{(est)}$, the extra term $p_{fp} (\delta^* - \delta^{(est)})$ is very small. We can also adjust the privacy guarantee for $M'$ so that the privacy guarantees in the two cases coincide, which further optimizes the final privacy cost. We have added the above discussion to the paper.
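The parallel-instances trick described earlier in this response (the paradigm halts only when every independent EVR instance rejects) can be sanity-checked with a quick simulation. The sketch below is our own hypothetical illustration, not part of the authors' implementation:

```python
import numpy as np

def halting_prob(p_fn, n_instances, n_trials=200_000, seed=0):
    """Estimate the halting probability when n_instances independent EVR
    verifiers run in parallel: the procedure halts only if *every*
    instance rejects, so the probability should be close to p_fn**n."""
    rng = np.random.default_rng(seed)
    rejects = rng.random((n_trials, n_instances)) < p_fn  # True = instance rejects
    return float(rejects.all(axis=1).mean())
```

With `p_fn = 0.1`, two instances drive the halting probability from roughly 0.1 down to roughly 0.01, at the price of a doubled false positive rate (union bound over the two verifiers).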
Rebuttal 1: Rebuttal: We thank all of the reviewers for their detailed and valuable comments. We are pleased that all the reviewers expressed a positive view of our work! We considered the reviews carefully and modified our paper accordingly. We have answered other questions in the individual responses. Here’s a summary of the contents of the submitted PDF: **Q1 (for Reviewer UAod) [Fully formal version of DP-SGD with EVR paradigm]** **A:** We thank Reviewer UAod for the very concrete and useful suggestion. Here, we additionally outline the full steps of privacy accounting for DP-SGD with our EVR paradigm through pseudo-code in **Algorithm 4 in the rebuttal’s PDF**. Recall from Lemma 16 that for the subsampled Gaussian mechanism with sensitivity $C$, noise variance $C^2 \sigma^2$, and subsampling rate $q$, one dominating pair $(P, Q)$ is $Q := N(0, \sigma^2)$ and $P := (1-q) N(0, \sigma^2) + q N(1, \sigma^2)$. Hence, for DP-SGD with $k$ iterations, the dominating pair is the product distribution **P** $ := P_1 \times \ldots \times P_k$ and **Q** $:= Q_1 \times \ldots \times Q_k$, where each $P_i$ and $Q_i$ follows the same distribution as $P$ and $Q$ (this can also be easily extended to the heterogeneous case). We have also included the following corollary in the paper so that the paradigm can indeed be applied fully rigorously. **Corollary:** Steps 2-3 in Algorithm 4 are $(\varepsilon, \delta^{est}/\tau)$-DP. The corollary directly follows from Theorem 9 in the paper. **Q2 (for Reviewer UAod) [More experiments for evaluating the MC accountant in the online setting]** **A:** Thanks for the suggestion! We have conducted an additional experiment in Appendix G.3.2, and the results are shown in **Figures 12 and 13 in the rebuttal’s PDF**. The experimental settings are exactly the same as those of Figure 5. Figures 12 and 13 show the online accounting results for $(\sigma, \delta, q) = (0.5, 10^{-5}, 10^{-3})$ and $(\sigma, \delta, q) = (0.5, 10^{-13}, 10^{-3})$, respectively.
For the setting of $(\sigma, \delta, q) = (0.5, 10^{-5}, 10^{-3})$, the MC accountant achieves comparable performance with a shorter runtime. For the setting of $(\sigma, \delta, q) = (0.5, 10^{-13}, 10^{-3})$, the MC accountant achieves significantly better performance than the state-of-the-art FFT accountant (again, with a shorter runtime). This further showcases the MC accountant's efficiency and accuracy in the online setting. **Q3 (for Reviewer eMPH) [Additional experiments which increase the runtime for the FFT accountant]** **A:** Figure 5 aims to illustrate that the MC accountant is **both** more accurate and more efficient. In other words, the MC accountant can achieve better performance while being around 5 times more efficient than FFT. To better illustrate the advantages of MC accounting over the FFT accountant, we tune the hyperparameter ($\varepsilon_{error}$) of the FFT accountant so that the runtimes of the FFT and MC accountants are closer to each other. Specifically, we set $\varepsilon_{error}=0.4$ while keeping all other hyperparameters the same. The result is shown in **Figure 14 in the rebuttal’s PDF**. As we can see, the improvement of the MC accountant over the FFT accountant is larger in this case. Pdf: /pdf/f33a8220ba794c56992e3f5ef69c59cb35c418f1.pdf
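To make the dominating-pair construction from Q1 concrete, here is a minimal sketch of a plain Monte Carlo estimate of $\delta(\varepsilon)$ for the $k$-fold composition of the subsampled Gaussian mechanism. This is our own illustrative code, not the authors' implementation; it omits the importance-sampling and sample-complexity machinery the paper develops:

```python
import numpy as np

def mc_delta(eps, sigma, q, k, n_samples=100_000, seed=0):
    """Plain Monte Carlo estimate of delta(eps) for the k-fold composition
    of the subsampled Gaussian mechanism, using the dominating pair
    P = (1-q) N(0, sigma^2) + q N(1, sigma^2) and Q = N(0, sigma^2):
    delta(eps) = E_{x ~ P^k}[(1 - exp(eps - L(x)))_+],
    where L(x) is the summed privacy loss sum_i log(P(x_i)/Q(x_i))."""
    rng = np.random.default_rng(seed)
    # Sample from P^k: each coordinate is shifted by 1 with probability q.
    shift = rng.random((n_samples, k)) < q
    x = rng.normal(0.0, sigma, size=(n_samples, k)) + shift
    # Log-density ratio log(P/Q); the Gaussian normalizers cancel.
    log_p = np.logaddexp(np.log1p(-q) - 0.5 * (x / sigma) ** 2,
                         np.log(q) - 0.5 * ((x - 1.0) / sigma) ** 2)
    log_q = -0.5 * (x / sigma) ** 2
    loss = (log_p - log_q).sum(axis=1)
    with np.errstate(over="ignore"):  # overflow maps to -inf, clipped to 0
        return float(np.maximum(0.0, 1.0 - np.exp(eps - loss)).mean())
```

Since the integrand $(1 - e^{\varepsilon - L})_+$ is non-increasing in $\varepsilon$ pointwise, the estimate (with a fixed seed) is monotone in $\varepsilon$ and always lies in $[0, 1]$.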
NeurIPS_2023_submissions_huggingface
2023
Locality-Aware Generalizable Implicit Neural Representation
Accept (poster)
Summary: This paper studies the problem of generalizable implicit neural representation. The proposed method combines a transformer encoder with a locality-aware decoder to predict the output, with feature modulation across multiple frequency bandwidths. Experiments are performed on image reconstruction, few-shot novel view synthesis, and image synthesis to show the effectiveness of the proposed method for generalizable implicit neural representation. Strengths: - This paper studies an important problem in implicit neural representation and the motivation is clear. - This paper is generally well structured and the proposed method is easy to follow. - Experiments are sufficient to show the effectiveness of the proposed method. Weaknesses: - This paper mainly emphasizes locality awareness but fails to provide a formal definition of the notion of "locality". It would be better to provide a formal formulation and some intuitive examples. - Many recent works tend to explore hybrid neural representations (e.g., feature maps with small MLPs), while this paper only considers pure MLPs. It would be interesting to have a further discussion about this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Locality: Definition and Intuitive Examples]** We will include the following additional explanation of the concept of locality for readers' better understanding. Although it is challenging to provide a formal definition of ‘locality’, we can provide a conceptual description and intuitive examples for clear understanding. Note that the term locality in the operating-systems field is also described only informally. In our study, ’locality’ refers to the idea that the features/attributes of data points are highly correlated when the points are located close to each other in the input space, i.e., when the distance between them is small. For example, two nearby pixels in a 2D image are likely to have similar colors. Thus, our framework learns to extract latents corresponding to specific local regions for effective data representation, leveraging the inherently high correlation of features within local regions. **[Discussion about a Hybrid Neural Representation]** Our framework and recent hybrid neural representation approaches commonly extract latents from local regions to predict features at continuous coordinates. However, our framework does not assume an explicit grid structure for the local latents, unlike recent hybrid neural representations that assume an explicit grid structure of the data (e.g., a 2D grid for an image or a 3D grid for a 3D object) to extract the latents of each region [NewRef: InstantNGP]. Thus, our framework can be viewed as a broader version of hybrid neural representations, capable of learning the local structure across diverse data types. [NewRef: InstantNGP] Müller, Thomas, et al. "Instant neural graphics primitives with a multiresolution hash encoding." ACM Transactions on Graphics (ToG) 41.4 (2022): 1-15. --- Rebuttal Comment 1.1: Comment: Thank you for your response; it helped clear up some of my concerns. I would like to keep my original rating.
Summary: Generalizable implicit neural representations (INRs) can represent multiple data instances with a coordinate-based neural network whose weights or intermediate features are modulated using instance-wise latent codes. A significant constraint of current generalizable INRs is their struggle to localize and capture fine-grained details of data entities. This limitation diminishes the expressive power of the modulation. This research tackles the problem by presenting an innovative framework for generalizable INRs that merges a transformer encoder with a locality-aware INR decoder. Two key components are designed to enable capturing the local information of data: selective token aggregation and multi-band feature modulation. The authors demonstrate the effectiveness of their method with state-of-the-art performance on both image reconstruction and novel view synthesis. Furthermore, they show the potential of the proposed generalizable INR on the conditional image synthesis task. Strengths: - **Scope and relevance**: This paper studies an important topic that endows INRs with generalization ability. - **Technical contribution**: The authors present a novel locality-aware INR decoder that improves the expressive power of the modulation by learning locality-aware representations from data. - **Experiments**: The authors conduct extensive experiments and demonstrate significantly improved performance on multiple benchmarks. - **Clarity**: The paper is well-written. Weaknesses: Some technical details might need further examination. - **Selective Token Aggregation**: In Section 3.3.1, cross-attention is applied to extract a modulation vector $\mathbf{m}_\mathbf{v}$ from the coordinate $\mathbf{v}$ and the latent tokens. Can the cross-attention be replaced with standard transformer attention whose input is the concatenation of the latent tokens and the frequency features of the coordinate?
Can other operations that fuse these two inputs also work well? - **Multi-Band Feature Modulation**: The authors use a range of frequency bandwidths to predict the details of the outputs, with a deeper MLP path learning higher-frequency features. If I am not mistaken, more learnable parameters are introduced into the coordinate-based neural network. Is it possible that the improved performance stems from the increased model size? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In addition to the above weaknesses, here are two more questions about conditional image synthesis with the extracted latent tokens (Section 4.4 and Appendix A.3): - To realize conditional image synthesis, do the authors mean training a diffusion model to generate latent tokens with pre-trained generalizable INR models? - In lines 522-523, "We drop 10% of class conditions for our model to support classifier-free guidance." Why 10%? Does this percentage matter? Overall, this paper is a good effort. I will raise my rating if the authors can address my concerns. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors haven't adequately addressed the limitations and the potential negative societal impact of their work. I would suggest showing some failure examples, in which the proposed model does not work well, so as to let the community know the boundaries of this work. Moreover, considering the critical importance of AI safety, it could be beneficial to discuss any potential adverse societal impacts that may arise from this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Standard Transformer Attention for Selective Token Aggregation]** Although the standard transformer’s self-attention can be used to predict the modulation vector for a coordinate input, we adopt cross-attention for computational efficiency. Using self-attention over the concatenation of the latent tokens and the frequency features of the coordinate results in a computational cost of $O((R+1)^2)$, where $R$ denotes the number of localized latents. In contrast, cross-attention requires only $O(R \times 1)$. While other operations, such as k-NN, can also be used for selective token aggregation, cross-attention is a natural and intuitive design choice within a modern deep learning architecture. **[Multi-Band Feature Modulation]** In Section 4.1, the number of trainable parameters for our framework is 44.14M, while IPC has 43.75M parameters. That is, the increase is 0.4M parameters, which is 0.9% of the IPC framework's parameters. Although our framework uses 0.9% more total parameters than IPC, the performance improvement is significant and does not depend on the increased parameter count. To verify this, we increased the hidden dimension of the MLP, $d$, for IPC and trained IPC on FFHQ 256x256, as shown in the table below. Increasing the number of MLP parameters also improves the performance of IPC. However, the results show that our performance improvement does not come from a simple increase in trainable parameters but from effective architecture design.

| FFHQ 256x256 | # parameters | d | R | PSNR |
|:------|:------:|:------:|:------:|:------:|
|Ours | 44.14 M | 256 | 256 | **39.88** |
|IPC [19] | 43.75 M | 256 | 256 | 34.68 |
|IPC [19] | **65.31 M** | **1024** | 256 | 38.43 |

**[Training a Diffusion Model]** Yes, we utilize a pre-trained generalizable INR model to train a diffusion model for image generation.
Specifically, after our generalizable INR is trained on ImageNet 256x256 to extract the localized latents of images, the extracted latents are randomly corrupted by Gaussian noise to train a diffusion model. **[10% Drop of Class Conditions]** We follow the original paper on classifier-free guidance and conventional settings for training class-conditional diffusion models. While we have not conducted an ablation study on the probability of unconditional training (class-condition drop), the original paper on classifier-free guidance reports the following findings: when $p_\text{uncond} \in \{ 0.1, 0.2, 0.5 \}$ is ablated, $p_\text{uncond}=0.5$ consistently performs worse than $p_\text{uncond} \in \{ 0.1, 0.2 \}$. Thus, it was concluded that only a small portion of the model capacity is needed for the unconditional generation task in classifier-free guidance. **[Limitations]** We will add the following discussion of the limitations of our study. Although our framework significantly improves image reconstruction performance for INRs, there is still room for improvement, especially in high-resolution image reconstruction, such as 1024x1024. Our experiments on novel view synthesis have been conducted only on a category-specific and synthetic dataset (ShapeNet). Since our framework can be applied to generation tasks, our study is not exempt from the potential negative societal effects of generative models. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgment Comment: I thank the reviewers for their detailed clarification, which has resolved my concerns about their work. Therefore, I raise my rating to accept. --- Reply to Comment 1.1.1: Title: Thank you for engaging in the discussion and increasing the score. Comment: Dear reviewer, Thank you for engaging in the discussion and increasing the score. We sincerely appreciate your time and dedication in this matter.
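As a concrete illustration of the cost argument in the selective-token-aggregation answer above, here is a minimal single-head sketch (hypothetical NumPy code, not the authors' implementation, with the learned query/key/value projections omitted for brevity): one coordinate's frequency-feature query attends over the $R$ latent tokens at $O(R)$ cost, instead of the $O((R+1)^2)$ cost of self-attention over the concatenation.

```python
import numpy as np

def aggregate_tokens(coord_feat, latent_tokens):
    """Cross-attention of a single coordinate query over R localized latent
    tokens; returns a modulation vector m_v as a convex combination of the
    tokens (learned Q/K/V projections omitted for brevity)."""
    d = latent_tokens.shape[-1]
    scores = latent_tokens @ coord_feat / np.sqrt(d)  # (R,) one attention row
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over the R tokens
    return weights @ latent_tokens                    # (d,) modulation vector
```

Because the output is a convex combination of the tokens, each component of `m_v` lies between the component-wise minimum and maximum of the latent tokens.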
Summary: This paper enhances the performance of generalizable INRs by improving their locality awareness. The transformer encoder feeds on patches of an image and produces latent tokens with local information, which are later extracted as modulation vectors. The proposed INR decoder, with selective token aggregation and multi-band feature modulation, learns locality-aware representations in the spatial and spectral domains, respectively. Experiments show that the proposed method outperforms previous methods on both reconstruction and downstream image generative tasks. Strengths: 1. The proposed method makes a lot of sense for enhancing the locality of INRs via patch processing, attention in transformers, etc., and it reaches very good results compared to previous SOTAs. The method is clearly illustrated, and the results are well displayed both quantitatively and qualitatively. There are also many ablations and discussions/analyses going into the details in depth. The supplementary material also provides more details, discussions, and results. 2. Besides the main reconstruction task, the proposed method is also able to perform a few generative tasks well, again better than previous SOTAs. 3. Figure 5 (and Figure 12 in Supp) shows interesting results on replacing latent tokens with null ones, unveiling some underlying rationales of the learned space. 4. Source code is provided in the supplementary material. Weaknesses: 1. [Core] An INR usually refers to a neural network that stores information in its model weights instead of feature maps. The proposed modulation vector m_v seems to be more like a latent code (in an encoder-decoder framework). I hope to see a clear clarification of what the parameters of the proposed INR are composed of, i.e., how much is feature maps and how much is model weights.
In addition, if most (or almost all) parameters of the "INR" are just feature maps, then it might not be very difficult to perform those downstream generative tasks, as it is essentially more an improved VAE than an independent INR. I'm wondering how this could help the displayed performance. 2. [Core] The INR model size / compression ratio (of the decoder or modulation vectors, i.e., the non-shared parts that independently form a single INR representing a single image) seems not to be mentioned in the paper. How big is the produced INR for representing one image of 256x256, for example? The paper has conducted a few ablations on the model structure details, including Table 4 and Table 6, while I'm wondering about a more overall metric, such as the number of parameters in an INR, its size in MB, or BPP (bits per pixel). How does this compare to other methods such as TransINR or IPC? (The above two points are my main concerns and the game-changer for my assessment.) 3. Figure 2 shows the core mechanism clearly, but in addition I think a specific figure showing the produced INR itself (e.g., many layers or blocks duplicated) would make the full pipeline clearer to understand. 4. Table 5 seems not to be mentioned in the main body text nor illustrated anywhere (while L271 should be referring to Figure 4, as a typo). Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: 1. The provided ablations are on reconstruction performance only. While this is the main task and objective, I'm also wondering how these ablations perform on the other downstream applications in the paper, including novel view synthesis and image generation. Do any options or hyperparameters make a difference for these applications? Or did you find you don't have to pay careful attention to them in these applications in your experiments? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: 1. More discussions and analyses of the learned INR model weight space would be welcomed and inspiring. How do you think the locality awareness may improve the smoothness of distances in the INR space? For example, in LDM they applied a KL loss or VQ regularization on the VAE (although not big) to help the distances in the latent space (feature map) be consistent with those in the pixel space. Do you think this might be one of the points that lets the proposed method reach better results in generative tasks, including novel view synthesis and image generation? 2. IPC also conducts experiments on downstream classification tasks. It would be great to see similar results for your proposed method and comparisons with them. (Or do you think there might be any difference in design making your method better suited for generative tasks than classification? Any discussion is welcomed.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[The Portion of Localized Latents in an INR]** The localized latents account for 9% of the INR parameters (=65,536/725,446), which is a small portion of the INR. Since the shape of the extracted localized latents is 256x256, we emphasize that the size of the localized latents is equivalent to **one weight matrix of a single MLP layer** with 256 hidden dimensions. Meanwhile, as the reviewer comments, our framework can also be interpreted as an improved version of an encoder-decoder (INR) architecture, since we also use a hypernetwork to predict the latents for modulating INRs. A conventional VAE has a decoder of substantial size, with tens of millions of parameters (>10M), which is used to generate or reconstruct an image from feature maps. However, our INR decoder has a much smaller number of parameters (0.73M), its role being simply to represent a data instance itself as an INR. **[The Size of the Non-Shared Part]** The size of the non-shared latents is fixed to 256x256 in all experiments for TransINR, IPC, and our framework, including experiments on high-resolution images such as FFHQ 1024x1024. Thus, TransINR, IPC, and our framework require 0.25MB (=32-bit x 256 x 256) to specify a data instance as an INR. The compression ratio is 130%, 33%, and 8% for 256x256, 512x512, and 1024x1024 images, respectively. Compared with previous studies (e.g., COIN [10]), which exploit an INR for data compression and combine quantization techniques to improve the compression ratio, generalizable INRs show worse data compression performance, since they have focused on improving the reconstruction performance. Considering that our framework significantly improves the reconstruction performance of a generalizable INR, combining quantization techniques with localized latents will be an interesting direction of future work for data compression. We will attach this discussion to our revised manuscript.
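The byte counts quoted in this answer can be checked with a few lines of arithmetic. This is our own illustrative check, assuming 32-bit latents and uncompressed 24-bit RGB images:

```python
# 32-bit floats, 256x256 localized latents per data instance.
latent_bytes = 4 * 256 * 256
assert latent_bytes == 262_144  # = 0.25 MB per instance

# Ratio of latent size to raw 8-bit RGB image size at each resolution.
ratios = {}
for side in (256, 512, 1024):
    image_bytes = side * side * 3
    ratios[side] = latent_bytes / image_bytes
# ratios[256] ~ 1.33, ratios[512] ~ 0.33, ratios[1024] ~ 0.083,
# matching the ~130%, 33%, and 8% figures above.
```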
**[Explanation about Figure 2]** Figure 2 shows the produced INR in our experiments, illustrating the number of duplicated layers or blocks. We will add a detailed explanation to ensure understanding. **[Referring to Table 5]** Thanks for the detailed comment. We will change “Figure 5” in line 271 to “Figure 6” and explicitly mention Table 5 in Section 4.4. **[Ablation Studies on Novel View Synthesis and Image Generation]** Since training a diffusion model for image generation is expensive, we instead attach an ablation study on the novel view synthesis of ShapeNet-Lamp with 3 support views. The results below show that both selective token aggregation (STA) and multi-band frequency modulation (multiFM) improve the performance of our framework.

| | ImageNette | FFHQ | Lamps-3 views |
|:------|:------:|:------:|:------:|
|Ours | 37.46 | 38.01 | 26.00 |
|w/o STA | 34.54 | 34.52 | 25.35 |
|w/o multiFM | 33.90 | 33.65 | 25.78 |
|IPC [19] | 34.11 | 34.68 | 25.09 |

**[More Discussions and Analyses on the Learned INR]** As the reviewer suggests, we believe that locality awareness can also enhance the smoothness of the INR space. Figure 6 in our submission shows that the generated latents of IPC cannot provide realistic local details, since an artifact affects all coordinates, as described in Lines 303-305 and Figure 5. The results imply that the INR space of IPC is sensitive to slight variations or corruptions, while our framework has a smoother INR space than IPC. Meanwhile, the regularization techniques (e.g., KL loss or VQ regularization) that the reviewer mentioned can also be applied to our framework, making the INR space even smoother. However, when employing KL regularization with a 1e-6 loss weight, the PSNR on ImageNet 256x256 decreases from 37.7 to 30.7. We have also changed the KL regularization in LDM to the deterministic version and fixed the predicted covariance to an isotropic Gaussian; the reconstruction PSNR then becomes 33.9.
We believe that further exploration of hyperparameters can enhance our framework, and we expect that a smoother INR space can help a diffusion model improve the performance of downstream tasks. **[Experiments on Downstream Classification Tasks]** We have checked that IPC does not conduct an experiment on any downstream task, but Spatial Functa [4] has conducted an experiment on a downstream classification task. We have not yet been able to conduct the classification task due to limited computational resources during the rebuttal period. However, we expect results similar to Spatial Functa, since the performance on generative tasks is also similar to Spatial Functa. We do not assume that our framework is better suited to generative tasks than to classification. Instead, we regard a generative task as more difficult than classification. Note that a generative task requires learning all local data details, while a classification task requires understanding the global semantics. We will do our best to reserve additional computational resources for conducting experiments on classification tasks and include the results in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your clear and informative responses. If I'm understanding it correctly, is the localized latent vector the only non-shared part for each INR? Is the decoder shared for all INRs, or does each INR have its own decoder, whose weights are generated by the transformers given the specific image? I appreciate the authors' explanations for my Q1 and Q2; however, I am still some way from understanding it completely. For example, in Q1 it is said that the localized latent is only part of the INR (65,536/725,446). But in Q2 it uses 32-bit x 256 x 256 to calculate the whole size of an INR for the compression ratio. In the main paper L103 it is said that "A generalizable INR uses a single coordinate-based MLP as a shared INR decoder $F_\theta$". 
I'd appreciate it if the authors could provide more detailed illustrations. In Figure 2, does the shallow yellow background indicate the whole components of one INR? In that case it seems that the decoder is dedicated to each INR? But it doesn't show how its parameters (e.g. the Fourier-FCs) are produced? --- Reply to Comment 1.1.1: Comment: Dear Reviewer KNfA, We appreciate your active participation and valuable comments in the ongoing discussion. Please find below our responses to your inquiries. We will also revise the paper accordingly. - The shallow yellow background in Figure 2 indicates the whole components of one INR. Here, we remark that the localized latent vectors are the only parts unique to each data instance, while the remaining parts are shared across data instances. Given that an FC represents a linear layer, the parameter count for the FCs, which are shared among data instances, corresponds to a weight matrix of dimensions $d \times d$, with $d$ equal to 256. - Although each data instance requires 725,446 parameters (32-bit x 725,446), we compute the compression ratio from the localized latent vectors of size 256x256 (32-bit x 256 x 256), since the remaining parts of the decoder (725,446-65,536=659,910) are shared across data instances. That is, while a generalizable INR uses a single coordinate-based MLP as a shared INR decoder (i.e., the remaining parts of the decoder), the decoder is specialized by the localized latent vectors to represent each data instance. Thus, to represent $N$ data instances, the number of required parameters is 659,910 + 65,536 x $N$, while conventional INRs require 725,446 x $N$.
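The parameter accounting above can be double-checked with a short calculation (a sketch using only the figures quoted in this thread; the helper name `total_params` is ours):

```python
# Parameter accounting for the shared INR decoder vs. per-instance latents,
# using the figures quoted in the response above.
SHARED = 659_910          # decoder parameters shared across all instances
PER_INSTANCE = 256 * 256  # localized latent vector per data instance (65,536)
CONVENTIONAL = 725_446    # parameters of one conventional (non-shared) INR


def total_params(n_instances: int) -> int:
    """Total parameters needed to represent n_instances with a shared decoder."""
    return SHARED + PER_INSTANCE * n_instances


# Storage per instance at 32 bits (4 bytes) per parameter:
bytes_per_instance = 4 * PER_INSTANCE  # 262,144 bytes, i.e. 0.25 MB

# For N instances the shared-decoder scheme needs 659,910 + 65,536 * N
# parameters, versus 725,446 * N for conventional INRs.
print(total_params(1))     # -> 725446 (matches one conventional INR)
print(total_params(1000))  # -> 66195910, vs. 725446000 for conventional INRs
```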
Summary: This paper aims to improve the expressive power of neural implicit function modulation by enhancing its ability to localize and capture fine-grained details of data samples. A novel framework for generalizable INR is proposed that combines a transformer encoder with a locality-aware INR decoder. It further utilizes selective token aggregation and multi-band feature modulation to learn locality-aware representations in both spatial and spectral aspects. Strengths: + combine locality-aware transformer encoder with global neural implicit function to improve the representation quality of local details + use selective token aggregation for spatial locality + use multi-band feature modulation for spectral locality Weaknesses: - All the quantitative results are reported with the PSNR metric. More evaluation metrics emphasizing local details should be used. - Limited results reported on novel view synthesis and conditional image generation. - In conditional generation, the latents are corrupted by Gaussian noise and denoised with a trained diffusion model. If the Gaussian noise is small, the task becomes trivial. A comparison experiment should be done for directly reconstructing with corrupted latents using the implicit neural function. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can we interpolate between the latents of two data samples to generate a new sample? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Experiment results are limited in evaluation scale and in-depth analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[More Evaluation Metrics to Emphasize Local Details]** We compare our framework with IPC [19] on local details using HF-PSNR, which calculates PSNR using only the high-frequency components of an image. We also considered other evaluation metrics such as rFID, SSIM, and LPIPS during the rebuttal period. However, for the reviewer’s request to emphasize local details, PSNR is the most suitable metric, since it measures per-pixel errors, while the others measure structural or perceptual quality rather than local details. The metrics shown in the table below are PSNRs between the ground-truth image and the reconstruction on ImageNette 178x178, after the bottom 5%, 10%, 20%, 40%, and 80% of frequencies are filtered out. The results show that our framework outperforms the previous study, IPC, in reconstructing various ranges of high-frequency details.

| | remove 5% | remove 10% | remove 20% | remove 40% | remove 80% |
|:---|:---:|:---:|:---:|:---:|:---:|
| IPC | 38.81 | 39.18 | 40.35 | 43.37 | 50.20 |
| Ours | 46.26 | 46.38 | 46.88 | 48.77 | 54.48 |

**[Limited Results on Novel View Synthesis and Conditional Image Generation]** We emphasize that our experiments on novel view synthesis are not insufficient compared with previous studies: we follow the experimental protocol of previous studies such as TransINR [8] and IPC [19], using the same setting as IPC [19]. Compared with TransINR [8], which trains only with 1-2 support views, we provide experimental results with 1-5 support views. While our framework and earlier research have concentrated on category-specific and synthetic datasets, we anticipate that our performance improvements on novel view synthesis open the potential to extend our framework to open-domain and large-scale 3D object datasets, such as Shap-E [NewRef: Shap-E], in future work. 
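The frequency-filtered PSNR described above can be sketched as follows (our own minimal implementation, assuming a sharp radial high-pass mask in the 2D Fourier domain and single-channel images; the exact filter behind the reported numbers may differ):

```python
import numpy as np


def highpass_psnr(gt, recon, remove_frac, max_val=1.0):
    """PSNR computed after removing the lowest `remove_frac` of spatial
    frequencies from both images (sketch; assumes H x W grayscale arrays)."""
    h, w = gt.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)          # frequency magnitude per bin
    mask = radius >= remove_frac * radius.max()  # keep only high frequencies

    def filt(img):
        # high-pass filter: zero out the masked-off low-frequency bins
        return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

    mse = np.mean((filt(gt) - filt(recon)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

With `remove_frac=0` this reduces to ordinary PSNR; larger values score only the high-frequency residual, which is how a metric of this kind emphasizes local detail.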
For conditional image generation, we have trained a diffusion model on ImageNet 256x256, since ImageNet 256x256 is the renowned and standard benchmark for evaluating class-conditional image generation. Extending the range of conditional generation, such as layout-to-image or text-to-image, would be interesting future work. However, we consider an in-depth analysis of training diffusion models to generate INRs beyond the scope of this study. If the reviewer suggests detailed experiments to add, we will attach the experimental results in the revised version if possible. [NewRef: Shap-E] Jun, Heewoo, and Alex Nichol. "Shap-E: Generating conditional 3D implicit functions." arXiv preprint arXiv:2305.02463 (2023). **[The Magnitude of Gaussian Noise for Diffusion Model]** The task of denoising Gaussian noise is not trivial, since we follow the conventional setting to train diffusion models. Given a diffusion time step $t$, a Gaussian noise $\epsilon$ is added to the localized latents $\mathbf{Z}^{(n)}$ as $\sqrt{\alpha_t} \mathbf{Z}^{(n)} + \sqrt{1-\alpha_t}\, \epsilon$, where $\alpha_0=1$ and $\alpha_T=0$. When the diffusion time step is close to $t=0$, the added noise is small. However, when the diffusion time step is close to $t=T$, the latents $\mathbf{Z}^{(n)}$ become Gaussian noise after noise addition, making the denoising task nontrivial. We note that the generation process of diffusion models starts from pure Gaussian noise to generate INR latents. **[Interpolation of the Latents of Two Data]** Although we can interpolate the latents of two samples, the outcomes resemble a straightforward linear interpolation between the two images rather than semantically meaningful new samples.
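For reference, the forward noising described above can be written out in the standard DDPM parameterization (a sketch; `alpha_t` denotes the cumulative noise schedule with $\alpha_0=1$ and $\alpha_T=0$, and the function name is ours):

```python
import numpy as np


def noisy_latent(z, eps, alpha_t):
    """Corrupt localized latents z with Gaussian noise eps at schedule value
    alpha_t, following the conventional DDPM forward process:
    sqrt(alpha_t) * z + sqrt(1 - alpha_t) * eps."""
    return np.sqrt(alpha_t) * z + np.sqrt(1.0 - alpha_t) * eps
```

At `alpha_t=1` ($t=0$) the latents are untouched; at `alpha_t=0` ($t=T$) the result is pure Gaussian noise, which is why sampling can start from isotropic noise.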
Rebuttal 1: Rebuttal: We appreciate all reviewers' constructive comments to improve our paper. We have tried our best to sincerely respond to all concerns and questions. Pdf: /pdf/478c6702ee396bc1650a5de17ccae4d9ddfee984.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper aims to enhance the performance of generalizable implicit neural representations (INRs) via locality-aware model designs. A transformer encoder converts image patches into latent tokens, from which the proposed locality-aware decoder, composed of selective token aggregation and multi-band feature modulation, predicts outputs. Experiments on tasks including image reconstruction and novel view synthesis demonstrate the proposed method's state-of-the-art performance. Strengths: + The proposed locality-aware decoder containing selective token aggregation and multi-band feature modulation is novel and effective. + Extensive experiments show that the proposed method achieves good performance. + The paper is clear and easy to follow. Weaknesses: - It is not very straightforward to grasp the idea of "locality" in this work. Is there any empirical evidence, such as a visualization of selective token aggregation, to reveal how locality is enhanced? - Analysis and comparison of model/runtime efficiency are missing. - Some important implementation details are missing, e.g. what is the value of L, and what is the impact of this hyperparameter? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - No analysis for failure cases. - Qualitative results in the paper are limited, and should be more diverse. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[The Idea of Locality]** In our study, “locality” refers to the tendency of features within a data instance to be highly correlated when their corresponding coordinates are close. For example, in a 2D image, nearby pixels tend to have similar RGB colors. Thus, “locality-aware" implies that our latent tokens learn to contain the information of a local region for efficient and effective representation of a data instance. Figure 5 provides empirical evidence of enhanced locality awareness; the previous study, IPC, overlooked modeling the local information of a data instance. Figure 5 shows that each latent of IPC affects the features of most coordinates, failing to model local information. In contrast, each latent in our framework covers a certain local region, demonstrating the locality awareness of our framework. **[Analysis and Comparison of Model Efficiency]** For the experiments on ImageNette 178x178 in Figure 1, our framework has 0.9% more parameters (44.14M) than IPC (43.75M trainable parameters). The runtime of IPC is 78 seconds per training epoch, while our framework takes 90 seconds. However, Figure 1 shows that our framework is significantly more efficient and effective than IPC and TransINR, despite a 15% longer runtime per training epoch. **[The Value of $L$]** Our experiments use $L=2$, as described in Line 206 and Line 238, where $L$ is the number of frequency bandwidths. We will explicitly specify the value of $L$ in our revised version. Experiments studying the impact of the hyperparameter $L$ are included in Appendix B.1 of our supplementary material. 
**[Analysis for Failure Cases]** Although our framework significantly improves the performance of a generalizable INR, the reconstruction at 1024x1024 image resolution is still incapable of perfectly recovering all high-frequency details in the original images. In addition, the qualitative results on novel view synthesis show blurry examples due to the lack of a generative-modeling training objective for synthesizing unseen views. We will add this discussion of failure cases to inform the research community about the limitations of this study. **[Diverse Qualitative Results]** Please refer to the attached supplementary material for more diverse qualitative results. Our supplementary material includes various examples of novel view synthesis with different numbers of support views, image reconstruction at 256x256, 512x512, and 1024x1024 resolutions, class-conditional image generation on ImageNet, and additional visualization for locality analysis. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thanks for the response; it resolved some of my concerns. I would like to keep my rating.
Summary: The paper focuses on the task of training a single coordinate-based neural network to represent multiple scenes or instances. There are two main technical contributions that improve the quality of these generalized representations: (1) a Transformer-based encoder that extracts localized features of each target instance (e.g. image or scene), and adaptively weights these localized features during inference at different coordinate locations, (2) the coordinate-based neural network operates coarse-to-fine in the frequency domain, taking in high-frequency instance-specific features at earlier layers and lower-frequency instance-specific features at later layers, so that higher-frequency features are processed by a deeper network. The paper includes compelling results on medium and high resolution image datasets, and promising preliminary results on few-shot novel view synthesis. Ablation studies show that of the two technical contributions, neither alone yields improvement but both together do, compared to the main baseline Instance Pattern Composers. Strengths: These two technical ideas make sense, particularly the idea of having localized features, and the results on image datasets are very compelling—though some important omitted details make these difficult to fully interpret, as does my own lack of familiarity with the baseline methods. Ablation studies and more in-depth analysis of learned features (figure 5) are also interesting. Weaknesses: The paper writing/presentation could be substantially improved. The first roughly 2 pages of the paper are laden with jargon (and some typos/grammatical issues), making the actual new ideas in the paper difficult to tease out. As a reader who has worked in implicit neural representations but not generalizable ones, many parts of the paper required some assumptions or terms that might be common to those who work in generalizable INRs but unfamiliar to researchers in even a very adjacent area. 
Some examples: what exactly is a latent (ie what is the input used to produce a latent)? What is modulation (in signal processing this would be element-wise multiplication)? The methods section does largely (though not fully) clarify the method and the new ideas, but leaves me wondering about the motivation behind some of the design decisions that are stated without much explanation. I list these remaining questions in the “questions” section of the review, along with some questions about the experimental setup of the results. One separate comment/suggestion is about the clarity of the figures. Figure 1 shows that the proposed method trains a lot faster/better than two baselines, but doesn’t explain what the task is (just the dataset). Figure 2 gives a helpful overview, but doesn’t actually explain the two core new ideas of the paper. For example, from the figure and caption I can’t tell what is different about the two yellow blocks (one is for high-frequency features and one for low-frequency features, but I didn’t find this out until I read 2 pages past the figure). Some terms in the figure are also not defined; for example I assume that “FC” means fully-connected, but I’m not sure how this differs from “Linear”. Figure 3 shows a compelling comparison to prior work, but I would encourage the authors to include the ground truth image for full comparison. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - I am still not fully sure what exactly is the input to the model during inference. Is it basically an autoencoder, taking an image as input and compressing it into latents and then decoding them into the image again? Likewise for the generative experiment (figure 6), how are these latents generated? - Equation 2 describes the Fourier featurization; it appears to use the same featurization as the original NeRF paper (with axis-aligned frequencies) rather than e.g. 
a random Gaussian (not axis-aligned) set of frequency vectors as in the “Fourier Features Let Networks Learn…” paper, though both of these papers are cited. I wonder why the authors chose to use the original Fourier featurization rather than the newer version? - Equation 5 describes multi-head attention as the mechanism for aggregating/weighting the localized latent features when performing inference on a given coordinate vector. I wonder if the authors considered a more straightforward kernel function (e.g. Gaussian kernel based on Euclidean distance, or even a learned kernel) or what the motivation was for using multi-head attention. - Equations 6 and 7 describe a process for separating the modulation vector into different frequency bands, by using a shallow MLP with inputs based on different Fourier features (in the desired frequency band). I wonder why the MLP is necessary here, compared to just taking the subset of Fourier features that are in each frequency band? - Tables 1 and 2 show compelling numerical comparisons vs Learned Init, TransINR, and IPC. The accompanying text notes that the capacity of the encoder, latent tokens, and decoder are matched among all methods, except for the modulation methods. So I wonder how the capacity of the modulation methods compares? - I also wonder what the total training time is on each dataset, compared to prior work. - Figure 7 shows that test-time optimization of all parameters outperforms per-sample optimization. How is this possible? I wonder if training longer would allow the per-sample FFNet to “catch up”, or if the FFNet here has lower capacity, or if there is some other explanation for how a generalized model could outperform a more specialized model. Maybe the generalized model is starting from a better initialization and thus reaches a better optimum? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: Limitations are not really discussed, beyond mentioning some directions for future work--but for this topic I believe this is appropriate. I do not foresee direct negative societal impact from this work, though like all image processing research it has the potential to be misused e.g. towards surveillance or other harmful ends. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Improvement of Introduction]** As suggested, we will revise our manuscript to clarify the terminology in the Introduction for better understanding, as follows. The latents refer to the outputs of our Transformer encoder, corresponding to the positions of learnable input tokens. The input of the Transformer encoder is the concatenation of data tokens and learnable tokens, where the number of learnable tokens equals the number of latent tokens. Weight/feature modulation refers to the modification of a weight/feature by element-wise multiplication or addition to adapt a shared weight/feature to a data instance. Thus, the Transformer encoder extracts a set of latent tokens to modulate the shared parts of generalizable INRs to represent a data instance. Our study is primarily motivated by a limitation of the SOTA methods, TransINR and IPC, which do not consider the locality in the data. We have elaborated on why the previous approach, IPC, cannot consider the local information of data in Lines 134-138 in Section 3.3.1. We will include the details of previous studies to clarify the motivation of our study and the limitations of previous studies. **[Typos and Grammatical Issues]** We will fix all typos and grammatical issues. **[Suggestions for the Clarity of Figures]** Reflecting the reviewer’s suggestions, we will clarify our figures as follows: - Figure 1: The task is image reconstruction. - Figure 2: The two frequency features have different bandwidths. We will replace “FC” with “Linear.” - Figure 3: We will add the ground truth image for detailed comparison. **[Inputs to the Model]** For an explanation of the input to the Transformer encoder, please refer to **[The Inputs of Transformer Encoder]** in our responses to Reviewer QDhX. Our framework for image reconstruction can be viewed as an autoencoder. 
However, our framework for synthesizing novel views should not be seen as an autoencoder, given that it involves the synthesis of previously unseen perspectives. We adopt a two-stage framework for conditional image generation. We first train our framework on ImageNet 256x256 to represent an image as a set of localized latents for INRs. After representing each image as localized latents, a diffusion model is trained following the experimental setting in Appendix A.3. After training, the diffusion model gradually denoises corrupted latents, starting from isotropic Gaussian noise, to generate new images. **[Fourier Featurization]** We adopt the Fourier featurization with axis-aligned frequencies from the original NeRF paper to ensure the stable and high performance of generalizable INRs. Since the random initialization of Fourier features [36] requires a careful selection of the variance for each sample, adopting random Fourier features deteriorates the performance of generalizable INRs, as shown below. We also emphasize that recent NeRF models [NewRef: Ref-NeRF] and generalizable INRs [8, 10] still adopt the Fourier featurization with axis-aligned frequencies.

| ImageNette 178x178 | PSNR |
|:-----|:-----:|
| IPC | 34.11 |
| Ours | 37.46 |
| IPC w/ random FF | 29.27 |
| Ours w/ random FF | 30.94 |

[NewRef: Ref-NeRF] Verbin, Dor, et al. "Ref-NeRF: Structured view-dependent appearance for neural radiance fields." 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. **[Multi-Head Attention in Selective Token Aggregation]** We exploit multi-head cross-attention for selective token aggregation, since cross-attention is an intuitive choice within the context of modern deep learning architectures. As a learned kernel function, cross-attention can consider various similarity patterns between projected queries and keys, instead of a tailored similarity pattern. 
As shown below, adopting multi-head cross-attention improves the reconstruction performance, compared with single-head cross-attention, which is close to a learned kernel function.

| | ImageNette 178x178 | FFHQ 256x256 |
|:------|:-----:|:-----:|
| Ours (2 heads) | 38.72 | 39.88 |
| w/ single-head | 37.46 | 38.01 |

**[A Linear Layer in Multi-Band Feature Modulation]** We add a linear layer in Equations (6) and (7) to exploit complex frequency patterns, improving the performance. While the Fourier features consist of periodic patterns along an axis, the frequency patterns in Equation (6) can also include non-periodic patterns. Note that IPC [19] also uses a similar design, modulating the second MLP layer to exploit complex frequency patterns. The linear layer in Equation (7) is used to process the modulation vector according to each frequency bandwidth, motivated by the design of separate projections for (query, key, value) in self-attention. The results below also show that removing the linear layers in Equations (6) and (7) significantly deteriorates the image reconstruction performance on ImageNette 178x178.

| | ImageNette 178x178 |
|:----|:----:|
| Ours | 37.46 |
| w/o Linear in Eq. (6) | 31.95 |
| w/o Linear in Eq. (7) | 32.07 |
| w/o Linear in Eq. (6) and (7) | 31.57 |

**[The Capacity of the Modulation Methods]** The modulation capacity is determined by the size of the latents representing instance-specific information, excluding the shared weights. TransINR, IPC, and our framework commonly use latents of size 256$\times$256 as instance-specific information to modulate a shared architecture. **[Total Training Time compared to Prior Work]** Please refer to our responses to Reviewer wXRn. **[Performance of Test-Time Optimization]** While per-sample optimization of FFNet starts from a random initialization, our framework provides a good initialization of an INR with high performance, as shown in Figure 7. 
Since our test-time optimization also updates all parameters of the INR, it is reasonable to expect that the improved initialization can reduce training costs. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses many of my concerns, but I still find the explanations in the paper to be needlessly confusing--even the first paragraph of the rebuttal does not really explain its terms (or rather, it explains terms by introducing more terms, which doesn't really help). I'm still on the fence about this paper because although the results are impressive quantitatively and the high-level ideas make sense, I'm not sure that the explanations are sufficient for other researchers to really understand what is going on (at least not without putting in a lot more effort as a reader than is common) or to build productively on the work. I won't stand in the way of acceptance given the strength of the results, but I do strongly encourage the authors to make sure that the writing/presentation in the final version is clear to readers in very adjacent (e.g. single-scene representation) if not identical research areas, to maximize the potential impact of the paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer AhKU, To avoid any confusion, we will revise our definitions by citing earlier works that provide more detailed definitions, including modulations [NewRef: Film] and the latents [NewRef: Perceiver], since these terms are frequently used in subsequent works without detailed definitions. [NewRef: Film] Perez, Ethan, et al. "Film: Visual reasoning with a general conditioning layer." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018. [NewRef: Perceiver] Jaegle, Andrew, et al. "Perceiver: General perception with iterative attention." International conference on machine learning. PMLR, 2021.
Summary: The paper tackles the important problem of bridging the gap between generalizable implicit neural representations (INR) and per-sample trained ones. The core hypothesis is that previous generalizable INRs failed to capture local details in the global latent code due to their inductive bias. The idea of the proposed method is to equip learning such INRs with a Transformer encoder, which selects and encodes local information into multiple tokens. The paper elaborates on all parts of the pipeline and presents the empirical study on image reconstruction and novel view synthesis with ShapeNet. Overall, the proposed method shows a great advantage over the prior art. Ablation studies are thorough. Strengths: The idea behind the method is novel and interesting. The paper borrows the best ideas from other domains, including transformers and NeRF. Experimental studies on FFHQ and ShapeNet are convincing. The paper is well-written and rather polished. Weaknesses: The introduction is a bit too broad and needs more specificity about the proposed method. The statements in the title, abstract, and intro set expectations for something more generic than what can be processed by a transformer encoder. The frequency decomposition mechanism is not very well explained. Particularly, the choice of two blocks for frequency decomposition is never explained nor ablated. Would the performance keep growing if more of those blocks had a different frequency coverage scheme? The locality property of the produced tokens is also not very clear. On the one hand, the authors claim that the tokens are locality-aware; on the other hand, that thanks to permutation equivariance of the attention mechanism in the transformer encoder, the tokens form an unordered set. It would be better to explain these aspects more carefully (including in the rebuttal). List of typos - L49: outperformance Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Fig. 
2, how do the transformer inputs form in each considered setting (image / NVS)? Does the framework require retraining everything from scratch for every new resolution? Can any of the pretrained backbones be used? Can the authors think of a way to visualize the effect they discuss regarding ordering frequencies based on the depth of the layer in the decoder? What do the results look like for the best possible setting for 1Kx1K resolution in Table 2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussed adequately in the conclusion. However, it would be interesting to learn about dealing with arbitrary resolutions, or other forms of inputs not directly amenable to transformer encoders. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Broad Introduction]** We will revise the Introduction by adding the following specific description of our transformer encoder. Specifically, we will add detailed explanations about the use of cross-attention in selective token aggregation to Lines 43-44. The cross-attention is used, for each coordinate, to extract the information of the latents, which are the outputs of the Transformer encoder. We will also describe how multi-band frequency decomposition is designed in Lines 45-46. **[Choice of the Number of Blocks for Frequency Decomposition]** We will revise the explanations for a better understanding of the frequency decomposition mechanism. In addition, Appendix B.1 includes the ablation study the reviewer mentioned on the number of blocks $L$ for frequency decomposition. For image reconstruction of FFHQ 256$\times$256, 512$\times$512, and 1024$\times$1024, the performance improves as $L$ increases from 1 to 3, but saturates at $L \geq 3$. In our main experiments, we use $L=2$, considering the trade-off between computational costs and performance. **[Clarifying Locality Property of the Produced Tokens]** Figure 5 demonstrates the locality property of the produced tokens: each latent token captures the local information of a data instance and affects the pixels/rays in a certain local area. We will clarify how the permutation equivariance of self-attention affects the design of our framework in terms of the locality property as follows. The permutation equivariance of self-attention in the Transformer encoder means that our framework does not assume local structures of the data and latent tokens. That is, we do not fix an ordering of the latent tokens, but treat them as an unordered set of local information. 
During training, each latent token learns to capture the local information of data, while covering whole regions to represent a data instance. This property enables our framework to be readily applied to diverse data with non-grid coordinates. For example, determining the order and size of local regions is not straightforward for Plücker coordinates of rays. However, self-attention enables each local latent to capture the local information of data, while the set of local latents represents the information of whole rays. **[Typos]** Thanks for the detailed comment. We will fix all typos in our manuscript. **[The Inputs of Transformer Encoder]** A transformer input is the concatenation of image patches and learnable tokens, as described in Appendix A.1 and A.2. For image reconstruction, an image is represented as a set of patches, where each patch has $P \times P$ size. We use P=9, 16, 32, 48 for 178$\times$178, 256$\times$256, 512$\times$512, and 1024$\times$1024 resolution, respectively. For 178$\times$178 and 1024$\times$1024 resolution, images are zero-padded to make the size evenly divisible by the patch size. We use zero-padding of 1 and 16 pixels on every side, respectively. For novel view synthesis, we use the Plücker coordinate to represent the information of the rays in the rendering view of a 3D object. Given rendering images of support views, we concatenate the ray coordinates with pixels along the channel, and then patchify the support views using $P=8$ patch size. Then, we concatenate the patches of all support views with learnable tokens for the input of our Transformer. **[Retraining for new resolution]** Yes, we train our framework for each resolution of the dataset. **[The Use of Pretrained Backbones]** Since conventional pretrained models are not trained for the latents of INRs, we believe that such pretrained backbones cannot be utilized within our framework. 
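For concreteness, the zero-padded patchification described in our response above can be sketched as follows (a simplified NumPy illustration, not our actual implementation; e.g. a 178$\times$178 image with $P=9$ is padded by 1 pixel per side to 180$\times$180):

```python
import numpy as np

def patchify(image, patch_size):
    """Split an H x W x C image into non-overlapping P x P patches,
    zero-padding the borders so H and W become divisible by P."""
    h, w, c = image.shape
    pad_h = (-h) % patch_size
    pad_w = (-w) % patch_size
    # split the padding across both sides (e.g. 1 px per side for 178 -> 180)
    image = np.pad(image, ((pad_h // 2, pad_h - pad_h // 2),
                           (pad_w // 2, pad_w - pad_w // 2),
                           (0, 0)))
    h, w, _ = image.shape
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    # each row is one flattened patch token of length P * P * C
    return patches.reshape(-1, patch_size * patch_size * c)

tokens = patchify(np.zeros((178, 178, 3)), 9)  # 180/9 = 20, so 400 patches
```

The resulting patch tokens are then concatenated with the learnable tokens before being fed to the Transformer encoder.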
**[Visualization of the Effect regarding Frequency Ordering]** In the additional pdf file for author responses, we visualize the effect of frequency ordering on the reconstruction of high-frequency details. We visualize the pixel-wise reconstruction error for models trained on ImageNette 178x178 with $(\sigma_1, \sigma_2)=(128, 32)$ and $(\sigma_1, \sigma_2)=(32, 128)$. Our design choice $(\sigma_1, \sigma_2)=(128, 32)$ shows superior performance in reconstructing high-frequency details of data. **[Qualitative result in 1024x1024 resolution]** Appendix B.3 includes the qualitative results for 1024x1024 image reconstruction. **[Limitations]** Thanks for the constructive comments. We agree that extending our framework to support arbitrary resolution will be an interesting future work. We will also describe the limitation of our framework that requires an amenable form of input for transformer encoders. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I thank the authors for carefully responding to my questions and concerns. I also checked the other reviews and ongoing conversations, especially in this thread https://openreview.net/forum?id=XqcXf7ix5q&noteId=mtClW0NJlm , and I could find resonating and well-justified arguments on both sides, the authors, and Reviewer KNfA. One thing is clear - works in our domain are getting harder to disseminate due to the lack of rigorous and established processes for many aspects of scientific writing. In this case - whether INR is more correctly defined as "baking the entire instance in the weights" (Reviewer's KNfA point of view) or "regardless of the size of the instance-specific code, it is an INR, because it expects an extra coordinates input" (the authors), is a matter of naming conventions. In my view, the "I" in INR is primarily borrowed from the notion of implicit functions, which were used as a representation for SDFs, encoding surfaces implicitly. 
The implicitness in question stems from the fact that one has to actually solve for isosurfaces of F(x) = 0. NeRFs are also implicit due to the ray marching color accumulation process. It would be good if the authors of the paper took some time to carefully work through all the terms and notation, and explained with references what makes their representation implicit, what the overall field of INR properties consists of, and how generalization fits this landscape at all. Currently, generalizable INRs only fit the definition proposed by the authors and rather do not fit the other two; this should be reconciled. Considering the paper's empirical contribution and discounting for the naming conventions, I would like to keep my score, conditioned on the authors properly addressing writing and maybe setting some notation for the subsequent works. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s discussion and feedback. In particular, we fully agree with your emphasis on “rigorous and established processes for scientific writing,” and feel that these discussions with reviewers greatly help in that sense indeed. We are confident that our final camera-ready can address all such presentation issues. We agree with the reviewer’s view that the "I" in INR is primarily borrowed from the notion of implicit function formulation, while we have focused more on the concept of baking an instance in neural network weights in the discussion with reviewer KNfA. One of the early INRs, SIREN (NeurIPS’20), also presents that the “I” of INRs stems from solving an implicit problem formulation of $F(\mathbf{x}, \Phi, \nabla_\mathbf{x} \Phi, \nabla^2_\mathbf{x} \Phi, …)=0$, where $\Phi: \mathbf{x} \mapsto \Phi(\mathbf{x})$ and $\mathbf{x}$ is a coordinate, and casts the optimization of this implicit formulation into a loss function in Eq. (3) of the paper. We believe the formulation in the SIREN paper would be a good starting point for us to reconcile different views and applications.
Solving for the isosurfaces of SDFs, which the reviewer mentioned, can be described as shown in Eq. (6) and Section 4.2 of the paper, and our image reconstruction can also be formulated as the same kind of implicit problem, as shown in Section 3.1 of the paper, although it is simpler than SDFs. Light fields, which replace the volume rendering process of NeRFs with directly predicting the rendering results, also preserve the form of an implicit problem, while simplifying the optimization problem of NeRF into a simple first-order optimization problem [Ref-1]. Note that what we wanted to emphasize in the discussion with reviewer KNfA is that the notion of “implicit” does not restrict the scope of instance-specific parameterization. Our view is not different from that of the implicit problem formulation above; generalizable INRs follow the implicit problem formulation, with the function $\Phi$ conditioned as $\Phi(\cdot | \mathbf{Z})$, where $\mathbf{Z}$ is the latents of each data instance. Since $\Phi$ adopts a parameterized coordinate-based neural network, the implicit representation of data corresponds to (part of) the parameters of the coordinate-based neural network under the implicit formulation. As the reviewer requested, we will do our best to revise our paper with clear notions and rigorous formulations for INRs based on the extensive discussions with the reviewers. [Ref-1] Sitzmann, Vincent, et al. "Light field networks: Neural scene representations with single-evaluation rendering." Advances in Neural Information Processing Systems 34 (2021): 19313-19325.
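To make the conditional formulation $\Phi(\cdot | \mathbf{Z})$ concrete, here is a toy sketch with made-up dimensions (the actual framework conditions on latent tokens via cross-attention; plain concatenation of the latent with the coordinate is used here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D_LAT, D_HID = 8, 16  # made-up latent and hidden sizes

# Random (untrained) weights standing in for a learned coordinate MLP,
# shared across all data instances.
W1 = rng.standard_normal((2 + D_LAT, D_HID)) * 0.5
W2 = rng.standard_normal((D_HID, 3)) * 0.5

def phi(x, z):
    """Toy conditional INR: 2-D coordinate x plus instance latent z -> RGB.

    z plays the role of the instance-specific code Z; only z changes
    between data instances, while W1/W2 stay fixed.
    """
    h = np.tanh(np.concatenate([x, z]) @ W1)  # SIREN would use sin(.) here
    return h @ W2

z = rng.standard_normal(D_LAT)        # latent code of one data instance
rgb = phi(np.array([0.25, 0.75]), z)  # query the field at a coordinate
```

Fitting an instance then amounts to minimizing $\sum_{\mathbf{x}} \lVert \Phi(\mathbf{x} | \mathbf{Z}) - I(\mathbf{x}) \rVert^2$ over $\mathbf{Z}$ (and the shared weights during training), which is the loss-function form of the implicit problem above.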
Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation
Accept (poster)
Summary: This paper proposes an interesting pipeline for large pre-trained model-based UDA, and realizes latent subspace tuning for continuous adaptation. Strengths: 1. The motivation behind this article is very meaningful, and the proposed method supports the motivation well. 2. In terms of methodology, the design of this paper is clear and innovative in its combination of existing techniques. 3. This simple but effective baseline and the latent subspace tuning ability may provide a new paradigm for future UDA. 4. Good performance. Weaknesses: Using CLIP as the backbone may limit MPA to classification tasks only; how can MPA be extended to other tasks, like segmentation, referring, etc.? And how can other large pre-trained models like GPT-3 be utilized? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors should provide the limitation of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Using CLIP as the backbone may limit the ability of MPA on only classification tasks, how to extend MPA to other tasks, like segmentation, referring, etc?* As a matter of fact, there are many other CLIP-based segmentation/referring works [1] [2] [3], all of which use CLIP as the backbone. Therefore we believe that the CLIP backbone won't be an obstacle to transferring MPA to other tasks. [1] Lüddecke et al., Image Segmentation Using Text and Image Prompts [2] Liang et al., Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. [3] Wang et al., CRIS: CLIP-Driven Referring Image Segmentation > *And how to utilize other large pre-trained models like GPT-3?* One possible way to use other large pre-trained models is to use them for better text prompt generation. However, since CLIP is trained with its own GPT-2-like text encoder, further engineering techniques might be needed for a better adaptation. > *The authors should provide the limitation of their work.* Thank you for pointing this out! As a matter of fact, we have discussed a limitation of our work in the Conclusion section: "As the model contains useful clues of multiple domains, one potential limitation is that it faces more risks in terms of information leakage where adversaries might produce attacks to steal information from the model." --- Rebuttal 2: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer 34CN, we would be grateful if you could confirm whether our response has addressed your concerns. Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period. --- Rebuttal 3: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer, as the end of the rebuttal period is approaching, we would like to know whether our rebuttal has addressed your concerns.
Summary: This paper introduces prompt learning to multi-source unsupervised domain adaptation (UDA). Firstly, individual prompts for each source and target pair are learned using a contrastive loss. Then, MPA aligns the learned prompts via an autoencoder-based step with an L_1 constraint to generate consistent results for the same target domain image. In addition, the LST strategy efficiently transfers the adaptation learned on the first target domain to subsequent target domains. Experiments on ImageCLEF, Office-Home, and DomainNet validate the effectiveness. Strengths: + This paper's application of prompt learning to multi-source UDA problems is groundbreaking. + This paper is well-written and easy to understand. Weaknesses: + Regarding the prompt design part in Sec. 3.2, can it only be realized by following Ge [10]? There is a lack of novelty and contribution due to the absence of original work in the overall prompt design. + Regarding the problem of reducing the dimensionality of the high-dimensional learned prompt, is this step introduced because the prompt design is not optimal? From the perspective of Tab. 6, the improvement brought by the AE is very limited, so are other methods of dimensionality reduction effective, or can the AE step even be removed by accounting for redundant information in the prompt design? + The LST in Sec. 3.3 is a further extended use of MPA, which is somewhat insufficient as an independent innovation point. + This paper needs to weaken the sense of the existence of CLIP and the dependence on the previous prompt design method; otherwise it reads very much like a simple application essay. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In addition to the questions in the above weaknesses that need to be answered, there are a few things that need to be clarified. + The generalization of CLIP is a huge advantage. Although the authors discussed the impact of CLIP in 4.3, I hope that the authors will discuss the design experiments of the prompt.
In addition to using the original "a photo of [CLS]," the number of channels of the trainable prompt params can also be discussed. + Please state the problems and experimental results faced when the approach of Ge [10] is transferred to the tasks addressed in this paper. And for the problem and results, describe the differences of this paper. + For now, it is unknown why prompt, as a trained parameter, has class-specific and domain-specific attributes. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Regarding the prompt design part in sec3.2, can it only be realized by following Ge[10]?* There are other ways of implementing the prompt structure. For example, we had considered training just a naive soft prompt for each source-target domain pair, then concatenating them and applying an additional linear/convolutional layer to extract a "common" prompt for the target domain. However, empirically we find that this realization performs slightly worse than using Ge's: | | Art | Clipart| Product |Real World | Avg| |:-|:-:|:-:|:-:|:-:|:-:| |convolutional fusion |74.1 |54.0|85.3|85.2 |74.7| |Ge's |74.8 |54.9 |86.2 |85.7 |75.4| Yet, we would like to highlight that the success of MPA does not come from this prompt design alone. As presented in Tables 1 and 2, directly extending Ge's work to MDA would result in limited performance. Furthermore, we had also considered using both text and image prompts. However, since most of the SOTA methods use ResNet-based backbones, it is hard to incorporate image prompts into such architectures. Therefore, for a fair comparison, we chose not to use this strategy. > *Regarding the problem of reducing the dimensionality of the high-dimensional learned prompt, is this step introduced because the prompt design is not optimal? From the perspective of Tab. 6, the growth brought by AE is very limited, so are other methods of dimensionality reduction effective, or even the step of AE can be removed by considering redundant information in prompt design?* Thank you for the question! We would like to note that our intention is **reconstruction** rather than **dimension reduction**, and the reason is not the prompt design, but to "remove redundant information potentially stemmed from the discrepancies among all the source domains.", as stated in L55-57. This is one major reason why we kept the AE structure.
Another reason is that by leveraging the auto-encoder structure, especially its decoder, we are able to adopt our LST strategy, which allows efficient and effective adaptation to multiple target domains, which we believe is of practical importance. > *The LST in 3 sec 3.3 is a further extended use of MPA, which is somewhat insufficient as an independent innovation point.* We respectfully disagree that LST doesn't serve as an innovation point. While LST is based on the domain-invariant latent space found by MPA, it serves a different purpose and has a completely different design principle compared with MPA. Specifically, LST is most suitable in situations where adaptation to multiple target domains is needed. In such scenarios, MPA would require **all** source-target domain prompts to be repeatedly trained. On the contrary, for LST, all we need is to tune **one** prompt with only the target domain data, which significantly reduces the computational cost. As stated in L249, LST boosts the speed of adaptation on the DomainNet dataset by approximately 5 times. Such an improvement is far beyond marginal and we humbly believe that it serves as a solid contribution. > *This paper needs to weaken the sense of the existence of CLIP and the dependence on the previous prompt design method, otherwise very much like a simple application essay.* Thank you for the suggestion! We will revise accordingly. > *I hope that the author will discuss the design experiment of the prompt. In addition to using the original "a photo of [CLS]," the number of channels of the trainable prompt params can also be discussed.* As a matter of fact, the channels of the prompt parameters are fixed to 512-d in CLIP, as stated in L122. What could be changed is the length of the prompt tokens, i.e., the $M_1$ and $M_2$ in L140, and the ablation study on these parameters is discussed in Section 4.3 (Table 5).
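For intuition, the prompt layout with context lengths $M_1$ and $M_2$ can be sketched as follows (a hypothetical illustration with made-up values; the exact ordering and initialization of the parts follow the paper, and only the shapes are meant to be informative):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512          # token dimension, fixed by CLIP's text encoder
M1, M2 = 16, 16  # lengths of the two learnable context parts (hypothetical)

# Learnable context vectors for one class and one source-target pair:
shared_context = rng.standard_normal((M1, D))   # context shared across domains
domain_context = rng.standard_normal((M2, D))   # per-domain context tokens
class_token = rng.standard_normal((1, D))       # stand-in for the [CLS] name embedding

# The full soft prompt fed to the text encoder is their concatenation;
# only the context parts are updated during training.
prompt = np.concatenate([shared_context, domain_context, class_token])
```

Only the token lengths $M_1$/$M_2$ are tunable hyperparameters; the 512-d channel dimension is dictated by CLIP.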
> *Please state the problems and experimental results faced when the approach of Ge [10] is transferred to the tasks addressed in this paper. And for the problem and results, describe the differences of this paper.* The usual way of extending a single-source method to multiple sources (the task in this paper) is to apply it to the source-combined scenario, where all source domains are combined into one single domain, and this is what we have done in the paper (DAPL in Source Combined). However, by doing so, one major problem is that no strategy is employed for dealing with the domain gap among the source domains, which often produces unsatisfactory performance. The experimental results for doing so are already shown in Tables 1 and 2, and we have put them in the tables below for reference: ImageCLEF: | | C | I | P | Avg | |:-|:-:|:-:|:-:|:-:| |DAPL in Source Combined | 96.0|89.2 |76.0 | 87.1 | |MPA | 98.6|96.2 |80.4 | 91.7 | OfficeHome: | | Art | Clipart | Product | Real World | Avg | |:-|:-:|:-:|:-:|:-:|:-: | |DAPL in Source Combined |72.8 |51.9 |82.6 |83.7 | 72.8 | |MPA|74.8 |54.9 |86.2 |85.7 | 75.4| DomainNet: | | Clp | Inf | Pnt | Qdr | Rel | Skt | Avg| |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DAPL in Source Combined |62.4 |43.8 |59.3 |10.6 |81.5|54.6 | 52.0 | |MPA |65.2 |47.3 |62.0 |10.2 |82.0 |57.9 | 54.1 | Here we can see MPA surpasses DAPL in the Source Combined setting by an average of 3.1%, indicating the efficacy of our approach. The reason for this performance gain is that unlike DAPL, in MPA we treat each source domain and target domain pair independently, and further incorporate our alignment strategy for dealing with the domain gap among the source domains. > *For now, it is unknown why prompt, as a trained parameter, has class-specific and domain-specific attributes.* Great question! The intuition is that if we use a hard prompt like "a [Domain] of [CLS]", then for each class, the embedding of [Domain] will remain the same while that of [CLS] will change.
We hope this is what the soft prompt will learn through training. --- Rebuttal Comment 1.1: Title: Additional Comments Comment: Thank you to the authors for the careful responses that solved most of my questions. I'm keeping my score unchanged, mainly considering the lack of novelty of the proposed prompt method and the closeness of the implementation of LST to MPA. I appreciate the motivation for this article, which is the main reason for me to hold the current opinion. It would be better to do some prompt and structural innovation. --- Rebuttal 2: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer UUa1, we would be grateful if you could confirm whether our response has addressed your concerns. Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period. --- Rebuttal 3: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer, as the end of the rebuttal period is approaching, we would like to know whether our rebuttal has addressed your concerns.
Summary: This paper deals with the multi-source domain adaptation problem. It proposes to tune the designed domain-invariant prompts and domain-specific prompts to enable the domain adaptation ability. Generally, the training consists of two objectives, i.e., the individual prompt learning objective and the de-noising objective via prompt auto-encoders. To enable a test-time domain adaptation ability and reduce the number of learnable parameters, the authors propose LST. Experiments on various multi-source benchmarks verify the effectiveness of the proposed method. Strengths: - The idea is simple and generally reasonable. - The experiment results are good. Weaknesses: - In LST, I don't think it is reasonable to use randomly initialized representations as the input of the back-projector to perform prompt reconstruction, as the latent representations aren't subject to the same distribution. And there is no empirical evidence to show such a way really works. - Missing comparisons to strong baselines. E.g., on DomainNet, [1] achieves a multi-source domain adaptation performance of 53.2 with a ResNet-101 backbone. The authors should compare to the strong baselines. [1] Contrastive adaptation network for single- and multi-source domain adaptation, TPAMI 2020. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I don't see any serious potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *In LST, I don't think it is reasonable to use randomly initialized representations as the input of the back-projector to perform prompt reconstruction, as the latent representations don't subject to the same distribution. And there is no empirical evidence to show such a way really works.* As pointed out in L130, the learned latent space is supposed to be domain-invariant. Besides, this representation is further tuned with pseudolabels of the target data. Therefore, even though the representations are subject to different distributions, the final result shouldn't be affected. As for empirical evidence, there are actually a few research papers that use random initialization when tuning the latent space. For example, in [1], 1000 latent vectors are randomly initialized to find the best performing one. In [2], it is found that randomly initialized representations are better at generating pictures other than faces. Here the representations are also of a different distribution (faces vs. cats, dogs, etc.). [1] Wen et al., Diamond in the rough: Improving image realism by traversing the GAN latent space [2] Abdal et al., Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space? > *Missing comparisons to strong baselines. E.g., on DomainNet, [1] achieves a multi-source domain adaptation performance at 53.2 with a ResNet-101 backbone. The authors should compare to the strong baselines.* Thank you for pointing this out! We will include [1] in our paper for a more comprehensive comparison. We would also like to note that since [1] performs clustering on all source domains at each iteration, our method is much more efficient. | | Clp | Inf | Pnt | Qdr | Rel | Skt | Avg| |:------------|:-----:|:---:|:---:|:-----:|---:|---:|---:| | MSCAN |69.3 |28.0 |58.6 | 30.3 |73.3|59.5 | 53.2 | |MPA |65.2 |47.3 |62.0 |10.2 |82.0 |57.9 | 54.1 | [1] Kang et al., Contrastive adaptation network for single- and multi-source domain adaptation.
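As background for the pseudolabel-based tuning mentioned above, here is a generic sketch of confidence-thresholded zero-shot pseudo-labeling from precomputed CLIP-style features (a hypothetical illustration; the function name and the threshold value are ours, not from the paper):

```python
import numpy as np

def pseudo_labels(img_feats, txt_feats, tau=0.6):
    """Confidence-thresholded zero-shot pseudo-labels from CLIP-style features.

    img_feats: (N, D) image embeddings; txt_feats: (K, D) text embeddings of
    the K class prompts (e.g. "a photo of [CLS]"). Returns the indices of the
    confident target samples and their predicted classes.
    """
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = 100.0 * img @ txt.T  # CLIP-style scaled cosine similarity
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    confident = probs.max(axis=1) >= tau  # keep only high-confidence samples
    return np.nonzero(confident)[0], probs.argmax(axis=1)[confident]
```

Only samples whose softmax confidence exceeds the threshold would contribute to the tuning step; ambiguous samples (near-uniform probabilities) are dropped.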
--- Rebuttal 2: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer tF9m, we would be grateful if you could confirm whether our response has addressed your concerns. Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period. --- Rebuttal 3: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer, as the end of the rebuttal period is approaching, we would like to know whether our rebuttal has addressed your concerns.
Summary: The paper proposes an extension of [10] (Domain Adaptation via Prompt Learning, Ge et al., 2022) to the multi-source UDA set-up. (i) Distinct soft prompts are learnt via contrastive loss for each source-target pair; each source-target prompt is composed of class-wise source- and target-prompts. In the target domain, learning is achieved by utilizing pseudo-labels from the pretrained CLIP model, which possesses strong zero-shot ability. To encourage consistent target outputs across all learned prompts, a consistency L1 loss is applied to the soft outputs of all target samples. (ii) Prompt reconstruction: to eliminate redundant information that may impede performance, an autoencoder (AE) is learned to "denoise" the acquired prompts. (iii) Latent Subspace Tuning (LST): efficient adaptation to a new target domain, after learning from the first target domain, is accomplished by optimizing on the latent space of the learned AE using pseudo-labels of the new target. The proposed framework demonstrates superior results compared to previous State-of-the-Art (SOTA) methods in multi-source UDA. Strengths: The proposed method leverages the powerful zero-shot capability of the pretrained CLIP model for the multi-source domain adaptation (DA) task. Empirical results demonstrate its effectiveness across various benchmarking setups. The presentation is good, enabling easy comprehension of the method. Weaknesses: The main concern of this work is the lack of technical novelty and a more rigorous evaluation. The proposed framework is a straightforward extension of the work by Ge et al. (2022) [10] to the multi-source setting. In terms of method evaluation, it is expected to compare against stronger CLIP-based baselines, both in the multi-source UDA experiments (Tables 1 and 2) and in the LST experiment (Table 3). Further details are provided in the following section.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Given that CLIP alone produces results close to the state-of-the-art (SOTA) as shown in Table 1, and even outperforms the SOTA as shown in Table 2, my main concern is whether the authors have made adequate efforts to create decent baselines using CLIP-pretrained models. I appreciate their effort in Section 4.3 when they ``swap MFSAN’s ResNet50 backbone pre-trained on ImageNet to CLIP’s image encoder''. However, I would love to know more details, to see whether careful considerations were taken into account for MFSAN + CLIP or whether the authors simply swapped the backbone's weights. For example, did the authors preserve the text classifier and the contrastive loss of CLIP? As the proposed framework is based on self-training with pseudo-labels, I wonder how the CLIP + self-training and Single Prompt + self-training baselines perform. Furthermore, I am curious to know if CLIP with fixed prompts has been pushed to its maximum extent. In other words, how far can this naïve baseline reach with better prompt engineering instead of using the simple prompt "a photo of [CLS]"? One can imagine incorporating domain-specific information into the prompt or defining a set of templates rather than relying on just one template. In the LST experiment, how good is MPA (on the first domain) + self-training? For efficiency, one can try test time prompt tuning (TPT). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work lacks significant technical novelty, and the experiment could be improved by using stronger baselines. Based on the current state of the submission, my recommendation leans towards the negative side.
---- After rebuttal ---- The rebuttal is convincing and has helped clarify most of my technical concerns. I believe that this work is indeed interesting and helps advocate the usage of prompts in domain adaptation. While the novelty limitation still remains, the original set of experiments, along with the new ones provided during the rebuttal, is sufficient. I believe the paper after revision would certainly pass the NeurIPS bar. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Lack of technical novelty. The proposed framework is a straightforward extension of the work by Ge et al. to the multi-source setting.* We respectfully disagree. As shown in Tables 1 and 2, directly applying Ge's method to MDA produces limited performance. This is because their method lacks strategies for dealing with the domain gap among the source domains. On the contrary, Table 6 shows that with our proposed alignment strategy, domain shift among source domains is effectively reduced, resulting in a significant performance boost. > *Lack of a more rigorous evaluation. It is expected to compare against stronger CLIP-based baselines.* To the best of our knowledge, there are **no** other CLIP-based MDA baselines. To make up for this shortcoming, we proposed **3** CLIP-based baselines, i.e., the zero-shot CLIP baseline, the DAPL in the source combined scenario baseline, and a Simple Prompt baseline based on the paper of CoOp. Results from Tables 1 and 2 show that MPA consistently achieves better results than these methods. > *Given that CLIP alone produces good results, have the authors made adequate efforts to create decent baselines using CLIP-pretrained models.* We would like to highlight again that there are no other CLIP-pretrained baselines. Therefore, we believe **CLIP itself forms a decent baseline**. In addition to CLIP, we have also proposed a Simple Prompt baseline, and tested a SOTA method with a CLIP-pretrained backbone. More details on these baselines are in the following answers. We understand that concerns might be raised by using CLIP and have really tried our best to present a fair comparison. If you feel that these are insufficient, could you please suggest what other comparisons we could make? > *Whether careful considerations were taken into account for MFSAN + CLIP?* We actually tried keeping the text classifier with the contrastive loss, but finally decided not to do so. The reasons are as follows.
In MFSAN, a common feature extractor together with domain-specific feature extractors and domain-specific classifier heads are used. In order to keep the text classifier, two main technical difficulties need to be overcome: (1) The text classifier generated by CLIP depends on the input text prompt and would be the same for all domains if we use the naive "a photo of a [CLS]" prompt, which contradicts the design of MFSAN where domain-specific classifier heads are used; (2) In MFSAN, the features extracted from the common feature extractor are downsampled from 2048-d to 256-d, whereas the text classifier from CLIP is 1024-d. To solve the above issues, we could apply soft prompt methods and train a prompt for each domain to generate the domain-specific heads, followed by either applying another linear layer to reduce their dimension to 252-d or only downsampling the features to 1024-d. However, both settings produced unsatisfactory results: | | C | I | P | Avg | |:-|:-:|:-:|:-:|:-:| |Linear layer to 252-d | 21.0|37.8 |19.7 | 26.2 | |Downsample to 1024-d | 88.7|81.7 |74.2 | 81.5 | |No text classifier |96.7 |93.0 |77.7 |89.1 | Based on the above results, further techniques might be needed to acquire a good performance, which however, would most likely result in a completely new method and is beyond the scope of analyzing whether the performance gain of MPA comes from CLIP's pretrained visual backbone. Therefore, we chose not to keep the text classifier and only replaced the weights of the common feature extractor with CLIP's visual backbone weights. > *How the CLIP + self-training and Single Prompt + self-training baselines perform.* When using naive CLIP with manually designed prompts, there are no parameters needed for training (usually the image backbone is frozen). Therefore it is rather unclear to us what CLIP + self-training refers to. Did you mean CLIP with soft prompts + self-training? If so, this is exactly what the Simple Prompt baseline refers to in Tables 1 and 2.
In Simple Prompt we train a soft prompt using pseudolabels generated by CLIP, and the results show that simply doing so is ineffective in achieving good MDA ability. We are also unsure about the "Single Prompt + self-training baseline". Did you mean "Simple Prompt + self-training"? If so, self-training is already used in Simple Prompt. > *If CLIP with fixed prompts has been pushed to its maximum extent.* Thank you for the suggestion! Based on your comment, we tested two sets of prompts: (1) "a [Domain] of [CLS]" (e.g., a painting of dog) and (2) "a photo of [CLS] in domain [Domain]" (e.g., a photo of dog in domain painting). Their zero-shot results on Office-Home are shown in the following table:

| | Art | Clipart | Product | Real World | Avg |
|:-|:-:|:-:|:-:|:-:|:-:|
| a [Domain] of [CLS] | 71.7 | 52.4 | 74.9 | 81.0 | 70 |
| a photo of [CLS] in domain [Domain] | 68.1 | 53.1 | 81.3 | 82.0 | 71.1 |
| a photo of [CLS] | 71.5 | 50.2 | 81.3 | 82.4 | 71.4 |

We also tested MPA's performance with pseudolabels generated from these two new prompts:

| | Art | Clipart | Product | Real World | Avg |
|:-|:-:|:-:|:-:|:-:|:-:|
| a [Domain] of [CLS] | 74.1 | 55.1 | 82.2 | 85.0 | 74.1 |
| a photo of [CLS] in domain [Domain] | 72.7 | 55.7 | 87.1 | 85.2 | 75.2 |
| a photo of [CLS] | 74.8 | 54.9 | 86.2 | 85.7 | 75.4 |

In general, the default "a photo of [CLS]" still performs the best. While better handcrafted prompts might exist, due to time limitations we were unable to find significantly superior ones. > *In the LST experiment, how good is MPA (on the first domain) + self-training?* MPA on the first domain already uses self-training. The results are slightly worse than those in Table 2, as fewer source domains are used:

| | &rarr; Inf*,Clp | &rarr; Clp*,Inf | &rarr; Skt*, Pnt | &rarr; Pnt*, Qdr | &rarr; Qdr*, Rel | &rarr; Rel*, Skt |
|:-|:-:|:-:|:-:|:-:|-:|:-:|
| MPA | 46.7 | 64.6 | 57.5 | 62.2 | 10.1 | 81.8 |

--- Rebuttal 2: Title: Has our rebuttal addressed your concerns?
Comment: Dear reviewer tSjx, we would be grateful if you could confirm whether our response has addressed your concerns. Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period. --- Rebuttal 3: Title: Has our rebuttal addressed your concerns? Comment: Dear reviewer, as the end of the rebuttal period is approaching, we would like to know whether our rebuttal has addressed your concerns.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Regularized Behavior Cloning for Blocking the Leakage of Past Action Information
Accept (spotlight)
Summary: This paper introduces Past Action Leakage Regularization (PALR) for blocking the leakage of past action information during behavior cloning. Concretely, PALR focuses on the problem of a BC agent simply remembering past actions rather than learning a generalized behavior, which yields a degenerate policy. Specifically, PALR tackles the problem by learning a representation of the history that removes unnecessary past action information from the observations via conditional independence. PALR adopts the HSCIC metric to measure conditional independence, which alleviates the shortcomings of information-theoretic metrics. In their experiments, the authors compared different metrics for behavior cloning on a range of continuous control tasks. Strengths: This paper is overall well-written and easy to follow. The idea is well-motivated as well. Weaknesses: The paper has the following main issues. Firstly, the contribution of this paper is relatively incremental. The formulation of the BC regularization as conditional entropy is not new: Wen et al. [1] were the first to propose this formulation. As a result, the main contribution of this work is only the HSCIC regularization term. In addition, the paper claims that the focus of the work is POMDPs, where the past action leakage might come from the history. However, in the experiments, the only domain considered is state-based continuous control, which is a fully observable MDP. In my opinion, such a domain does not support the claim of the work, and is unsuitable for measuring the capability of the proposed method. More complex and natural POMDP domains should be considered, e.g., indoor navigation with pixel-based observations, to better support the claims. References: [1] Wen, C., Lin, J., Darrell, T., Jayaraman, D., & Gao, Y. (2020). Fighting copycat agents in behavioral cloning from observation histories. Advances in Neural Information Processing Systems, 33, 2564-2575.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How does the method compare with more advanced imitation learning algorithms that require interactions with the environment, e.g., DAC [1] and PWIL [2]? BC is a relatively weak baseline. Can this regularization term be used in these algorithms to further boost the performance? References: [1] Kostrikov, I., Agrawal, K. K., Dwibedi, D., Levine, S., & Tompson, J. (2018). Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925. [2] Dadashi, R., Hussenot, L., Geist, M., & Pietquin, O. (2020). Primal wasserstein imitation learning. arXiv preprint arXiv:2006.04678. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This paper has relatively limited novelty and insufficient experiment results. Given its current state, I think the paper is not ready for publication yet. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate your constructive and insightful comments. **1. Novelty in our work** We think that although the HSCIC regularization indeed holds significance within our work, our contribution extends beyond the introduction of the HSCIC regularization term: in the paper, we introduced a novel perspective on the issue of past action leakage through the lens of conditional independence. Our approach, which is derived naturally from this perspective and is implemented as a regularizer for conditional independence, covers FCA [A], HSCIC, and all the other measures for conditional independence. Note that an analogous approach based on conditional entropy, not conditional independence, is less general; for instance, it does not cover HSCIC. Identifying the important role of conditional independence in our problem setting is, we believe, an important conceptual contribution of our work. **2. Mujoco experiments in POMDP settings** We want to clarify that we have reconfigured the observations of Mujoco tasks to retain only positional information while excluding velocity information from observations, hence we conduct experiments on POMDP versions of Mujoco tasks. For further details, please refer to Section C.1 of the supplementary material. **3. Additional experiment on a complex domain** To see if our approach is effective on a complex domain, we perform an evaluation of our approach and the baseline methods within the CARLA environment. Please see our general response and Table A in the attached PDF file for more details. **4. BC as a fundamental baseline** Our focus is on offline IL-OH, excluding online interaction with the environment. While BC is simple, it's not weak in this context. In our experiments, BC's performance rivaled that of other baselines like FCA and MINE. Additionally, BC's role in simplifying training, away from intricate policy optimization, helps dissect the impact of different regularization methods. **5. 
Comparison with online imitation learning algorithms** We appreciate your comments and would like to provide clarification regarding our research focus. Our research centers on offline IL from observation histories (IL-OH), and while we acknowledge methods like DAC and PWIL, they focus on online IL with fully observable MDPs, which is beyond the scope of our present research. Please also note that our study addresses the challenge arising from the problematic phenomenon of past action information leakage, which is present in offline IL-OH problems. If such a phenomenon occurs in online IL, we believe that our regularization could effectively mitigate the spurious correlation between past expert actions and imitator actions. We leave this extension for future research. **References** [A] Wen et al., "Fighting Copycat Agents in Behavioral Cloning from Observation Histories.", NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal and efforts for additional experiments. They have addressed most of my concerns. I've changed my rating to 5, as this paper possesses the necessary qualities for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your positive response. We are glad to hear that your concerns are addressed.
Summary: This paper solves the past action information leakage problem in behavior cloning. The paper first formally defines this problem, and then provides some potential regularization methods. After a careful analysis of the pros and cons of each method, the authors decide to use HSCIC and validate its performance on several tasks. Strengths: - The flow of this paper is very clear: problem definition -- potential solutions -- analysis -- evaluation - In the evaluation, the authors not only show the improvement of the final performance of BC but also highlight the correlation between the performance and HSCIC regularization. This strongly supports that the regularization is the major contributor to the performance improvement. - A comprehensive ablation study and analysis are provided in the paper. Weaknesses: The major concern is how to apply this regularization to complex environments with images (or even multi-modality) as input and a high-dimensional action space (6 DoF or even more). If the proposed method can be applied to such complex tasks, I'm willing to increase my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The BC policy used in this paper is just a feed-forward policy p(a|s). Is it possible to apply the regularization to other kinds of policy, for example, an energy-based model [1] or even a diffusion model [2]? [1] Florence, Pete, et al. "Implicit behavioral cloning." Conference on Robot Learning [2] Chi, Cheng, et al. "Diffusion policy: Visuomotor policy learning via action diffusion." arXiv preprint arXiv:2303.04137 Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No explicit limitations discussion is provided in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We hold your thoughtful viewpoints on our work in high regard. **1. Effectiveness on complex tasks** To validate that past action leakage regularization is effective on complex tasks, we conducted an experiment in the CARLA environment. Please see our general response and Table A of the PDF. **2. Applicability to complex policy architectures** We appreciate your consideration of different policy models. While we chose a feed-forward policy due to task generality, we acknowledge the potential to extend our approach to more specialized policies. Methods like energy-based [1] and diffusion models [2], designed for tasks with visual inputs and complex action spaces, could enhance the applicability of our approach. Transitioning to these models is a direction we foresee exploring in adapting PALR for visuomotor control tasks. Furthermore, we've extended our investigation to the Decision Transformer (DT) [A] architecture to assess PALR's effectiveness on complex policies, as detailed in our general response and Table B of the PDF. **3. Limitations Section** A limitations section can be found in Section F of the supplementary material. **Reference** [A] Chen et al., “Decision Transformer: Reinforcement Learning via Sequence Modeling.”, NeurIPS 2021.
Summary: This paper proposes to use HSCIC (Hilbert-Schmidt Conditional Independence Criterion) to alleviate the leakage of information from past actions in behavior cloning from observation histories. The advantage of HSCIC compared with information-theoretic regularization is that it can be computed in closed form and does not require parametric assumptions on the data distribution. Experimental results show HSCIC is a good indicator of agent performance, and by directly minimizing HSCIC, the method consistently improves over naive BC and other baselines with different regularizations. Strengths: Experimental results show significant improvement over baselines across different locomotion tasks. Good analysis showing the negative correlation between HSCIC and agent performance, which motivates minimizing it directly in the training objective. Writing is clear and straightforward. Good comparison of kernel-based regularization and information-theoretic approaches. Weaknesses: Lack of comparisons with non-regularization-based approaches to tackle past action information leakage. For example, Key-frame focused visual imitation learning proposes a sampling/weighting approach and Fighting fire with fire: Avoiding DNN shortcuts through priming proposes to use additional prior information as inductive biases. Experiments are all on simple state-based tasks. It is hard to judge the performance on more complex domains, e.g. visual imitation learning, robotic manipulation or autonomous driving. Kernel-based approaches have time complexity depending on the number of samples. It would be worth comparing the computation tradeoff between different approaches. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How well does the proposed method work on more complex environments, e.g. Atari, Carla, etc.? Could you show some quantitative results on how the agents avoid failure due to leakage of previous actions? Are there any heuristics for tuning the hyperparameter alpha?
Could this regularization be combined with other approaches (e.g. information bottleneck, sampling) to give better results? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors state their hyperparameters are sensitive and hard to tune without interaction with the environment. No broader impact section found in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We value your considerate insights and invaluable suggestions concerning our work. **1. Comparisons with other non-regularization approaches** Thank you for the pointers to the missing important related work. We recognize that both of your suggestions are valid comparisons for our approach. During the remaining rebuttal period, we will do our best to evaluate these two approaches and update our experiments. **2. Experiments in complex domains** We additionally conduct an evaluation of our method and the baselines in the CARLA environment provided in the D4RL dataset. Please see our general response and Table A of the PDF file. **3. Time complexity comparison** It is worth noting that the computation time of our PALR implementation increases with the mini-batch size (denoted $M$), but it does not depend on the total number of data samples; the kernel regression is applied to each mini-batch, not to the entire training set. Furthermore, there are techniques for computing the HSCIC estimator efficiently [A]. In contrast, current information-theoretic approaches often require multiple iterations of inner optimization (denote by $k$ the number of inner iterations) to obtain reliable estimations of the quantities of interest (e.g. mutual information or entropy). In practice, the elapsed time of these approaches can be significantly higher than that of PALR. To compare computation costs, we measured the wall-clock time of one policy update for our method and the baselines ($M$ = 1024, $k$ = 5, 1000 repetitions):

| | BC | RAP | FCA | MINE | PALR |
| --- | --- | --- | --- | --- | --- |
| Big-O | $O(M)$ | $O(M)$ | $O(kM)$ | $O(kM)$ | $O(M^3)$ |
| Average Elapsed time (ms) | 3.038 | 5.295 | 12.203 | 19.854 | 15.64 |

[Machine specification] CPU: Intel(R) Core(TM) i7-4770 @ 3.40GHz (4 cores), GPU: Titan X, Memory: 32 GB

Our approach demonstrates scalable time complexity, particularly with moderate batch sizes, for example 512 to 1024.
This ensures that PALR remains scalable and effective in various practical scenarios. We will include a detailed discussion of the computation tradeoff comparison in the appendix of our revised manuscript. **4. Quantitative analysis** To enhance the credibility of our experimental interpretation, we broaden our quantitative analysis. Firstly, we assess the HSCIC score across all 8 problem settings (refer to Table C in our provided PDF). Secondly, we examine conditional MI as an additional measure of conditional independence to robustly validate our argument (refer to Table D in the PDF). Please see our general response for further details. If you have any further questions about the additional quantitative results, please feel free to inform us. **5. Heuristics for tuning alpha** In the context of offline model selection for sequential decision-making, it is well acknowledged that identifying optimal hyperparameters from an offline dataset is a non-trivial task [B,C,D,E]. Systematic hyperparameter tuning in an offline setting remains an open challenge. In our empirical observations from Mujoco experiments, we note that effective coefficients typically fall within the range of 10 to 1000. Notably, this range tends to align the scale of the HSCIC regularization term with that of the BC loss term. **6. Combination with other approaches** PALR could potentially synergize with orthogonal regularization approaches, which we consider for future exploration. **References** [A] Quinzan et al., “Learning Counterfactually Invariant Predictors”, arXiv 2022. [B] Hussenot et al., “Hyperparameter Selection for Imitation Learning.”, ICML 2021. [C] Zhang et al. "Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning." NeurIPS 2021. [D] Paine et al. "Hyperparameter Selection for Offline Reinforcement Learning." arXiv 2020. [E] Lee et al., “Batch Reinforcement Learning with Hyperparameter Gradients.”, ICML 2020.
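To make the closed-form, per-mini-batch nature of the HSCIC computation concrete, here is a minimal NumPy sketch of the standard kernel-ridge-regression form of the HSCIC estimator. This is an illustration under our own assumptions (RBF kernels, a fixed ridge parameter `lam`, hypothetical function names), not the authors' actual PALR implementation:

```python
import numpy as np

def rbf_gram(A, sigma=1.0):
    # RBF (Gaussian) Gram matrix from pairwise squared distances.
    sq = np.sum(A ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hscic(X, Y, Z, lam=0.01, sigma=1.0):
    """Batch estimate of HSCIC^2(X, Y | Z), averaged over the batch.

    One O(M^3) linear solve per mini-batch of size M, matching the
    complexity discussed above; no dependence on the full dataset size.
    """
    n = X.shape[0]
    Kx, Ky, Kz = rbf_gram(X, sigma), rbf_gram(Y, sigma), rbf_gram(Z, sigma)
    # Kernel ridge regression: column j of W holds the weights w(z_j).
    W = np.linalg.solve(Kz + lam * n * np.eye(n), Kz)
    KxW, KyW = Kx @ W, Ky @ W
    term1 = np.sum(W * ((Kx * Ky) @ W), axis=0)   # w^T (Kx o Ky) w
    term2 = np.sum(W * (KxW * KyW), axis=0)       # w^T ((Kx w) o (Ky w))
    term3 = np.sum(W * KxW, axis=0) * np.sum(W * KyW, axis=0)
    return float(np.mean(term1 - 2.0 * term2 + term3))
```

Each per-point value is a squared RKHS norm, so the estimate is nonnegative and shrinks when X and Y are conditionally independent given Z; adding this quantity (scaled by a coefficient alpha) to the BC loss is the regularization pattern the rebuttal describes.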
--- Rebuttal Comment 1.1: Title: Update in Comparisons with Non-regularization-based Approaches Comment: To address your concern regarding the lack of comparisons with non-regularization-based approaches, we have introduced additional baseline methods. As you suggested, we have included two additional baselines [F, G] and expanded our evaluation to cover all tasks included in our manuscript and rebuttal (4 POMDP versions of Mujoco tasks and the CARLA environment). The results are summarized in the following table:

| Task | W | BC | RAP | FCA | MINE | PALR | Keyframe [F] | PrimeNet [G] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| hopper | 2 | 32.47 ± 2.85 | 20.19 ± 1.38 | 31.89 ± 2.54 | 24.98 ± 1.89 | **42.01 ± 2.44** | 32.01 ± 1.86 | 29.98 ± 1.57 |
| | 4 | 47.65 ± 3.43 | 32.61 ± 2.62 | 36.90 ± 2.35 | 37.60 ± 3.14 | **58.39 ± 2.76** | 45.74 ± 0.95 | 45.31 ± 2.77 |
| walker | 2 | 53.04 ± 2.69 | 15.82 ± 2.03 | 63.11 ± 2.69 | 58.62 ± 5.52 | **79.83 ± 2.29** | 49.97 ± 2.32 | 48.50 ± 3.25 |
| | 4 | 63.15 ± 6.28 | 25.39 ± 2.14 | **81.88 ± 3.26** | 68.71 ± 6.66 | **83.42 ± 5.43** | 77.37 ± 1.97 | **79.17 ± 3.30** |
| halfcheetah | 2 | 74.08 ± 2.33 | 63.90 ± 2.14 | 78.24 ± 2.80 | 76.29 ± 1.87 | **86.44 ± 1.09** | 64.26 ± 1.41 | 61.47 ± 1.90 |
| | 4 | 68.35 ± 2.60 | 58.97 ± 2.66 | 69.89 ± 2.64 | 73.4 ± 2.35 | **79.05 ± 4.28** | 55.71 ± 4.14 | 45.51 ± 1.66 |
| ant | 2 | 56.25 ± 3.45 | 44.05 ± 1.19 | 51.08 ± 2.19 | 53.88 ± 1.87 | **59.57 ± 3.03** | 54.94 ± 1.68 | 51.72 ± 2.38 |
| | 4 | **64.39 ± 1.77** | 48.63 ± 2.63 | 57.73 ± 1.25 | 56.56 ± 1.76 | **64.64 ± 2.53** | 48.59 ± 3.75 | 58.18 ± 1.92 |
| carla-lane | 3 | 53.82 ± 7.66 | 20.15 ± 7.91 | 51.83 ± 7.91 | 60.62 ± 6.40 | **72.27 ± 2.62** | 66.36 ± 3.44 | 61.72 ± 1.53 |
| carla-town | 3 | -3.94 ± 1.68 | -7.64 ± 0.97 | -9.42 ± 0.47 | **0.13 ± 0.93** | -1.15 ± 1.01 | -6.68 ± 0.07 | -9.50 ± 0.33 |

For implementation details of [F], we used a softmax function as a weighting function for the weighted BC in Mujoco
tasks and a step function in CARLA tasks, akin to the original paper’s experiment. Similar to regularization-based methods, we selected optimal hyperparameters $\tau \in [0.01, 0.1, 1, 10, 100]$, which represents the temperature for the softmax function. For CARLA tasks, we used the same hyperparameters (threshold=0.1,weight=5) as in the CARLA experiment of the original paper. Please refer to the reply to our general response for other details. **References** [F] Wen et al., “Keyframe-Focused Visual Imitation Learning”, ICML 2021. [G] Wen et al., “Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming”, ICML 2022.
Summary: This paper aims to solve the problem of past action leakage from observation histories in the context of imitation learning. This is specifically applicable to imitation learning when considering a history of observations. The paper argues that information-theoretic (entropy- or MI-based) regularization requires training an additional network and a nested optimization; HSCIC, however, avoids both of these inefficiencies. The experiments show that (1) HSCIC is a good indicator of leakage, and (2) the proposed approach, which regularizes the BC loss with HSCIC, considerably outperforms other information-theoretic and vanilla baselines in offline imitation settings. Strengths: - To my knowledge this is a novel approach - The method is very interesting, and explained in a way that is easy to understand, and is also an innovative solution - The motivation is well described, and this is an important area to explore - The results show strong performance, and the analysis of the utility of HSCIC is insightful as well - The paper is well written In general, I believe this paper should be presented at the conference. Weaknesses: I think a couple of things could have been discussed: - How sequence modelling architectures can play a role here, such as transformer/Decision Transformer (Chen et al., 2021) - What are the different modes of leakage, and do they affect performance in different ways Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are sufficienlty addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback. **1. Applicability to sequence modeling architectures** We acknowledge the potential effectiveness of our approach in sequence modeling architectures. To explore this further, we have applied our regularization to the Decision Transformer architecture you suggested. Please refer to our general response and Table B in the provided PDF. **2. Different modes of leakage in imitation learning** The exploration of information leakage in imitation learning remains a promising direction of research, accompanied by several open questions. An analogy can be drawn to the concept of target leakage [A] in the context of data mining, where information produced from the targets is contained in the input data. Similarly, in imitation learning, a related phenomenon arises when information generated by the current expert action becomes embedded within the observation. In this situation, the imitator easily captures that spurious information to predict the expert action during training, but will fail at test time. This discrepancy between training and inference leads to harmful effects on performance. **Reference** [A] Kaufman et al., "Leakage in Data Mining: Formulation, Detection, and Avoidance.", KDD 2011.
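The leakage phenomenon discussed in this thread (the imitator shortcutting through the previous expert action instead of the observation) can be illustrated with a toy linear-regression sketch. All quantities here are synthetic and hypothetical, not taken from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = np.cumsum(0.01 * rng.normal(size=T))  # slowly drifting latent state
a = x.copy()                              # expert action a_t tracks the latent state
obs = x + 0.5 * rng.normal(size=T)        # noisy, partial observation of the state

# BC features: [current noisy observation, previous *expert* action].
# Because a_{t-1} is almost identical to a_t, it is a far "easier" predictor
# than the noisy observation -- exactly the past-action leakage shortcut.
feats = np.stack([obs[1:], a[:-1]], axis=1)
w, *_ = np.linalg.lstsq(feats, a[1:], rcond=None)
# The fitted weight on the leaked past action, w[1], dominates the weight
# on the actual observation, w[0].
```

At deployment the imitator must feed back its own imperfect past actions rather than the expert's, so a policy dominated by `w[1]` compounds its errors over time; this is the degradation that conditional-independence regularization is intended to prevent.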
Rebuttal 1: Rebuttal: # **General Response** We sincerely thank the reviewers for their insightful and detailed comments. Below, we address key questions and feedback that have been consistently raised by the reviewers. If there are any aspects that still need clarification or elaboration, we are more than happy to address them during the author-reviewer discussion period. ## **1. Experiment in CARLA Environment (Table A)** To demonstrate the effectiveness of our method in high-dimensional observation scenarios, we extended our evaluation to the CARLA environment, an image-based autonomous driving task. We only consider pixel images as observations, excluding other information such as velocity or sensor data. Leveraging the `carla-lane-v0` dataset provided in the D4RL dataset [A], we implement the imitator policy on top of the ResNet34 architecture; the ResNet34 parameters are fixed and used only for feature extraction from images. Detailed results are presented in Table A of the provided PDF file. Notably, PALR outperforms the baseline methods by a significant margin. This observation supports our conclusion that PALR can successfully enhance imitation performance in complex offline IL-OH problems. ## **2. Effectiveness on Decision Transformer Policy (Table B)** To assess the efficacy of our method in regularizing policies with complex network architectures, we conducted additional experiments employing the Decision Transformer (DT) [B], one of the prominent offline RL methods. Given that reward information is absent in the offline IL dataset, we inserted a reward input of 0 into DT's structure to retain its original configuration. Our regularization approach, termed DT-PALR, was applied to the last hidden state of DT. We evaluated both the standard DT and DT-PALR across three POMDP versions of Mujoco tasks, consistent with the scenarios outlined in Table 1 of the paper. The results are presented in Table B of the PDF file.
With the exception of the `halfcheetah` task, it becomes evident that DT's performance lags behind that of BC, as demonstrated in Table 1 of the manuscript. This divergence might be attributed to DT's larger network size, rendering it more susceptible to capturing spurious causal relationships in environments where access to complete states and rewards is restricted. Encouragingly, our results indicate a substantial enhancement in the performance of DT on the `hopper` and `walker2d` tasks when the DT-PALR method is applied. This observation strongly suggests the adaptability of our approach to intricate models. ## **3. Comprehensive Quantitative Analysis (Table C, D)** To enhance the reliability of the quantitative analysis provided in our manuscript, we comprehensively evaluate HSCIC scores ($\widehat{\mathrm{HSCIC}}^2(a^I_t, a^E_{t-1} \mid a^E_t)$) across all 8 problem settings (see Table C in the PDF file). The results demonstrate that our method consistently reduces the conditional dependence between $a^I_t$ and $a^E_{t-1}$ given $a^E_t$. Notably, PALR achieves the lowest HSCIC scores across all problem settings. Moreover, to provide a more robust assessment of conditional independence, we also estimate the conditional mutual information (CMI) $\hat{I}(a^I_t; a^E_{t-1} \mid a^E_t)$, which likewise measures the conditional dependence of interest (see Table D in the PDF file). As described in Section 4.2.1, the CMI can be decomposed by the chain rule into two MI terms, $I(a^E_{t-1}; \varphi_t, a^E_t)$ and $I(a^E_{t-1}; a^E_t)$, which we estimate separately using MINE [C]. The table shows that PALR presents lower CMI estimates than BC in 6 out of 8 problem settings. We will include these responses in the appendix of our revised paper. Once again, we extend our gratitude for the valuable feedback from the reviewers. ## **References** [A] Fu et al., “D4rl: Datasets for deep data-driven reinforcement learning.”, arXiv 2020.
[B] Chen et al., “Decision Transformer: Reinforcement learning via Sequence Modeling.”, NeurIPS 2021. [C] Belghazi et al., “Mutual Information Neural Estimation.”, ICML 2018. Pdf: /pdf/f74359b676c1195c7558e6a557fc7bb2f21c4280.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper addresses the information leakage problem of imitation learning with observation histories. To this end, the paper measures the leakage of past action information based on conditional independence and proposes Past Action Leakage Regularization (PALR) for behavioral cloning (BC). The experiments show that the proposed method outperforms four baselines on four MuJoCo continuous control environments. Ablation studies suggest PALR can improve BC with a proper coefficient. This work defines and studies an essential problem of imitation learning from partially observable environments. However, the experiments can be improved to make the work more concrete. Strengths: **Clarity** - The overall writing is clear. The paper gives clear descriptions in both theoretical and intuitive ways. The notations, formulations, and theorems are well-explained. **Ablation study** - Ablation studies are comprehensive. The provided ablation studies help understand the effectiveness of the regularization coefficient (Sec.5.2) and the target to apply regularization (Sec.D). **Reproducibility** - The code is provided, which helps understand the details of the proposed framework. - Given the clear description in the main paper and the details provided in the supplementary materials, I believe reproducing the results is possible. Weaknesses: **Method** - This work proposes a kernel-based method to regularize BC for imitation learning from partially observable environments. While I am aware of the advantages of kernel-based methods (stable, does not need additional networks or hyperparameters tuning), recent works use neural networks, such as variational model [1] or causal transformer [2], to capture essential information from observation histories and have shown promising results. The effectiveness of the proposed PALR would be more convincing if the authors could demonstrate comparisons to the above methods. 
**Experiments** - Some parts of the experimental results are not easily interpretable. Figure 1 shows a negative correlation between the amount of training data and the HSCIC score. Can the authors explain why the information leakage problem is more severe when the training data is insufficient? - I am not entirely convinced by the explanation of why FCA [3] and MINE [4] get inferior results (Line 340-345 & Figure 2a). The authors only evaluate the HSCIC score of each method on hopper-W4. However, the proposed PALR is directly regularized by the HSCIC metric, so it is unsurprising that PALR gets the lowest HSCIC score. It would be better to evaluate the HSCIC score on all environment setups in Table 1 to 1) support the negative correlation between HSCIC estimations and the performance of the algorithms and 2) analyze the performance of FCA and MINE, which outperform BC only on the walker2d and halfcheetah environments. - The paper only evaluates the proposed method on four continuous control environments, which is insufficient. RL tasks such as navigation, robot arm manipulation, or Atari games could also be considered. **Experiment details** - The details of the normalized score (line 280) are missing. What are the maximum and minimum scores? **Typo** Line 323: ..., which is a one of the ... [1] Rafailov, R., Yu, T., Rajeswaran, A., & Finn, C. (2021). Visual adversarial imitation learning using variational models. Advances in Neural Information Processing Systems, 34, 3016-3028. [2] Bonatti, R., Vemprala, S., Ma, S., Frujeri, F., Chen, S., & Kapoor, A. (2022). Pact: Perception-action causal transformer for autoregressive robotics pre-training. arXiv preprint arXiv:2209.11133. [3] Wen, C., Lin, J., Darrell, T., Jayaraman, D., & Gao, Y. (2020). Fighting copycat agents in behavioral cloning from observation histories. Advances in Neural Information Processing Systems, 33, 2564-2575. [4] Belghazi, M.
I., Baratin, A., Rajeshwar, S., Ozair, S., Bengio, Y., Courville, A., & Hjelm, D. (2018, July). Mutual information neural estimation. In International conference on machine learning (pp. 531-540). PMLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your constructive and enlightening comments. **1. Comparison to other methods** We appreciate your comments on alternative methods and their potential effectiveness. However, we would like to clarify some distinctions between the alternative methods in [1] and [2] and our proposed PALR. Firstly, the variational model [1] is designed for online IL scenarios, where the agent can interact with the environment during the learning process. In contrast, we focus on offline IL, where the policy is trained solely on a pre-collected dataset without any interaction with the environment. Consequently, the direct application of the variational model to our offline setting is not straightforward. Similarly, the causal transformer [2] involves a domain-specific fine-tuning step for each robotics downstream task, making it challenging to perform a fair comparison with our PALR, which is designed to be domain-independent. We specifically designed PALR to be applicable to a broad range of offline IL from observation histories, with a focus on addressing past action leakage. While we recognize the potential effectiveness of the above two methods, we highlight that PALR is a generic approach for offline IL that does not rely on any domain-specific information. Our regularization method aims to improve imitation performance in scenarios where past action leakage remains a challenge, making it applicable across various domains without requiring domain-specific considerations. **2. Correlation between dataset size and HSCIC** Thank you for pointing out the negative correlation between dataset size and the HSCIC score in Figure 1 of our manuscript. The correlation indicates that when the training data is insufficient, the problem of past action information leakage becomes more severe. To clarify, we plot the correlation in Figure A of the PDF file.
This phenomenon can be attributed to the higher risk of overfitting with smaller numbers of training instances, which can lead to the capture of false causal relationships within the training data. These findings align with similar results reported in [A] (see Figure 4 of that paper). **3. Complete evaluation on conditional independence** First, we clarify that $\mathrm{HSCIC}(\varphi_t, a^E_{t-1} | a^E_{t})$ and $\mathrm{HSCIC}(a^I_t, a^E_{t-1} | a^E_{t})$ (the score recorded in Figure 2a) are different. PALR regularizes HSCIC with respect to the representation, not HSCIC with respect to the imitator action directly. As shown in Theorem 1, PALR regularization enforces the conditional independence between actions. To provide a more trustworthy interpretation of our experiment, we comprehensively expand our quantitative analysis: (1) We evaluate the HSCIC score in all 8 problem settings (see Table C of our PDF file). (2) We evaluate conditional MI as another conditional independence metric to provide robust verification of our argument (see Table D of the PDF). Please refer to our general response. **4. Evaluation on complex tasks** We perform an evaluation of our approach and the baseline methods in the CARLA environment using the D4RL dataset. Please see our general response and Table A of the PDF. **5. Normalized score** We follow the D4RL dataset evaluation protocol, which designates the expert score as the upper limit and the score of a random agent as the lower limit for normalization. We will clarify this in the revision. **Reference** [A] Haan et al., “Causal Confusion in Imitation Learning.”, NeurIPS 2019. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the authors' rebuttal, which addresses some of my concerns. I believe this work studies a promising problem and provides meaningful insights. However, I still feel the experiments are a bit limited.
As suggested by Reviewer xrec, comparing the proposed method against implicit BC and diffusion policies would make this work more convincing. Also, as described in my review, experimenting with robot arm/dexterity manipulation tasks or games could significantly widen the scope of the experiments. In sum, I am still slightly leaning toward accepting this paper, though I won't fight for it if the majority of the reviewers have a different opinion. --- Reply to Comment 1.1.1: Title: Addressing Concerns in Our Experiments Comment: Thank you for your thoughtful feedback on our rebuttal. As discussed in our response to Reviewer xrec, we want to emphasize that our approach focuses on a general solution for offline IL-OH; it is not specifically designed to resolve the challenges posed by high-dimensional action spaces, which are commonly addressed by methods like implicit BC [B] or diffusion policies [C]. Acknowledging the validity of your suggestion, a comparison with such methods could indeed demonstrate our method's adaptability to high-dimensional action spaces. However, due to limited computational resources, we have predominantly focused on evaluating our approach across standard tasks, leaving this research direction open for future exploration. Instead, to see whether our method can be effectively applied to complex policy structures (e.g. diffusion models or transformers, …) with capacity high enough to cover various imitation tasks, we present experimental results of applying our regularization method to the Decision Transformer [D] as a representative instance of complex policies. Details can be found in paragraph 2 of our general response and Table B within the attached PDF. Furthermore, to extend the coverage of our experimental comparison, we have conducted additional evaluations on (1) pixel-based imitation tasks (the CARLA experiment) and (2) two additional baselines [E,F] across all our tasks. Please see our comment in the general response.
We hope these expanded results can address your remaining concerns regarding our work. **Reference** [B] Florence, Pete, et al. "Implicit Behavioral Cloning.", Conference on Robot Learning 2021. [C] Chi, Cheng, et al. "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion.", arXiv 2023. [D] Chen et al., “Decision Transformer: Reinforcement learning via Sequence Modeling.”, NeurIPS 2021. [E] Wen et al., “Keyframe-Focused Visual Imitation Learning.”, ICML 2021. [F] Wen et al., “Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming.”, ICML 2022.
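The D4RL-style normalization mentioned in point 5 of the rebuttal above maps a random policy's return to 0 and the expert's to 100. A minimal sketch of that convention (the function name is illustrative, not from the paper's code):

```python
def d4rl_normalized_score(score, random_score, expert_score):
    """D4RL-style normalization: a random policy's return maps to 0,
    the expert's return maps to 100, and other returns interpolate linearly."""
    return 100.0 * (score - random_score) / (expert_score - random_score)
```

For example, a policy achieving half the expert's return (with a zero random baseline) scores 50 on this scale.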
Summary: This paper proposes Past Action Leakage Regularization (PALR) to resolve the copycat problem in behavior cloning (BC) methods: 1. mathematically defines the past-information-leakage problem. 2. introduces PALR to formalize the methods. 3. uses the Hilbert-Schmidt Conditional Independence Criterion (HSCIC) to measure conditional independence, and analyses the advantages of HSCIC over CMI as the metric. 4. provides experiments verifying the correlation between HSCIC and performance, and the superiority of the HSCIC-based method over baseline methods. Strengths: 1. The paper is well-written and easy to follow in most parts. 2. The authors provide sufficient experiments to verify their claims. 3. The idea of replacing CMI with HSCIC is mathematically natural and easy to apply. It does bring a big improvement over the previous methods. 4. The paper comprehensively explains how HSCIC becomes a better metric than CMI. Weaknesses: 1. Some small typos: $H_{\mathcal{X}}$ in line 133 should be $\mathcal{H}_{\mathcal{X}}$; the full name of MOMDP should use \emph{observable} rather than \emph{observed} in line 153. I didn't go through every word, so there may be other mistakes. 2. In line 203, $\phi_t=\phi(z_{t_\omega:t})$, but in lines 210, 212, $\phi\sim\phi(z_{t_\omega:t})$. This caused some confusion in reading. Did I miss anything? 3. In line 193, ''Ideally we would like to achieve conditional independence in Eq. (2)" is not obvious. And actually, it is not reasonable to require Eq. (2) to be true, for even a perfect imitator can't do this. The structure of the MOMDP described in Section 3.2 means that it is not mathematically possible to predict $A_t^I$ given $A_t^E$. A better objective may be minimizing the CMI, as is practically done in the experiments.
Furthermore, I know that this has been an unsolved problem since the copycat problem was first proposed, but you may modify it by coming up with a better lower bound for CMI or HSCIC than 0 to fix this mathematical flaw. 4. Theorem 1 does not help much in proving the claim that lower loss indicates lower dependence; it only proves the equivalence between two independence conditions. I believe that this is true and thus PALR is effective. Can you come up with a new theorem to fix this? 5. Some important references are missing: [1] Ortega P A, Kunesch M, Delétang G, et al. Shaking the foundations: delusions in sequence models for interaction and control[J]. arXiv preprint arXiv:2110.10819, 2021. [2] Wen C, Lin J, Qian J, et al. Keyframe-Focused Visual Imitation Learning[C]//International Conference on Machine Learning. PMLR, 2021: 11123-11133. [3] Wen C, Qian J, Lin J, et al. Fighting fire with fire: Avoiding dnn shortcuts through priming[C]//International Conference on Machine Learning. PMLR, 2022: 23723-23750. [4] Spencer J, Choudhury S, Venkatraman A, et al. Feedback in imitation learning: The three regimes of covariate shift[J]. arXiv preprint arXiv:2102.02872, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do you have any explanation for the phenomenon that more historical information damages performance in HalfCheetah? 2. Why care only about $A_{t-1}^E$? I think $A_{t-2}^E$ can also cause leakage. Or can you prove that the former independence implies the latter? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned in Weaknesses, the current theories are not rigorous. They can be revised to make the whole logic consistent.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
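The kernel-based HSCIC score discussed in the reviews above can be made concrete with a short numpy sketch of a plug-in estimator based on conditional mean embeddings (in the style of Park and Muandet's construction; the kernel bandwidth `gamma` and ridge parameter `lam` are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gram matrix of the Gaussian RBF kernel between rows of a and b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def hscic(x, y, z, lam=1e-2):
    """Empirical HSCIC(X, Y | Z), averaged over the sample: for each z_i,
    the squared RKHS distance between the (estimated) conditional joint
    embedding and the product of the conditional marginal embeddings."""
    n = len(z)
    Kx, Ky, Kz = rbf_kernel(x, x), rbf_kernel(y, y), rbf_kernel(z, z)
    # Column i of W holds the kernel-ridge weights for conditioning on z_i.
    W = np.linalg.solve(Kz + n * lam * np.eye(n), Kz)
    total = 0.0
    for i in range(n):
        w = W[:, i]
        term1 = w @ (Kx * Ky) @ w                  # joint embedding norm
        term2 = w @ ((Kx @ w) * (Ky @ w))          # cross term
        term3 = (w @ Kx @ w) * (w @ Ky @ w)        # product-of-marginals norm
        total += term1 - 2 * term2 + term3
    return total / n
```

In PALR, a score of this kind (with the representation in place of X) is added to the BC loss as a regularizer; the sketch above only illustrates the estimator itself, scoring higher for variables that remain dependent after conditioning.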
Rebuttal 1: Rebuttal: We appreciate your constructive and insightful feedback. **1. Assumption on the ideal imitator** Thank you for bringing up this important point. To clarify, in our problem setting of imitation learning from observation histories, we assume that the control-relevant (state) information can be fully recovered from observation histories, as described in the Abstract and Section 3.2. This assumption means that we consider only those POMDPs in which the imitator policy can perform as well as the expert policy even when the imitator's action is determined solely by the observation history. Thus, under this assumption, the (marginal) action of an ideal imitator, denoted $A^I_t$, would be exactly equal to $A^E_t$, satisfying Eq. (2). In such an ideal situation, the true value of the CMI or HSCIC of interest would indeed be 0. We acknowledge the importance of clarifying this assumption in detail to prevent any misunderstanding of our problem setting. In the next revision, we will provide a more explicit explanation of the assumption. **2. Relationship between loss and conditional dependence** While Theorem 1 serves the purpose of justifying the regularization of representations rather than the direct action output within our proposed PALR method, we understand the need for a more direct link between loss reduction and dependence minimization. Regarding the proposal for a new theorem, we recognize the complexity of establishing a strict relationship between lower loss and reduced dependence. The challenge arises from the fact that methods like FCA, MINE, and ours rely on individual estimations of dependence measures, and ensuring that lower loss strictly implies lower dependence necessitates accurate estimations across the board. However, the existing literature demonstrates that minimizing estimated dependence measures, as observed in [A,B,C,D], has proven effective in practical scenarios.
Consequently, while we acknowledge the theoretical difficulty of universally proving the connection, our approach aligns with established practices that demonstrate the practical utility of minimizing estimated measures of dependence. We have taken this approach to design our method and validated its effectiveness through empirical evaluations. We hope this clarification addresses the concern and illustrates the rationale behind our choice. **3. Performance degradation in `halfcheetah`** In our observation configuration, we found that BC in the `halfcheetah` task demonstrates competitive performance when provided with only a single observation. This suggests that the single observation contains sufficient information for effective control, potentially rendering additional historical context redundant in this scenario. In this situation, the remaining historical data may not contribute significant information for decision making. **4. Reason for considering only $A^E_{t-1}$** To maintain simplicity and unify the existing work into our framework, we focus on the one-step past action in this work. While we recognize the potential influence of $A_{t-2}^E$ or action histories, our primary objective is to show the efficacy of one-step past action regularization within IL-OH problems. The exploration of multi-step past action leakage naturally extends our current work, and we consider it a promising avenue for future research. We greatly appreciate you pointing out the typos and the important missing references; we will incorporate them in the revised version of our manuscript. We also acknowledge the confusion caused by our use of the notation $\varphi_t$ for both random variables and values of the representation. To enhance clarity in our presentation, we will introduce a distinct notation. **References** [A] Belghazi et al., “Mutual information neural estimation.”, ICML 2018. [B] Pogodin et al., “Efficient conditionally invariant representation learning.”, ICLR 2023.
[C] Poole et al., “On variational bounds of mutual information.”, ICML 2019. [D] Quinzan et al., “Learning counterfactually invariant predictors.”, arXiv 2022. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal! Comment: I appreciate the response from the authors and the additional experimental results. I think the authors' clarification is convincing and makes the paper sounder. I recommend accepting this paper. Here are some further suggestions: 1. Please describe the detailed experiment setup, environment, and model architecture of CARLA (if you want to put it into your final paper), given that you have adopted a specific version of the CARLA environment in alignment with D4RL, distinct from the more intricate settings outlined in references [1, 2, 3]. 2. I highly recommend adopting a consistent nomenclature for the problem under investigation. The existing literature contains varied terminologies denoting the same phenomenon, such as the inertia problem, copycat problem, latching problem, leakage of past action, etc. To facilitate the advancement of this subject, it is advisable to standardize the terminology. Since you follow [4] closely, using ''copycat problem" consistently throughout your paper could contribute to clarity and coherence. [1] Codevilla et al. ''Exploring the limitations of behavior cloning for autonomous driving." [2] Wen et al., ''Keyframe-Focused Visual Imitation Learning”, ICML 2021. [3] Wen et al., ''Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming”, ICML 2022. [4] Wen et al., ''Fighting Copycat Agents in Behavioral Cloning from Observation Histories.", NeurIPS 2020. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your valuable comments. Following your suggestions, we will provide figures and tables that detail the overall network architecture, environment setup, and hyperparameters we used for the CARLA tasks in the appendix of our next revision.
In addition, we will ensure that we unify the terminology in our final draft. Title: Thank you for your follow-up feedback
SQ Lower Bounds for Non-Gaussian Component Analysis with Weaker Assumptions
Accept (poster)
Summary: This paper provides an SQ lower bound for the problem of distinguishing the standard Gaussian distribution from a distribution with a single non-Gaussian component, under relaxed assumptions compared to what was previously known. In particular, in prior work, similar SQ lower bounds were established in the case where the non-Gaussian component corresponded to any single-dimensional distribution $A$ whose low order moments match those of a standard Gaussian and the ($\chi^2$) distance between $A$ and the standard Gaussian is bounded. In this work, the bounded ($\chi^2$) distance assumption is removed. As consequences of the main result, two additional SQ lower bounds are provided, one for List-Decodable Mean estimation and one for the problem of anti-concentration detection. The approach is simple and consists of direct probabilistic bounds for the difference between the value of a statistical query under the Gaussian and the corresponding value under a distribution with a non-Gaussian component in a uniformly random direction. The bound is shown via Fourier analysis over the Hermite basis. Strengths: The problem is clear and well motivated. The claimed result is a strict improvement over prior work with some further applications. Weaknesses: The paper has several important weaknesses. 1. The main result seems to contradict prior work (which is not cited in the paper), which raises serious doubts about its correctness. In particular, in Theorem 1.3 of [DK22] it is shown that non-Gaussian components that are far from the unit normal are detectable (even under the moment matching assumption). See also the discussion on page 2, paragraph "Motivation of This Work" of [DK22]. [DK22]: Diakonikolas, Ilias, and Daniel Kane. "Non-gaussian component analysis via lattice basis reduction." Conference on Learning Theory. PMLR, 2022. 2. The presentation of related work contains inaccuracies. 
In particular, within the paragraph in lines 66-74, it is implied that all of the listed papers make use of results on the hardness of non-Gaussian component analysis (NGCA) to gain results for other problems. However, for example, [GGJ+20] does not even mention the problem of NGCA, but makes use of some appropriate combinatorial quantity (called statistical dimension) to arrive to their main result. 3. The text in the introduction is sloppy and overly casual and section 3 is too technical. For example, lines 52-57 could use some polish and lines 148-153 give the impression that there are no further technical challenges, so the following paragraph becomes confusing. -- The authors addressed my main concern that their results seemed to contradict prior work. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. My main question concerns the correctness of the main result. How does this result compare to Theorem 1.3 of [DK22]? Such a discussion should, in any case, be a part of the paper. 2. Section 3 is quite dense and cannot be easily parsed, so it might be beneficial to substitute the proofs with proof sketches that contain less quantitative information (the full proofs can be moved to the Appendix). Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The existence of an LLL-based algorithm for the problem in question [DK22] should be discussed thoroughly. [DK22]: Diakonikolas, Ilias, and Daniel Kane. "Non-gaussian component analysis via lattice basis reduction." Conference on Learning Theory. PMLR, 2022. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
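The moment-matching assumption discussed in the review above is easy to verify numerically for a concrete one-dimensional distribution $A$. As an illustrative sketch (not from the paper), the Rademacher distribution on $\{-1, +1\}$ matches the standard Gaussian's first three moments but not the fourth:

```python
import math
import numpy as np

def gaussian_moment(k):
    # E[g^k] for g ~ N(0, 1): 0 for odd k, (k-1)!! for even k.
    return 0.0 if k % 2 else math.prod(range(1, k, 2))

# Rademacher distribution: uniform on {-1, +1}.
support, probs = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
moments = [float(probs @ support**k) for k in range(1, 5)]
matches = [moments[k - 1] == gaussian_moment(k) for k in range(1, 5)]
# Moments 1-3 agree with N(0, 1); the 4th moment is 1 instead of 3.
```

Constructions in this literature use distributions matching many more moments, but the same check applies: the number of matched moments governs the strength of the resulting SQ lower bound.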
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and feedback on our work. We would like to address the following questions/comments. 1. Regarding the reviewer’s point that the main result seems to contradict the algorithms in prior work [ZSWB22] and [DK22]: If we take the one-dimensional distribution $A$ in the NGCA to be a distribution with finite support matching the first $k$ moments of the standard Gaussian, where $k$ is at least a sufficiently large integer, then the SQ lower bound given by our result will be larger than the algorithmic upper bound in [ZSWB22] and [DK22]. However, as also pointed out by reviewer nJue, the algorithms in [ZSWB22] and [DK22] are based on LLL lattice basis reduction, which is not captured by the SQ framework, so this does not contradict our SQ lower bound result. Another classical example is the exponential SQ-hardness of learning parities, even though a polynomial-time algorithm (based on Gaussian elimination) exists for the problem. Interestingly, the same situation holds for low-degree polynomial tests and SoS bounds (i.e., these restricted models of computation do not capture LLL or Gaussian elimination). We would also like to point out that the above does not imply that any NGCA family of instances with infinite chi-squared distance can be solved efficiently. Importantly, LLL-based algorithms only work in the restricted setting where the support of the one-dimensional distribution $A$ is discrete/near-discrete. For example, if the one-dimensional distribution $A$ is a mixture of a discrete distribution and a continuous distribution (as is the case in the Anti-concentration Detection problem considered in our paper), linear-algebraic algorithms will not work even though the distribution has infinite chi-squared distance, and the problem is believed to be hard for all efficient algorithms. 2.
Regarding the reviewer's point about the presentation of Related Work: Within the paragraph in lines 66-74, we have provided an extensive (but not exhaustive) list of 12 papers using NGCA to establish SQ lower bounds. The reviewer points out that one of these papers, namely [GGJ+20], does not explicitly use the NGCA problem. Upon inspection, one can see that the CSQ hard instance in [GGJ+20] uses a construction very similar to the concurrent work [DKKZ20] (which shows a quantitatively stronger lower bound for the same problem). The lower bound construction of [DKKZ20] explicitly relies on NGCA. This connection is discussed in the second to last paragraph of the “Our Techniques” section of [DKKZ20]. References:\ [DK22] I. Diakonikolas and D. M. Kane. Non-Gaussian Component Analysis via Lattice Basis Reduction. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 4535-4547. PMLR, 2022.\ [GGJ+20] S. Goel, A. Gollakota, Z. Jin, S. Karmalkar, and A. R. Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119 of Proceedings of Machine Learning Research, pages 3587–3596. PMLR, 2020.\ [ZSWB22] I. Zadik, M. J. Song, A. S. Wein and J. Bruna. Lattice-Based Methods Surpass Sum-of-Squares in Clustering. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 1247-1248. PMLR, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments and questions and for resolving a part of my concerns. 1.
While the authors' response alleviated my concern that their work contradicts existing results in the literature, I believe that comparing their results to [ZSWB22] and [DK22] should be a central theme in their paper and I still find it surprising (and confusing) that they did not even mention these results in the first version of the paper. In particular, the existence of these algorithms should have been thoroughly discussed as a limitation of their work. In view of the results in [ZSWB22] and [DK22], the provided SQ lower bounds seem less appealing, since a non-SQ algorithm is already known for (a restricted version of) the setting and therefore the SQ framework might not be the correct lens to study the problem; in contrast, for other problems in learning theory, SQ lower bounds are often accompanied by similar cryptographic hardness results that apply to all algorithms. That said, from a purely theoretical perspective, it is interesting that there is a separation between SQ and non-SQ algorithms in the setting. Also, I appreciate the fact that LLL-based algorithms are brittle and SQ lower bounds might be useful to rule out more robust approaches. 2. I strongly believe that the paragraph in lines 66-74 is confusing and even misleading. While I only provided one example in my first response, the first sentence of the paragraph ("The SQ-hardness of NGCA can be used to obtain similar hardness for a number of well-studied learning problems that superficially appear very different.") does not do justice to at least the following papers: [DKKZ'20], [GGK'20], [CLL'22], [GGJ+20]. The authors claim that "the SQ-hardness of NGCA can be used to obtain similar hardness for a number of well-studied learning problems", which suggests that because NGCA is hard, other problems are also hard, implying that some reduction-based approach would work. However, for each of the aforementioned papers, this is not true. 
Instead, some of these papers use tools from the foundational paper of [DKS'17] ([GGJ+20] does not; they do not even cite [DKS'17]), but these tools are not tied to NGCA, but to the SQ framework in general (see, e.g., Fact 3.4 from [CLL'22]). The section "Our Techniques" from [DKKZ'20] (which the authors pointed to in their rebuttal) only acknowledges this fact: "To achieve this, we build on an idea introduced in [DKS17]." It is not surprising that ideas used to prove one kind of SQ lower bound can be used to prove another SQ lower bound, and if it is, it only implies that the specific ideas in question are important and not necessarily that any results on the problem of NGCA are. A working (but still potentially confusing) alternative to the first sentence of the paragraph in lines 66-74 might be, for example: "Proofs of SQ-hardness of NGCA in foundational prior work [DKS'17] have constituted technical and/or conceptual points of reference for proofs of hardness of a variety of well-studied learning problems." **References:** [DK22] I. Diakonikolas and D. M. Kane. Non-Gaussian Component Analysis via Lattice Basis Reduction. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 4535-4547. PMLR, 2022. [ZSWB22] I. Zadik, M. J. Song, A. S. Wein and J. Bruna. Lattice-Based Methods Surpass Sum-of-Squares in Clustering. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 1247-1248. PMLR, 2022. + References in the paper --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging that their prior concern (regarding correctness issues of our main SQ lower bound) is now resolved. We remind the readers that the context and our response on this point were also provided in the overall response to all reviewers (first paragraph after the quoted question by the reviewer).
Below we respond to the additional points made by the reviewer: - (Significance of NGCA SQ Lower Bounds in Our More General Setting) In addition to its theoretical significance, our general SQ lower bound for NGCA (without the chi-squared bound assumption) has applications to concrete learning problems that are believed to be computationally hard. In the paper, we provided two applications – one for list-decodable mean estimation and one for the anti-concentration detection problem. - For the former problem, we prove (Theorem 1.8) an SQ lower bound for a broader set of parameters compared to prior work (corresponding to small values of the parameter $\alpha$). Notably, the chi-squared distance for these instances is very large but finite, so that an application of the [DKS17] SQ lower bound for NGCA gives vacuous results. - For the latter problem, we establish the first super-polynomial SQ hardness (Theorem 1.10). For the corresponding SQ-hard instances, the chi-squared distance is infinite. - In both of these applications, the SQ-hard instances cannot be learned efficiently via LLL-type algorithms. Please see the overall response to all reviewers (second paragraph after the quoted question by the reviewer). - (Discrete Moment-matching Distribution) Even for the very special case where the one-dimensional distribution $A$ is (nearly) discrete, we believe that our SQ lower bound is conceptually interesting (a view shared by other reviewers). To reiterate, our SQ lower bound for this special case implies that LLL-based algorithms are not captured in the SQ framework. This is a novel and interesting limitation of SQ algorithms. It was previously known that this limitation is shared by two other prominent restricted families of algorithms (namely, SoS algorithms and low-degree polynomial tests); it was unknown whether SQ algorithms have this limitation.
As a corollary, we now know of two “exceptional” algorithms that are not efficiently implementable in these models: Gaussian elimination (for learning parities) — this is a classical result — and LLL-based algorithms (as follows from our work for the class of SQ algorithms). - In summary, we respectfully disagree with the reviewer’s subjective point of view, namely that “SQ lower bounds seem less appealing” in this setting. Moreover, we consider the statement “SQ lower bounds are *often* accompanied by similar cryptographic hardness results” inaccurate. While there are a few concrete problems where SQ lower bound constructions have led to similar crypto hardness, this is not true in most of the cases – hence, the term “often” is factually incorrect. - (Presentation) Here we address a couple of points on the presentation of our work raised by the reviewer: - (Remark on LLL-based algorithms) As we noted in our initial response, we will add a remark in the revised version of our paper explaining the connection to recent LLL-based algorithms and the associated implications. We respectfully disagree with the reviewer’s statement that the comparison should be “a central theme” in our paper, because the corresponding instances (for discrete $A$) are a very special case of the instances we establish SQ-hardness for and not a central focus/application of our general theorem. - (Paragraph in lines 66-74) We are happy to revise this paragraph, as it seems to have been confusing to the reviewer. The punchline of this paragraph is that the papers we have listed share the following property: **The SQ-hard instances in these papers are (SQ-hard) instances of NGCA for a specific choice of the moment-matching distribution $A$.** In other words, the SQ-hardness results in these works are obtained by applying a generic NGCA SQ-hardness result (either from [DKS17] or a natural generalization thereof). 
To achieve this, for each of these works, the authors construct a distribution $A$ that satisfies Condition 1.4 (for problem-specific values of $m$ and $\chi^2(A, N(0, 1))$) such that the corresponding $P_{\bf v}^{A}$ (Definition 1.2) belongs to the class of distributions being learned. In summary, all these SQ-hardness results are literally reductions from instances of the corresponding learning problem to *specific* instances of NGCA. The reviewer is referred to Section 8.4 (third paragraph) of the recent book [DK23] on algorithmic robust statistics surveying this line of work.
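For readers following this exchange, here is a brief sketch of the NGCA testing problem and the notation $P_{\mathbf{v}}^{A}$ used above, paraphrased from the descriptions in the reviews and rebuttals (the paper's Definition 1.2 and Condition 1.4 are the authoritative statements; this is only a reading aid):

```latex
% Hypothesis-testing version of NGCA, as described in the reviews:
%   H_0: X ~ N(0, I_d)   (standard d-dimensional Gaussian)
%   H_1: X ~ P_v^A for a hidden unit vector v in R^d,
% where, writing A also for the density of the one-dimensional
% distribution A, the alternative has density
\[
  P_{\mathbf{v}}^{A}(x) \;=\;
  A\big(\langle \mathbf{v}, x\rangle\big)\,
  \phi_{d-1}\big(\Pi_{\mathbf{v}^{\perp}}\, x\big),
\]
% i.e., the distribution equals A along v and is standard Gaussian on
% the orthogonal complement (phi_{d-1} is the standard Gaussian density
% on v-perp; Pi is the orthogonal projection onto v-perp).
% The moment-matching condition then requires
\[
  \mathbb{E}_{t \sim A}\big[t^{j}\big]
  \;=\;
  \mathbb{E}_{t \sim N(0,1)}\big[t^{j}\big]
  \quad \text{for all } j = 1, \dots, m .
\]
```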
Summary: This work studies non-Gaussian component (NGCA) analysis in the context of the statistical query model. In NGCA the task is to distinguish a standard multi-variate Gaussian distribution from a distribution that is standard Gaussian in all but a random direction $w$ and equal to a one-dimensional distribution $A$ along $w$. Previously, it was known that this problem is hard in the SQ model when 1. $A$ matches many moments with the (one-dimensional) standard Gaussian, 2. the $\chi^2$-divergence between $N(0,1)$ and $A$ is finite. Also, the quality of the lower bound depends on the $\chi^2$-divergence: If this is very large with respect to other problem parameters (e.g., the dimension and the number of moments matched), the lower bound can potentially be weaker. In their work, the authors show that Assumption (2) above is not necessary for the hardness result to hold. They use this to obtain an improved result for list-decodable mean estimation (when the inliers are Gaussian with identity-covariance). They further show SQ hardness of detecting whether a distribution has a constant fraction of its probability mass in a lower-dimensional subspace. On a technical level, they are able to remove the assumption on the $\chi^2$-divergence since they do not show hardness based on the SQ dimension (a widely used notion which implies SQ hardness), but rather show directly that every statistical query on the two distributions must receive the same answer up to some small error. They show this by carefully analyzing the Fourier moments of both distributions. Strengths: In my eyes, the main strength of the paper is that it allows for more flexibility when using the NGCA framework. Second, it also makes it easier to apply. Since this framework has found numerous applications, I believe that many researchers will appreciate this result. 
On a conceptual level, it is pleasing to see that the $\chi^2$-divergence condition is indeed just a technicality (and the "true hardness" comes from the moment-matching condition). I believe the result can lead to: 1. improved lower bounds (since the $\chi^2$-divergence did affect the quality of the lower bound), 2. and simple(r) constructions (since we now have more flexibility). The authors gave one example for each of the two in their paper. The paper is generally well-written. The authors give a nice and easy-to-follow overview of the main result in Section 1.2. Their solution is very natural. The actual proofs are more technical, but this is shared with other SQ lower bounds using the NGCA framework. Weaknesses: Minor: While the quantitative improvement for the list-decoding lower bound is non-trivial, it appears only in a very restricted regime of parameters, when the fraction of inliers goes to 0 when the dimension grows. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In some of the existing SQ lower bounds using the NGCA framework it was very useful if the moments of $A$ only need to *approximately* match those of a standard Gaussian (e.g., the lower bounds for learning halfspaces under Massart noise in the distribution-independent setting). Are your techniques able to capture this scenario as well? If this is the case, it would be very helpful to include it in the main theorems since it would make them even easier to apply. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors give a fairly definitive answer to the question they studied. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and positive assessment of our work. We would like to address the following questions/comments.

1. Regarding the reviewer’s point: “While the quantitative improvement for the list-decoding lower bound is non-trivial, it appears only in a very restricted regime of parameters, when the fraction of inliers goes to $0$ when the dimension grows.” We point out that this setting of “small $\alpha$” (e.g., subconstant in the dimension) is actually of interest in some applications, including in mean estimation. A concrete example in a related crowdsourcing setting is the COLT 2018 paper by Meister and Valiant [MV18] dealing with this “small $\alpha$” parameter regime.
2. Regarding the reviewer’s point: “Does the technique here capture the scenario that $A$ only approximately matches moments with the Gaussian?” Yes, our technique captures the approximate moment-matching scenario. The only difference in the statement and proof would be an extra term, due to the approximate moment matching, in all the calculations.

Reference:\
[MV18] M. Meister and G. Valiant. A Data Prism: Semi-verified learning in the small-alpha regime. In Conference on Learning Theory, COLT 2018, volume 75 of Proceedings of Machine Learning Research, pages 1530-1546. PMLR, 2018.

--- Rebuttal Comment 1.1: Comment: Thank you for your comments. Regarding your second point: That's great to hear. I would again encourage the authors to include this as a formal result in their submission (in the appendix is fine if they wish not to clutter the main text).
Summary: The paper considers the SQ-hardness of non-Gaussian component analysis. The main result of the paper is a statement of the hardness without an assumption required by results stated in previous works: finite chi-squared distance between the non-gaussian distribution and the standard normal. Two applications are discussed. The bulk of the paper is focused on proof techniques, which makes use of Fourier transforms and Hermite polynomials to bound expected values of query functions. Strengths: The main result of the paper by itself is interesting enough to be the greatest strength of the paper. Originality. While the topic of hypothesis testing of NGCA in the Statistical Query (SQ) model is not new, and it has been known that the problem itself is SQ hard, this paper removes one of the main assumptions required for results in previous works to hold. From this perspective, the paper provides original contributions to the field. Quality. The quality of the paper is quite nice. The paper does a good job in describing the problem formulation, downfalls of past approaches, and definitions of the various technical tools required to establish the proof of the main result. Clarity. The paper is well written and evidently polished. The main contributions of the paper are well stated and clear. The proof sections in the last few pages of the paper may need a bit more revision to be better readable. Significance. The main results of the paper are widely applicable to various areas of machine learning theory. Weaknesses: Overall, the paper is nicely written. Some (minor) comments on weaknesses of the paper are: 1. It is stated in the paper that the main result is "near-optimal". Can some elaborations be made on what is the "optimal" result, and, what are the technical difficulties causing the discrepancy between the results in this paper and the "optimal" result? 2. 
Discrepancy in order of Theorem 1.8: the results in the paper give a tolerance requirement of the order "$\Omega(d)^{-k/32}$" on $d$ while Fact 1.6 has the order "$\Omega(d)^{-(k+1)(1/4-c/2)}$". In cases where $c$ is closer to $1/2$ it appears that the previous result is tighter, while the opposite is true when $c$ is closer to $0$. What is causing the discrepancy and why is this not reflected in the original comparison in line 62, i.e., is the order of magnitude listed in the equation on line 62-63 inaccurate? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some additional comments and questions: 1. Possible typo on Line 289: "i-th" might have been a typo of "k-th". 2. Lemma 3.10 requires that $d$ "is at least a sufficiently large universal constant." How large does $d$ need to be in order for the results in the paper to be applicable? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: There are limited discussions on limitations available in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and positive assessment of our work. We would like to address the following questions/comments.

1. Regarding the reviewer’s point: “It is stated in the paper that the main result is "near-optimal". Can some elaborations be made on what is the "optimal" result, and, what are the technical difficulties causing the discrepancy between the results in this paper and the "optimal" result?” Our result is “near-optimal” in the sense that it is optimal up to a constant in the exponent — since one can solve the NGCA in $d^{O(k)}$ time. In fact, we believe that with the same approach but a more careful analysis, one can get a lower bound of $d^{ck}$, where $c$ is any constant smaller than $1/8$ (and that the constant of $1/8$ cannot be improved in general).
2. Regarding the reviewer’s point: “For the comparison between Fact 1.6 and Theorem 1.8, why is Fact 1.6 better for $c$ close to $0$ and Theorem 1.8 better for $c$ close to $1/2$?” We believe this statement is actually not entirely accurate. Fact 1.6 uses the SQ lower bound in [DKS17], so the lower bound there in fact is roughly $\exp(O(\alpha^{-2/k}))d^{-k/(1/4-c/2)}$, where the extra $\exp(O(\alpha^{-2/k}))$ term corresponds to the chi-square distance of the one-dimensional distribution $A$ in the construction. So, if $\alpha\ll\log(d)^{-k/2}$, regardless of the choice of $c$, Fact 1.6 will always fail to give any nontrivial bound. In contrast, Theorem 1.8 can still give a nontrivial lower bound. Indeed, there are some parameter regimes where Fact 1.6 will give a quantitatively slightly better bound than Theorem 1.8 (up to a constant in the exponent). This is due to the fact that our main result is proved for any one-dimensional distribution $A$ — while the prior result of [DKS17] is proved only for one-dimensional distributions $A$ with finite chi-square distance. This allows [DKS17] to obtain a better constant factor in the exponent, but at the same time costs an extra multiplicative “chi-square distance term”.
3. Regarding the reviewer’s point: “Possible typo on Line 289: "$i$-th" might have been a typo of "$k$-th".” Thank you for pointing out the typo. We will address it in the final version.
4. Regarding the reviewer’s point: “Lemma 3.10 requires that “$d$ is at least a sufficiently large universal constant." How large does $d$ need to be in order for the results in the paper to be applicable?” We need $d$ to be at least 10 so that Lemma 3.10 is true. Note that since $d$ is the dimension in the NGCA problem, if $d$ is smaller than 10, then for any number of matching moments $k$ only depending on $d$, the problem can always be solved in $d^{O(k)}$ time (which is constant time). Therefore, the main result Theorem 3.1 is applicable for any $d$ (since if $d$ is small, then the constant is absorbed in the big-O notation).

--- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for responding to my questions. I took a look at the questions raised by other reviewers and agree with some of the points raised in the reviews/discussions by g3gW and oUue. I am adjusting my confidence accordingly.
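The small-$\alpha$ threshold invoked in this thread can be sanity-checked with a short calculation (our own reading of the rebuttal's claim, not a statement taken from the paper):

```latex
% If alpha << (log d)^{-k/2}, then raising both sides to the power -2/k:
\[
  \alpha \;\ll\; (\log d)^{-k/2}
  \;\Longrightarrow\;
  \alpha^{-2/k} \;\gg\; \log d
  \;\Longrightarrow\;
  \exp\!\big(O(\alpha^{-2/k})\big) \;\geq\; d^{\,C}
  \quad \text{for every constant } C>0 \text{ (for $d$ large enough)},
\]
% so the multiplicative chi-square factor exp(O(alpha^{-2/k})) grows
% faster than any fixed power of d and swallows the polynomial
% d^{-Theta(k)} gain -- which is why the [DKS17]-based Fact 1.6 becomes
% vacuous in this regime, while Theorem 1.8 (no chi-square factor)
% still gives a nontrivial bound.
```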
Summary: The paper discusses SQ lower bounds for Non-Gaussian component analysis. A very influential result by [DKS17] has established an SQ lower bound suggesting $d^m$ time as long as the non-Gaussian component's distribution $A$ satisfies (a) that the first $m$ moments of $A$ match the $m$ moments of $N(0,1)$ and (b) the $\chi^2$ between $A$ and $N(0,1)$ is finite. The authors prove that the same SQ lower bound holds under only condition (a) [i.e., without assuming condition (b)]. ## Correctness From my quick check, the argument appears correct and sound. Strengths: I find the result very nice and the contribution a crucial addition to the literature. It is quite interesting that the authors follow a direct approach, not using the "standard" SQ dimension approach introduced in [Feldman et al '2017]. Weaknesses: A weakness is perhaps that the SQ lower bound technique seems to be very tailored to NGCA, as opposed to the Feldman et al 2017 result. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My understanding is that by removing the $\chi^2$ assumption, the authors' result also shows that SQ algorithms fail even when the support of $A$ is finite. This is a quite interesting case, as it includes examples of $A$ supported on a lattice, where lattice-based methods work in polynomial time (e.g., see [Zadik et al., Lattice-Based Methods Surpass Sum-of-Squares in Clustering], [Diakonikolas et al., Non-Gaussian Component Analysis via Lattice Basis Reduction]). Hence, this implies a separation between SQ methods and poly-time methods in NGCA. In case that is not already known, I encourage the authors to add this interesting implication of their result. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and positive assessment of our work. We would like to address the following questions/comments.

1. Regarding the reviewer’s point: “A weakness is perhaps that the SQ lower bound technique seems to be very tailored to NGCA, as opposed to the Feldman et al 2017 result.” The focus of our work is on the problem of NGCA. Specifically, we show that under the moment-matching assumption the NGCA problem is SQ-hard. We note that a wide range of learning problems have at their core “hard instances” that can be formulated as specific instances of NGCA – for an appropriate choice of the moment-matching distribution $A$; see, e.g., lines 66-74 and the associated references. As a corollary, our result implies SQ lower bounds for all these problems. On the other hand, [FGR+17] defines a notion of “SQ-dimension” and shows that “large SQ dimension implies SQ-hardness”. The notion of SQ dimension in that work is not sufficient for our setting. Moreover, even if the “SQ-dimension” of [FGR+17] had been sufficient (via an appropriate modification), one would need to establish that the NGCA problem has a “large SQ dimension” — which is the main technical contribution of our work. In summary, we believe that our contribution is incomparable with that of [FGR+17].
2. Regarding the reviewer’s point: “This is a quite interesting case, as it includes examples of $A$ supported on a lattice, where lattice-based methods work in polynomial time (e.g., see [Zadik et al., Lattice-Based Methods Surpass Sum-of-Squares in Clustering], [Diakonikolas et al., Non-Gaussian Component Analysis via Lattice Basis Reduction]). Hence, this implies a separation between SQ methods and poly-time methods in NGCA. In case that is not already known, I encourage the authors to add this interesting implication of their result.” The reviewer is correct. As a corollary of our result, it follows that for the case of a discrete distribution $A$, efficient SQ algorithms for NGCA do not capture all polynomial-time algorithms. In particular, the LLL-based algorithm in the aforementioned works is not efficiently implementable in the SQ model. (Interestingly, the same separation holds for low-degree polynomial tests and Sum-of-Squares algorithms.) We will add an appropriate remark in the revised version of our paper.

Reference:\
[FGR+17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. J. ACM, 64(2):8:1–8:37, 2017.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in providing feedback. We are encouraged by the positive comments, and that the reviewers appreciated the paper for the following: (i) **importance** (YVEY), and (ii) **clarity** and **quality of writing** (YVEY, oUue). We would like to address the following question from the reviewers here.

1. “How does the main result of SQ lower bound for NGCA compare to the algorithms in [ZSWB22] and [DK22] which solve the NGCA when the hidden distribution is discrete?” If we take the one-dimensional distribution $A$ in the NGCA to be a distribution with finite support matching the first $k$-degree moments with the standard Gaussian, where $k$ is at least a sufficiently large integer, then the SQ lower bound given by our result will be larger than the algorithmic upper bound in [ZSWB22] and [DK22]. However, as pointed out by reviewer nJue, the algorithms in [ZSWB22] and [DK22] are based on LLL lattice basis reduction, which is not captured by the SQ framework, so this does not contradict our result. It is worth noting that not only the SQ model, but also two other popular restricted computational models (the low-degree polynomial framework and the SoS framework) fail to capture LLL lattice basis reduction. A comparison here can be made between the LLL algorithm and Gaussian elimination. While there is a classical exponential SQ hardness result for learning parity functions, a polynomial-time algorithm based on Gaussian elimination can solve the problem. We would also like to point out that the above does not imply that any NGCA family of instances with infinite chi-square distance can be solved efficiently. Importantly, LLL-based algorithms only work in the restricted setting where the support of the one-dimensional distribution $A$ is discrete/nearly discrete. For example, if the one-dimensional distribution $A$ is a mixture of a discrete distribution and a continuous distribution (as is the case in the Anti-concentration Detection problem considered in our paper), linear-algebraic algorithms will not work, although the distribution has infinite chi-squared distance and the problem is believed to be hard for all efficient algorithms.

Reference:\
[DK22] I. Diakonikolas and D. M. Kane. Non-Gaussian Component Analysis via Lattice Basis Reduction. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 4535-4547. PMLR, 2022.\
[ZSWB22] I. Zadik, M. J. Song, A. S. Wein and J. Bruna. Lattice-Based Methods Surpass Sum-of-Squares in Clustering. In Conference on Learning Theory, COLT 2022, volume 178 of Proceedings of Machine Learning Research, pages 1247-1248. PMLR, 2022.
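The parity/Gaussian-elimination analogy invoked in this rebuttal can be made concrete with a small sketch. This is purely illustrative background, not code from the paper: noiseless parity learning is the classical example of a problem that is exponentially hard for SQ algorithms yet solvable in polynomial time by Gaussian elimination over GF(2). All names below are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_gf2(A, b):
    """Gaussian elimination over GF(2): return some s with A s = b (mod 2),
    assuming the system is consistent (free variables are set to 0)."""
    A = A.copy() % 2
    b = b.copy() % 2
    n, d = A.shape
    row, pivots = 0, []
    for col in range(d):
        # find a pivot row for this column
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]
        b[[row, piv]] = b[[piv, row]]
        # eliminate this column from every other row (XOR = addition mod 2)
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    s = np.zeros(d, dtype=np.int64)
    for r, col in enumerate(pivots):
        s[col] = b[r]
    return s

# Noiseless parity learning: labels are <s, x> mod 2 for a hidden s.
d = 16
s_true = rng.integers(0, 2, d)
X = rng.integers(0, 2, (4 * d, d))
y = (X @ s_true) % 2

s_hat = solve_gf2(X, y)
# The recovered parity is consistent with every example.
assert np.array_equal((X @ s_hat) % 2, y)
```

A statistical-query algorithm only sees approximate expectations and needs exponentially many queries for this problem, while the elimination above succeeds from a linear number of raw samples; the rebuttal's point is that LLL-based algorithms play an analogous "exceptional" role for discrete NGCA instances.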
NeurIPS_2023_submissions_huggingface
2023
Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone
Accept (poster)
Summary: This paper provides a unified framework (called Res-Tuning) to combine different efficient tuning methods. It introduces an unbinding form that integrates existing methods and allows combination flexibility. A memory-efficient variant is also introduced for the sake of training memory efficiency. Experiments are performed on discriminative tasks (CIFAR-100 and the VTAB-1K benchmark) and generative tasks (text-to-image generation on COCO), demonstrating the proposed framework's flexibility and efficiency. Strengths: 1. This paper is well-organized, and the presentation is clear. 2. The formulation of the unified and unbinding form is sound and straightforward and is novel to this field of research. 3. The unbinding form allows combination flexibility, which is the main strength of the proposed methods. 4. Empirical evaluation and theoretical analysis verify the equivalence between the unbinding form and existing approaches. 5. Empirical evaluation on different standard benchmarks shows better performance than existing approaches. Weaknesses: My main concerns about this paper are the comparisons with previous works. From my perspective, this paper partially draws inspiration from [13] (MAM-Adapter), which also sought to unify parameter-efficient tuning methods such as Adapter, Prefix or Prompt tuning. All these methods are evaluated on NLP tasks rather than the vision tasks that this paper uses. On vision tasks, VPT [18] was proposed specifically to explore PETL methods in vision (rather than providing a unified view); thus, the evaluation following VPT is not sufficient for this work. Besides, VPT can work well with convolution-based foundation models and self-supervised pre-trained models (e.g., MAE, MoCo V3). I do not find such a discussion in this work. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Why does this paper not evaluate the proposed method on the NLP tasks that most parameter-efficient tuning methods are evaluated on?
This paper only includes the reproduced results of MAM-Adapter on CIFAR-100 without any discussion on NLP benchmarks. Since this paper aims to introduce a unified framework for efficient tuning methods, it is necessary to include empirical evaluations on commonly used benchmarks. I would increase my score if sufficient results or discussion were provided. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are not adequately discussed in this paper. It is unknown whether the proposed method can generalize well. Negative societal impacts are unknown. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer hNoJ, Thank you for the acknowledgement of our contributions and your valuable comments. We address your concern as follows:

**Q1: Comparisons with previous works** We would like to first point out that the unified formulation of existing PETL approaches in an unbinding form is only a part of our contribution in the manuscript. The unbinding formulation leads to the Res-Tuning framework, and we discuss its novelty and contribution compared to existing works in detail in the general response. Another important contribution of our manuscript that distinguishes our work from existing approaches is the memory-efficient variant called Res-Tuning-Bypass, which is able to reduce the resource and time consumption for both training and inference in many scenarios, and is stronger than other existing memory-efficient methods (Side-Tuning and LST) presented in NLP.

**Q2: Extended experiments on different structures** Following VPT, we provide more analysis of our approach in comparison with the existing PETL approaches. In Table R1, we provide performance comparisons with different pre-training sources. In Table R2, we compare the performances with different convolutional backbones. We will include more relevant results in our revisions.

Table R1. Performance comparison of different pre-trained models on CIFAR-100.

| Method | MAE | DINO |
|---|---|---|
| Full | 85.90 | 87.88 |
| Linear | 69.83 | 85.51 |
| Adapter | 85.86 | 89.01 |
| VPT | 82.44 | 88.33 |
| Res-Tuning | 86.37 | 89.03 |

Table R2. Performance comparison of different CNN models on CIFAR-100.

| | ConvNext | | | ResNet-101 | | |
|---|---|---|---|---|---|---|
| Method | Accuracy | Params(M) | Mem. (GB) | Accuracy | Params(M) | Mem. (GB) |
| Full | 90.15 | 87.67 | 11.16 | 77.40 | 42.70 | 7.30 |
| Linear | 90.06 | 0.11 | 3.36 | 54.96 | 0.20 | 2.83 |
| Res-Tuning | 90.86 | 0.87 | 9.45 | 86.80 | 0.92 | 7.20 |
| Res-Tuning-Bypass | 90.51 | 1.13 | 3.63 | 72.27 | 3.63 | 4.05 |

**Q3: Extended evaluations on NLP benchmarks** Thanks for the suggestion. Due to time constraints, we preliminarily provide the results of text classification in Table R3. It is observed that the performance of our Res-Tuning framework is on par with or better than MAM-Adapter in text classification, with slightly longer training time but lower memory consumption. Our Res-Tuning-Bypass significantly reduces the training time and memory consumption and achieves a mildly lower performance. In our revisions, we will try to include more NLP-based evaluations for more thorough comparisons.

Table R3. Performance comparison with MAM-Adapter on text classification.

| | SST2 | | MNLI | | | | |
|---|---|---|---|---|---|---|---|
| | Accuracy | Train (Min/Epoch) | Accuracy | Train (Min/Epoch) | Param. (M) (Train) | Param. (M) (Inference) | Mem. (GB) |
| MAM Adapter | 94.2 | 7.2 | 87.4 | 41.4 | 46.78 (37.4%) | 0.61 (0.5%) | 22416 |
| Res-Tuning | 94.56 | 7.9 | 87.45 | 47.3 | 0.97 (0.77%) | 0.97 (0.77%) | 19308 |
| Res-Tuning-Bypass | 92.94 | 4.2 | 82.01 | 24.2 | 0.98 (0.78%) | 0.98 (0.78%) | 4392 |

**Q4: Limitations** Sorry for the confusion. We have discussed the limitations and the potential societal impacts in the supplementary material.

[1] Zhang et al. Side-tuning: Network adaptation via additive side networks. ECCV2019. [2] Sung et al. LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning. NeurIPS2022. [3] Jia et al. Visual Prompt Tuning. ECCV2022.

--- Rebuttal Comment 1.1: Title: Comment by Reviewer hNoJ Comment: Thank the authors for their feedback. The authors provide sufficient evidence to resolve my concerns. I will raise my score.
--- Reply to Comment 1.1.1: Comment: Thank you again for the insightful suggestions that helped improve our manuscript. We really appreciate your adjustment of the rating.
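As a concrete illustration of the "parallel structure with element-wise addition for fusion" reading of Res-Tuning that recurs in these reviews, here is a minimal NumPy sketch. This is our own illustration with made-up names and shapes, not the authors' implementation: the frozen backbone runs untouched, lightweight tuners read the block outputs, and their contributions are accumulated in a separate bypass stream that is fused with the backbone output by element-wise addition (so gradients would only be needed for the tuners).

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_block(x, W):
    # stand-in for a frozen backbone block (no gradients through it)
    return np.tanh(x @ W)

def tuner(h, U):
    # stand-in for a lightweight learnable tuner on the block output
    return h @ U

# Illustrative shapes: 3 backbone blocks on 8-dim features.
d = 8
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
Us = [np.zeros((d, d)) for _ in range(3)]  # tuners start at zero

x = rng.standard_normal((2, d))

# Unbound form: the backbone forward pass is untouched; tuner outputs
# accumulate in a separate bypass stream fused by element-wise addition.
h, bypass = x, np.zeros_like(x)
for W, U in zip(Ws, Us):
    h = frozen_block(h, W)           # frozen forward pass
    bypass = bypass + tuner(h, U)    # bypass reads block outputs
out = h + bypass

# With zero-initialized tuners, the tuned model reproduces the backbone.
assert np.allclose(out, h)

# Turning on one tuner changes only the bypass, not the backbone pass.
Us[0] = rng.standard_normal((d, d)) * 0.01
h2, bypass2 = x, np.zeros_like(x)
for W, U in zip(Ws, Us):
    h2 = frozen_block(h2, W)
    bypass2 = bypass2 + tuner(h2, U)
out2 = h2 + bypass2
assert np.allclose(h2, h)            # backbone output unchanged
assert not np.allclose(out2, out)    # tuner now contributes
```

The point of the sketch is structural: because the bypass never feeds back into the frozen blocks, backpropagation would never need to traverse the backbone, which is the memory-saving property the reviewers highlight for Res-Tuning-Bypass.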
Summary: This paper proposes a new tuning paradigm, dubbed Res-Tuning. The authors first introduce the basic building blocks of foundation models and then unbind three popular tuners from the foundation models. They provide theoretical and empirical evidence to support their structural disentanglement. By detaching from the foundation models, they further propose a memory-efficient variant of Res-Tuning, dubbed Res-Tuning-Bypass. They conduct extensive experiments on both discriminative and generative tasks to demonstrate the superiority of their method. Strengths: - This paper is well-written, and the illustrations are concise and easy to understand. - The idea is simple yet effective. - They conduct extensive experiments, and the results demonstrate superior efficacy and efficiency on both discriminative and generative tasks. Weaknesses: - There are some minor errors in lines 124 and 130: the reference seems to be Fig. 3c, not Fig. 3b. There are also some punctuation errors in Eq. (8). There are many similar issues in the article; the authors should check the paper again to correct them. - Code is unavailable now; open-sourcing it as soon as possible would benefit the influence and credibility of this paper. - The authors provide detailed derivations in the supplementary materials, but the manuscript is hard to understand. It would be helpful if the derivation process had a more detailed explanation. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Please see the weaknesses part. - There are no queries in the FFN/Block; how did you apply Res-Pre/Res-Pro in the FFN/Block? - The form of adapter tuning in Eq. (6) and Fig. 2c is inconsistent. Is there an FFN before the adapter of the parallel branch? - LoRA [1] is an effective method in PETL. Adding a comparison with LoRA on discriminative tasks would make this paper more solid. - The full fine-tuning and linear probing results in Tab. 3 are inconsistent with the results in SSF [2].
The results of the same task (CIFAR-100) and model (ViT-B/16) are relatively low. #### Reference [1] E. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, L. Wang, and W. Chen. LoRA: Low-rank adaptation of large language models. In Int. Conf. Learn. Represent., 2021. [2] D. Lian, D. Zhou, J. Feng, and X. Wang. Scaling & shifting your features: A new baseline for efficient model tuning. Adv. Neural Inform. Process. Syst., 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The transfer ability of this method depends to a large extent on the performance of the upstream model. This method shares the same vulnerability as existing PETL solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer DEjH, Thank you for the acknowledgement of the proposed method and experiments. We address you concerns as follows: **Q1: Typos.** Thanks for spotting the errors. We will carefully fix them and polish the writing in our revisions. **Q2: Code release.** Limited by our organization's disclosure policy, we are unable to provide the full code for training and evaluating the model for now. But we do have submitted a core part of the model implementation to the AC. Additionally, we are actively preparing the release of the full code and will release them in the near future. **Q3: Detailed explanation for the deriving process.** Thanks for the suggestion. We will add more detailed explanation and make corresponding modifications to make the manuscript more easily understandable in our revisions. **Q4: How is Res-Pre. and Res-Pro. applied in FFN/Block?** For FFN and Block, we apply them directly with the output of FFN or Block as the query. **Q5: Inconsistency between Eq.6 and Fig.2c.** Thanks for pointing this out. In fact, we set out to use the form in Eq.6, but later we opt for the structure in Fig.2c and remove the FFN before the adapter to avoid the backpropagation through the FFN, especially when it is used in the Res-Tuning-Bypass framework. We will clarify this in the revisions. **Q6: Experiments for the comparison with LoRA.** Thanks for the suggestion. We will include the following empirical comparisons in the revisions. Table R1. Performance comparison on FGVC. † denotes our own implementation. | Method | CUB_200_2011 | NABirds | OxfordFlowers | StanfordCars | StanfordDogs | Mean | |---|---|---|---|---|---|---| | LoRA† | 86.02 | 80.22 | 99.2 | 85.16 | 88.59 | 87.84 | | Res-Tuning | 89.66 | 85.87 | 99.45 | 87.58 | 92.21 | 90.95 | | Res-Tuning-Bypass | 88.75 | 83 | 99.61 | 75.41 | 92.4 | 87.83 | Table R2. Performance comparison on VTAB-1k. 
| Method | Natural Mean | Specialized Mean | Structured Mean | Group Mean |
|---|---|---|---|---|
| LoRA | 79.49 | 84.55 | 59.78 | 74.60 |
| Res-Tuning | 82.29 | 85.46 | 61.19 | 76.32 |
| Res-Tuning-Bypass | 76.73 | 84.56 | 55.66 | 72.32 |

**Q7: Baseline setting.** For the baseline performance, we mainly followed the settings in AdaptFormer[1] and obtained a performance similar to theirs. We will also try to reproduce the SSF baseline and apply similar experimental settings to our approach. [1] Chen et al. AdaptFormer: Adapting vision Transformers for scalable visual recognition. NeurIPS2022. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. Your answers have addressed my concerns, and I will maintain a positive rating. I am looking forward to a revised version of the paper with fewer errors and the release of the source code. --- Reply to Comment 1.1.1: Comment: Thank you so much for your reply. We sincerely appreciate your recognition and constructive comments to improve our work.
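As a quick sanity check on the arithmetic in Table R1 above, the reported FGVC means can be reproduced from the per-dataset accuracies. This is a small illustrative script written for this note; the numbers are copied from the table, not taken from any released code.

```python
# Per-dataset FGVC accuracies from Table R1
# (CUB_200_2011, NABirds, OxfordFlowers, StanfordCars, StanfordDogs).
results = {
    "LoRA":              [86.02, 80.22, 99.20, 85.16, 88.59],
    "Res-Tuning":        [89.66, 85.87, 99.45, 87.58, 92.21],
    "Res-Tuning-Bypass": [88.75, 83.00, 99.61, 75.41, 92.40],
}
reported_means = {"LoRA": 87.84, "Res-Tuning": 90.95, "Res-Tuning-Bypass": 87.83}

for method, accs in results.items():
    mean = sum(accs) / len(accs)
    # Each reported mean should match the recomputed one to two decimals.
    assert abs(mean - reported_means[method]) < 0.005, (method, mean)
    print(f"{method}: {mean:.2f}")
```

All three reported means check out to two decimal places.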
Summary: This paper shows that some existing parameter-efficient tuning methods can be decoupled from backbones and formulated as a unified Res-Tuning model. Furthermore, the authors conduct empirical experiments to seek the optimal Res-Tuner. Additionally, a memory-efficient variant of Res-Tuning is introduced by combining outputs of the backbone with the previous Res-Tuner, so that gradients are computed only for the Res-Tuning module. The experiments are conducted on several downstream tasks. Strengths: +: Compared with parameter-efficient tuning, memory- and speed-efficient tuning methods are more practical in real-world applications. The proposed Res-Tuning-Bypass is clearly superior to memory-efficient counterparts, i.e., Side-Tuning, LST, and linear probing. Meanwhile, Res-Tuning can be speed-efficient in multi-task learning. +: The proposed method is easy to implement, and is clearly described. Weaknesses: -: The contribution of this work could be further clarified. (1) The statement on the unbinding formulation seems a bit overclaimed. For tuning parameters of large models, parameter-efficient modules can be naturally designed in a cascaded or parallel manner. For parallel structures, outputs of the frozen backbone and the learnable branch can be fused by element-wise addition or multiplication, and such structures are often used to design network architectures. The Res-Tuning model can be regarded as a parallel structure with element-wise addition for fusion. Therefore, it can hardly be regarded as a novel or special structure. (2) The authors show that existing PETL methods have equivalent counterparts in the unbinding formulation, but I feel a bit confused about the necessity of this conclusion. Besides, the derivation of equivalence for the adapter seems less rigorous. I am not sure whether such a conclusion helps to design the Res-Tuner, because the optimal Res-Tuner seems to be decided by empirical experiments.
(3) I feel a bit confused about the relationship between the unbinding formulation and the memory-efficient variant of Res-Tuning. In my opinion, memory-efficient Res-Tuning can be regarded as an improved LST, where an adapter is applied to the outputs of the frozen backbone before fusing with learnable branches. Besides, the learnable modules of LST and Res-Tuning are different. The authors should give more discussion of the above issue. -: Some issues about the experiments. (1) As shown in Table 3 (c), why can linear probing be improved when only the bypass is used? The authors should give more discussion. (2) In line 172, should the improvement of 0.94\% be corrected to 0.92\%? -: The writing can be further improved. (1) Line 124, Fig. 3b -> Fig. 3c (2) FFN$_{adapter}$ in Eq. 6 is inconsistent with the description of Fig. 2c. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see paper weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer nLwc, Thank you for your time and helpful comments. We address your concerns below: **Q1: The contribution of the unbinding formulation.** The essential contribution of the unbinding formulation is to abstract existing PETL methods into a unified formulation of a frozen operation and a learnable tuner. This formulation allows for the **flexible combination** of various approaches, which encompasses existing PETL methods and is able to derive new ones. We would prefer to regard the unbinding formulation as a new perspective on PETL methods rather than a novel structure. Additionally, it also serves as the basis for the memory-efficient version, Res-Tuning-Bypass, where the side network is entirely constructed from the Res-Tuners in the Res-Tuning framework. We include more detailed discussions in the general response. **Q2: Equivalence of the unbinding formulation.** The motivation for the unbinding formulation is to provide a unified description of the existing PETL methods. The core reason for us to prove the equivalence between the parallel form and the original form of existing PETL approaches is that the existing PETL methods are proven effective in various tasks, and the equivalence between the two forms ensures the theoretical effectiveness of our formulation. The theoretical proof combined with empirical validation demonstrates that our unbinding formulation is effective and even stronger than existing PETL methods. **Q3: Relationship between Res-Tuning and Res-Tuning-Bypass.** Sorry for the confusion. Indeed, Res-Tuning-Bypass could be viewed as an improved version of LST. From another perspective, the bypass network in the Res-Tuning-Bypass framework is constructed entirely from Res-Tuners, which are the parallel forms of existing PETL methods derived in our unbinding formulation.
With both the theoretical proof and empirical validation in Res-Tuning, we can safely reduce the design space and rely on the validated structures for constructing the model. Hence, the Res-Tuning framework and its unbinding formulation serve as an important basis for the Res-Tuning-Bypass model. **Q4: Issues about experiments.** (1) Improvement of the bypass without Res-Tuners. Thanks for spotting this. It is indeed an interesting result worth discussing. We believe the performance improvement brought by the plain bypasses is because the structure of Res-Tuning-Bypass without Res-Tuners essentially performs feature ensembling over features generated by different layers in the Transformer. (2) Performance improvement of 0.94%. The number refers to the comparison between existing tuning approaches (underlined numbers in Table 2a), where the highest performance is 92.34%, and our Tri-Res-Tuner, where the performance is 93.28%. Hence, we obtain a minimum performance improvement of 0.94% (93.28% - 92.34%). **Q5: Writing issues.** Thanks for the suggestion. We will correct the mistakes in our revisions. --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for providing the feedback. I suggest that the authors could further clarify the relationship between Res-Tuning and Res-Tuning-Bypass and provide the experimental results of plain bypasses without Res-Tuners in the revision. --- Reply to Comment 1.1.1: Comment: Thanks for the valuable suggestion. The relationship between Res-Tuning and Res-Tuning-Bypass will be further clarified in the revision. Furthermore, in our manuscript, we have provided the analysis of our Res-Tuning-Bypass, containing experimental results of plain bypasses without Res-Tuners in Table 3b. We will add a more detailed explanation of the results in our revision.
Summary: This paper proposes an unbinding formulation of parameter-efficient methods and further leverages structural disentanglement to develop a memory-efficient variant. Sufficient experiments on both visual discriminative and text-to-image generative tasks are performed. Strengths: 1. novel implementation. 2. good motivation, which is reasonable to me. 3. good evaluation, promising results, and easy to follow. Weaknesses: 1. The proposed Res-Tuning is not that novel, and the analysis of several PET approaches is similar to that in Towards a Unified View of Parameter-Efficient Transfer Learning (ICLR 2022). The parallel adapter design is already a consensus, and minor changes seem trivial. Rather than this boring repetition, I find the memory-efficient variant, i.e., Res-Tuning-Bypass, meaningful and novel. I suggest that the authors make this section a priority and restructure the article. 2. Following Ladder Side-Tuning, the proposed Res-Tuning-Bypass can free the backbone from backpropagation. In this way, the training time will be reduced; however, only the training time for generative tasks is reported. Thus I am curious about the training time for visual recognition tasks. 3. The authors compute inference time for multi-task settings; although it seems a bit trivial, I recognize the significance of this time reduction. However, I cannot accept the comparison with only non-memory-efficient methods in Figure 4. The authors should report LST's results and discuss them carefully. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors employ the CLIP model for text-to-image generation; however, it is more difficult to evaluate the performance of a generative model. CLIP-based parameter-efficient approaches have also gained extensive attention recently, such as CoOp[1], PLOT[2], MaPLe[3], and CoPrompt[4]. I am curious whether the proposed Res-Tuning-Bypass can also reduce the memory cost for vision-language PET approaches with small performance degradation.
[1] Zhou K, Yang J, Loy C C, Liu Z. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337–2348, 2022. [2] Chen G, Yao W, Song X, et al. PLOT: Prompt learning with optimal transport for vision-language models. In The Eleventh International Conference on Learning Representations, 2023. [3] Khattak M U, Rasheed H, Maaz M, et al. MaPLe: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 19113-19122. [4] Roy S, Etemad A. Consistency-guided prompt learning for vision-language models. arXiv preprint arXiv:2306.01195, 2023. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Yhf3, Thank you for the acknowledgement of the proposed method and experiments. We address your concerns as follows: **Q1: Novelty of Res-Tuning and similarity to MAM-Adapter in terms of the analysis of existing methods.** We totally agree that the priority of the manuscript is the memory-efficient Res-Tuning-Bypass, and we will reorganize the manuscript to make this clearer. As mentioned in the general response, the novelty of Res-Tuning lies in its flexibility. It encompasses most popular existing approaches and can derive new ones. Empirically, we found the performance of Res-Tuning stronger than that of existing PETL methods. In terms of the similarity to MAM-Adapter with respect to the analysis of existing methods, we claim that our analysis is more rigorous, proving the equivalence of our formulation and existing works both theoretically and empirically. This also provides the foundation for further research on parallel modules for PETL. Essentially, we believe that Res-Tuning is an indispensable part of our manuscript, as it serves as the basis for Res-Tuning-Bypass. Nevertheless, we will adjust the organization of our manuscript to make the emphasis on Res-Tuning-Bypass clearer in our future revisions. **Q2: Training time on discriminative tasks.** We provide the training time on discriminative tasks in Table R1.

Table R1. The training time on CIFAR-100, corresponding to Table 2c in the manuscript.

| Method | Train (Min/Epoch) | Percentage w.r.t. Full |
|---|---|---|
| Full | 2.65 | 100.00% |
| Linear | 1.19 | 45.10% |
| MAM-Adapter | 3.07 | 115.72% |
| AdaptFormer | 2.18 | 82.23% |
| Res-Tuning | 2.46 | 92.91% |
| Side-Tuning | 1.33 | 50.15% |
| LST | 2.22 | 83.65% |
| Res-Tuning-Bypass | 1.92 | 72.52% |

**Q3: Inference time for multiple tasks.** Thanks for the suggestion. Figure 4 mainly demonstrates the advantage of the Res-Tuning-Bypass framework in terms of inference efficiency on multiple tasks.
In fact, such a property is shared by approaches similar to ours, including Side-Tuning, LST, as well as linear probing. Including other memory-efficient approaches would make the figure a bit scattered, and it would be almost impossible to distinguish the memory-efficient approaches in the figure. Hence, we will include another figure comparing the inference efficiency of memory-efficient approaches. Additionally, we provide a table here for reference in terms of inference efficiency. Generally, the inference time grows linearly with the number of tasks, so we use the inference time per additional task for comparing the efficiency in Table R2. Overall, the inference time for different approaches is similar. The additional inference time introduced on top of linear probing by Res-Tuning-Bypass is higher than that of Side-Tuning and lower than that of LST, but since there are around 10K samples in each test dataset, the additional latency introduced by memory-efficient approaches can be neglected (less than 1 ms per sample).

Table R2. The testing time on CIFAR-100 for different memory-efficient methods.

| Method | Testing (sec./test dataset) |
|---|---|
| Linear | 23.90 |
| Side-Tuning | 24.41 |
| LST | 25.28 |
| Res-Tuning-Bypass | 24.98 |

**Q4: Further evaluations on vision-language tasks.** We include the results on the vision-language tasks in Figure 2 in the extra pdf material. We noticed that with the same backbone (ViT), the performance is slightly worse than CoOp. In Table R3, we provide a comparison in terms of parameter and memory consumption between CoOp and Res-Tuning-Bypass. It is observed that the memory consumption of Res-Tuning-Bypass is reduced by 45% when compared to CoOp. In our future revisions, we will include more relevant results and comparisons.

Table R3. Comparison of parameter and memory consumption between CoOp and our Res-Tuning-Bypass.

| | Param. (M) | Mem. (MB) | Percentage |
|---|---|---|---|
| CoOp | 0.008 | 5768 | 100% |
| Res-Tuning-Bypass | 0.38 | 3174 | 55% |

[1] Sung et al. LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning. NeurIPS2022. [2] Zhou et al. Learning to prompt for vision-language models. IJCV2022. --- Rebuttal Comment 1.1: Comment: Dear reviewer Yhf3, Thanks again for all of your constructive comments and suggestions, which have helped us improve the quality and clarity of this paper! We sincerely hope that our analyses and the added experiment on the vision-language tasks could address your concerns. Since the deadline for discussion is approaching, we would like to kindly ask whether there are any additional concerns or questions that we might be able to address. Thanks very much for your effort! Best regards, Authors
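To make the memory-saving topology discussed in this thread concrete, here is a forward-pass-only NumPy toy (an illustrative sketch written for this note, not the authors' implementation; all names and sizes are invented): the frozen backbone is evaluated once, and a ladder of lightweight low-rank tuners consumes its intermediate features, so in training only the tuner parameters would ever need gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_blocks, rank = 16, 4, 2

# Frozen backbone: random linear blocks standing in for Transformer blocks.
backbone = [rng.normal(size=(dim, dim)) / np.sqrt(dim) for _ in range(n_blocks)]
# Bypass: one tiny low-rank tuner per block (the only would-be trainable parameters).
tuners = [(rng.normal(size=(dim, rank)) * 0.1, rng.normal(size=(rank, dim)) * 0.1)
          for _ in range(n_blocks)]

def forward(x):
    h, b = x, x  # backbone state and bypass state
    for W, (A, B) in zip(backbone, tuners):
        h = np.tanh(h @ W)     # frozen path: no gradients would flow here
        b = b + (h @ A) @ B    # tuner reads the frozen feature, updates the bypass
    return b                   # a prediction head would sit on the bypass output

x = rng.normal(size=(1, dim))
out = forward(x)
assert out.shape == (1, dim)
```

Because the bypass only reads backbone features and never feeds back into them, the backbone activations need not be kept for backpropagation, which is the source of the memory savings reported above.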
Rebuttal 1: Rebuttal: Dear all, We would like to express our gratitude to our reviewers for their valuable comments. For the positive comments,

- memory-efficiency of Res-Tuning-Bypass (R-mYTv, R-nLwc),
- sufficient and strong experiments (R-mYTv, R-Yhf3, R-DEjH, R-hNoJ),
- well-structured and easy to follow (R-mYTv, R-nLwc, R-Yhf3, R-DEjH, R-hNoJ),
- novel (R-Yhf3, R-hNoJ),
- good motivation (R-Yhf3),
- and flexible (R-hNoJ),

we appreciate them and will carry them forward. We address the common concern here, which regards the **novelty of the Res-Tuning framework**. The novelty of the Res-Tuning framework lies in the abstraction of existing PETL methods as the parallel connection of a frozen operation and a learnable Res-Tuner. This allows for the independent development and flexible combination of the structure of the foundation models and the structure of the PETL tuners. More importantly, the significance of the Res-Tuning framework is that it serves as the basis for the Res-Tuning-Bypass framework. With the unbinding formulation in the Res-Tuning framework, we can treat existing PETL methods as basic building blocks for constructing the bypass network. Such a formulation also allows for the easy adaptation of the Res-Tuning-Bypass model with the development of new PETL methods, which could be achieved simply by replacing the existing Res-Tuners in the bypass network with new modules developed in the future. *Novelty compared to other existing parallel PETL methods.* Our Res-Tuning framework is able to encompass them and derive new methods based on their modules. *Novelty compared to MAM-Adapter.* The Res-Tuning framework is partially inspired by MAM-Adapter. Besides the modality difference between Res-Tuning and MAM-Adapter, the derivation process for existing PETL methods is more rigorous in Res-Tuning, proving the equivalence of our Tuner with existing methods. We also included the analysis for prompts, which is not included in the analysis of MAM-Adapter.
Indeed, we agree with Reviewer **mYTv**, **Yhf3**, and **nLwc** that the major contribution of our manuscript is the Res-Tuning-Bypass framework, which we will further clarify in the revisions. For other concerns, we address them in the respective comments to the reviewers. Thanks and best regards, Authors of Submission 12594 Pdf: /pdf/bfbf01697860bfb16f91305c8e5e77ed0a97940a.pdf
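To illustrate the kind of equivalence the unbinding formulation relies on, consider the LoRA-style case: tuning a frozen weight `W` with a low-rank update `BA` can be written either in the merged form `(W + BA)x` or as the parallel sum `Wx + BAx` of a frozen operation and a tuner. The following NumPy check is an illustrative sketch of this identity only, not the paper's proof:

```python
import numpy as np

rng = np.random.default_rng(42)
d, r = 8, 2
W = rng.normal(size=(d, d))   # frozen operation
A = rng.normal(size=(r, d))   # learnable low-rank factors (the "tuner")
B = rng.normal(size=(d, r))
x = rng.normal(size=(d,))

merged   = (W + B @ A) @ x    # original, bound form
parallel = W @ x + B @ (A @ x)  # unbound form: frozen op + tuner in parallel

# The two forms compute the same function of x.
assert np.allclose(merged, parallel)
```

The practical consequence of such identities is that the tuner can be developed and combined independently of the frozen backbone structure, which is the flexibility the general response emphasizes.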
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper pays attention to parameter-efficient tuning and proposes a unified framework, namely Res-Tuning. More importantly, based on the proposed unified framework, this paper constructs a memory optimization scheme similar to LST in language models. Several experiments are performed to validate the effectiveness, including visual recognition tasks (VTAB-1k, CIFAR-100) and text-to-image generation tasks on COCO, Oxford Flowers, and Food-101. Strengths: 1. I am glad to see the success of Res-Tuning-Bypass, a memory-efficient PET method that is the counterpart of LST for visual tasks. 2. Sufficient experiments are conducted, and Res-Tuning achieves impressive performance gains compared to state-of-the-art approaches. 3. The paper is well structured and easy to follow. Weaknesses: 1. The novelty of the unbinded Res-Tuning framework seems limited. As for me, Convpass [1] and AdaptFormer adopt a parallel module for PET, which is similar to Res-Tuning. 2. The analysis of prompt tuning, prefix tuning, and adapter tuning is similar to that in [2], which is also cited in the paper. For me, the major contribution of this paper is Res-Tuning-Bypass, which brings the success of memory-efficient side tuning to visual tasks. 3. The research on parameter-efficient tuning has gained extensive attention recently. More recent related works should be added and described or even compared. In this work, only VPT, SSF, NOAH, and AdaptFormer are used as baselines. More recent works such as Convpass [1] and SNF [3] should be carefully examined. [1] Jie, Shibo, and Zhi-Hong Deng. "Convolutional bypasses are better vision transformer adapters." arXiv preprint arXiv:2207.07039 (2022). [2] He J, Zhou C, Ma X, et al.
Towards a Unified View of Parameter-Efficient Transfer Learning. In International Conference on Learning Representations, 2022. [3] Wang Y, Shi B, Zhang X, et al. Adapting Shortcut with Normalizing Flow: An Efficient Tuning Framework for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 15965-15974. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This approach has only been evaluated on ViTs, and I'm curious whether Res-Tuning-Bypass will work on CNNs. 2. The performance of Side-Tuning is quite poor, and VPT shows that this memory-saving design is not suitable for visual tasks. This paper replicates the success of LST for the first time on a visual task, and I am very curious about the implementation process. However, the source code is not submitted in the supplementary material. If the authors can provide an anonymous GitHub project link in the rebuttal, I would be glad to raise my score. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have discussed the limitations and the potential societal impacts in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer mYTv, Thank you for the acknowledgement of our contributions and your valuable comments. We address your concerns as follows: **Q1: Novelty of the unbinded Res-Tuning framework.** As mentioned in the general response, the novelty of the unbinded Res-Tuning framework compared to parallel modules such as Convpass and AdaptFormer is its flexibility, as well as its ability to derive new PETL methods with the unbinding formulation. Empirically, we also show that the Res-Tuning framework performs favourably against them (in Table 3 and Table R1). More importantly, the significance of the Res-Tuning framework in our manuscript is to provide a basis for the Res-Tuning-Bypass framework. With the unbinding formulation, we can now use the tuners derived from existing PETL methods (proven effective in PETL applications) as the basic building blocks for the bypass network in Res-Tuning-Bypass. **Q2: Similarity to MAM-Adapter in terms of the analysis of existing PETL methods.** We agree that the analysis of the existing PETL methods is similar to that of MAM-Adapter. In fact, our Res-Tuning framework is partially inspired by them. However, we believe that our analysis is more rigorous, as we show the equivalence of our framework and the existing methods both theoretically and empirically. We also agree that the major contribution of the manuscript is Res-Tuning-Bypass. We sincerely thank you for the acknowledgement of our contribution. In the revisions, we will carefully reorganize the manuscript to show that more clearly. **Q3: More comparison experiments with recent work.** Thanks for the suggestion. Here, we provide an empirical comparison with the mentioned methods in Table 1 of the extra pdf material. Overall, our Res-Tuning framework shows competitive performance. We will try to include more recent works in our revision. **Q4: Extended experiments on few-shot learning and domain generalization.** Thanks for the suggestion.
We have added the experiments as follows: - Domain Generalization Following the setting of NOAH[5], we first train a model on ImageNet using 16 shots per category and test it on four other variants of ImageNet. The list of data also comes from NOAH, and all results are averaged over three random seeds. Our Res-Tuning goes beyond NOAH by 6.54% on ImageNet and 2.6% on the mean accuracy. Surprisingly, Res-Tuning-Bypass also achieves better results than other tuning methods.

Table R1. Results on domain generalization. 'Mean' denotes the average accuracy over ImageNet and its four variants.

| | Source | Target | | | | |
|---|---|:---:|:---:|:---:|:---:|:---:|
| | ImageNet | IN-V2 | IN-Sketch | IN-A | IN-R | Mean |
| Adapter | 70.5 | 59.1 | 16.4 | 5.5 | 22.1 | 34.7 |
| VPT | 70.5 | 58.0 | 18.3 | 4.6 | 23.2 | 34.9 |
| LoRA | 70.8 | 59.3 | 20.0 | 6.9 | 23.3 | 36.1 |
| NOAH | 71.5 | 66.1 | 24.8 | 11.9 | 28.5 | 40.6 |
| Res-Tuning | 78.04 | 66.58 | 29.23 | 13.15 | 29.01 | 43.20 |
| LST† | 70.00 | 57.04 | 14.39 | 7.21 | 17.02 | 33.13 |
| Res-Tuning-Bypass | 77.30 | 65.23 | 27.39 | 10.66 | 26.45 | 41.41 |

- Few-Shot Learning We have included the results for few-shot learning in the extra pdf material. Specifically, we include the few-shot performance on FGVC in Figure 1. Compared with existing parameter-efficient tuning methods, our Res-Tuning shows a certain advantage in few-shot performance on the FGVC dataset. **Q5: Extension of Res-Tuning-Bypass to CNNs.** This is indeed an interesting aspect to explore. We present the results for ConvNeXt pretrained on IN21K and ResNet-101 pretrained on IN1K in Table R2. We observe that the two convolutional models have different characteristics, where the performance variations of ConvNeXt are small and those of ResNet are large. In terms of the effectiveness of Res-Tuning-Bypass, it is observed that it outperforms the fully-finetuned version of ConvNeXt and notably improves the performance of linear probing for ResNet. Table R2.
Results on CNN backbones.

| | ConvNeXt | | | ResNet-101 | | |
|---|---|---|---|---|---|---|
| Method | Accuracy | Params (M) | Mem. (GB) | Accuracy | Params (M) | Mem. (GB) |
| Full | 90.15 | 87.67 | 11.16 | 77.40 | 42.70 | 7.30 |
| Linear | 90.06 | 0.11 | 3.36 | 54.96 | 0.20 | 2.83 |
| Res-Tuning-Bypass | 90.51 | 1.13 | 3.63 | 72.27 | 3.63 | 4.05 |

**Q6: Code release.** We are sorry that at the current stage, we are only able to provide the core part of the model implementation as well as relevant documentation (which we have submitted to the AC) due to the disclosure policy on code within our organization. However, we are actively preparing the formal release of the full code for training and evaluating our framework, and we promise we will do that in the near future. [1] Jie et al. Convolutional bypasses are better vision transformer adapters. arXiv. [2] Chen et al. AdaptFormer: Adapting vision Transformers for scalable visual recognition. NeurIPS2022. [3] He et al. Towards a unified view of parameter-efficient transfer learning. ICLR2022. [4] Wang et al. Adapting Shortcut with Normalizing Flow: An Efficient Tuning Framework for Visual Recognition. CVPR2023. [5] Zhang et al. Neural Prompt Search. arXiv. --- Rebuttal Comment 1.1: Comment: Dear Reviewer mYTv, We would like to thank you again for your time and effort in reviewing our manuscript. It would be greatly appreciated if you could check our responses and provide your valuable feedback. We have given a more detailed explanation of your concerns and provided additional experiments on more SOTA comparisons, few-shot learning on five FGVC datasets, domain generalization on ImageNet and four ImageNet variants, and CNN backbones. This helped us further demonstrate the effectiveness of our work. In addition, we also provide the code implementation for our manuscript.
Since the deadline for discussion is approaching, please feel free to let us know if there are any additional clarifications or experiments that we can offer. Best regards, Authors
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
Accept (poster)
Summary: The paper proposes cost-efficient inference and fine-tuning methods for LLMs on geodistributed devices in a consumer-grade network. The motivation is that, by pooling together the idle compute resources of multiple research groups and volunteers, we could make LLM research and applications accessible to broader communities. Technically, this paper comes up with an algorithm to address two challenges: 1) how to conduct the computation reliably if any device can disconnect abruptly; 2) how to partition LLMs between devices with uneven hardware. According to their simulations and real-world experiments, the proposed method can outperform other approaches to running inference on consumer-grade hardware. Strengths: - The motivation of this paper is realistic. As we all know, working with LLMs is costly in terms of computing resources. Using idle resources for LLM inference and fine-tuning is socially and environmentally beneficial. - The idea of this paper is clear and practical. - The paper applies multiple optimizations from different dimensions regarding the training/inference of LLMs under low resources, e.g., quantizing both weights and activations between pipeline stages, efficient fine-tuning, and so on. Although each of these methods is not really new, it is still inspiring to put them all together and show that they work well. - The experiments are reasonable, especially the real-world setup. Weaknesses: 1. How to do fine-tuning under the proposed setting is not that clear, although the authors wrote one paragraph to explain the fine-tuning part. Inference is clear and relatively simple, but how to recover from failures during training is not described or verified. 2. The communication cost of tensor parallelism is largely missing. It seems that the authors assume each server/client is able to hold a pipeline stage.
However, in a more realistic scenario, each stage is further divided into multiple parts, and one server/client could only hold a part of a stage. Then the intensive communication of tensor parallelism may dominate the inference speed, because tensor parallelism usually only happens within one server, instead of across distributed devices. 3. In Table 1, why is the performance of Cache-less insensitive to the failure rate? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The potential limitations include privacy of data processed by outside peers, as well as the broader impact of making LLMs more accessible. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing the paper and leaving valuable feedback. We address the raised concerns below.

> How to do fine-tuning under the proposed setting is not that clear, although the authors wrote one paragraph to explain the fine-tune part. Inference is clear and relatively simple, but how to recover from the failure during the training is not described and verified.

Unlike inference, fine-tuning forward and backward passes process the entire batch in one go and **do not need to store past attention caches** between successive client requests. Thus, in case of a failure, we discard the incomplete forward/backward pass and simply repeat the previous forward/backward pass request. This algorithm behaves similarly to the cache-less inference baseline in Table 1. We will extend the paper to clarify this in the next update. We also provide **fine-tuning experiments in Appendix F.1**, verifying our algorithm experimentally.

> The communication cost from the tensor parallel is kind of missing. It seems that authors assume each server/client is able to hold a pipeline stage. However, in a more realistic scenario, each stage is further divided into multiple parts and one server/client could only hold a part of a stage. Then the intensive communication of tensor parallel may dominate the inference speed

We run our experiments without tensor parallelism, since one transformer block of the largest open LLMs available at the moment (such as BLOOM-176B and OPT-175B) fits into 3 GiB of memory with 8-bit quantization. Thus, most consumer GPUs compatible with deep learning software **can hold a pipeline stage** with at least 1-2 transformer blocks. We do not use tensor parallelism over the Internet since its communication overhead is indeed too large.
However, **our algorithm allows using tensor parallelism** across GPUs on the same machine to speed up pipeline stages hosted on machines with multiple GPUs, since the communication cost between a machine's GPUs is much lower than over the Internet. Our algorithm can treat such machines as a single "virtual GPU" with higher performance due to parallel computations. In this case, the system will take intra-host communication overheads into account while evaluating the machine's total compute throughput (later used for load balancing and fastest-inference routing). > In Table 1, why is the performance of Cache-less insensitive to the failure rate? In case of failure, the cache-less baseline only needs to retry the last generation step and does not need to recover anything. For failure rate $p$, the number of retries for each step follows the geometric distribution with success probability $1 - p$ and expected value $1 / (1 - p)$. Thus, the entire process slows down by a factor of $1 / (1 - p)$, which turns out to be insignificant for the setup from Table 1 with failure rates $p \le 10^{-2}$. However, cache-less inference is highly inefficient for longer sequences, which makes it inappropriate for most LLM use cases.
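The geometric-retry argument above can be checked with a minimal simulation (a hedged sketch: each generation step is abstracted as an independent coin flip, not actual model computation):

```python
import random

def simulate_slowdown(n_steps: int, p: float, seed: int = 0) -> float:
    """Cache-less recovery: each generation step is retried until it
    succeeds; failures are independent with probability p. Returns the
    observed slowdown factor (attempts per successful step)."""
    rng = random.Random(seed)
    attempts = 0
    for _ in range(n_steps):
        attempts += 1
        while rng.random() < p:  # step failed, retry it
            attempts += 1
    return attempts / n_steps

# Expected slowdown is 1 / (1 - p): about 1.01x for p = 1%.
print(simulate_slowdown(100_000, 0.01))
```

For $p \le 10^{-2}$, the factor $1/(1-p)$ stays below 1.02, consistent with cache-less inference being barely affected by the failure rates tested in Table 1.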
Summary: The paper discusses an important application problem: distributed inference for large language models. Given the size and inference requirements of large language models and the constraints of hardware resources, the authors propose utilizing idle GPUs in a network to speed up inference, providing a detailed algorithm implementation and real-world experimentation. Overall the work is solid and the presentation is good, and the application has important real-world use cases. I have a few minor comments below. Strengths: 1. real-world problem of focus - the target problem to be resolved is important 2. solid experimentation - the across-continents experiment to demonstrate the distributed accelerator system's performance is amazing 3. detailed implementation - algorithm details, result comparison, and analysis are solid Weaknesses: Overall I feel that given the topic of distributed inference for LLMs, the work is solid and clear. Some aspects to improve the work include 1. server utilization discussion - the paper lacks coverage regarding the resource utilization of different GPU servers, given the distributed and heterogeneous computing setting 2. distributed accelerator infra requirements for LLMs - depending on the LLM, there should be some basic requirements on the hardware. For example, TPUs are not covered, and high-end CPUs are also not explored 3. expansion to LLM training - this is not a weakness of the work, but it would be great if this direction could be explored 4. consumer-grade network constraints - similar to 2, there are other real-world constraints given the topic of the "consumer-grade network". To push the work beyond the lab setting: network constraints like different firewalls, fault-tolerance mechanisms considering failure rates beyond 1% (the upper bound of the experimental setting), etc. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above comments Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see above comments Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing the paper and leaving valuable feedback. We address the raised concerns below. > server utilization discussion - the paper lacks coverage regarding the resource utilization of different GPU servers, given the distributed and heterogeneous computing setting Resource utilization is indeed an important concern with two main aspects: memory and compute. The **memory utilization was above 90%** for all GPUs used in Section 4.2. Most of this memory is allocated for storing LLM parameters; the rest is used for activations and attention caches. In turn, the compute utilization rate depends on many factors including network bandwidth, client activity, and the exact workload. To that end, we evaluate several different server configurations with varying GPU models and bandwidths: **(1)** RTX 3060 with 100 Mbit/s bandwidth, **(2)** RTX 3090 with 500 Mbit/s bandwidth, and **(3)** RTX 3090 with 100 Mbit/s bandwidth. We assign each server to run forward passes for batches of 128 tokens sent by multiple concurrent clients (up to saturation), using the same workload as in Appendix F.1. We measure volatile GPU utilization (see [1]) averaged over 100 consecutive samples after a 1-minute warmup. We observe an average of **91% utilization for RTX 3060 and 100 Mbit/s, 94% utilization for RTX 3090 and 500 Mbit/s, and 66% utilization for RTX 3090 and 100 Mbit/s.** [1] https://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf > distributed accelerator infra requirement for LLMs - depending on different LLMs, there should be some basic requirement on the hardware. For example TPU is not covered, high-end CPUs are also not explored **Server requirements.** The only GPU requirement for a server node is to have enough memory for one pipeline stage (= one transformer block). This is not an issue for most GPUs — only 3 GiB are needed for the largest open LLMs available (BLOOM-176B, OPT-175B), given that we use 8-bit quantization.
Running on CPUs is possible but most CPUs are an order of magnitude slower. TPUs should work in principle, but are not supported by our software stack. **Client requirements.** The client node only computes input and output embeddings and does not require an accelerator, unless we perform fine-tuning with a computationally expensive loss function. The only requirement is to have enough memory for the embeddings (8 GiB are needed for BLOOM-176B embeddings in bfloat16). > expansion to LLM training Our system does support fine-tuning (see Section 3.4, experiments in Appendix F.1). As for pre-training from scratch, a similar setup was explored in other recent work [1, 2]. However, in contrast to our system, these methods are not designed to **(a)** run autoregressive inference with low latency and fault tolerance and **(b)** allow users to fine-tune the distributed model for multiple different tasks simultaneously. [1] Yuan, Binhang, et al. "Decentralized training of foundation models in heterogeneous environments." Advances in Neural Information Processing Systems 35 (2022): 25464-25477. [2] Ryabinin, Max, et al. "Swarm parallelism: Training large models can be surprisingly communication-efficient." arXiv preprint arXiv:2301.11913 (2023). > To push the work beyond lab setting, network constraints like different firewalls, fault-tolerance mechanism considering failure rate beyond 1% (the upper bound of experimentation setting), etc We explore other challenges, such as NATs, firewalls, and using heterogeneous hardware, in the "Real-world setup" experiments (L318-321). In particular, we show that **the servers are able to traverse NATs and firewalls** using the Circuit Relay protocol from libp2p, by opening a long-living connection to another directly available peer and asking it to become a relay. We also report plots showing the behavior of our algorithm **for failure rates 2% and 5%** in the general response PDF. 
We can see that it still has competitive performance in these cases. We will include these plots in the next paper revision.
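The utilization measurement described in this rebuttal (volatile GPU utilization averaged over 100 consecutive `nvidia-smi` samples) could be scripted roughly as follows. This is a sketch, not the authors' actual measurement code; the averaging logic is factored out so it runs without a GPU, and the polling interval is an assumption:

```python
import statistics
import subprocess

def nvidia_smi_utilization() -> int:
    """Read volatile GPU utilization (%) via nvidia-smi's query interface."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.split()[0])

def mean_utilization(sampler, n_samples: int = 100) -> float:
    """Average utilization over n_samples consecutive readings."""
    return statistics.mean(sampler() for _ in range(n_samples))

# On a GPU server (after a warmup period):
#     mean_utilization(nvidia_smi_utilization)
```

Separating the sampler from the averaging also makes it easy to swap in other telemetry sources (e.g., DCGM) without changing the aggregation.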
Summary: The objective of this study is to facilitate the operation of Large Language Models (LLMs) using commodity hardware over the internet. However, such hardware can often be characterized by high unreliability and latency issues in networks. To mitigate these challenges, the paper introduces a dual attention cache method that backs up intermediate results and supports device failure recovery. Additionally, the authors have developed a decentralized load-balancing algorithm to optimally assign transformer blocks to each server in a bid to maximize the system's overall throughput. The authors have provided a robust implementation of the proposed system and demonstrated impressive performance on the largest publicly available open-source LLM. Strengths: * Offloading LLM parameters to remote devices instead of local storage (e.g., SSD) makes sense. Although the former could have higher bandwidth, the I/O amount could be much higher than with the latter. Empirical results also support this analysis. * The analysis of inference challenges (e.g., communication cost and past-token storage) is interesting. It helps the community understand the challenges of serving a large language model with billions of parameters. * Client/server caching, shortest-path routing, and automatic load balancing improve the robustness, efficiency, and soundness of the system. Weaknesses: * I found that Table 2 is a bit vague to follow. Please elaborate on the metrics steps/sec and tokens/sec per user. For example, the difference between steps and tokens and between clients and users. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * For the comparison between distributed inference and local offloading in Table 2, does the local offloading also use the quantized version of BLOOM? A clarification on this will help understand where the improvement of distributed inference comes from. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * Precise and explicit definitions of client and server are missing. * Missing references on offloading inference compute across multiple devices [1,2, 3]. Discussing these would provide a more comprehensive context and enhance the depth of analysis. [1] Kang, Yiping, et al. "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge." ACM SIGARCH Computer Architecture News 45.1 (2017): 615-629. [2] Matsubara, Yoshitomo, et al. "Head network distillation: Splitting distilled deep neural networks for resource-constrained edge computing systems." IEEE Access 8 (2020): 212177-212193. [3] Dong, Xin, et al. "Splitnets: Designing neural architectures for efficient distributed computing on head-mounted systems." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing the paper and leaving valuable feedback. We address the raised concerns below. > I found that Table 2 is a bit vague to follow. Please elaborate on the metrics steps/sec and tokens/sec per user. For example, the difference between steps and tokens and between clients and users. Table 2 reports the speed of autoregressive inference and parallel forward passes that **each client gets** on average. We assume that each user runs one client, so there is no difference between "clients" and "users" in this context. For inference, the speed is measured in generation steps per second each client can do (we use batch size 1, so each step generates 1 token), showing **generation latency**. For parallel forward, the speed is measured in tokens per second each client can process, showing the swarm's throughput during **batch processing and/or fine-tuning**. We will update the table caption and titles to clarify this in the next revision. > For the comparison between distributed inference and local offloading in Table 2, does the local offloading also use the quantized version of BLOOM? Yes, we mention that in L267 after the paragraphs about quantization. > Precise and explicit definitions of client and server are missing. We provide short definitions in L150-152 and will expand them to the more explicit definitions provided below in the next revision. A **client** is a node operated by the user, which runs inference or fine-tuning jobs through the swarm of servers. A client only holds input and output embeddings (< 3% of model weights for BLOOM-176B) and delegates running transformer blocks (the most expensive computations) to remote servers. A **server** is a GPU-enabled node holding a set of consecutive transformer blocks and processing requests coming from client nodes. > Missing references on offloading inference compute across multiple devices [1, 2, 3]. Thank you for making us aware of this work.
We focused on optimizing distributed computation of common model architectures widely used in practice today, such as BLOOM/Falcon, OPT, and LLaMA. We think the work that you mentioned provides an interesting approach that highlights how new architectures can be used in distributed computing. We will add discussion of these papers to the related work section.
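As an aside, the client memory figure mentioned earlier in this rebuttal (8 GiB for BLOOM-176B embeddings in bfloat16) can be sanity-checked with a quick calculation. The vocabulary and hidden sizes below are the published BLOOM-176B configuration, assumed here for illustration and not taken from the paper itself:

```python
# Assumed BLOOM-176B configuration (published model card values).
vocab_size = 250_880
hidden_size = 14_336
bytes_per_value = 2  # bfloat16

embedding_gib = vocab_size * hidden_size * bytes_per_value / 2**30
print(f"{embedding_gib:.1f} GiB per embedding matrix")
# ≈ 6.7 GiB; with activations and runtime overhead on top, a client
# budget of roughly 8 GiB (as stated above) is plausible.
```

The same arithmetic also supports the "< 3% of model weights" claim: one 6.7 GiB matrix is small relative to the ~176 GB of 8-bit transformer weights held by the servers.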
Summary: This paper presents a system designed for decentralized inference and fine-tuning of large language models over distributed hardware, which allows users to efficiently run LLMs without requiring high-end hardware. The system leverages pipeline-based model parallelism, distributing model layers across nodes. Additionally, this work proposes fault-tolerant inference algorithms and load-balancing protocols to enable dynamic deployment of models. During the evaluation, experiments were conducted using the BLOOM-176B model, which demonstrated a 10x improvement in performance compared to the offloading method. Strengths: The writing of this paper is fluent, with clear and logical reasoning, and the proposed solution is well-aligned with the requirements; The design is consistent with current trends and can effectively address the problem, making it highly practical; This paper presents a comprehensive system, with reasonable comparisons made against upper bounds and benchmarks. Weaknesses: The innovative aspects of the paper are not sufficiently elaborated upon. For example, I would like to know if there are any outstanding advantages compared to the latest research such as DeepSpeed, aside from differences in application scenarios. Has there been any comparison of similar metrics? The results are somewhat unsatisfactory. Caching with restarts appears to be quite competitive, and the algorithm proposed in the paper performs better only under high failure rates. However, for short sequences, Cache-less inference performs better under high failure rate conditions (which may be due to some inherent communication issues in the system). It would be helpful to compare several lengths or plot curves to more clearly show the trend and find the optimal point. Although the focus of the research is on the inference process, it would be best to systematically elaborate on the experimental results and methods related to fine-tuning for the sake of experimental completeness.
A minor suggestion is to cite Algorithms 2/3 more specifically in the text to make the writing clearer. I am not particularly familiar with distributed learning, so please forgive me if I make any inaccuracies in my statements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper mentions that autoregressive LLM inference cannot be performed with a single pass through the model, leading to a higher system complexity. Do the proposed algorithms still have advantages for models that are not autoregressive? Was this the primary design point of the system, or is it also applicable to other models? During the evaluation of offloading, how many GPUs were used for benchmarking? Is there a strict basis for the best-case scenario? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: (1) To better evaluate the system, it would be beneficial to extend the experiments to include more models. (2) Data privacy is a concern, as multiple clients may contribute to data misuse. It would be helpful to propose some solutions to address this issue. (3) Furthermore, the innovative aspects of the paper could be further clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing the paper and leaving valuable feedback. We address the raised concerns below. **Weaknesses** > The innovative aspects of the paper are not sufficiently elaborated upon. For example, I would like to know if there are any outstanding advantages compared to the latest research such as DeepSpeed, aside from differences in application scenarios. Has there been any comparison of similar metrics? Our paper focuses on an algorithm for running LLMs using a swarm of globally distributed GPUs connected over the Internet, addressing the challenges of unreliable networks and hardware. Standard pipeline parallelism methods like DeepSpeed can't work in this setup since they are designed for high-speed networked GPU clusters. Thus, we offer **a novel, cheap way to efficiently run LLMs for people without high-end hardware** — they can collaborate with other researchers and join their GPUs over the Internet to host the full model. This was **not possible with previously existing methods** and software. We do not expect our algorithm to beat existing methods on a local, reliable GPU cluster, since this is not a setup it was designed for. Still, **we provide a comparison with local pipeline parallelism that uses DeepSpeed** in Table 2 (see "Local PP (NVLink)", details in L309-311) to demonstrate the overheads that the proposed geo-distributed setup has compared to a local GPU cluster (expensive hardware that is not available to everyone). > Caching with restarts appears to be quite competitive, and the algorithm proposed in the paper performs better only under high failure rates. [...] It would be helpful to compare several lengths or plot curves to more clearly show the trend and find the optimal point. The paper proposes a general-purpose system that has to operate successfully on both short and long sequences for various failure rates. The baselines are indeed competitive at some operating points but are highly impractical at others.
We agree that the plots would be useful and provide them in the general response **PDF**. Unlike the baselines, our algorithm provides **reasonable performance for all tested conditions**, especially for higher failure rates (common for communicating over the Internet, using spot/preemptible instances or unreliable hardware). We will include the plots in the next paper revision. > it would be best to systematically elaborate on the experimental results and methods related to fine-tuning for the sake of experimental completeness. Please see **fine-tuning experiments in Appendix F.1**. Unlike inference, fine-tuning processes the entire batch at one go and does not need to store past attention caches between successive client requests. Thus, in case of a failure, we discard the incomplete forward/backward pass and just repeat the previous forward/backward pass request. We will elaborate on this in the main text in the next revision. **Questions** > Do the proposed algorithms still have advantages for models that are not autoregressive? Our system is applicable to arbitrary models, not only autoregressive ones. In our opinion, using it may be beneficial when: **(1)** the model does not fit into most consumer GPUs (i.e., you can't load it locally), **(2)** single-batch computation time is much faster when the entire model is present in the GPU memory, compared to copying model weights to GPU on demand (i.e., you can't use offloading efficiently). These points are especially relevant for autoregressive LLMs due to their huge size, and the extra need to maintain past attention caches makes the problem even more pronounced and challenging. > During the evaluation of offloading, how many GPUs were used for benchmarking? Is there a strict basis for the best-case scenario? Note that the bottleneck for offloading-based LLM inference is the GPU bus throughput, not the GPU performance (or the number of GPUs).
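To make the bus-throughput bottleneck concrete, here is a hedged back-of-the-envelope sketch (round numbers assumed here, not the exact derivation from Appendix B): every generation step must stream all transformer weights over the GPU bus once, so the bus throughput alone caps the step rate:

```python
# Assumed round numbers for illustration:
weights_gb = 176     # BLOOM-176B in 8-bit, roughly 1 byte per parameter
bus_gb_per_s = 32    # PCIe 4.0 x16, theoretical throughput

# One generation step streams all weights over the bus once.
best_case_steps_per_s = bus_gb_per_s / weights_gb
print(f"{best_case_steps_per_s:.2f} steps/s")  # ≈ 0.18 steps/s
```

Even with infinitely fast compute, the bus alone limits offloading to well under one generation step per second for a 176B-parameter model.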
For inference, the **best-case speed estimate** for offloading considers the GPU bus throughput only (for best existing hardware) and **assumes infinite GPU performance — and still turns out to be 5-10x slower** than our system. We provide step-by-step calculations in Appendix B. The offloading experiments use 1x A100 (see Table 2 in Section 4.2) or 1x 3090 (see Table 6, Appendix F.2) and fully confirm our theoretical analysis. They show that **(1)** the **real offloading performance is even smaller** than the estimated best-case performance due to compute costs and other overheads (0.0495 < 0.18 steps/sec) and **(2)** the offloading performance **does not depend much on the GPU model**, having similar performance for both A100 and 3090 (0.0495 vs 0.0427 tokens/sec). **Limitations** > it would be beneficial to extend the experiments to include more models We agree and provide experiments with **Llama 2 (70B)** in the general response. We will include them in the next paper revision. > Data privacy is a concern, as multiple clients may contribute to data misuse. It would be helpful to propose some solutions to address this issue. We acknowledge this limitation in the paper (L346) and discuss potential solutions, such as using privacy-preserving computation methods or setting up a private swarm between trusted parties, in **Appendix G.** --- Rebuttal Comment 1.1: Comment: Firstly, I appreciate the authors' efforts in addressing my concerns with detailed explanations. Regarding the suggestion about applicability across different models, it was a more general remark, and I commend the authors for promptly conducting the experiments. The timely addition of the final improvement effect graph and the willingness to acknowledge the limitations in the work are also notable. I have accordingly adjusted my evaluation score.
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to study our paper and leave valuable feedback. We are glad that the reviewers appreciated the motivation behind our work (*8b6R, zN15*), our analysis of LLM inference challenges (*2M8A*), the soundness of the proposed system for geo-distributed inference (*2M8A, 8b6R*), and our experimental work (*zN15, 8b6R*). **Experiments with Llama 2 (70B).** Following the request of reviewer *DRjo*, we provide experiments with Llama 2 (70B) below. We run our system on 3 machines with one T4 GPU (16 GB memory) each and compare its performance to offloading running on each of these machines independently. The model is quantized to the NF4 format [1] in both cases. All other setup details are the same as in Section 4.2 and Table 2.

| GPUs | Parallel clients | Bandwidth, RTT | Inference, steps/s per client (128 tokens) | Inference, steps/s per client (2048 tokens) | Parallel forward, tokens/s per client (batch 1×128) | Parallel forward, tokens/s per client (batch 64×128) |
|---|---|---|---|---|---|---|
| 3× T4 | 1 | 1 Gbit/s, < 5 ms | 2.29 | 2.02 | 45.4 | 155.1 |
| 3× T4 | 1 | 100 Mbit/s, < 5 ms | 2.29 | 2.01 | 37.5 | 140.2 |
| 3× T4 | 1 | 100 Mbit/s, 100 ms | 1.57 | 1.44 | 23.7 | 128.7 |
| 3× T4 | 3 | 1 Gbit/s, < 5 ms | 2.02 | 1.74 | 21.2 | 124.2 |
| 1× T4 | — | Offloading | 0.139 | 0.139 | 18 | 139.9 |

We can see that our system still **beats offloading by more than 10x for inference**, even when all clients run inference simultaneously. Our system is also faster at fine-tuning in the case of smaller batches or good network bandwidth. Thus, **all conclusions from Section 4.2 hold** for this setup. [1] Dettmers, Tim, et al. "QLoRA: Efficient finetuning of quantized LLMs." arXiv preprint arXiv:2305.14314 (2023).
**Experiments with more failure rates.** Following the feedback from reviewers *DRjo* and *zN15*, we attach a **PDF with plots** that report our system's performance for a wider range of failure rates (including > 1%) and sequence lengths. Unlike baselines, our algorithm provides **reasonable performance in all tested conditions**, especially for higher failure rates (common for communicating over the Internet, using spot/preemptible instances or unreliable hardware). **Limitations.** Finally, reviewers *8b6R* and *DRjo* noted data privacy as a limitation of our work. We acknowledge it in the paper (L346) and discuss potential solutions, such as setting up a private swarm between trusted parties or using privacy-preserving computation methods, in **Appendix G**. Pdf: /pdf/5dbf60402f41fb25c5a283ed53836479598afe7e.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings
Accept (poster)
Summary: This paper extends the work of Chanpuriya et al. (2020) to networks with homophilous and heterophilous edges. They show that their model is able to yield interesting results, and they give theoretical underpinnings as well. Strengths: The model is straightforward but seems to work well. Due to its relative simplicity and its relationship to previous work, it is possible to give theoretical guarantees. Weaknesses: The paper seems to ignore relationships to multilayer and multiplex networks, which are also able to capture different types of relationships through different layers. Mason Porter, Peter Mucha, and others have written many papers on such networks. There is also a literature on spectral methods for signed networks, which can also capture heterophily; Mihai Cucuringu comes to mind. These areas should be mentioned in the related works section. Some of the presentation is not clear: In (4) it seems that A is a graph but the right-hand side is a probability matrix. How is a 0-1 matrix reconstructed this way? The overall method is competitive but does not outperform any of the other methods. The conclusion does not give any indication of how the performance could be improved. Is the issue related to independence assumptions? A discussion would be helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the motivating example there are 10 cities but the model uses 12 communities. How is the number of communities chosen? Equation 4: what do the wriggly lines mean? Theorem 5.1: can it not happen that X Y^T has entries taking the value 0, in which case H would give the value 1/2 and not 0 or 1? Proof of Theorem 5.2: In my understanding a forest is a collection of trees. What is a partition into forests? It would be good to give a worked example in the appendix. How are the children found? Are the trees supposed to be rooted, choosing a root at random, and then orienting the tree from there?
The proof seems to be constructive but it is not clear what the resulting matrices X and Y would be; the final step in the proof seems to be missing. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no mention of societal impact. For example when assessing credit risk by assigning customers to communities of different risk, would extra caution be advised? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and detailed questions, which we address piecewise. *On connections to multiplex networks and spectral methods for signed networks* Thank you for bringing up these interesting connections. Indeed, in some sense, signed graphs seem to capture the idea of heterophilous communities more directly than standard graphs. It seems quite possible that one of the spectral methods you allude to, such as SPONGE, could be slightly altered to detect heterophilous communities instead of homophilous ones. Regarding multiplex networks, this is also an interesting area for connections. At a high level, one of our ideas with this paper is considering types of communities beyond the standard, homophilous ones, and this concept arises more readily when studying types of graphs beyond the standard. This includes multilayer/multiplex graphs, which can have many kinds of edges and thus give more freedom in defining a “community.” We will reconsider these topics and the suggested references, and look into integrating them appropriately into our text. >The overall method is competitive but does not outperform any of the other methods. The conclusion does not give any indication of how the performance could be improved. Is the issue related to independence assumptions? A discussion would be helpful. Our method does generally outperform the competitors on the three tasks in the main paper, as well as the one in the appendix; on some tasks and datasets, there is a significant outperformance. That being said, there is certainly room to improve on the numbers. Rather than maximizing empirical performance, one of our main goals with this paper is advancing a conceptual idea and putting it on solid theoretical and empirical footing. This core idea is the benefit of adding to nonnegative embedding models a second, heterophilous embedding in addition to the usual homophilous embedding. 
This idea could be integrated into larger, more complex, and possibly more performant models, such as deep nonnegative matrix factorization or deep clustering models. >In the motivating example there are 10 cities but the model uses 12 communities. How is the number of communities chosen? There are 10 cities but also 2 genders, for a total of 12 (overlapping) communities. The goal with the unsupervised task here is, by detecting both the city and gender communities, to retrieve a 12-dimensional vector for each node/person which is very positive for that person’s city and gender, and roughly 0 for the 10 other entries. In Figure 3, it is shown that our method achieves this, whereas SVD and BigClam have some shortcomings. >In (4) it seems that A is a graph but the right hand side is a probability matrix. How is a 0-1 matrix reconstructed this way? > Equation 4: what do the wriggly lines mean? $\mathbf{A} \approx \sigma(\mathbf{X} \mathbf{Y}^\top)$ denotes approximate equality, as opposed to exact equality as in $\mathbf{A} = \sigma(\mathbf{X} \mathbf{Y}^\top)$. Equation 4 is informally expressing that the goal of LPCA is for $\sigma(\mathbf{X} \mathbf{Y}^\top)$ to be a high probability (near $1$) where $\mathbf{A}$ is $1$ and a low probability (near $0$) where $\mathbf{A}$ is $0$. >Theorem 5.1: can it not happen that X Y^T has entries taking the value 0, in which case H would give the value 1/2 and not 0 or 1? Indeed, $H(0) = 1/2$, but the theorem is asserting that there exist $\mathbf{X}$ and $\mathbf{Y}$ such that their product is positive where $\mathbf{A}$ is $1$ and negative where $\mathbf{A}$ is $0$; the theorem is asserting the existence of such $\mathbf{X}$,$\mathbf{Y}$ whose product has no zero entries. >Proof of Theorem 5.2: In my understanding a forest is a collection of trees. What is a partition into forests? It would be good to give a worked example in the appendix. How are the children found? 
Are the trees supposed to be rooted, choosing a root at random, and then orienting the tree from there? Indeed, a forest is a union of disjoint trees. Equivalently, it is an acyclic undirected graph. For our purposes, an undirected graph being a forest means that its edges can be made directed such that each node has at most one incoming edge. This is because each node in a forest participates in a single tree and is either that tree’s root or has a single parent, so orienting each edge from the parent to the child achieves the desired outcome. A partition into $\alpha$ forests is a partition of the graph’s edges into $\alpha$ undirected acyclic graphs. For our purposes, this means the graph’s edges can be oriented such that each node has at most $\alpha$ incoming edges. We have provided a diagram of an example (see Figure 2 in the PDF of rebuttal figures), and we hope it will help clarify this. >The proof seems to be constructive but it is not clear what the resulting matrices X and Y would be; the final step in the proof seems to be missing. We show how to construct a matrix $\mathbf{M} \in \mathbb{R}^{n \times n}$ that exactly encodes the graph in the signs of its entries. We also prove that this matrix is low-rank, specifically that $\text{rank}(\mathbf{M}) = O(\alpha^2)$. By definition of rank, this means there exists a factorization $\mathbf{M} = \mathbf{X} \mathbf{Y}^\top$ for some matrices $\mathbf{X},\mathbf{Y} \in \mathbb{R}^{n \times O(\alpha^2)}$. Constructively, this factorization could be retrieved in many ways, e.g., by eigendecomposition of $\mathbf{M}$. We will add a mention of this to clarify the constructive nature of the proof.
Looking more into Theorem 5.1, this is not actually the main result from Chanpuriya et al (2020); they prove it for the function $\sigma(x) = \max(0, \min(1, x))$ instead of the Heaviside function $H(x)$. So why does Theorem 5.1 hold? Thank you for expanding on your notion of a forest. What is the difference between a forest and a tree in your notion? Usually forests are collections of trees, and hence a partition into forests is ambiguous. Your example in the pdf seems to indicate that you equate forest and tree. --- Reply to Comment 1.1.1: Title: Response to follow-up questions Comment: Thank you for following up. >Regarding (4), it would be better to rephrase as it sounds as if (4) was a way to construct \tilde{A}, which is not the intention I understand. Instead (1) is the definition of \tilde{A} when B and C are given. Thank you for the suggestion. We agree that it would improve clarity to modify (4) by replacing $\mathbf{A} \approx \sigma( \mathbf{X} \mathbf{Y}^\top )$ with $\mathbf{A} \approx \tilde{\mathbf{A}} = \sigma( \mathbf{X} \mathbf{Y}^\top )$. We will make this change and modify the surrounding text appropriately. >Looking more into Theorem 5.1, this is not actually the main result from Chanpuriya et al (2020); they prove it for the function $\sigma(x) = \max(0, \min(1, x))$ instead of the Heaviside function $H(x)$. So why does Theorem 5.1 hold? This is a great, precise question. The proof of the theorem in Chanpuriya et al. (2020) is actually in terms of the Heaviside nonlinearity $H(z)$ (they call it $s(z)$), and they just note how it implies the theorem in terms of the clipping nonlinearity $\sigma(z)$ (see the second paragraph of the proof). We lift the theorem from their work without the extra step of this implication because $H(z)$ is more suitable for this paper; we will note this clearly in the revision.
(The preceding answers the question of why Theorem 5.1 holds, but if you are curious, the implication is as follows: suppose you were given an adjacency matrix $\mathbf{A}$ and found $\mathbf{X},\mathbf{Y}$ such that $H( \mathbf{X} \mathbf{Y}^\top ) = \mathbf{A}$. Then you can scale up the factorization by replacing $\mathbf{X}$ with $\mathbf{X}' = c \cdot \mathbf{X}$ for a large enough positive constant $c$ such that $\sigma( \mathbf{X}' \mathbf{Y}^\top ) = H(\mathbf{X} \mathbf{Y}^\top ) = \mathbf{A}$. This is because, for each entry $z$ in $\mathbf{X} \mathbf{Y}^\top$, either 1) $z$ is negative, in which case $c \cdot z$ is still negative and hence $\sigma(c \cdot z)=H(z)=0$; or 2) $z$ is positive, in which case, for a large enough $c$, you have $c \cdot z \geq 1$ and hence $\sigma(c \cdot z)=H(z)=1$.) >Thank you for expanding on your notion of a forest. What is the difference between a forest and a tree in your notion? Usually forests are collections of trees, and hence a partition into forests is ambiguous. Your example in the pdf seems to indicate that you equate forest and tree. We use the standard notion of a forest, which is a union of *disjoint* trees. The example in the PDF shows a partition of the edges of a graph into two forests - call them $F_\text{blue}$ and $F_\text{orange}$. In this case, since we wanted a small example, $F_\text{blue}$ and $F_\text{orange}$ both happen to be trees. This does not mean that the union of $F_\text{blue}$ and $F_\text{orange}$ itself is a forest, because the union is exactly the original graph, which has a cycle and hence cannot be a forest. (Note that if the definition of forest was *any* union of trees, rather than a union of *disjoint* trees, then the union would be a forest. This is disallowed because the two trees are touching and not disjoint.)
Perhaps this thought will help clarify: Suppose we took the example in the PDF and added two extra nodes which have an edge between them but are disconnected from the rest of the graph. These two extra nodes will definitely demand an extra tree to cover (which will just comprise the single edge between them), but they will not demand an extra forest, because this extra tree will be disjoint from both $F_\text{blue}$ and $F_\text{orange}$ and hence can be added to either forest.
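The edge-orientation property of forests discussed above is easy to demonstrate in code. Below is a minimal sketch in plain Python; the 4-cycle example, the `orient_forest` helper, and all names are invented here for illustration and do not come from the paper:

```python
from collections import defaultdict

def orient_forest(n, edges):
    """Orient a forest's edges from parent to child, so that every
    node ends up with at most one incoming edge."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    oriented, seen = [], set()
    for root in range(n):
        if root in seen:
            continue
        # Treat this node as the root of its tree and traverse outward.
        seen.add(root)
        stack = [root]
        while stack:
            parent = stack.pop()
            for child in adj[parent]:
                if child not in seen:
                    seen.add(child)
                    oriented.append((parent, child))  # parent -> child
                    stack.append(child)
    return oriented

# The edges of a 4-cycle (not itself a forest) split into two forests.
forests = [[(0, 1), (2, 3)], [(1, 2), (3, 0)]]
for forest in forests:
    indeg = defaultdict(int)
    for _, child in orient_forest(4, forest):
        indeg[child] += 1
    # At most one incoming edge per node within each forest, hence at
    # most alpha = 2 incoming edges per node over the whole graph.
    assert all(c <= 1 for c in indeg.values())
```

Combining the orientations of the $\alpha$ forests gives each node at most $\alpha$ incoming edges, which is exactly the property used in the proof.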
Summary: This paper proposes and studies a community-based factorization model for exact representation of sparse networks. The authors extend a prior result on exact factorization which is based on logistic principal component analysis (LPCA), and show that an LPCA factorization can be converted to the proposed community-based graph factorization. The new factorization model explicitly captures both homophily and heterophily among nodes in the graph, and hence it is more interpretable. The authors carry out experiments using a different training algorithm to fit the model, and illustrate the advantage of the proposed decomposition method over existing ones for community detection and link prediction tasks. Strengths: - The paper provides an intuitive community-based graph factorization model that is able to explicitly capture both homophily and heterophily among nodes in the graph. The expressiveness of the new factorization is backed by a reduction from LPCA which is shown to be able to exactly represent any sparse graph with bounded arboricity. Although it may not be something of extremely high theoretical significance, I liked this aspect of technical development. It is an interesting result to have. Weaknesses: - There is a clear gap between the exact factorization studied up to Section 5 and the actual training algorithm evaluated in Section 6, although the authors have mentioned the algorithmic gap explicitly at the end of Section 5. In order to well approximate the Heaviside function, which is used to achieve exact representation by LPCA, the input of the logistic function should have entries that are arbitrarily large in magnitude. However, the training algorithm uses regularization to prevent this. The regularization does exactly the opposite of what is needed for exact representation. Therefore, there is a significant gap between theoretical development and empirical verifications.
- Since the paper studies exact representation, there should be at least an experiment (e.g. the synthetic one) which shows that the reconstruction error goes to 0. - The overall writing and clarity can be improved. There are a few typos. For example, Line 32, principal components analysis -> principal component analysis. I also could not find where in the paper the term arboricity is clearly defined. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - For the synthetic experiment shown in Figure 2, what happens if you increase the embedding length further? Will SVD beat your method afterwards? Can you make your method have close to 0 error (since it is supposed to be exact)? - What happens if you don't regularize your training algorithm for the experiments in Section 6, and how would the methods compare in that case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I cannot find where the authors discuss the limitations or potential negative societal impact of their work, but I think it is fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We appreciate the recognition of our conceptual and theoretical contribution. We are thankful for the suggestions on improving clarity – we will incorporate them. We address the criticisms, which mainly concern the empirical piece. >There is a clear gap between the exact factorization studied up to Section 5 and the actual training algorithm evaluated in Section 6… The regularization does exactly the opposite of what is needed for exact representation. We discuss this a bit around Line 290, but exact factorization is not the goal of our empirical contribution. As we note, our training algorithm is quite close to that of Chanpuriya et al., and essentially the whole of their empirical work concerns finding these exact embeddings with such a training algorithm. They report the embedding dimensionalities needed for exact factorization of various networks, among other related information, and we have little to add on that front. We instead use the empirical section to complement our theoretical contributions, and highlight the benefit of adding to nonnegative embedding models a second, heterophilous embedding in addition to the usual homophilous embedding. Indeed, regularization does exactly the opposite of what is needed for exact representation, but it is crucial for our method (as well as the comparison methods) to yield good results on tasks like community detection and link prediction. We also note that the proofs in our theoretical section are constructive and hence provide a totally different “fitting” algorithm that is guaranteed to give an exact factorization, though the resulting embeddings are unlikely to be of practical use. >For the synthetic experiment shown in Figure 2, what happens if you increase the embedding length further? Will SVD beat your method afterwards? Can you make your method has close to 0 error (since it is supposed to be exact)? 
It is quite likely that, if we increase capacity by increasing the embedding length, SVD will have lower reconstruction error, since it is unregularized. At your suggestion, we ran the training for our method without regularization, evaluated embedding lengths from 20 to 60, and plotted the results in the PDF of rebuttal figures – see Figure 1. Our method has lower error at all embedding lengths. Further, the fit was essentially exact at length 60. SVD will not achieve zero error until the embedding length equals the number of nodes, 1000, since the adjacency matrix is generally full-rank. We do note that, related to our points above, this is of limited practical significance, since our regularized method captures all the “signal” in the data at embedding length 12, after which there is only noise to fit. --- Rebuttal Comment 1.1: Comment: Thank you for the responses and additional experiments. Most of my questions have been addressed, I slightly raised my score. But I still don't like the fact that the actual exact factorization may have very limited practical use, as the authors pointed out in the above. It seems that what works in practice is the training algorithm in Section 6. The theory-to-practice gap is still there. --- Reply to Comment 1.1.1: Comment: We thank you for reviewing, following up, and reconsidering your score. We are glad we have addressed the point from your review about our training algorithm's capability for exact factorization. Regarding the point from your comment on the usefulness of exact factorization itself, we briefly expand on this: One of the core ideas of the paper is the benefit of adding to nonnegative embedding models a second, heterophilous embedding in addition to the usual homophilous embedding.
This allows for the representation of heterophilous structures, which models like BigClam and SymNMF (which lack a heterophilous embedding) will struggle with at practical embedding lengths, as is clearly illustrated in Figures 1-3 for the synthetic plot and also evidenced for real-world networks in Figure 4. The point of the exact embedding result in this context (though it may also be of independent theoretical interest) is to show that our model, and more broadly, nonlinear factorizations with *both* kinds of embeddings, are enough to represent *any* kind of network structure. In other words, representing homophilous and heterophilous structure is enough, and there is no need for, e.g., a third embedding. While the exact embedding results (both theoretical and empirical) are in the regime of no/low regularization and tight fitting, they are reflected in the practically regularized regime which tries to capture signal and ignore noise: as we show throughout the experiments, there is benefit to also capturing heterophilous "signal." --- Rebuttal 2: Title: Please provide additional feedback Comment: Hi, You seem to have the lowest score for this paper. Could you please acknowledge that you have read the rebuttal and let us know if you still have concerns or not. If not, I would encourage you to raise your score.
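An aside on the claim earlier in this thread that SVD only reaches zero reconstruction error when the embedding length equals the number of nodes: this is easy to check on a small random symmetric 0/1 matrix standing in for an adjacency matrix. The sizes and density below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetric 0/1 "adjacency" matrix

U, s, Vt = np.linalg.svd(A)

def svd_error(k):
    """Frobenius error of the best rank-k SVD approximation."""
    return np.linalg.norm(A - (U[:, :k] * s[:k]) @ Vt[:k])

errors = [svd_error(k) for k in (5, 25, n)]
# The error shrinks as the rank grows, but only (numerically) vanishes
# once k reaches the rank of A, which is generically full.
assert errors[0] >= errors[1] >= errors[2]
assert errors[2] < 1e-8
```

This is the Eckart–Young fact underlying the rebuttal's point: a generic dense adjacency matrix is full-rank, so truncated SVD cannot be exact at any practical embedding length.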
Summary: This paper proposes to use the logistic PCA model to represent graphs. The paper's main contribution is theoretical – improving the embedding dimensionality bounds from the maximum degree to the arboricity of a graph. Overall, I believe the contribution is significant, but currently the paper does not sufficiently highlight why – see more in the weaknesses section. The evaluation is severely limited, which might be forgivable for a more theory-oriented paper. Strengths: * Theoretical guarantees are interesting to a (potentially) broader research community. * The method offers grounded interpretability of node embeddings in terms of the community structure. * The paper is written clearly and should be easy to understand to the general audience. Weaknesses: * While the theoretical results are interesting, they are not well connected to the literature on either: ** Spanning trees, effective resistances, and graph sparsification, which would be exciting to sparsification folks. ** Curvature (through bounds on Ollivier-Ricci curvature from effective resistance), which would yield interesting connections to GNNs. * The proposed algorithm is not really practical computationally speaking and does not scale to large graphs. * Experiments are not really indicative of real-world performance of the algorithm and rather focus on providing some free-form support for the theoretical claims of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Would node classification experiments on the datasets like POS/PPI/Blog (these are already present in the experimental section) strengthen the paper? Suggested comparison is some early neural embedding such as DeepWalk. This would open up interest from the subfield of more heuristic node embedding methods. * Can you detail the comparison with Chanpuriya et al., especially in terms of the proof? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes; although I would stress limited practicality of the proposed algorithm due to its time complexity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and suggested directions for improvement. We address your criticisms and suggestions as follows. *On connections of our theoretical results to effective resistance, sparsification, curvature, and more* Thank you for suggesting these connections. We will look into integrating and discussing them appropriately. We discuss some initial thoughts: While we focus on exact embeddings in the theoretical results, it would certainly be interesting to look at approximate embeddings, e.g., an embedding that produces a graph which is a spectral sparsifier or spanner of the input graph. Typically, spectral sparsifiers are produced by randomly sampling edges of a graph according to their effective resistances (see e.g. [Spielman, Srivastava 2008]). However, it is also known that a spectral sparsifier can be obtained by sampling $O(\log n / \epsilon^2)$ random spanning trees (see e.g., [Fung, Harvey 2010]). This result ensures that any graph has a spectral sparsifier with arboricity $O(\log n / \epsilon^2)$, and thus can be approximately embedded according to our results with dimension $O(\log^2 n / \epsilon^4)$. Observe that such a result was not possible with just the max degree bound of Chanpuriya et al. as there exist simple examples of graphs where all spectral sparsifiers must have high max degree (e.g., a star graph). It would be interesting to understand if the above approximate embedding bound could be improved via a more refined argument, or extended, e.g. to spanner constructions, many of which should also have low arboricity. It would also be interesting to find practically effective algorithms for finding such approximate embeddings. Relating to spanners, while our exact embedding does not directly constitute a spanner, it can be thought of as an exact distance oracle, which is a general data structure for approximating distances on graphs (see e.g., [Thorup, Zwick 2005]). 
However, while our embedding is very compressed, it is not clear if it is useful in this context, since there is no obvious way to *efficiently* calculate distances in the graph given just the embedding, faster than simply reconstructing the full graph. Still, this could be an interesting direction to pursue in future work. We are less familiar with Ollivier-Ricci curvature, but this also sounds like an interesting connection. Certainly, looking at, e.g., [Sia, Jonckheere, Bogdan 2019], curvature appears to be a powerful tool for detection of traditional, homophilous communities. Possible links to heterophilous community detection and exact embedding are less immediately clear, but are an interesting direction. Please do elaborate if you had some such links in mind. >The proposed algorithm is not really practical computationally speaking and does not scale to large graphs. The algorithms we evaluate in the main paper indeed scale quadratically in $n$, and hence are not suitable for large graphs. However, as we note around Line 293, our loss function admits a natural stochastic approximation, so our method transforms straightforwardly to a stochastic version that scales linearly in $n$. We discuss this more in the appendix, where we also evaluate these more scalable algorithms for an industry task. As we state in Line 297, we use the simpler non-stochastic training in the main paper to isolate the impact of model capacity, which is the focus of this work, as opposed to optimization. >Experiments are not really indicative of real-world performance of the algorithm… Would node classification experiments on the datasets… strengthen the paper? Suggested comparison is some early neural embedding such as DeepWalk. Indeed, we focus in this work on experiments which complement our theoretical results, which concern non-negative / community-based embeddings. Relatively few methods generate such embeddings, and DeepWalk, node2vec, LINE, etc. are not among them.
This is also why we do not evaluate on the node classification task. The concept and theory behind our model is relevant to community detection, which we do evaluate on, and which is an unsupervised analog of node classification. We do this to keep a tight focus on one of our main goals with this paper, which is advancing a conceptual idea and putting it on solid theoretical and empirical footing. This core idea is the benefit of adding to nonnegative embedding models a second, heterophilous embedding in addition to the usual homophilous embedding. As we note in the response to Reviewer zLuj, this idea could be integrated into larger, more complex, and possibly more performant models, including deep models / GNNs, and including models more suitable for node classification. >Can you detail the comparison with Chanpuriya et al., especially in terms of the proof? We start by comparing the proofs of the bounded degree/arboricity results. Both proofs involve polynomial interpolation arguments (this is a common technique for theory about sign rank in general), and we use similar notation where possible for accessibility, which emphasizes similarities. However, they are different in that 1) the central polynomials are different; 2) our proof involves a particular decomposition of $\textbf{A}$ into $\textbf{B}+\textbf{B}^\top$; and 3) the final part of our proof involves an entrywise product of polynomial matrices. No analog of (2) or (3) appears in Chanpuriya et al. More broadly, we think that, while this work certainly builds on Chanpuriya et al., the combination of the arboricity bound above with the non-negative component yields a new conceptual contribution. The result in Chanpuriya et al.
is that bounded-degree graphs admit exact low-rank factorization; in this work, we show that for sparse graphs in general (which covers a much wider range of real-world networks), you can exactly express the graph structure in terms of node communities, so long as you also allow for "heterophilous" communities. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I hope you find the connections interesting/useful. In the current form without additional experimental evidence I do not think I should raise the score. --- Reply to Comment 1.1.1: Comment: We thank you for reviewing, responding, and raising interesting connections. Related to your comment, we briefly summarize our experiments for the reviewers and chairs: - We evaluate NMF models on their ability to represent a synthetic network, as well as several real-world networks. In the rebuttal PDF, we also confirm our model's ability to do exact embedding. - We then evaluate on two other *unsupervised learning* tasks (more precisely, tasks without info beyond network structure): 1) community detection and 2) link imputation. - In the appendix, to investigate a more realistic application, we evaluate scalable ($O(n)$) versions of the NMF methods for tabular data completion (via link imputation) on industry datasets. These experiments span a range, from directly corroborating our theoretical results to supporting the graph modeling concepts we introduce. While we believe we have added sufficient experimental support, in order to maintain clarity and readability, we are careful to keep our experimental section closely related to the theoretical and conceptual contributions of the work.
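The star-graph example raised earlier in this thread (arboricity can be far smaller than the max degree) is quick to verify concretely. A small sketch in plain Python; the size `n` is an arbitrary illustrative choice:

```python
# A star on n nodes: the hub (node 0) touches every edge, so the max
# degree is n - 1, yet the edge set forms a single tree, so the
# arboricity is 1.
n = 100
edges = [(0, i) for i in range(1, n)]

deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
assert max(deg) == n - 1

# Orienting every edge away from the hub gives each node at most one
# incoming edge: a single forest covers all edges (alpha = 1).
indeg = [0] * n
for u, v in edges:
    indeg[v] += 1
assert max(indeg) == 1
```

This is why an arboricity-based embedding bound can apply where a max-degree bound is vacuous, e.g. for the spectral sparsifiers discussed in the rebuttal.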
Summary: This paper concerns several main results/themes: - the authors show a theoretical result on a prior model known as LPCA. Their result indicates that exact factorizations for graphs under the LPCA model are possible under a bounded arboricity assumption, which is more generally applicable than the prior degree-based assumption. - the authors propose a graph generative model that uses two non-negative vectors (one for homophily, one for heterophily) per node as well as a non-linear linking function to embed a given graph in an interpretable manner. This model generalizes the prior LPCA model, and has nice properties such as having factorizations that are non-linear, capture heterophily in addition to homophily, and are non-negative. - they show that the theoretical results that are available for LPCA (including the one that they derived) are applicable to their more general model as well. - they show promising experimental results on datasets. Strengths: - Originality: The two original contributions are mainly as follows. 1. the theoretical result on LPCA representations which uses a new arboricity criterion rather than a max degree criterion. 2. The proposed generative model (which is more general than LPCA). To the best of my knowledge, these two contributions have not previously been proposed. I believe that the level of originality of this article, while not groundbreaking, is sufficient for publication at NeurIPS. - Quality: The quality of the experiments and the theoretical results are sound and technically solid, to the best of my knowledge. The results of the experiments support the authors' claims that their proposed approach captures heterophily in an interpretable manner and that in specific cases it outperforms competitors. - Clarity: the paper is presented in a clear and easy to read manner, which is much appreciated. The authors motivated their results well. - Significance: Graph embeddings are certainly a relevant area of research nowadays.
The authors' contributions have moderate impact, given their expressive, interpretable and more generally applicable model. It is significant enough for publication at NeurIPS. Weaknesses: - The authors covered virtually all the bases. Overall, I do not see any obvious shortcomings/omissions in the paper. I do have some comments/questions below: - From a theoretical and originality perspective, the advances that they make over the prior result (the arboricity vs degree assumption) appears rather marginal (similar to Chanpuriya 2020). Yet a new result is a new result. I do not find this to disqualify this article from being accepted. - I find a small disconnect between the theoretical results and the proposed methods. The theory says that exact representations exist. The methods provide one way to find a representation. But there seems to be little linking the two: there seems to be no guarantee that what the algorithm actually finds in practice is the exact representation. It is possible that I might be missing something here, and any clarifications from the authors would be appreciated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - See above section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The main limitations of these classes of methods is scalability and computational complexity. The authors briefly address this in the paper in A.3 for example by mentioning a stochastic training approach. I think the author's discussion of the limitations is sufficient but could be improved, for example, by discussing computational complexity (big Oh run time) in more explicit manners in both the deterministic and the stochastic case. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review. We address some points you raise below. >From a theoretical and originality perspective, the advances that they make over the prior result (the arboricity vs degree assumption) appears rather marginal (similar to Chanpuriya 2020). We believe that, while this work certainly builds on Chanpuriya et al., the combination of the arboricity-based bound with the non-negative component yields a new conceptual contribution. At a high level, the result in Chanpuriya et al. is that bounded-degree graphs admit exact low-rank factorization. In this work, we show that for sparse graphs in general (which covers a much wider range of real-world networks), you can exactly express the graph structure in terms of node communities, so long as you also allow for "heterophilous" communities. Both our theoretical and empirical work ground a core idea that could be integrated into larger and more performant models than the ones we test here, including deep models: the benefit of adding to nonnegative embedding models a second, heterophilous embedding in addition to the usual homophilous embedding. We see Chanpuriya et al. as being more abstracted from practical applications. >I find a small disconnect between the theoretical results and the proposed methods. The theory says that exact representations exist. The methods provide one way to find a representation. But there seems to be little linking the two: there seems to be no guarantee that what the algorithm actually finds in practice is the exact representation. It is possible that I might be missing something here, and any clarifications from the authors would be appreciated. Indeed, we provide no theoretical guarantee that our gradient-descent-based training algorithm in the empirical section will find an exact embedding. 
In some quick experiments in response to Reviewer SSbD (shown in Figure 1 of the rebuttal PDF), we did find that this algorithm can find an exact embedding, at least on the synthetic graph, but we do not focus on this for a few reasons. As we note around Line 290, exact factorization is not the goal of our empirical contribution. Our training algorithm is quite close to that of Chanpuriya et al., and essentially the whole of their empirical work concerns finding these exact embeddings with such a training algorithm. They report the embedding dimensionalities needed for exact factorization of various networks, among other related information, and we have little to add on that front. We instead use the empirical section to complement our theoretical contributions and highlight the core idea we discuss above. We also note that the proofs in our theoretical section are constructive and hence provide a totally different “fitting” algorithm that is guaranteed to give an exact factorization, though the resulting embeddings are unlikely to be of any practical use. >The main limitations of these classes of methods is scalability and computational complexity. The authors briefly address this in the paper in A.3 for example by mentioning a stochastic training approach. I think the author's discussion of the limitations is sufficient but could be improved, for example, by discussing computational complexity (big Oh run time) in more explicit manners in both the deterministic and the stochastic case. This is a good point. We note why we focus on the less scalable non-stochastic version in the main paper, and we discuss and evaluate a more scalable stochastic version in the appendix, but we will also specifically note the computational complexities of these versions, which are quadratic and linear in the number of nodes, respectively. --- Rebuttal Comment 1.1: Title: reply to authors Comment: I thank the authors for reading and address my comments. 
I am satisfied with the authors' responses.
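To make the "homophilous plus heterophilous embedding" idea discussed in the rebuttal above concrete, here is a minimal toy sketch (our own illustration, not the paper's algorithm or data): for a small bipartite graph, thresholding the score $XX^T - YY^T$, with a nonnegative homophilous embedding $X$ and a nonnegative heterophilous embedding $Y$, reproduces the adjacency matrix exactly.

```python
import numpy as np

# Toy 4-node bipartite graph: nodes {0, 1} vs. {2, 3}, edges only across groups.
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])

# Nonnegative homophilous embedding: all nodes share one "community".
X = np.ones((4, 1))
# Nonnegative heterophilous embedding: indicator of the two groups.
Y = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)

# Pairwise score: homophilous similarity minus heterophilous similarity.
# Same-group pairs (including self-pairs) score 1 - 1 = 0; cross-group pairs score 1 - 0 = 1.
S = X @ X.T - Y @ Y.T

# Thresholding recovers the adjacency exactly, including the empty diagonal.
A_hat = (S > 0.5).astype(int)
assert np.array_equal(A_hat, A)
```

Without the heterophilous term, no thresholded nonnegative $XX^T$ alone can represent this graph: by Cauchy-Schwarz, for any edge $(i, j)$ at least one of the self-pair scores $x_i \cdot x_i$ or $x_j \cdot x_j$ is at least as large as $x_i \cdot x_j$, so a predicted cross-group edge would force a predicted self-loop.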
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their time and helpful suggestions. We address each reviewer with an individual reply. Here, we post the PDF of rebuttal figures that are referenced in these replies. Pdf: /pdf/3fc7ee135670d9dff2dccaebb77a19f5aba91c10.pdf
NeurIPS_2023_submissions_huggingface
2,023
Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data
Accept (spotlight)
Summary: This work proposes a structure-free graph condensation paradigm to distill a large-scale graph into a small-scale graph node set without explicit graph structures. The node attributes of the obtained small-scale condensed graph-free data encode topology structure information, and the condensed node set can serve as a substitution for the large-scale graph when training GNNs, achieving comparable performance on the test set. The method contains two techniques: (1) a parameter matching schema under imitation learning, and (2) a dynamic evaluation schema with a GNTK score. The experimental results verify its claims and show performance effectiveness. Strengths: S1-[originality]: This work's most remarkable part is its structure-free condensation paradigm, which removes the graph structure by encoding it into node attributes, so that the obtained condensed graph-free data contains only a set of informative nodes. I think this paper will inspire many interesting questions and topics for future research, given that using only a set of nodes to represent the whole large-scale graph for training still obtains comparable test results. S2-[clarity]: Overall, the problem descriptions, contributions, techniques, and experimental results of this paper are described clearly and soundly. First, the concept of structure-free graph condensation and its relevant scenarios have been clearly defined and exemplified. Second, the contributions and techniques, including the training trajectory matching with online GNN parameters and the GNTK-based dynamic score, are clear and sound. This paper eases a three-level optimization to a bi-level optimization without condensed graph structures, which is a novel and interesting structure-free graph condensation pattern. The GNTK-based dynamic evaluation score enables a closed-form solution for GNN evaluation and sounds novel to me.
Third, the experimental results compared with whole-graph training results support this paper's arguments. An ablation study verifies the effectiveness of the structure-free paradigm. Another interesting finding is that the results across different GNN architectures show surprisingly good generalization ability of the condensed node set. This is an inspiring result and deserves future exploration. S3-[significance]: This work addresses graph condensation, an interesting problem for reducing the effects of large graph scale and quantity, and it might help reduce computation in real-world applications. Weaknesses: Only some technical details are not well presented. W1: How are the condensation ratios chosen, e.g., Cora with 1.3% and Citeseer with 0.9%? More details and explanations should be provided. W2: How are the condensed graph labels Y handled? Are they generated according to the original large-scale graph's classes and per-class examples? W3: Are the student steps and teacher steps empirical hyperparameters, or learnable parameters that need to be optimized in the condensation process? More details should be given. Besides, this work provides some interesting results, and more discussion of the experimental findings should be given, for instance, why the synthesized graph-free data has good generalization ability. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please provide more details corresponding to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the authors have mentioned limitations; no negative social impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer 4KcB** Thank you for taking the time to review our work and for providing your valuable feedback. We are encouraged by your positive comments on our originality and clarity, and we appreciate your recognition of our research's significance. The following are our detailed responses; we hope they help answer your questions. **W1: Details of condensation ratio**: The condensation ratio is chosen based on the labeling rates of the benchmark graph datasets for training, i.e., the proportion of labeled training nodes relative to all nodes. For example, in Cora and Citeseer, the training-set labeling rates are 5.2% and 3.6% (for 20 labeled samples per class), respectively, and we choose ratios as {25%, 50%, 100%} of the labeling rate, corresponding to {1.3%, 2.6%, 5.2%} for Cora and {0.9%, 1.8%, 3.6%} for Citeseer. Hence, 5.2% and 3.6% are the maximum ratios for Cora and Citeseer, respectively. **W2: Condensed graph labels Y**: As mentioned in Lines 133-134 of the main submission, for all graph condensation tasks, 'labels $Y^{\prime}$ of the small-scale condensed graph are pre-defined based on the class distribution of the label space $Y$ in the large-scale graph.' For instance, the Flickr dataset has $N = 44,625$ nodes from $C=7$ classes (labeled from '0' to '6') in its training graph. Among these $N = 44,625$ nodes, $N_{C1}=4,321$ nodes are labeled '1', $N_{C2}=3,164$ nodes are labeled '2', and $N_{C0}+N_{C1}+\cdots+N_{C6}=N$. In this case, for the $r=0.1$% condensation ratio, the synthesized condensed graph-free data would have $N_{C1}\times r = 4,321\times 0.1$% $\approx 4$ nodes labeled '1' and $N_{C2}\times r=3,164\times 0.1$% $\approx 3$ nodes labeled '2', and so on for the other classes. **W3: Student/teacher step settings and discussion of experimental findings.** Student steps and teacher steps are empirical hyperparameters that do not need to be optimized.
We have provided their settings in Table A5 of the Appendix. We appreciate your recognition of the good generalization ability of the synthesized graph-free data condensed by our proposed SFGC. One of our contributions is to overcome the poor generalization ability of existing graph condensation methods with our proposed structure-free paradigm. The main reason for such good generalization ability may be that different GNN architectures mainly differ in their convolution operations along graph structures; our structure-free paradigm minimizes the impact of these different convolution operations by learning only with a condensed graph node set, leading to consistent and reliable performance across various GNN architectures. Thanks for your suggestion; we will add more discussion in our revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the response; my confusion has been clarified. I keep my score.
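The per-class label allocation described in the W2 response above can be sketched as follows (a hypothetical helper of our own, not code from the paper): each class contributes its original node count scaled by the condensation ratio $r$, rounded to the nearest integer.

```python
def condensed_class_counts(class_counts, r):
    """Allocate condensed node labels proportionally to original class sizes.

    class_counts: dict mapping class label -> number of nodes with that label
                  in the original training graph.
    r: condensation ratio (fraction of nodes to keep per class).
    """
    return {c: round(n * r) for c, n in class_counts.items()}

# Flickr-style example from the response: r = 0.1% of each class.
counts = {1: 4321, 2: 3164}  # nodes labeled '1' and '2' in the training graph
print(condensed_class_counts(counts, 0.001))  # → {1: 4, 2: 3}
```

This matches the worked example in the response: $4{,}321 \times 0.1\% \approx 4$ and $3{,}164 \times 0.1\% \approx 3$.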
Summary: This paper studies an interesting problem. The authors propose a new paradigm for reducing the size of large-scale graphs without explicit graph structures. The proposed SFGC encodes topology structure information into node attributes in synthesized graph-free data. Extensive experiments demonstrate the effectiveness of the proposed method compared with existing graph condensation methods. Strengths: - The paper proposes a novel SFGC approach. - This paper studies the effectiveness and generalization ability issues in graph condensation. - This paper provides theoretical illustrations of the proposed structure-free graph condensation paradigm from the views of statistical learning and information flow, respectively. Weaknesses: - The work fails to provide a clear motivation for why it is important or required to reduce the size of big graphs while lacking explicit graph structures. - The paper could benefit from a more thorough discussion of the limitations and potential future directions of the proposed approach. - Without access to the source code, it is challenging to reproduce the results. The experimental setting is described; however, it would be beneficial to have access to the source code to guarantee that the findings are reproducible. If the main concern about reproducibility is solved, I am willing to increase the score. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - How computationally efficient is the SFGC strategy compared to current graph condensation techniques? - How effective and robust is the SFGC method on real-world graphs with noisy or insufficient data? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors could benefit from a more thorough discussion of the limitations and potential negative societal impacts of their work, as well as potential ways to mitigate these issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer PLzT** We sincerely appreciate the time and effort you dedicated to reviewing our work. We have carefully considered your comments and suggestions. Following the instructions for the rebuttal, we have sent the source code via an anonymous link to the AC; we would appreciate it if you could obtain the link from the AC. The source code will be made public upon acceptance. The following are our detailed responses; we hope they help answer your questions. **W1: Motivation for why it is "important or required" to reduce the size of big graphs lacking explicit graph structures**: In the general context of dataset condensation and graph size reduction, our proposed SFGC shares the motivation that reducing big graphs into small counterparts helps reduce storage costs and accelerates GNN model development, as mentioned in Lines 25-30 of the main submission. Moreover, the motivations for our proposed 'structure-free' graph condensation pattern (without explicit graph structures) are the following: (1) simplifying the existing complex 'triple-level' optimization ('with structure') into an effective 'bi-level' process ('structure-free') to improve graph condensation effectiveness (Ref. Lines 60-66 of the main submission); (2) alleviating the dependence of condensed graph data on certain graph convolution operations that rely on graph structures, to improve the generalization ability of the synthesized graph data (Ref. Lines 67-70 of the main submission). Hence, even without explicit graph structures, our proposed SFGC encodes topology structure information into the node attributes of the synthesized graph-free data (Ref. Lines 76-78 of the main submission), achieving test performance comparable to the original large-scale graph.
Besides, the importance and necessity of our proposed structure-free graph condensation are also reflected in various potential practical applications (Ref. Section B of the Appendix), covering graph neural architecture search, privacy protection, adversarial robustness, and continual learning. **W2: More thorough discussion of the limitations and potential future directions.** As mentioned at the end of our main submission (Ref. Lines 358-360), our proposed SFGC mainly works on node-level condensation by reducing the number of nodes in a single graph; as a result, our condensed graph-free data has limited ability on graph-level tasks, which require multiple graphs to supervise GNN training. In light of this, as a potential future direction, we would like to further explore the graph-level condensation problem, which should simultaneously reduce "the number of nodes" and "the number of graphs" in a large-scale graph collection. The proposed long-term training trajectory meta-matching scheme would be considered for multiple-graph condensation scenarios, and the core challenge would be how to jointly incorporate node-level and graph-level complexity and diversity into the condensation process for high-quality condensed data. Besides, condensing graph data for various downstream graph learning tasks is another promising research direction, for instance, condensing large-scale graph data to a small-scale counterpart to improve GNN online serving performance. **Q1: How computationally efficient is the SFGC strategy compared to current graph condensation techniques?**: As mentioned in our Appendix, in Table A3, Figure A1, and Sec. E.1, we have provided (1) a running time comparison, (2) the dynamic tensor memory cost, and (3) a theoretical time complexity analysis, respectively, to illustrate the computational efficiency of our proposed SFGC and the existing graph condensation method GCOND [14].
In summary, our proposed method has (1) 5x less running time (SFGC: 150.35s vs. GCOND [14]: 885.58s) on Ogbn-arxiv condensation; (2) significantly lower dynamic memory usage (SFGC: 118.116 vs. GCOND [14]: 910.4, in units of 100 MB) over the overall optimization process; and (3) at least $O(LN'^2d)$ less time complexity. More detailed results and analysis can be found in our Appendix. These results reflect the good computational efficiency of our proposed SFGC compared with the current graph condensation method. **Q2: How effective and robust is the SFGC method on real-world graphs with noisy or insufficient data?** For **noisy data**, we would like to emphasize that, in the context of graph condensation, our proposed SFGC is a "data-centric" method that synthesizes small-scale graph-free data from the original input graph, with the constraint of ensuring comparable test performance on GNNs. Hence, if the original graph is noisy and taken as the input for training a certain GCN* model, then in the process of graph condensation the condensed small-scale graph-free data might learn the distilled comprehensive information (including noise) from the original graph by imitating the GCN*'s learning behavior. In this case, the noisy information could also be filtered by the graph condensation process. For **insufficient data**, as a graph condensation technique, the central focus of our proposed SFGC is to distill large-scale graph data into small-scale synthetic graph-free data as its training substitution. Hence, when the given graph data is 'insufficient' (not of large-scale quantity), there might be no need to condense it further in practical scenarios. --- Rebuttal Comment 1.1: Comment: Thanks for your response. The response has addressed my concerns. I would like to raise my score.
--- Rebuttal 2: Title: Gentle Reminder to Reviewer PLzT Comment: Dear Reviewer PLzT, Thank you again for taking the time to provide valuable feedback on our paper; we genuinely hope that our responses have adequately addressed your concerns and questions. We would also like to kindly ask whether you have received the anonymous code link from the AC. We sincerely look forward to your response, and we are always open to further discussion of any questions or concerns you may have regarding our work. Thank you and best regards.
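The long-term training trajectory meta-matching scheme discussed in these responses can be sketched with the normalized parameter-matching loss standard in trajectory-matching distillation work (our generic rendering under that assumption, not necessarily the paper's exact objective): a student trained on the condensed data from a teacher checkpoint should land close to a later checkpoint of the teacher trained on the full graph.

```python
import numpy as np

def trajectory_matching_loss(theta_student, theta_teacher_end, theta_teacher_start):
    """Squared distance between student and teacher parameters, normalized by
    how far the teacher itself moved over the matched trajectory segment."""
    num = np.sum((theta_student - theta_teacher_end) ** 2)
    den = np.sum((theta_teacher_start - theta_teacher_end) ** 2)
    return num / den

# Toy flattened parameter vectors (hypothetical checkpoints).
start = np.zeros(3)                      # teacher checkpoint the student starts from
end = np.ones(3)                         # later teacher checkpoint to match
student = np.array([0.9, 1.0, 1.1])      # student params after training on condensed data
loss = trajectory_matching_loss(student, end, start)  # 0.02 / 3 ≈ 0.0067
```

Minimizing this loss with respect to the condensed node set (through the student's training steps) is the bi-level structure the rebuttals refer to: the inner loop trains the student on the synthetic data, the outer loop updates the synthetic data.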
Summary: This paper presents a new method to condense a training graph into a smaller number of disconnected nodes, such that a GNN trained on these nodes performs similarly at test time to one trained on the original graph. Strengths: **Originality.** Structure-free condensation has been reported before (GCOND-X), but the proposed method for achieving it is novel. **Quality.** The overall quality of the work is good, including the presented techniques, results, tables, and plots. **Clarity.** The paper is mostly clear. **Significance.** Graph condensation can be very useful under the right application scenarios. I don't see any special significance in aiming for structure-free condensation, except that the method ends up giving better results (e.g., due to easier optimization, as discussed in the paper). Weaknesses: The paper is not particularly easy to follow. GCOND-X is not discussed explicitly in the related works, even though it is a structure-free graph condensation method. Hyper-parameter details have not been provided. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What does the condensed structure-free data look like in relation to the original training data? Some intuitive visualization would be nice to have. What GNN depths are used? It seems that the fidelity of structure-free condensation should go down as more neighbor-aggregation steps are involved at test time, as they were not encountered when training on the condensed data. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: A discussion of limitations is missing. An obvious limitation is that structure-free condensation won't work under ego- and neighbor-embedding separation [1], which is an effective recommendation for heterophilic datasets. [1] Zhu et al.
"Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs." NeurIPS 2020 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer yKD6** We sincerely appreciate your thoughtful review of our paper. We are glad to hear that you recognize the significance of graph condensation research, and we thank you for the encouraging comments on our work. We have carefully considered your comments and suggestions; the following are our detailed responses, which we hope help answer your questions. **W1: Discussion of GCOND's graphless variant GCOND-X**: Our SFGC has different motivations and technical implementations from GCOND-X [14]. GCOND aims to simultaneously learn node attributes and graph structure; the 'graphless' variant GCOND-X is not their main goal but a by-product for ablation studies. In contrast, our SFGC directly encodes structure information into more compact condensed node attributes, with comprehensive theoretical analysis and thorough empirical studies in Appendix Sec. D and Sec. 3. Besides, our SFGC conducts an offline long-term training trajectory meta-matching scheme for condensation, whereas GCOND-X conducts an online gradient-matching scheme. Thanks for your suggestion; we will add these discussions in the final version. **W2: Hyper-parameter details**: We have provided detailed hyper-parameter settings in Table A5, containing student and teacher steps, meta-matching learning rate, and GNN training step size for all 5 datasets with 15 condensation cases. **Q1: Visualization of the condensed structure-free data in relation to the original training data**: Thanks for your suggestion; the visualization of (a) the original Cora dataset and (b) the Cora dataset condensed (r=5.2%) by our proposed SFGC is shown in **Fig. Re1 of the response PDF file** to illustrate their relationship, and we will add this to the final version. It can be observed that our proposed method significantly distills the original graph with complex structures (dense black edges) into a reduced small-scale node set without explicit graph structures.
Importantly, they share the same class-label space and similar test performance, as illustrated by the experimental results in Table 1 of our main submission. **Q2: What GNN depths are used? As neighbors (structures) were not encountered when training on the condensed data, would the performance of structure-free condensation drop at test time?**: We use a two-layer GCN for the condensation process. Even though the topology neighbors "were not encountered when training on the condensed data" explicitly, the performance of our proposed SFGC on the test set of the large-scale graph does not drop at test time (as illustrated by the results in Table 1 of our main submission). That is because our proposed structure-free condensation enforces the condensed node features to encode the topology structure information of the original large-scale graph, and its GNN learning-behavior imitation strategy comprehensively distills the large-scale graph information (both nodes and structures) into the small-scale condensed node set. **Discussed limitation and potential future direction**: We mentioned the limitation at the end of our main submission (Ref. Lines 358-360): our proposed SFGC mainly works on node-level condensation by reducing the number of nodes in a single graph, which has limited ability on graph-level tasks that require multiple graphs to supervise GNN training. In light of this, as a potential future direction, we would like to further explore the graph-level condensation problem, which should simultaneously reduce "the number of nodes" and "the number of graphs" in a large-scale graph collection. **L1: Heterophilic datasets**: Thank you for suggesting the interesting heterophilic graph type.
On a heterophilic graph dataset, a possible and straightforward solution under our proposed SFGC would be to use a heterophilic GNN, e.g., the H2GCN mentioned in [1], as the condensation model (rather than the vanilla GCN in our submission) to distill the information of the original heterophilic graph into the condensed graph-free data. Once the heterophily characteristic of the original heterophilic graph is condensed into the synthesized node attributes, we still use the condensed data to train a GNN model and infer on the test set of the heterophilic graph. However, effectively learning the intricate and diverse heterophily characteristics presents a severe challenge that deserves more future exploration. We deeply value your perceptive insights and will add a discussion of this interesting research question to the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I would like to keep my score.
Summary: This paper studies the problem of reducing the size of a large graph dataset while preserving task-relevant information. It introduces a new methodology to distill large-scale real-world graphs into smaller synthetic graph node sets by disregarding graph structures to create condensed graph-free data. The approach involves two key components: a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data, and a graph neural feature score metric for evaluating the quality of condensed graph-free data dynamically. Extensive experiments have demonstrated the efficiency and effectiveness of the proposed method. Strengths: 1. The paper addresses an interesting problem in the field of graph condensation, which has significant practical implications. 2. The proposed SFGC methodology exhibits commendable performance and generalization across various graph neural network (GNN) architectures. 3. The utilization of the Graph Neural Tangent Kernel (GNTK) to avoid iterative training of GNNs adds an interesting aspect to the paper. Weaknesses: Suggestions for Improvement: 1. To further strengthen the paper, it is recommended to demonstrate the benefits of SFGC in practical applications such as neural architecture search, privacy protection, adversarial robustness, or continual learning. Including at least one of these applications would greatly enhance the paper's significance. 2. It would be of interest to clarify how SFGC can benefit neural architecture search for GNNs since it does not generate graph structures, which are essential for GNNs, and different GNNs may require distinct operations over the graph structure. 3. While Figure A2 provides a comparison of the running time between GCN and GNTK, it would be valuable to include a detailed complexity analysis of both methods concerning the number of nodes. 
* Specifically, elaborate on the quadratic complexity of GNTK due to the pairwise kernel matrix calculations and the matrix inversion operation. * I guess that is also why on Reddit (r=0.05%) GCN and GNTK exhibit similar running times. What if we further increase r to 0.1%? 4. It would be beneficial to include an empirical comparison with DosCond, as it also aims to accelerate the graph condensation process, to provide a comprehensive evaluation of SFGC. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Complexity comparison of GCN and GNTK 2. What is the exact formulation for $\mathcal{K}$ in Eq. 7? Specifically, what are the values for $\beta$ and $K$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer KcU6** Thanks for sharing your thoughts and questions with us. We greatly appreciate your valuable suggestions on discussing the practical applications and complexity of our proposed SFGC method in more depth. We have taken your suggestions into careful consideration and provide detailed responses to your questions below; we hope these answers clearly address your concerns. **W1: Demonstrate the benefits of SFGC in practical applications**: Thanks for your suggestion. We have given detailed illustrations of the benefits of SFGC in Appendix Sec. B, Potential Application Scenarios. For instance, for graph neural architecture search (GraphNAS), which needs to repeatedly train different candidate GNN architectures, the small-scale condensed graph-free data generated by our SFGC can be taken as a representative substitution for the large-scale graph, significantly saving computation costs and accelerating new GNN development. A more detailed explanation can be seen in the response to W2. **W2: How SFGC can benefit neural architecture search for GNNs**: According to the survey work *[IJCAI-Survey-2021]* below, the design of the GraphNAS search space can involve: (a) a micro search space with aggregation functions, aggregation weights, number of attention heads, combining functions, feature dimensionality, and non-linear activation functions; (b) a macro search space with layer-wise combination functions; (c) the pooling function; and (d) hyper-parameters. Hence, in this fine-grained search space, our proposed structure-free data can only slightly affect the design of "aggregation functions and weights" in the (a) micro search space. Considering that all the different types of aggregation weights in different GNNs are calculated based on node features, SFGC's condensed graph-free data has implicitly encoded topology structure information into the node features.
Hence, SFGC could still benefit GraphNAS by supporting the design of node-attribute-weighted aggregation functions in the search space, as well as all other aspects in (b)-(d), to design new GNN models driven by specific tasks. Thanks for your valuable suggestions; we will add these discussions to the final version. *[IJCAI-Survey-2021] Ziwei Zhang, Xin Wang, Wenwu Zhu. Automated Machine Learning on Graphs: A Survey.* **W3: Detailed complexity analysis of GCN and GNTK**: Let $T$ be the number of GCN training iterations, $N$ and $N^{'}$ the numbers of large-scale graph nodes and condensed graph nodes, respectively, $L$ the number of layers, and $F$ the feature dimension. In our SFGC, we use $L=2$; hence for GCN, the complexity is dominated by $\mathcal{O}(4TN^{'}F^{2})$. For the node-level GNTK calculation, the complexity is dominated by $\mathcal{O}(4N^{2}N^{'2})$. Hence, under a certain number of iterations $T$, GCN and GNTK might have comparable complexity ($TF^{2}$ vs. $N^{'}N^{2}$), which might be why GCN and GNTK have similar running times on Reddit (r=0.05%) (GNTK is still faster). When the condensed graph has more nodes, for instance if Reddit's r is further increased to 0.1%, GCN needs more iterations $T$, and GNTK also needs to calculate a bigger Kronecker product matrix. It might be hard to make a straightforward comparison, since it is hard to determine how to set hyperparameters (for instance, $T$ and the learning rate) for iteratively training GCN on the intermediate condensed graph during optimization. This challenge is our main motivation for leveraging the closed-form GNTK in dynamic evaluation, avoiding iterative GCN training tied to hyperparameter settings. We sincerely thank you for your suggestion, and we will add these discussions in the final version.
**W4: Empirical comparison with DosCond**: The node classification performance comparison between our proposed SFGC and DosCond is shown in Table Re-KcU6-1, where the DosCond results are taken from its paper. As can be observed, our proposed method still outperforms DosCond. DosCond improves the graph condensation process by simplifying short-range gradient matching to a one-step pattern to accelerate condensation; in contrast, our proposed SFGC improves the graph condensation process by simplifying the optimization objective with a structure-free paradigm to obtain high-quality condensed graph-free data. DosCond and our proposed SFGC thus have different targets. Thanks for your valuable suggestion; we will add this discussion to our final version. **Table Re-KcU6-1. Performance comparison between DosCond and our proposed SFGC.** | Methods | Cora (r=2.6%) | Citeseer (r=1.8%) | Flickr (r=0.1%) | | :--- | :---: | :---: | :---: | | DosCond | 80.0 | 71.0 | 46.1 | | SFGC (**Ours**) | **81.7** | **72.4** | **46.6** | **Q2: Formulation of Eq. (7)**: The $\mathcal{K}$ in Eq. (7) is the GNTK kernel, and its detailed calculation is illustrated in Eq. (2) of our main submission. The $\beta$ denotes the number of fully-connected layers in calculating the GNTK in Eq. (2), and we set it to 2 in our work. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you for the detailed response; some of my concerns have been addressed. I tend to accept this paper and I think my current score is reasonable.
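The "closed-form solution of GNN evaluation" that the GNTK enables in Eq. (7) follows standard kernel ridge regression. Here is a generic sketch (our own illustration, with a toy positive-definite kernel standing in for the GNTK; the actual GNTK recursion from Eq. (2) is not reproduced):

```python
import numpy as np

def kernel_ridge_predict(K_test_train, K_train, y_train, lam=1e-3):
    """Closed-form prediction: y_hat = K_test,train (K_train + lam*I)^{-1} y_train."""
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + lam * np.eye(n), y_train)
    return K_test_train @ alpha

# Toy well-conditioned kernel standing in for the GNTK (hypothetical data).
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 5))
K_train = Z @ Z.T + np.eye(5)   # PSD Gram matrix plus identity: positive-definite
y_train = np.array([0., 1., 0., 1., 1.])

# Evaluating on the training points themselves recovers the labels closely.
y_fit = kernel_ridge_predict(K_train, K_train, y_train)
```

Because the solve is a single linear system, no iterative GNN training (with its learning rate and step-count hyperparameters) is needed to score a candidate condensed set, which is the computational motivation given in the response to W3.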
Rebuttal 1: Rebuttal: **Common response to all reviewers**: We thank all reviewers for their thorough review and valuable suggestions. We are delighted that our contributions have been positively acknowledged, including: **(1) Novel and interesting problem of a structure-free graph condensation paradigm for practical application scenarios (@All Reviewers!)** **(2) Effective long-term training trajectory meta-matching framework with bi-level optimization (Reviewer 4KcB, Reviewer yKD6)** **(3) Interesting and effective GNTK-based evaluation metric with the graph neural feature score (Reviewer 8VHW, Reviewer 4KcB, Reviewer KcU6)** **(4) Numerous and convincing experimental results with superior performance effectiveness (@All Reviewers!)** **(5) Good generalization ability of our synthesized data across architectures (Reviewer KcU6, Reviewer PLzT, Reviewer 4KcB)** We greatly appreciate all the positive comments and valuable suggestions for our work. These comments encourage us to continue our efforts in advancing this promising new graph condensation research area for real-world applications. More detailed responses follow. We hope our responses address all weaknesses and questions! Please let us know if there is any remaining concern. We have considered your thoughtful suggestions and have modified the manuscript accordingly for the final version. Pdf: /pdf/5ff0379f24c4d82a3c52c0a042eb288243469aaf.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a structure-free graph condensation method designed to distill large-scale graphs into small-scale graph-free data while preserving comparable expressiveness. The proposed method, named SFGC, achieves this by condensing the graph topology into an identity matrix, effectively embedding the structure information into the node attributes. To effectively imitate the GNN training process, SFGC employs a training trajectory meta-matching scheme. Additionally, a graph neural feature scoring technique is used, which dynamically evaluates the quality and relevance of the synthetic graph-free data. Strengths: - Graph condensation is a crucial research area with many real-world applications. The authors propose a new “structure-free” graph condensation method. - They provide convincing experimental results and comprehensive discussions overall. - The idea of using a GNTK-based graph neural feature score metric is interesting and effective. - The paper is clearly written and easy to follow. Supplementary materials also offer valuable additional information regarding the model and its performance behaviors. Weaknesses: - The authors claim that this is the first work that distills large-scale graphs to small-scale synthetic graph-free data, but the previous work GCOND-X also appears to perform a similar task of distilling large graphs to small-scale graph-free data. Further clarification on this would be needed. - It appears that the experiments do not precisely determine the extent to which different aspects of the model contribute to the performance improvement. - The effectiveness of the “structure-free paradigm” in SFGC should also be convincingly demonstrated. In Section 3.2, a graph structure is generated from condensed node features and compared with SFGC. However, since the graph structure is already included in the node features, inputting the graph structure and condensed node features into the GNN model again could cause over-smoothing.
Therefore, it is unclear whether the performance improvement over existing models is due to the structure-free paradigm or to over-smoothing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could you provide more details on how the three condensation ratios for each dataset used in the experiments were determined? - I am curious why the performances of the five variants of SFGC in Figure 3 (Citeseer), used for synthesizing graph structures, drop significantly at a condensation ratio (r) of 3.6%, which is even much worse than at smaller condensation ratios. - I would like to know whether the number of expert trajectories affects the performance and how the number of expert trajectories was set. - Claiming that the long-term parameter distribution matching method is superior because the SFGC method outperforms GCOND and GCOND-X in Table 1 might be somewhat hasty. While the superior performance could be due to the consideration of long-range matching, it seems that finding the optimal condensed graph-free data, as seen in Table A4, also plays a significant role. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not include a discussion on the potential limitations of this study or offer insights into possible future research directions. - An important consideration is that if SFGC necessitates the pre-training of a GNN on a large-scale graph to obtain a pre-trained training trajectory, it implies that real-world large-scale data must be initially trained on. While this process is distinct from the graph condensation pipeline, it nonetheless introduces challenges associated with intensive computational demands.
Addressing and suggesting solutions for this could provide valuable directions for future studies. - I find the potential of employing different GNN models as the condensation network intriguing. As the method of incorporating graph structural information into node features heavily relies on the characteristics of the condensation model, using a different GNN model could potentially alter the properties of the condensed graph and impact the final performance. Therefore, additional experiments with various condensation networks and test networks would offer valuable insights. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
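As context for the structure-free paradigm this review examines, here is a minimal numpy sketch (not the authors' code; the standard symmetrically normalized GCN layer below is an assumption about the condensation network) of why condensing the topology into an identity matrix yields graph-free data: with an identity adjacency, a GCN layer collapses to a plain MLP layer on the condensed node attributes, so no explicit structure needs to be stored or synthesized.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))   # 5 condensed nodes, 4 features
W = rng.normal(size=(4, 3))   # layer weights

# Structure-free condensed data: no edges, so A + I is the identity
# and the GCN layer reduces exactly to an MLP layer ReLU(X W).
out_gcn = gcn_layer(np.zeros((5, 5)), X, W)
out_mlp = np.maximum(X @ W, 0.0)
assert np.allclose(out_gcn, out_mlp)
```

This equivalence is also why the natural ablation is to compare against variants that re-synthesize a structure from the condensed node features.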
Rebuttal 1: Rebuttal: **Response to Reviewer 8VHW** We are glad that you recognize the significance of graph condensation. We have carefully considered your thoughtful comments and suggestions, and the following are our detailed responses. **W1: Clarification on GCOND’s graphless variant GCOND-X**: Our SFGC has different motivations and technical implementations from GCOND-X [14]. GCOND aims to simultaneously learn node attributes and graph structure; the 'graphless' variant GCOND-X is not their main goal, but a by-product for an ablation study. In contrast, our SFGC directly encodes structure information into more compact condensed node attributes, with comprehensive theoretical analysis and thorough empirical studies in Appendix Sec. D and Sec. 3. Besides, our SFGC conducts an offline long-term training trajectory meta-matching scheme for condensation, whereas GCOND-X conducts an online gradient matching scheme. We will add these discussions in the final version. **W2: To what extent each aspect of the model contributes to the performance improvement**: We have three core components in our framework: (C1) the training trajectory meta-matching scheme, (C2) the graph neural feature score metric, and (C3) the structure-free paradigm. We have individually analyzed the effectiveness of each one, i.e., Lines 327-336 (C1), Table A4 (C2), and Fig. 3 (C3) in the submission. To summarize the component contributions, we show **Table.Re 1 in the response PDF file** for the ablation study on the Cora dataset with a 2.6% condensation rate. As observed, IDX-1 vs. IDX-3, IDX-3 vs. IDX-4, and IDX-2 vs. IDX-3 verify the effectiveness of C1, C2, and C3, with 5.5%, 0.5%, and 7.3% improvements, respectively, illustrating the effectiveness of each aspect of our proposed SFGC. **W3: Whether the effectiveness of SFGC is due to the structure-free paradigm or over-smoothing.** We have compared our “structure-free paradigm” (w/o structure) vs.
the other 5 variants (w/ structure) in Sec. 3.2 to verify its effectiveness, where the 5 w/-structure variants follow existing graph structure learning methods, i.e., GCOND [14], to synthesize graph structures from condensed node features. Such a graph structure learning strategy might suffer from over-smoothing when the “graph structure is already included in the node features”. Importantly, this is the main drawback of existing methods and motivates us to propose the “structure-free condensation paradigm”, and the performance improvement in Fig. 3 over the other variants (w/ structure) illustrates the effectiveness of the “structure-free paradigm” in SFGC. **Q1: Details of condensation ratio**: Due to response word limitations, please refer to our response to Reviewer-4KcB, W1 for more details. **Q2: Synthesizing graph structures drops at a large condensation rate**: This observation accurately reflects the limitation of existing graph-structure-learning-based condensation methods, which require optimizing a triple-level condensation objective. In fact, when the condensation rate is relatively large (3.6% in Citeseer), the number of nodes is larger and the graph structure learning space increases exponentially, making the problem harder to optimize. Hence, it is intuitive that synthesizing graph structures degrades significantly at a large condensation rate with more nodes. This also motivates us to propose the “structure-free paradigm”. **Q3: The number of expert trajectories $K$**: In our submission, $K$ is empirically set to 200. Here, we also conduct a hyperparameter analysis on $K$ in **Table.Re 2 of the response PDF file**. Intuitively, more experts might provide more guidance for condensation, and fewer experts might limit the imitation of model behaviors. However, when the number of experts keeps increasing from 200 to 300, the performance drops moderately.
One potential reason is that the distribution of more expert GNN parameters would be more complex, and accurately computing their expectation to guide condensation would be more difficult. We will add these discussions in the final version. **Q4: Superiority of long-term parameter distribution matching**: First, compared to the “online short-range gradient matching” of GCOND and GCOND-X, the superiority of our “offline long-term parameter distribution matching” lies in (1) good condensation performance and (2) reduced memory usage (SFGC: 118.116 vs. GCOND [14]: 910.4, both ×100 MB; ref. Figure A1 in the appendix). We list the comparison results w/ and w/o the GNTK-based dynamic evaluation strategy in **Table.Re 3 of the response PDF file**. It shows that our SFGC (w/o dynamic evaluation) still achieves better performance than GCOND-X and GCOND, verifying the effectiveness of the proposed “offline long-term parameter distribution matching”. **Discussed limitation and potential future direction**: We mentioned the limitation at the end of our main submission (ref. Lines 358-360): our SFGC has limited ability on graph-level tasks, which build on multiple graphs. We would like to further explore the graph-level condensation problem, which should simultaneously reduce the number of nodes and the number of graphs in a large-scale graph collection. **L1. Suggestion for the intensive computation of training on real-world large-scale graph data**: We would suggest employing GraphSAGE [10], GraphSAINT [35], or ClusterGCN [KDD-19-Cluster-GCN] as the backbone to train super-large graphs, alleviating the intensive computation. **L2. Employing different GNN condensation networks**: Our work can be straightforwardly extended to employ different GNN models as condensation networks by imitating their training behaviors. Thanks for sharing the thoughtful suggestion; we will add these limitation discussions to the final version.
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Some of my concerns have been addressed and I'd keep my score.
Summary: This paper proposes a graph dataset condensing algorithm whose main idea is a new format of graph representation that does not explicitly include edge information. The authors suggest a new graph kernel and a training-trajectory matching algorithm (in place of online gradient matching) to achieve graph condensation. Experiments demonstrate that the suggested algorithm (named SFGC) outperforms other baselines that condense the graph with explicit edge information in terms of node classification. The authors also showcase the performance of SFGC in terms of generalization ability and empirical learning-time efficiency. Strengths: One of the main strengths of this paper is introducing an interesting method for compressing graph datasets. The idea of explicitly removing edge information seems drastic, but it appears plausible and well-founded, akin to non-negative matrix factorization with non-negativity constraints. The simplicity of the idea allows other researchers to easily adapt it to their own graph-related research. Furthermore, the authors present numerous experimental results demonstrating that the suggested framework outperforms the baseline models. Weaknesses: I have some questions and hope to hear the authors' answers. Figure 3 is hard to read. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Can SFGC be used for edge classification tasks, such as edge prediction? - I am curious why using SFGC is better than using the whole dataset on the Cora data (Table 1). In lines 303 to 304, the authors mention this phenomenon, but it would be beneficial to see the reason backed with examples. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: It is challenging to find the limitation section. Could you please indicate the part where the limitations of the proposed method are discussed? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer U4b1** We sincerely appreciate the time and effort the reviewer dedicated to reviewing our work, and we are pleased to learn that the reviewer finds our proposed SFGC interesting and well-founded. The following are our detailed responses to the reviewer’s thoughtful comments and suggestions. We will also refine our manuscript to make Fig. 3 clearer. **Q1: Can SFGC be used for edge classification tasks, such as edge prediction?**: Yes, our SFGC can also be used for edge classification tasks, e.g., edge prediction, by using an edge prediction loss (BCE loss) as the optimization objective. More specifically, according to the definition of graph condensation mentioned in Lines 45-52 of our main submission, the proposed SFGC method condenses large-scale graph data into small-scale condensed graph-free data, so that the small-scale condensed data can achieve comparable ‘test performance’ to the large-scale graph when training the same GNN model. Thus, the proposed method remains effective for different tasks regarding the ‘test performance’. **Q2: More analysis of the Cora dataset results**: The reason the SFGC condensed graph-free data achieves performance that even exceeds the whole large-scale graph dataset on Cora and Citeseer is mainly attributed to the following aspects: SFGC is a data-centric method that works in a generative way. The parameterized node features are continuously updated and optimized throughout the condensation process of imitating the GNN’s learning behavior, which means we have a very extensive space in which to seek optimal condensed graph-free data, resulting in models that outperform the one trained on the original graph.
Besides, the long-term training trajectory meta-matching technique allows us to receive comprehensive knowledge as informative supervision from extensive GNN expert training processes by learning their parameter distribution, which further contributes to the superior performance of our condensed graph-free data over the original graph. Thanks for your suggestion; we will add such discussions in the final version. **Discussed limitation and potential future direction**: We mentioned the limitation at the end of our main submission (ref. Lines 358-360): our proposed SFGC mainly works on node-level condensation by reducing the number of nodes in a single graph, and it has limited ability on graph-level tasks that require multiple graphs to supervise GNN training. In light of this, as a potential future direction, we would like to further explore the graph-level condensation problem, which should simultaneously reduce “the number of nodes“ and “the number of graphs“ in a large-scale graph collection.
ASIF: Coupled Data Turns Unimodal Models to Multimodal without Training
Accept (poster)
Summary: This work proposes a method to obtain multimodal representations from pretrained unimodal models and a multimodal dataset, and demonstrates the effectiveness of the method on the zero-shot classification task. The method is to map candidate images and texts into the same representation space, which consists of similarities of the candidates to each image/text in an anchor multimodal dataset. Strengths: - The proposed method is very simple and intuitive, representing an image or text as a vector of similarities to an anchor set of images/texts. - No gradient updates are required. One can directly benefit from pretrained unimodal models. No training parameters are involved. - The advantage of quick model adjustments via data handling without model retraining. - The method gives (to a certain extent) explainable multimodal representations. - The paper is clearly written and easy to understand, including a comprehensive discussion section. Weaknesses: - A larger multimodal dataset leads to better zero-shot classification performance, but also leads to a bigger representation vector. E.g., a 1.6M-d vector for an image or text, while manageable for retrieval tasks, might be difficult to maneuver for other downstream tasks (e.g., any generative tasks). I understand that the authors stated that effectiveness or applications to tasks other than zero-shot classification are out of the scope of this work, but I'd like to see more discussion of the limitations or impacts of the size of the representation vectors. - The method is sensitive to the curated multimodal dataset. The authors ablated on the size of the dataset, but not the source of the dataset. E.g., what would be the effect of using a different multimodal dataset than CC (COCO, LAION, etc.)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: As stated in the section above. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I don't see potential negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review; we provide some brief comments on the points raised. “*I'd like to see more discussions on the limitations or impacts of the size of representation vectors.*” Thanks for pointing this out. Indeed, the large dimensionality prevents a straightforward application of ASIF in tasks like text-to-image generation. Other tasks like captioning could be approached with different pipelines (as described in the answer to 2Qsg). Still, we agree with the reviewer that the higher dimensionality trait of ASIF should be cited in the discussion, and we will add it to the camera-ready version. We also envision follow-up works tackling the problem of reducing the dimensionality of the multimodal space built by ASIF. “*What would be the effect of using a different multimodal dataset from CC, like LAION or COCO?*” - For LAION, we expect a similar performance to CC given that they almost follow the same production pipeline (scraping from the internet). LAION would be the natural next step after CC given its larger size. - Our very first working demo was built using a subset of COCO, which indeed worked and led us to this work. The problem with COCO is its small size (328k samples) and the relatively narrow distribution of captions: while they are much more precise than CC, they also cover much fewer concepts. CC captions are sometimes imprecise but include different captioning styles and cover a wider set of words. Still, we agree that assessing the performance with the COCO dataset is valuable, and we will add COCO to the training dataset choices in the demo notebook provided in the supplementary material. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks to the authors for the rebuttal. While I share the same concern with some other reviewers that the method is unlikely to be widely adopted, I still think the method is novel enough to be published and can inspire and stimulate more ideas along this line. 
Therefore I maintain my rating of 7.
Summary: The paper proposes a method to align vision and language without learning a parametric model. Main idea: having a support set of image-text pairs, the structure of the visual data should match the structure of the language data. More precisely, the distances from one query image to all images in the support set (denoted as relative representations) should be similar to the distances from the associated text to all texts in the support set. For visual tasks that can be formalized as text retrieval, this method can be applied without learning. The method has relatively good performance given its simplicity. Strengths: Strong points: S1. The method is simple and can be easily applied to potentially many vision-language retrieval tasks. Being able to use any independently pre-trained vision and language model is a big advantage. Another big advantage is that it does not require any training. S2. The model can be easily adapted by simply changing the support set of image-text pairs. S3. Some interpretability is obtained by inspecting the nearest images and captions used to obtain the relative representations. Weaknesses: Weak Points: W1. As the authors note, the model is slower at inference time, although optimizations might reduce this. W2. It is not clear whether this approach can be used for other downstream tasks, like VQA or captioning. Although this might be outside the scope of the paper, it might be worth discussing. W3. The paper would benefit from more comparisons with recent vision-language models. Relevant experiments would be image-to-text retrieval and text-to-image retrieval on the COCO and Flickr30K datasets, compared to methods like BLIP [A]. W4. Comparing with some baselines would be beneficial. If we have a support set for the classification task with ground-truth image-label pairs, a good baseline is a simple K-nearest-neighbor classifier on the same visual representation used in ASIF.
For example, using the 100 image-text pairs of EuroSAT we can create a KNN relying on the same image encoder as ASIF. How does this KNN compare with ASIF in terms of performance? [A] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." ICML, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Can the relative representations be used for other downstream tasks, different than image classification or retrieval? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations regarding inference speed. There is no specific negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
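The relative-representation mechanism this review summarizes can be sketched with a toy example (random stand-in embeddings, not the authors' implementation; keeping only the top-k similarities is an assumption made here for illustration): a query image is scored against candidate captions by comparing its similarity vector over the anchor images with each caption's similarity vector over the paired anchor captions.

```python
import numpy as np

def relative_rep(z, anchors, k=2):
    # Cosine similarities of one embedding to every anchor embedding,
    # keeping only the k largest entries (assumed sparsification).
    z = z / np.linalg.norm(z)
    A = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    sims = A @ z
    out = np.zeros_like(sims)
    top = np.argsort(sims)[-k:]
    out[top] = sims[top]
    return out

rng = np.random.default_rng(0)
img_anchors = rng.normal(size=(10, 8))  # stand-ins for encoded anchor images
txt_anchors = rng.normal(size=(10, 6))  # paired anchor captions (same order!)

query_img = img_anchors[3] + 0.01 * rng.normal(size=8)  # near anchor image 3
captions = [txt_anchors[7], txt_anchors[3]]             # candidate captions

# Score each candidate caption by the agreement of relative representations
# computed in the two (otherwise incompatible) unimodal embedding spaces.
r_img = relative_rep(query_img, img_anchors)
scores = [r_img @ relative_rep(c, txt_anchors) for c in captions]
print(int(np.argmax(scores)))  # selects the caption paired with anchor 3
```

The anchor pairing is the only bridge between the two spaces, which is also why the KNN baseline suggested in W4 is a natural comparison: it uses the same visual similarities but ignores the text side entirely.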
Rebuttal 1: Rebuttal: Thanks for the review. We appreciate the feedback and would like to offer some brief responses to the points raised. “*Can this approach be used for other downstream tasks, like VQA or captioning?*” We are indeed conducting preliminary experiments to use ASIF for captioning in a subsequent work. Our current approach centers on ASIF, combined with an LLM, to generate plausible captions for new images. This process utilizes captions retrieved by the ASIF representation and then iteratively refines the crafted caption. As illustrated in Figure 6 of the paper, we expect the LLM to generate expressions such as "a photo of a triumphal arch" based on the retrieved captions and hints about their similarities to the caption sought. “*More comparisons with BLIP and KNN on EuroSAT*” - Thanks for pointing this out. We focused on CLIP and LiT since ASIF can be seen as a third step in the research direction traced by LiT. Nevertheless, we encourage a comparison with more recent models like BLIP in future works trying to improve ASIF performance using larger datasets and further optimizations. Our primary objective here was to introduce and justify the ASIF procedure, illustrating its effectiveness on the representative task of zero-shot classification. In making this choice, we followed LiT, which used the very same datasets to showcase the benefits of a locked image encoder, which is their main claim. - Thanks for the valuable suggestion of running the KNN experiment on EuroSAT; we plan to run the test, expanding the discussion in Appendix B. We expect close but slightly lower performance with respect to ASIF, since ASIF can count on more images useful for the classification from its original training set (as seen in Fig. 7 in the Appendix). --- Rebuttal Comment 1.1: Title: Comments after rebuttal Comment: I thank the authors for their rebuttal. Overall I think that this is a good paper, with an interesting, clear idea and good experiments.
I will increase my score to 7.
Summary: This paper proposes ASIF, transferring independently pre-trained image/text encoders to the classification task without further finetuning. The proposed method only needs a small amount of paired image-text data as anchors, and represents new data samples using their relative representation with respect to the anchor samples. The simple method achieves reasonable results on different image classification benchmarks. Strengths: 1. The proposed approach is efficient since it does not require a large amount of image-text data pairs to train extra multimodal representations that align images and text. Also, the required image-text pairs can be as few as 1.6M, which is small compared with the original CLIP and other multimodal classification models. 2. The needed image-text pairs are flexible. By adding more anchor samples for specific image classes, the proposed method can obtain better performance for those specific classes. Weaknesses: 1. Even though ASIF does not need a large amount of paired image-text data, it still requires pre-trained single-modal models which are already trained on large amounts of image or text data samples. From this perspective, I don't see the necessity of such a setting. 2. The prediction results rely on the anchor samples. If the anchor samples are not representative enough, the method will not be that useful. Imagine for a target dataset, if the provided anchor samples do not come from the same dataset or do not include samples similar to the target samples, the method may fail. If the authors can provide more experimental results to justify the influence of differently chosen anchor samples, it will be more promising. 3. The paper should be re-organized and the writing should be improved a lot. For example, the "ASIF recipe" could be written in an algorithm format instead of the current style. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In Figure 4(b), why is the image representation for x* called "absolute representations"?
This term is never mentioned in the text. 2. In Figure 4(a), the first sample x1 goes through the image encoder to get the embedding for x1. However, for x2, why does this process involve y2 (the black letter y2 under the box for x2)? In general, all the figures in this work should be redesigned, since the current presentation is very confusing and difficult to follow. Also, the resolution of the figures should be improved. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review; we hope to have addressed all the points: On the Weaknesses: 1. “*ASIF still requires pre-trained single-modal models which are already trained on large amounts of image or text data samples.*” The crucial difference is that unimodal data may come untied and even without any labels, while captioned images are scarcer and more challenging to gather. Furthermore, the ASIF setting may be desirable for two key properties: the ability to easily edit the model and its interpretability. 2. “*If the anchor samples are not representative enough, the method will not be that useful.*” We perceive this aspect as a strength rather than a flaw. ASIF deliberately separates perception from interpretation, and only the anchors are responsible for any classification outcome. This way, we can control the visual understanding of the model by curating the limited set of anchors. Effective curation is possible given the swift iterations and the interpretability of ASIF models. And we don't need to imagine; we already conducted the experiment suggested by the reviewer! In the EuroSAT experiment, we discuss the impact of adding anchors closer to the distribution of interest (penultimate paragraph of Section 3 and Appendix B). 3. “*The writing should be improved a lot*” We acknowledge this opinion, but the positive feedback we received on the writing, and specifically on the “recipe” format, still largely outweighs the negative feedback. Nevertheless, we want to improve the readability of the figures for the camera-ready (especially Figure 4). --- On the Questions: 1. “*"absolute representations" is used in Figure 4b but never mentioned in the text.*” Thanks for bringing this up. We like “absolute representations” because it contrasts with “relative representations,” but indeed, it should be introduced in the manuscript. We will add it in the camera-ready version. 2. “*Figure 4a not clear*” We see that this subfigure is prone to misunderstanding.
We are considering moving all the captions in line with y1 below. In general, we plan to polish all the figures in the camera-ready version; thanks for this feedback. --- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for clarifying my concerns. Based on the EuroSAT results, ASIF can obtain better performance by adding in-domain examples for training. Also, from Figure 5, the ImageNet results increase when adding more image-text pairs. The proposed method looks interesting, as it does not require a large amount of image-text pairs and does not need retraining when the training set is updated. The results shown in Figure 5 are promising as they keep increasing, but based on the current experimental results for ASIF, it is hard to justify that ASIF "achieves competitive performance with CLIP and LiT". It will be very interesting to figure out when ASIF could match the performance of CLIP and LiT.
Summary: This paper presents a novel approach to aligning text-image modalities without any training. The method is based on the assumption that images and their captions have similar relative embeddings, even when trained independently. By leveraging paired multimodal data, relative representations can be computed within each modality, and they can serve as a medium for cross-modality communication. The major strength of this approach is its novelty, as it removes the need for training to align different modalities of data. Strengths: The idea is novel and has the potential to introduce a new paradigm in multimodal alignment research. Adapting to a new distribution of data is easy as it only requires a new set of coupled data without training. The concept of separating perception and interpretation can open up new research opportunities and can be applied to various multimodal problems. Weaknesses: - Although the approach is interesting, a more comprehensive analysis is needed to convince the readers, including a global analysis of similarities between features from text and image encoders, potentially compared with CLIP and LiT. - Figure 2 requires additional explanation or a clearer illustration to motivate and convince the reader. - Figure 4 could be improved for better understanding. The diagram is not intuitive at first, without fully reading and understanding the method section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How can you provide more convincing evidence that the features are communicable across different modalities? - How sensitive is the selection of text encoders? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The method has not been verified across a wide range of multimodal downstream tasks, but it is clearly stated in the limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review, we provide some brief comments on the points raised. “*More comprehensive analysis of similarities between features from text and image encoders to convince the readers*” Thanks for the suggestion. In the appendix we conduct an analysis of the two feature spaces that looks at the relative distances (Fig. 7 and 10), and we plan to give it more space when presenting the work, in a future poster or blog post. Maybe the reviewer is imagining a different analysis and comparison with CLIP and LiT; we would really appreciate an expansion on this point in the discussion period, since to us it is not obvious how to do this given e.g. the different dimensionality of the feature spaces. “*How can you provide more convincing evidence that the features are communicable across different modalities? How sensitive is the selection of text encoders?*” In this work we demonstrated this effective communication through zero-shot and few-shot visual benchmarks, as well as by discussing ASIF performance in the audio domain. Still, we acknowledge that further evidence would be beneficial to the adoption of the method. In this work, we focused on introducing the new ASIF methodology, but we expect to have further evidence as well as a deeper analysis of the components –including the text encoder– in future work. We also point the reviewer to [1] for a further discussion of the communication allowed by relative representations. [1] Moschella, Luca, et al. "Relative representations enable zero-shot latent space communication." Published at ICLR 2023. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: Thank you for the authors' response. Below are the clarifications.
*More comprehensive analysis of similarities between features from text and image encoders to convince the readers* If I understand correctly, Figures 3 and 10 show that when we pick similar images to the top image (using an image encoder), their corresponding captions tend to be more similar to the caption of the top image (using a text encoder). What I was wondering is how similar the new representations from ASIF are compared to those from CLIP and LiT. I understand that ASIF is an unsupervised method so it might not perform better than CLIP and LiT, but I assume that there can be some better cases too. *How can you provide more convincing evidence that the features are communicable across different modalities? How sensitive is the selection of text encoders?* It may be a little bit early to conclude that the current method works well across various text encoders (e.g., naive BERT, naive RoBERTa, and SimCSE). Have the authors considered adopting other text encoders? Given that the primary modalities this paper focuses on are text and vision, discussing this could be an important aspect of the paper. Another concern I have is with the current title of the paper "Coupled Data Turns Unimodal Models to Multimodal without Training." The counterpart models like CLIP and LiT have titles: "Learning transferable visual models from natural language supervision" and "Lit: Zero-shot transfer with locked-image text tuning." Both of these titles clearly indicate the modalities they address. Although the authors include the audio result in the appendix, the primary modalities discussed throughout the paper are text and vision. It seems to me that the audio result is a separate work, and a more detailed discussion should be open to the reviewers to fully convince us that ASIF is worth having the current title. I would like to hear the authors' opinion on this as well. --- Reply to Comment 1.1.1: Comment: Thanks for the clarifications, below are our responses.
*More comprehensive analysis of similarities between features from text and image encoders to convince the readers.* Thanks for elaborating on this point. If we have understood correctly, this suggestion could manifest as a qualitative comparison between the inter-similarities of a set of images and texts using ASIF, CLIP and LiT. This would give some indications on the behavior of these models, e.g. we could observe which one between ASIF or LiT is more similar to CLIP. This comparison is very straightforward, and we plan to include it in the appendix of the camera-ready version. *Adopting other text encoders and title of the paper.* We privileged variability in vision encoders given the broader diversity of training methods on the vision side, and the well-established position of SentenceT as the standard sentence encoder in the literature. We believe that the current title well communicates the main message of our work using a synecdoche commonly used in this research area to represent the broader scope with specific parts. Currently, images and text represent the main domains where multimodal models are tested given the wider availability of relevant models and paired data. Our title would not be an exception in this field, e.g. “Multimodal Neurons in Artificial Neural Networks” by Goh et al. (2021) covers only image and text domains.
Rebuttal 1: Rebuttal: We thank the reviewers for showing keen interest in our ideas, and for their thorough and quite valuable comments. This general response summarizes the main points raised and how we have addressed them. Specific answers are then provided to each reviewer in response to their remarks. 1. **Empirical Results and Benchmarks:** The concern regarding the limited empirical results was addressed by clarifying that our evaluation followed established protocols (same as LiT) and included additional results on EuroSAT. We also expanded on ASIF's performance on audio data. 2. **Comparison with Supervised Encoders and ImageNet Accuracy:** We explained that our focus on open vocabulary classification makes direct comparisons with models like DEIT unfair. We emphasized the specialization and generalization trade-off in image classification, supported by recent literature. 3. **Training Data of Encoders:** We clarified the concerns about the large training set of unimodal encoders, remarking that ASIF's distinctive capability is to rely on a modest amount of captioned images, which are more scarce and challenging to collect than unpaired and unlabeled data. 4. **Interpretability and Inference Costs:** We provided insights into our approach to interpretability and responded to concerns about the trade-offs between training and inference costs. A commitment was made to discuss these trade-offs further in the paper. 5. **Anchor Samples and Model Control:** The importance of representative anchor samples was discussed, highlighting ASIF's deliberate separation of perception from interpretation, allowing for control and interpretability. 6. **Writing and Figure Clarity:** Acknowledging some concerns about the writing style and figure clarity, we committed to improving these aspects in the camera-ready version of the paper. 7. 
**Potential Extensions and Sensitivities:** Responses were provided on the potential use of ASIF for other downstream tasks, sensitivity to text encoders, and comparisons with other models. We highlighted ongoing experiments and plans for future work. 8. **Representation Vectors and Dataset Choices:** We agreed to discuss the limitations or impacts of the size of representation vectors and the effects of using different multimodal datasets. We also outlined our experience with different datasets and plans to expand on this in supplementary material. In summary, we believe that we have addressed the main concerns and provided clarification on several key aspects of our methodology. We also appreciated the valuable suggestions for future research and will consider them in ongoing and subsequent work. Thanks once again for the thoughtful engagement with our paper.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes to leverage single-modal pre-trained text & image encoders and a relatively small image-text dataset to create a CLIP-like open-vocabulary visual recognition model without training. The authors claimed that the proposed model ASIF is more training-efficient, more interpretable, and can more easily handle data copyright issues than CLIP and alternatives such as LiT. More specifically, relative representations w.r.t. the available image-text pairs are computed for image and text respectively. To remove noise, only top-k values are kept. The resulting sparse embeddings can be used for image classification tasks in a way similar to CLIP. The above process requires no training, and the anchoring image-text pairs can be removed without re-training. Moreover, by inspecting the top-K anchoring image-text pairs, humans can understand how ASIF makes its predictions. Strengths: S1. Using relative representation to create a CLIP-like open vocabulary visual recognition model is new. Weaknesses: W1. Empirical results are weak. W2. The major claims (efficient training, data copyright, interpretability) have some flaws. W3. Scalability / generalization (to more tasks) remains unclear. The only empirical results this paper presents are image classification on 4 benchmarks, and the accuracies are far below widely used visual recognition models. I understand that, given that no training is required, this can still be considered a strong result. However, this also means that this method is unlikely to be widely adopted. Moreover, for ASIF with a supervised visual encoder (DEIT base), the ImageNet accuracy is also considerably lower than the original DEIT results. This raises a concern that the assumption of an available supervised encoder might not be realistic.
Given the 1.6M anchor data, I also suggest the authors run a baseline of taking existing single-modal encoders (unsupervised) and finetuning on these image-text pairs with a contrastive loss, so that readers can better understand the trade-offs of ASIF and training. I also have concerns about the claims. The uni-modal encoder also requires training, and it seems that the text encoder used was trained on 1B internet text, which is much larger than CLIP's 400M. Moreover, if the pre-trained encoders have data copyright issues, there's no way to remove them under the current ASIF framework without re-training. For the interpretability claim, the same procedure could be done for CLIP-like models given an anchoring dataset. (Retrieve nearest neighbors using CLIP's text/image embeddings in the image-text pairs.) Therefore, the interpretability is not ASIF's exclusive advantage. I acknowledge that the scalability limitation has been properly addressed by the authors. Although some possible techniques to improve it are discussed (L154-L169), I personally don't think this can replace a well-tuned CLIP model. For many real-world applications, inference cost may be more crucial than training cost. On the other hand, the current ASIF is only tested on image classification. I would suggest the authors explore different tasks for a future version. Instead of arguing the potential by pure hypothesis, it would probably be more convincing to demonstrate the potential on a wide range of tasks / modalities, given that the current image classification accuracies are not promising. Additional Suggestions: - To make ASIF look more promising, I'd suggest the authors try to utilize relative representations to improve CLIP. In this way, users can choose between finetuning / ASIF depending on the training / inference cost trade-off while getting strong accuracy. - From the accuracy difference of sup vs unsup encoders, it seems the data quality of pretrained encoders also plays a major role.
Studying training data quality vs anchoring data quality might also be an interesting direction. --------------------------------- Update after rebuttal ================ I appreciate the detailed answers from the authors. Most of my questions are answered and I have a better understanding of this submission; hence the increased confidence from 4 to 5. W2 was properly addressed and the authors promised to clarify the claims in revision. Unfortunately, W1 and W3 remain concerns (no additional results are provided) that outweigh the strengths. I've increased overall rating from 3 to 4 but still leaning negative. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions and suggestions are included in the weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Limitations are properly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
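For context on the procedure the review above summarizes (compute relative representations w.r.t. anchor image-text pairs, keep only the top-k similarities, then classify by matching image and caption relative representations), here is a minimal numpy sketch. The random-vector "encoders", dimensions, anchor count, and k are placeholders for illustration only, not ASIF's actual components or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen unimodal encoders: in the actual method
# these would be pre-trained image/text networks; here, random embeddings.
d_img, d_txt, n_anchors, k = 64, 32, 1000, 20
anchor_imgs = rng.normal(size=(n_anchors, d_img))  # embeddings of anchor images
anchor_txts = rng.normal(size=(n_anchors, d_txt))  # embeddings of their captions

def relative_rep(z, anchors, k):
    """Cosine similarities to the anchors, sparsified to the top-k entries."""
    z = z / np.linalg.norm(z)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    sims = a @ z
    out = np.zeros_like(sims)
    top = np.argsort(sims)[-k:]
    out[top] = sims[top]            # keep only the k largest similarities
    return out / (np.linalg.norm(out) + 1e-12)

# Zero-shot classification: assign the class whose caption's relative
# representation is most similar to the query image's relative representation.
query_img = rng.normal(size=d_img)
class_captions = rng.normal(size=(5, d_txt))  # embeddings of 5 candidate captions

q = relative_rep(query_img, anchor_imgs, k)
scores = [q @ relative_rep(c, anchor_txts, k) for c in class_captions]
pred = int(np.argmax(scores))
```

Note that nothing here is trained: adding or removing an anchor pair only means adding or deleting a row of `anchor_imgs`/`anchor_txts`, which is the editability property discussed in the rebuttals.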
Rebuttal 1: Rebuttal: We invite further dialogue with the reviewer in the next phase to foster an updated assessment of our research that builds upon the discourse. We also thank the reviewer for the interesting points regarding the interpretability and scalability/inference tradeoffs. "*The only empirical results this paper presents are image classifications in 4 benchmarks” and “lack of evidence” on other tasks* - The primary evaluation follows the protocol of LiT (table 2 of LiT paper) and employs exactly the same four classification benchmarks used in ASIF. - In addition, we also present performance results on EuroSAT (as indicated in the penultimate paragraph of the Empirical Evidence section and detailed in appendix B) and we discuss ASIF's performance on audio data from published follow-up work (appendix C). *"ASIF with supervised visual encoder (DEIT base), the ImageNet accuracy is also considerably lower than the original DEIT results. This raises a concern that the assumption of a available supervised encoder might not be realistic"* A comparison against supervised learning on ImageNet is unfair as we focus on open vocabulary classification (LiT's ImageNet accuracy would also be considerably lower). Similarly, it would be unfair to report performance of DEIT on any other data set without finetuning. The trade-off between specialization and generalization in image classification is well-known and has been widely discussed in recent literature, including multimodal open-ended models [1, 2, 3]. [1] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." CVPR 2022. [2] Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." ICML 2022. [3] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021.
"*uni-modal encoders also require training, and it seems that the text encoder used was trained on 1B internet text, which is much larger than CLIP's 400M*" It is essential to clarify that we are not comparing like-for-like here. It is important to distinguish free-form text, which lacks labels, from a collection of captioned images, which are comparatively more scarce and challenging to gather. What is intriguing is the modest quantity of captioned images that ASIF relies upon (1.6M) after the encoders are pre-trained from purely unpaired and potentially unlabeled data. "*Interpretability*" Our approach is different because the prediction for us is mechanistically caused by the nearest neighbour: if you were to change the anchor set, then the prediction would change and not just the explanation. There is a vast literature on explainability, arguing that explanations need to faithfully summarise the internal model computations, see [1]. In our case, the interpretability pertains to the assignment of captions to visual features. [1] B-cos Networks: Alignment is All We Need for Interpretability, Bohle et al., CVPR 2022 “*Trade-offs between training and inference costs*” The reviewer is correct that inference cost is usually a blocker for industrial applications of foundation models, as large-scale deployment can eclipse even the high training costs. Nevertheless, we contend that the training cost remains a relevant consideration, and the additional attributes of ASIF can potentially position it as the preferred model. The idea the reviewer suggests of merging ASIF with CLIP is very interesting. While it is beyond the scope of this submission, being able to navigate the training/inference cost trade-off is very valuable depending on the application. We will be happy to add a discussion in the paper about the inference cost vs training trade-offs and how ASIF may be an extreme point with valuable in-betweens.
As it is not obvious how to do this, we would leave this open to future research and acknowledge the reviewer's contribution to the discussion. “*Studying training data quality vs anchoring data quality might also be an interesting direction.*” We agree with the reviewer and have already begun investigating a closely related avenue for our future research. Specifically, we are delving into the realm of degrading the quality of encoders within an ASIF model while maintaining acceptable performance levels. One possibility is to modulate the quality of the encoders by acting on the training dataset. We thank the reviewer for this valuable suggestion. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: I appreciate the authors' response. However, many of my concerns are not resolved, so I don't plan to change my final rating at this point. Regarding W1, I still think the empirical results are way too weak. One of my original points was "the accuracies are far below widely used visual recognition models". **Given the current experimental results, I don't believe building upon ASIF can lead to a satisfactory image classification model if we care about the accuracy performance.** Regarding the authors' rebuttal, I don't think it's changing my opinion about this submission but I'll still respond here. I think benchmarks in LiT's Table 1 can also be considered. Table 2 seems merely a small scale ablation for LiT. I admit I missed the EuroSAT results in the supplementary, but my concerns are still unresolved. Did ASIF achieve comparable accuracy with other open-vocab models such as CLIP/LiT on EuroSAT? The table is mainly ablating ASIF design choices only. Lastly, I'm not sure if we can include Appendix C in the discussion here, since it's not this submission's contribution. Adoption of ASIF by others might be a positive signal, but I'm not sure how to evaluate this under the current reviewing process.
Perhaps I wasn't clear enough in my original review on the DEIT-related concern. I'm not asking to surpass a specific model's accuracy. I'm questioning the assumption of an existing supervised visual encoder. ASIF significantly hurts the supervised encoder's performance on the same image domain (ImageNet). Empirically, this may indicate that ASIF is not a good method to convert DEIT to open-vocab within a similar image domain. If a supervised pre-trained model is assumed, I would suggest to demonstrate ASIF's advantage in a domain transfer setting. With W2, I would like to emphasize my concerns on the "data copyright issues". This paper claims that "an asset owner can remove their data from the model" (L38) and "... if we need to remove samples ... lost the license to use them ..." (L286-288). If the data to remove is used when training the encoders, I don't think ASIF can handle it very well. **This false claim remains a significant issue for this submission.** To solve this, the encoders need to be retrained, and the 1B text data would become a problem. The cost could be as significant as re-training a CLIP model (400M image-text). The author's rebuttal focused on the data collection aspect, but I'm more concerned on the training cost considering the data removal claim. The interpretability response is confusing. Why do we need to change the anchor set? For the CLIP model, if a set of image-text pairs from its pre-training data is sampled, isn't it a good interpretable anchor set as in ASIF? I still don't understand how ASIF's interpretability is special. For W3 the scalability concern, I'm suggesting making this paper a study of training / inference trade-off. Perhaps ASIF can be done together with the CLIP framework: Pre-train on smaller dataset to save training cost, and add ASIF later. On the other hand, use smaller ASIF anchor set but pre-train on more CLIP data to save inference cost while maintaining reasonable accuracy.
The above is just one possibility. To conclude, **I strongly suggest to shift the direction to make ASIF compatible with existing CLIP-like model for a stronger accuracy performance and better scalability**, and show that ASIF can enable useful trade-offs on certain condition / assumption. To be honest, the current scope of this submission is not interesting, at least to me. Overall, my main complaint about this submission is that, being an empirical paper, the experimental results are not convincing to change the community's wide adoption of CLIP. Unless ASIF can work well together with other existing methods and achieve reasonably good accuracy (in 2023 standard), I am unlikely to change my rating. I agree with the other reviewer's positive view about this paper -- relative representation to make open-vocab model is interesting. However, this idea by itself, without a good execution to produce a model that actually works well, does not have enough contribution to the community in my opinion. --- Reply to Comment 1.1.1: Comment: While we respect the Reviewer's opinion, and acknowledge the significant effort in pointing out what they perceive as major weaknesses in our work, we strongly disagree. In the interest of a positive and balanced discussion, we would like to stress the following points: ### Goal of the paper One can turn a pre-trained unimodal model into an open vocabulary one without any further training. This finding is surprising and useful. At the same time, we never advocated ASIF as a one-stop replacement for CLIP or LiT. See ad verbatim block (L359) > *The simple ASIF procedure presented here offers a strong baseline for multimodal models, but its performance still falls apart to CLIP and LiT* ### Data copyright issue (editability property of ASIF). 
We believe in good faith that this is just a tragic misunderstanding that is negatively influencing the judgment of this reviewer: the reviewer states that >*This paper claims that "an asset owner can remove their data from the model"*. Here is the complete sentence from the introduction, covering the general problem: > *Still, training neural networks at such scale presents several challenges beside the obvious infrastructure and training costs. Notably, it requires collecting massive training sets, making it difficult to interpret the predictions of the model in light of their training data. Additionally, the training assets are often not owned by the institution training the model [6]. This introduces several additional challenges, from reproducibility to the difficulty of ensuring that **an asset owner can remove their data from the model [7--11]***. The editability property is specific to the multimodal data set. ASIF models can be adjusted in seconds, by easily adding new multimodal samples or removing them (e.g. for copyright issues) by simply adding or deleting their embeddings. It is clear that this capability does not hold for the pretrained encoders. If needed, we will make it clearer in the camera ready. Finally, we believe that large scale pre-training on established data sets can be both effective (see imagenet pre-training) and safer from copyright issues compared to downstream data that did not go through the same level of scrutiny. See ad-verbatim quotes (L136) > *In our procedure, the encoders can be pre-trained with established data sets that do not change over time, and removing the effect of a multi-modal example is as simple as deleting its embeddings.* ### Interpretability and relation with CLIP > *I'm suggesting making this paper a study of training / inference trade-off.
Perhaps ASIF can be done together with the CLIP framework: Pre-train on smaller dataset to save training cost, and add ASIF later.* While we found it interesting, note that this is not the goal of the paper. As written above, we are showing how to turn frozen unimodal models into open vocabulary. This comes with additional nice properties. Whether they can be applied to CLIP or not is beside the point. As we already explained in the rebuttal, ASIF cannot be used to explain CLIP predictions, as CLIP does not necessarily rely on those samples in a mechanistic way to make predictions. Clearly, the explainability procedure only relates to the multi-modal data, not the pre-training. > *Overall, my main complaint about this submission is that, being an empirical paper, the experimental results are not convincing to change the community's wide adoption of CLIP.* We never claimed this to be a purely empirical evaluation of CLIP-like models. We proposed a new method with the goal repeatedly stated in this response. ### Supervised encoders ASIF does not assume a supervised visual encoder; it can also be unsupervised, as demonstrated. We are interested in showing that ASIF works with potentially any uni-modal encoder, trained through either a supervised or an unsupervised task. Note that LiT also hurts the supervised encoder's performance on ImageNet, just as happens for us with DEIT. At the same time, one gains the open vocabulary capabilities.
null
null
null
null
null
null
Batch Bayesian Optimization For Replicable Experimental Design
Accept (poster)
Summary: The paper introduces a batch Bayesian optimization method for the setting where experiments can be very noisy, and therefore it is common practice to repeat many experiments. The framework allows for both the selection of the experiment design and how often each experiment is replicated. The authors introduce three methods: BTS-RED-Known (for the setting where the noise is known), BTS-RED-Unknown (for the setting where the noise is unknown), and Mean-Var-BTS-RED, for the setting where we want to trade off finding the optimum against obtaining less noisy observations (called risk-averse optimization). In every case the experiment designs are chosen using a variation of Thompson Sampling (with scaled variance). The number of replications is then chosen based on the noise level at the experiment design; an upper bound is used when the noise is unknown. For risk-averse optimization, a linearized sum of the objective and the noise level is used. For every algorithm theoretical bounds are found on the regret, which are proven to be sub-linear. The number of repeated experiments depends on the hyper-parameter $R^2$, but a theoretical justification for its value is given. Empirically, the method is shown to outperform other Bayesian Optimization algorithms in synthetic examples and two real-world examples: one in Precision Agriculture, and one in AutoML. Strengths: Originality: To the best of my knowledge, the proposed algorithm is novel. While it combines simple ideas, it does so carefully and effectively. In particular, the theoretical analysis of the algorithms is very strong, providing a choice for the hyper-parameter $R^2$, and proving regret bounds for the algorithm. Quality: The algorithm is built up from proved and tested methods in the literature, and a strong theoretical justification is given for using them. Empirical evidence of the algorithm performance is given, but in a limited capacity.
The creation of a dataset (for the agriculture example) based on real-life experiments makes the experiment particularly strong. Clarity: The method is well explained, and the theoretical implications are clear. There is a good choice of figures, even if they are difficult to read. Significance: The paper is well motivated, experiment replication due to noise is common in many areas of science and it is not usually taken into account by classical Bayesian optimization algorithms. Making BO more practical to the wider scientific community is very important, and papers like this one take a big step towards it. Weaknesses: While the motivation and justification for the algorithms is clear, I find the empirical evidence to be lacking. My main concerns are: - The main framework seems to fit a _homoskedastic_ GP to the mean of the replicated data (which will, by design, have very small noise so it should not be a problem). Then, if needed, a second homoskedastic GP is used to model the variance. However, I would argue the most naive and natural solution is to simply use a heteroskedastic GP as the surrogate model in the first place. This is not a problem in itself, however, I believe for a fair comparison against Thompson Sampling a homoskedastic GP should be used. Otherwise the model misspecification could be problematic and lead to poor performance (maybe Figure 1d could be explained by this?). It is unclear from the paper if this is the case. In simpler terms, *it is unclear whether the empirical advantage of BTS-RED comes from a better algorithm, or a better model*. If a heteroskedastic GP is used, then it should be clarified; otherwise, the paper would benefit greatly from including it. - The extent of the experiments seems limited. Only a single 1d example is shown, and then both real world examples are 2d. It would be good to include more examples.
Similarly, the budget is fixed to $\mathbb{B} = 50$ for all experiments; it would be good to see how the algorithm performs for smaller (and perhaps larger) budgets. Other minor issues: - The novelty of Mean-Var-BTS-RED seems overstated: is it simply solving a multi-objective problem through linearization? - The heuristic for handling unused budget seems a little problematic. Given you get some observations (albeit noisy), if they clearly pointed to a sub-optimal design it seems wasteful to still evaluate the objective further at the specific location. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Does *asymptotically no regret* simply mean that the method converges? Or that convergence is sub-linear? - Am I correct in assuming that the GPs that you are fitting when using BTS-RED are homoskedastic with a small noise level? - Is there an intuitive meaning to the variable $R^2$? Or is this a hyper-parameter to choose? I think it would be good to talk about it earlier, as it is often mentioned, but not discussed until page 5. - When doing Thompson Sampling for BTS-RED, the variance of the GP is scaled (e.g. line 4 in algorithm 1). Is this done for theoretical guarantees or for some other reason? - I do not fully understand section 3.2.2 (Upper Bound on Noise Variance Function). If the posterior of $g$ is a GP, is the result that $-\mu_{t-1}'(x) + \beta_t' \sigma_{t-1}'(x)$ is an upper bound not obvious? If we talk about the general case where $\epsilon'$ is not necessarily Gaussian, then the posterior is not necessarily a GP, which could mean my confusion is just due to a typo in line 214. Minor formatting issues: - I found the figures and their font sizes to be too small and difficult to read. - In Theorem 3.1 $\tau_{t-1}$ and $\beta_t$ are defined and then not used in the theorem. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Limitations are mentioned in the Conclusion section. Other limitations I mentioned (e.g. lacking empirical evaluation) seem like they could be easily addressed by the authors in a single iteration of the paper. There is no obvious negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your constructive feedback. --- > - The main framework seems to fit a homoskedastic GP to the mean of the replicated data... (and) a second homoskedastic GP is used to model the variance. However, I would argue the most naive and natural solution is to simply use a heteroskedastic GP as the surrogate model in the first place. This is not a problem in itself, however, I believe for a fair comparison against Thompson Sampling a homoskedastic GP should be used... If a heteroskedastic GP is used, then it should be clarified; otherwise the paper would benefit greatly from including it. You are correct that our algorithms fit two homoskedastic GPs to model the mean and the variance, respectively. If we understand correctly, you are asking whether we have used a homoskedastic or heteroskedastic GP for the baseline of batch Thompson sampling (TS) in our experiments: We have used a homoskedastic GP, which is consistent with the homoskedastic GP used in our algorithms and with the previous works on batch TS. As you suggested, we've added an experiment in which we replace the homoskedastic GP in batch TS with a heteroskedastic GP. The results (Fig. 4 in our global response above) show that the use of a heteroskedastic GP leads to comparable (slightly better) performance for batch TS, but it is still consistently outperformed by our algorithms. Therefore, we think this can serve as additional evidence that our empirical advantage comes from better algorithms. In addition, to the best of our knowledge, using a single heteroskedastic GP as the surrogate model makes it difficult to perform theoretical analyses. In contrast, the use of two homoskedastic GPs in our methods has allowed us to derive strong theoretical guarantees. --- > - The extent of the experiments seems limited. Only a single 1d example is shown, and then both real world examples are 2d. It would be good to include more examples.
Similarly, the budget is fixed to $\mathbb{B}=50$ for all experiments, it would be good to see how the algorithm performs for smaller (and perhaps larger) budgets. As you suggested, we've added more experiments (see our global response above). Firstly, we've added two experiments with **higher-dimensional continuous input spaces** (12d and 14d, see Fig. 1 in our global response). Our methods still consistently achieve compelling performances (especially with $\kappa=0.3$, which is consistent with our original experiments, see lines 258-261). We've also added two experiments with different values of the budget $\mathbb{B}$: $\mathbb{B}=100$ and $\mathbb{B}=30$ (see Fig. 2 in our global response), in which the performance advantages of our methods (especially $\kappa=0.3$) are consistent with those shown in our original paper (with $\mathbb{B}=50$). --- > - The novelty of Mean-Var-BTS-RED seems overstated, is it simply solving a multi-objective problem through linearization? We have indeed designed our Mean-Var-BTS-RED algorithm based on the scalarization/linearization technique, which is a common technique for multi-objective optimization. However, our theoretical analyses for Mean-Var-BTS-RED are novel to the best of our knowledge, and they posed additional technical challenges compared with the analysis of our BTS-RED-Unknown. --- > - The heuristic for handling unused budget seems a little problematic. Given you get some observations (albeit noisy), if they clearly pointed to a sub-optimal design it seems wasteful to still evaluate the objective further at the specific location. What you have suggested here is indeed an interesting potential extension. However, as you have also alluded to, the observations are likely very noisy due to the incomplete evaluations, and hence using these potentially noisy observations to decide whether to continue the unfinished experiment may lead to unreliable decisions.
Therefore, this extension is likely to require new algorithmic designs, which we will explore in future works. Thank you for the suggestion. --- > - Does asymptotically no regret simply mean that the method converges? Or that convergence is sub-linear? To clarify, our methods being asymptotically no-regret means that our cumulative regret is sub-linear. This also implies that our simple regret asymptotically goes to 0; intuitively, this means that our methods are guaranteed to be able to query the global optimum asymptotically, in other words, our method is guaranteed to converge. Please see lines 90-104 for more details. --- > - Is there an intuitive meaning to the variable $R^2$? Or is this a hyper-parameter to choose? I think it would be good to talk about it earlier, as it is often mentioned, but not discussed until page 5. The parameter $R^2$ can be seen as the *effective noise variance* for every observation (lines 122-124). We have used our theoretical regret bound to derive a guideline on how to choose $R^2$ (lines 171-181), which we have indeed followed in our experiments (lines 249-256). To clarify, we had in fact already discussed $R^2$ on page 3 (lines 118-124), and we'll revise this part to make the discussion clearer. --- > - When doing Thompson Sampling for BTS-RED, the variance of the GP is scaled. Is this done for theoretical guarantees? Yes, this is done for the theoretical guarantees following the previous works [6]. --- > - I do not fully understand section 3.2.2 (Upper Bound on Noise Variance Function). If the posterior of $g$ is a GP, is the result that $-\mu'\_{t-1}(x)+\beta'\_t\sigma'_{t-1}(x)$ is an upper bound not obvious? This upper bound naturally arises only given the assumption that the noise $\epsilon'$ is sub-Gaussian. We think that it may not be obvious to see that this assumption is reasonable, and hence we have justified why this assumption is reasonable in lines 201-210.
--- Thank you again for your insightful comments. We hope our additional clarifications and results could improve your opinion of our paper. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the very detailed response and for the additional experiments. I think most of my concerns have been well addressed. It is encouraging to see that the method holds up against heteroskedastic TS; I would still prefer if this was compared against in all benchmarks as I think it is the more natural baseline, but I guess it is not strictly necessary. I seem unable to edit my review at the moment, but I will upgrade my score when I get the chance. --- Reply to Comment 1.1.1: Title: Thank You for Your Comments Comment: We're happy to hear that most of your concerns have been well addressed, and we highly appreciate your improved evaluation of our paper. We'll also revise the paper following your comments, which we believe will significantly improve our work.
Summary: This paper introduces an algorithm for selecting both sampling locations and the number of replications in the context of heteroskedastic Bayesian Optimization. The authors propose to use Thompson sampling for batch candidate selection and a scheme for determining the number of replications for each element of the batch based on an effective noise variance. The authors propose three different variants of their algorithm, covering both the noiseless and noisy cases, as well as the mean-variance optimization case. They prove asymptotic regret bounds for each case (based on which they suggest the choice of effective noise variance). Finally, the paper empirically compares the proposed algorithm against baselines on synthetic and real-world problems. Strengths: - This is a well-motivated problem and of interest to the Bayesian Optimization research community as well as to practitioners. - While the basic ideas aren't novel, the specific approach taken is. It is also quite simple and easy to implement. - The theoretical results appear sound and are useful, especially the guidelines for how to choose the effective noise variance. The technical contributions are solid and non-trivial. - The paper is generally well written and the contributions are stated clearly. Weaknesses: - Theoretical results: - While the results are interesting, it's not really clear to me to what extent the asymptotic rates from this approach differ from those that would be achieved by uniform sample allocation (see my questions below). This seems like a missing aspect to me that the authors should address. - Empirical results: - It feels like the authors could have attempted harder to compare against other baselines. - The comparison is only against sequential BO algorithms, which have a clear disadvantage in that they cannot, well, use batch evaluations (this includes RAHBO). 
While the authors acknowledge that, the obvious thing to do here would be to compare against parallel (batch) BO algorithms (using uniform budget allocation) such as qEI, qUCB, etc. that are readily available in the literature (and are implemented in many BO libraries). This would help better understand the performance of the proposed method in absolute terms (rather than solely in terms of improving over batch TS due to non-uniform allocation). - Similarly, even though [34] may be heuristic, it would still be useful to compare against empirically (if that is reasonably straightforward to do). - The improvements relative to a basic baseline such as simple batch TS do not strike me as particularly impressive. - This may be due to the simple problems that were used. It would be helpful to see how the algorithm performs on higher dimensional problems (max dimension is 2) and what effect the dimension has on the significance of the ability to be able to allocate budget in a nonuniform fashion. - Maybe these results are more impressive than I think, but if that's the case maybe the authors can try to explain this better? - Figures: The figures in the main text are tiny and extremely hard to read. I suggest moving some of the technical details from section 3 to the appendix and generally tightening up the writing and making some readable figures instead. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Asymptotic no-regret: - Wouldn't a uniform sampling strategy achieve the same? Are there any improved regret bound rates that we can get from the proposed approach vis-a-vis a uniform allocation? - *Note*: I am not questioning the practical value of doing non-uniform allocation, but I wonder if the bound is more interesting - especially even with a uniform allocation I would assume that in a finite space you should asymptotically achieve similar rates (log(B)/sqrt(B)) in terms of the budget as if you used the proposed strategy.
The non-asymptotic regime seems more interesting. Note that this is still with heteroskedastic noise, so this is somewhat different from the comments on the homoskedastic noise case the authors make in l164. - What if the noise is heteroskedastic in the outcome value (rather than necessarily the input)? Can this be exploited in a different way? - What would a more principled approach for the n_max heuristic for avoiding samples on undesirable regions look like? Could this be handled in a more principled way as part of the point generation? - For the unknown noise case, the GP modeling the empirical noise variance may produce negative predictions. How are you dealing with this? Have you considered modeling the log-variance with a GP instead (a common approach in other similar work)? - Finiteness of the domain: The discretization approach ("we assume that the domain X is finite, since extension to compact domains can be easily achieved via suitable discretizations [4].") is fine in theory (for regret bounds etc) but can become problematic in practice, especially in higher dimensions where the number of discrete points to sample needs to grow exponentially with the dimension to get similar coverage of the domain, which renders acquisition function optimization challenging. It seems reasonably straightforward to optimize the acquisition function using gradient-based optimization, so maybe the authors want to comment on that. - "it is recommended to make nmin larger in experiments where the overall noise variance is large" <- why is that recommended? It will make it easier to estimate the local noise variance, but it's not necessarily clear that this is going to result in better optimization performance. - "When the noise variance is unknown [...] we approximate sigma_max^2 by the maximum observed empirical noise variance and update our approximation after every iteration" <- This makes a lot of sense, but it feels like something is swept a bit under the rug here.
Specifically, what does this do to the regret bound, which depends on sigma_max^2? Is it still valid (just with an unknown factor)? Or is there additional work to show this? - The agriculture example is a nice real-world application and a good fit for the algorithm; however, I question the number of iterations that are being evaluated here. How long is a growing cycle until the leaf area and tipburn area can be evaluated? If this is on the order of months then 100 growth cycles seems like a lot... - The regularization parameter lambda depends on the overall horizon T (and the appendix states that this is required for the theoretical results to hold) - however, in practice T may not be known if we are interested in anytime performance of the algorithm. How would one deal with this in practice? - Why are you optimizing the GP hyperparameters only after every 10 iterations and not after every iteration? - typo l97: "upper on" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your detailed and insightful comments. --- > Theoretical results: comparison with the theoretical regret bound achieved by uniform sample allocation. For uniform sample allocation (i.e., we replicate every input a constant $n_0\leq\mathbb{B}$ number of times), the effective observation noise would be $(\sigma_{\max} / \sqrt{n_0})$-sub-Gaussian, where $\sigma_{\max}$ is the maximum noise standard deviation. This would result in a regret bound which can be obtained by replacing the term $\sqrt{R^2 / (\mathbb{B} / \lceil \frac{\sigma^2_{\max}}{R^2} \rceil - 1)}$ in our regret bound (Theorem 3.1) by $\sigma_{\max} / \sqrt{n_0}$. Here we ignore the non-integer conditions (e.g., ceiling operators) for simplicity. As a result, with our optimal choice of $R^2$ (lines 171-181), the above-mentioned term from our Theorem 3.1 can be simplified to $\sigma_{\max}/\sqrt{\mathbb{B}}$ (details omitted here), which is to be compared with the above-mentioned term $\sigma_{\max} / \sqrt{n_0}$ from uniform sample allocation. Because $n_0 \leq \mathbb{B}$, **our regret bound** (with the scaling of $\sigma_{\max} / \sqrt{\mathbb{B}}$) **is guaranteed to be no worse than the regret of uniform sample allocation** (with the scaling of $\sigma_{\max} / \sqrt{n_0}$). Thank you for pointing out this interesting comparison; we'll add the discussions here to the paper after revision. --- > Empirical results: - Performance on more complex, higher-dimensional problems - Comparison against more baselines on batch BO **We've added two more complex real-world experiments with higher-dimensional input spaces** (Fig. 1 in our global response above), with input dimensions of $d=12$ and $d=14$. Our methods still consistently achieve compelling performances (especially with $\kappa=0.3$, which is consistent with our original experiments, see lines 258-261).
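As a quick numeric sanity check of the regret-term comparison with uniform allocation earlier in this response, the two terms can be computed directly. This is a standalone sketch rather than the authors' code: the values of $\mathbb{B}$, $\sigma_{\max}$, and $n_0$ are illustrative, and ceiling operators are dropped, as in the derivation above.

```python
import math

def adaptive_term(B, sigma_max):
    # Guideline choice R^2 = sigma_max^2 * (sqrt(B) + 1) / (B - 1), plugged into
    # the bound term sqrt(R^2 / (B / (sigma_max^2 / R^2) - 1)); ceilings ignored.
    R2 = sigma_max ** 2 * (math.sqrt(B) + 1) / (B - 1)
    return math.sqrt(R2 / (B / (sigma_max ** 2 / R2) - 1))

def uniform_term(n0, sigma_max):
    # Effective sub-Gaussian noise level under uniform allocation of n0 replications.
    return sigma_max / math.sqrt(n0)

B, sigma_max = 50, 1.0
print(adaptive_term(B, sigma_max))   # ~0.151, close to sigma_max / sqrt(B) ~ 0.141
for n0 in (2, 5, 10, 25):            # fixed replication counts well below B
    assert adaptive_term(B, sigma_max) <= uniform_term(n0, sigma_max)
```

For $\mathbb{B}=50$ the adaptive term comes out at about $0.151$, close to $\sigma_{\max}/\sqrt{\mathbb{B}}\approx 0.141$, and smaller than $\sigma_{\max}/\sqrt{n_0}$ for each fixed replication count checked.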
We've compared with batch TS as the representative baseline batch BO algorithm because it's the most natural competitor to our algorithm, and batch TS is both simple and has been found to yield competitive empirical performances (e.g., the TuRBO algorithm [13], which showed impressive performances in various applications, also used batch TS for batch selection). As you suggested, we'll explore comparisons with additional baseline batch BO methods in future works. --- > What if the noise is heteroskedastic in the outcome value (rather than necessarily the input)? To clarify, in our setting, the noise variance $\sigma^2(x)$ varies with the input $x$, and the realized noise $\epsilon\sim\mathcal{N}(0,\sigma^2(x))$ is added to the function value $f(x)$ to produce the outcome $y=f(x)+\epsilon$. --- > What would a more principled approach for the $n_{\max}$ heuristic for avoiding samples on undesirable regions look like? A more principled approach for avoiding samples in undesirable regions would require taking into account the function value $f(x)$ at different $x$. Since the function is unknown, we may instead use the confidence bounds calculated by the GP which contains the function with high probability. We'll explore this in future works. --- > The GP modeling the empirical noise variance may produce negative predictions. We find that in practice, it's very rare for our **upper bound on the unknown noise variance** (line 218) to be negative. In practice, we clip this upper bound to account for such exceptions. This modelling choice is adopted since it naturally allows us to derive our theoretical guarantees, and it also leads to good empirical performances in our experiments. It's interesting to see if modelling the log-variance could further improve our performance. --- > Finiteness of the domain: The discretization approach is fine in theory but in higher dimensions, it renders acquisition function optimization challenging. 
It's indeed a generic challenge for BO to optimize the acquisition function in high-dimensional continuous input spaces, for which we can adopt commonly used techniques (e.g., L-BFGS-B) in BO. This is exactly what we've done in our added experiments with continuous high-dimensional input spaces (Fig. 1 in global response above). --- > why is it recommended to make $n_{\min}$ larger in experiments where the overall noise variance is large? When the overall noise variance is high, intuitively, an overall larger number $n_t$ of replications is needed. Hence, we recommend a larger number $n_{\min}$ of minimum replications as a potentially useful heuristic. --- > "When the noise variance is unknown [...] we approximate $\sigma^2_{\max}$ by the maximum observed empirical noise variance...". What does this do to the regret bound, which depends on $\sigma_{\max}^2$? We in fact only need to approximate $\sigma^2_{\max}$ when we **choose the parameter $R^2$** by minimizing our derived regret bound (lines 171-181). So, our regret bounds (e.g., Theorem 3.1), which **hold for all values of $R^2$**, are still valid. --- > I question the number of iterations in the agriculture example. How long is a growing cycle? The growing cycle is around 1-2 weeks. Also note that our algorithms in fact already outperform the other methods before 100 iterations. --- > The regularization parameter $\lambda$ depends on the overall horizon T - however, in practice T may not be known. It's a common practice in BO to assume that the overall horizon $T$ is known [6,33]. When $T$ is not known, we can use the doubling trick commonly adopted in multi-armed bandits to obtain anytime algorithms, or simply use an estimate of $T$. --- > Why are you optimizing the GP hyperparameters only after every 10 iterations? It's a common practice to update the GP hyperparameters after multiple iterations, mainly to save computational cost. --- Thank you again for your valuable feedback.
We hope our clarifications and additional results could improve your opinion of our paper. --- Rebuttal Comment 1.1: Title: Additional Question Comment: Just a follow-up question regarding batching algorithms, stemming from this review. It is my experience that batching methods such as q-EI, q-UCB or Local Penalization can become computationally expensive and scale badly for large batches. Because of this, and looking at the large batch sizes in the experiments ($ B = 30, 50, 100 $), I thought the choice of Thompson Sampling to be very natural; however, I am wondering how you would expect the algorithm to perform for batch sizes in the range $ B \in \{5, 6, ..., 20 \}$ where other methods could be more natural competition. --- Reply to Comment 1.1.1: Title: Response to Additional Question Comment: Thank you for pointing out this additional advantage of batch Thompson sampling (TS) in terms of computational costs given our large batch sizes. It is also what we have observed in our experiments, i.e., increasing the batch size for batch TS does not significantly increase the computational cost. We'll add it to the paper as an additional justification for our choice of TS for batch selection. When $\mathbb{B}$ gets smaller, we expect the behavior of our algorithm to become more and more similar to standard batch TS (without replications). To see this, note that as $\mathbb{B}$ becomes smaller, our choice of $R^2$: $R^2=\sigma^2_{\max}(\sqrt{\mathbb{B}}+1)/(\mathbb{B}-1)$ (line 175) will become larger; as a result, our selected number of replications $n\_t^{(b)}=\lceil \sigma^2(x^{(b)}\_t) / R^2 \rceil$ (line 5 of Algo. 1) will become smaller. In the extreme case where $\mathbb{B}=4$, we have that $R^2=\sigma^2_{\max}$ and consequently $n_t^{(b)}=\lceil \sigma^2(x^{(b)}_t) / R^2 \rceil=1$, which means our algorithm reduces to standard batch TS (without replications). Therefore, when $\mathbb{B}$ is small, we expect our algorithms to behave similarly to standard batch TS.
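The small-$\mathbb{B}$ reduction described above can be illustrated with a short sketch (not the authors' code; `sigma2_max` and the per-input noise variances are made-up values):

```python
import math

def n_replications(sigma2_x, B, sigma2_max):
    # R^2 per the stated guideline: R^2 = sigma_max^2 * (sqrt(B) + 1) / (B - 1).
    R2 = sigma2_max * (math.sqrt(B) + 1) / (B - 1)
    # Selected number of replications: n = ceil(sigma^2(x) / R^2).
    return math.ceil(sigma2_x / R2)

sigma2_max = 4.0
for B in (100, 50, 4):  # shrinking budgets give shrinking replication counts
    print(B, [n_replications(s2, B, sigma2_max) for s2 in (0.5, 2.0, 4.0)])
# Extreme case B = 4: R^2 = sigma_max^2, so every input is replicated exactly once,
# i.e., the algorithm reduces to standard batch TS.
assert all(n_replications(s2, 4, sigma2_max) == 1 for s2 in (0.5, 2.0, 4.0))
```

Running this shows the replication counts dropping toward 1 as $\mathbb{B}$ decreases, matching the reduction argument above.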
Given the good empirical performances of batch TS compared with other batch BO algorithms shown in the literature [13,20], we expect our methods to perform competitively as well. To summarize, we think our algorithms perform on par with standard batch TS when $\mathbb{B}$ is small ($\mathbb{B}=4-20$) and consistently outperform batch TS (as shown in our experiments) when $\mathbb{B}$ is large ($\mathbb{B}>30$). This is in fact well aligned with the motivation of our work, i.e., in precision agriculture, $\mathbb{B}$ usually takes values within the range $\mathbb{B}=50-100$ according to the plant biologists we are collaborating with. We'll also clarify this after revision. Thanks for pointing this out. --- Rebuttal Comment 1.2: Comment: Thanks for the detailed response. A number of my concerns were addressed; thanks also for adding the higher-dimensional comparisons. However, I still feel like I don't really have a sense of how well the proposed approach works compared to other non-TS baselines. While reviewer VcKK points out that TS is well suited to large batch sizes, $\mathbb{B}$ is the total replicate size - with $n_t$ set to a minimum of 5, even with $\mathbb{B}$ the effective batch size is (at most) 10, which is definitely well within the realm of other batch acquisition functions such as qUCB or q(N)EI or other even less computationally demanding heuristics such as the penalization approach of Gonzalez et al. (2016). J. Gonzalez, Z. Dai, P. Hennig, and N. Lawrence. Batch bayesian optimization via local penalization. In A. Gretton and C. C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 648–657, Cadiz, Spain, 09–11 May 2016. PMLR.
--- Reply to Comment 1.2.1: Title: Thank You for Your Comment Comment: We agree that incorporating replications (with a fixed number $n_t$ of replications) into other batch BO algorithms (such as qUCB, q(N)EI and the penalization approach of Gonzalez et al. (2016)) can reduce their effective batch size and hence their computational cost, and it is interesting to explore comparisons of our methods with these methods in future work. In addition, as Reviewer VcKK has commented, it is also an interesting future topic to incorporate our method (e.g., our adaptive number of replications depending on the noise variance) into these other batch BO algorithms. Thank you and we will add the discussions here to the revised paper.
Summary: The paper proposes a batched Bayesian optimization algorithm with heteroskedastic noise and Thompson sampling. Further, it proposes an extension where not only the function is minimized but also a robust variant of the objective which also incorporates the observational noise variance. It provides some regret analysis relying on prior works and an experimental comparison on real-life benchmarks from agriculture. Strengths: Originality --------- The use of Thompson sampling with heteroscedastic noise is novel. Significance ----------- 1. Authors test their algorithm in real-world settings. 2. I think the pattern of trying to repeat measurements is indeed omnipresent in the life sciences, and methodologies which automatically incorporate it are of benefit to practitioners. Clarity & Quality -------- I was able to understand the paper, however the exposition is very dense. Weaknesses: The proof techniques used here are classical ones used in prior works, mainly Chowdhury & Gopalan 2017 and Desautels et al. 2014. Namely the RKHS proof of Gopalan and the batched version of it using the uncertainty sampling there. This work provides a Thompson sampling look at heteroscedastic bandits, but the theoretical results are only of marginal interest to the community since they are straightforward extensions of prior techniques. The focus on Thompson sampling seems arbitrary. There are works which can address heteroscedastic as well as robust optimization but do not use Thompson sampling, such as Kirschner & Krause 2018 and Makarova et al. 2021. Why Thompson sampling in particular is interesting remains unanswered. The fact one performs repetitions is not theoretically motivated, but instead only practically motivated; or at least I took the liberty to make this conclusion. Information theoretically speaking, repeating the measurements might not be the best thing to do to decrease uncertainty overall.
In fact probably optimizing an allocation over all points can lead to a better solution in the long run (over multiple iterations). However, I think studying the problem where the repetitions are a constraint on the measurement scheme is a much better motivation for this work, since the life sciences do have certain setup costs per experiment. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why do you put heavy focus on the theory? Are the theoretical bounds important for your real-life experiments? 2. You provide an outline of how to choose R, but is this what you use in your real lab experiments? 3. In real experiments - do you use uncertainty sampling as in the theorems? In my opinion this provides limited novelty towards the broader NeurIPS crowd. I think as a case study of Bayesian optimization with heteroscedastic noise it's very nice, and a paper more focused towards actual challenges with precision agriculture would be much more fitting. I think studying the problem where the repetitions are a constraint on the measurement scheme is a much better motivation for this work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your insightful comments. --- > The proof techniques used here are classical ones used in prior works... but the theoretical results are only of marginal interest to the community since they are straightforward extensions of prior techniques. Although we have used some proof techniques from the previous works, we think that our analyses are in fact not straightforward extensions of prior techniques. **Rigorously and coherently integrating the different components into our overall analytical framework required careful treatment and non-trivial efforts**. Therefore, we think that our technical theoretical contributions are in fact "solid, non-trivial" and "strong" (as commented by Reviewer 2mxf and Reviewer VcKK). Moreover, our algorithmic designs (e.g., our adaptive selection of the number of replications to guarantee a small effective noise level, lines 116-125), which are indispensable for our theoretical analyses to hold, are also novel and can be of broader interest to the community. More importantly, in addition to our contributions from the pure theoretical perspective, **our theoretical results have also provided useful practical guidelines for our empirical implementation**. Specifically, our regret upper bound has provided us a natural and principled way to set the effective noise variance $R^2$ (lines 171-181), which we have indeed followed in our experiments (lines 249-256) and which has led to compelling empirical performances. In addition, we have also performed empirical evaluations in real-world experiments for precision agriculture (using real-world data from precision agriculture) and AutoML, which we think constitute important empirical contributions and hence demonstrate the potential of our proposed methods in solving real-world problems. --- > The focus on Thompson sampling seems arbitrary... Why Thompson sampling in particular is interesting remains unanswered.
Our focus on Thompson sampling (TS) is in fact not arbitrary. Instead, we have adopted TS because its inherent randomness makes it particularly suitable for selecting a batch of inputs [20] in the batch setting we focus on (lines 44-45). In fact, using the inherent randomness of TS for batch selection is a simple and well-established method in BO (e.g., see [13, 20]), which both allows for the derivation of theoretical guarantees [20] and leads to strong empirical performances (e.g., the TuRBO algorithm from [13]). Thank you for pointing this out, and we'll clarify this after revision to avoid confusion. --- > The fact one performs repetitions is not theoretically motivated, but instead only practically motivated... I think studying the problem where the repetitions are a constraint on the measurement scheme is a much better motivation for this work, since the life sciences do have certain setup costs per experiment. You are correct that performing replications is mostly motivated from practical applications. Specifically, it is important to explicitly replicate each condition because in problems with large and heterogeneous observation noises, (1) replicating each input condition leads to more reliable observations and has indeed been repeatedly found [2,25,34] to improve the performances in such problems (lines 23-26), and (2) replicating each condition is indeed what practitioners do in these real-world problems such as precision agriculture (according to the plant biologists we are collaborating with). We agree that it will further strengthen our motivation to consider more real-world scenarios with motivations or requirements to perform replications, e.g., when performing new experiments with a different input condition requires expensive experimental setups as you suggested. We'll follow your suggestion and will revise the paper to discuss them as additional motivations.
Furthermore, it is also an interesting topic to theoretically show the advantage of performing replications, which we'll also explore in future work. Thank you very much for the suggestion. --- > 1. Why do you put heavy focus on the theory? Are the theoretical bounds important for your real-life experiments? > 2. You provide a outline how to choose R, but is this what you use in your real lab-experiments. Firstly, our theoretical bounds can serve as assurance for practitioners deploying our algorithms. For example, similar to many previous works on BO with theoretical guarantees, **the fact that our algorithms are asymptotically no-regret can serve as a hallmark indicating that our algorithms are well-behaved**. Secondly, our theoretical results have indeed provided us with useful and practical guidelines on how to set the effective noise variance $R^2$, which is the most important parameter in our algorithm (lines 171-181). More importantly, **we have indeed followed this theoretical guideline to choose $R^2$ in our real-world experiments** (lines 249-256). --- > 3. In real experiment - do you use uncertainty sampling as in the theorems? In our experiments, we have used the simpler random search instead of uncertainty sampling as the initialization method. This is because it has been reported in previous works that uncertainty sampling is usually only a theoretical requirement and other initialization methods often lead to similar performance in practice [20]. To corroborate this, we have added an experiment using uncertainty sampling as initialization (Fig. 3 in the global response above), and the results show that it indeed leads to very similar empirical performance. --- Thank you again for your valuable comments. We hope our clarifications and additional results could improve your opinion of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I still believe the heavy focus on theory is to the detriment of the exposition of work. 
I also noticed the comment of the above reviewer, who spotted a modelling issue in modelling $\sigma^2$. This is yet again a case where the authors chose an option which is easier to analyze with the current analysis at the expense of practice. Re: your choice of Thompson sampling. There are many papers on batch-BO where the motivation is that Thompson sampling does not lead to sufficient diversity of batches. So then, what is the conclusion? But I perfectly accept your choice of Thompson sampling. This was not criticism of the choice. I think it's fine to make choices, I was just wondering if there is a good reason for it. I find your answer to the optimal choice of R^2 according to the theory dishonest. You do say you use the theory, but at the same time in the next line you say it's too conservative and introduce a kappa which you tune to get good results. I mean, you use a scalar variable to tune another scalar variable, so what is the point here? So what if I used kappa=0.1, would it work even better? How universal is this scaling? This is a universal scaling for the whole experiment depending on the budget, which is possible to tune in simulation. My main concern up to this point is with the presentation; I feel that what has been said in the rebuttal is way more interesting than the actual contents of the paper. --- Reply to Comment 1.1.1: Title: Thank you for further comments Comment: Thank you for further comments. --- > I still believe the heavy focus on theory is to the detriment of the exposition of work. We believe that theory is as important as practice, and **both our theoretical and empirical contributions are integral and indispensable components of our paper**. 
Although we agree that it is important to design an algorithm which works well empirically, **it is of equal importance to provide a theoretical guarantee for an algorithm**, which can serve as an assurance for practitioners deploying the algorithm and can guarantee its correct behavior. Our theory is also important for the exposition of our work, because it allows our presentation to be more rigorous, principled, and unambiguous. --- > I find your answer to the optimal choice of R^2 according to the theory dishonest. You do say you use the theory, but at the same time in the next line you say its too conservative and introduce a kappa which you tune to get good results. I mean, you use a scalar variable to tune another scalar variable, so what is the point here? So what if I used kappa=0.1, would it work even better? How universal is this scaling? This is a universal scaling for the whole experiment depending on the budget, which is possible to tune in simulation. We respectfully disagree on this. Here we are using the theoretical value $R^2=\sigma^2_{\max}(\sqrt{\mathbb{B}}+1) / (\mathbb{B}-1)$ as a general guideline for our practical implementation. More specifically, we have followed the dependency (of this theoretical value) on $\sigma^2_{\max}$ and $\mathbb{B}$, and introduced an additional multiplier $\kappa$ to account for the potential conservativeness of the theoretical analysis. If we ignored this theoretical value and instead directly tuned $R^2$ as a scalar, we wouldn't be able to account for the theoretically inspired dependency of $R^2$ on $\sigma^2_{\max}$ and $\mathbb{B}$. As a result, our algorithm wouldn't be able to automatically adapt to different values of $\sigma^2_{\max}$ and $\mathbb{B}$ in a principled way. 
Therefore, our design ensures that our choice of $R^2$ is universal in the sense that a single value of $\kappa$ (i.e., $\kappa=0.3$) allows us to achieve good empirical performance in most of our experiments (which is indeed what we have shown). On the other hand, if we instead directly tune $R^2$ as a scalar, we cannot find a universal way to set $R^2$ because it cannot adapt to the different values of $\sigma^2_{\max}$ and $\mathbb{B}$. In fact, **it is common practice** in Bayesian optimization to use the theoretical value as a general guideline (rather than following the exact theoretical value) for the practical implementation. For example, when choosing the $\beta_t$ parameter in the GP-UCB algorithm [33] (which is used to tune the weight between the GP posterior mean and standard deviation), a common practice is also to use only its theoretical value as a general guideline and simply apply some fixed multiplier. For example, the representative work of [a] below sets $\beta_t=0.2d\log(2t)$ (see Section 4.4 of [a]). [a] High Dimensional Bayesian Optimisation and Bandits via Additive Models --- > My main concern up to this point is with the presentation, I feel that what has been said in the rebuttal is way more interesting that the actual contents of the paper. We will follow your suggestion and try to make the paper easier to read by giving more intuitions rather than detailed derivations/proofs, and we will also add what we included in the rebuttal to the paper after revision.
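The $\kappa$-scaled guideline debated above can be written as a one-line function. This is a minimal sketch (the function name is ours): it follows the stated dependence $R^2 = \kappa\,\sigma^2_{\max}(\sqrt{\mathbb{B}}+1)/(\mathbb{B}-1)$ with $\kappa=0.3$ as the reported default.

```python
import math

def effective_noise_variance(sigma_max_sq, budget, kappa=0.3):
    """Theory-guided choice R^2 = kappa * sigma_max^2 * (sqrt(B) + 1) / (B - 1).
    kappa < 1 offsets the conservativeness of the bound; kappa = 0.3 is the
    value the rebuttal reports working well across experiments."""
    assert budget > 1
    return kappa * sigma_max_sq * (math.sqrt(budget) + 1.0) / (budget - 1.0)

# The choice adapts automatically to both the noise scale and the budget:
r_small_noise = effective_noise_variance(sigma_max_sq=1.0, budget=100)
r_large_noise = effective_noise_variance(sigma_max_sq=4.0, budget=100)
```

The point of the structured form is visible here: $R^2$ scales linearly with $\sigma^2_{\max}$ and shrinks as the budget $\mathbb{B}$ grows, which a directly tuned scalar could not do.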
Summary: The authors propose three methods for Bayesian optimization under the constraint of batch sampling and with significant heteroscedastic aleatoric uncertainty assumed. Their method BTS-RED-Known assumes knowledge of the variance function; BTS-RED-Unknown does not assume the variance function is known and fits a GP to the negative variance function; and Mean-Var-BTS-RED also assumes no knowledge of the variance function but uses the learned variance function to enable risk-averse BO by optimizing a weighted combination of the mean function and the negative variance. The authors prove their methods are asymptotically no-regret. The methods are tested on synthetic as well as real-world (precision agriculture and hyperparameter optimization) setups. Strengths: - The paper is well written and clear. - The BTS-RED methods are simple and intuitive; it would be easy to implement and apply them to new problems. - The method performs well on the limited (see below) empirical evaluation conducted. - The authors prove the BTS-RED methods are asymptotically no-regret. - The Mean-Var extension for risk-averse BO is interesting. Weaknesses: - The method is developed for discrete valued domains; IMO extension to real-valued domains may not be trivial (see limitations section). - The experiments are all conducted on very low dimensional, discrete domains with small bounds. I would have liked to see evaluations on real-valued domains and higher dimensional problems, e.g. hyperparameter tuning with 10-20 mixed-type hyperparameters. Currently the empirical evaluation is quite limited as a result. - For the synthetic and precision agriculture experiments, the authors guarantee the ground truth function is of the heteroscedastic GP model class that their method assumes. 
This is not unreasonable, but given that there is only one further experiment in the paper which comes from a real-world unknown function (the hyperparameter tuning experiment), this limits the robustness of the empirical evaluation, as it gives the BTS-RED methods quite an advantage over competitor methods which do not have a heteroscedastic GP function representation. I would have liked to see it further evaluated on more unknown ground-truth functions. - Overall I am slightly concerned with the substantiveness of the contribution of the paper. The use of a heteroscedastic GP and Thompson sampling for BO is well established. The extension to batch sampling is interesting, but to what extent the BTS-RED methods interestingly integrate batch structure seems limited to me (see questions below). The Mean-Var extension is again interesting but fairly straightforward. All together the novelty and contribution seem somewhat limited compared to prior work, e.g. [16]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I was surprised to see that the second GP was used to model $-\sigma^2$. Is this correct? Why is the GP not used to model $\log \sigma^2$, as is more standard in the literature, since it can take on values on the full real-valued range? - I am not sure I fully understand the implementation of non-batch BO algorithms in the experimental section. It would seem to me that a non-batch algorithm should have an advantage in that it should sample 1 point, make an observation, update the function representation and sample again. Of course this may make them infeasible to apply to problems with batch structure, e.g. precision agriculture. However in the experiments these methods are performing very poorly, so I am assuming they are not implemented as above. Are you using the non-batch methods in a batch-wise fashion? I assume this would mean that all these methods would in effect simply sample the exact same point many times per batch until the budget is used up? 
If my understanding is correct, I am not sure this is an interesting comparison, and it would be interesting to evaluate these methods as they were intended to be used, as an upper bound on performance compared to methods with the batch constraint. - Why is there a need to compute the empirical mean over the replications (line 9, algorithm 1)? It would seem to me that adding (x, y_1), ..., (x, y_n) data points to the training data would be a better use of the data as it would explicitly weight observations with more replications more highly. It seems to me that you are losing the weighting by the number of replications by computing the empirical mean of the observations and treating all such aggregated data points as equal (line 10)? - Do I understand correctly that in the various BTS-RED methods, the batch structure is only accounted for in the heuristic regarding replicating the same point too many times? Otherwise sample points are selected independently, with no cross-datapoint correlations accounted for and no knowledge of the prior points in the batch? This would seem like a significant weakness of the method, and it seems feasible to modify the method to account for this; see for example "BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning" for a similar treatment in the related active learning space. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors address some limitations of the method, e.g. the use of a heuristic to handle unused budget in the last iteration of the algorithm. 
However, I would have liked to see a more comprehensive discussion of other limitations: - The method is tested on discrete valued domains; it is claimed extension to the real-valued domain is feasible, however it is not clear to me that the heuristics introduced to ensure that the same point is not sampled repeatedly can be trivially extended to a real-valued domain, where it would be possible to sample a very nearby (essentially identical) point without adding to the count of the number of replications for the originally sampled point. - Heuristics are introduced to ensure the same point is not sampled too many times, and the method introduces new hyperparameters which cannot all be set following a priori guidelines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your valuable feedback. --- **Clarification on Batch Selection and the Heuristic:** > ...the batch structure is only accounted for in the heuristic regarding replicating the same point too many times... > ...it is not clear to me that the heuristics introduced to ensure that the same point is not sampled repeatedly can be trivially extended to a real-valued domain... > ...to what extent the BTS-RED methods interestingly integrate batch structure... To clarify, **our methods do not need additional heuristics to ensure that the same point is not sampled repeatedly**. Instead, the diversity of the selected points in a batch is achieved by **the inherent randomness of the Thompson sampling** (TS) strategy (lines 44-45). In fact, using the inherent randomness of TS for batch selection is a simple and well-established method in BO (e.g., see [13, 20]), which both allows for the derivation of theoretical guarantees [20] and leads to strong empirical performance (e.g., the TuRBO algorithm from [13]). More importantly, we think that the strong theoretical guarantees of our algorithms (e.g., our algorithms are asymptotically no-regret; see line 157) indicate that our batch selection strategy is principled rather than heuristic. Regarding the heuristic you mentioned, if we understand correctly, we think you are referring to the heuristic of a maximum number of replications mentioned in lines 129-130. As a clarification, this heuristic is only needed **to further improve our empirical performance**, and is hence not an essential part of our algorithmic design. As a result, this heuristic **does not affect the ability of our algorithm to solve problems with continuous real-valued domains** (please see our next response). --- > More real-world experiments with higher-dimensional, continuous domains. As we've clarified above, applying our methods to continuous real-valued domains is indeed feasible and in fact straightforward. 
**We have added two real-world experiments with high-dimensional continuous input domains** (see Fig. 1 in our global response above), with input dimensions of $d=12$ and $d=14$. Our methods still consistently achieve compelling performance (especially with $\kappa=0.3$, which is consistent with our original experiments, see lines 258-261). These additional results further demonstrate the practicality and real-world potential of our methods, and we'll add them to the paper after revision. --- > Substantiveness of Our Contributions Here we clarify our novelty and contributions. In addition to the novelty of our algorithmic design (e.g., our adaptive selection of the number of replications depending on the noise variance), a major part of our novelty and contributions (compared with previous works such as [16]) comes from our theoretical analysis. Specifically, we have shown that our algorithms are asymptotically no-regret and are guaranteed to improve with a larger budget $\mathbb{B}$ or a smaller noise level. Moreover, our theoretical results have provided guidelines on the practical implementation of our algorithms (lines 171-181), which we have indeed followed in our experiments. Lastly, we have empirically evaluated our methods in real-world problems with large and heteroscedastic noise (e.g., the experiment using real-world data from precision agriculture in Sec. 5.2) and shown that our methods achieve competitive performance. Therefore, we think that our algorithmic design, theoretical analysis and empirical experiments constitute important contributions. --- > Using the second GP to model $-\sigma^2$ (instead of $\log\sigma^2$). This is because it allows us to naturally derive our theoretical guarantees. Moreover, this choice has indeed helped our algorithms achieve compelling empirical performance, as demonstrated by our experiments. We'll explore modelling $\log\sigma^2$ in future work to see if it leads to further performance gains. 
--- > Implementation of non-batch BO algorithms The reason why batch methods are favored in practice (over non-batch methods) is their ability to perform parallel evaluations. So, in our experiments, we have adopted a more realistic comparison, which allows the practical advantage of batch methods to be seen more clearly. Specifically, in our figures, every iteration represents 1 batch, and hence non-batch methods can be seen as having a batch size of 1. --- > Why is there a need to compute the empirical mean over the replications (line 9, algorithm 1)? ... It seems to me that you are losing the weighting by the number of replications... The use of the empirical means over replications is theoretically motivated, i.e., it allows all our observations to have the same effective noise variance $R^2$ and hence serves as the foundation for our theoretical analyses. Moreover, the weighting by the number of replications is in fact implicitly accounted for by the effective noise variance $R^2$. This is because for every queried input, its number of replications is adaptively selected to ensure that the effective noise variance of its observation is upper-bounded by $R^2$ (see lines 116-125 for detailed explanations). --- > Our hyperparameters. Regarding our hyperparameters, we have discussed how they are set (Sec. 5, first paragraph). Most of our hyperparameters are kept at the same values in all our experiments, and regarding our most important hyperparameter $\kappa$, we have shown that the values of $\kappa=0.2, 0.3$ (especially $\kappa=0.3$) consistently lead to competitive performance in all our experiments. So, we think our adopted hyperparameter values can serve as good recommendations for the practical deployment of our methods. --- Thank you again for your insightful comments. We hope our additional clarifications and experiments could improve your opinion of our paper. 
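A toy sketch of the adaptive-replication-and-averaging scheme this rebuttal describes (names and numeric values are our illustrative assumptions, not the authors' code): each queried input is replicated often enough that the variance of its empirical mean, noise_var / n, is at most the target $R^2$, so every averaged observation carries the same effective noise variance.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def num_replications(noise_var, R_sq):
    """Smallest replication count n with noise_var / n <= R^2."""
    return max(1, math.ceil(noise_var / R_sq))

def observe(f_x, noise_var, R_sq):
    """Query an input with adaptively many replications; return the
    empirical mean and the replication count actually used."""
    n = num_replications(noise_var, R_sq)
    ys = f_x + rng.normal(0.0, math.sqrt(noise_var), size=n)
    return float(ys.mean()), n

R_sq = 0.05
y_bar, n = observe(f_x=1.0, noise_var=0.82, R_sq=R_sq)
assert 0.82 / n <= R_sq  # the averaged observation meets the shared noise target
```

Note how the weighting by replications is implicit: a noisier input simply receives more replications before averaging, rather than contributing more raw data points to the GP.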
--- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I thank the authors for their clarifying comments and in particular the new experiments on a real valued domain they have added (they certainly improve the paper), I will update my score accordingly. I found the clarification of the modelling of $-\sigma^2$ very interesting and would think it would be a further useful contribution of the paper to include an ablation study on this in the appendix of the final paper as it is contrary to the current standard practice in the literature.
Rebuttal 1: Rebuttal: We'd like to thank all reviewers for your insightful comments, and for acknowledging our contributions. Specifically, we are encouraged to hear that our methods are "easy to implement" (Reviewer QJyu and Reviewer 2mxf) and hence practical, and that our contributions are "of interest to the Bayesian optimization research community as well as to practitioners" (Reviewer 4YBP and Reviewer 2mxf). Regarding our theoretical contributions, Reviewer 2mxf has commented that **"our technical contributions are solid and non-trivial"**, and Reviewer VcKK has mentioned that **"the theoretical analysis of the algorithms is very strong"**. Our empirical contributions have also been acknowledged, for example, Reviewer VcKK has commented that **"papers like this one take a big step towards making BO more practical to the wider scientific community"**. Here we provide some important additional experimental results, and we'll address your individual questions in our separate responses below. --- A common suggestion to further improve the empirical contributions of our paper is to add **real-world experiments with higher-dimensional continuous input spaces**. Therefore, following your suggestion, we have added two real-world experiments with 12-dimensional and 14-dimensional continuous input spaces, respectively: - Lunar landing experiment: Here we tune $d=12$ parameters of a controller which is used to control a lunar lander in the OpenAI gym environment, in order to maximize the cumulative rewards in an episode. - Robot pushing experiment: Here we tune $d=14$ parameters controlling a robot, in order to make it complete a task involving pushing objects. The goal is also to maximize the cumulative rewards. Both experiments are commonly used benchmarks in the literature of high-dimensional Bayesian optimization [13]. 
More importantly, for every evaluated set of controller parameters in both experiments (i.e., every input $x$), the observation (i.e., the cumulative reward) is noisy due to random environmental factors. In addition, the noise may be heteroscedastic. For example, an effective set of parameters which can reliably and consistently control the robot is likely to induce a small noise variance, whereas some ineffective sets of parameters may cause radically varying behaviors and hence large noise variances. Therefore, these experiments are also suitable for the application of our algorithms. The results are shown in Fig. 1 in the attached pdf, which demonstrate that **in these experiments with high-dimensional continuous input spaces, our algorithms still consistently achieve compelling performance** (especially with $\kappa=0.3$, which is consistent with our original experiments, see lines 258-261). We think that these additional results serve as further evidence for the empirical effectiveness and robustness of our algorithms. We sincerely hope these results, together with our individual responses below, could improve your assessment of our paper. Pdf: /pdf/d45e557988746e591de06541aebabdb3ee1aa1cb.pdf
NeurIPS_2023_submissions_huggingface
2023
Deep Insights into Noisy Pseudo Labeling on Graph Data
Accept (poster)
Summary: This paper aims to provide in-depth insights into pseudo labeling (PL) in the context of graph learning models. The authors first present an error analysis of the PL strategy, demonstrating that the error is bounded by the confidence threshold of PL and the consistency of multi-view predictions. Furthermore, they theoretically illustrate the impact of PL on convergence properties. Building upon this analysis, they propose a careful pseudo labeling methodology that involves assigning pseudo labels to samples with the highest confidence and multi-view consistency. Finally, extensive experiments demonstrate that the proposed strategy enhances the graph learning process and outperforms alternative PL strategies in link prediction and node classification tasks. Strengths: 1. This paper theoretically analyzes how the PL strategy affects convergence properties in GNNs. 2. Based on the analysis, this paper proposes a cautious pseudo labeling methodology, and extensive experiments demonstrate the effectiveness of the proposed strategy. Weaknesses: 1. There might be some mistake in Theorem 2.5: since the covariance between the cross-entropy loss and the PL strategy, and $\beta$, are non-negative, the inequality $\beta Cov + L \leq L$ holds only if $\beta Cov = 0$. 2. It might be better to clearly illustrate $q(t)$ in Figure 2, which demonstrates the framework of the algorithm. 3. It is suggested to remove the first picture in Figure 1 and add two more pictures to show the performance on node classification. 4. The word 'noisy' in the title is redundant for pseudo labeling. Furthermore, the whole paper rarely mentions 'noisy PL'. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How much time does the proposed method consume on a dataset like PubMed? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some figures could be further polished up, and check the correctness of the theorem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and insightful comments. Following are the responses regarding your concerns. For the figures in the rebuttal, please check the PDF file in the global response of the author rebuttal, located at the top of this page. > 1. There might be some mistake in Theorem 2.5. since the covariance between the cross-entropy loss and PL strategy and \beta is non-negative, the inequality $\beta Cov + L \leq L$ holds only if $\beta Cov = 0$ We are confident that Theorem 2.5 contains no errors. Our analysis in lines 184-190 demonstrates that the covariance term, denoted as $Cov$, is never greater than 0. Considering that $\beta$ is non-negative, the product of $\beta$ and $Cov$ is non-positive. Consequently, the loss decreases iteratively, leading to a smaller overall loss. > 2. It might be better to clearly illustrate q(t) in Figure 2, which demonstrates the framework of the algorithm. 3. It is suggested to remove the first picture in Figure 1 and add two more pictures to show the performance on node classification. 4. The word 'noisy' in the title is redundant for pseudo labeling. Furthermore, the whole paper rarely mentions 'noisy PL'. Thanks for your advice. We have added $q(t)$ to Fig. 2; see Fig. a in the attachment of our very first response. We have conducted experiments to investigate the impact of PL capacity on node classification performance. The results indicate that initially, as the number of PLs increases, there is an improvement in model performance. This improvement is then followed by a degradation in performance until no nodes remain above the confidence threshold. It is important to note that due to the significantly smaller number of nodes compared to edges, our experiments only capture the first scenario illustrated in Fig. 2, as shown in Figs. b and c of the attachment in our very first response. This is why we specifically selected the PL experiment for the link prediction task in the introduction. 
The primary contribution of our study lies in the theoretical analysis of the error induced by PL, which is a major drawback associated with noisy labels. The entire paper focuses on quantifying and mitigating the influence of noisy PL, addressing this critical issue in label-noise research. > How much time does the proposed method consume on the dataset like PubMed? The average time consumption (in seconds) on PubMed is reported in the following table: | Base model | GCN | GAT | SAGE | APPNP | | -- | -- | -- | -- | -- | | Time(s) | 147.6 | 388.8 | 147.6 | 181.2 | 820.0 | > Some figures could be further polished up, and check the correctness of the theorem. Thanks for your advice. We will update to clearer figures in the latest version and correct the typos in the paper. Please refer to the attachment in our first overall response. --- Rebuttal Comment 1.1: Title: About the rebuttal Comment: 1. The authors have addressed most of my concerns. 2. The authors mention that the covariance term, denoted as Cov, is never greater than 0 in the rebuttal. However, on Page 5, lines 184-185, they say that "the second inequality holds because the covariance between the cross-entropy loss and PL strategy is non-negative". Does there exist a mistake? --- Reply to Comment 1.1.1: Comment: We apologize for the typographical error in lines 184-185. Based on the analysis conducted in lines 184-190, it has been determined that the $Cov$ term in Eq. 4 should be non-positive, as indicated in line 190. We have made the necessary revision in the latest version.
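As a toy numerical illustration of the intuition behind the sign of this covariance term (this only illustrates why confidence-thresholded selection anti-correlates with the loss; it is not a substitute for the proof, and all values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample confidences and the cross-entropy losses they imply.
conf = rng.uniform(0.5, 1.0, size=1000)
loss = -np.log(conf)                     # higher confidence -> lower loss

# PL selection indicator: pick samples whose confidence exceeds a threshold.
selected = (conf > 0.9).astype(float)

# Thresholding picks exactly the low-loss samples, so the covariance between
# the loss and the selection indicator is negative (i.e., Cov <= 0).
cov = np.cov(loss, selected)[0, 1]
assert cov < 0
```

Since the indicator equals 1 precisely on the low-loss samples, its covariance with the loss is non-positive by construction, which matches the corrected sign of $Cov$ in Eq. 4.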
Summary: Pseudo labeling is significant for GNNs. This paper first theoretically analyzes the effect of pseudo labeling by showing the error bound and the convergence property. Then, accordingly, the paper proposes a cautious pseudo labeling strategy based on confidence and multi-view consistency. The experimental results demonstrate the effectiveness of the proposed cautious pseudo labeling strategy. Strengths: The paper is well written. The analysis of pseudo labeling for GNNs is valuable. Weaknesses: However, the analysis and the conclusion are general to all fields instead of highly related to graphs. Besides, the solutions, including high confidence and multi-view consistency, are not that novel. More experiments are required. The time complexity should be provided when comparing with previous methods. Performance comparison on link prediction should also contain the previous PL methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) The paper is well written. The analysis of pseudo labeling for GNNs is valuable. However, the analysis and the conclusion are general to all fields instead of highly related to graphs. Besides, the solutions, including high confidence and multi-view consistency, are not that novel. 2) Some contents may contain errors. For example, the results of Figure 1 may contain some errors. When the samples of PL and CPL are equal/similar to 0, the GAE/GAE+PL/GAE+CPL should have equal/similar performance. 3) More details need to be provided. What are the details of the multi-view teachers? Does it refer to multiple teachers or one teacher with multiple data-augmentation inputs? If the multi-view is obtained based on data augmentation, what types of data augmentation are used? What is the data augmentation in Figure 4? 4) More experiments are required. The time complexity should be provided when comparing with previous methods. Performance comparison on link prediction should also contain the previous PL methods. 5) Other questions. 
In Algorithm 1, the student was fine-tuned based on the teacher model. I wonder whether there is overfitting, since the observed data set has been put into the model again and again. In 3.1, I wonder whether the data from the test set would be selected by CPL to be labeled for training. I wonder whether CPL may reduce the performance on nodes of infrequently labeled classes. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please find our response, which addresses your concerns. 1. Thank you for agreeing that our paper is well written; this is a good question. Our analysis of the PL strategy focuses on the task of graph learning. We make assumptions regarding the graph properties, i.e., the Graph Perturbation Invariant (GPI) property and the Additive Expansion Property (AEP). GPI assumes that the representation of the graph does not undergo significant changes when augmentation is applied. AEP assumes the continuity of the probability density in the neighborhood of the local optimal set. It is highly likely that these assumptions hold true for GNNs. The idea of message passing in GNNs smooths out the discrete variations caused by augmentation and ensures continuity. Additionally, GNNs are typically shallow networks, which implies the existence of a measure that prevents extreme fluctuations in values. However, it remains unclear whether these assumptions hold in other scenarios such as tabular data. The main contribution of our study lies in providing insights into PL in the context of graph learning. While thresholding and multiview augmentation techniques are not new in the machine learning community, their adoption in our study is a natural solution as indicated by our analysis, rather than trivial combinations in other empirical studies. We aim to quantify the error bound of PL by incorporating these techniques. Our theoretical analysis demonstrates that the performance of PL is influenced by these two factors. Therefore, we combine them with PL and quantify their impact. Specifically, the noisy PL belongs to the augmentation space, thresholding is used to filter the local optimal set, and consistency can be considered an explicit measure of the error bound of PL. 2. The results depicted in Fig.1 are correct based on our experiments. 
These experiments were conducted with specific PL capacities, e.g., 2k, 4k, ..., and 100k for WikiCS, and we employed a polynomial fitting curve. To avoid overfitting, we refrained from using a high-order polynomial. Thus, there might be two factors contributing to the initial value problem: a) The original training set is relatively small. Consequently, the initial PL samples have a relatively substantial impact and enhance the prediction performance. This also influences the initial value. b) The fitting result is influenced by the nonlinear relationship between the variables. The nonlinearity can affect the initial fitting value of the function. 3. "Multi-view teacher model" refers to a single teacher model with multiple inputs, each of which is augmented separately. We apply various data augmentations to the original graph. These augmented graphs are fed into the teacher model individually. The final prediction is the average of the predictions generated from these inputs. Next, we analyze the consistency among these predictions. In Fig.4, we employed three different augmentation methods: drop node (Node view), feature mask (Feature view), and DropEdge (Structure view). In each experiment, a single augmentation method was applied three times. As for Multiview, we applied each augmentation method once as a combined augmentation. The "Random" refers to the random selection of samples during PL. 4. The overhead of the CPL stems from two main components: the calculation of confidence scores and the fine-tuning process on the enlarged training set. In each iteration, the time required to compute the confidence scores for the PL candidates is approximately the evaluation time of the test set. For the fine-tuning, the total epoch number is roughly twice that of the pre-training epochs. 
The total time consumption for fine-tuning is expected to be twice that of the base model. The averaged time consumption (seconds):

Node classification:

| Base model | Cora | CiteSeer | PubMed | AmazonPhoto | LastFMAsia |
| -- | -- | -- | -- | -- | -- |
| GCN | 115.8 | 241.0 | 147.6 | 302.6 | 104.0 |
| GAT | 251.4 | 450.0 | 388.8 | 652.2 | 338.6 |
| SAGE | 141.6 | 237.8 | 181.2 | 347.0 | 134.6 |
| APPNP | 511.2 | 1005.2 | 820.0 | 843.75 | 576.2 |

Link prediction:

| Base model | CiteSeer | Actor | WikiCS | TwitchPT | AmazonPhoto |
| -- | -- | -- | -- | -- | -- |
| GAE | 452 | 6376 | 7412 | 3401 | 4956 |
| node2vec | 691 | 5067 | 6537 | 2740 | 5470 |
| SEAL | 2982 | 14579 | 17491 | 11639 | 24904 |

We compare with another study, EdgeProposal, that proposes a similar approach for link prediction by introducing possible edges during training. The comparison results are provided, with metric Hit@20 (%):

| Dataset | GAE | EdgeProposal | CPL |
| -- | -- | -- | -- |
| ogb-ddi | 41.4 | 53.4 | 58.9 |
| ogb-colab | 60.1 | 60.4 | 60.6 |

5. We acknowledge that one of the weaknesses of PL-based methods is the risk of overfitting, as PL samples often have representations similar to the original training set. However, erroneously labeled samples may have a significant influence on the model. To prevent the model from being misled by the introduced errors, it is necessary to retain the original training set. Besides, we incorporated a validation set, and the results demonstrated that CPL does not suffer from overfitting. The proposed model uses transductive learning. The test set is also a candidate pool for PL; however, its labels are totally isolated from the training process. We did not specifically examine the outcome of imbalanced PL. To explore this further, let us consider an extreme scenario in which we pseudo label only some classes rather than all classes. This experiment was conducted on CiteSeer with GCN for the node classification task. 
During CPL, we adjust the imbalance ratio by pseudo labeling only some specific classes. Imbalance CPL:

| # PL class | 0 (raw) | 1 | 2 | 3 | 4 | 5 | 6 (balanced) |
| -- | -- | -- | -- | -- | -- | -- | -- |
| AUC (%) | 69.32 | 69.65 | 70.02 | 70.12 | 70.93 | 72.22 | 72.96 |

--- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns. I agree that: "While thresholding and multiview augmentation techniques are not new in the machine learning community, their adoption in our study is a natural solution as indicated by our analysis, rather than trivial combinations in other empirical studies."
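The multi-view teacher prediction described in the rebuttal above (a single teacher fed several independently augmented copies of the graph, with predictions averaged) could be sketched roughly as follows. This is an illustrative sketch, not the authors' code: `teacher`, the augmentation callables, and all names are assumptions, with `teacher(g)` assumed to return an (N, M) matrix of class probabilities.

```python
import numpy as np

def multiview_predict(teacher, graph, augmentations):
    """Average one teacher's predictions over several augmented views.

    Illustrative sketch: `teacher(g)` is assumed to return an (N, M) matrix
    of class probabilities, and each augmentation maps a graph to a
    perturbed copy (drop node, feature mask, DropEdge, ...).
    """
    views = [teacher(aug(graph)) for aug in augmentations]
    preds = np.stack(views)                          # (V, N, M)
    mean_pred = preds.mean(axis=0)                   # averaged multi-view prediction
    labels = preds.argmax(axis=2)                    # (V, N) per-view class votes
    consistency = (labels == labels[0]).all(axis=0)  # (N,) do all views agree?
    return mean_pred, consistency
```

The `consistency` mask is one simple way to operationalize the agreement criterion; the rebuttal averages the probabilities and then analyzes consistency among the per-view predictions.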
Summary: The paper provides an error bound for pseudo labeling on graphs. Moreover, the authors propose a cautious pseudo labeling method and validate it through experiments. Strengths: 1. The paper presents good experimental results, demonstrating the effectiveness of the proposed cautious pseudo labeling method. 2. The authors have conducted comprehensive experiments. 3. The error bound proposed in the paper is deemed useful and practical. Weaknesses: The paper suffers from unclear notation and contains numerous typos. These issues hinder the reader's understanding and make it challenging to follow the presented ideas. Clarifying the notation and addressing the typos is necessary to improve the clarity of the paper. Technical Quality: 4 excellent Clarity: 1 poor Questions for Authors:

1. Line 86: For "err(g)", does it mean the expectation is taken over the test points? Please clarify this point.
2. Line 91: Is the GPI property introduced in this paper, or has it been introduced previously? Please provide the necessary context for understanding this property.
3. Line 104: When stating "whose probability is higher than a threshold," does it mean this condition must hold for each "y ∈ U"? Please clarify this point.
4. Line 105: Since "g(G) ∈ ℝ^(N×M)", what do you mean by "ŷ ∈ g(G)"? Please explain this notation.
5. Line 102: Do you mean we can find "p_f(⋅)" or we can find "α, η"? Please clarify this statement.
6. Line 108: In "p_{αf}", there should be no "α" here. Please correct this notation.
7. Line 122: When stating "the teacher predictor satisfies additive expansion," the additive expansion is defined for a probability density, not a classifier. Please provide clarification and ensure consistency in terminology.
8. Line 124: Could you explain what is meant by "E_{Y_test}"?
9. Line 145: The first term should change "t" to "t+1".
10. Line 147: What is the definition of the covariance term in this context? Please provide clarification.
11. Lines 163-164: There seems to be a typo in the sentence, "we calculate multi-view prediction of the by." Please clarify this phrase.
12. Do you have any formal proof of Theorem 2.5?

I am willing to increase my score if these concerns are resolved. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 1 poor Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We sincerely appreciate your efforts in providing us with a detailed review. We have carefully considered all of your insightful suggestions and corrections, and we have incorporated them into the latest version of our draft. We have addressed each comment individually as follows. > 1. Line 86: For "err(g)", does it mean the expectation is taken over the test points? Yes. It measures the performance of the proposed model on the test set. Besides, we also use it to quantify the PL accuracy during the training, where the expectation is taken over the selected PL samples. > 2. Line 91: Is the GPI property introduced in this paper, or has it been introduced previously? Analogous analyses of GPI are frequently encountered, such as "label-invariant augmentation" as Definition 4.1 in [1], or the "augmentation invariant" in self-supervised learning as Definition 1 in [2]. These analyses typically assume the invariance of categories/representation or serve as regularization techniques. In our approach, we relax these constraints by incorporating the Lipschitz regularity, a principle commonly employed in the theoretical analysis of neural networks, as Definition 1 in [3] and Definition 1 in [4]. [1] Yu, Junchi, Jian Liang, and Ran He. "Mind the Label Shift of Augmentation-based Graph OOD Generalization." IEEE/CVF CVPR. 2023. [2] Hua, Tianyu, et al. "On feature decorrelation in self-supervised learning." IEEE/CVF ICCV. 2021. [3] Virmaux, Aladin, and Kevin Scaman. "Lipschitz regularity of deep neural networks: analysis and efficient estimation." NeurIPS 31 (2018). [4] Arghal, Raghu, Eric Lei, and Shirin Saeedi Bidokhti. "Robust graph neural networks via probabilistic Lipschitz constraints." Learning for Dynamics and Control Conference. PMLR, 2022. > 3. Line 104: When stating "whose probability is higher than a threshold," does it mean this condition must hold for each "y ∈ U"? 
Yes, the probability of the elements $y$ in the local optimal set $U$ should be higher than the threshold. > 4. Line 105: Since "g(G) ∈ ℝ^(N×M)", what do you mean by "ŷ ∈ g(G)"? Sorry for the notation mistake. g(G) is the output N×M confidence matrix, and ŷ is the N×M probability prediction of the augmented graph. It should be ŷ = g(G) here. > 5. Line 102: Do you mean we can find "p_f(⋅)" or we can find "α, η"? Please clarify this statement. & 6. Line 108: In "p_{αf}", there should be no "α" here. Thank you for your careful reading. The conclusion should be $p_{\alpha f}(U/U_\varepsilon) \ge p_{\alpha f}(U) + \alpha\eta$. This proposition is an assumption on the continuity of the trained GNN-based confidence predictor over the local optimal set. When we apply the graph augmentation, the perturbed local optimal set becomes larger. The enlarged subset still satisfies the inequality under the amplified measure. We aim to find the probability measure for the prediction "p_f(⋅)" rather than "α, η", which refers to the trained GNN in the experiment. We only wish to illustrate the continuity property. Thus, the proposition only states the existence of the coefficient pair (α,η). Determining the values and analyzing their influence are not the point of our study. We conduct a similar analysis to Proposition 1 in [5], which shows in detail the bound on the coefficient α. [5] Zhang, Yuchen, Percy Liang, and Moses Charikar. "A hitting time analysis of stochastic gradient langevin dynamics." Conference on Learning Theory. PMLR, 2017. > 7. Line 122: When stating "the teacher predictor satisfies additive expansion," the additive expansion is defined for a probability density, not a classifier. Sorry for the inaccurate statement. We refer to the confidence predictor g in the teacher model. Its corresponding probability density refers to f in Proposition 2.2. It is hard to provide a theoretical analysis of the derivative of neural networks; we can only make the assumption based on the local smoothness of GNNs. 
We could revise the prerequisite of the theorem as "For the GNN in the teacher model, if its corresponding density measure satisfies additive expansion." > 8. Line 124: Could you explain what is meant by "E_{Y_test}"? It refers to the expectation taken over the test set. As in the reply to your first question, the expectation of Err(g) is taken over the test set. Then the inconsistency term A in Theorem 2.3 should be taken over the same range. In applications where the ground truth is unknown, this term could be estimated from the validation set or training set. But the difference between the estimation and the theorem is not the key point in our study. > 9. Line 145: The first term should change "t" to "t+1". Yes. According to the definition in Algorithm 1, it should be $CE(g_\psi^{(t)}, \hat{y}_o^{(t+1)}) \ge CE(g_\phi^{(t)}, \hat{y}_o^{(t)})$. > 10. Line 147: What is the definition of the covariance term in this context? There are two variables in the covariance term, the cross-entropy and the PL strategy. In this study, we define the PL strategy T as an indicator function over the N PL candidates. The output is 1 for the selected PL samples, and 0 for the non-PL samples. The cross-entropy term also consists of N elements, each of which is the difference between the predicted confidence of the candidate and its ground truth label. The covariance quantifies the relation between the cross-entropy and the PL indicator function over these N candidate samples. > 11. Lines 163-164: There seems to be a typo in the sentence, "we calculate multi-view prediction of the by." Thanks for pointing out the mistake. It should be "we calculate the multi-view prediction by the teacher model." > 12. Do you have any formal proof of Theorem 2.5? We have shown the detailed proof of the first inequality in the Appendix. It is hard to give a theoretical proof of the second inequality, as it would be the symbolic representation of the analysis in Lines 184-190. 
--- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal. I feel they have adequately addressed my questions and concerns. I am increasing my score.
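To make the covariance definition from the rebuttal above (question 10) concrete, here is a small numerical sketch; the function and variable names are illustrative, not the paper's notation. A negative covariance corresponds to the regime where the selected pseudo-labels tend to be the low-error candidates.

```python
import numpy as np

def pl_covariance(ce, indicator):
    """Empirical covariance between per-candidate cross-entropy and the
    0/1 PL indicator T over the N candidates (names are illustrative)."""
    ce = np.asarray(ce, dtype=float)
    t = np.asarray(indicator, dtype=float)
    return float(np.mean((ce - ce.mean()) * (t - t.mean())))
```

For example, with per-candidate cross-entropies `[0.1, 0.2, 0.9, 1.1]`, pseudo labeling the two low-error candidates (`[1, 1, 0, 0]`) yields a negative covariance, while selecting the high-error pair yields a positive one.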
Summary: The article discusses noisy pseudo labeling (PL) on graph data and proposes a new cautious PL methodology (CPL) to improve the graph learning process. The authors conduct experiments to evaluate the CPL strategy for link prediction on various datasets and apply it to popular models in the node classification task. The results show that the proposed strategy outperforms other PL strategies. The paper also provides a theoretical analysis of the impact of noisy labels introduced by PL on the graph training procedure. 1. The error introduced by PL is bounded by the PL confidence threshold and the consistency of multi-view predictions. 2. PL can be designed to contribute to the convergence property. Strengths: The paper makes a highly original and significant contribution to graph learning. It proposes a new cautious Pseudo Labeling (PL) methodology that addresses limitations in prior works by introducing a confidence threshold and a consistency criterion for selecting high-confidence PL samples. This methodology, combined with a new consistency-based PL (CPL) strategy, improves the convergence property of graph learning and outperforms other PL strategies in link prediction and node classification tasks. The research methodology is rigorous, and the paper is well-structured, clear, and provides practical solutions to the challenges of limited and noisy labeled data. Weaknesses: The paper could be strengthened by addressing several weaknesses. Firstly, in Table 2, the authors could also compare with other PL methods in the link prediction task. Secondly, conducting experiments with larger sample sizes or exploring different configurations would provide a more comprehensive evaluation. Thirdly, clarifying the methodology by providing implementation details and explaining data preprocessing steps would enhance replicability and understanding. 
Finally, considering the applicability of the proposed approach to different domains with highly imbalanced class distributions or datasets with different types of noise would broaden its practical relevance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Were there any specific assumptions made when applying the proposed approach to the benchmark datasets? It would be helpful to understand the compatibility of the approach with different dataset characteristics, such as class imbalance or noise types. Insights into these considerations would shed light on the generalizability of the approach. 2. Could you provide additional information on the hyperparameters used in the experiments? Specifically, how were the hyperparameters set for the proposed cautious Pseudo Labeling (PL) methodology and the baseline models? Sharing these details would aid in replicating and fine-tuning the approach in future research. 3. In the discussion of results, could you provide further insights into the potential limitations or failure cases of the proposed approach? Understanding the scenarios where the approach may not perform optimally would help in setting realistic expectations and identifying areas for further improvement. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not mention their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are most thankful for your thoughtful assessment, and glad to address all your concerns: > Were there any specific assumptions made when applying the proposed approach to the benchmark datasets? It would be helpful to understand the compatibility of the approach with different dataset characteristics, such as class imbalance or noise types. Insights into these considerations would shed light on the generalizability of the approach. The assumptions of the CPL are the Graph Perturbation Invariant (GPI) property and the Additive Expansion Property (AEP). GPI guarantees there is no extreme change during the augmentation process. Some scenarios, like adversarial attacks and designed noise, are excluded. AEP assumes the continuity of the probability measure in the neighborhood of the local optimal set. A sudden variation in probability density, such as in a highly heterophilous graph, may violate this assumption. There is no evidence showing that the imbalance and noise type influence the efficiency of CPL. However, their effects on the base model are key to the overall performance. > Could you provide additional information on the hyperparameters used in the experiments? Specifically, how were the hyperparameters set for the proposed cautious Pseudo Labeling (PL) methodology and the baseline models? Sharing these details would aid in replicating and fine-tuning the approach in future research. CPL is characterized by a relatively small number of hyperparameters. Specifically, the two key hyperparameters that need to be set are the PL capacity per iteration and the corresponding number of fine-tuning epochs. These hyperparameters can be chosen based on the available computational resources, as discussed in section 3.2.2 of our analysis. In our experiments, we opted to set the number of fine-tuning epochs to 20% of the pretraining epochs. The iterative PL capacity values are presented in the following tables. 
We have implemented early stop restrictions in our approach. We have defined a lowest acceptable threshold, denoted as Th, and a fine-tuning patience value, denoted as P. In CPL, we employ a top-k strategy. However, if the confidence of a particular sample falls below the threshold Th, it will not be assigned a pseudo-label. The threshold Th can vary depending on the dataset and its specific characteristics. The exact values used for Th are reported below. Furthermore, we have incorporated an early stop mechanism based on the fine-tuning patience P. If the CPL process fails to improve model performance for P consecutive iterations, the training is terminated. In our experiments, we have set P to a value of 10. These early stop restrictions serve to control the quality and effectiveness of PL.

Link prediction:

| Dataset | CiteSeer | Actor | WikiCS | TwitchPT | AmazonPhoto |
| -- | -- | -- | -- | -- | -- |
| #PL/itr | 100 | 200 | 1000 | 300 | 500 |
| Th | 0.8 | 0.9 | 0.98 | 0.9 | 0.9 |

Node classification:

| Dataset | Cora | CiteSeer | PubMed | AmazonPhoto | LastFMAsia |
| -- | -- | -- | -- | -- | -- |
| #PL/itr | 100 | 200 | 1000 | 300 | 500 |
| Th | 0.6 | 0.8 | 0.8 | 0.8 | 0.8 |

> In the discussion of results, could you provide further insights into the potential limitations or failure cases of the proposed approach? Understanding the scenarios where the approach may not perform optimally would help in setting realistic expectations and identifying areas for further improvement. The proposed model represents a basic implementation of confidence-based PL for multiview graph learning. While it aligns with the theoretical analysis, there is ample room for further enhancement. One potential area for improvement lies in the measurement of confidence. In the current CPL approach, we rely on the average of multiview predicted probabilities, which is a biased estimation of confidence. 
By incorporating a proper confidence estimation that takes into account uncertainty and factors in downstream tasks, we can potentially enhance the underlying model. Another avenue for advancement is the development of more advanced and adaptive PL strategies. For instance, we could consider incorporating a diversity penalty during the selection of PL candidates. This would help to promote a more diverse and representative set of candidates, thereby enhancing the overall learning process. On a different note, providing an explicit condition for the theorem is challenging. However, it is possible to circumvent the assumptions in specific scenarios. As previously mentioned, the AEP assumes the continuity of probability near the local optimum. Yet, situations characterized by sudden variations in probability density, such as those encountered in heterophily graphs or instances of overconfidence in incorrect samples, may violate this assumption. By acknowledging and addressing these specific conditions, we can refine the model's applicability and effectiveness. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: Thank you for the detailed explanations. I am willing to champion this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for championing our paper. We appreciate the time you dedicated to reviewing our study and engaging in the rebuttal process. Your valuable insights and suggestions have greatly contributed to the improvement of our paper.
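The top-k selection with a lowest acceptable threshold Th, as described in the rebuttal's hyperparameter discussion above, might look like this in outline; the function and variable names are ours, not the released code.

```python
import numpy as np

def select_pl_candidates(confidence, k, th):
    """Pick at most k pseudo-label candidates per iteration: the k most
    confident ones, excluding any whose confidence falls below th.
    Illustrative sketch of the described top-k-with-threshold rule."""
    order = np.argsort(confidence)[::-1]   # candidate indices, most confident first
    topk = order[:k]
    return topk[confidence[topk] >= th]    # drop those below the threshold Th
```

With the early-stop patience P on top of this rule, an iteration that selects no candidates (or fails to improve validation performance P times in a row) would terminate the CPL loop.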
Rebuttal 1: Rebuttal: The revised main scheme and the required figures of the experiment from **Reviewer 4kBQ** are shown in the supplementary PDF file. Pdf: /pdf/d5128af85bb960c2667055f5ef0d921d281e4d23.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
On the Minimax Regret for Online Learning with Feedback Graphs
Accept (spotlight)
Summary: This paper considers a classic problem of online learning with feedback graphs, which interpolates between full-information feedback and bandit feedback. Specifically, the authors consider the case where the feedback graph is undirected, meaning that if node $i$ can observe node $j$, then node $j$ can observe node $i$. The best known $T$-dependent upper bound for this problem is $O(\sqrt{\alpha T\ln K})$ by [Alon et al., 2013]. For general strongly-observable graphs, the best known $T$-dependent upper bound is $O(\sqrt{\alpha T\ln^3 K})$ proposed in [Zimmert and Lattimore, 2019]. In this paper, the authors propose an FTRL algorithm with q-Tsallis entropy regularization, showing an $O(\sqrt{\alpha T(1+\ln \frac{K}{\alpha})})$ upper bound. This recovers the minimax regret in both the bandit setting, $O(\sqrt{KT})$, and the full-information setting, $O(\sqrt{T\ln K})$. Moreover, the authors also propose an improved $\Omega(\sqrt{\alpha T\frac{\ln K}{\ln \alpha}})$ lower bound compared with the $\Omega(\sqrt{\alpha T})$ lower bound proven in [Alon et al., 2015], though this lower bound does not match the upper bound obtained by the q-Tsallis FTRL algorithm. In addition, the authors also generalize their results to the undirected strongly observable graphs and time-varying graphs. Strengths: - The paper is well-written and the proposed algorithm is easy to follow. - The designed algorithm improves upon the best known $T$-dependent regret bound for undirected strongly observable graphs. Specifically, the important technical contribution is Lemma 1, which shows that the stability term of the FTRL can be bounded by $\alpha^{1+b}$, where $b$ is determined by the parameter choice of the Tsallis entropy, compared to the one with log factors obtained in [Alon et al., 2015] and [Zimmert and Lattimore, 2019]. 
- The lower bound also tries to bridge the previous gap between the full-information case and the feedback graph case, although there is still a gap between the upper and the lower bound and the technique used in the lower bound is very similar to the one used for proving the lower bound for the contextual bandit in [Seldin and Lugosi, 2016]. Weaknesses: I do not find major weaknesses in this paper, except that the current upper bound is only obtained for the undirected strongly observable graph, which is also discussed by the authors in the appendix. Generalizing this technique to general strongly observable graphs would be interesting. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The current algorithm is an FTRL-based algorithm. I wonder whether a certain type of online mirror descent based algorithm with a time-varying learning rate can also achieve similar results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. As mentioned in our answer to Reviewer TWzC, we hope the case of directed feedback graphs could be addressed by extending the proposed techniques. Regarding your question, OMD could be used in place of FTRL with the same techniques presented in this work. With regards to the adoption of a time-varying learning rate, we remark that adapting to a sequence of graphs with arbitrary independence numbers would also require the Tsallis entropy parameter to adapt to such a sequence, which poses significant challenges for the analysis. This is why we adopted an approach based on the doubling trick. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the authors' response. The response addresses my questions and I keep the original score.
Summary: The paper investigates no-regret online learning algorithms performing under the feedback graph model. More precisely, at each round $t$ a learner selects an action (out of a set of possible actions), incurring the cost associated with the specific action at the specific round. The actions are additionally vertices of an undirected graph that changes from round to round. Once the action $i$ is selected at round $t$, the learner is also informed of the costs of the neighboring actions at the specific round. The latter setting generalizes the bandit setting (the respective graph is composed of isolated vertices) as well as the full-information setting (the respective graph is a clique). Initially the authors consider the special case where every graph at each round admits independence number $\alpha$ ($\alpha =1$ for cliques and $\alpha =K$ for graphs with $K$ isolated vertices). The authors first provide a nearly tight regret guarantee ($O(\sqrt{\alpha T \log (K/\alpha)})$) for the above special case. The authors also provide a nearly matching $\Omega(\sqrt{\alpha T \log K / \log \alpha})$ lower bound on the regret. Finally the authors remove the assumption that all the arriving graphs admit the same independence number and provide an $O(\sqrt{\overline{\alpha} T \log (K/\overline{\alpha})})$ regret bound where $\overline{\alpha}$ is the time-average independence number of the graphs. Strengths: I think the paper is very interesting both from the perspective of results as well as from the perspective of techniques. I find very interesting the fact that the provided regret bound matches the respective regret bounds for the bandit ($\alpha = K$) and the full-information case ($\alpha=1$). Also the authors provide a nearly matching lower bound that is based on a novel reduction to multitask learning. Both the techniques for the upper and lower bounds require non-trivial ideas that are nicely illustrated in the current write-up. 
I also liked the fact that the provided algorithm can be modified to handle the case of time-varying independence numbers. I overall believe that the paper provides very solid results that will be of great interest to the online learning audience of NeurIPS. Weaknesses: The only limitation of the paper that I can find is the assumption of undirected graphs. The latter is also acknowledged by the authors, and considering the complexity of the current analysis, I think it is more than fair to leave the directed case for future work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. As mentioned in our answer to Reviewer J2zm, we agree that the directed feedback graphs case is an interesting future direction, which we hope could be addressed by extending the proposed techniques. --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading the other reviews, I am confident that this is a very good paper and I have decided to keep my current score.
Summary: Two classical learning problems are learning with experts and multi-armed bandits. In both problems, a learner interacts during $T$ rounds with a set of $K$ actions by selecting an action at each round. After each round, the loss/reward at that step is revealed for all actions (resp. only for the selected action) in the expert (resp. bandit) problem. Mannor and Shamir [24] provided a convenient interpolation between the two problems by considering a generalization via feedback graphs. At each step $t$, losses are observed only at actions neighboring the selected one, according to a graph $G_t$. The expert case corresponds to the clique, while the bandit case corresponds to graphs without edges. In this context, it is known that if the graphs have independence number $\alpha$, the minimax regret lies between $O(\sqrt{\alpha T\log K})$ and $\Omega(\sqrt{\alpha T})$. Further, the lower (resp. upper) bound is tight for multi-armed bandits (resp. learning with experts), for which $O(\sqrt{KT})$ regret is achievable (resp. a lower bound $\Omega(\sqrt{T\log K})$ is known). This paper investigates the corresponding $\sqrt{\log K}$ gap between the case $\alpha=1$ (clique/experts) and $\alpha=K$ (multi-armed bandits) and gives an interpolating upper bound $O(\sqrt{\alpha T(1+\log (K/\alpha))})$ on the regret, provided that the graphs are undirected and strongly observable (each vertex either has a self-loop or is a neighbor of all other vertices), with the same algorithm---FTRL with $q$-Tsallis entropy---run with an appropriate parameter $q(\alpha)$ and loss estimates. Via a doubling trick, they show that one can achieve this bound up to an additive $\log \alpha$ term without prior knowledge of $\alpha$ (and this can be further extended to the case when the graphs $G_t$ do not share the same independence number).
Last, the authors provide an improved interpolating lower bound $\Omega(\sqrt{\alpha T \log K/\log \alpha})$, which requires the graphs $G_t$ to vary over time (but with the same independence number), showing that when $\alpha$ is sub-polynomial in $K$, learning with feedback graphs requires an extra logarithmic factor compared to the known lower bound $\Omega(\sqrt{\alpha T})$. This is shown via a reduction to the multitask bandit problem (different tasks are simulated by changing the graph over time). Strengths: The paper is very well-written and pleasant to read. The question is well posed, and motivated by the fact that removing the $\sqrt{\log K}$ factor for multi-armed bandits, from $O(\sqrt{KT\log K})$ to $O(\sqrt{KT})$, remained open for a significant amount of time in the literature. Hence, interpolating the regret bounds in terms of the remaining factor $\sqrt{\log K}$ between the known bounds $O(\sqrt{\alpha T\log K})$ and $\Omega(\sqrt{\alpha T})$ seems an important question. The upper bounds use the same algorithm (FTRL with $q$-Tsallis entropy) with adapted parameters $q$, which also interpolates between known approaches for the extreme cases $\alpha=1$ or $\alpha=K$, giving new insights on the impact of this parameter on the learning behavior. Weaknesses: Although the paper presents new contributions to interpolate between experts and bandits, the proofs follow classical arguments in the literature and seem to bring limited original ideas. In particular, it was known that the $q$-Tsallis entropy specifically allows achieving the minimax regret in both end cases. The main difficulty in the generalization to feedback graphs is in a novel bound for the variance term in the FTRL analysis (Lemma 3), which itself is a generalization of known results in the literature for $q=1$. The rest of the proof follows standard analysis. The lower bound heavily relies on the fact that graphs are allowed to vary over time.
It seems that the approach cannot be extended to the important case of a fixed graph (which seemed to be the main case studied in the previous literature on feedback graph bandits). The paper would also be significantly strengthened if the lower bound could be improved to match the upper bound when, e.g., the independence number $\alpha$ grows polynomially in $K$---in that case, the present paper does not provide improvements in terms of rates compared to the existing literature. As such, the gap left by this paper seems to remain quite important. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Does the analysis carry over to adaptive adversaries, or is the oblivious assumption important? - l.97: the independence number is the cardinality of such a set, right? - What happens if we assume that the graph is constant over time? Would you expect similar lower bounds to hold? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations were very well addressed by the authors, who clearly identify the new bound on the variance (Lemma 3) as the key added technical contribution for the upper bounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
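For readers less familiar with the algorithm the reviews discuss, here is a minimal Python sketch of one FTRL iterate with the $q$-Tsallis entropy regularizer. The stationarity condition and bisection solver below are standard, but the function name and numerical details are my own illustration, not the paper's code.

```python
def tsallis_ftrl_iterate(cum_losses, eta, q):
    """One FTRL step with the q-Tsallis entropy regularizer (0 < q < 1).

    Solves x = argmin_{x in simplex} eta*<x, L> + (1 - sum_i x_i^q)/(1 - q).
    Stationarity gives x_i = [(1 - q)(eta*L_i + lam)/q]^(1/(q-1)) for a
    Lagrange multiplier lam fixed by sum_i x_i = 1, found here by bisection.
    """
    def x_of(lam):
        return [((1 - q) * (eta * L + lam) / q) ** (1.0 / (q - 1))
                for L in cum_losses]

    lo = -eta * min(cum_losses) + 1e-12   # keeps every base positive
    hi = lo + 1.0
    while sum(x_of(hi)) > 1.0:            # grow bracket until total mass < 1
        hi = lo + 2.0 * (hi - lo)
    for _ in range(200):                  # sum(x_of(lam)) is decreasing in lam
        mid = 0.5 * (lo + hi)
        if sum(x_of(mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    x = x_of(0.5 * (lo + hi))
    s = sum(x)
    return [xi / s for xi in x]
```

With zero cumulative losses this recovers the uniform distribution, and mass shifts toward low-loss arms as losses accumulate; the paper's point, as both reviews note, is to tune $q$ as a function of the independence number $\alpha$ rather than fixing $q=1/2$ (bandits) or $q \to 1$ (experts).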
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We remark that obtaining the key result of Lemma 1 was the obstacle that prevented prior work from obtaining improved guarantees using $q$-Tsallis entropy with feedback graphs. Although we focused on the case of time-varying graphs in our lower bound construction, we conjecture that the upper bound is tight for all values of $K$ and $\alpha$, even in the case of some fixed graph. In any case, please note that the lower bound we provided is the first such result that hints at the necessity of a logarithmic factor for the minimax regret of this problem (beyond the experts case). To address the question about adaptive adversaries, the analysis does carry over straightforwardly if the adversary adapts to the previous choices of the learner with our current notion of regret, but it would be interesting to investigate the more challenging notions of dynamic or adaptive regret. Finally, we thank the reviewer for pointing out the typo. --- Rebuttal Comment 1.1: Comment: Thank you for your answers and comments. After re-examining the paper, I agree that the contributions are significant and the new bounds in the lemmas are significant. I have updated my score as a result.
Summary: The paper studies online learning under partial observations given by an underlying graph structure. This model is a common generalization of the bandit model (where the feedback graph contains only self-loops) and the full-information model (where the graph is a complete graph with self-loops). This model has been considered in the literature, where it has been shown that the regret (under a technical condition known as strong observability) is $\tilde{\Theta} ( \sqrt{ \alpha T } )$, where $T$ is the time horizon and $\alpha$ is the independence number of the graph. Prior to the present paper there was a logarithmic gap between the regret upper and lower bounds. In particular, the previous bound, when instantiated in the case of bandits, led to an extra $\log K$ factor. The present paper addresses this gap by designing an algorithm that achieves regret $\sqrt{ \alpha T (1 + \log (K/\alpha)) }$, which gives the optimal bounds in both extremes. Furthermore, the paper presents lower bounds matching their given bounds. The algorithm that the paper analyses is a version of FTRL with the Tsallis entropy regularizer. Since the choice $q=1/2$ is known to be optimal for bandits and $q=1$ is optimal for full information, the natural idea is to interpolate between the two using an $\alpha$-dependent choice of $q$. The paper analyses the variance of the natural inverse-probability-weighted estimator of the loss in a novel way to arrive at the optimal choice of $q$ and obtain the regret bound. The paper extends the analysis to slightly more general settings (beyond graphs with self-loops; time-varying independence numbers). Strengths: As stated in the summary, the main technical novelty is the analysis of the variance of the IPW estimator in the setting of feedback graphs and FTRL. This is itself a rather interesting contribution and I expect this to be useful in various other contexts.
The main result presented in the paper resolves an interesting and natural question in the literature. Weaknesses: One main drawback of the approach of an $\alpha$-dependent choice of $q$ is that it makes the algorithm extremely inefficient (in theory at least). In particular, the independence number of a graph is known to be (extremely) hard to approximate; in particular, unless $P=NP$ there is no "non-trivial" approximation algorithm. This is to be contrasted with the previous work in the feedback graph setting, which is efficient (polynomial in $N$, the size of the graph/number of bandits). It remains an excellent open question to see if there is an efficient algorithm achieving this bound. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it immediately clear how the regret behaves when one only has an estimate of the independence number instead of the exact quantity? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
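To illustrate the computational obstacle this review raises: a lower bound on $\alpha(G)$ is easy to compute greedily, but the greedy set is only maximal, not maximum, and no non-trivial approximation of $\alpha$ is possible in general unless $P=NP$. A hypothetical sketch (not any algorithm from the paper):

```python
def greedy_independent_set(adj):
    """Greedy maximal independent set; its size only lower-bounds alpha(G).

    adj: dict mapping each vertex to the set of its neighbors (undirected).
    Picking low-degree vertices first is a common heuristic, but the result
    can still be far smaller than a maximum independent set.
    """
    chosen, blocked = [], set()
    for v in sorted(adj, key=lambda u: len(adj[u])):
        if v not in blocked:
            chosen.append(v)
            blocked.add(v)          # a chosen vertex blocks itself...
            blocked |= adj[v]       # ...and all of its neighbors
    return chosen
```

For a clique this returns a single vertex ($\alpha=1$) and for an edgeless graph all $K$ vertices ($\alpha=K$), matching the two extremes discussed in the reviews.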
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. It is indeed true that computing, or even approximating, $\alpha$ is computationally intractable in general. Please note that we do address this issue in Section 4, where we not only generalize our approach to time-varying feedback graphs with possibly different independence numbers, but also lift the requirement of knowing these independence numbers. If we specialize this setting to that of a fixed graph, we can avoid requiring knowledge of $\alpha$ altogether. We thank the reviewer for pointing out this possible misunderstanding. We presented the first result assuming the knowledge of $\alpha$ to simplify the presentation, but we will make sure to clarify this important feature of our theory in the revised version. --- Rebuttal Comment 1.1: Comment: We thank the authors for pointing this out and apologize for the oversight. Increasing the score accordingly.
NeurIPS_2023_submissions_huggingface
2023
Towards Robust and Expressive Whole-body Human Pose and Shape Estimation
Accept (poster)
Summary: The paper focuses on improving whole-body pose and shape estimation from monocular images, a task that often struggles with complex, real-world scenarios. The authors argue that the performance of these models is significantly impacted by the quality of the predicted bounding box, such as the scale and alignment of body parts. The difference between ideal bounding box annotations and model detection results poses a substantial challenge to whole-body pose and shape estimation. To address this, the authors introduce RoboSMPLX, a framework to enhance the robustness of whole-body pose and shape estimation. RoboSMPLX integrates three new modules: a Localization Module to improve model awareness of the subject's location and semantics within the image space, a Contrastive Feature Extraction Module that uses a contrastive loss with dedicated positive samples to help the model remain invariant under strong augmentations, and a Pixel Alignment Module to ensure the reprojected mesh from the predicted camera and body model parameters is accurate and pixel-aligned. The effectiveness of the method is showcased through comprehensive experiments on body, hands, face, and whole-body benchmarks. Strengths: The paper is well-structured and reader-friendly. It begins with a compelling motivation that efficiently highlights the issue of the current state-of-the-art methods' low robustness against varying crops and misalignments. The authors also convincingly justify potential solutions to these problems. The proposed method by the authors is sound even if it lacks complete novelty. Impressively, the authors have performed extensive experiments and provided an in-depth ablation study, substantiating their proposed approach effectively. Although their method doesn't top all the benchmarks, its efficacy is undeniable, particularly in its demonstrated robustness to variations in crop size and alignment - a challenge that other methods fail to meet.
The authors provide a thorough justification with robust qualitative and quantitative results. Weaknesses: The claim made in lines 47-51 and 76-77, suggesting that contrastive learning is not used in the parametric estimation of human meshes, is inaccurate. Previous works, such as [1], have already incorporated a form of contrastive and triplet loss for 3D face shape and pose estimation from monocular face images. Thus, the application of contrastive loss for mesh recovery from monocular images isn't entirely novel. The authors should moderate their claims and include a thorough discussion concerning related work in this field. Furthermore, the pixel alignment loss achieved by projecting the mesh as a mask, a technique used by ICON [2] for improved SMPL pose parameter estimation, is not entirely new. The authors should also include instances of the method's failure during testing within the main body of the paper. [1] Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision. CVPR 2019 [2] ICON: Implicit Clothed humans Obtained from Normals. CVPR 2022 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The paper seems sound to me and I do not have much questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Although some limitations have been discussed it might not be adequate. Please add some visual limitations of the model during the inference time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our extensive experiments and in-depth ablation study, and recognising the efficacy of our methods. We will polish the paper and add the clarifications below in the revised version. Below we would like to provide point-to-point responses to address all the raised questions: **Q1: "application of contrastive loss for mesh recovery from monocular images isn't entirely novel. The authors should moderate their claims and include a thorough discussion concerning related work in this field."** **A1:** Thank you for your suggestion. We have added [1] in the discussion for related works in this field. As stated in Sec.4.3, our novel design lies in: 1. **The scheme of supervised contrastive learning for high-dimensional regression.** Prior SSL methods [43, 58, 4] are not particularly adept at learning useful embeddings for human pose and shape estimation. Without labels, the model primarily extracts features based on background information instead of pose information. 2. **The design of human pose representation.** Sanyal et al. [1] use shape as the representation to disentangle expression, head pose and camera parameters. To learn accurate poses, we find that using the normalized keypoints representation in 3D is more useful than using pose parameters (joint rotations). 3. **The sampling strategy for selecting positive samples.** Prior works [43, 58] employed pose-variant augmentations (e.g., rotation and flipping), which can adversely affect the learning by altering the global orientation. Below is the discussion and comparisons with existing methods, which will be added to the related works in the revision: “Contrastive Learning. Recently contrastive learning has demonstrated state-of-the-art performance among self-supervised learning (SSL) approaches. This strategy has been applied to 3D hand pose and shape estimation [43, 58]. 
**Sanyal et al [1] incorporates a novel shape consistency loss for 3D face shape and pose estimation that encourages the face shape parameters to be similar when the identity is the same and different for different people.** Choi et al. [4] were the first to apply contrastive learning for 3D human pose and shape estimation. They found that SSL is not useful for this task, as the learned representations could be challenging to embed with high-level human-related information. Khosla et al. [18] proposed supervised contrastive learning for image classification tasks, which incorporates label information during training. **Currently there is no attempt to apply this strategy to human pose and shape estimation, where the definition of positive samples is unclear, and data lie in a continuous space. We are the first to overcome these challenges and integrate supervised contrastive learning with whole-body pose and shape estimation.**” **Q2: "pixel alignment loss achieved by projecting the mesh as a mask, a technique used by ICON [2] for improved SMPL pose parameter estimation, is not entirely new."** **A2:** We acknowledge the methodological parallels between our work and that of ICON. However, it's crucial to delineate the distinctions in implementation and objectives. ICON [2] relies on an existing segmentation model to retrieve a silhouette mask of a clothed human. Pixel alignment loss is employed during inference time to refine the SMPL parameters by fitting the projected SMPL silhouette to the “ground-truth” clothed silhouette mask. Our method differs from ICON in several ways: - Our methodology adopts a regression approach to directly predict pixel-aligned mesh. Pixel alignment loss is exclusively employed during the training phase to supervise the ground-truth and predicted part segmentation map. This offers a notable speed-up over ICON's optimization procedure.
- Additionally, our approach circumvents the need for an external pretrained network dedicated to clothed silhouette mask prediction. The fitting results of ICON [2] depend on the accuracy of the segmentation model, which remains limited in occlusion scenarios. - We have found that a differentiable part segmentation map holds a higher efficacy compared to normal silhouette supervision. This encourages learning of correct prediction of body part and silhouette, even in instances of object or self-occlusion. **Q3: "instances of the method's failure during testing"** **A3:** Thank you for the suggestion. Please refer to Figure 3 in Rebuttal document. [1] Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision. CVPR 2019 [2] ICON: Implicit Clothed humans Obtained from Normals. CVPR 2022 Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer! --- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear reviewer, We would like to follow up to check if your concerns have been addressed. In the previous response, we have made the following updates/clarification: - Following your precious advice to include more literature on use of contrastive setting (Q1), we have added extra discussions to the Related Works sections and provided clarification on how our module differs from previous work. - Regarding the difference in pixel alignment compared to ICON (Q2), we have outlined the key distinctions. - We have also added visual examples of the method’s failure cases (Q3) in Figure 3 of `Rebuttal.pdf`. We are happy to answer further questions.
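For context on the supervised contrastive objective of Khosla et al. [18] referenced in A1, here is a generic pure-Python sketch on pre-normalized embeddings. The function name and style are illustrative only; the authors' actual pose representation and positive-sampling choices described above are not reproduced here.

```python
import math

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss on L2-normalized embedding vectors.

    For each anchor i, samples sharing its label are positives; the loss
    averages -log softmax similarity of each positive against all
    non-anchor samples (SupCon style).
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    total, anchors = 0.0, 0
    for i, zi in enumerate(embeddings):
        pos = [j for j in range(len(embeddings))
               if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # anchors without positives are skipped
        denom = sum(math.exp(dot(zi, embeddings[a]) / temperature)
                    for a in range(len(embeddings)) if a != i)
        total += sum(-math.log(math.exp(dot(zi, embeddings[p]) / temperature)
                               / denom) for p in pos) / len(pos)
        anchors += 1
    return total / max(anchors, 1)
```

Embeddings of the same subject under different augmentations (same label) are pulled together; if the labels instead pair dissimilar embeddings, the loss grows, which is the property the rebuttal's pose-similarity visualization probes.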
Summary: This paper addresses the task of whole-body pose and shape estimation, including human mesh, hand gestures, and facial expressions, from monocular images. The author identifies the impact of predicted bounding box quality on the accuracy and reliability of existing methods. Based on this observation, a novel framework called RoboSMPLX is proposed to enhance robustness through three modules: a localization module, a contrastive feature extraction module, and a pixel alignment module. Comprehensive experiments are conducted to demonstrate the effectiveness of the framework. Strengths: 1. Overall, the paper is of high quality, with clear motivation and well-organized sections. It is easy to follow and presents novel modules that address existing limitations. The paper includes comprehensive experiments and provides sufficient visualizations of the generated mesh. The author's contributions are highly appreciated. 2. The empirical study on the impact of subject localization, feature extraction, and pixel alignment, as stated in lines 91-100, is crucial in breaking through barriers. The paper provides extensive qualitative and quantitative results, both in the main paper and supplementary material, making it solid and convincing. 3. The three modules work together effectively, aligning intermediate representations such as pose, dense landmarks, segmentation, and silhouette masks. This enhances robustness through proper data augmentation techniques. 4. The proposed framework consistently achieves impressive results for various tasks across multiple datasets, demonstrating superior performance in whole-body pose and shape estimation. Weaknesses: 1. Since the author does not mention code release, it would be beneficial to release the code to contribute to the research community. 2. The paper lacks a discussion on computational complexity and inference speed, which are important evaluation metrics, especially for practical deployment. 
I suggest reporting total parameters, FLOPs, and fps with a detailed discussion to strengthen the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Although the proposed framework achieves impressive performance, the overall framework and training strategy is somewhat complex. Simplifying the framework without sacrificing performance would be beneficial. 2. Currently, the proposed framework only allows for whole-body estimation from a single image. Future work could focus on video-based estimation to further improve robustness, alleviate depth ambiguity, and enhance temporal smoothness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging that we present novel modules, and recognising that our extensive qualitative and quantitative results are solid and convincing. We will polish the paper and add the clarifications below in the revised version. Below we would like to provide point-to-point responses to address all the raised questions: **Q1: "it would be beneficial to release the code to contribute to the research community."** **A1:** Thanks for your suggestion. We plan to release code here at https://github.com/robosmplx/RoboSMPLX/ when the paper is accepted. **Q2: "reporting total parameters, FLOPs, and fps with a detailed discussion to strengthen the paper"** **A2:** Please refer to our response to General Concerns 2. **Q3: "the overall framework and training strategy is somewhat complex. Simplifying the framework without sacrificing performance would be beneficial"** **A3:** Thanks for this great suggestion. Our current goal is to enhance the model robustness. How to simplify the framework without a performance drop will be our future work. **Q4: "Future work could focus on video-based estimation to further improve robustness, alleviate depth ambiguity, and enhance temporal smoothness"** **A4:** Thanks for this great suggestion. This is indeed an interesting and important direction, and we will explore this in future work. Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer! --- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear reviewer, We would like to follow up to check if your concerns have been addressed. In the previous response, we have made the following updates/clarification: - Following your advice on code release (Q1), we have created a GitHub for the project and plan to release the code there when the paper is accepted. - Regarding the total parameters, FLOPs, and fps of the various methods, we have added them to General Concerns 2.
- For your valuable suggestions to improve upon this project (Q3 and Q4), we intend to explore them in future work. We are happy to answer further questions.
Summary: The paper introduces RoboSMPLX, a method for whole-body 3d human pose and shape estimation from a monocular image. Motivated by the poor robustness of existing methods, especially w.r.t. the quality of bounding boxes, three components are proposed: 1) a localization module, 2) a contrastive feature extraction module, and 3) a pixel alignment module. The robustness of existing methods is evaluated by applying different types of image, location and pose augmentations. By using these augmentations in a contrastive setting during training of RoboSMPLX, RoboSMPLX achieves higher robustness when evaluated on these augmentations. Additionally, the performance of Hand, Body, Face and Wholebody reconstructions are evaluated on typical benchmark datasets. Strengths: Increasing the robustness of whole-body human pose and shape estimation methods is an important topic. The problem is well motivated, and the evaluation of the performance of existing methods under different augmentations is interesting. Weaknesses: I have the following main concerns about the paper: 1) Evaluation seems inconsistent and incomplete. It is unclear why some competitors are omitted in different experiments. For example, the performance of PyMAF[54] is reported in Table 1 and 2, but not in Table 3 and 6, even though [54] reports the relevant numbers. OSX[25] is also missing in Table 2. The competitors outperform RoboSMPLX in the experiments where they are omitted. 2) Robustness of RoboSMPLX. The robustness evaluation seems not realistic. It is not surprising that RoboSMPLX shows better performance under the different augmentations the method is trained on. However, most relevant is the performance on in-the-wild scenarios. This is not properly evaluated for the whole-body task. On the contrary, RoboSMPLX performs worse on AGORA than OSX. Although AGORA is a synthetic dataset, it contains realistic and diverse scenes with multiple persons and occlusions. 
On the other hand, EHF only consists of 100 images of a single subject recorded in a mocap studio. It is therefore questionable whether RoboSMPLX really succeeds in being more robust than its competitors. To better assess the in-the-wild performance, the methods could for example be evaluated on RICH (CVPR'22) (https://rich.is.tue.mpg.de/) or on BEDLAM (CVPR'23) (https://bedlam.is.tue.mpg.de/). Additionally, to show the effectiveness of the localization module, the accuracy of the predicted bounding boxes could be evaluated. Further concerns: - The contrastive module is not well motivated. Why use the contrastive setting with all its overhead, instead of simply using the different transformations as data augmentation? Especially since adding positive samples only has small influence. - Why are the ablation studies in Table 11 not conducted with the whole body model? Mean+std of multiple runs should be reported since the differences are so small. - Why is the pixel alignment not evaluated quantitatively? This would be more meaningful instead of only showing some examples e.g. in Figure 13. - The experiments section should be revised. It is difficult to read due to the many experiments and referrals to the appendix. - The novelty is limited since the augmentations and the pixel alignment module have already been used extensively in the literature for estimating human pose and shape, e.g. [35] and [45]. - In Table 10, 7.18 PA-PVE should be in bold, and 14.38 MPJPE in Table 11. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The following requires clarification: - What is the estimation error reported in Table 8? - Table 11: what do DR54 and KS stand for? What is the row named "joints"? - Regarding line 204: why should the model produce consistent representations for the same subject under different poses? - For how long is the model trained, and on what hardware? What's the inference time? - How are the joints for the body, hand and face determined?
What does the 137-joint body skeleton look like? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: More failure cases could be shown in the paper. Also in comparisons to the competitors. The potential societal impact is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful and constructive feedback. We will polish the paper, add the experiments and make the clarifications in the revised version. **Q1: "Evaluation seems inconsistent and incomplete. It is unclear why some competitors are omitted in different experiments."** **A1:** **Our evaluation methodology has been systematically designed based on the underlying models of the compared methods.** Specifically, the body network was trained on datasets tailored for SMPL and hence was benchmarked against other methods utilizing the SMPL framework, including PyMAF [54]. Conversely, OS-X, which is rooted in the SMPL-X framework, was compared solely with its SMPL-X counterparts, such as Expose, Hand4Whole and PIXIE in Table 6 and not Table 2. For comparisons with PyMAF-X, please refer to **Response 1 A2** **Q2: "Additionally, to show the effectiveness of the localization module, the accuracy of the predicted bounding boxes could be evaluated"** **A2:** Thank you for the suggestion. We have evaluated the accuracy of predicted part bounding boxes on the EHF test set. We have employed the Intersection over Union (IoU) as our metric (Please refer to Figure and Table 1 in the Rebuttal.pdf). Our method obtains the highest IoU scores in Table 1. **Q3: "Why are the ablation studies in Table 11 not conducted with the whole body model?"** **A3:** **The primary objective of the ablation study was to demonstrate the efficacy of individual modules.** Given that wholebody network contains three subnetworks (body, hand, face), and each subnetwork encompasses all three modules, **it was deemed more methodologically sound to isolate and assess the impact of each module within a distinct subnetwork.** This approach ensures a clearer understanding of the contribution of each module without the potential confounding effects of evaluating them within the entirety of the whole body model.
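For reference, the Intersection over Union metric used in A2 for part bounding boxes is standard; a minimal sketch assuming corner-format boxes (this is not the authors' evaluation code):

```python
def bbox_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

IoU is 1.0 for a perfect box, 0.0 for disjoint boxes, and penalizes both over- and under-sized detections, which is why it is a natural score for the localization module's predicted part boxes.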
**Q4: "The contrastive module is not well motivated. Why use the contrastive setting with all its overhead, instead of simply using the different transformations as data augmentation?"** **A4:** Our use of the contrastive module is motivated by the need to maintain the same pose feature across different augmentations, to avoid the domain shift caused by strong augmentation alone. The experiments show that training with strong augmentation alone can lead to performance deterioration, while combining it with the contrastive loss consistently results in minimal errors (Table 2 in Rebuttal.pdf). The contrastive feature extraction module is used only during training, not at inference, and therefore imposes no computational overhead at inference time. Contrastive learning is a successful strategy in other CV areas [43, 58], and we are the first to introduce it for pose in Human Pose and Shape Estimation. To illustrate this further, we visualized the pose similarity for augmented samples. The findings reveal that augmented samples are perceived as dissimilar in both Model 0 and Model 1 (Table 3 in Rebuttal.pdf). Yet, when examining Model 2, a marked increase in embedding similarity is evident, underscoring the advantage of the contrastive approach. **Q5: "Why is the pixel alignment not evaluated quantitatively?"** **A5:** In assessing the pixel alignment, it's crucial to recognize that standard metrics like PVE and MPJPE errors are computed post root alignment. They also do not measure the accuracy of mesh projection within the image space. Presently, there is no metric tailored to gauge the degree of pixel alignment of a mesh in this context. As such, our study primarily offers a qualitative analysis. 
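The embedding-similarity visualization above rests on pairwise similarity between pose embeddings; a minimal sketch using cosine similarity (the toy vectors are hypothetical stand-ins, not our actual pose features):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# With contrastive training, two augmented views of the same pose should
# embed closer together than views of different poses (toy vectors).
same_pose_a = [1.0, 0.9, 0.1]
same_pose_b = [0.9, 1.0, 0.2]   # augmented view of the same pose
other_pose = [0.1, 0.2, 1.0]
```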
Nevertheless, in an effort to address your concern and introduce some quantitative results, we measure the errors between the projected 2D vertices of the ground-truth mesh and the predicted mesh (please refer to Figure 2 and Table 3 in Rebuttal.pdf). From Table 3, it is evident that omitting the pixel alignment module leads to suboptimal outcomes. In contrast, our pixel alignment strategy, leveraging rendered segmentation maps, showcases better performance than using vertex loss as supervision. **Q6: "The experiments section should be revised."** **A6:** Thanks for the suggestion. If we have the opportunity for revision, we will include important experiments in the main text as much as possible, while maintaining the completeness of information without frequent reference to supplementary materials. **Q7: "In Table 10, 7.18 PA-PVE should be in bold, and 14.38 MPJPE in Table 11"** **A7:** Thanks for pointing that out, we will fix the bolding in Table 11. **Q8: "What is the estimation error reported in Table 8?"** **A8:** The estimation error in Table 8 refers to the difference between the top-1 retrieved pose (COCO-train) and the query pose (COCO-test). Thanks for pointing this out, we will edit the caption to make it clearer. **Q9: "Regarding line 204: why should the model produce consistent representations for the same subject under different poses?"** **A9:** L204: “By minimizing this loss, the model can produce consistent representations for the same subject, even when presented with different augmentations”. To clarify, this means that the model should produce consistent representations for the same subject with the same pose under different augmentations. **Q10: "For how long is the model trained on what hardware? What's the inference time?"** **A10:** Our model was trained utilizing a cluster of 8xTesla V100-SXM2-32GB GPUs. Specific to the training duration, the hand models required approximately one day, whereas the body and face models necessitated two days. 
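As a concrete sketch of the projected-vertex-error metric introduced in the response to Q5 above (pure Python; vertex lists are assumed to be already projected to 2D pixel coordinates and in one-to-one correspondence):

```python
import math

def projected_vertex_error(gt_2d, pred_2d):
    """Mean Euclidean distance (pixels) between matched projected 2D vertices."""
    assert len(gt_2d) == len(pred_2d), "vertex lists must be in correspondence"
    dists = [math.dist(g, p) for g, p in zip(gt_2d, pred_2d)]
    return sum(dists) / len(dists)
```

Unlike PA-PVE, no rigid alignment is applied first, so the measure is sensitive to exactly the scale and translation errors that pixel alignment targets.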
The joint training process was completed within a day. For details on inference time, please refer to the section labeled "General Concerns 2." Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer! --- Rebuttal Comment 1.1: Comment: **Q11: "How are the joints for the body, hand and face determined? How does the 137 joints body skeleton look like?"** **A11:** We follow the same setting used by H4W and OSX. The whole-body skeleton comprises 137 joints, broken down as follows: 25 body joints, 40 hand joints, and 72 face joints. The 25 body joints adhere to the SMPL body convention. The joints determined by the MANO model consist of 21 for each hand. However, we omitted the wrist joint as it is already accounted for in the body, resulting in a total of 40 hand joints (calculated as (21-1) x 2). The 72 face joints are derived from the FLAME convention, with the exclusion of the neck joint, which is encompassed within the body convention, adjusting the count from 73 to 72. **Q12: "More failure cases could be shown in the paper. Also in comparisons to the competitors."** **A12:** Thank you for the suggestion. Please refer to Figure 3 in the Rebuttal document. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We would like to follow up to check if your concerns have been addressed. In the previous response, we have made the following updates/clarifications: - Regarding the evaluation inconsistency and completeness (Q1), we provided explanations of why certain methods are compared against others, given the underlying frameworks. Regarding your advice to show the effectiveness of the localization module (Q2), we have evaluated the accuracy of the predicted bounding boxes in `Rebuttal.pdf`. - Regarding the use of data augmentation instead of the contrastive setting (Q4), we demonstrated that the Contrastive Module helps to avoid domain shift, improves performance, and is only used during training. 
- Regarding your concern for the quantitative evaluation of pixel alignment (Q5), we have added projected vertex error as an additional metric. - Following your advice, we plan to refine the experiment section (Q6), correct table formatting errors (Q7), and clarify the meaning of the estimation error (Q8) in the main text. - We have also provided extra clarification regarding why the ablation study was not conducted with the whole-body model (Q3), the consistency of representations across varied augmentations (Q9), and the training and inference specifics (Q10). We are happy to answer further questions. Title: Follow-up
Summary: This paper proposes a method to improve the robustness of whole-body pose and shape estimation, which mainly contains three components: 1) localization module to give the network awareness of location and semantic part; 2) contrastive feature extraction module to predict consistent representations under different augmentations; 3) pixel-alignment module to ensure alignment between projected mesh and 2d evidence. Strengths: 1) This paper is well-written and easy to understand 2) The topic of robustness of pose and shape estimation is meaningful and the proposed method is effective against augmentations 3) The experiments are comprehensive and visualizations are very nice to help understand the method Weaknesses: 1) Since the main topic is the robustness of whole body pose and shape estimation, the literature review of robustness in vision tasks, especially in pose estimation tasks should be included. For example, [1][2]. [1] Bai, Yutong, et al. "CoKe: Contrastive Learning for Robust Keypoint Detection." [2] Zhang, Yumeng, et al. "Improving robustness for pose estimation via stable heatmap regression." 2) I wonder why the target task is whole-body pose and shape estimation rather than body/hand pose estimation? 3) Why does ‘baseline’ only appear in Table 2 but not in Tables 1 and 3? Also, the definition of baseline is not clear. The definition of baseline is not clear either in Table 9 4) Where is the result of robustness of body subnetwork against augmentations? 5) Where are the details of Table 7? 6) Eq,2 typo Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: "literature review of robustness in vision tasks, especially in pose estimation tasks should be included"** **A1:** Thank you for your suggestion of a comprehensive literature review. We would like to emphasize that **prior works on 2D pose estimation are different from our 3D pose and shape estimation tasks. They focus on 2D/3D joint locations while we target the prediction of pose, shape and camera parameters for image subjects.** Below is our detailed discussion, which will be added to the revision. “Tackling robustness in vision tasks has motivated extensive work. Within the domain of human pose estimation, strategies including data augmentation, architectural innovations, and diverse training strategies have been actively explored. Specifically, (1) "AdvMix" [3] stands out by enhancing the robustness of pose estimation models using data augmentation. This method combines adversarial augmentation with knowledge distillation, where a generative framework mixes various corrupted images to confuse a pose estimator. Through such adversarial training, the estimator is conditioned to learn from harder samples, making it more robust. (2) Architectural modifications have been explored as a means to increase robustness. For instance, the work in [2] delineates a unique heatmap regression approach, encompassing three core components: a row-column correlation layer, a highly differentiated heatmap regression, and a maximum stability training protocol. This strategy is devised to buffer the network against minute perturbations. (3) Contrastive learning has also been applied to improve robustness. Bai et al. [1] introduced “CoKe”, a contrastive learning framework tailored for keypoint detection. By detecting each keypoint independently, the method demonstrates robustness, especially against occlusions, compared to more conventional approaches.” 
**Q2: "I wonder why the target task is whole-body pose and shape estimation rather than body/hand pose estimation?"** **A2:** 3D pose and shape estimation and 2D/3D pose estimation are fundamentally two different tasks. **The former involves determining the "pose" and "shape" parameters of a statistical human body model, whereas the latter concentrates on predicting the locations of 2D/3D keypoints within an image.** We follow existing works [16, 21, 19, 23] in the task definition of human pose and shape estimation. 3D whole-body pose and shape estimation allows us to retrieve a human mesh from the predicted pose and shape parameters, opening a myriad of applications across diverse domains such as computer graphics and augmented/virtual reality. It has been a popular topic attracting substantial attention [16, 21, 3, 15, 23, 8, 20, 19, 9, 5, 56, 42, 54]. **Q3: "Why does ‘baseline’ only appear in Table 2 but not in Tables 1 and 3? Also, the definition of baseline is not clear. The definition of baseline is not clear either in Table 9"** **A3:** Sorry for the confusion. Baseline refers to HMR trained with the same datasets and backbone; the extra modules (Localization, Contrastive FE and Pixel Alignment Modules) are then added on top of this baseline. For Table 1, the Baseline is HMR, which is included. For Table 3, we have updated the baseline below.

Updated Table 3. Evaluation of the Face subnetwork.

| Method | LQ Mean(mm) &darr; | HQ Mean(mm) &darr; |
| ----------- | ------------- | ------------- |
| ExPose [5] | 2.27 | 2.42 |
| ExPose † | 2.46 | 2.38 |
| HMR | 2.18 | 2.11 |
| HMR† | 2.31 | 2.27 |
| RoboSMPLX | 2.12 | 2.08 |
| RoboSMPLX † | 2.12 | 2.1 |

**Q4: "Where is the result of robustness of body subnetwork against augmentations?"** **A4:** Thanks for the great suggestion. Please find the result in the table below. Similar to the findings in Table 11, we conclude that training with strong augmentations can cause domain shift. This was also found in prior works [15, 19, 35]. 
Table 4 of PARE [19], Table 4 of HMR-EFT [15], and Table 8 of [35] also show that adding crop augmentation can harm performance on existing benchmarks.

Ablation of different modules on the Body subnetwork. Results are trained on EFT-COCO and tested on the 3DPW test set.

| | PA-MPJPE &darr; | MPJPE &darr;|
| -------------------------- | -------- | ----- |
| Baseline (HMR) | 60.8 | 96.2 |
| Baseline (HMR) + strongaug | 63.2 | 101.5 |

**Q5: "Where are the details of Table 7?"** **A5:** Thanks for pointing this out. The reference on Line 288 previously pointed to Table 18 by mistake; we have corrected it to point to Table 7. We hope this helps to clarify. **Q6: "Eq,2 typo"** **A6:** Could the reviewer clarify what "Eq,2 typo" refers to? Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer! --- Rebuttal Comment 1.1: Title: Clarification of Q2 Comment: Dear author, I want to have further clarification on my second question. My question is why the target task is whole-body pose and shape estimation rather than body/hand pose and shape estimation. Let's call this task mesh recovery. I understand that mesh recovery tasks are very different from 2D/3D keypoint detection. My question is usually whole-body is an advanced task compared with body or hand mesh recovery and there are many more existing works focused on body or hand mesh recovery than whole-body. Also, the challenges of whole-body mesh recovery are usually low resolution of hands or the rotation of the wrist. When coming to robustness, whole-body tasks seems the same as purely body or hand tasks for me. Therefore, it would be more meaningful to first study the robustness of body mesh recovery since there are many more competitive works to compare to show the superiority of your method. The proposed method which should be applicable to all mesh recovery tasks also seems less convincing in this way. 
--- Reply to Comment 1.1.1: Title: Response to Clarification of Q2 Comment: Dear reviewer, Thank you for raising valid concerns about our choice of focusing on whole-body mesh recovery. Below we provide explanations that address your queries. **Q1: why the target task is whole-body pose and shape estimation rather than body/hand pose and shape estimation** **A1:** Compared to body mesh recovery, **whole-body mesh recovery is a more prevalent and important task which offers broader applications**. It allows for the accurate modeling of hand gestures and facial expressions, making it particularly advantageous for intricate tasks such as clothed human reconstruction, editing, and animations that utilize SMPL-X predictions. **Q2: the challenges of whole-body mesh recovery are usually low resolution of hands or the rotation of the wrist. When coming to robustness, whole-body tasks seems the same as purely body or hand tasks for me** **A2:** While we acknowledge that whole-body mesh recovery inherently encapsulates the challenges of individual body parts like hands, face, and the body itself, **whole-body mesh recovery presents unique robustness problems, different from body/hand mesh recovery.** Beyond the common challenges like the “low resolution of hands [25] or rotation of wrist [32]”, **the accuracy of the face and hand part crops fed to their respective subnetworks is a significant factor.** Two main concerns are the inaccurate localization of part crops by the whole-body network and the robustness of hand and face subnetworks when handling inaccurate part crops. Our approach addresses both these challenges. It's worth noting that in dedicated hand or face mesh recovery, images primarily showcase the hand or face at the center. In contrast, this is often not the case in whole-body estimation (part crops of existing methods are illustrated in Figure 2). The robustness issue is more pronounced and relevant in whole-body mesh recovery. 
This motivates our study to improve the robustness in every subnetwork of the whole-body pipeline. **Q3: it would be more meaningful to first study the robustness of body mesh recovery since there are many more competitive works to compare to show the superiority of your method** **A3:** **We demonstrate the efficacy of our proposed modules, namely Localization, Contrastive Feature Extraction, and Pixel Alignment on body mesh recovery** (Table 2, Figure 15). In addition, the modules are also effective for face (Table 3, Table 5, Figure 15) and hand mesh recovery (Table 1, Table 4, Figure 14). Their performance outpaces existing methods (as seen in Tables 1, 2, 3), and they exhibit robustness under various positional augmentations (evidenced by Tables 4 and 5). Qualitative demonstrations of this can be found in Figures 14 and 15. **Existing solutions on robustness for body/hand mesh recovery do not work as well.** Previous efforts for robustness in body/hand mesh recovery often relied heavily on data augmentation, leading to domain shifts as seen in [15, 19, 35]. Notably, some found that excessive crop augmentation can degrade benchmark performance (e.g., Tables 4 of PARE [19] and HMR-EFT [15], and Table 8 of [35]). Contrarily, our method leverages supervised contrastive learning, proven to effectively mitigate errors while maintaining robustness. We hope this helps to clarify why we choose whole-body mesh recovery instead of body/hand mesh recovery. We are happy to answer further questions.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for your constructive feedback and recognition of this work, especially for acknowledging that the problem is well motivated and meaningful [p9Xq, bkwo, go2c, 4HYU, Uf3X], the experiments are comprehensive [go2c, 4HYU, Uf3X], and the proposed modules are effective [Uf3X, 4HYU, go2c]. We will follow your suggestions to polish the paper, add the experiments and make the clarifications in the revised version. **General concerns:** **1. Clarification about the novelty of the proposed method** Below, we summarize the novelty of our approach and differences from previous works: Our research aims to enhance the robustness of whole-body pose and shape estimation. Notably, many current methods face challenges in maintaining performance under the augmentations commonly observed in complex in-the-wild scenarios. We posit that the accuracy and reliability of such models are influenced by the quality of the predicted bounding box, especially concerning the scale and alignment of individual body components. In addressing these issues, we introduce three novel modules and empirically validate their efficacy across body, hand, face and whole-body models. 1. **Localization Module**: This module incorporates both sparse and dense prediction branches, ensuring the model is aware of the location and semantics of the subject’s parts in the image. The learned location features that encode information about the sparse and dense representations are helpful in recovering relative rotations, shape and camera parameters. Our work is distinct from previous SMPL-X methods [5, 9] that do not use any location information. We also demonstrate that our method is more effective than previous methods that only use joint features [32, 25] or keypoints [56] for recovering pose but neglect the recovery of shape and camera parameters. 
In the whole-body estimation pipeline, this location information plays a pivotal role in localizing bounding boxes for hand and face subnetworks, for which previous methods require a separate bounding-box predictor. 2. **Contrastive Module**: Merely leveraging strong data augmentation can introduce domain shifts. This was also found in prior works [15, 19, 35]: Table 4 of PARE [19], Table 4 of HMR-EFT [15], and Table 8 of [35] show that adding crop augmentation can harm performance on existing benchmarks. Our approach integrates supervised contrastive learning, utilizing the regressed keypoints from the mesh as the representation. This encourages the network to learn to produce similar embeddings for samples with the same pose under different augmentations. 3. **Pixel Alignment Module**: Minor deviations in scale or positional translations often result in visible misalignments in the projected mesh, indicative of errors in camera parameter estimations. While prior work relied on the supervision of projected keypoints for learning camera parameters, we introduce dense supervision of the projected mesh using part-segmentation maps through differentiable rendering. This helps to ensure accurate pixel alignment of outputs. Notably, the combination of part-segmentation supervision and differentiable rendering has not been applied in whole-body pose and shape estimation. **2. Run-time / inference speed** We measure the model size, computational complexity and inference time for different models including ours, as shown in the table below. Although our framework has a sophisticated design, its inference speed is comparable to the others', validating its efficiency. Table 1: These results are tested on an RTX3090. FLOPs refers to the total number of floating-point operations required for a single forward pass. The higher the FLOPs, the slower the model and hence the lower the throughput. Inference time is obtained by averaging across 100 runs. 
| | Total parameters (M) | GFLOPs | Inference time (s) |
| --------------------- | -------------------- | -------------- | ------------------ |
| ExPose | 26.06 | 21.04 | 0.1330 ± 0.0050 |
| PIXIE | 109.67 | 24.23 | 0.1670 ± 0.0065 |
| Hand4Whole | 77.84 | 17.98 | 0.0709 ± 0.0022 |
| OSX | 422.52 | 83.77 | 0.1998 ± 0.0028 |
| PyMAF-X (gt H/F bbox) | 205.93 | 33.41 | 0.2194 ± 0.0027 |
| PyMAF-X + OpenPifpaf | 205.93 + 115.0 | 33.41 + 120.52 | 0.2727 ± 0.0136 |
| RoboSMPLX | 120.68 | 29.66 | 0.2008 ± 0.0220 |

Pdf: /pdf/6529ca7b1d42957dc1f996f08c37db0ff06a9f32.pdf
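The inference times above follow an average-over-runs protocol; a minimal sketch of such a timing harness (the lambda below is a stand-in workload, not one of the actual models, and the warm-up count is an assumption):

```python
import statistics
import time

def time_inference(fn, n_runs=100, warmup=5):
    """Mean and standard deviation of fn()'s wall-clock time over n_runs."""
    for _ in range(warmup):
        fn()  # warm-up calls, excluded from the statistics
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# Stand-in workload; a real benchmark would time a model's forward pass.
mean_s, std_s = time_inference(lambda: sum(range(10_000)))
```

Warming up first matters in practice (caches, lazy initialization, GPU kernel compilation), otherwise the first runs dominate the average.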
NeurIPS_2023_submissions_huggingface
2023
Summary: This submission proposes RoboSMPLX for whole-body pose and shape estimation. RoboSMPLX incorporates three modules, including a localization module, a contrastive feature extraction module, and a pixel alignment module. The localization module is aware of the location and semantics of body parts so that cropping can be more accurate. The contrastive feature extraction module incorporates a pose- and shape-aware contrastive loss, along with positive samples, for robust feature extraction under augmentations. The pixel alignment module applies differentiable rendering to enforce the re-projection alignment of the mesh. Strengths: The motivation of the proposed method is clear: using a localization module to improve the robustness of the cropping. Overall, the proposed method technically makes sense and can be easily reproduced. The experiments also show comparable or better performances with previous methods. Weaknesses: The most severe weakness is the novelty of the proposed method and the lack of comparisons with recent state-of-the-art solutions. To be more specific, there are several major issues: - The proposed localization and pixel alignment modules have very limited contributions to the community, as these operations are commonly used in this field. For instance, similar localization strategies are used in [39,56], and differentiable modules are used in [i]. [i] SK Dwivedi, N Athanasiou, M Kocabas, MJ Black, Learning to regress bodies from images using differentiable semantic rendering, ICCV 2021. - There is a lack of discussion and comparison with the recent state-of-the-art method PyMAF-X [54]. An in-depth discussion and comparison with [54] is necessary to support the claim of the proposed method. It is recommended to include [54] in the Related Work section, compare results with [54] in Tables 3,4,5, and Table 7, and show qualitative results of the proposed method and [54] for comprehensive comparisons. 
- This paper claims robust performances of whole-body pose and shape estimation, but no video result is provided in the supplementary materials. To convincingly demonstrate the robustness of the proposed method, it is also recommended to include a side-by-side comparison video [54]. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How about the run time of the proposed method? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations of this paper are mainly the novelty and the lack of comprehensive comparisons with recent state-of-the-art solutions. Given such clear defects in the experimental results, I rate this paper as the one below the acceptance bar. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging that the motivation of the proposed method is clear. We will polish the paper, add the experiments and clarify the points below in the revised version. **Q1: "The proposed localization and pixel alignment modules have very limited contributions to the community, as these operations are commonly used in this field."** **A1:** Our method is significantly different from the mentioned works [39, 56, i]. Specifically, Expose [39] directly regresses pose and shape parameters from the image **without taking into account any location information.** For [56], its IK modules (BodyIKNet and HandIKNet) focus **solely on keypoint data** for deriving pose parameters, and the FaceNet operates on a direct regression from the image. In contrast, our Localization module effectively captures **both sparse (through a 2.5D heatmap for keypoint localization) and dense (through part segmentation) predictions for each human body part (body, face and hand).** The encoded location information is then used for prediction of pose, shape and camera parameters. The ablation study in Table 7 shows the superiority of our design over [56]. Our method is technically different from [i] in several primary aspects: 1. [i] relies on an existing clothing segmentation model to retrieve a clothing segmentation mask, while we capitalize on the projected ground-truth mesh to source the part-segmentation mask, which is more cost-efficient as many SMPL-X datasets contain ground-truth whole-body parameters. 2. We have found that a differentiable part-segmentation map is more effective than plain silhouette supervision. This encourages correct prediction of body parts and silhouette, even in instances of object or self-occlusion. 3. The module we introduce aims to bridge the disparity in pixel alignment observed in numerous SMPL-X models. 
Moreover, the integration of this module predominantly facilitates the accurate determination of camera parameters and enhances pixel alignment. Even marginal variations in camera parameters can induce significant alignment shifts. **Q2a: "There is a lack of discussion and comparison with the recent state-of-the-art method PyMAF-X [54]"** **A2a:** Thanks for the suggestion. Below we provide detailed discussions and comparisons with PyMAF-X. We will also include [54] in the Related Work section. 1. **Acquisition of part bounding boxes**: PyMAF-X relies on an off-the-shelf whole-body pose estimation model (OpenPifPaf) to obtain whole-body 2D keypoints of the person in the image, from which part crops are derived. During the EHF evaluation, PyMAF-X employs ground-truth hand and face bounding boxes. In contrast, our method and other works (ExPose [5], PIXIE [9], Hand4Whole [32], OS-X [25]) encompass a self-integrated module designed to extract hand and face bounding boxes directly from the image. So it is unfair to directly compare these works with PyMAF-X. 2. **Operational efficiency**: OpenPifPaf imposes extra computation during inference, making PyMAF-X less efficient than our method. Please refer to Table 1 in General Comments 2. 3. **Network architecture**: Due to the diverse backbone and dataset combinations utilized, it is challenging for us to make whole-body network comparisons. In our paper (Table 1), we focus on contrasting RoboSMPLX’s Hand subnetwork with PyMAF’s Hand subnetwork. Both networks are trained and evaluated on the same backbone and dataset, FreiHAND. In this context, our method surpasses PyMAF. 4. **Performance**: On the EHF metrics, our performance lags behind PyMAF-X. This could potentially arise from variations in the training datasets employed. 
While the training pipeline of the body network for PyMAF-X has been disclosed, the training specifics for hands and face and the methodology to integrate the hand, face, and body modules in PyMAF-X remain undisclosed. We intend to replicate it with similar training datasets in the future. **Q2b: "recommended to compare results with [54] in Tables 3,4,5, and Table 7, and show qualitative results of the proposed method and [54] for comprehensive comparisons."** **A2b:** For comparisons on Tables 3, 4, 5, PyMAF-X did not provide the [pre-trained hand and face models](https://cloud.tsinghua.edu.cn/d/3bc20811a93b488b99a9/?p=%2Fdata%2Fpretrained_model&mode=list) to be evaluated on the respective FreiHAND and Stirling benchmarks. In addition, the face model of PyMAF-X was trained on [VGGFace2](https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/), which is no longer publicly available. Therefore, we were unable to reproduce a result with similar training configurations. For Table 7, when evaluating the whole body, the hand and face evaluations are affected by the accuracy of the detected part bounding box. In PyMAF-X’s evaluation on the EHF dataset, the ground-truth part bounding boxes are fed in for evaluation. Therefore this poses an unfair comparison. If we were to obtain the part bounding boxes from OpenPifPaf, we would be indirectly evaluating OpenPifPaf rather than PyMAF-X, as they do not have any module for locating the part bounding boxes. **Q3: "include a side-by-side comparison video"** **A3:** We plan to include additional visualizations here https://github.com/robosmplx/RoboSMPLX/. **Q4: "run time of the proposed method"** **A4:** Please refer to our response to General Concerns 2. Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer! [i] SK Dwivedi, N Athanasiou, M Kocabas, MJ Black, Learning to regress bodies from images using differentiable semantic rendering, ICCV 2021. 
--- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear reviewer, We would like to follow up to check if your concerns have been addressed. In the previous response, we have made the following updates/clarifications: - Regarding your concern on the novelty of the proposed localization and pixel alignment modules (Q1), we clarified the uniqueness of our modules in contrast to other works. - Regarding your advice to compare to PyMAF-X (Q2), we discussed and compared our approach with PyMAF-X in detail and plan to add it to the "Related Work" section. - Regarding your advice to include a side-by-side comparison video (Q3), we've uploaded comparison videos. Notably, PyMAF-X's performance is influenced by OpenPifPaf's predictions, evident in `pymafx_openpifpaf.mp4`. We observed improvements with a more robust pose estimator in `pymafx_mmpose.mp4`. Our model's results are in `robosmplx.mp4`. - Regarding the run time of the proposed method (Q4), we have addressed it in General Concerns 2. We are happy to answer further questions.
null
null
null
null
null
null
Deep Recurrent Optimal Stopping
Accept (poster)
Summary: The paper purports to develop a framework of optimal stopping that generalizes previous approaches by incorporating non-Markovian settings and using a Bayesian network formulation. Strengths: The only strength of this paper, in this reviewer's opinion, is the fact that it tackles an important problem. Weaknesses: The paper is written in a manner that makes it very difficult to discern its contents. The presentation and the proofs are ridiculously long-winded even though they could easily be written in a much more succinct manner. Although the reformulation of the optimal stopping problem as a Bayesian network is interesting at first sight, I don't see how it is useful, unfortunately, especially given the fact that I have concerns about what they are even maximizing. See the next section for more concrete examples of the problems this paper has. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It isn’t clear why every finite stopping time (as defined in the literature as a measurable function $\tau \colon \Omega \to \mathbb{N}$ such that $\\{\tau \le n\\} \in \mathcal{F} _ n$ for all $n \in \mathbb N$; here and elsewhere in the review $(\mathcal{F}_n)_{n \in \mathbb N}$ is the appropriate filtration which makes the relevant stochastic processes adapted) can be written in the form of Definition 2.3. I don’t believe it is even true — for intuitively the same reason why there are stopping times that are not hitting times. This makes me doubt that the paper is solving the desired optimal stopping problem and is limited to a rather special case. 2. What does the notation $\arg \sup$ in Definition 2.4 mean? It isn’t immediately obvious that an optimal stopping time $\tau^*$ will exist, and therefore a small note showing this would be helpful. 
Perhaps a mention of the classical result that a finite optimal stopping time exists if and only if $\tau_0 := \inf \\{n \in \mathbb N : U_n = R_n\\} < \infty$ a.s., where $U_n := \text{ess sup} _ {\tau \in \mathcal{T} _ n} \mathbb{E}[R_\tau \mid \mathcal{F}_n]$ and $\mathcal{T}_n$ is the collection of all finite stopping times $\tau$ satisfying $\tau \ge n$, would make this immediate. (Note: it can be shown that $U_n = V_n$ as defined in the paper.) 3. As mentioned above, $V_j$ should be $$ V_j(\mathbf S_j) = \text{ess sup}_{\tau \in \mathcal{T}_j} \mathbb{E}[R_\tau \mid \mathcal{F}_j], $$ which is a *random variable* (see https://almostsuremath.com/2019/01/06/essential-suprema/ for the definition of essential supremum). This isn’t what equation (1) says. Even the Wald-Bellman equation as written in the paper says that $V_j(\mathbf S_j)$ is a random variable. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's concerns regarding the definitions in the paper. Our treatment is closely related to the approach in the following references. **[27]** A. N. Shiryaev, "Stochastic Disorder Problems" **[Poor]** H. Vincent Poor, "An Introduction to Signal Detection and Estimation" **[Fischer]** Fischer, Tom. “Stopping times are hitting times: a natural representation.” Statistics & Probability Letters (2011) We hope that the reviewer reconsiders the rating of the paper, since there were no issues other than clarifications w.r.t. the definitions. Specific responses are as follows. > The reviewer expressed doubts about the definition of the optimal stopping problem in the paper; specifically, the stated concerns are: **a)** *It isn’t clear why every finite stopping time can be written in the form of Definition 2.3.* **b)** *I don’t believe it is even true — for intuitively the same reason why there are stopping times that are not hitting times. This makes me doubt that the paper is solving the desired optimal stopping problem and is limited to a rather special case.* **Response:** We clarify that the formulation of the optimal stopping time in Definition 2.3 applies to the general discrete-time, finite-horizon setting and is not a special case. Specific responses to a) and b) are as follows: **a)** Definition 2.3 in our manuscript is the same as that in the classic textbook [Poor], "An Introduction to Signal Detection and Estimation" by H. Vincent Poor. Please see both page 137 and page 145, where the **policy stopping time** is defined as $N(\phi) = \min \\{ n \mid \phi_n (Y_1, Y_2, \cdots, Y_n) = 1 \\}$. Here $N(\phi)$ is the policy stopping time, $\phi$ is equivalent to our stopping policy $\varphi$, and $Y_1, Y_2, \cdots, Y_n$ is equivalent to ${\bf S}_j$ in our notation. **b)** As the reviewer remarks, hitting times are stopping times (by the Debut Theorem). 
However, on a technical note, stopping times can also be interpreted as hitting times **w.r.t. a "stopping process"**; see, for example, [Fischer], para. 2 on page 1: *"Astonishingly, it seems to be less widely taught (and maybe known) that the inverse is true as well: for any stopping time there exists an adapted stochastic process and a Borel measurable set such that the corresponding hitting time will be exactly this stopping time"*. > What does the notation in Definition 2.4 mean? It isn’t immediately obvious that an optimal stopping time will exist and therefore a small note showing this would be helpful. **Response:** The $\arg \sup$ notation for $\tau^*$ in Definition 2.4 simply denotes that the optimal stopping time (if one exists) satisfies $\mathbb{E} [ R_{\tau^*} ] = \sup_{\tau} \mathbb{E} [ R_{\tau} ]$. Existence and finiteness of $\tau^*$ is guaranteed if $\mathbb{E} \sup_{k\geq 0} | R_{k}| < \infty$. See page 58 of [27], A. N. Shiryaev, "Stochastic Disorder Problems". We will add a short note regarding this technical condition in the revised paper, as suggested by the reviewer. We agree that the condition suggested by the reviewer is indeed a sound alternative condition. However, our treatment is along the lines of [27] and easy to verify. > Use ess sup in equation (1) to define $V_j$. It is a random variable, unlike what equation (1) suggests. **Response:** The issue is due to a typo in equation (1): it should be $\mathbb{E} [ V_j({\bf S_{\textit j}}) ] = \sup_{\tau \geq j} \mathbb{E} [ R_{\tau} ]$ instead of $V_j({\bf S_{\textit j}}) = \sup_{\tau \geq j} \mathbb{E} [ R_{\tau} ]$. This result appears on page 60, Theorem 1 (case of a finite time horizon) of [27], A. N. Shiryaev, "Stochastic Disorder Problems". --- Rebuttal Comment 1.1: Comment: 1. Thank you very much for the reference to Fischer's paper. It is a very interesting result! I did not know about this. In light of this I have significantly revised my rating for the paper. 2. 
The existence of $\tau^*$ is _not_ guaranteed by the condition $\mathbb{E} \sup_{k \ge 0} |R_k| < \infty$. Shiryaev on page 58 only says that this condition is sufficient for the existence and finiteness of $\sup_{\tau \in \mathfrak{M}} \mathbb{E} G_\tau$. It doesn't say anything about the existence of $\tau^*$. --- Reply to Comment 1.1.1: Comment: We thank you for your valuable comments and kind reconsideration of our paper. In response to your following point: > The existence of $\tau^*$ is not guaranteed by the condition $\mathbb{E} \sup_{k \ge 0} |R_k| < \infty$. Shiryaev on page 58 only says that this condition is sufficient for the existence and finiteness of $\sup_{\tau \in \mathfrak{M}} \mathbb{E} G_\tau$. It doesn't say anything about the existence of $\tau^*$. please see further discussion on pages 59 and 60 of [27] (Shiryaev). The stated condition when applied to **finite-horizon** optimal stopping problems (the case considered here) is sufficient to guarantee existence of $\tau^*$, since the optimal stopping time that achieves the above supremum **can be constructed explicitly** by the process of backward induction (See page 59, equation 3.7 for the construction process and Theorem 1 of Shiryaev [27] on pg 60, equation 3.8 that shows that the construction achieves optimality). --- Rebuttal 2: Title: Please take a look at author response and let us know if your opinion has changed. Comment: Thank you.
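The backward-induction construction of the finite-horizon optimal stopping time discussed in this thread can be illustrated with a small toy example (our own sketch, not code from the paper): stopping a sequence of i.i.d. Uniform(0,1) observations with reward $R_n = X_n$, where the value recursion $V_n = \mathbb{E}[\max(X_n, V_{n+1})]$ has the closed form $(1 + V_{n+1}^2)/2$ and the optimal stopping time is the hitting time of the region $\{X_n \ge V_{n+1}\}$.

```python
# Toy finite-horizon optimal stopping solved by backward induction:
# observe X_0, ..., X_{H-1} i.i.d. Uniform(0,1), reward R_n = X_n.
# For X ~ U(0,1), E[max(X, c)] = (1 + c**2) / 2, so the value recursion
# V_n = E[max(X_n, V_{n+1})] can be computed exactly.

def value_thresholds(H):
    """v[n] = optimal expected reward when steps n, ..., H-1 remain."""
    v = [0.0] * H
    v[H - 1] = 0.5                        # last step: forced stop, E[X] = 1/2
    for n in range(H - 2, -1, -1):
        v[n] = (1.0 + v[n + 1] ** 2) / 2  # stop if X_n >= v[n+1], else continue
    return v

v = value_thresholds(3)
assert v == [0.6953125, 0.625, 0.5]       # value grows with steps remaining
```

The optimal rule recovered here is exactly a hitting time of an adapted process, in line with the [Fischer] representation discussed above.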
Summary: Optimal stopping is the problem of choosing a time to take a given action based on sequentially observed random variables in order to maximize an expected payoff. Previous works used deep neural networks to find the optimal stopping time (e.g., the backward induction method); however, as the authors mention, these approaches have several limitations in non-Markovian settings. The paper presents the optimal stopping problem as an inference problem on a Bayesian network, and the authors use RNNs to learn model-free optimal stopping strategies. Strengths: - The authors introduce a reasonable way to solve the optimal stopping problem and provide the corresponding theoretical justifications. - The approach is well motivated (lines 43-60). - The experimental results demonstrate that their method outperforms the baseline. - The authors compared the training and inference times for DROS and the baseline. - The paper is well written and organized. - Experiments were done on real-world benchmarks. Weaknesses: - There is a whole literature on how to use deep learning to solve partial differential equations (PDEs) in general, and more specifically option pricing problems. Many papers solve PDEs using deep learning in the context of optimal stopping. - There are several papers on optimal stopping problems; I encourage the authors to include more baselines in their comparisons. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - American options are the most popular in practice and are a very good case study to solve. - Have you tested your algorithm on different American options? If not, why not? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - From Tables 2 and 3 (Appendix C), the training time of DROS is larger than that of the baselines. - The proposed method can suffer in terms of time complexity when applied to high-dimensional problems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the motivation, contribution, presentation, and organization of the paper. The main concern seems to be with regard to the vast body of work on PDE-based optimal stopping approaches, specifically with regard to solving American options. We hope that we have fully addressed this concern in the specific comments below, including a direct comparison of our model-free method with state-of-the-art PDE-based American option solvers, which are in fact model-based. Considering the mitigation of this perceived weakness, we hope the rating will also be reconsidered. > There is a whole literature on how to use deep learning to solve partial differential equations (PDEs) in general, and more specifically option pricing problems. Many papers solve PDEs using deep learning in the context of optimal stopping. I encourage the authors to include more baselines in their comparisons. **Response:** Indeed, there is a lot of interest in using deep learning to solve PDEs with application to option pricing. However, we feel that PDE-based baselines are not appropriate baselines for this paper, for the following reasons: * **The setting of the paper is discrete-time, finite-horizon optimal stopping problems**. There is no natural PDE which describes the dynamics of such systems in general, thus ruling out PDE approaches. PDE-based methods, including deep learning methods, are inherently designed for *continuous-time* optimal stopping problems. Note, however, that while we may approximate continuous-time problems by choosing a fine discretization grid, to our knowledge, natively discrete-time problems cannot necessarily be solved using the PDE approach. * **We consider the PDE methods to be in the category of model-based methods**, since they start with a specific PDE to be solved. 
Consider, for example, popular PDE-based American option pricing methods such as the Deep Galerkin Method (DGM) [Sirignano and Spiliopoulos, 2018] and the Backward Stochastic Differential Equation (BSDE) method [Chen and Wan, 2020]: these assume Markovian Black-Scholes dynamics, and the PDEs to be solved require the Black-Scholes model parameters, such as the covariance of the Brownian motion, volatility, risk-free interest rate, and dividend yield. In contrast, our method (and those we compare against) does not use any prior information on the evolution dynamics of the underlying stochastic process. We will add references and discussion to reflect these points. > American options are the most popular in practice and are a very good case study to solve. Have you tested your algorithm on different American options? If not, why not? **Response:** As stated above, this paper is about solving discrete-time, finite-horizon optimal stopping problems. Option pricing (specifically Bermudan options, which have discrete exercise opportunities) is just one example application. We did not include American options since they are continuous-time, and hence we felt that they were not natural candidates to show the efficacy of the developed methods. That said, our method can indeed be used to price American options in a model-free manner, simply by solving the corresponding Bermudan option at a finer discretization (i.e., increasing the discrete exercise opportunities to, for example, the day level). Note that continuous-time methods for American options (such as those cited above) require discretization of the original PDE (e.g., using the Euler-Maruyama scheme) or random sampling (as used in DGM), so they often do not end up directly solving a Bermudan option. Although, as noted above, the PDE approaches are not model-free, **we have now run our approach in pricing challenging high-dimensional American options** and compare against 
published results from state of the art PDE baselines including Deep Galerkin Method (DGM) [Sirignano and Spiliopoulos, 2018] and the Backward Stochastic Differential Equation (BSDE) method [Chen and Wan, 2020]. We consider the 100 dimensional continuous-time **American** geometric-average call option with Black-Scholes dynamics considered in [Sirignano and Spiliopoulos, 2018] and [Chen and Wan, 2020]. The option is characterized by the following parameters: $r$: 0.0, $\delta$: 0.02, $\sigma$: 0.25, $\rho_{ij}$: 0.75, 'time_horizon_yrs': 2 years, 'strike_price': 100. The exact price of this option can be determined semi-analytically for comparison [Chen and Wan, 2020]. | Method | stock price | option_price | exact_price | | :---: | :---: | :---: | :---: | | [Sirignano and Spiliopoulos, 2018] | 100 | **9.9236** | 9.9345 | | [Sirignano and Spiliopoulos, 2018] | 110 | N/A | 15.6491 | | [Chen and Wan, 2020] | 100| 9.9187 | 9.9345 | | [Chen and Wan, 2020] | 110| 15.6219 | 15.6491 | | DROS-OSPG (ours) | 100| 9.8675 | 9.9345 | | DROS-OSPG (ours) | 110| **15.6428** | 15.6491 | Our model-free algorithm yields results competitive with state of the art PDE methods that assume Black-Scholes dynamics. We were surprised by the excellent performance of our method, since we did not expect it to work well in a natively continuous-time setting. This opens the door for using our approach to price American options in a model-free setting, especially when the underlying trajectories are non-Markovian and are not required to follow Black-Scholes dynamics. We can include a more comprehensive comparison with PDE methods and American options in the supplemental material. **References** [Sirignano and Spiliopoulos, 2018] DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364, 2018. [Chen and Wan, 2020] Yangang Chen & Justin W. L. 
Wan (2020): Deep neural network framework based on backward stochastic differential equations for pricing and hedging American options in high dimensions, Quantitative Finance --- Rebuttal Comment 1.1: Comment: Thank you for your valuable comments. We hope we have addressed all your concerns with regard to comparisons with PDE-based optimal stopping methods and application to American option pricing. Since the deadline for author-reviewer discussion is fast approaching, please do let us know if there are any further clarifications. We would also like to highlight the several contributions (summarized in global comments to reviewers) to the under-researched area of **optimal stopping in non-Markovian settings**, which has several real-world applications, including computational finance. This is achieved by bringing together, for the first time, RNNs, probabilistic graphical models, and policy-gradient methods. One of the key contributions is a new policy-gradient algorithm for optimal stopping that avoids expensive Monte-Carlo rollouts by performing inference on a Bayes Net model of state-action trajectories. This opens the door to new applications. For instance, in computational finance, asset dynamics are often modelled with Markovian Black-Scholes type models. In many real-world settings, such assumptions are invalid. Our approach (RNNs and optimal stopping policy gradient methods) provides an elegant alternative, especially in non-Markovian settings. The results of this paper would be of significant interest to the NeurIPS community. We hope this provides sufficient grounds to reconsider the rating of the paper. --- Rebuttal Comment 1.2: Title: Thank you for your response Comment: I have read all the comments. I want to thank the authors for running the new experiments regarding the American options. 
I would be happy to increase the score if the authors have the possibility to run more experiments with several strike prices in the case of American pricing so we can draw strong conclusions. Can you please update the code and include the American option benchmarks?
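For readers reproducing the geometric-average benchmark discussed in this thread: the semi-analytic "exact price" relies on the standard fact that a geometric average of identically parameterized, correlated Black-Scholes assets is itself a one-dimensional geometric Brownian motion, so the option can be priced with a 1-D solver. A sketch of the effective-parameter computation follows (our own illustration; the function name and structure are not from the paper or the rebuttal):

```python
import math

# Effective 1-D Black-Scholes parameters for the geometric average
# G_t = (prod_i S_t^i)^(1/d) of d assets with common volatility sigma,
# pairwise correlation rho, and dividend yield delta:
#   sigma_eff^2 = sigma^2 * (d + d*(d-1)*rho) / d^2
#   delta_eff   = delta + sigma^2/2 - sigma_eff^2/2   (keeps G risk-neutral)

def geo_average_params(d, sigma, rho, delta):
    var_eff = sigma ** 2 * (d + d * (d - 1) * rho) / d ** 2
    return math.sqrt(var_eff), delta + sigma ** 2 / 2 - var_eff / 2

# Parameters of the 100-dimensional example in the rebuttal:
sigma_eff, delta_eff = geo_average_params(d=100, sigma=0.25, rho=0.75, delta=0.02)
# sigma_eff ≈ 0.2169, delta_eff ≈ 0.0277
```

With these two numbers the 100-dimensional geometric-average call reduces to a single-asset American call, which is how benchmark prices of this kind are typically obtained.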
Summary: The paper proposes an RNN-based approach for optimal stopping which is based on a Bayesian inference view of optimal stopping. The proposed model can be trained with direct optimization via policy gradients, or with expectation-maximization (EM). These two approaches are shown to be equivalent. This new RNN-based approach is shown to outperform state-of-the-art methods on a couple of commonly used datasets. Strengths: The limitations of the existing deep neural network approaches are discussed and contributions of this work are clearly stated. The background on optimal stopping is also introduced in detail. Weaknesses: - **Writing and organization of the paper have much room for improvement.** **Overuse of abbreviations makes the paper difficult to read.** Although abbreviations like DNN and EM are common and inevitable, I personally would suggest against abbreviating weighted maximum likelihood as WML and policy gradients as PG in the text. To make things worse, one of the main approaches is named DROS-OSPG, with an unnecessary repeat of "optimal stopping", which I find to be cumbersome and confusing. **Equations with conflicting real and dummy indices.** In equation 2 and theorem 3.1, the index $j$ is used both as a real index and as a dummy index for summation. **There's too much technical detail in the main paper.** The equivalence of EM and policy gradients is interesting but probably belongs in the supplements. However, I do understand that the significance of this fact might have eluded me, since I am not an expert in this field. The Keras-specific implementation detail on lines 246-247 can also be saved for the supplements. **No conclusion or discussion paragraph at the end.** As a consequence, the paper does not find room to discuss its limitations, which is a requirement for NeurIPS papers. - **Ablations are missing.** Since I am not an expert in this specific field, I do not know if the improvements over the state of the art are significant enough. 
From the perspective of a research paper, I feel like some design decisions need to be justified by running ablations. For example, how important are the weights in the weighted maximum likelihood objective? Ablation studies like this would justify the various claims put forth throughout the paper. --- Once again I must say that it is very likely that I do not understand the significance of the paper due to lack of familiarity with the field. My main concerns with the paper lie in its presentation. I do not think the paper at its current state is doing a good job of illustrating the key ideas of the new method and of convincing me that the contributions are novel and significant. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In the first equation in equation 2, why is the numerator $R_\tau$? - What is the justification for the parameterization of $Y_j$ in equation 2? Is this an arbitrary design decision? - Why is the XOR used for $Y$ on line 167? - In what sense is the proposed method a Bayesian interpretation of optimal stopping? Does the Bayesian interpretation still hold once the weights are introduced into the objective? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors did not adequately address the limitations of the work since there is no conclusion or discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments. To help better clarify the novelty and significance of the paper and approach, we have included context in the global comments to all reviewers. We hope this helps the reviewer in appreciating the contribution and to reconsider the rating. Specific questions raised in the review are addressed below. >In the first equation in equation 2, why is the numerator $R_{\tau}$? **Response:** This is a typo: $\mathbb{P}(Y_j = 1 | {\bf R}_H, A_j = 1)$ should be equal to $\displaystyle\frac{R_j}{\sum_{k=0}^H R_k }$ instead of $\displaystyle\frac{R_{\tau}}{\sum_{j=0}^H R_j}$ as it appeared in the paper. >What is the justification for the parameterization of $Y_j$ in equation 2? Is this an arbitrary design decision? **Response:** The formulation of this conditional probability distribution is indeed a design choice, but not an arbitrary one. * Since we desire to encode reward opportunities at each step in the trajectory into the BN, and every node in the BN represents a conditional probability distribution (CPD), it is natural to use the relative reward at a time-step (normalized by the total reward over the trajectory) to represent this reward opportunity as a CPD. * A key consequence of this choice is that it leads to an equivalence with policy gradients and minimization of the optimal stopping objective. Similar design choices have also been made in other related work. Please see equation (3) in [15] Sergey Levine, "Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review". The authors say: "While this might at first seem like a peculiar and arbitrary choice, it leads to a very natural posterior distribution..." Note that while we are inspired by a corresponding view of reinforcement learning [15], the resulting modeling choices needed to capture the structure of optimal stopping in our approach lead to a very different BN model. 
For instance, in the former case, rewards may be accumulated over time-steps, and trajectory lengths are not variable, rolling out to the horizon. Also, that formulation leads to a maximum entropy objective. > Why is the XOR used for $Y$ on line 167? **Response:** We use the XOR operator to define the random variable $Y$ from the $Y_j$'s, since $Y=1$ if and only if exactly one of the $Y_j$'s is 1. The reward for a trajectory can only be claimed by a stop action at a single time-step. This requires only a single $Y_j = 1$, allowing us to sum the probabilities of stopping and collecting rewards at each time-step of the trajectory (equation 5). >Ablation for the weights in WML **Response:** Our intent in this paper is not to use the WML weights explicitly. We provide a particular (*and natural*) choice of weights: **weight each trajectory by the expected reward of that trajectory**. This results in a particular WML objective, which then leads to equivalence between the WML and policy-gradient approaches **and** minimization of the original optimal stopping objective from Definition 2.4. Also, we feel that it is not meaningful to ablate different weight choices in this case, since the WML objective itself changes with different weights. To ablate, one needs to change something and measure against a fixed target; in this case, the proposed ablation would be shooting at a different target. >In what sense is the proposed method a Bayesian interpretation of optimal stopping? **Response:** What we have shown is that the peculiar state-action trajectories in optimal stopping can be modelled explicitly by a Bayesian Network (Figure 1a), which can then be used to rewrite the classic optimal stopping objective of equation (10) in the form of equation (12). It is in this sense that we have a Bayesian Net interpretation of the optimal stopping problem. A real benefit of this view is that by computing probabilities over state-action trajectories, we avoid explicitly sampling actions (a.k.a. 
Monte-Carlo state-action trajectory rollouts) like policy-gradient methods typically do. A reward augmented version of the state-action trajectory network (Figure 1b) additionally captures the notion of an optimal stopping trajectory, by introducing optimality variables $Y_j$ and corresponding conditional probability distributions (equation 2) into the model. This latter approach leads to a WML problem which does not in general reduce to minimization of the classic optimal stopping objective, **unless** the CPD and weights have the specified form. In this sense, this is a generalized interpretation of optimal stopping problems. >Does the Bayesian interpretation still hold once the weights are introduced into the objective? **Response:** The reward augmented Bayes Net model (Figure 1b) is simply a model of state-action trajectories and corresponding relative reward possibilities **inside** a trajectory. This **interpretation does not change** by introducing weights into the objective since these weights assign importance to entire trajectories (so do not affect relative rewards inside a trajectory). --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I think the response did help me better understand the work and its significance. Given that the methodology and results presented in the paper all seem solid and the typos are fixed, I am willing to bring my score up to a 5. Thanks! --- Reply to Comment 1.1.1: Comment: We thank you for your valuable comments and kind reconsideration of our paper.
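The "inference instead of rollout" point made in this thread can be made concrete: given a trajectory's per-step conditional stop probabilities (the policy outputs) and its per-step rewards, the probability of first stopping at each step, and hence the trajectory's expected reward, follows in closed form without sampling stop actions. A minimal sketch (our own, with illustrative names, not the paper's code):

```python
# Expected reward of a single trajectory under a stopping policy, computed
# by summing over stop times analytically instead of sampling stop actions.
# phi[j]: probability of stopping at step j given no stop before j.
# A stop is forced at the final step if the policy never stopped earlier.

def expected_reward(phi, rewards):
    H = len(rewards)
    total, survive = 0.0, 1.0
    for j in range(H - 1):
        total += survive * phi[j] * rewards[j]  # P(first stop at j) * R_j
        survive *= 1.0 - phi[j]                 # P(no stop through step j)
    return total + survive * rewards[H - 1]     # forced stop at the horizon

# A stop-immediately policy collects the first reward:
assert expected_reward([1.0], [3.0, 5.0]) == 3.0
# A 50/50 policy on a two-step trajectory averages the two rewards:
assert expected_reward([0.5], [2.0, 4.0]) == 3.0
```

Averaging this quantity over a batch of observed trajectories gives a differentiable training objective, which is the sense in which Monte-Carlo action rollouts are avoided.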
null
null
Rebuttal 1: Rebuttal: We thank the reviewers and appreciate the concerns raised. Here we address concerns regarding the significance of our contribution. We respond to each reviewer individually in specific comments. While there is a vast body of work on optimal stopping problems in the Markovian setting, the literature on model-free optimal stopping in **non-Markovian settings** is sparse. This setting is of great practical importance in areas such as finance (option pricing), operations research (predictive maintenance), early classification/detection, etc. Two of the main challenges (mentioned in lines 43-60, 76-79) in developing effective algorithms for this setting are: * **Explosion of state space:** Ready extensions of Markovian approaches to non-Markovian settings result in state-space explosion, rendering them impractical. Solving the problem at hand **requires efficient parameterization of the state space, such as afforded by RNNs**. However, popular optimal stopping approaches either cannot use RNNs for structural reasons (backward induction) or fare poorly in non-Markovian settings even if RNNs are used (fitted Q-iteration). Thus, *one does not encounter the use of RNNs in optimal stopping settings*. * **Lack of model-free direct RL methods:** RL-style policy-gradient algorithms are typically online algorithms and require **expensive Monte-Carlo policy rollouts**. *Policy gradient methods are notably missing* from the optimal stopping literature. Keeping these problems in mind, **we bring together, for the first time, RNNs, probabilistic graphical models, and policy-gradient methods to design an RNN-based policy-gradient algorithm for non-Markovian optimal stopping settings**. One of the key contributions is to *avoid expensive Monte-Carlo rollouts in the policy gradient algorithm by performing inference* on a Bayes Net (a probabilistic graphical model) model of state-action trajectories (Section 4). 
Extending the Bayes net trajectory model with reward augmentation (Section 3) yields a weighted maximum likelihood (WML) policy estimation approach that is a *generalization of the optimal-stopping policy-gradient method, in the sense that we recover our policy gradient algorithm for specific design choices of CPDs and weights* (Section 4.1). A key benefit of this generalization is that the procedure can be adapted to various settings by augmentation with additional variables/latents, making it possible to model time-dependent stochastic disorders such as change-points, while also permitting a ready-made solution method via incremental EM.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Hierarchical Integration Diffusion Model for Realistic Image Deblurring
Accept (spotlight)
Summary: This paper proposes a novel Hierarchical Integration Diffusion Model for the deblurring task. By incorporating multi-scale latent priors, the proposed method achieves SOTA performance on both synthetic and real-world datasets. Strengths: 1. The proposed method achieves SOTA performance. 2. The ablation study is very complete and the paper is easy to understand. Weaknesses: 1. I think this paper is meaningful, but I argue that the authors should add more discussion about the difference from DiffIR. In my understanding, the difference is mainly using a multi-scale latent prior, which has some novelty but seems small. Also, in the introduction section, the authors state the motivation as "since the advantages of regression-based methods in distortion accuracy, we integrate DMs and regression-based methods", which seems similar to DiffIR. 2. Experiment comparison. This paper seems highly inspired by DiffIR. I suggest the authors also compare the results with the re-trained DiffIR model. The compared methods do not include any diffusion models. In Section 2.2, DvSR seems to handle the deblurring task using a DM. I think the authors should add more comparisons. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What are the differences between the proposed method and DiffIR? I think only using the multi-scale latent space z does not seem novel enough. 2. I suggest the authors add more DM-based comparison methods, e.g., retrained DiffIR and DvSR. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As stated in the introduction, the diffusion model may generate unpleasant artifacts in the restoration results. Does this method also have some failure cases? 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer w6Fg (denoted as R4)

`Q4-1:` The authors should add more discussion of the differences from DiffIR. The difference is mainly using a multi-scale latent prior. Also, the motivation, "since the advantages of regression-based methods in distortion accuracy, we integrate DMs and regression-based methods", seems similar to DiffIR.

`A4-1:` Thanks for your valuable suggestions. We clarify the differences from DiffIR, and the novelty of our work, below.

*Note: All models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.*

**For "the difference is mainly using multi-scale latent prior".** The multi-scale latent prior is part of our novelty.

1. In general, the difference between HI-Diff and DiffIR is the integration approach between the DM and the Transformer. We design the **hierarchical integration**: **multi-scale prior** and **cross-attention interaction**, which is more suitable for non-uniform deblurring in real scenarios.

2. **The prior is different.** HI-Diff applies a multi-scale prior, while DiffIR uses a single-scale prior. The multi-scale prior adapts to the encoder-decoder Transformer architecture for better integration. The ablation in Tab. 1 demonstrates this point.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ------------ | :--------: | :-------: | :-------: | :----: |
| Single-scale | 21.98 | 125.39 | 32.00 | 0.9534 |
| Multi-scale | 23.99 | 125.47 | 32.24 | 0.9558 |

3. **The interaction is different.** HI-Diff adopts cross-attention in the hierarchical integration module (HIM), while DiffIR uses the prior as dynamic modulation parameters. With cross-attention, regions with varying degrees of blur in the features can pay different attention to the prior, resulting in better (non-uniform) deblurring performance. We replace cross-attention in HI-Diff with dynamic modulation and find that cross-attention yields better performance.
| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ------------------ | :--------: | :-------: | :-------: | :----: |
| dynamic modulation | 29.63 | 121.01 | 32.10 | 0.9544 |
| cross-attention | 23.99 | 125.47 | 32.24 | 0.9558 |

Moreover, the **HIM** we propose is **plug-and-play**, which is more convenient since it does not require modifying the components of the original model.

**For "the motivation is similar to DiffIR".** The integration of the DM and the Transformer in our method is inspired by DiffIR, but our motivation is different.

1. **The motivation is different.** As described in the introduction, our motivation is threefold: reduce DM complexity, take advantage of the Transformer, and better address non-uniform blur. In contrast, DiffIR focuses on improving the efficiency and stability of the DM without **considering the properties of realistic deblurring**.

2. **HI-Diff outperforms DiffIR.** We compare our HI-Diff with DiffIR. Our method performs better than DiffIR with comparable Params and FLOPs. It shows that our method, applying hierarchical integration, is more suitable for realistic non-uniform deblurring.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ---------------- | :--------: | :-------: | :-------: | :---: |
| DiffIR (paper) | 26.94 | 120.99 | 33.20 | 0.963 |
| HI-Diff (ours) | 28.49 | 142.62 | 33.33 | 0.964 |
| HI-Diff-2 (ours) | 23.99 | 125.47 | 33.28 | 0.964 |

`Q4-2:` Compare the results with the re-trained DiffIR model and DvSR.

`A4-2:` Thanks for your suggestion. We provide more comparisons with diffusion models: **DvSR [48]** and **DiffIR [49]**. For DiffIR, apart from the results provided in the paper, we also **retrain the model** with the official code. Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.
| Method | Params | FLOPs | PSNR | SSIM |
| ------------------ | :----: | :---------: | :---: | :---: |
| DvSR | 26.07M | 170.31**T** | 31.66 | 0.948 |
| DiffIR (paper) | 26.94M | 120.99G | 33.20 | 0.963 |
| DiffIR (retrained) | 26.94M | 120.99G | 33.18 | 0.963 |
| HI-Diff (ours) | 28.49M | 142.62G | 33.33 | 0.964 |
| HI-Diff-2 (ours) | 23.99M | 125.47G | 33.28 | 0.964 |

1. Compared with DvSR, our methods have much smaller FLOPs and better performance.

2. Compared with DiffIR, our methods perform better with comparable Params and FLOPs. This is because HI-Diff applies hierarchical integration, which is more suitable for processing non-uniform blur in real scenarios.

**We provide more comprehensive comparisons in Tab. 1 of the PDF.**

`Q4-3:` What are the differences between the proposed method and DiffIR? Using only the multi-scale latent space z does not seem novel enough.

`A4-3:` Thanks for asking for those details. We clarify them as follows.

1. The difference between our HI-Diff and DiffIR is the **hierarchical integration**: **multi-scale prior** and **cross-attention interaction**, which is more effective and suitable for non-uniform deblurring.

2. The multi-scale latent prior is part of our novelty. **We have responded to another similar question, `Q4-1`. Please refer to `A4-1` for more details.**

`Q4-4:` I suggest the authors add more DM-based comparison methods, e.g., retrained DiffIR and DvSR.

`A4-4:` Thanks for the valuable suggestions. We compare more DM-based methods, e.g., DvSR [48] and DiffIR [49]. Our HI-Diff outperforms other DM-based methods. **We have responded to another similar question, `Q4-2`. Please refer to `A4-2` for more details.**

`Q4-5:` As stated in the introduction, the diffusion model may generate unpleasant artifacts in the restoration results. Does this method also have some failure cases?

`A4-5:` Thanks for asking for those details. Our method also has some failure cases. We provide some cases in **Fig.
2 of the PDF**.

---

Rebuttal Comment 1.1: Comment: Thank you for the time spent on the rebuttal. My main concern was that this paper did not provide comparisons with other diffusion methods. The rebuttal has well resolved my concerns, and thus I change my rating to weak accept after the rebuttal. After reading the review from Reviewer 5Ars, I suggest the authors may also add some visual comparisons of deblurring results with and without the diffusion prior. Thank you.

---

Reply to Comment 1.1.1: Title: Thanks Reviewer w6Fg for approving our work Comment: Dear Reviewer w6Fg, Thanks for your response. We are happy to see that our response can solve your concerns. For "add some visual comparisons for deblur results with and without using the diffusion prior": in the ablation study (Sec 4.2), we have compared the model without the diffusion prior (**Baseline**) and the model with the diffusion prior (**HI-Diff, ours**). The quantitative results are provided in **Tab. 1** (first and fourth rows) of the main paper, and the visual comparisons are provided in **Fig. 2** (the first case). Meanwhile, thanks for your valuable suggestions. We will provide **more visual comparisons** in the revision. Best, Authors
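The two interaction schemes compared in `A4-1` — per-location cross-attention versus a global dynamic modulation — can be illustrated with a minimal numpy sketch. All shapes and names here are hypothetical, the Q/K/V projections of real cross-attention are omitted, and the modulation variant is a deliberately simplified stand-in (DiffIR actually learns the scale/shift from the prior with trained layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat, prior):
    # feat: (HW, C) image-feature tokens; prior: (N, C) latent prior tokens.
    # Each spatial location forms its own attention weights over the prior,
    # so regions with different degrees of blur can use the prior differently.
    d = feat.shape[-1]
    attn = softmax(feat @ prior.T / np.sqrt(d))   # (HW, N) per-location weights
    return feat + attn @ prior                    # residual fusion

def dynamic_modulation(feat, prior):
    # Simplified modulation: collapse the prior to one global scale/shift pair
    # applied uniformly to every spatial location (no spatial adaptivity).
    scale, shift = prior.mean(axis=0), prior.std(axis=0)
    return feat * (1 + scale) + shift

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 8))    # hypothetical 8x8 feature map, 8 channels
prior = rng.normal(size=(16, 8))   # hypothetical 16 prior tokens
print(cross_attention(feat, prior).shape, dynamic_modulation(feat, prior).shape)
```

The point of the contrast is visible in the shapes: cross-attention computes a distinct weight vector per spatial token, while the modulation applies one transform everywhere.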
Summary: This paper presents the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring. HI-Diff utilizes diffusion models to generate multiscale priors in the latent space, which are integrated hierarchically into the deblurring process to improve the results. Experiments are conducted on both synthetic and real-world blur datasets. Strengths: - A hierarchical integration module is proposed to fuse the prior into the model at multiple scales. - Experiments conducted on synthetic and real-world blur datasets demonstrate state-of-the-art results, and the code would be released. Weaknesses: - It's unclear why the authors adopt Diffusion Models (DMs) to model the prior. For one blurry image, the corresponding blurry prior should be deterministic instead of a distribution generated by DMs. - The loss $L_{diffusion}$ for the DMs is also weird (L202). If the DMs learn to produce z, why not integrate z directly into the model? - Tab. 1 and Fig. 3 indicate that the DMs do not really matter. In Tab. 1, compared to the baseline, it only improves by 0.04 dB. The gain of this method mainly comes from the multi-scale representation of the prior (> 0.2 dB). Fig. 3, to some degree, also supports this. More iteration numbers (>5) in the diffusion model cannot improve the results. - L222 and L243 seem to indicate that the authors use the original Restormer [1]. But, with the additional DMs, the FLOPs of this method are less than Restormer's, as shown in Tab. 3. What is the difference compared to the original Restormer? Why not use the original setting? [1] Restormer: Efficient transformer for high-resolution image restoration Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 5Ars (denoted as R3)

`Q3-1:` It's unclear why the authors adopt Diffusion Models (DMs) to model the prior. For one blurry image, the corresponding blurry prior should be deterministic instead of a distribution generated by DMs.

`A3-1:` Thanks for your question. We explain it below.

1. Compared with other methods, DMs have a stronger modeling ability to generate target priors. Therefore, we apply a DM to model the distribution of the prior. Meanwhile, the DM generates the prior **conditioned on** the blurry image, **not entirely randomly**.

2. Furthermore, the prior is not deterministic, since a blurry image may correspond to multiple sharp images (**ill-posed**). Therefore, some level of randomness in DM sampling is suitable.

3. We replace the DM with a Transformer in HI-Diff to generate priors. Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs. Applying the DM outperforms using the Transformer.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ----------- | :--------: | :-------: | :-------: | :----: |
| Transformer | 24.84 | 125.48 | 32.09 | 0.9545 |
| DM | 23.99 | 125.47 | 32.24 | 0.9558 |

`Q3-2:` The loss $L_{diffusion}$ for the DMs is also weird (L202). If the DMs learn to produce the z, why not integrate z directly into the model?

`A3-2:` Thanks for your question. We clarify them as follows.

**For $L_{diffusion}$.**

1. The loss is $L_{diffusion}=\Vert \hat{\mathbf{z}} - \mathbf{z} \Vert_1$, where $\mathbf{z}$ is the prior from the ground-truth image and $\hat{\mathbf{z}}$ is the predicted prior after the complete $T$-step reverse process of the DM. The loss allows the DM to **learn to generate the prior** directly.

2. A general DM (e.g., DDPM) optimizes **one denoising step** at each training step. Therefore, it cannot generate the prior directly (it needs all denoising steps). Thus, its training objective (Eq. (7) in our paper) is the noise ($\epsilon$). However, the final output of a general DM is still the prior.

3.
Different from a general DM, we execute the **complete process** of the DM at each training step, and let the DM directly learn to generate z. This is consistent with the final goal of a general DM. Therefore, **the loss $L_{diffusion}$ is reasonable**.

**For "why not integrate z directly into the model".**

1. It is feasible to integrate z directly into the (Transformer) model. However, this would cause the Transformer to be executed at each denoising step. The overall **complexity is too high**, since the Transformer is performed $T$ times.

2. Instead, we execute the DM separately in the latent space, and only use the final result of the DM for the Transformer. Therefore, **the complexity is effectively reduced**, since the Transformer is executed once.

3. Furthermore, compared with methods (e.g., DvSR [48]) that execute the complete model at each step, our method (generating z separately) achieves better performance with much smaller FLOPs (evaluated on GoPro; the input size is 3×256×256 to calculate FLOPs).

| Method | Params | FLOPs | PSNR | SSIM |
| -------------- | :----: | :---------: | :---: | :---: |
| DvSR (CVPR'22) | 26.07M | 170.31**T** | 31.66 | 0.948 |
| HI-Diff (ours) | 28.49M | 142.62G | 33.33 | 0.964 |

`Q3-3:` Tab. 1 and Fig. 3 indicate that the DMs do not really matter. In Tab. 1, compared to the baseline, it only improves by 0.04 dB. The gain of this method mainly comes from the multi-scale representation of the prior (> 0.2 dB). Fig. 3, to some degree, also supports this. More iteration numbers (>5) in the diffusion model cannot improve the results.

`A3-3:` Thanks for your question. We explain it as follows.

**For "the DMs do not really matter".** We replace the DM with a Transformer in HI-Diff to generate priors, and the model performance decreases. This demonstrates that the DM is important.
| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ----------- | :--------: | :-------: | :-------: | :----: |
| Transformer | 24.84 | 125.48 | 32.09 | 0.9545 |
| DM | 23.99 | 125.47 | 32.24 | 0.9558 |

**For "the gain mainly comes from the multi-scale representation".**

1. Without the multi-scale representation, the prior cannot be effectively fused into the Transformer features, restricting performance. Thus, the gain is small.

2. The result of (Multi-Scale + Transformer) is lower than that of applying the DM. This indicates that both the DM and Multi-Scale are important. **The two components should be considered together.**

**For "more iteration numbers cannot improve the results".**

1. Since the prior is in the latent space, the DM does not need many iterations to model it. However, this doesn't mean the DM isn't important.

2. Compared with the 1000+ iterations of a general DM (DDPM), the fewer iterations (i.e., 8) further indicate the effectiveness of our proposed method.

`Q3-4:` The authors use the original Restormer [1]. With the additional DMs, the FLOPs of this method are less than Restormer's (Tab. 3). What is the difference compared to the original Restormer? Why not use the original setting?

`A3-4:` Thanks for your question. We explain it below.

**For "the difference compared to Restormer".**

1. Our HI-Diff (model in Tab. 3 (4)) applies the original structure of Restormer with **fewer** block numbers in each stage. **Other settings remain the same.**

2. We provide a comparison between HI-Diff and Restormer. Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.
| Method | Block in each stage | Params (M) | FLOPs (G) | PSNR (dB) |
| --------- | :-----------------: | :--------: | :-------: | :-------: |
| Restormer | [4, 6, 6, 8] | 26.13 | 154.88 | 32.92 |
| HI-Diff | [3, 5, 5, 6] | 28.49 | 142.62 | 33.33 |

**For "why not the original setting".** We apply fewer blocks to make the overall Params and FLOPs of HI-Diff close to Restormer's, to enable **a fair comparison**.

---

Rebuttal Comment 1.1: Title: Follow-up discussions with Reviewer R3 (5Ars) Comment: Dear Reviewer 5Ars, We thank you for your valuable review time and comments. We have responded to the related questions, which we believe have covered your concerns.

1. We explain the **reason** for applying the **diffusion model** (DM), and compare it with a Transformer to demonstrate its **superiority**.

2. We analyze the rationality of the **loss** for the DM and the reasons for not **integrating z** directly into the Transformer model.

3. We analyze and conduct experiments to demonstrate **the importance of the DM**.

4. We clarify the difference between our method and **Restormer**, and the reasons for not using the original setting.

We hope to discuss further with you whether or not your concerns have been addressed. Please let us know if you still have any unsolved or other concerns, so that we have enough time to provide further feedback. Thanks. Best, Authors
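The training scheme described in `A3-2` — running the complete $T$-step reverse process at each training iteration and supervising the final output with an L1 loss directly on the prior — can be sketched as follows. The toy denoiser, step rule, and shapes are all assumptions for illustration; the real denoising network predicts from $(\mathbf{z}_t, t)$ under a DDPM noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, C = 8, 16, 4                       # few steps, since the prior lives in a latent space
z = rng.normal(size=(N, C))              # prior encoded from the ground-truth image (stage one)

def denoise_step(z_t, t, cond):
    # Toy stand-in for the denoising network: nudges the sample toward the
    # conditioning signal. A real model would be a learned network of (z_t, t, cond).
    return z_t + 0.5 * (cond - z_t)

# Complete reverse process per training step (unlike DDPM's single random step):
z_hat = rng.normal(size=(N, C))          # start from Gaussian noise z_T
for t in reversed(range(T)):
    z_hat = denoise_step(z_hat, t, cond=z)

# L1 loss on the fully generated prior, not on per-step noise:
l_diffusion = float(np.abs(z_hat - z).mean())
print(round(l_diffusion, 4))
```

The contrast with the standard DDPM objective is in the loop: all $T$ steps run before the loss is taken, so the gradient supervises the final generated prior rather than one randomly chosen denoising step.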
Summary: The authors propose a new image deblurring model, the Hierarchical Integration Diffusion Model (HI-Diff). HI-Diff uses diffusion models to produce priors in a highly compact latent space, which are integrated into the deblurring process hierarchically with the proposed hierarchical integration module (HIM). The effectiveness of each component is demonstrated by the ablation study. The main results on synthetic and real-world deblurring show that HI-Diff outperforms recent state-of-the-art methods. Strengths: 1. The proposed HI-Diff is simple and effective. The idea of leveraging the power of the diffusion model and integrating it into the deblurring process hierarchically is reasonable and novel for image deblurring. 2. The design of HI-Diff is stated clearly and logically. The authors expound the motivation and realization of the design in detail, and the effectiveness of each part is proved by the ablation study. 3. The main comparisons with recent state-of-the-art methods are extensive. The evaluation on both synthetic and real-world datasets demonstrates the superiority of the proposed methods. 4. In the supplementary material, the authors provide more variant models and more quantitative and qualitative comparisons, further revealing the promising performance of HI-Diff. 5. The paper is well-organized, and the writing is good and easy to read. 6. The authors also provide the code and pre-trained models for results reproduction. This reveals the solidness of the work and helps other researchers follow it. Weaknesses: 1. The feature prior is the key of this work. However, the paper lacks a specific analysis of the prior. For example, the difference and impact of priors generated on different inputs. 2. As mentioned in the paper, the proposed method generates more realistic deblurred images. But the comparisons in Tabs. 2 and 3 use distortion-based metrics (e.g., PSNR). More comparisons on perceptual metrics should be provided. 3.
Although the authors provide the FLOPs and Params comparisons in Tab. 4, the latency (running time), another important indicator in low-level tasks, needs to be provided. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Please clarify the differences between HI-Diff and DiffIR [49]. 2. Compare HI-Diff with more generative models (like GAN and Diffusion), and evaluate on perceptual metrics. 3. Provide latency comparisons to further show the effectiveness of the method. 4. The authors train models on synthetic and real-world datasets. Are the settings of the two models the same? If different, please clarify the details. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations and potential negative societal impact of the work have been discussed in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer ymRm (denoted as R2)

`Q2-1:` The feature prior is the key of this work. However, the paper lacks a specific analysis of the prior.

`A2-1:` Thanks for pointing it out. We provide an analysis of the prior.

1. We compare the similarity of priors generated on different inputs. We find that the similarity between priors (e.g., MSE) positively correlates with the similarity between input images (e.g., SSIM).

2. We further provide some visual results in **Fig. 1 of the PDF**, which intuitively show the impact of priors.

`Q2-2:` More comparisons on perceptual metrics should be provided.

`A2-2:` Thanks for your suggestion. We have provided a comparison of perceptual metrics in **Tab. 2** of the **supplementary material**. We also show part of them here (with more compared methods). Models are tested on GoPro.

| Method | Model | LPIPS $\downarrow$ | DISTS $\downarrow$ | NIQE $\downarrow$ | PSNR $\uparrow$ | SSIM $\uparrow$ |
| ------------------- | :---: | :----------------: | :----------------: | :---------------: | :-------------: | :-------------: |
| DBGAN (CVPR'20) | GAN | 0.110 | 0.078 | 4.06 | 31.10 | 0.942 |
| DvSR (CVPR'22) | DM | 0.059 | N/A | 3.39 | 31.66 | 0.948 |
| DiffIR (ICCV'23) | DM | 0.081 | 0.071 | 4.13 | 33.20 | 0.963 |
| HI-Diff (ours) | DM | 0.080 | 0.071 | 4.12 | 33.33 | 0.964 |
| HI-Diff-PE-1 (ours) | DM | 0.051 | 0.031 | 3.53 | 33.27 | 0.963 |
| HI-Diff-PE-2 (ours) | DM | 0.044 | 0.029 | 3.30 | 32.84 | 0.959 |

Our method achieves the best performance on both distortion-based and perceptual metrics. **We provide more comprehensive comparisons in Tab. 1 of the PDF.**

`Q2-3:` Although the authors provide the FLOPs and Params comparisons in Tab. 4, the latency (running time), another important indicator in low-level tasks, needs to be provided.

`A2-3:` Thanks for pointing it out. We provide the latency (i.e., running time) comparison.
The running time is tested on one 3090 GPU with an input size of 3×256×256. We calculate the average time over 100 images. Our method achieves comparable running time with other methods.

| Method | MPRNet | Restormer | HI-Diff (ours) | HI-Diff-2 (ours) |
| ------------ | :----: | :-------: | :------------: | :--------------: |
| Latency (ms) | 77.46 | 82.05 | 75.89 | 65.98 |

`Q2-4:` Please clarify the differences between HI-Diff and DiffIR [49].

`A2-4:` Thanks for asking for those details. We clarify them as follows.

*Note: All models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.*

1. In general, the difference between HI-Diff and DiffIR is the **hierarchical integration**: **multi-scale prior** and **cross-attention interaction**, which is more suitable for non-uniform deblurring.

2. **The prior is different.** HI-Diff applies a multi-scale prior, while DiffIR uses a single-scale prior. The multi-scale prior adapts to the different-scale features in the encoder-decoder Transformer architecture for better integration.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ------------ | :--------: | :-------: | :-------: | :----: |
| Single-scale | 21.98 | 125.39 | 32.00 | 0.9534 |
| Multi-scale | 23.99 | 125.47 | 32.24 | 0.9558 |

3. **The interaction is different.** HI-Diff adopts cross-attention, while DiffIR uses the prior as dynamic modulation parameters. Features pay different attention to the prior with cross-attention, which is more suitable for non-uniform deblurring. Experiments demonstrate that applying cross-attention outperforms using dynamic modulation.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ------------------ | :--------: | :-------: | :-------: | :----: |
| dynamic modulation | 29.63 | 121.01 | 32.10 | 0.9544 |
| cross-attention | 23.99 | 125.47 | 32.24 | 0.9558 |

4. **Experimental comparison.** Our method performs better than DiffIR with comparable Params and FLOPs.
It shows that our method, applying hierarchical integration, is more suitable for realistic non-uniform deblurring.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ---------------- | :--------: | :-------: | :-------: | :---: |
| DiffIR | 26.94 | 120.99 | 33.20 | 0.963 |
| HI-Diff (ours) | 28.49 | 142.62 | 33.33 | 0.964 |
| HI-Diff-2 (ours) | 23.99 | 125.47 | 33.28 | 0.964 |

`Q2-5:` Compare HI-Diff with more generative models (like GAN and Diffusion), and evaluate on perceptual metrics.

`A2-5:` Thanks for your suggestion. We compare HI-Diff with a GAN, **DBGAN [54]**, and diffusion models, **DvSR [48]** and **DiffIR [49]**, on distortion-based and perceptual metrics. Our HI-Diff achieves the best results. **We have responded to another similar question, `Q2-2`. Please refer to `A2-2` for more details.**

`Q2-6:` Provide the latency comparisons to further show the effectiveness of the method.

`A2-6:` Thanks for your suggestion. We further compare the latency of HI-Diff with other methods. Our HI-Diff achieves a comparable running time with other methods. **We have responded to another similar question, `Q2-3`. Please refer to `A2-3` for more details.**

`Q2-7:` The authors train models on synthetic and real-world datasets. Are the settings of the two models the same? If different, please clarify the details.

`A2-7:` Thanks for your question. The settings are the same for the two models on synthetic and real-world datasets.

---

Rebuttal Comment 1.1: Title: My concerns are well addressed by the extensive results and analyses. Comment: Thanks for providing such a detailed response. The authors provide extensive quantitative and visual results, which well solve my questions. The extensive explanations and analyses further clear up the unclear parts in my first-round review. I also read other reviewers' comments and the corresponding responses. Overall, I am very satisfied with the response. I would like to raise my score and vote for acceptance.
--- Reply to Comment 1.1.1: Title: Thanks Reviewer ymRm for approving our work Comment: Dear Reviewer ymRm, Thanks for your response. We are happy to see that our quantitative and visual results can solve your concerns. Best, Authors
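The latency protocol in `A2-3` (average runtime over 100 inputs of size 3×256×256) can be reproduced with a simple timing harness. This is a generic sketch, not the authors' benchmarking code; the stand-in model and all names are hypothetical, and real GPU timing would additionally require device synchronization before reading the clock:

```python
import time

def average_latency_ms(fn, inp, runs=100, warmup=10):
    """Average wall-clock latency of fn(inp) in milliseconds."""
    # Warm-up iterations avoid counting one-off setup costs
    # (cache warming, lazy initialization, GPU kernel compilation).
    for _ in range(warmup):
        fn(inp)
    start = time.perf_counter()
    for _ in range(runs):
        fn(inp)
    return (time.perf_counter() - start) / runs * 1000.0

# Hypothetical stand-in for a deblurring model on a flattened 3x256x256 input:
dummy_model = lambda x: [v * 0.5 for v in x]
latency = average_latency_ms(dummy_model, list(range(3 * 256 * 256)), runs=10, warmup=2)
print(f"{latency:.2f} ms per image")
```

Averaging over many runs after a warm-up is what makes single-image latencies like those in the table above comparable across methods.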
Summary: The paper introduces the Hierarchical Integration Diffusion Model (HI-Diff), a novel approach for realistic image deblurring. It combines a diffusion model and a regression-based model, performing the diffusion process in a compact latent space to generate informative priors for deblurring. These priors are integrated into the regression-based model using a hierarchical module that adapts to complex blurry scenarios. The paper also presents a two-stage training strategy to optimize the latent encoder and diffusion model together. Experimental results on synthetic and real-world datasets demonstrate the effectiveness, efficiency, and superiority of the proposed method over state-of-the-art techniques in terms of PSNR, SSIM, and visual quality. Strengths: - The paper presents a novel and effective approach for leveraging the diffusion model in image deblurring, yielding realistic details and mitigating unwanted artifacts. - By executing the diffusion model within a compact latent space, the paper successfully reduces computational complexity, enabling faster inference with fewer iterations. - Through comprehensive experiments on synthetic and real-world datasets, the paper demonstrates the superior performance of the proposed method compared to state-of-the-art approaches. Weaknesses: - Diffusion models have an advantage in generating high-quality image details. However, in this paper the diffusion model reconstructs features in a compact latent space. The paper does not show or explain the superiority of diffusion models over other models in this task. - The paper does not conduct a comprehensive comparison with other diffusion models for image deblurring. These methods also apply diffusion models in different ways to address the challenges of image deblurring, such as computational efficiency, distortion accuracy, and generalization ability.
It would be interesting to see how HI-Diff compares with these methods in terms of performance and complexity. - The paper does not analyze the effectiveness of the latent compression or evaluate the hyperparameters. It would be helpful to provide some ablation studies or analysis on the latent compression encoder. - A one-stage ablation study, or a clearer explanation, is needed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can you please elaborate on the structure and parameters of the diffusion model in more detail? - It would be beneficial to include some visual results showcasing the effects of the diffusion process, if available. - If the diffusion model were replaced with alternative restoration models, such as transformer-based approaches, would the proposed method still maintain its effectiveness? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Based on the available information on the web page, the paper does not explicitly address the limitations of their proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer mQcX (denoted as R1)

`Q1-1:` The paper does not show and explain the superiority of diffusion models over other models in this task.

`A1-1:` Thanks for pointing it out. We explain and conduct experiments to show the superiority of diffusion models (DMs).

1. **Explanation:** Compared with other methods, DMs have a stronger distribution (image / latent) modeling ability. Thus, a DM can generate high-quality image details. Meanwhile, not limited to image distributions, it also applies to latent spaces. Therefore, we adopt a DM to generate the prior in the latent space.

2. **Experiments:** We replace the DM with a Transformer-based model in HI-Diff to generate priors. We use the same training settings as HI-Diff for the new model. Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs. Applying the DM outperforms using the Transformer.

| Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| ----------- | :--------: | :-------: | :-------: | :----: |
| Transformer | 24.84 | 125.48 | 32.09 | 0.9545 |
| DM | 23.99 | 125.47 | 32.24 | 0.9558 |

`Q1-2:` It would be interesting to see how HI-Diff compares with other diffusion models for image deblurring.

`A1-2:` Thanks for your suggestions. We compare HI-Diff with other diffusion models: **DvSR [48]** and **DiffIR [49]**. Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.

| Method | Params | FLOPs | PSNR | SSIM |
| ---------------- | :----: | :---------: | :---: | :---: |
| DvSR (CVPR'22) | 26.07M | 170.31**T** | 31.66 | 0.948 |
| DiffIR (ICCV'23) | 26.94M | 120.99G | 33.20 | 0.963 |
| HI-Diff (ours) | 28.49M | 142.62G | 33.33 | 0.964 |
| HI-Diff-2 (ours) | 23.99M | 125.47G | 33.28 | 0.964 |

1. Compared with DvSR, our methods have much smaller FLOPs because the DM is performed in the latent space. Meanwhile, our methods achieve better performance.

2. Compared with DiffIR, our methods perform better with comparable Params and FLOPs.
This is because HI-Diff applies hierarchical integration, which is more suitable for processing non-uniform blur. **We provide more comprehensive comparisons in Tab. 1 of the PDF.**

`Q1-3:` The paper does not analyze the effectiveness of the latent compression and evaluate the hyperparameters. It would be helpful to provide some ablation studies or analysis on the latent compression encoder.

`A1-3:` Thanks for the suggestion. We run an ablation on the token number $N$ of the latent space to analyze the effectiveness of the latent compression. We set $N$ to 4, 16, and 64. All experiment settings are consistent with the ablation study (**Sec. 4.2**). Models are tested on GoPro, and the input size is 3×256×256 to calculate FLOPs.

| $N$ | Params (M) | FLOPs (G) | PSNR (dB) | SSIM |
| :--: | :--------: | :-------: | :-------: | :----: |
| 4 | 23.985 | 125.05 | 32.12 | 0.9542 |
| 16 | 23.986 | 125.47 | 32.24 | 0.9558 |
| 64 | 23.990 | 127.33 | 32.28 | 0.9560 |

As $N$ increases, the performance increases, but the magnitude of the gain decreases. This may be because, as $N$ increases, the tokens become more redundant, so the performance increase is limited. The increase of redundant tokens also increases complexity and resource consumption. To balance performance, redundancy, and consumption, we choose $N$=16.

`Q1-4:` Lack of one-stage ablation study or a clearer explanation is needed.

`A1-4:` Thanks for pointing it out. We explain it below.

1. In our two-stage training architecture, the first stage uses the **ground truth image** as input to participate in model training. However, the second stage only adopts the **blurry image** as input. Therefore, we cannot combine the two stages into one stage, and we do not conduct a one-stage ablation study.

2. As we mentioned in **Sec 3.2** (Limitations) of the **supplementary material**, two-stage training is more tedious than one-stage training, which is a shortcoming of our method.
One of our follow-up research directions is to explore how to conduct efficient one-stage training. `Q1-5:` Can you please elaborate on the structure and parameters of the diffusion model in more detail? `A1-5:` Thanks for asking for those details. We clarify them as follows. **For the structure:** The main component of the diffusion model is the Denoising Network (DN), which consists of several Linear layers and MLP-Mixer layers. The structure of the DN is as follows.

```shell
input -> (Linear+LRelU) -> (MLP-Mixer)×4 -> (Linear+LRelU) -> output
```

Meanwhile, we apply the same diffusion and reverse process as DDPM (scheduler) [14]. **For the parameters:** We provide the Params and FLOPs of the diffusion model. We separately calculate FLOPs (input size: 16×256) for one diffusion step and for the total steps.

| Params | FLOPs (one step) | FLOPs (total steps) | Step |
| :----: | :--------------: | :-----------------: | :--: |
| 2.63M | 42.37M | 338.95M | 8 |

`Q1-6:` It would be beneficial to include some visual results showcasing the effects of the diffusion process, if available. `A1-6:` Thanks for your valuable suggestion. We provide some visual results in **Fig. 1 of the PDF**. The blurred image gradually becomes sharp as the diffusion (reverse) process proceeds. Meanwhile, when the prior is Gaussian noise (i.e., $\mathbf{z}_{8}$), the output of the Transformer is **not noise**. This may be because the Transformer features actively ignore the invalid priors through cross-attention when fused with the priors. `Q1-7:` In the event of replacing the diffusion model with alternative restoration models like transformer-based approaches, would the proposed method still maintain its effectiveness? `A1-7:` Thanks for your suggestions. We find that replacing the DM with a Transformer degrades model performance. **We have responded to another similar question, `Q1-1`. 
Please refer to `A1-1` for more details.** --- Rebuttal Comment 1.1: Title: Follow-up discussions with Reviewer R1 (mQcX) Comment: Dear Reviewer mQcX, We thank you for your valuable review time and comments. We have responded to the related questions, which we believe have covered your concerns. 1. We provide analysis and experiments (comparing it with a Transformer) to show the **superiority** of the **diffusion model** (DM). 2. We compare our method with other DMs (e.g., **DvSR** and **DiffIR**) for image deblurring in terms of performance and complexity. Our method **outperforms** other DMs. 3. We conduct an ablation study on **latent compression** (i.e., the token number N) to show its effectiveness. 4. We analyze our shortcomings and follow-up research directions regarding **one-stage** training. 5. We clarify the **structure** and **parameters** of the DM. 6. We provide **visualizations** to showcase the effects of the diffusion process. We hope to discuss further with you whether or not your concerns have been addressed. Please let us know if you still have any unresolved or other concerns, so that we have enough time to provide further feedback. Thanks. Best, Authors --- Rebuttal 2: Comment: Dear Reviewer mQcX: Your review states Soundness: 2 fair, Presentation: 2 fair, Contribution: 2 fair; yet your Rating is 7 Accept. This doesn't make much sense. Please respond and clarify. --- Rebuttal Comment 2.1: Comment: Dear AC, the author's rebuttal has well addressed my concerns, and I forgot to update these relevant ratings. I have now updated the ratings.
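The deblurring comparisons throughout this thread report PSNR in dB. As a point of reference, here is a minimal sketch of the metric (our illustration, assuming floating-point images scaled to [0, 1]; the function name is ours, not from the paper's codebase):

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images in [0, max_val]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mse = np.mean((x - y) ** 2)          # mean squared error over all pixels
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For instance, a uniform pixel error of 0.1 on a [0, 1] image corresponds to 20 dB, so the ~1 dB gaps in the tables above reflect substantially smaller reconstruction errors.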
Rebuttal 1: Rebuttal: ## Response to all reviewers and area chairs for a brief summary Dear reviewers and area chairs, We thank all reviewers and area chairs for their valuable time and comments. We are encouraged that: 1. Reviewer mQcX and Reviewer ymRm agree that our method is novel. 2. Reviewer ymRm thinks our experiments are extensive, and Reviewer w6Fg thinks the ablation study is complete. 3. All reviewers recognize that our method achieves state-of-the-art performance. We have responded to each reviewer individually to address any comments. We would like to give a brief summary. 1. We explain the **reason for applying the diffusion model** (DM), and conduct additional experiments (comparing it with Transformer) to demonstrate **its superiority and importance**. Meanwhile, we also analyze the rationality of the loss for DM. 2. We provide comprehensive **comparisons with other DMs** for image deblurring in terms of performance and complexity. 3. We provide analyses and experiments to clarify the **differences** between our method and DiffIR, and show the **novelty** of our work. 4. We clarify the **details of our method**, including the structure and parameters of DM, differences from Restormer, and the settings of models on different datasets. 5. We analyze and experiment on **latent compression**. Meanwhile, we analyze our shortcomings and follow-up directions regarding **one-stage** training. 6. We provide the **latency comparisons** and analyze more on priors. 7. Finally, we provide more **visualizations** to enhance the solidity of the work, including the diffusion process and some failure cases. We thank all reviewers and area chairs again! Best, Authors Pdf: /pdf/c8947445b87b0695c236831d15db6e007eb84a6b.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Bayesian Learning via Q-Exponential Process
Accept (poster)
Summary: This paper proposes a generalization of Besov processes to higher dimensions that satisfies the stochastic process constraints that previous works could not satisfy. This is achieved by finding the right radius density function so that the corresponding elliptic distribution satisfies Kolmogorov's extension theorem. They then demonstrate how one may do Bayesian modeling and inference with this Q-EP process and provide experiments demonstrating their results. Strengths: * Very interesting stochastic process family that I can see having a lot of use cases * Compelling experimental results showing the efficacy of their method Weaknesses: * The experimental results use pretty small images. How well do these methods scale up to larger images and high-dimensional domains more generally? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * How should one choose q? Should q=1 always be the default starting point? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the nice comments and the generous support. We have three CT examples in Section 4.2: the Shepp–Logan phantom is of size $128\times 128$, and the other two human-body-part CT images are of size $512\times 512$. These are pretty standard sizes for images in machine learning. Note that the discrete dimension is $d=128^2$ or $512^2$. We need to specify covariance matrices of size $d\times d$ for GP and q-EP, which are extremely large ($512^2\times512^2$, i.e., $68,719,476,736$ entries). Therefore, we did dimension reduction by partial eigendecomposition (taking the $L=2000$ largest eigenvalues) using randomized algorithms (Joel A. Tropp, ACM 204 CalTech). If by "larger images" the reviewer means super-resolution satellite images, the methodology should still work in theory, but care needs to be taken when implementing it on powerful GPUs or TPUs. As commented in the conclusion, $q=1$ is often adopted for q-EP (and the Besov process as well) to impose more regularization, as opposed to $q=2$, which corresponds to GP. We studied the effect of the parameter $q$ in Figure C.6 in the supplementary materials by varying $q\in (0,2]$. We found that smaller $q$ led to sharper reconstruction of the blurred satellite image. In general, we could do cross validation to choose $q$ or even impose a hyper-prior on $q$ for a fully Bayesian treatment, which is expected to be more complicated and will be a good future direction. --- Rebuttal Comment 1.1: Title: Reply to authors Comment: Thank you for your response. I will maintain my score.
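The dimension-reduction step mentioned above (keeping only the largest eigenvalues of a huge covariance matrix via a randomized algorithm) can be sketched as follows. This is our minimal single-pass illustration in the Halko–Martinsson–Tropp style, not the authors' actual implementation; the function name and the oversampling choice are ours:

```python
import numpy as np

def randomized_eigh(C, L, n_oversample=10, seed=0):
    """Approximate the top-L eigenpairs of a symmetric PSD matrix C using a
    randomized range finder plus one step of subspace iteration."""
    rng = np.random.default_rng(seed)
    d = C.shape[0]
    k = min(d, L + n_oversample)
    Q, _ = np.linalg.qr(C @ rng.standard_normal((d, k)))  # sketch the range of C
    Q, _ = np.linalg.qr(C @ Q)                            # one refinement step
    w, V = np.linalg.eigh(Q.T @ C @ Q)                    # small k-by-k problem
    idx = np.argsort(w)[::-1][:L]                         # keep the L largest
    return w[idx], Q @ V[:, idx]
```

Only matrix–vector products with C are needed, which is why this scales to covariance operators far too large to decompose exactly.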
Summary: Motivated by the correspondence between Gaussian Process priors and ridge regularization for non-parametric regression problems, the authors in this paper develop a stochastic process prior, namely the $\textbf{Q-exponential (Q-EP) process}$, which can correspond to $\ell_q$-regularization. Specifically, by starting from multivariate $q$-exponential distributions, the authors verify a Kolmogorov Consistency criterion to eventually develop the aforementioned process. Subsequently, the authors further provide justifications of how the Q-EP process allows more flexible modeling of covariance kernels compared to parallel Besov processes also designed for similar tasks and derive a posterior predictive formula. Lastly, the benefits of the prior are demonstrated through numerical examples pertaining to problems in functional data analysis, image reconstruction, and solving inverse problems. Strengths: Offers a detailed probabilistic description and construction of a stochastic process prior for non-parametric Bayesian regression which (i) corresponds to $\ell_q$-regularization; (ii) is developed from easy-to-understand first principles; and (iii) allows easy computation of posterior predictive distributions. Weaknesses: Theoretically, it was not discussed in which type of non-parametric problems and function spaces one might get better rates of posterior contraction using the developed process type priors compared to Gaussian process priors. Most of the theory is basic probabilistic calculations and quite straightforward. Technical Quality: 3 good Clarity: 3 good Questions for Authors: There is substantive literature on "minimax optimal" posterior contraction over standard function spaces (e.g. Sobolev, Besov etc.) 
results for suitably designed Gaussian process priors -- can the authors discuss whether there are some setups of non-parametric regression problems where some version of the priors developed here offers optimal posterior contraction over some function spaces? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None noted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for raising good points in the "Weaknesses" and "Questions" sections. Q-EP is proposed as a nonparametric prior for flexible Bayesian models including regression, classification, density estimation, inverse problems, etc. The motivation is to impose more regularization than a Gaussian process (GP) on function spaces while enjoying similar tractability (of correlation and prediction) to GP. The function space is assumed to be $L^q$, and we require that the kernel $\mathcal{C}$ be trace-class (having summable eigenvalues) so that the quadratic form $r(u)=\langle u, \mathcal{C}^{-1}u\rangle$ and hence the process are well-defined. We are aware of Professor Ghosal, Professor Van Der Vaart and their collaborators' seminal works on posterior contraction theory for non-parametric Bayesian models. We actually have been working on similar theories for q-EP and a spatiotemporal Besov extension that relies on q-EP. However, given the limited space of 9 pages, we decided to focus on the introduction of q-EP, its application to nonparametric modeling, and demonstration of its advantages over GP when modeling subjects with abrupt changes or sharp contrast (such as edges in images). We leave the rigorous treatment of contraction theory to a journal submission. We thank the reviewer for raising such a good question on comparing the contraction rates between q-EP and GP, and we will investigate it in our ongoing work.
Summary: As a generalization of GP, it is important to construct a stochastic process (prior) that can express various degrees of smoothness. As such a process, the Besov processes have been proposed, but they are defined in the form of a series expansion, and the corresponding probability distribution is not given in an easy-to-handle form. In this work, the q-exponential process is proposed, which addresses the drawback of the conventional Besov process. A probability distribution called the q-exponential distribution (q-ED) that facilitates sampling from the Besov process is proposed. It is extended to the multivariate case in a manner consistent in the sense of marginalization. q-ED is expressed as an elliptic distribution, which also satisfies exchangeability, thus satisfying the conditions of Kolmogorov's extension theorem, and the existence of the corresponding stochastic process is guaranteed. The KL expansion shows that the q-exponential process (q-EP) corresponding to the q-ED can be defined and has a series representation almost equivalent to the Besov process. The q-EP has the strong advantage that the posterior predictive distribution can be constructed by MCMC, and Bayesian regression can be performed. Strengths: It is an important contribution from the viewpoint of statistical modeling to define an explicit probability distribution as q-ED and to provide a sampling method for Besov processes described in series representation. q-ED is first defined as a one-dimensional probability distribution and then extended to a multivariate distribution, paying attention to consistency. The method of extending to a stochastic process and the method of performing MCMC inference (posterior distribution calculation) based on the stochastic representation are standard and reasonable in the context of SDE simulation (e.g., a scaled mixture of normals). The proposed q-EP will have wide applicability, and some successful application results are presented in the paper. 
Weaknesses: Minor issue: It is claimed that "we have less control on the correlation strength once the orthonormal basis is chosen". It is true, but when we are free to choose the orthonormal basis, I feel the conventional Besov process has equivalent or even better freedom for specifying the correlation structure compared to the proposed q-EP. I acknowledge that specifying the correlation structure via the covariance function is handy, but in terms of the degree of freedom, it is fair to consider the case where we can freely choose the basis. It is claimed in line 218 that "codes will be publicly available at to-be-released", hence currently I cannot evaluate the reproducibility of the experimental results. Very minor presentation issues: - Eq.(5) and elsewhere, p(u) should be treated as a function of u, not r. I mean, for example in Eq.(5), p(u) = k_d |C|^{-1/2} g(r(u)). - F_i in Theorem 3.1 should be defined. - \mathbb{R}^k in Eq.(7) should be \mathbb{R}^{n} - References should be properly described. For example, ref [1] does not have a link to the project, and ref [12] lacks bibliographic information. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1: In my understanding, the proposed q-EP is essentially equivalent to the Besov process, and the major difference is its explicit control of the covariance structure. If this is the case, what is the main reason for the performance gap between Besov and q-EP in experiments, particularly for image reconstruction? If not, please reveal the difference between the Besov process and q-EP. In particular, does q-EP include the Besov process as a special case? Q2: Related to the problem mentioned in 6.Weakness, I'm not sure the experimental setting in subsection 4.1 is fair. For the Besov process, the Fourier basis is chosen and fixed. It is plausible, but the inferior performance of the Besov process could be attributed to a bad choice of basis. Please validate this experimental setting in more detail. 
Q3: I don't understand the meaning of the second sentence in "Introduction". How the "High-dimensional objects" can be viewed as "evaluation of proper functions", and what is the "proper function"? Q4: What is the difference between "d^{\star}" in line 34 and "d" in other places? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I'm concerned that the scale mixture expression is not possible for all q. See, e.g., M. West, ``On scale mixtures of normal distributions'', Biometrika (1987). I found that the range 0 < q < 2 is specified in the Appendix, but should be explicitly stated in the body of the paper to make the applicability of the methodology clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for supporting the contribution and the potential impact of q-EP. We agree with the reviewer that the correlation strength of the Besov process can be configured through the choice of basis functions $\{\phi_\ell(x)\}$, as spelled out in Equation (12). But it is just less straightforward than specifying it in the kernel function $C(\cdot,\cdot)$, as is usually done for a Gaussian process. We will reword the relevant sentences to reflect this. The Github repository hosting all the codes is currently private but will be made public when the work gets published. However, along with the submission, we provided a zip file in the supplementary materials containing Python codes for a demo and an example of reconstructing a blurred satellite image. All the codes in the supplement are anonymous and accompanied by a `readme` help document, so the reviewer should be able to reproduce some results by following the instructions. We hope the reviewer can understand and find it helpful. We thank the reviewer for the careful reading, and we will fix all the minor issues raised in "Weaknesses". We also appreciate all the questions and now answer them one by one: Q1) Yes, the statement about the relationship between q-EP and the Besov process is right. As highlighted in the introduction, q-EP can be viewed as a probabilistic definition of the Besov process with an explicit specification of correlation strength and a tractable prediction formula. Their representation equivalence is further elaborated in Theorem 3.4 and Remark 2. Due to their difference in mathematical format (probabilistic distribution vs series representation), they behave differently in numerics. We found q-EP to be superior to Besov in relatively lower-dimensional cases (see Figures 3, 4 and Tables 1, 2), and they become more similar when the dimensions go much higher and dimension reduction (partial eigendecomposition of C) is implemented (see Figure 5 and Table C.2). 
We agree that it is an interesting phenomenon and will investigate it in future work. Q2) We tried multiple wavelet bases including Hard, Shannon, Meyer, and Mexican Hat, but none of them generated better results. We could include these reconstruction results in the supplementary materials in the revision. Q3) Sorry for the confusion. What we tried to say is that each image can be viewed as a function defined on a bounded domain whose values are the pixels. We will revise this sentence to avoid confusion. Q4) $d^\star$ in line 34 is the dimension of the space where subjects of interest are defined. For example, in our numerical examples, $d^\star=1$ for the time series and stocks and $d^\star=2$ for the various CT images, while $d$ in other places refers to the discrete dimension of the processes. For example, $d=N=200$ is the number of time points for the time series and $d=n^2=128\times 128$ is the image size for CT images. We will clarify them in the revision. Also thanks for pointing out the condition $0<q<2$ for the scale mixture result only mentioned in the appendix. We will make it explicit in the main text when revising the paper. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. My concerns including the reproducibility issues and equivalence to the Besov process in series representation are resolved; I now more confidently support this work. I raised my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: We are very thankful for the reviewer's support! Really appreciate all the constructive comments.
Summary: This paper proposed a new random process prior that corresponds to estimating parameters with $\ell_q$ penalty. The process, named Q-EP, can be used to provide a shaper penalty than the standard Gaussian process. Empirical experiments show the practical use case for Q-EP. Strengths: The derivation of Q-EP looks solid to me. I am not sure about the novelty of the work. The experimental results look convincing, though I do not know if there is a standard benchmark. Weaknesses: It is hard for people unfamiliar with the field to understand the paper. In particular, the authors do not give a fair amount of text to explain the background. It is hard to relate the abstract math in the introduction with the examples provided (e.g. Figure 1). There is no clear statement claiming the connection between Q-EP with the $L_p$ regularization. The Bayesian model is not introduced until page 6. In sum, I suggest a major rewriting of the paper, potentially with a different organization. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What is your insight when comparing the Bayesian models with various denoising works in deep learning? I believe they are very different in many aspects. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I do not see any potential negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's criticisms. As mentioned in the introduction, the novelty lies in the first probabilistic definition of the Besov process (which is widely used in imaging analysis and Bayesian inverse problems) with an explicit specification of correlations and a tractable prediction formula. For nonparametric regression with novel priors, mean squared error (MSE) (or root MSE, RMSE) and log-likelihood (LL) (see Table 1) are standard measures to compare, as included in the paper by Professor Zoubin Ghahramani's group [37], the paper by Bankestad et al. [4], and the seminal book by Rasmussen and Williams [Gaussian Processes for Machine Learning, 2006]. We also added multiple standard quality metrics in imaging analysis (PSNR, SSIM, and HaarPSI, as in Table 2). They all support our claimed numerical advantages for q-EP. While we are open to any other metrics the reviewer might suggest, we are also willing to defend against "limited evaluation", given the 4 time series examples, 3 CT image reconstructions, and one Bayesian inverse problem regarding an advection-diffusion equation reported in this paper. Regarding the connection between q-EP and $L_q$ regularization, we took it for granted, which might have caused some confusion. It is similar to the relationship between GP and $L_2$ regularization: the negative log-density of the Gaussian distribution yields the $L_2$ regularization term usually added to the objective function, that is, $\frac{1}{2}\Vert u-\mu\Vert^2$. In the q-EP scenario, that $L_q$ regularization term is $\frac{1}{2} r(u)^{\frac{q}{2}}$ with $r(u)=\langle u-\mu, \mathcal{C}^{-1}(u-\mu)\rangle$, plus a comparatively smaller term, $\log r$. We will add an explicit explanation and more background relating the math to Figure 1 to the main text. 
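In code, the correspondence described above could look like the following hypothetical illustration (`lq_reg` is our name; the normalizing constant and the smaller $\log r$ correction are omitted, per the rebuttal's own simplification):

```python
import numpy as np

def lq_reg(u, mu, C, q):
    """L_q-type regularization term induced by the q-exponential density:
    0.5 * r(u)^(q/2), with r(u) = (u - mu)^T C^{-1} (u - mu).
    q = 2 recovers the Gaussian (ridge / L_2) penalty."""
    diff = u - mu
    r = diff @ np.linalg.solve(C, diff)   # Mahalanobis-type quadratic form
    return 0.5 * r ** (q / 2.0)
```

With $C = I$ and $\mu = 0$, $q=2$ gives $\frac{1}{2}\Vert u\Vert^2$, while $q=1$ gives the sharper $\frac{1}{2}\Vert u\Vert$, illustrating why smaller $q$ imposes stronger regularization near zero.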
Regarding the structure, we admit that we spent some space on explaining the marginalization consistency -- we did that to emphasize its importance: the univariate q-exponential distribution may fail to generalize to a valid stochastic process without care, as has happened in other literature -- and we think that is part of our novelty. The Bayesian model is discussed after the new prior q-EP is fully introduced. This is in the same spirit as Rasmussen and Williams' GP book, which does not jump to the model at the beginning. We thank the reviewer for this good question. First, the paper introduces a nonparametric modeling tool with the novel q-EP prior imposing more regularization than GP. It can be applied to imaging analysis but is not limited to image denoising. Second, one of the advantages of the proposed Bayesian models over the majority of optimization-based deep learning techniques is uncertainty quantification (UQ) (refer to Figures C.3 and C.4). UQ is of scientific interest and is a natural byproduct of Bayesian approaches, but not the main focus of many vanilla denoising works in deep learning. Last but not least, these two approaches can interact and evolve into new methods. For example, the q-EP prior can provide a new loss function other than MSE for training a denoising neural network (NN). On the other hand, there might exist a certain form of NN whose limiting behavior mimics the q-EP prior, similar to the relationship between DNN and GP [Neal 1994a, Lee et al. 2018 ICLR]. This is also an interesting direction the authors would like to pursue. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I have raised my score accordingly. Nevertheless, I still recommend a thorough modification of the paper so it can be accessible to readers unfamiliar with the topic. The organization of a paper is generally different from the organization of one chapter in a book. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising the score. We are glad that the reviewer has accepted our clarification on the more important questions, such as the connection between Q-EP and the $L_p$ regularization, sufficient numerical evaluations, etc. Regarding the structure, we agree with the reviewer on the statement "The organization of a paper is generally different from the organization of one chapter in a book." (We cited Rasmussen and Williams' GP book to explain our writing style but did not mean to literally follow its organization). To improve the paper's clarity, we will take the reviewer's suggestion to revise the introduction with more background and elaboration, as also mentioned by Reviewer V1qe. We would also appreciate it if the reviewer could elaborate on "a thorough modification" with some more specifics.
Rebuttal 1: Rebuttal: We thank all the anonymous reviewers for their careful reading, constructive advice, and criticism. All the reviews acknowledge the novelty of the proposed q-EP as a nonparametric prior and its potential impact in statistics and machine learning applications. Some demand clarification and presentation improvements, and others suggest theoretical exploration. All their comments and opinions are highly appreciated. Below we have made one-to-one responses and hope the reviewers and program/area chairs find them useful when evaluating the paper. We will be happy to discuss more with the reviewers in the discussion stage. Thanks!
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors propose the 'q-exponential process', a stochastic process interpretation of Lq function regularization, which can induce sparsity in the solutions to optimization problems and can be used as a functional prior for Bayesian applications in time series regression, image reconstruction, and other applications. Strengths: The proposed q-exponential process has several important benefits: - enforcing sparsity or more sharpness in function space - control of correlation structure, similarly to GP - flexibility in choice of kernel - conjugacy for posterior prediction with appropriate likelihoods, in contrast to the Besov case Weaknesses: The paper could provide more background on the problem setting and Besov processes, as the current introduction is difficult to follow. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It may be interesting to further unpack Remark 3 regarding the similar mean & covariance for Q-EP and GP, for example showing how this affects predictions for Q-EP and GP in a tractable toy example. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors note the need for grid search over hyperparameters such as regularity parameter and smoothness parameter. The method is tested only for q=1 and it would be interesting to explore other values. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestion on more background about Besov processes. In addition to the existing introduction, which includes its mathematical definition, we will elaborate more on its implications and applications in imaging analysis. We also appreciate the reviewer's advice on expanding Remark 3. In the revision, we will restore more mathematical details that were in the initial draft but hidden from the submission for brevity. However, we have two tractable toy examples in Section 4.1 for which we compare the predictions for Q-EP and GP in Figure 3(b) and in Figure C.2(b) as well. Regarding the limitation, we emphasized in the manuscript that $q=1$ is the option most often adopted (Conclusion). We did explore the effect of the regularization parameter $q$ for a spectrum of different values in Figure C.6 and mentioned that briefly in the paragraph "Connection to existing work" in the introduction. We will make it more explicit with proper emphasis.
null
null
null
null
null
null
On Calibrating Diffusion Probabilistic Models
Accept (poster)
Summary: This paper investigates calibrating diffusion models. They notice that the expected data score equals zero. Since models usually do not learn this exactly, they introduce a calibration term to subtract from a learned score. The new objective now includes the expected model score and performs better compared to the uncalibrated models. Strengths: The paper is well written and the method is presented clearly. The theoretical results are convincing. The main appeal of the method is its simplicity. The solver part of the results is solid. Weaknesses: Since the focus is on post-processing, the method is best suited for discrete-time diffusion (like DDPM). Otherwise, it is unclear how the expected values are saved for continuous t. Line 221 mentions the selection of t depends on the sampling schedule of the solver, but the solver then cannot be adaptive. The alternative of learning the network as in eq. 14 would solve this but seems overly complicated. As can be seen from Fig. 1, the lines are simple, so fitting some simpler model would make more sense. The results show that the number of solver evaluations is lower and the likelihood is better. Likelihood is evaluated in a weird way. The only thing that is shown is that the expected learned score is not zero for "uncalibrated" models, which is somewhat tautological. In this case, some other relevant metric like FID would be better. There is no empirical demonstration that the diffusion models are uncalibrated, in a similar way that confidence-accuracy diagrams show this for classification. One could learn some simple synthetic dataset from a known distribution. It is not clear why this would be a relevant task, as presented. I assume guidance would further uncalibrate the models but it is used for better samples. Perhaps the image domain is not the best choice here. Eq. 13 is presented as a fact, whereas it seems this is to be demonstrated. 
The main issue is the limited motivation for improving generative models on likelihood and a bad choice of experimental setup. Something other than images could solve both of these issues. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why is FID going up with the number of samples in Table 3 in both columns? - Can you incorporate this loss somehow during training so that the final model is calibrated? - Does this hold for any data distribution? - Could using 20 and 50 diffusion steps for generation be too little? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions, we have uploaded a rebuttal PDF. ***W1: It is unclear how the expected values are saved for continuous t and adaptive sampling schedules*** In our work, we present two methods for implementing calibration: post-training computation and dynamical recording. Post-training computation can be applied to both discrete and continuous timestep $t$ for non-adaptive sampling schedules. For adaptive sampling schedules, directly applying post-training computation may necessitate additional tricks such as interpolation. Dynamical recording can thus be used for both discrete/continuous timesteps and non-adaptive/adaptive sampling schedules. The learning framework in Eq. (14) is simply a regression between the recording network and the score outputs, which can be easily implemented with a few lines of code. Furthermore, because we use a shallow MLP for recording (described in Lines 312-322), which is relatively lightweight in comparison to diffusion models, the extra computational and memory costs of dynamical recording are less than $1\\%$. ***W2: Likelihood evaluation and showing results on other relevant metrics like FID would be better*** As indicated by Eq. (11), the entire area under the curves (i.e., integral w.r.t. timestep $t$) in Figure 1 is counted as the likelihood improvements led by our calibration, for both SDE and ODE solvers. In **Table A** of the rebuttal PDF, we assess sample quality using FID and other relevant metrics including sFID, inception score (IS), precision and recall. As can be seen, our calibration consistently improves sample quality under different metrics, and we will present full results in the revision. 
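For discrete timesteps, the dynamical recording described under W1 reduces to maintaining a running per-timestep mean of the model's score outputs (a lookup table is the simplest "recording network", and for that special case the running mean is the closed-form solution of the regression in Eq. (14)). A hypothetical sketch, with the class name and all numbers invented for illustration:

```python
from collections import defaultdict

class ScoreMeanRecorder:
    """Running per-timestep mean of model outputs, usable as a calibration term."""

    def __init__(self):
        self._sum = defaultdict(float)
        self._count = defaultdict(int)

    def record(self, t, score):
        # Called whenever the diffusion model emits a score at timestep t.
        self._sum[t] += score
        self._count[t] += 1

    def calibration_term(self, t):
        # Empirical estimate of E[score at timestep t], subtracted at sampling time.
        return self._sum[t] / self._count[t]

recorder = ScoreMeanRecorder()
for s in [0.1, 0.3, 0.2]:  # synthetic scalar "scores" observed at timestep 5
    recorder.record(5, s)
print(recorder.calibration_term(5))  # ≈ 0.2
```

Because the stored quantities are just per-timestep sums and counts, this keeps the memory and compute overhead tiny relative to the diffusion model itself, consistent with the < 1% figure quoted above.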
***W3: No empirical demonstration that diffusion models are uncalibrated, in a similar way as confidence-accuracy diagrams*** Although we use the same term, there is no direct relationship between the confidence-accuracy calibration studied in discriminative learning and the generative calibration studied in our paper. Intuitively, we observe some essential properties that should hold for any data scores (e.g., the expected data scores should be zero) and define a diffusion model as 'uncalibrated' if its learned model scores deviate from these essential properties. The mechanism of our calibration, such as Eq. (7), is more similar to variance reduction techniques (though not the same; we are more like DSM-loss reduction), which exploit existing observations to obtain better (score) estimators. ***W4: The main issue is the limited motivation for improving generative models on likelihood*** Improving model likelihood has long been a primary goal in generative learning; for example, generative models such as variational autoencoders (VAEs), energy-based models (EBMs), normalizing flows, and autoregressive models are all trained by maximizing likelihood. Improving model likelihood has also been well-motivated and widely studied in diffusion models [1,2,3,4,5]. Model likelihood is a principled metric that indicates how well a model distribution is learned towards the data distribution (under KL divergence) and can be practically applied for data compression and density estimation. ***Q1: Why is FID going up with the number of samples in Table 3 in both columns?*** This phenomenon may be related to ODE solvers. It is found that using more neural function evaluations (NFE) for ODE solvers may increase the FID score [6,7], implying that an overly accurate estimation of the score may not necessarily decrease the FID score. As a result, in Table 3, using more training/generated data to obtain a more accurate estimate of the score mean does not result in a lower FID score.
We will conduct ablation studies to further investigate this phenomenon in the revision. ***Q2: Can you incorporate this loss during training so that the final model is calibrated?*** Yes, we could intuitively incorporate a loss of, say, $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]\\|$ during training. However, since the expectation operator $\\mathbb{E}\_{q\_{t}(x\_{t})}$ is inside the norm operator $\\|\\cdot\\|$ (the norm operator is convex), a potential concern is that we can only obtain a biased estimation of $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]\\|$ during training using samples in a mini-batch, and the bias could be large for small mini-batch sizes. Post-training computation and dynamical recording, on the other hand, can use more data samples to obtain asymptotically unbiased estimates. ***Q3: Does this hold for any data distribution?*** Yes, our conclusions apply to any data distribution considered in the diffusion model literature (satisfying mild regularity conditions such as $q\_{0}(x_{0})\\rightarrow 0$ when $x\_{0}\\rightarrow \\infty$, as assumed by default in [8, 9]). ***Q4: Could using 20 and 50 diffusion steps for generation be too little?*** Existing ODE solvers can generate high-quality images with $20 \\sim 50$ (or even $10 \\sim 20$) diffusion steps [6,7]. ***References:*** \ [1] Kingma et al. Variational Diffusion Models. NeurIPS 2021 \ [2] Song et al. Maximum Likelihood Training of Score-Based Diffusion Models. NeurIPS 2021 \ [3] Huang et al. A Variational Perspective on Diffusion-Based Generative Models and Score Matching. NeurIPS 2021 \ [4] Vahdat et al. Score-based Generative Modeling in Latent Space. NeurIPS 2021 \ [5] Lu et al. Maximum Likelihood Training for Score-Based Diffusion Odes by High Order Denoising Score Matching. ICML 2022 \ [6] Lu et al. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. 
NeurIPS 2022 \ [7] Karras et al. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022 \ [8] Ho et al. Denoising Diffusion Probabilistic Models. NeurIPS 2020 \ [9] Song et al. Score-Based Generative Modeling through Stochastic Differential Equations. ICLR 2021 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Many of my questions are adequately addressed. I still think Q1 should be investigated; W3 and Q3 can be demonstrated on some "tabular" data. Your images in the attached pdf (Figure A) are not very convincing. Images 1, 2, 6, 8 offer no improvement in my opinion. However, I am satisfied with the rest of the response and the numerical results, so I would not mind acceptance. I'm raising my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your comments and raised score. We will conduct ablation studies to further investigate Q1, and attempt to empirically validate our calibration on data modalities other than vision (e.g., tabular data). We will also provide additional visualization to demonstrate the effectiveness of our calibration.
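The mini-batch bias discussed under Q2 above (estimating $\|\mathbb{E}[\epsilon]\|$ from small batches when the expectation sits inside the convex norm) can be illustrated numerically. The following is a toy one-dimensional sketch with purely synthetic numbers, not the paper's actual setup:

```python
import random

random.seed(0)

# Toy 1-D "predicted noises" whose true mean is zero, so ||E[eps]|| = 0.
eps = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def norm_of_mean(samples):
    return abs(sum(samples) / len(samples))

# Large-sample estimate of ||E[eps]||: asymptotically unbiased, close to 0.
full = norm_of_mean(eps)

# Mini-batch estimate: average ||mean of a batch of 4|| over all batches.
# Because the norm is convex, Jensen's inequality biases this upward.
batch = 4
n_batches = len(eps) // batch
mini = sum(norm_of_mean(eps[i * batch:(i + 1) * batch])
           for i in range(n_batches)) / n_batches

print(full)  # near 0
print(mini)  # near E|N(0, 1/4)| = 0.5 * sqrt(2/pi), roughly 0.40
```

This is why the rebuttal argues for post-training computation or dynamical recording over many samples rather than a per-mini-batch training loss: the large-sample estimate converges to the true (zero) value, while the small-batch estimate stays bounded away from it.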
Summary: The paper proposes a simple procedure to improve the calibration of pre-trained diffusion models. Diffusion models have achieved strong practical performance, but its score estimation is often seen as a black box. The paper sheds some light on the issue, by detecting the inherent lack of calibration of the current state-of-the-art models, and proposing a simple procedure that reliably drives up the lower bound for model likelihood. Empirical evaluations showcase the efficacy of the method. Strengths: The paper discusses the problem of calibration in diffusion models, a question long overdue in this area of research: while diffusion models have achieved eye-catching generative results, little is known about whether the score function is a loyal depiction of the ground truth, or even a gradient field in itself. The paper gives theoretically sound justifications of by framing the evolution of the score function as a martingale, and proposes a simple yet empirically quite effective way that adjust pre-trained diffusion models closer to the underlying score function. The overall writing and presentation of the paper is sound. Weaknesses: I think Section 3.4 is unsatisfactory, as I think the proper conclusion should be that the uncalibrated DPM training objective do not inherently minimize the mean of the expected predicted noises. My 2 more specific comments are listed below. - The logic between the explanations in 3.4 and Figure 2 seems confounding. I have understood the main points made in the paper after several reads, so I think the paper could adopt an easier to follow explanation. 
I believe the author's main point is (i) the optimal score attainable via diffusion modeling is capped by finite training data, so while the optimal score $\nabla \log q_t(x_t;\mathbb{D})$ has expectation zero with respect to the marginal distribution conditioned on data, it likely does not have expectation zero when given the entirety of $q_t$; and (ii) state-of-the-art diffusion models do not inherently minimize $\lVert\mathbb{E}_{q_t} s_\theta^t \rVert$. I believe those two points are quite separate, but lines 188-191 present them as one singular point, which makes it seem as if the paper treats diffusion modeling as the gold standard of learning the scores, and that even the true score carries a bias in a finite-data setting. - I admit that the finite-data bias can present a problem, but it is not what the calibration trick is for. Frankly speaking, the calibration trick does not manufacture new training data just because it is more calibrated, but only pushes the score estimate _towards_ the optimal score estimate attainable in a finite-data regime, i.e., $\nabla \log q_t(x_t;\mathbb{D})$. It is with this argument that I believe the calibration trick will attempt to modify the score estimate even when presented with the ground truth $\nabla \log q_t(x_t)$. Therefore, I believe the main point is not about dataset bias, but that common score matching objectives do not take into consideration the minimization of $\lVert\mathbb{E}_{q_t} s_\theta^t \rVert$. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: None. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I believe that the authors can make a few improvements in their presentation of empirical results.
I understand that page limit prohibits the addition of more content, but I believe that some plots deserve to be in the main body of text. - Figure 1 showcases the potential lack of calibration for pre-trained DPMs, but does not show the expectation of predicted noise _after_ calibration. I believe that one can show the effectiveness of calibration by comparing the 2 images and discovering the calibrated model has a much lower expected value of the predicted noise. - The paper could benefit from also presenting some generated images as it shows any improvements in the most straightforward way. Currently there is a figure in the appendix about how calibration "reduce ambiguous generations", but there are too many examples to see a measurable difference. I believe a smaller plot highlighting the said ambiguous generations deserves a spot in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF. ***W1: About Section 3.4*** Thank you for your insightful analyses, which are greatly helpful to us. Actually, we conducted preliminary trials on incorporating a loss of $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]\\|\^{2}$ during training. However, since the expectation operator $\\mathbb{E}\_{q\_{t}(x\_{t})}$ is inside the norm operator $\\|\\cdot\\|$ (the norm operator is convex), we can only obtain a biased estimation of $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]\\|$ during training using samples in a mini-batch, and the bias could be large for small mini-batch sizes. In contrast, our calibration using either post-training computation or dynamical recording can exploit more data samples to obtain asymptotically unbiased estimates of $\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]$ (as stated in Line 204 for dynamical recording). According to your suggestions, we will better clarify the claims of Section 3.4 and Figure 2 in the revision. ***Limitations*** As to Figure 1, the expectation of predicted noise after calibration is zero as $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})-\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{\\epsilon}\^{t}\_{\\theta}(x\_{t})]]\\|=0$ for noise prediction and $\\|\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{s}\^{t}\_{\\theta}(x\_{t})-\\mathbb{E}\_{q\_{t}(x\_{t})}[\\boldsymbol{s}\^{t}\_{\\theta}(x\_{t})]]\\|=0$ for score prediction. Therefore, as indicated by Eq. (11), the entire area under the curves in Figure 1 is counted as the likelihood improvements led by our calibration. In **Figure A** of the rebuttal PDF, we provide examples demonstrating that our calibration could reduce ambiguous generations, such as eliminating generations that resemble both horse and dog. 
Intuitively, uncalibrated scores will contain redundant information from all classes and lead to the generation of ambiguous features. --- Rebuttal Comment 1.1: Title: Post-rebuttal response Comment: I thank the author for explaining points I laid out in the reviews: I maintain the same score assessment for the paper as I already recommend acceptance. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed comments, especially the insightful suggestions about Section 3.4. We will incorporate them into the final revision of our paper. Thank you again!
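The zero-mean identity invoked in the reply above, $\mathbb{E}[\epsilon - \mathbb{E}[\epsilon]] = 0$, is easy to verify directly. A one-dimensional toy sketch with synthetic numbers (a deliberate miscalibration of 0.3 is injected):

```python
import random

random.seed(1)

# Synthetic uncalibrated "scores" whose expectation deviates from zero by 0.3.
scores = [random.gauss(0.3, 1.0) for _ in range(10_000)]

# The calibration term: the (empirical) expected score.
mean_score = sum(scores) / len(scores)

# Calibrated scores: subtract the expected score, the operation discussed above.
calibrated = [s - mean_score for s in scores]

# By construction E[s - E[s]] = 0, so the expectation of the calibrated
# scores vanishes (up to floating-point error) even though the raw mean did not.
residual = abs(sum(calibrated) / len(calibrated))
print(residual)
```

This is why the curves of Figure 1 collapse to zero after calibration: the subtracted term is exactly the quantity that was plotted.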
Summary: This paper introduces a general calibration technique for diffusion probabilistic models (DPMs). The authors derive a time-dependent calibration term that is independent of any particular input under different model parameterizations. This calibration term can be computed in advance and repeatedly used for sampling. The experimental results demonstrate the effectiveness of the proposed calibration method with a noticeable reduction of FIDs when taking 15-40 NFEs. Strengths: The proposed calibration method in this paper is novel, theoretically sound, and practically useful in that it in principle can be applied to all kinds of DPMs to improve sampling with little computational overhead. The authors provide a performance analysis with varying numbers of samples for approximating the calibration term using the MC method. The FID results suggest that 20K of MC samples are sufficient to estimate the calibration term and reach the best quality. In this sense, the pre-sampling computation seems not to be too expensive. Overall, this approach appears to be general and easy to implement. Therefore, I believe this paper is significant and worth acceptance in NeurIPS. Weaknesses: Although the calibration term is not $x_t$-dependent, it depends on time ($t$) and the model parameters ($\theta$). I am therefore concerned about the stability of the proposed approach. From Figure 1, we can basically confirm that the range of calibration terms drastically varies over different data sets. In this sense, if we apply the proposed approach to a new data set, there are at least three dynamic factors -- $t$, $\theta$, and data distribution $q_0(x_0)$ that could affect the performance of this calibration method. In regard to this, I suggest the author conduct more validation experiments to prove the stability and consistency of the proposed method over different time steps, models, and datasets. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What are the FID results of AFHQv2 64×64, FFHQ 64×64, and ImageNet 64×64? Following my concerns on stability, in the main paper, the authors state that they have conducted experiments on these datasets based on pre-trained models considering their top performance. However, I cannot find these numbers in the main paper and in the appendix. Why do you choose to only present CIFAR-10 and CelebA 64x64 in the main paper? I believe that an improvement on the ImageNet 64×64 is more convincing. I suggest the authors keep the FIDs of all these datasets in the main paper and compare them to the original DPMs. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The impact of this paper may be limited by the lack of comparison against other existing methods. I tend to categorize this paper into research that improves pre-trained DPMs generally at inference time. Along this direction of research, there are several relevant works, which I believe the author could easily compare to, i.e., (not trainable) PNDM [1], Analytic-DPM [2], and (trainable) BDDM [3]. [1] Liu, Luping, et al. "Pseudo Numerical Methods for Diffusion Models on Manifolds." [2] Bao, Fan, et al. "Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models." [3] Lam, Max WY, et al. "Bilateral Denoising Diffusion Models." Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF. ***W1 & Q1: The stability of the proposed approach and FID results of AFHQv2 64×64, FFHQ 64×64, and ImageNet 64×64*** Indeed, the gain of our calibration trick is proportional to the degree of 'uncalibration' for a diffusion model, or how far the uncalibrated learned scores deviate from the essential properties (e.g., the expected data scores should be zero). We will conduct more validation experiments to demonstrate the stability and consistency of our calibration. **Table B** of the rebuttal PDF contains the FID results for AFHQv2 64×64, FFHQ 64×64, and ImageNet 64×64. To reduce computational burden, we did not run full FID experiments on these datasets in the original main paper. In the revision, we will provide full FID results on AFHQv2 64×64, FFHQ 64×64, and ImageNet 64×64. ***Limitations*** Thank you for your constructive suggestions. Our calibration is conceptually compatible with these advanced samplers/solvers [1,2,3], because we focus on improving model score estimation and these methods focus on how to efficiently utilize model scores for better/faster sampling. In the revision, we will conduct empirical studies to ensure that our calibration is compatible with these methods. ***References:*** \ [1] Liu et al. Pseudo Numerical Methods for Diffusion Models on Manifolds. ICLR 2022 \ [2] Bao et al. Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models. ICLR 2022 \ [3] Lam et al. Bilateral Denoising Diffusion Models. NeurIPS 2021 --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thanks to the authors for a detailed response and for adding new experimental results. I am convinced that the proposed calibration trick is a general method applicable to diffusion models. I will keep my rating, leaning towards acceptance.
--- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed comments and suggestions. We will polish our paper further and incorporate new results into the final revision. Thank you again!
Summary: This paper presents a simple way to calibrate an arbitrary pretrained diffusion probabilistic model (DPM), which can reduce the score matching loss and increase the lower bounds of model likelihood. The authors observe that the stochastic reverse process of data scores is a martingale, from which concentration bounds and the optional stopping theorem for data scores can be derived. The proposed calibration method is easy to follow and can be performed only once, resulting in models that can be used repeatedly for sampling. The premise is that DPMs are by default uncalibrated (which they show experimentally), and that calibration can result in improved generation. This again is shown experimentally using the FID metric. —- Raised score from 4 to 6 post-rebuttal. On the whole it seems like the method can have some positive impacts - mostly in terms of model likelihood (with subsequent downstream benefits), and sometimes in terms of the quality of generated images. Strengths: - clear and concise presentation of the proposed calibration method - empirical validation of the method on multiple datasets, - derivation of concentration bounds and the optional stopping theorem for data scores Weaknesses: The biggest weakness is solely relying on model likelihood and the FID score for measuring generative performance. FID (Heusel et al., 2017) calculates the Wasserstein-2 (a.k.a. Fréchet) distance between multivariate Gaussians fitted to the embedding space of the Inception-v3 network for generated and real images. A major drawback of FID is its high bias: the sample size used to calculate FID has to be large enough (usually above 50K), and smaller sample sizes can lead to over-estimation of the actual FID. This can clearly be seen in Table 3. It's not a surprise that the model likelihood improves (since this is directly linked to calibration), but the link between model likelihood and generation quality is not at all clear.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In the supplementary material, some example generated images are given with/without calibration. I zoomed right in, but I struggled to see any difference between pairs of images - is there anything that demonstrates that calibrating improves matters? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have partially addressed the limitations in that they acknowledged that the model likelihood comes down but not necessarily FID, although they don't answer the question of whether it improves generation quality! Also there is a scaling issue with the method that is identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions, we have uploaded a rebuttal PDF. ***W1: Solely relying on model likelihood and the FID score for measuring generative performance*** First, we need to clarify that Table 3 is an ablation study on *the number of samples used to estimate the calibration term*, rather than the number of samples used to calculate the FID score. As stated in Section 4.1 (Lines 217-218), we use 50K samples to calculate the FID score for all experiments in our paper. In **Table A** of the rebuttal PDF, we assess sample quality using FID and other performance metrics including sFID, inception score (IS), precision and recall. As can be seen, our calibration consistently improves sample quality under different metrics, and we will present full results in the revision. ***W2: The link between model likelihood and generation quality is not at all clear*** There have been several works discussing the link between model likelihood and generation quality in diffusion models [1,2,3,4,5], and their observations are that there is usually a trade-off between model likelihood and generation quality (i.e., improving model likelihood may degrade generation quality, and vice versa). In contrast, our calibration could improve both model likelihood and generation quality at the same time. ***Q1: Is there anything that demonstrates that calibrating improves matters?*** In addition to the quantitative improvements in model likelihood and various generative metrics (FID, sFID, IS, precision and recall), we show in **Figure A** of the rebuttal PDF how our calibration can reduce ambiguous generations, such as eliminating generations that resemble both horse and dog. Intuitively, uncalibrated scores will contain redundant information from all classes and lead to the generation of ambiguous features. ***References:*** \ [1] Kingma et al. Variational Diffusion Models. NeurIPS 2021 \ [2] Song et al. 
Maximum Likelihood Training of Score-Based Diffusion Models. NeurIPS 2021 \ [3] Huang et al. A Variational Perspective on Diffusion-Based Generative Models and Score Matching. NeurIPS 2021 \ [4] Vahdat et al. Score-based Generative Modeling in Latent Space. NeurIPS 2021 \ [5] Lu et al. Maximum Likelihood Training for Score-Based Diffusion Odes by High Order Denoising Score Matching. ICML 2022 --- Rebuttal Comment 1.1: Comment: Thanks for the responses and clarifications. Ultimately it comes down to those images. I concur that 9 of them do look better with the calibration (the car image I don't really think is much different). Perhaps there are also other images that look better without calibration (i.e., the shown selection is one-sided)? Perhaps the differences would be more obvious on higher-dimensional images? I think the jury is still out on whether calibration is really useful here. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for the valuable feedback. The phenomenon shown in Figure A is NOT one-sided. According to our observations, the generated images are either *calibration looks better* or *little visually distinguishable difference before/after calibration*. We can hardly find an image where the version without calibration looks better. Following your suggestions, we will provide more visual examples on higher-dimensional images such as ImageNet 64×64 in the revision. Beyond visualization, higher model likelihood can directly benefit data compression and density estimation, whereas improved image quality (as measured by quantitative metrics) can benefit downstream tasks such as image editing/customization, semi-supervised learning, and improving adversarial robustness.
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a rebuttal PDF that includes: - **Table A**: Assessing sample quality using FID and other performance metrics including sFID, inception score (IS), precision and recall. - **Table B**: FID results for AFHQv2 64×64, FFHQ 64×64, and ImageNet 64×64. - **Figure A**: Selected examples demonstrating that our calibration could reduce ambiguous generations. Pdf: /pdf/b5d4c3aec500df19370b93f4929b0f38569322ab.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper makes an observation regarding the reverse process of diffusion probabilistic model, noting that the data score term is a martingale with respect to this process. A key contribution of the paper is a theorem to this effect and the associated proof. Leveraging this observation, the authors propose a calibration technique: one can calibrate a pretrained diffusion model at any time step by subtracting its expectation. Experimental results in the paper demonstrate that the calibrated score model achieves lower values of score matching objectives. Furthermore, the paper provides evidence that the calibrated score model yields higher evidence lower bounds. Lastly, the paper explains that similar conclusions hold true for the conditional score model. Strengths: S1. The paper makes a novel and significant observation concerning the martingale nature of the data score term in probabilistic diffusion models with respect to the reverse process. S2. The paper is clearly written, well-organized, and provides a key theorem and proof that are presented in a readily-followed fashion. S3. The innovative technique for calibration is soundly based on theoretically-derived principles, is relatively simple to compute, and achieves an experimental performance improvement. The method is extended to the conditional setting. S4. The paper provides visualizations of the expected predicted noises and associated discussion that helps the reader understand the inductive bias. The additional experimental analysis is a welcome addition (sensitivity to the number of training samples, performance using generated samples, dynamic recording performance). Weaknesses: W1. Although the proposed method achieves an improvement in terms of model likelihood, the experiments are less convincing that there is a meaningful practical improvement in terms of generated samples. 
There is an improvement in FID, but the sample images are presented in the appendix in such a way that it is almost impossible to discern a difference between the calibrated and uncalibrated models. W2. The reported experiments only assess sample quality using FID. Multiple papers have demonstrated how this metric can be misleading in some circumstances and have proposed alternatives, e.g., precision and recall metrics, that can be used in conjunction with FID to provide a clearer and more complete assessment of the capabilities of a generative model. W3. There is very little discussion of computational and memory requirements for the calibration. The paper could benefit from a much clearer discussion of the costs that are incurred in order to obtain the calibration benefit. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Q1. Can the authors provide a clearer commentary about the claimed subjective quality improvement? The generated images in Appendix C are extremely small. There is almost no discussion beyond “Our calibration could help to reduce ambiguous generations” – this is almost impossible for a reader to verify with the way the images are presented and it is not clear exactly what is meant. Q2. Could the authors comment on how the method performs for other performance metrics for generative models? Q3. What is the computational and memory cost (particularly in comparison to the original model)? i.e., does it increase the computational overhead by 30 percent? Is the memory cost an additional 20 percent? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper provides a very brief discussion of the limitations.
This is definitely useful, and the authors acknowledge some of the key limitations. On the other hand, it would strengthen the paper if the supplementary material contained a more in-depth discussion. In particular, additional discussion concerning overhead and whether the proposed method translates to genuine practical benefits would be welcome. The section could discuss the challenge of evaluating generative models and suggest potential research avenues that might help establish whether the calibration improvement translates to more applied settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF. ***W1 & Q1: Providing a clearer commentary on the improvement of image quality*** In **Figure A** of the rebuttal PDF, we provide examples demonstrating that our calibration could reduce ambiguous generations, such as eliminating generations that resemble both horse and dog. Intuitively, uncalibrated scores will contain redundant information from all classes and lead to the generation of ambiguous features. ***W2 & Q2: Generative performance under other metrics*** In **Table A** of the rebuttal PDF, we assess sample quality using FID and other performance metrics including sFID, inception score (IS), precision and recall. As can be seen, our calibration consistently improves sample quality under different metrics, and we will present full results in the revision. ***W3 & Q3: Computational and memory costs*** We provide two methods for implementing calibration in our work: post-training computation and dynamical recording. For post-training computation, we compute the score mean for each inference timestep (e.g., each timestep utilized by DPM-Solver) and store them for reuse. Therefore, the amortized computational cost for calibration is precisely $\frac{\mathcal{M}}{\mathcal{N}}$, where $\mathcal{M}$ is the number of samples used to calculate each score mean, and $\mathcal{N}$ is the number of new samples generated during inference. If we use $\mathcal{M}=20,000$ samples to calculate each score mean as ablated in Table 3, and we generate $\mathcal{N}=100,000$ new samples during inference, the amortized computational cost will be $20\%$; if we generate more samples, such as $\mathcal{N}=1,000,000$, the amortized computational cost will be $2\%$. As to the memory cost, since we first calculate score means and then store them as detached tensors, they consume negligible inference memory during generation. 
For dynamical recording, both the extra computational and memory costs are less than $1\%$, because we use a shallow MLP for recording, which is relatively lightweight in comparison to diffusion models. In the revision, we will provide more detailed analyses of computational and memory costs. ***Limitations*** Thank you for the instructive suggestions. Dynamical recording could significantly reduce the computational and memory overheads of calibration, while a calibrated diffusion model could potentially benefit downstream applications such as image editing/customization, semi-supervised learning, and improving adversarial robustness. In addition to model likelihood and quality metrics, we can evaluate a diffusion model by its degree of 'uncalibration', namely, how far the learned scores deviate from the essential properties (e.g., the expected data scores should be zero). In the revision, we will have a more in-depth discussion. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. It has resolved my questions. Since I already recommended acceptance, I have retained my original ranking. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed comments and suggestions. Our final revision will include new performance results as well as details on computational/memory costs. Thank you again!
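The amortized-cost arithmetic in the rebuttal above is easy to sanity-check; a minimal sketch (the helper name is our own, not from the paper):

```python
def amortized_calibration_cost(m_calibration_samples: int,
                               n_generated_samples: int) -> float:
    """Extra per-sample cost of post-training calibration, as a fraction.

    Each score mean is computed once from m_calibration_samples model
    evaluations and then reused, so its cost is amortized over the
    n_generated_samples drawn at inference time.
    """
    return m_calibration_samples / n_generated_samples

# Figures quoted in the rebuttal (M = 20,000 samples per score mean):
print(amortized_calibration_cost(20_000, 100_000))    # 0.2  -> 20% overhead
print(amortized_calibration_cost(20_000, 1_000_000))  # 0.02 ->  2% overhead
```

As the rebuttal notes, the overhead shrinks as more samples are generated, since the score means are computed once and stored.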
Holistic Transfer: Towards Non-Disruptive Fine-Tuning with Partial Target Data
Accept (poster)
Summary: In this paper, the authors tackle the "holistic transfer" task, in which incomplete target data covering only part of the class labels are given to fine-tune a pre-trained model. To retain the model's generalization ability on unseen-class data in the target domain, the proposed method adopts several techniques during fine-tuning with the target data: leave-out local SGD, freezing a linear classifier, selective distillation, and feature rank regularization. The experimental results with several benchmark datasets demonstrate the effectiveness of the proposed method. Strengths: - The problem setting called holistic transfer is interesting and should be important when we consider practical ML applications. - The proposed method works well in terms of boosting overall accuracy across several datasets in the experiments. Weaknesses: - This paper largely lacks discussion comparing the proposed task/method with related work. - The problem setting of holistic transfer is quite similar to partially zero-shot domain adaptation (PZDA) [R1]. Specifically, it can be seen as a supervised but source-free version of PZDA. Since a generative approach to source-free domain adaptation is somewhat common in the recent literature, using a PZDA method with source-data generation seems to be a simple solution to holistic transfer. - [R1] "Partially Zero-shot Domain Adaptation from Incomplete Target Data with Missing Classes," WACV 2020. - Additionally, it should also be straightforward to use test-time adaptation or online domain adaptation methods to solve the holistic transfer task. - The performance of existing methods such as those raised above is not examined in the experiments, which makes the significance of the proposed method unclear. Comparing with domain generalization methods should be unfair, because they are essentially unable to utilize the information of the target domain. 
- In the experiments, the performance of the proposed method on unseen classes is often on par with or even worse than that of the source model, which raises questions about the effectiveness of the proposed method for generalizability to unseen classes. - The manuscript lacks several important topics such as related work and limitations, while the description of the problem setting (Section 2) is redundant. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Why did the authors not examine existing methods such as test-time adaptation, online domain adaptation, or PZDA in the experiments? - Can we say that the model is also successfully adapted to unseen classes even if the accuracy on those classes does not change from the source model? If yes, why? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - LOLSGD essentially requires that the number of classes is large enough to provide a sufficient variety of label subsets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The manuscript lacks several important topics such as related work and limitations.** Due to the page limit, we leave the related work and limitations in the Supplementary (as mentioned in L343). We apologize if it was unclear, and we will clarify it in the final version. **Lacks discussion and comparison with PZDA, online domain adaptation, and test-time adaptation.** Thank you for the valuable comments. Our holistic transfer (HT) setting shares conceptual similarities with several machine learning paradigms (such as continual learning and domain adaptation, as discussed in Sect. 2, Table 2, and related work). We tried to include comprehensive discussions, but it appears we missed some. We appreciate your references to partially zero-shot domain adaptation (PZDA), test-time adaptation, and online adaptation, and we will surely cite and discuss them in the final version. The key difference between our HT and PZDA is that we consider a source-free setting, which is more practical, especially for models trained on large-scale datasets or scenarios with privacy concerns (L83-85). If the source data set is available during adaptation, the problem can indeed become much easier. We have discussed a solution at L148-153, which is to translate the source data into the target style. Our HT setting is also quite different from test-time adaptation and online adaptation. While both of them allow the source model to be dynamically adapted once new target data is collected, they are not meant to address our problem of adapting the source model’s holistic capability of recognizing $N$ classes to the target domain when the available target data only covers $N'<N$ classes. More specifically, test-time adaptation and online adaptation cannot directly tackle the situation where some of the target test classes are missing in the target training set (cf. Table 2). 
While both of them can *wait* till the model eventually sees the missing target classes to adapt the model’s capability on those classes, the model may have already suffered a serious forgetting problem (please see our next response). In contrast, our HT setting aims to *proactively* adapt the model, even for some target test classes that are missing in the target training set. We think the source-data generation idea you suggested is very interesting and itself deserves a future paper, since we believe source-data generation remains a challenging and ongoing research problem. We would like to emphasize that our main contributions are proposing the HT problem and establishing some of the most relevant baselines (L35, L346). We certainly do not claim that we have solved the problem perfectly. That said, we have experimented with source-data generation for adaptation and found that the gap between using real source images and generated ones is still large. We consider the OfficeHome dataset and apply the popular DeepInversion [C] on the two source models trained on the Ar and Rw domains, respectively. We conduct a simple baseline for our HT problem by combining the real/generated source data with the partial target data for fine-tuning. As shown in Table R5 (the PDF in the global response), the unseen accuracy gap between using real/generated source data remains significant. Examples of the generated source images can be seen in Figure R2 (the PDF in the global response). [C] Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion, CVPR 2020. **Comparing with domain generalization methods should be unfair, because they are essentially unable to utilize the information of the target domain.** We apologize for the confusion. While SWA and SWAD (L262-263) are initially proposed to improve model generalization, we use them to fine-tune the source model with the target data. 
We include them mainly because they share a conceptual similarity to our LOLSGD, in that they also average over several models during training. We will clarify this. **... the performance of the proposed method in unseen classes is often par or even worse than that of the source model … Can we say that the model is also successfully adapted to unseen classes even if the accuracy in those classes does not change from the source model? If yes, why?** We appreciate your question. The ultimate goal of HT (as defined in the objective in Equation 1) is to make the adapted model’s performance close to the oracle model that is trained/fine-tuned on the full target data without missing classes. To claim we successfully resolve HT thus requires improving the accuracy for both the seen and unseen classes in the target domain. However, maintaining the same unseen-class accuracy as the source model should not be treated as a trivial task, as fine-tuning with only the seen-class data degrades the unseen-class accuracy drastically (Sect. 2.3). Therefore, even keeping the unseen accuracy while improving the seen accuracy is challenging, especially in the source-free setting. In our humble opinion, our methods have largely addressed this challenge (from a drastically degraded unseen accuracy to an unseen accuracy similar to the source model) and, in some cases, improved the unseen accuracy against the source model. After all, our main contributions are proposing the HT problem and investigating some of the most relevant baselines (L35, L346). We certainly do not claim that we have solved the problem perfectly, and we hope our contributions will establish the foundation for future research in HT. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the authors' response. I have read it as well as all other reviews. I do not have any further questions from my side, but I am still not fully convinced about the comparison. 
(a similar concern seems to be also raised by QPVq and CeTn) Specifically, since source-free DA and test-time adaptation are quite popular in the literature, we would naturally expect that they are examined in the experiments for comparison. However, the authors did not directly compare them with the proposed method, but instead used their own baselines, which are somewhat similar to them, as the authors stated "inspired by" or "similar to" in the response to QPVq. I think this is why several reviewers, including myself, commonly raise this concern. --- Reply to Comment 1.1.1: Title: Thank you for your response. Further response. Comment: Dear reviewer, Thank you for reading our rebuttal and other reviewers' comments. We are glad that you do not have further questions. We respond to your remaining concern as follows. We acknowledge that source-free domain adaptation (SFDA) and test-time adaptation (TTA) share some similarities to our holistic transfer (HT). For example, we also consider a source-free setting. However, we want to emphasize that **HT is addressing quite different challenges from theirs.** For instance, based on our investigation, the core challenges in HT are the **forgetting of unseen classes and the bias to seen classes in the target domain** (L103-110, Sect. 2.3). These challenges, in our humble opinion, drastically differ from the challenges and technical focus in SFDA and TTA, which are to recover the true labels for the unlabeled target data. Therefore, in considering what baselines to compare to, we focus on those which can potentially address the core challenges in HT. In our humble opinion, the baselines that we design in Sect. 3.2 and 3.3 address the forgetting problem in HT more closely than directly applying methods in SFDA and TTA whose goal is not to handle forgetting. 
We hope the above clarification addresses your concern *"However, the authors did not directly compare them with the proposed method, but instead used their own baselines, which are somewhat similar to them, as the authors stated "inspired by" or "similar to" in the response to QPVq."* In short, many of the techniques proposed in these related paradigms do not aim to address the core challenges in HT. Directly applying them is thus unlikely to be effective for HT and may be considered an unfair/misleading comparison. Nevertheless, we do draw insights from them (e.g., some components of their techniques) to design our baselines. That said, in our final version, we will be happy to apply SFDA and TTA methods to HT while knowing that they may not address the challenges in HT. Best, Authors
Summary: - This paper introduces a new setting: partial target data. A model is pretrained on a source domain with a set of classes. The model then has access to labels from a target domain, but only a subset of the classes. The goal is to do well on all classes (including the remaining, unseen classes) on the target domain - This paper repurposes datasets such as Office-Home, iWildCam, VTAB, iNaturalist (fungi), and FEMNIST, for this task. - They find that naive fine-tuning on the target does not do well. It does well on the seen classes, but poorly on unseen target classes - They consider a wide range of methods. For example, tuning only the batch norm parameters - They propose a method called LOLSGD, where they leave out a class and take gradient steps, and average this over the left-out class. - They combine these with ideas such as selective distillation, feature rank regularization - They show that their method does well on a wide range of datasets Strengths: - The problem seems very relevant, practical, and interesting. I’m not personally aware of other work in this space, but I might be missing parts of the related literature - I think their datasets generally seem to make sense, and could be useful for people working on this problem - Their explanation of why fine-tuning doesn't work very well makes sense - They try a lot of potential improvements, including some of their own, which are novel. These methods seem to work well - I like the selective distillation idea - one reason a model might do poorly in the partial target data setting, is that it might have much higher confidence on seen objects than unseen objects. Selective distillation might mitigate this issue, because the logits on unseen classes should be similar - The experiments generally look interesting and sound. My overall verdict is an accept. While there are some weaknesses, I think the paper would be a solid addition to the conference. 
Weaknesses: - Batchnorm seems to do well, better than I expected from the text in line 174. BN (stats only) does the best on unseen data in Table 3. Consider running these on the other datasets too? Maybe a slightly more sophisticated version might work better. - I don't understand the intuition for LOLSGD. It seems handwavy. I think it would be good to add a toy example where this method works. Or show a simple example where the method works. It's unclear why the gradients biased to certain classes will cancel out - Their solutions seem nice and plausible, but not especially convincing. This is fine, since they're introducing a problem, and try out some reasonable methods. Future work can focus on better or more principled solutions. - nit: I’m not a big fan of the name holistic transfer. There are many different transfer settings. What makes this more holistic than other types, for example OOD generalization? I think “partial target data setting”, or some other more specific naming choice would be better. - There should be a more comprehensive related work, and it should be in the main paper. Can you compare to universal domain adaptation, open set domain adaptation, etc? I wonder if there is some setting that is similar to yours, even if I'm not aware of the literature, which is why my confidence is low. Please select the closest related settings and compare with them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For feature regularization, can you add some details for exactly what matrix you’re computing the singular values of (is it the matrix C?). Also, if you regularize ||diag(C^T C)||_2^2, then doesn’t that push down the singular values? Are you subtracting this term instead of adding it? Sorry if I’m missing something basic. - For CLIP, do you initialize the head with the zero-shot initialized classifier? - I wonder if tuning just the bottom couple of layers would work well. 
Lee et al., 2023, found that for shifts in x it can often be better to tune the bottom few layers. This might have less overfitting to the seen classes as well. Maybe it can be combined with batchnorm fine-tuning. - LP-FT: I think it’s worth adding that it’s not designed for this scenario (it’s designed for the case where the classes from source / pretraining to target / fine-tuning are different, so you don’t have a “head”). Otherwise it looks like a bit of a strawman. - nit: Line 174 says “Unfortunately, we found updating normalization layers is yet satisfying” - do you mean it’s not yet satisfying? That is, updating normalization layers doesn’t quite solve the partial target data setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concerns Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
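The selective-distillation idea praised in the review above — keep the fine-tuned model's logits on unseen classes consistent with the source model's — can be made concrete with a small sketch. This is our own minimal instantiation (a KL term over the unseen-class slice of the logits; the helper name and temperature are our assumptions, not the paper's exact formulation):

```python
import numpy as np

def selective_distillation_loss(student_logits, teacher_logits, unseen, T=2.0):
    """KL(teacher || student) over the unseen-class logits only.

    Distilling only the unseen classes keeps the fine-tuned (student)
    model's relative confidences on missing classes close to the source
    (teacher) model's, while the seen classes remain free to fit the
    target data.
    """
    s = np.asarray(student_logits)[:, unseen] / T
    t = np.asarray(teacher_logits)[:, unseen] / T
    # log-softmax restricted to the unseen-class slice
    s = s - np.log(np.sum(np.exp(s), axis=1, keepdims=True))
    t = t - np.log(np.sum(np.exp(t), axis=1, keepdims=True))
    return float(np.mean(np.sum(np.exp(t) * (t - s), axis=1)))
```

When the student matches the teacher the loss is zero, and it grows as the student's unseen-class distribution drifts from the teacher's — which is exactly the over-confidence-on-seen-classes failure the reviewer describes.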
Rebuttal 1: Rebuttal: Thank you for the positive and constructive feedback. **Batchnorm seems to do well.** We note that our ultimate goal is to achieve high overall accuracy, not merely unseen accuracy. Although BN (stats only) *maintains* the unseen accuracy well on OfficeHome, it cannot effectively improve the seen accuracy, resulting in worse overall accuracy (cf. Table 3). Following your suggestion, we apply BN (stats only) to iWildCAM (cf. Table 6). As shown in Table R2 (in the PDF of the global response), BN (stats only) maintains the unseen accuracy well but cannot improve the seen accuracy. That said, we agree that a more sophisticated, dedicatedly designed version of BN might perform better, and we leave it as future work. **More details and demonstrations of LOLSGD.** Thank you for the comment. Please refer to the global response. **The naming of holistic transfer.** We apologize for the confusion. The word “holistic” closely reflects our setting — adapt a pre-trained classifier’s *holistic* capability of recognizing $N$ classes to the target domain, even when the available target data does not cover all the $N$ classes. (Please refer to the first and third responses to 9DJT for details.) We will clarify this in the final version. We indeed have considered naming our setting with “partial target data,” but it may confuse our setting with a related problem named partial domain adaptation [A]. Partial domain adaptation also considers the situation where the target data set contains partial (i.e., $N'<N$) classes. However, after adaptation, it only aims to perform well on those $N'$ classes. In sharp contrast, our setting aims to perform well on all the $N$ classes after adaptation. We thus purposefully refrain from using the term "partial" to avoid confusion. That said, we remain open to discussion about the name and are willing to consider any adjustments based on your feedback. [A] Partial adversarial domain adaptation. ECCV 2018. 
**More related work.** Thank you for the comment. Our related work is in the Supplementary due to the page limit. We will strengthen it and include the most relevant part in the main paper. Our setting is highly related to domain adaptation (DA), whose objective is to adapt a model trained on a source domain (e.g., an image style) to perform well on a target domain (e.g., a different style). Depending on whether *labeled* target data is provided for adaptation, DA can be categorized into supervised, semi-supervised, and unsupervised settings, and we consider the supervised setting (L81). Another way to categorize DA works is by the relationship between the label spaces of the source and target data. The standard closed-set setting assumes that the source and target data cover the same $N$ classes. That is, when one prepares the target data set for adaptation, all classes are expected to be present. However, in practice, this is not trivial and often infeasible for an end-user: collecting data can be quite laborious, especially when $N$ is huge. (Please refer to the first response to 9DJT for details.) The proposed setting aims to relax this constraint, making the data preparation for adaptation *much simpler* for end-users. At first glance, our setting seems quite similar to partial DA [A], which also considers the situation where the target data set contains part (i.e., $N'<N$) of the source classes. However, the goal of partial DA is to perform well only on those $N'$ classes after adaptation. In contrast, our setting aims to adapt the source model’s holistic capability of recognizing $N$ classes to the target domain. This is why in Table 1 we explicitly separate the target data set into one for training ($N'<N$ classes) and one for testing ($N$ classes). Our study in Sect. 2.3 showcases the challenges of this new setting: standard fine-tuning would simply degrade the model’s capability on the $N - N'$ unseen target classes. 
Open-set and universal DA consider a different scenario where the target data set contains additional “unknown” classes that do not appear in the source data. The goal is thus to equip the adapted model with the ability to predict “unknown” for those data. We argue that this setting is orthogonal but can be compatible with ours, and we leave the combination as future work. Please also see the first response to HMzM. **Feature regularization.** Yes, Figure 4 shows the singular values of matrix C, which is the covariance matrix of the feature vectors. Ours has more *uniform* singular values. That said, in training, we do not explicitly compute the singular values and regularize them. The regularization term $||diag(C^\top C)||_2^2$ we use is inspired by [15] and [B], and is *added* to the loss function. While it looks a bit counterintuitive, [B] proved that this regularizer has the effect of penalizing the variance among the singular values, thus discouraging the tail singular values from collapsing to 0, mitigating dimensional collapse. We apologize that we missed this detail in the manuscript, and we will clarify it in the final version. [B] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning, ICLR 2023. **CLIP initialization.** Yes, we initialize the classification head with the class names’ embeddings extracted from CLIP’s text encoder. After that, we drop the text encoder. **Tuning the bottom couple of layers.** We experiment with fine-tuning the BN layers and the first (or last) block of the source ResNet-50 on OfficeHome. As shown in Table R4, fine-tuning parts of the layers drastically degrades the unseen accuracy. This showcases the challenge of holistic transfer. That is, even with just parts of the model being fine-tuned, the model can easily fit the seen classes and suffer from forgetting the unseen classes. **LP-FT.** We will do so. **Line 174.** You are right. 
We meant that updating normalization layers does not solve holistic transfer. We will correct the typo. --- Rebuttal 2: Title: Thanks for the response Comment: I have read all the other reviews and the author response. I think the paper introduces an interesting problem, and has some nice analysis. So I think it is above the NeurIPS bar. But I could understand if the paper doesn't get in this time. The other reviewers mentioned that it would be nice to add cases where the target dataset contains classes that aren't present in the source dataset. I don't think this is necessary. I actually prefer the authors' setting because it cleanly examines an important problem - if we have some knowledge about classes 1 to N, but then fine-tune on some data with missing classes, how do we preserve our knowledge of the missing classes? Adding unseen classes into the mix complicates the problem. That said, I'm still not a fan of the name "holistic" transfer, and I suspect that may be why the reviewers are asking for this change. After reading the author response, it is still not clear to me why this is more "holistic" than other paradigms. Maybe something along the lines of "class extrapolation"? This might capture the fact that the original model may be CLIP (self-supervised) or pretrained on a source distribution - it doesn't matter. The point is it has some inductive bias for classes 1 to N, is prompted using a subset of classes, and should extrapolate to new classes. The rebuttal answers most of my questions. Thank you for explaining the connection with various related areas of research. Unfortunately, I haven't had the time to re-look into the detailed intuitions of LOLSGD too closely. I'd suggest adding a toy dataset or example where this method works (e.g., with mixtures of Gaussians, etc.). --- Rebuttal Comment 2.1: Title: Thank you for your response Comment: Dear reviewer, Thank you for your valuable response. We are glad that our rebuttal addressed most of your questions. 
We are pleased you recognize our interesting problem setup and analysis, and consider our paper above the acceptance bar. We also appreciate your support of our current setting. Regarding the name "holistic" transfer, thank you for clarifying your concern and providing further suggestions. We apologize for the confusion --- **We certainly do not claim that our problem setting is more "holistic" than other transfer learning and domain adaptation paradigms, and we will clarify it.** We use "holistic" mainly to emphasize that we intend to transfer the *source classifier's "holistic" capability* --- i.e., being able to recognize $N$ classes --- even when the available target data set has missing classes. To address the confusion, we will consider changing the naming. For instance, "Towards Holistic Transfer of Classifier Capabilities: Non-Disruptive Fine-Tuning with Partial Target Data." We appreciate your suggestion (i.e., class extrapolation) and will certainly consider it. For LOLSGD, please kindly refer to our global response when you have time. We will be happy to answer your further questions if there are any. We will incorporate the detailed intuition and evidence in our global response into our final version. We also appreciate that you gave us more details about the toy example, and we will add it to our final version. Best, Authors
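As a step toward the toy example requested above, the leave-out averaging idea behind LOLSGD — for each held-out class, take a local gradient step on data from the remaining classes, then average the resulting weights — can be sketched on a linear model. This is our own illustrative construction with squared loss (the paper's actual procedure may differ in the loss, number of local steps, and leave-out strategy):

```python
import numpy as np

def lolsgd_round(w, X, y, n_classes, lr=0.1):
    """One round of leave-out local SGD on a linear model.

    For each class c, take a gradient step on the squared loss computed
    over the samples of all *other* classes (as if c were unseen), then
    average the per-branch weights. The bias toward any one seen-class
    subset is meant to cancel out in the average.
    """
    Y = np.eye(n_classes)[y]              # one-hot targets, shape (n, k)
    branches = []
    for c in range(n_classes):
        mask = y != c                     # leave class c out
        Xc, Yc = X[mask], Y[mask]
        grad = Xc.T @ (Xc @ w - Yc) / len(Xc)
        branches.append(w - lr * grad)    # local step without class c
    return np.mean(branches, axis=0)      # average over left-out choices

def mse_loss(w, X, Y):
    return float(np.mean((X @ w - Y) ** 2))
```

On a tiny synthetic problem, one such round already lowers the full-data loss from zero-initialized weights, even though each branch never saw one of the classes it is evaluated on.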
Summary: This paper studies a learning problem, called Holistic Transfer, which involves the adaptation of a pre-trained source model capable of classifying a wide range of objects, to a target domain using data that covers only a partial label space. To solve this problem, Strengths: 1. Distribution shift exists everywhere in real applications. 2. Experiments show the effectiveness of the proposed method. Weaknesses: 1. This paper is not well-structured. Considering that the learning setup is proposed in this paper, it is more important to convince me the new setup is valuable. Currently, the introduction is too short and the evidence is not convincing. 2. The learning setup is somewhat trivial. Researchers initially noticed the distribution shift problem because distribution shift arises from sampling bias. It is okay to assume that the distribution of labels is different. In this case, sampling bias exists. It is confusing to assume that the marginal distribution of $T$ and $T^*$ is the same one. 3. The name "Holistic Transfer" is also confusing for me. What does "Holistic" mean? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. This paper is not well-structured. Considering that the learning setup is proposed in this paper, it is more important to convince me the new setup is valuable. Currently, the introduction is too short and the evidence is not convincing. 2. The learning setup is somewhat trivial. Researchers initially noticed the distribution shift problem because distribution shift arises from sampling bias. It is okay to assume that the distribution of labels is different. In this case, sampling bias exists. It is confusing to assume that the marginal distribution of $T$ and $T^*$ is the same one. 3. The name "Holistic Transfer" is also confusing for me. What does "Holistic" mean? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **It is more important to convince me the new setup is valuable. Now the introduction is too short and evidence is not convincing.** We apologize for not making the motivation clear. We provide a detailed motivation as follows. We will incorporate it in the final version to expand the introduction. The proposed setting considers the practical scenario where *an end-user wants to adapt a pre-trained model’s capability of recognizing $N$ classes to the target domain* (L16-19). In the literature, this typically requires preparing a target data set that covers all the $N$ classes, which can be challenging or even *unrealistic*, especially when $N$ is huge (L20-21). The proposed setting aims to relax this constraint, making the data preparation for fine-tuning *much simpler* for end-users. More specifically, in most of the literature on domain adaptation, the target data set has been “pre-prepared” to cover all the $N$ classes of the pre-trained source model. However, in practice, this is not trivial and often infeasible for an end-user: collecting data can be quite laborious and costly. Take wildlife monitoring as an example. Data are often collected passively (e.g., via smart camera traps), waiting for animals to appear. As a result, when a smart camera trap is redeployed to a new location and requires adaptation, it is hard to prepare a complete target dataset that contains all the animal species of interest. This raises a dilemma: *should one wait for the data to be fully collected, even if it means sacrificing the model's performance in the meantime, or update the model right away with incomplete data, accepting the risk of compromising certain capabilities?* Our paper aims to address this dilemma by delving into an unexplored Holistic Transfer (HT) setting: adapting a pre-trained classifier's *holistic* capability of recognizing $N$ classes to the target domain, using target data that covers only a subset of $N'<N$ classes. 
It is worth noting that HT fundamentally differs from partial domain adaptation [A]. While partial domain adaptation also considers the situation where the target data set only contains $N'<N$ classes, its goal is only to perform well on those $N'$ classes after adaptation. In sharp contrast, HT aims to perform well on all the $N$ classes after adaptation. This is why our benchmarks in Table 1 explicitly separate the target data set into the one used for training ($N'<N$ classes) and the one used for testing ($N$ classes). Our study in Sect. 2.3 showcases the challenges of this new setting: standard fine-tuning would simply degrade the model’s capability on the $N - N'$ unseen target classes. We hope the above paragraphs address your concern, and we will be happy to provide more information in the discussion period. [A] Cao, Zhangjie, et al. "Partial adversarial domain adaptation." Proceedings of the European conference on computer vision (ECCV). 2018. **The learning setup is somewhat trivial … It is confusing to assume that the marginal distribution is the same one.** Sorry for the confusion. About the relationship between $P_T$ and $P_{T^*}$ at L60, we should have written $P_T(x|y) = P_{T^*}(x|y)$, not $P_T(x) = P_{T^*}(x)$. That is, we assume that the data distributions in $T$ and $T^*$ are the same per class, not marginally. We will correct this accordingly. We hope this addresses your concern that *the learning setup is somewhat trivial.* If not, we kindly ask for more details about your comment, and we will be happy to address it in the discussion period. **The name "Holistic Transfer" is also confusing for me.** Sorry for the confusion. Following the motivations we provide above, the goal of our new setting is to adapt a pre-trained classifier’s *holistic* capability of recognizing $N$ classes to the target domain, even when the available target data does not cover all the $N$ classes. 
This is why we call our setting “holistic transfer.” We will clarify this in the final version. That said, we remain open to discussion with you regarding the name and are willing to consider any adjustments based on your feedback. *In light of our clarifications, we would like to ask if you are willing to reconsider your score and if there are any new concerns or additional questions we can respond to!* --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for the detailed reply. My concerns have been partially addressed. I also read the other reviews and found that Reviewer QPVq also finds the setup in this paper not very exciting. I think this paper can raise discussions and is somewhat valuable to the community. However, the writing quality needs to be improved and the current version is not ready for publication. I am inclined to raise the score to 5, and I hope that the new version could make several improvements in writing quality (especially the learning setup) if the paper can be accepted. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Dear reviewer, Thank you for your time reading our rebuttal. We are glad that our rebuttal has addressed some of your concerns. **We are happy to know that you are willing to raise your score to 5.** Regarding Reviewer QPVq's concern about the paper setup, we have addressed it in the corresponding response. Regarding your concern about the writing quality, specifically the too-short introduction and learning setup, we will certainly improve it, including incorporating our rebuttal properly. Please do not hesitate to let us know if there are any new concerns or additional questions we can respond to! Best, Authors
Summary: This paper proposes "Holistic Transfer" as an important problem and also provides some solutions to it. "Holistic Transfer" handles the situation when adapting a pre-trained source model (e.g. with 1000 classes) to a target domain (e.g. with 100 classes), but there are only data for part of the target domain (e.g. only with target domain data for 50 classes and no target domain data for the other 50 classes). Taking classification as an example, it assumes that all the target domain classes are in the source data, but may not be all in the target domain data. Usual fine-tuning can improve the performance on seen classes (present in the target domain data), but may destroy the performance on unseen classes (not present in the target domain data). The paper proposes some solutions for handling these situations, aiming to preserve the performance on unseen classes. In the proposed approach, the changes in the target domain are decomposed into two types of changes: "style" change and class change. Style change can be learned via the seen classes of the target domain, assuming that the unseen classes undergo the same style change. The information from the unseen classes is preserved via distillation and feature rank regularization. Experiments were conducted on various datasets to validate the approach. Strengths: The problem proposed in the paper is a valid and practically useful problem. As large foundation models gain popularity, how to adapt those models to specialized target domains effectively is of practical importance. The paper handles the situation when the data for some of the target domain classes are missing. The proposed approach seems to be sound. It decomposes the changes in the target domain into two types of changes: "style" change and class change. Style change can be learned via the seen classes of the target domain, assuming that the unseen classes undergo the same style change. 
The information from the unseen classes is preserved via distillation and feature rank regularization. The effectiveness of the proposed approach is validated on various datasets. Weaknesses: The paper assumes that both the "seen" and "unseen" classes from the target domain are subsets of the classes in the source domain. It may happen that some "seen" classes in the target domain are not in the classes of the source domain. It would be interesting to see how these cases are handled. Technical Quality: 3 good Clarity: 3 good Questions for Authors: One question regarding the results in Table 1: in some rows, e.g. the row of "BN (stats) only", the "Unseen" performance is better than the "Overall" performance. Why is that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive and constructive feedback on our paper. **The paper assumes that both the "seen" and "unseen" classes from the target domain are subsets of the classes in the source domain. It may happen that some "seen" classes in the target domain are not in the classes of the source domain. It would be interesting to see how these cases are handled.** Thank you for the comment. If some “seen” classes in the target domain are not in the source domain, this will require expanding the label space of the model. A simple baseline would be first training the classification weights for those “seen” classes from scratch while keeping all other model components intact. Then, we can apply our holistic transfer approach to the expanded model. A more sophisticated solution would involve techniques from continual learning, or more specifically, class-incremental learning. This machine learning paradigm aims to expand the label space of a model. We leave a suitable combination of our approach and techniques from class-incremental learning as future work. **One question regarding the results in Table 1: in some rows, e.g. the row of "BN (stats) only", the "Unseen" performance is better than the "Overall" performance. Why is that?** We surmise this is because the classes are not equally difficult for classification. In some cases, before we perform adaptation, we can already see that the unseen class accuracy of the source model is higher than its overall accuracy, meaning that those unseen classes are inherently easier than the seen classes. Some methods (e.g., BN (stats) only) can better keep the unseen accuracy close to the source model with less forgetting. However, they cannot adapt the seen classes effectively to the target domain. Therefore, the results of these methods generally follow the trend in the source model with the unseen accuracy higher than the overall accuracy. 
In contrast, some methods (e.g., LP-FT) can better adapt the seen classes to the target domain but suffer from serious forgetting of the unseen classes. These methods thus have lower unseen accuracy.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We are glad that the reviewers found the proposed problem “interesting”, “important” (Reviewer HMzM), and “valid and practically useful” (Reviewer tg8m, CeTn); the proposed method “works well” (Reviewer HMzM, tg8m, 9DJT, CeTn); the experiments generally look “interesting and sound” (Reviewer CeTn). We address the comments about LOLSGD in this global response (raised by Reviewer QPVq and CeTn) and address other comments raised by individual reviewers separately. We also include a one-page PDF with requested experiments, tables, and figures. We will incorporate all the feedback in the final version. **(Reviewer QPVq) The LOLSGD seems similar to local SGD and meta-learning. It is unclear why its design can disentangle covariate shifts. What is its difference from using a larger batch size for SGD?** Our LOLSGD is fundamentally different from meta-learning and SGD with a large batch size. In our holistic transfer (HT) problem (see Sect. 3), fine-tuning is affected by the covariate shift (from the source to the target) and the disruptive concept shift (from classifying all the classes to classifying the seen classes). Our LOLSGD aims to disentangle them or, more precisely, reduce the disruptive concept shift by subsampling multiple datasets that each contain a subset of the seen classes. When updating the model separately in parallel with these subsampled datasets, each updated model is affected by a different concept shift but a shared covariate shift. By averaging these models, LOLSGD can potentially cancel out the disruptive concept shifts and strengthen the shared covariate shift (L191-196). We provide more evidence that LOLSGD can cancel out the disruptive concept shifts in the following response to Reviewer CeTn. 
We argue that this is fundamentally different from meta-learning, whose goal is to learn a meta-model that can be easily applied to a future task (e.g., a few-shot classification task). While meta-learning also subsamples its meta-training set, it is mainly to simulate multiple future tasks for learning the meta-model, not to cancel out unwanted gradients. We also argue that LOLSGD is fundamentally different from SGD with a large batch size. Concretely, without the strategic subsampling in LOLSGD, SGD with a large batch size will not create multiple models from which we can potentially cancel out the disruptive concept shifts. Our LOLSGD is inspired by local SGD, as mentioned in L177-179. Our key contribution is its novel usage. Local SGD was initially proposed for distributed learning, in which the training data are decentralized by default. The goal is mainly to reduce the communication overhead of large-scale training. In contrast, in LOLSGD, the target training data set is not decentralized initially, but we strategically subsample it to simulate different concept shifts. **(Reviewer CeTn) I don't understand the intuition for LOLSGD. I think it would be good to add a toy example where this method works. Or show a simple example where the method works. It's unclear why the gradients biased to certain classes will cancel out.** We apologize if we did not make the intuition clear. We have provided some more details in the above response to Reviewer QPVq. Please kindly refer to it. To give more evidence for the canceling-out effect, we compare the seen class accuracy among 1) naive fine-tuning with the partial target data, 2) LOLSGD, and 3) fine-tuning with the full target data (i.e., the oracle at L148-153). 
As shown in Figure R1 (the PDF in the global response), the seen class accuracy of naive fine-tuning (black dotted line) exceeds the oracle (green dotted line), indicating that naive fine-tuning learns undesired concept shifts towards seen classes, leading to an unreasonable accuracy. In contrast, the seen accuracy of LOLSGD (red dotted line) consistently stays below the oracle, indicating that undesired concept shifts are reduced. As a result, LOLSGD obtains much higher unseen accuracy than naive fine-tuning. Pdf: /pdf/375a8e188496459e302f8b7cfa772847eced18e5.pdf
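The LOLSGD mechanism described in this global response (subsample class subsets, fine-tune a copy of the model on each, then average the copies) can be illustrated with a toy sketch. This is not the paper's implementation: the scalar linear model, the function names, and all hyperparameters here are hypothetical, chosen only to make the subsample-update-average structure concrete.

```python
import random

def local_finetune(weights, data_subset, lr=0.1, steps=5):
    """A few SGD steps on one subsampled class subset (toy linear model)."""
    w = list(weights)
    for _ in range(steps):
        for x, y in data_subset:  # x: feature list, y: scalar target
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # squared-error gradient step
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def lolsgd_round(weights, seen_data_by_class, num_subsets=4, subset_size=2, seed=0):
    """One LOLSGD round: sample class subsets, fine-tune locally, average."""
    rng = random.Random(seed)
    classes = list(seen_data_by_class)
    local_models = []
    for _ in range(num_subsets):
        picked = rng.sample(classes, subset_size)
        subset = [ex for c in picked for ex in seen_data_by_class[c]]
        local_models.append(local_finetune(weights, subset))
    # Average the locally updated copies: subset-specific (concept-shift)
    # updates can cancel, while shared (covariate-shift) updates reinforce.
    return [sum(col) / len(local_models) for col in zip(*local_models)]
```

The averaging step is where the intuition lives: each local copy drifts toward its own sampled classes, but those drift directions differ across subsets, whereas an update driven by a shift shared by all classes points the same way in every copy.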
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a problem of fine-tuning the source model with partial target data, where the source and target distributions have covariate shifts and the target test data contain classes unseen in the target training data (also called Holistic Transfer, HT, in this paper). It proposes Leave-Out Local SGD (LOLSGD) to disentangle domain gradients from classification gradients during training and several regularizations to preserve class relationships. In the experimental part, this paper builds some holistic transfer benchmarks based on existing domain adaptation and fine-tuning datasets and evaluates the proposed methods. Strengths: * This paper is generally well written. The analysis of the dilemma in the problem makes sense, and the proposed methods are easy to understand. * This paper builds some benchmarks for the proposed holistic transfer problem and evaluates some methods on them. Some of the evaluations and results may help future works on related topics. Weaknesses: * The proposed problem does not seem very interesting or practical to me. In my opinion, a practical problem should be simple and realistic. For example, in the problem of fine-tuning, we only need to have a pre-trained model and some target training data. However, in the proposed holistic transfer problem, there exist too many constraints and assumptions (for example, it requires that the source domain covers all the labels in the target domain), which may not be satisfied in real applications. In Table 1, the authors give some examples of the HT problem, but they actually contain two different settings, making the definition of the problem a bit confusing. The first three examples are variants of source-free domain adaptation, with partial classes in target training data. The last two examples are fine-tuning of CLIP models, and CLIP models are actually self-supervised pre-trained, so it does not fit the definitions in Section 2.1. 
Besides, the problem of fine-tuning CLIP and testing on unseen classes has already been proposed by previous works [1]. * The technical novelty of the proposed methods is limited. The LOLSGD seems similar to local SGD and meta-learning. It is unclear why its design can disentangle covariate shifts. What is its difference from using a larger batch size for SGD? The class relationship regularizations and ensemble methods also seem to be existing techniques. * This paper only discusses the problem itself and evaluates some proposed techniques. Some important related topics and methods are not discussed and compared in this paper. The source-free domain adaptation [2] and its following works are highly related but are not compared. Some fine-tuning methods may also solve the problem but are also missing, such as [3][4]. For the CLIP fine-tuning, some existing works already explore the problem of unseen classes [1], which are not mentioned. [1] Conditional Prompt Learning for Vision-Language Models. [2] Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation. [3] DELTA: Deep Learning Transfer using Feature Map with Attention for Convolutional Networks. [4] Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Some questions are listed in ‘Weaknesses’, and some more questions here: * How would the performance change if the number of classes in the target training data is changed? * Would the increase in unseen classes sacrifice the performance on seen classes? * In Equation (5), how can we get $\arg\min_\theta$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors point out the limitation that this paper mainly focuses on vision classification tasks and leave the studies to image segmentation/object detection and natural language processing tasks as their future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The proposed problem does not seem very interesting or practical … too many constraints and assumptions … Table 1 actually contains two different settings …** We want to reiterate that the proposed problem is fairly *practical* and *realistic*. It considers a *practical* scenario where *an end-user wants to adapt a pre-trained model’s capability of recognizing $N$ classes to the target domain.* In the literature, this typically requires preparing a target data set that covers all the $N$ classes, which can be challenging or even *unrealistic*, especially when $N$ is huge (L1-3, L16-21). The proposed problem relaxes this constraint, making the data preparation for fine-tuning *much simpler* for end-users. We apologize if Sect. 2.1 gives the wrong impression that our proposed problem introduces many constraints and assumptions, and we will carefully refine it. Sect. 2.1 is meant to describe our setting and ground it in the literature formally. The assumptions we mentioned (L54-63) either follow existing works or help clarify our setting. For instance, *the source domain covers all the labels in the target domain* is a direct property of the aforementioned practical scenario; the assumptions in L58-63 actually *relax* the assumption made in L124-125 by existing works. We should not have used the word “constraints” at L69. Indeed, L70-85 simply describes the properties and challenges of the proposed problem. We consider a more realistic source-free setting (L83-85) as it removes the need to access the source data. We also apologize for the confusion in Table 1. All five cases strictly follow the setting of holistic transfer — adapt a pre-trained model when the available target data only contain partial classes. We use CLIP mainly to extend our study to VTAB (Table 7) and iNaturalist (Table 9), for which we have no explicit source data to pre-train the model. 
We use CLIP’s zero-shot capability only to construct the source classifier, and we disregard its text encoder afterward. In other words, our problem definition has nothing to do with CLIP; our methods are not designed specifically for CLIP. We will clarify this in the final version. **Technical novelty.** We want to emphasize that our main contribution is studying the holistic transfer (HT) problem. The novelties are thus not merely technical but involve the investigation and understanding of the problem and the construction of benchmarks. For instance, Sect. 2.3 reveals the challenges of HT, which leads to the two directions of solutions (Sect. 3.2 and 3.3). Within each, instead of directly proposing new techniques, we deliberately seek existing techniques (initially developed for other problems) that can potentially address the challenges. While knowing that this approach might not result in many technical novelties, we believe it is crucial and valuable, as it helps establish the foundation of HT and connect it to the existing machine learning techniques. For the novelty and detail of our proposed LOLSGD, please refer to the global response. **Related topics and methods.** Thank you for the comment and references. We will cite them and add more discussions on related topics. We did include the frozen linear classifier (L210-212) inspired by source-free adaptation (SFDA) in our experiments. Nevertheless, SFDA does not address adapting a pre-trained model using partially available target data. [3] and [4] aim to fine-tune pre-trained backbones for new downstream tasks. Their focus is to reduce over-fitting, for example, via distillation [3] or regularization [4]. Therefore, they are similar to the selective distillation and feature rank regularization (L213-226) in our experiments, which, however, could not fully solve the HT problem. For CLIP fine-tuning, we want to reiterate that CLIP is not our main focus. 
We use CLIP only to construct the source models for the VTAB and iNaturalist experiments. That said, we follow your comment to compare to CoCoOp [1] in these experiments. CoCoOp [1] fine-tunes CLIP by training a meta-net to condition the classification weights (i.e., the fully-connected layer) on the input images. In other words, CoCoOp freezes the visual features but changes the classifier weights by minimizing the standard cross-entropy loss. In contrast, our approach freezes the classifier weights but adapts the visual features by minimizing the loss designed specifically for HT. We conduct two experiments: 1) CoCoOp alone for the HT problem, and 2) combining CoCoOp with our approach. For 2), we take the resulting model after CoCoOp as the improved source model and further adapt the feature. We report the result on the CIFAR-100 task in VTAB (cf. Table 7). As shown in Table R1 (see the PDF in the global response), CoCoOp alone performs well on unseen classes, but it improves the overall accuracy only marginally. Combining both approaches, we can obtain better accuracy in unseen and seen classes, leading to the best overall accuracy. **Number of seen and unseen classes.** We conduct an experiment on OfficeHome (cf. Table 3). Starting from 10 seen classes, we gradually include more seen classes. As shown in Table R3, the overall accuracy consistently improves when the number of seen classes increases. To answer whether the increase of unseen classes (equivalently, the decrease of seen classes) would sacrifice the performance of seen classes, we calculate the seen class accuracy on the 10 commonly seen classes — this makes the accuracy comparable across settings. As shown in Table R3, the accuracy on the 10 commonly seen classes actually improves when we decrease the number of seen classes. We surmise that this is because when fewer seen classes are available, the adapted model will focus more on the 10 commonly seen classes. 
**Equation (5).** The $\arg\min_\theta$ means training the model on a locally sampled class subset in LOLSGD. We minimize the loss using SGD for several steps. We will clarify this. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I have read the comments from other reviewers and the author response, and some concerns have been addressed. The remaining concerns mainly lie in the problem setting and the evaluation. I am still not fully convinced of the novelty or practical value of the proposed holistic transfer problem. It is much less flexible than standard fine-tuning. Compared with source-free DA, fewer classes are needed, but the accuracy may also be sacrificed. It is unclear whether this intermediate state is valuable to explore. Furthermore, the setting is currently not unified for different pre-trained models in the experiments. (For the ImageNet pre-trained model, it is firstly trained on a small source domain and then transferred to the target domain, which is more like ‘partial source-free DA’. For the CLIP pre-trained model, the model is directly transferred to the target domain, which is more like ‘fine-tuning’ and has been explored before.) In evaluation, there are some other related topics mentioned by the reviewers, such as source-free DA, fine-tuning, and test-time adaptation, which can also potentially solve holistic transfer. But it seems that these topics or methods are not properly discussed or compared. --- Reply to Comment 1.1.1: Title: Thank you for your response. Further response (Part 1). Comment: Dear reviewer, Thank you for reading our rebuttal and other reviewers' comments. We are glad that we have addressed some of your concerns. We respond to your remaining concerns as follows. **I am still not fully convinced of the novelty or practical value of the proposed holistic transfer problem. 
It is much less flexible than standard fine-tuning.** We respectfully think comparing standard fine-tuning (a technique) and the proposed holistic transfer problem (a problem setting) may not make sense. In our humble opinion, the counterpart of our holistic transfer problem is the setting where the target data set covers all the classes of interest. In this counterpart setting, standard fine-tuning is certainly the go-to approach. However, in the proposed setting where the target data set has missing classes, standard fine-tuning will simply suffer forgetting, as evidenced in Sect. 2.3. Regarding which setting is more flexible, we respectfully disagree that our setting is less flexible than the counterpart setting. Similar to few-shot learning vs. many-shot learning or semi-supervised learning vs. fully supervised learning, we consider the scenario where the available data does not meet the requirement of the standard technique. Specifically, one may not obtain a comprehensive target data set covering all the classes of interest. For instance, in our response to Reviewer 9DJT, we gave one practical example --- camera traps --- where the data is collected passively, waiting for animals to appear. It is thus hard to compile a comprehensive target data set for adapting the model. Our setting enables adapting a pre-trained model to the target domain even under such a partial data situation, which we consider more flexible than the counterpart setting. **Compared with source-free DA, fewer classes are needed, but the accuracy may also be sacrificed. It is unclear whether this intermediate state is valuable to explore.** We acknowledge that if one can obtain a target data set covering all the classes of interest, the accuracy after adaptation should be higher. However, in some practical scenarios, collecting such a data set can be laborious, costly, and even infeasible, and our holistic transfer (HT) setting aims to tackle such a situation. 
In other words, we do not view HT merely as needing fewer classes but as a setting to address the problem where only partial class data are available. We respectfully think this is reminiscent of few-shot learning vs. many-shot learning or semi-supervised learning vs. fully supervised learning. It is well-known that many-shot learning and fully supervised learning can lead to better accuracy, but they require significant effort in data collection. Few-shot learning and semi-supervised learning aim to address the problem when such an effort cannot be met, and the community recognizes their values even if we know that their accuracy might be sacrificed. **Furthermore, the setting is currently not unified for different pre-trained models in the experiments.** We apologize if our rebuttal has not fully addressed this concern. We acknowledge that we applied different approaches to obtain the pre-trained models, and the main reason is that there is no clear source data set for VTAB and iNaturalist. Nevertheless, both 1) ImageNet pre-training followed by training on a source domain and 2) CLIP pre-training lead to a pre-trained classifier capable of classifying $N$ classes that cover the target label space. (We note that we drop the CLIP text encoder after we obtain the $N$ class names’ embedding.) The core problem of HT is then to adapt such a pre-trained classifier capable of recognizing $N$ classes to the target domain using target data that covers only partial (i.e., $N’ < N$) classes. In other words, regardless of how the pre-trained classifier was built, we study HT in a unified setting. **For the CLIP pre-trained model ... has been explored before.** We apologize if our rebuttal has not fully addressed this concern. We want to reiterate that CLIP fine-tuning is not our main focus. We use CLIP only to construct the pre-trained models for the VTAB and iNaturalist experiments. 
We drop the CLIP text encoder after we obtain the classification head with class names’ embeddings. Thus, our setting is not the same as CoCoOp [1]. That said, we will tone down the use of CLIP in our final version --- after all, CLIP is mainly used to obtain pre-trained models for some experiments, which can be replaced with other (future) foundation models.
DiViNeT: 3D Reconstruction from Disparate Views using Neural Template Regularization
Accept (poster)
Summary: This paper proposes a two-stage framework for neural 3D reconstruction from disparate views via neural template regularization. In the first stage, a network is trained to predict shape templates. After that, the volumetric surface reconstruction network with depth and SDF constraints is trained with the template prior. Compared to other 3D reconstruction methods, DiViNet specifically targets disparate input images and achieves SOTA with sparse image views. Strengths: 1. The proposed DiViNet gets rid of the requirement of explicit cues, unlike former methods. Moreover, only a small set of images with small overlaps is used. 2. The proposed designs are technically sound. Weaknesses: 1. Although DiViNet performs best with sparse view inputs, with dense view inputs it is worse than the latest methods. According to Table 2, DiViNet is only the second-best method; it achieves SOTA only under constraints that disadvantage other methods. MonoSDF [1], using multi-resolution feature grids, gets 0.73 on the DTU dataset. 2. As mentioned in the paper, the neural templates lack generalization across different data distributions. It matters how large the influence of the data distribution is. The authors claim the drawback of other methods is the requirement of dense views with overlap, but generalization to new data distributions is also important when considering real-world settings. [1]MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to Weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments. > **Q1 - Performance on dense-view scenarios** As addressed in Q2 in the global response and in the paper, our current implementation uses MLPs for the reconstruction task, and as such, for a fair comparison, we compare against the MLP representation of MonoSDF which achieves a CD score of 0.84. However, there was an apparent boost in CD score in MonoSDF by simply replacing the MLPs with a multi-resolution hash grid, as in [1]. We expect a similar performance boost in our method with this replacement. We are happy to provide such results in the revision. Furthermore, we include a dense view comparison to GeoNeuS in Q2 in the global response. We show a significant margin to GeoNeuS. > **Q2 - Generalization of template learning network** This concern has been addressed in Q1 of the global response. [1] Müller, Thomas, et al. "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.", ToG 2022. --- Rebuttal 2: Comment: Thank the authors for the response. After reading the rebuttal and other reviews, I will keep my original rating.
Summary: This paper proposes DiViNet for sparse multi-view reconstruction, specifically targeting sparse input of as few as three disparate RGB images. The key is to regularize the reconstruction process by learning a set of neural templates as surface priors, which is basically a set of 3D Gaussian functions with optimizable features. Extensive experiments are conducted to demonstrate the quality of DiViNet in sparse and disparate view settings. Strengths: • This paper focuses on an important and practical problem. Prior works fail to reconstruct accurate geometries when input views are sparse, while this paper proposes a novel neural template regularization method that achieves good quality even with only three disparate images as input. • The idea of learning surface priors with neural templates is novel, and its effectiveness is also validated in the experiment section. • Some interesting observations are drawn from the experiment part; for example, with sparse view inputs, VolSDF and NeuS even fail to beat COLMAP, while with dense inputs, all the neural methods outperform COLMAP. Weaknesses: • The ability of the learned neural templates to generalize to new scenes is not very clear. On one hand they are learned across different scenes; on the other hand, the authors claim they need to be learned again when deployed to datasets from a different data distribution. A clarification of under what scenarios the learned template functions can be reused would be very helpful. • Since each learned template is represented as a scaled, anisotropic 3D Gaussian, a visualization showing the learned Gaussians for a scene and the corresponding sparse reconstructed point cloud from COLMAP would be helpful. I am especially curious about the positional distribution and the scale of the N_t Gaussians, and whether each query point is affected by only a small number of 3D Gaussians. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: • There are some point-based neural representation works that also use 3D Gaussians as their main geometry primitive; the neural template function in this paper shares a lot of insight with those papers. Including them in the related works and providing some discussion would make the paper stronger. o 3D Gaussian Splatting for Real-Time Radiance Field Rendering. Siggraph 2023 o Neural Point Catacaustics for Novel-View Synthesis of Reflections. ToG 2022 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have discussed potential negative social impacts of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments. > **Q1 - Generalization ability of the Template Prediction Network** This has been addressed in Q1 in the global response. > **Q2 - Visualization of learned templates and COLMAP reconstruction** Please see Fig 4 in the rebuttal pdf for the visualization of the learned templates and COLMAP reconstructed point cloud. Also, we have included many more renderings of templates with the meshes in the supplementary (Fig. 7). > **Q3 - Related Works** Thanks for the pointers. We will discuss relevant works in the revised version.
Summary: This paper addresses the problem of surface reconstruction from sparse input views. The authors adopt an SDF representation parameterized by an MLP. They propose learning a set of neural templates (in the form of 3D Gaussian functions) to serve as anchors in the reconstruction process to help stitch the surfaces over sparse regions. Reconstruction is carried out by optimizing the SDF (MLP) through minimizing the rendering loss and the Eikonal loss. The authors introduce a depth loss and SDF loss, both computed based on the estimated neural templates, to regularize the optimization. Experimental results on the DTU and BlendedMVS datasets show the proposed approach can reconstruct surface details to a reasonable extent from few disparate input views. Strengths: + The template prediction network can be trained end-to-end using RGB reconstruction loss without any 3D supervision. + The predicted templates, which approximate points on the object surface, help to regularize the SDF optimization, allowing the proposed approach to produce more complete reconstructions with reasonable surface details. + The depth loss, which computes the difference between the rendered depth and depth cues obtained from the predicted templates, and the SDF loss, which evaluates the signed distance at template centers, both sound logical and correct in regularizing the SDF optimization. Their effectiveness has been validated through ablation study. Weaknesses: - It is not clear about the generalization ability of the template prediction network. No training details have been provided in the paper. In the evaluations, are the same datasets being used in both training and testing the template prediction network? What would be the performance if the template prediction network trained on one dataset is applied to a completely different dataset? How will this (negatively) affect the reconstruction? - Quite often the reconstructions also include incorrect background surfaces. 
Is that caused by incorrect template predictions? No discussion or analysis is included in the paper. - The description of the template prediction network architecture is very confusing (both in the main paper and the supplementary material). For instance, it is not clear how per-template features are sampled using bilinear interpolation from the extracted feature maps. There is also no explanation for the design of the dimensions (C × M√N_t × M√N_t × M) of the volumetric feature grid. - The formulation in (2) may be problematic. Note a ray will in general intersect an object surface in at least 2 points. This implies more than one template will produce a large value for w_k and therefore colors at multiple surface points will be mixed (as depth is not being considered). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the points in weaknesses. - L_{rgb} in (3) should be L_{color} instead. - \epsilon in (2) is undefined. - \sigma in (9) is undefined. How are \sigma and the signed distance related? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. > **Q1 - Generalization ability of the template prediction network (TPN)** Please refer to Q1 in the global response for the generalization of the template prediction network. > **Q2 - Training details** In addition to the main paper, all the training details have been provided in the supplementary material. We would be happy to add missing training details if specified by the reviewer. > **Q3 - Evaluation of the TPN** For evaluating the TPN, we have a separate test set. Every time the TPN is trained, the training and test set, which are non-overlapping, are part of one dataset (e.g., either DTU or Blended MVS, accordingly). > **Q4 - Background surface in the reconstructed results** Note that this is not specific to, nor a limitation of, our work, since all the previous works such as NeuS, VolSDF, MonoSDF, etc. output such backgrounds. This is a common occurrence in this line of work. That said, we agree that since the TPN is not trained with pixel-accurate ground truth cues such as depth and normal maps (the priors in MonoSDF), there will always be imperfections in the predictions of templates (as can be seen in Fig. 4 of the rebuttal pdf). This results in some additional reconstructions in the background. But this can be easily avoided by using masks from pre-trained segmentation models like Segment-Anything [1] and incorporating them into the loss function using a binary cross entropy loss, as shown in [2]. We will add this discussion to the revision of the paper. > **Q5 - Description of the TPN** We realize some confusion exists in the description of the TPN, which we will describe more clearly in the revised version. Regarding the per-template feature: Given a fixed number of templates that we would like to predict, we create a uniform grid of pixel locations and use that grid to interpolate features using bilinear interpolation. 
Hence, all the template parameters are regressed from fixed anchors in the feature grid. With this process, we get the latent codes for each template, which are then decoded using the $D_{geo}$ and $D_{vox}$ decoders. > **Q6 - The formulation in (2) is problematic** A careful examination reveals that there is no problem with Equation (2). The weight function computed from Gaussians is a weighted sum of the local influences of all predicted Gaussians. And since we use a max operator, we get a single peak along a ray whose depth we use to regularize the surface. [1] Kirillov, Alexander, et al. "Segment Anything.", ICCV 2023 \ [2] Siddiqui, Yawar, et al. "Panoptic Lifting for 3D Scene Understanding with Neural Fields.", CVPR 2023 --- Rebuttal Comment 1.1: Comment: Thank the authors for providing their responses. My concerns on the generalization and evaluation of the TPN are mostly resolved. I still have a few questions regarding the TPN: 1. In l120, why is the voxel grid feature V_k indexed by k? 2. In l124, what is the dimension of the tri-linearly interpolated feature f_{vox}(p)? Is it C x N_t? Is the tri-linear interpolation performed on the M×M×M grid? 3. In l140, is the sparse point cloud reconstructed from 3 views? How many points are reconstructed and used on average? In that case, the point cloud only covers the visible surface, right? This is in fact related to my question on the formulation of (2). If templates are only predicted for one side of the object, then there should be only a single peak along a ray. On the other hand, if templates are predicted for a closed surface, then there should in general be at least two peaks along a ray. 4. In L145, L_{var} tends to make the template "spherical". This seems only suitable for very pointed features but may not approximate general / planar surfaces well. 5. Regarding the per-template features, do you mean that each feature is sampled at a pre-defined grid location on the image feature maps? 
For the case of 576 templates, the grid used would be 24x24? Since the input images are captured at different viewpoints, aggregating features at the same spatial location would not be very meaningful. Please kindly further clarify this step. Thank you. --- Reply to Comment 1.1.1: Comment: Happy to hear that our rebuttal helped. Regarding your detailed questions, please see our point-by-point responses below. > **Q1 - In l120, why is the voxel grid feature V_k indexed by k?** We apologise for the confusion and thanks for pointing this out. We realize that $V_{k}$ should actually be $V$, which we will revise in the next version. The encoder encodes the images into a per-template latent code (as described in the response to your Q5), and once these latent codes are obtained, we decode them into voxel grid features $V$ through transposed convolutions in $D_{vox}$. You can think of $V$ as $N_{t}$ local volumes, each of size $M\times M\times M\times C$, arranged in a *grid* of size $\sqrt{N_t} \times \sqrt{N_t}$. > **Q2 - In l124, what is the dimension of the tri-linearly interpolated feature f_{vox}(p)? Is it C x N_t? Is the tri-linear interpolation performed on the M×M×M grid?** If $X = \sqrt{N_t} \times M$, $Y = \sqrt{N_t} \times M$, and $Z = M$, then tri-linear interpolation takes place on an $X \times Y \times Z$ grid. Hence the dimension of the tri-linearly interpolated feature is just $C$. > **Q3 - In l140, is the sparse point cloud reconstructed from 3 views? How many points are reconstructed and used on average? In that case, the point cloud only covers the visible surface, right? This is in fact related to my question on the formulation of (2). If templates are only predicted for one side of the object, then there should be only a single peak along a ray. 
On the other hand, if templates are predicted for a closed surface, then there should in general be at least two peaks along a ray.** Following the previous works [1,2,3], during training we use the COLMAP point cloud reconstructed from *dense* views of the objects used for training the TPN. With this approach, the TPN will learn how to *interpolate* the sparse regions through templates, which can then be used for regularization. The closed-surface scenario occurs when we have 360° object-centric scenes, in which we might get 2 peaks if the templates are very sharp around the surface (which is not the case as per the training strategy of the TPN). However, this particular case can be easily handled by sorting the peak depths, choosing the peak closer to the camera, and using the same regularization term proposed in Eq. $14$ of the main paper. In addition to this, we would also like to emphasize that these peaks cannot be considered *exact* surface locations; rather, their purpose is to serve as anchors to stitch the surface over sparse regions. The rest of the reconstruction is taken care of by the volume rendering step. Hence, the volume rendering loss, along with the regularization, tries to find the optimal surface under sparse scenarios in our method. Moreover, our empirical analysis shows that there is always a single peak along a ray for the objects taken into consideration in the datasets we experimented with (DTU, BMVS), as mentioned in the response above to Q6. > **Q4 - In L145, L_{var} tends to make the template "spherical". This seems only suitable for very pointed features but may not approximate general / planar surfaces well.** $L_{var}$ encourages all the patches to be of similar sizes. This, along with $L_{rad}$, will prevent the surface from being approximated using only very few *large* templates. 
Along with this, $L_{var}$ is also required for more *stable* training of the TPN, because a skewed radius (in one dimension) will lead to NaNs during training, which $L_{var}$ prevents. There is a trade-off here between more stable training and the variety of surfaces these templates can model, and hence we balance the two. In addition, we do agree that this representation is not a *universal* representation for all types of surfaces; however, 3D Gaussians are a widely adopted representation in computer graphics for shape abstraction [4,5,6], from which we draw inspiration. [1] Roessle, Barbara, et al. "Dense depth priors for neural radiance fields from sparse input views.", CVPR 2022\ [2] Ren, Yufan, et al. "Volrecon: Volume rendering of signed ray distance functions for generalizable multi-view reconstruction.", CVPR 2023\ [3] Long, Xiaoxiao, et al. "Sparseneus: Fast generalizable neural surface reconstruction from sparse views.", ECCV 2022\ [4] Genova, Kyle, et al. "Learning shape templates with structured implicit functions.", CVPR 2019\ [5] Muraki, Shigeru. "Volumetric shape description of range data using “blobby model”." CGI 1991\ [6] Tretschk, Edgar, et al. "Patchnets: Patch-based generalizable deep implicit 3d shape representations." ECCV 2020
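To make the single-peak argument in the exchange above concrete, here is a toy numpy sketch of a max-over-Gaussians template weight evaluated along a camera ray; the template centers and scales are made-up values, not the paper's learned parameters:

```python
import numpy as np

# Made-up anisotropic 3D Gaussian templates: centers c_k and per-axis scales s_k
# (the real templates are predicted by the TPN; these values are illustrative).
centers = np.array([[0.0, 0.0, 2.0],
                    [0.3, 0.1, 2.4]])
scales = np.array([[0.2, 0.2, 0.2],
                   [0.3, 0.1, 0.2]])

def template_weight(p):
    """Max over templates of an unnormalized Gaussian influence at point p."""
    d = (p[None, :] - centers) / scales        # axis-scaled offsets, (N_t, 3)
    w = np.exp(-0.5 * np.sum(d * d, axis=1))   # per-template influence
    return w.max()                             # max operator -> one value per point

# March along a camera ray; the depth where the weight peaks serves as the
# anchor used to regularize the rendered depth.
origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
depths = np.linspace(0.5, 4.0, 200)
weights = np.array([template_weight(origin + t * direction) for t in depths])
anchor_depth = depths[weights.argmax()]
assert abs(anchor_depth - 2.0) < 0.05  # peak at the template center on the ray
```

In this toy setup the ray passes through the first template's center, so the weight profile has a single dominant peak there, matching the intuition that the peak depth is only an anchor for stitching the surface, not an exact surface location.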
Summary: This work presents a volume rendering-based sparse-view neural surface reconstruction method. For the hard sparse-view reconstruction problem, the authors propose to learn neural templates as surface priors to guide the learning of neural fields. The results on DTU and Blended MVS are better than NeuS and MonoSDF. Strengths: The idea of learning templates for enhancing sparse-view reconstruction is novel and makes sense. Weaknesses: This paper does not compare with the most related works on sparse view reconstruction like SparseNeuS and VolRecon. Comparing only with NeuS / MonoSDF is not convincing, since these methods do not have special designs for sparse view reconstruction. The results on dense view reconstruction are quite poor, as shown in Fig. 4. My concern lies in the poor results in dense view reconstruction, i.e., the artifacts in Fig. 4, where all the baselines perform better. The quantitative comparisons shown in Tab. 2 further deepen my concerns, where the SOTAs this paper does not compare against already reduce CD to less than 0.55 (e.g., Geo-NeuS), whereas this paper only achieves results comparable to NeuS (0.84 vs. 0.79), which was published two years ago. I do not think that choosing MonoSDF as the main baseline is convincing, since MonoSDF is mainly designed for scene-level reconstruction, where the monocular priors of MonoSDF are not suitable for object-level reconstruction (e.g., DTU). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Why not follow the experimental settings of SparseNeuS (ECCV 22)? Is the proposed method also suitable for scene-level multi-view reconstruction (e.g., Replica/ScanNet)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses above. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments. >**Q1 - No comparisons to SparseNeuS** We reiterate that our framework is designed to excel at reconstruction from sparse (i.e., few in number) *and* wide-baseline/disparate (i.e., little overlap) view images. Due to the latter criterion, we did not show comparisons to SparseNeuS, whose performance hinges on having sufficient overlap between the input views. We contacted the authors of SparseNeuS, who confirmed that disparate view scenarios require significant architectural modifications. We did conduct experiments that showed that if the input views were disparate, then SparseNeuS did not converge. >**Q2 - No comparisons to VolRecon** In the case of VolRecon, surface reconstruction occurs through point cloud fusion from different views, rather than a single holistic reconstruction, thereby making the reconstruction vulnerable to outliers under disparate scenarios. This can be observed in our quantitative results in the table below and the qualitative results in the rebuttal PDF (Fig. 1). Note that the metric used is Chamfer Distance (CD ($\downarrow$)).

| Scan ID $\rightarrow$ | 24 | 37 | 40 | 55 | 63 | 65 | 69 | 83 | 97 | 105 | 106 | 110 | 114 | 118 | 122 | Mean |
|:---------------------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| VolRecon | 3.59 | 4.16 | 4.12 | 3.2 | 3.56 | 3.76 | 2.46 | 2.44 | 2.57 | 2.66 | 2.75 | 3.75 | 1.6 | 3.0 | 2.16 | 3.05 |
| Ours | **3.37** | **4.11** | **1.46** | **0.75** | **2.74** | **1.52** | **1.13** | **1.63** | **2.08** | **0.98** | **0.87** | **0.87** | **0.47** | **1.24** | **1.57** | **1.77** |

> **Q3 - The artifacts in Fig.4, where all the baselines perform better** It is true that there are some artifacts in the *background* region in the dense view reconstruction. However, the surface reconstruction accuracy on the object itself is comparable, if not superior, to that of the baselines shown in the figure. 
This can be verified from the quantitative results of Table 2 in the main paper. > **Q4 - Dense-view results and GeoNeuS** Please see Q2 in the global response for explanations about dense-view reconstruction results. GeoNeuS uses photometric consistency and a sparse reconstructed point cloud from COLMAP to regularize the reconstruction. In the case of scenes captured in the wild, COLMAP reconstruction is often noisy, and using this point cloud for regularization can negatively affect the reconstruction, as observed in [1,2]. In contrast, DiViNet uses template priors that are trained across data. Such data-driven priors have been shown to be immune to outliers and noise [3]. We validate this by comparing our results with GeoNeuS on dense views on the MobileBrick dataset [4]. The qualitative results are in the rebuttal PDF (Fig. 3) and the quantitative results are in the table in the global response Q2. > **Q5 - Scene-level reconstruction** Our templates are only trained for object-level reconstruction. We do not currently try to reconstruct indoor scenes. One challenge, as mentioned in the response to R#jSFe, is that the template prediction network relies on the COLMAP reconstructed point cloud to effectively learn surface priors. However, indoor scenes comprise large textureless regions, on which COLMAP reconstruction fails, as observed in [4]. Hence, training the template network becomes a challenge. We agree that this is an interesting direction for future work: investigating how Gaussian templates can be used for reconstructing indoor scenes. [1] Zhang, Jingyang, et al. "Critical Regularizations for Neural Surface Reconstruction in the Wild", CVPR 2022 \ [2] Li, Zhaoshuo, et al. "Neuralangelo: High-Fidelity Neural Surface Reconstruction", CVPR 2023 \ [3] Huang, Jiahui, et al. "Neural Kernel Surface Reconstruction", CVPR 2023 \ [4] Wang, Yusen, et al. 
"Neuralroom: Geometry-constrained Neural Implicit Surfaces for Indoor Scene Reconstruction", arXiv 2022 --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: Thanks for the rebuttal. I am still not convinced by the authors' response excluding SparseNeuS from comparison. Where are the results behind the authors' claim that they "did conduct experiments that showed that if the input views were disparate, then SparseNeuS did not converge"? In my experiments, SparseNeuS may not produce good results under little overlap of 3 views, but it can still converge and will not crash. I think the authors should share the results that "do not converge" and analyse them. Also, it would be much more convincing to conduct experiments on both large-overlap and little-overlap settings, where you can provide a fair comparison with previous works under "large overlap" and also show your special ability in dealing with the new "little overlap" setting. It is not suitable to just create a new setting with "little overlap" and show no comparisons to the previous methods of sparse view reconstruction due to different experiment settings. Still, the authors did not respond to my concern: "I do not think that choosing MonoSDF as the main baseline is convincing, since MonoSDF is mainly designed for scene-level reconstruction, where the monocular priors of MonoSDF are not suitable for object-level reconstruction (e.g., DTU)." --- Reply to Comment 1.1.1: Comment: Thank you for the additional questions. > **Q1 - I am still not convinced by the authors' response excluding SparseNeuS from comparison. Where are the results behind the claim that the authors "did conduct experiments that showed that if the input views were disparate, then SparseNeuS did not converge"? In my experiments, SparseNeuS may not produce good results under little overlap of 3 views but can still converge and will not crash. I think the authors should share the results that "do not converge" and analyse them.** We appreciate your attention to detail. 
We did conduct experiments with SparseNeuS in the disparate view setting. In the first step, we trained the entire model from scratch in this new setting, using the default hyperparameters. However, during the fine-tuning step, what we experienced was that the one-shot output from the trained model was a very noisy mesh, and as the fine-tuning iterations progressed, the mesh vanished. Since it did not generate any mesh upon fine-tuning, we decided not to show the results. In retrospect, we should have provided such details instead of simply calling it a non-convergence. Since the result was essentially a “failure”, we contacted the first author of the SparseNeuS paper just to be sure. He did kindly reply and remarked, “If your training images are not overlapped, sparseneus won't work, since sparseneus heavily relies on the matching information of overlapping images.” Lastly, please note that our rebuttal did provide a comparison to VolRecon, a follow-up of SparseNeuS which performs better. We believe this comparison is likely more meaningful. Still, if requested, we are happy to show any qualitative results we could obtain from SparseNeuS in the revision. > **Q2: Also, it would be much more convincing to conduct experiments on both large-overlap and little-overlap settings, where you can provide a fair comparison with previous works under "large overlap" and also show your special ability in dealing with the new "little overlap" setting. It is not suitable to just create a new setting with "little overlap" and show no comparisons to the previous methods of sparse view reconstruction due to different experiment settings.** Our work focuses on the reconstruction problem with sparse and disparate input views. Hence, foremost, our experiments were conducted to convince the readers of this, as shown in Table 1 and Figs. 1, 3, and 5. 
For *completeness*, we also provided results and comparisons under the dense view setting, where our method was not the best, but close to being so. Note that we did not make any claim that our method ought to be the best. The main point we wanted to convey is that our method can achieve the best disparate view reconstruction without significantly sacrificing quality when the views are dense (and with large overlaps). The reviewer is correct that the only input setting we missed is a few views with large overlaps, as in SparseNeuS. But we made no claim about the superiority of our method in this setting either. Since there was no intent on our part to convince the reader of this, we do not think there is any unfairness in not providing comparisons to SparseNeuS or other methods in this particular setting. Again, for *completeness*, we can surely provide such comparisons in the supplementary material. Regardless of where our method places, missing such an experiment is inessential and inconsequential to the main selling point of our work.
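For reference, the Chamfer Distance (CD) metric reported in the tables of this exchange can be sketched as follows; this is an illustrative symmetric variant (benchmarks differ in whether they average or sum the two directions and in scaling), and the point sets are synthetic stand-ins:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between (n, 3) and (m, 3) point sets:
    mean nearest-neighbor distance from a to b plus from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
gt = rng.uniform(size=(300, 3))          # stand-in ground-truth surface samples
assert chamfer_distance(gt, gt) == 0.0   # identical clouds score zero
assert chamfer_distance(gt + 0.1, gt) > chamfer_distance(gt, gt)  # shifted cloud scores worse
```

Lower is better, which is why the $\downarrow$ arrow accompanies the CD columns in the tables above.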
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful comments. It is encouraging to see that the reviewers find the addressed problem important (R_4rRP), with a novel (R_jSFe, R_4rRP, R_hUMT) and technically sound (R_bjQu) proposed solution that circumvents the requirement of explicit cues (R_4rRP) and/or 3D supervision (R_Ahim), while achieving high-quality 3D neural surface reconstruction (R_Ahim, R_jSFe). > **Q1 - Generalizability of the neural template network (TPN) (R#Ahim, R#4rRP, R#bjQu)** We appreciate the reviewers for raising the issue of generalization to new data distributions. We also realize now that our remark at line 274 in the paper may have cast some doubts about our TPN on this front. This particular remark was made *in principle*, since the templates were learned from, and are characteristic of, the training data. We now actually test the generalizability of our TPN on a new dataset, namely MobileBrick [1], by comparing the reconstruction results when the neural templates were learned by a pre-trained TPN (on DTU) vs. when they were trained on MobileBrick. Note that overall, the models from these two datasets are quite different in terms of geometry and structure. The last two columns in the table below and the qualitative results in Figure 2 (see rebuttal PDF) show that the reconstruction qualities under the two scenarios are comparable, attesting to the generalizability of our TPN. Furthermore, we compare the generalizability of our neural reconstruction framework to that of MonoSDF (original code, with default hyperparameters), under sparse+disparate input views, tested on MobileBrick. The second column of the table and Figure 2 in the rebuttal pdf show that the reconstruction results by MonoSDF fall significantly behind those by both versions of our method. We will include these experiments and results in the revision and then revise our remark on line 274 about generalizability accordingly. 
Note that we chose MonoSDF for comparison since it is the closest approach to DiViNet in spirit, i.e., both overfit to an input with the reconstruction assisted by a learned prior, and both are applicable to the sparse input setting. However, the priors employed by the two methods are quite different – in the case of MonoSDF, priors come in the form of depth and normal maps, pre-trained on a large-scale dataset with 3D ground-truth supervision, while our work uses Gaussian templates trained on DTU, without requiring any such supervision. In principle, our TPN can benefit from more data and a more sophisticated architecture design. The table below demonstrates the generalizability of our method when applied to test models from MobileBrick, with the TPN pre-trained on DTU vs. trained on MobileBrick, along with a comparison to MonoSDF. The metric used is the F1 score, as reported by the MobileBrick benchmark. | | MonoSDF | Ours (pre-trained TPN) | Ours (TPN re-training) | |:---------:|:-------:|:--------------------------:|:--------------------------:| | Bridge | 0.06 | 0.565 | **0.658** | | Camera | 0.282 | 0.61 | **0.67** | | Colosseum | 0.055 | 0.219 | **0.22** | | Castle | 0.019 | 0.175 | **0.187** | >**Q2 - Reconstruction quality under dense-view scenarios (R#hUMT, R#bjQu)** Several reviewers pointed out that our method is not the best performer under dense input views, e.g., placing #2 in Table 2 of the main paper. To this, let us state first and foremost that dense-view reconstruction is *not* our focus. More importantly, one probably should not expect a single method to be the best performer in both sparse *and* dense view settings. We believe that for a method to truly excel in one setting, it may rely on specific priors or inductive biases that differ from those necessary in the other setting. For example, multiview consistency is applicable for dense but not disparate views. 
Having said the above, we now show, based on reviewer requests, that the performance of our method on dense input views is stronger than the current results may suggest. First, we compare to GeoNeuS (R#hUMT), which we initially omitted since it is specifically designed for dense-view scenarios. We tested GeoNeuS on MobileBrick and it *underperforms* compared to DiViNet, demonstrating the robustness of our approach across datasets, even on dense views. The quantitative results are shown in the table below. As per the evaluation protocol of MobileBrick, we use the F1 score ($\uparrow$ is better) to quantify reconstruction accuracy. | | Bridge | Camera | Colosseum | Castle | |:------------------:|:------:|:------:|:---------:|:------:| | GeoNeuS | 0.74 | 0.73 | 0.36 | 0.457 | | Ours | **0.915** | **0.846** | **0.415** | **0.572** | Overall, these new results demonstrate that our solution extends to dense-view scenarios without significantly compromising result quality in comparison to the state of the art. [1] Li, Kejie, et al. "MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices.", CVPR 2023. Pdf: /pdf/6813cabda6010b8f294831542c83e40360e04085.pdf
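The F1 score used in the MobileBrick comparisons above combines a precision term (predicted surface points close to the ground truth) and a recall term (ground-truth points covered by the prediction). A minimal sketch of this metric on point clouds follows; the distance threshold `tau` and the brute-force nearest-neighbor search are illustrative assumptions, not the benchmark's actual implementation (which sub-samples meshes and uses spatial data structures):

```python
import numpy as np

def f1_score(pred, gt, tau=0.005):
    """F1 reconstruction metric over two (N, 3) point clouds.

    precision: fraction of predicted points within tau of the ground truth;
    recall: fraction of ground-truth points within tau of the prediction.
    Brute-force O(N*M) distance computation, for illustration only.
    """
    # Pairwise Euclidean distances, then nearest-neighbor distance per point.
    d = np.sqrt(((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1))
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect reconstruction (identical point sets) yields an F1 of 1.0, and a prediction entirely farther than `tau` from the ground truth yields 0.0.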
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors propose a framework for sparse-view 3D reconstruction from disparate views. A two-stage approach is presented for reconstructing a scene from posed sparse images. In the first stage, a template is predicted from the sparse images, represented by a number of parametric 3D Gaussians. The second stage uses the predicted template to reconstruct the scene from the sparse views. An SDF representation is used to represent the geometry. State-of-the-art performance is shown on shape reconstruction and novel view synthesis. Strengths: 1. **Clarity**: The paper is very well written with great attention to detail. Each component is adequately motivated. The approach section is built up in a very methodical and thorough manner. 2. **Reproducibility**: The exact implementation details and training specifics are made clear, aiding the reproducibility of the proposed approach. 3. **Results**: The qualitative results, especially for shape recovery, are very compelling, particularly for scenes with wide baselines. 4. **Novelty**: The use of Gaussian templates to guide the reconstruction of the scene under a sparse setting is a simple and elegant idea that is also easy to incorporate into existing frameworks as a form of regularization. This approach would serve as an important and strong baseline for sparse-view 3D reconstruction methods. 5. **Quantitative analysis**: The approach has been validated against a variety of contemporary approaches, and state-of-the-art performance is shown on Chamfer distance for the recovered surface. Weaknesses: 1. **Assumption on 3D information for stage 1**: The losses used to train the template prediction network rely on datasets that have point clouds from COLMAP. Does this limit the applicability mainly to the kinds of scenes where COLMAP provides enough reconstruction information? Providing some more details about this prior would be helpful. 2. 
**Evaluation**: Although quantitative metrics are provided for geometry, also including image-level metrics like PSNR/SSIM/LPIPS for the novel view synthesis task would help strengthen the narrative of the evaluation section. 3. **Effect of the number of template Gaussians**: An ablation study showing how the number of Gaussians is chosen, and the effect this number has on reconstruction quality, would be instructive. 4. **Additional ablations**: Quantitative ablations are provided for CD as a function of the number of input views. However, the manuscript would benefit from the ablations below: > - *Quantitative ablation* showing the effect of different regularization terms (particularly the SDF constraint and the depth constraints). > - *Quantitative/qualitative ablation* showing direct optimization of posed sparse views without needing stage 1. > - *Qualitative ablation* showing the importance of $L_{cov}$, $L_{radius}$ and $L_{var}$ in stage 1. 5. **Video results**: Although not strictly necessary, including turntable video results of the recovered geometry in the supplementary material would help demonstrate the efficacy of the approach better. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. $D_{geo}$ essentially takes the feature map and predicts the center, radius and scale. Does this imply that the template is based only on global-level information of the scene, since all local information is lost? 2. The exact specifics of how the spatial resolution of the feature map is flattened for $D_{geo}$ are unclear. Is there a pooling step between the feature layer and the geometry decoder? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Adequate treatment of the limitations of the approach has been provided. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
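The Chamfer distance (CD) on which the review's quantitative analysis is based can be sketched as follows; this is a minimal brute-force version of the symmetric metric, whereas benchmark implementations typically sub-sample the surfaces and use KD-trees for the nearest-neighbor queries:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbor distance from a to b, plus from b to a.
    Brute-force O(N*M) pairwise distances, for illustration only."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical point sets give a CD of 0, and the metric penalizes both missing geometry (large b-to-a term) and spurious geometry (large a-to-b term).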
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. > **Q1 - Assumption on 3D information for stage 1** Yes, currently, our template prediction network requires losses with respect to COLMAP reconstructed point cloud for it to learn the surface priors effectively. Hence, the quality of the learned templates largely depends on the quality of the reconstructed point cloud provided for supervision during training, where the predictions of the template network will be impacted if severe holes exist in the point cloud due to texture-less regions. However, we would also like to emphasize that all the methods which make use of COLMAP reconstruction make use of this assumption, e.g., [1]. > **Q2 - Evaluation** In the below table, we now provide the requested image-level metrics (PSNR/SSIM/LPIPS) for novel view synthesis. Note that MonoSDF only reports PSNR evaluation. As visible, our PSNR outperforms MonoSDF by a significant margin. | | PSNR ($\uparrow$) | SSIM ($\uparrow$) | LPIPS ($\downarrow$) | |:-------:|:-----------------:|:-----------------:|:--------------------:| | MonoSDF | 23.64 | - | - | | Ours | **24.34** | 0.7208 | 0.264 > **Q3 - Effect of number of templates Gaussians** We agree that this ablation study is valuable. Due to the required training time for these ablations, it was not feasible to produce them as part of the rebuttal. However, we are currently running them and will add the new results to the revision. > **Q4 - Additional ablations** We provide the quantitative evaluation showing the effect of different regularization terms for the object scan 65 shown in the paper. As shown in the table, using both constraints during regularization gives the best result. 
| | Chamfer Distance (CD) ($\downarrow$) | |:----------------------:|:------------------------------------:| | Only SDF Constraint | 2.70 | | Only depth constraint | 1.96 | | SDF + Depth Constraint | **1.52** | We are currently running the remaining two ablations, comprising direct optimization of posed sparse views without needing stage 1 and the importance of the loss terms in stage 1. We will add the new results to the revision. > **Q5 - Video Results** We will provide a video demonstrating the reconstruction quality of the meshes in the revision. > **Q6 - Clarifications about $D_{geo}$** Yes, the templates are based on the global information of the scene since $D_{geo}$, the module responsible for predicting the template parameters, consumes a feature map that encodes the global scene information from the input image. Such global guidance for sparse-view surface reconstruction is *exactly* the motivation for our work, and the Gaussian templates parameterize this guidance. We will clarify this in the revision. > **Q7 - Clarification about feature mapping in $D_{geo}$** Yes, it is a kind of pooling achieved by bilinear interpolation. We encode the image into feature maps of fixed size and, based on the number of templates, create a uniform grid that is used to interpolate the features bilinearly. In this way, we obtain the latent codes for each template (aggregated locally), which are then decoded by the $D_{geo}$ and $D_{vox}$ decoders. We will clarify this in the revision. [1] Xu, Qiangeng, et al. "Point-nerf: Point-based neural radiance fields.", CVPR 2022.
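The bilinear pooling described in Q7 (sampling a fixed-size feature map at a uniform grid to obtain one latent code per template) can be sketched as below. The shapes, the square grid layout, and the function name `template_latents` are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def template_latents(feat, n_templates):
    """Bilinearly sample an (H, W, C) feature map at a uniform
    sqrt(K) x sqrt(K) grid of locations, yielding one C-dim latent
    code per template (K = n_templates). Illustrative sketch only."""
    H, W, C = feat.shape
    k = int(round(n_templates ** 0.5))
    assert k * k == n_templates, "assume a square grid of templates"
    ys = np.linspace(0, H - 1, k)
    xs = np.linspace(0, W - 1, k)
    latents = np.empty((n_templates, C))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # Four neighboring cells and bilinear weights.
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
            wy, wx = y - y0, x - x0
            latents[i * k + j] = (
                feat[y0, x0] * (1 - wy) * (1 - wx)
                + feat[y0, x1] * (1 - wy) * wx
                + feat[y1, x0] * wy * (1 - wx)
                + feat[y1, x1] * wy * wx
            )
    return latents
```

Each latent aggregates the feature map locally around its grid location, which matches the rebuttal's description of per-template codes that are then decoded into Gaussian parameters.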
null
null
null
null
null
null