Dataset schema (field name: dtype):
title: string
paper_decision: string
review_1: string
rebuttals_1: string
review_2: string
rebuttals_2: string
review_3: string
rebuttals_3: string
review_4: string
rebuttals_4: string
global_rebuttals: string
dataset_source: string
conference_year: int64
review_5: string
rebuttals_5: string
review_6: string
rebuttals_6: string
review_7: string
rebuttals_7: string
review_8: string
rebuttals_8: string
Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off
Accept (poster)
Summary: The paper discusses the impact of time discretization on estimating cost functions for continuous-time stochastic optimal control problems. The authors consider a discrete-time Monte Carlo estimator based on the left-rule formula for numerical integration. Under the LQG assumption, the authors derive closed-form results for the mean square error (MSE), which depend on the sample size and the integration step size. The authors provide a numerical evaluation based on their modeling assumptions and propose hypotheses on how to extend their findings to general nonlinear systems. Strengths: A large class of problems is naturally described as continuous-time systems, making investigations on this topic important. The paper effectively demonstrates how time discretization impacts the estimation of continuous-time quantities, such as the cost function. The work is well written and easy to follow. The results derived in this work are well presented and properly justified. Weaknesses: I believe the main weakness lies in the significance of the contribution itself. It is not particularly surprising to me that closed-form results for the MSE under the LQG assumption can be found. Additionally, it is unclear to me when this method should be used. For instance, if one knows they have an LQG problem, they already know that the value function is quadratic and that its coefficients can be computed by estimating the dynamics parameters. Furthermore, the numerical evaluation for the nonlinear systems appears somewhat weak, and I am not sure whether the results can be easily transferred from the linear to the nonlinear regime. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
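The left-rule Monte Carlo estimator this review refers to can be sketched as follows. This is an illustrative reconstruction only, not the authors' code: the 1-D Langevin dynamics, the quadratic cost, and all parameter values (`a`, `sigma`, `q`, the initial state) are placeholder assumptions.

```python
import numpy as np

def mc_value_estimate(h, T, n_episodes, a=-1.0, sigma=0.5, q=1.0, rng=None):
    """Left-rule (left-endpoint) Monte Carlo estimate of an integral cost
    for a 1-D linear SDE dx = a*x dt + sigma dW with running cost q*x^2.
    Placeholder dynamics and cost, simulated with Euler-Maruyama."""
    rng = rng or np.random.default_rng(0)
    n_steps = int(T / h)
    returns = []
    for _ in range(n_episodes):
        x, total = 1.0, 0.0
        for _ in range(n_steps):
            total += q * x**2 * h                          # left-rule quadrature
            x += a * x * h + sigma * np.sqrt(h) * rng.normal()
        returns.append(total)
    return float(np.mean(returns))
```

Under a fixed data budget (episodes times steps per episode), shrinking `h` reduces the quadrature approximation error but forces fewer episodes, inflating the Monte Carlo variance: the trade-off the paper characterizes.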
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time reviewing our work and for providing valuable feedback. We respectfully note that the formulation of MSE under a fixed data budget has not been explored before. While it may be reasonable to anticipate a closed-form MSE under this formulation, we believe that the process of devising this formulation and developing an exact characterization of the trade-off w.r.t. the step-size constitutes a significant contribution, even if it appears intuitive in retrospect.\ Note also that the aim of the paper is not to provide a method for estimating the value-function in the LQG setting, but rather to characterize an overlooked phenomenon affecting any continuous stochastic system. Our experiments on nonlinear systems in standard benchmarks extend beyond the typical scope of experiments in the LQG literature. These were designed to assess how well the theoretical findings could still hold when certain assumptions were not met. Interestingly enough, the results seem to extend quite remarkably to this more challenging scenario. This suggests that our findings may have broader applicability than the specific conditions under which our theoretical analysis was established.\ We hope that this clarifies the purpose and the scope of the evaluation for nonlinear systems, in response to the comment that it “appears somewhat weak”. If it does not address your concern, we kindly invite you to provide further clarification. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their clarifications. I still believe this is a solid piece of work and a "moderate-to-high impact paper"; therefore, I will keep my score.
Summary: In the reinforcement learning (RL) setting, many problems are cast into a fixed discrete-time sampling of the true underlying system. This paper investigates, given a fixed data budget, the optimal sampling rate in terms of temporal resolution to balance the trade-off between approximation error and statistical estimation error. The exact form of these errors is derived for the MSE of a Monte Carlo estimator in a Langevin process, for both the finite- and infinite-horizon settings. This shows that there is an optimal choice of temporal resolution that balances these errors, which is verified on simulated linear quadratic systems. Similar trade-offs are found in empirical (nonlinear) environments from Gym and MuJoCo. Strengths: This is a well-written paper which looks at a problem in value estimation that does not appear to have been explicitly studied at this level before. There is detailed theoretical analysis for the case of a Langevin process, and the results are interpreted in a practical way. The demonstration of these theoretical results in numerical simulations in Section 4.1 is very clear. Verification of similar effects in empirical environments is useful to see. Overall, this paper contributes to the literature on additional considerations that should be made in the continuous-time RL setting, and it would be useful for future developments in this area. Weaknesses: In practice, the policy varies at the same time, and perhaps the same step size should not be used over the course of training. Therefore the practical impact of this paper on implementations may be limited. For Figure 3, it seems misleading to fit a line matching $h = \mathcal{O}(B^{-1/3})$ for the empirical experiments. Particularly given lines like that of the InvertedDoublePendulum, clearly a different gradient should be considered. 
The observations on these empirical experiments are interesting, but there is no need to force the results on these nonlinear systems to match the analysis in the previous section. In setting the step-size $h$, this relies on the ability to perform experiments on a smaller data budget first, which may not always be possible in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How might an adaptive sampling procedure affect the results? It is assumed that the estimation horizon is the same length as the episode; this seems like a strong assumption which does not hold in a lot of applications (i.e. you do not know how long the episode is a priori, nor are the episodes the same length). How would the results in the finite-horizon case change in this setting? Should the $h$ on line 165 be $h^*$? Can you clarify where the MSE_T between lines 168 and 169 comes from? The second term on the right-hand side of the MSE_T has denominator $hB$; will this assume $h = \mathcal{O}(B^{-1/3})$, so that the second term is $\mathcal{O}(B^{-2/3})$ and hence it tends to 0? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The theoretical results obtained seem to assume a fixed, pre-set observation/estimation horizon T; but since episodes are not of fixed length in RL settings, and indeed may lengthen as the policy improves, this may affect the validity of the theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time reviewing our work and for providing valuable feedback. We will correct the typos pointed out by the reviewer. In the following, we address the main concerns. > Can you clarify where the $\text{MSE}_T$ between lines $168$ and $169$ comes from? `Re:` It was obtained by plugging $h^*$ from Eq. ($9$) into the $\text{MSE}_T$ in Corollary $3.2$. > The second term on the right-hand side of the $\text{MSE}_T$ has denominator $hB$; will this assume $h = \mathcal{O}(B^{-1/3})$, so that the second term is $\mathcal{O}(B^{-2/3})$ and hence it tends to $0$? `Re:` Note that the expression for the optimal $h$ is indeed $\mathcal{O}(B^{-1/3})$, therefore the conclusion holds. The optimal $\text{MSE}_T$ tends to $0$ as $B\to\infty$. The intuition is that, given unlimited data ($B\to\infty$) and finite horizon $T$, the approximation error and variance both reduce to $0$, due to $h\to0$ and having infinitely many episodes. > Figure $3$ `Re:` Figure $3$ was meant to illustrate the comparison between a line fitted on data with a fixed order in $B$, given by the theoretical analysis, and experimental data points. We did not intend to force the results: InvertedDoublePendulum was indeed a negative example. The chart was meant to show that the approximation given by knowledge of the order in $B$ is good, but not always accurate. We will state this more clearly in further revisions to improve clarity. > "estimation horizon is the same length as the episode" `Re:` Perhaps we misunderstand the reviewer’s point: we explicitly analyzed the discounted infinite-horizon case, where the estimation horizon $T$ is necessarily different from the system horizon $\tau$, and so we did not assume the estimation horizon is the same length as the episode. ### Non-uniform horizon This is an interesting issue. 
We first note that the technical analysis is grounded in LQR systems, where the conventional setting involves a fixed episode horizon, either finite or infinite (with discounting), without considering early termination.\ In the nonlinear experiments, we utilized the benchmark MuJoCo environments, which have a fixed time limit. Coupled with our assumption of a stable policy, this ensures that all episodes maintain consistent lengths. Hence a uniform horizon aligns well with the scope of our work in both the theoretical analysis and the experimental results.\ While we acknowledge the potential interest in extending our analysis to scenarios with non-uniform horizons, this would necessitate a different formulation, potentially adopting the approach proposed in [Poiani et al. 2023]. We would like to explore this direction in future work. ### Adaptive sampling An adaptive sampling procedure, similar to those adopted in solving SDEs [Ilie et al. 2015], could represent an interesting direction for future work. Nonetheless, there are a few factors that complicate its applicability in the current setting.\ From a practical perspective, the most important difficulty lies in the additional hyperparameters, which need to be fine-tuned.\ With regard to the analysis, establishing a theoretical framework for investigating non-uniform step-sizes is not straightforward, as it involves additional degrees of freedom: the number of steps in each trajectory need not be consistent, making the optimal step-sizes under such a setting not necessarily unique. **References:** Poiani, R., Metelli, A. M., & Restelli, M. (2023). *Truncating Trajectories in Monte Carlo Reinforcement Learning.* arXiv preprint arXiv:2305.04361. Ilie, S., Jackson, K. R., & Enright, W. H. (2015). *Adaptive time-stepping for the strong numerical solution of stochastic differential equations.* Numerical Algorithms, 68(4), 791-812. 
--- Rebuttal Comment 1.1: Comment: Thank you to the authors for responding to my comments and providing further references; it has been useful. Apologies that I wasn't very clear before: my comment about the estimation horizon was also related to the non-uniform episodes, I believe (and is therefore addressed in your comments above). I think this is a moderate-to-high impact paper that should be accepted, so I will leave my score as 6.
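The scaling discussed in this exchange can be checked numerically. Assuming the stylized form $\text{MSE}(h) = a h^2 + b/(hB)$ with placeholder constants $a, b$ (chosen here only for illustration, but consistent with the orders quoted above), setting the derivative to zero gives $h^* = (b/(2aB))^{1/3} = \mathcal{O}(B^{-1/3})$, and both terms at the optimum are $\mathcal{O}(B^{-2/3})$:

```python
import numpy as np

a, b = 2.0, 3.0  # placeholder constants in MSE(h) = a*h**2 + b/(h*B)

def h_star(B):
    # d/dh [a*h**2 + b/(h*B)] = 2*a*h - b/(h**2 * B) = 0  =>  h**3 = b/(2*a*B)
    return (b / (2 * a * B)) ** (1.0 / 3.0)

def mse(h, B):
    return a * h**2 + b / (h * B)

for B in [1e3, 1e6]:
    h = h_star(B)
    grid = np.linspace(0.2 * h, 5 * h, 2001)       # brute-force sanity check
    h_grid = grid[np.argmin(mse(grid, B))]
    assert abs(h_grid - h) / h < 1e-2
# increasing B by a factor of 8 halves the optimal step (the B^(-1/3) law)
assert abs(h_star(8e3) / h_star(1e3) - 0.5) < 1e-9
```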
Summary: The authors study the impact of time discretization on RL methods in order to improve data efficiency, and show that data efficiency can be significantly improved by leveraging a precise understanding of the trade-off between approximation error and statistical estimation error in value estimation. They conduct a theoretical analysis followed by numerical experiments on MuJoCo environments to demonstrate value. Strengths: Quality and Clarity: The paper is well written and presents a clear and detailed theoretical framework to study the effect of time discretization on data efficiency for control problems. The derivations are easy to follow, with the necessary background provided in the supplementary materials. Originality: Marginal improvements are demonstrated by providing the necessary theoretical foundation, which is missing from some prior work like Lutter et al., as pointed out by the authors. Significance: The paper addresses an important and relevant problem of improving data efficiency for data-hungry RL algorithms. Weaknesses: Sample efficiency is a huge pain point for solving control tasks in the real world, where collecting large amounts of data is costly, infeasible or even dangerous. A key weakness I found is the fidelity of this approach in real-world scenarios. I found the theory to be sound, but built on a lot of simplifying assumptions that do not hold true in the real world. So although the work is interesting, I am not sure how strongly these results will hold on real-world data, where the noise isn't additive or the model isn't linear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No specific questions. All concerns are addressed in the paper+supplementary materials. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have done a good job of stating the limitations, which I agree with: (a) it would be interesting to see how this work extends to more advanced techniques than MC estimation, like TD learning; (b) secondly, studying the full control setting that includes policy optimization; and (c) simplifying assumptions like linear models with additive noise are considered, but in the real world these assumptions don't hold, where observations are noisy and systems are non-linear. So how this analysis extends to real-world sample efficiency for RL algorithms is unclear from the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time reviewing our work and for providing valuable feedback. While it is true that the technical analysis holds only in the case of linear dynamics, we explicitly conducted experiments in nonlinear systems to understand whether the theoretical findings could still hold when certain assumptions were not met. Note indeed that in the nonlinear MuJoCo environments, the setting is significantly different from the one considered in the exact analysis. Nonetheless, the general results seem to extend quite remarkably to this more challenging scenario. This suggests that the results may have broader applicability than the specific conditions under which the theoretical analysis was established. \ Many nonlinear behaviors can be approximated by a high-dimensional linear system, which would nevertheless be bounded by our most general results on $n$-dimensional systems, hinting at the fact that similar trade-offs could characterize nonlinear systems as well, as demonstrated through numerical analysis. We believe that this work broadens the perspective and provides valuable opportunities for further exploration and development. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my questions and concerns. I will keep my score and vote for acceptance.
Summary: The paper studies the optimal temporal discretization level to achieve the best Mean Square Error (MSE) in continuous value estimation, considering a fixed budget constraint (number of total Monte-Carlo samples). The paper provides theoretical analysis on a 1-dimensional Langevin dynamical system with quadratic costs and derives the asymptotic optimal step-size under both finite and infinite horizon settings. Extensive numerical studies, including non-linear MuJoCo environments, support the theoretical findings and reveal the bias-variance tradeoff associated with the choice of temporal discretization level. Strengths: Motivation and clarity: - The paper presents a clear motivation and an interesting problem setup regarding temporal resolution in continuous value estimation. - The paper is well-written with a clear structure. Key messages, such as the problem setup, objectives, key theoretical results, and numerical result figures, are easily understandable. Quality of the paper: - Section 3 delves deep into a special case of a 1-dimensional Langevin Process, fully characterizing the MSE and providing insights into the optimal step-size. - Section 4 builds upon the results from Section 3 and provides extensive numerical studies in various environments. - The conclusion of the paper effectively demonstrates the limitations and shows a deep understanding of the topic, as well as potential avenues for future research. Weaknesses: Limitation of the scientific impact: - My major concern lies in the limitation of the impact, as also mentioned in the paper's conclusion. The optimal resolution results are constrained to Naive Monte-Carlo estimation, which typically serves as a baseline for policy evaluation algorithms and cannot be directly extended to advanced methods such as temporal difference. 
- Although the numerical results demonstrate the bias-variance tradeoff for non-linear systems, the current theoretical results on the 1-dimensional linear dynamical system cannot be directly extended to more complex systems, where exact characterization no longer exists. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Suggestions: 1. If simplicity is not our primary criterion, it would be interesting to explore alternative discretization plans beyond uniform discretization. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time reviewing our work and for providing valuable feedback. One clarification we would like to respectfully add is that tight bounds for the general case of a linear $n$-dimensional system are established in the paper. That is, the theoretical results are not restricted to the $1$-dimensional case. ### Monte-Carlo estimation While focusing on the case of Monte-Carlo estimation, the analysis serves as an essential starting point for studying the fundamental problem of step-size selection, which has been largely overlooked in the literature, with currently few works addressing this issue - as corroborated by two other reviewers. Historically, the Monte Carlo method has been instrumental in developing other reinforcement learning (RL) algorithms, such as Temporal Difference learning.\ We agree that this line of research can be further explored by considering different estimators (e.g. the system identification case - pointed out by another reviewer - or Temporal Difference estimation) and/or different sampling schemes (e.g. adaptive step-size). In particular, the Temporal Difference estimator has a sufficiently simple form that appears to admit tractable analysis. We are indeed exploring similar analyses for TD and other algorithms as part of our future work. ### Alternative discretization plans beyond uniform discretization It is indeed interesting to consider non-uniform discretization schemes, such as an adaptive sampling scheme similar to those adopted in solving SDEs [Ilie et al. 2015]. We view this paper as providing the necessary first steps towards a basis for evaluating alternative discretization schemes.\ From a theoretical perspective, a non-uniform step-size creates additional issues, since the number of steps in each trajectory might vary. A deeper investigation of adaptive step-size schemes represents an interesting future direction. 
### Theoretical results for more complex systems It is true that the detailed analysis given in the paper requires the exact solution of the stochastic differential equation, which is unavailable for most nonlinear systems. However, one can leverage the tight bounds established for an $n$-dimensional linear system and expand the dimensionality in order to approximate more complex nonlinear dynamics in many cases.\ This approximation would then exhibit the same trade-off. We believe that these findings already make a valuable contribution to the field. That said, we agree with the value of further exploration in more complex settings. We thank the reviewer for pointing out these other potential research directions. **Reference:** Ilie, S., Jackson, K. R., & Enright, W. H. (2015). *Adaptive time-stepping for the strong numerical solution of stochastic differential equations.* Numerical Algorithms, 68(4), 791-812.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper examines time discretization for continuous value estimation. By analyzing Monte-Carlo value estimation for LQR systems in both the finite-horizon and infinite discounted-horizon settings, the authors find that there is a fundamental trade-off between approximation error and statistical error in value estimation, which indicates there is an optimal choice of time discretization that depends on the data budget. The authors also demonstrate the trade-off in numerical simulations of LQR instances and non-linear MuJoCo environments. Strengths: - The one-dimensional example presented in the paper is helpful for understanding - The experiments in the paper clearly support the theoretical analysis and results. Weaknesses: - My only concern is the practicality of the setting or scenarios: people may argue that the $\delta t$ is not freely chosen and might be fixed or determined by the system itself. Can the authors illustrate some realistic scenarios in which we can leverage the proposed method? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My question is the weakness point I listed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time reviewing our work and for providing valuable feedback. We are motivated by real-world scenarios where one must choose the frequency at which sensors operate to get samples of the system signal. Indeed, in many real-world applications, sensors sample at a higher frequency than necessary, and data is downsampled later on. A realistic scenario is one where a practitioner chooses $h$ when evaluating the policy of an in-production recommender system at a large streaming company, such as YouTube or Netflix, where the recommendations are entire homepages on the streaming platform. In this setting, it is infeasible to log every single interaction the system has with each user, due to the massive amount of interactions. The solution adopted in practice is to store only snapshots – subsets – of the latter, i.e. only a fraction of the total interactions and corresponding recommendations are logged. Storing a snapshot is costly, both in memory and in computation, as it interferes with other components of the production pipeline. A high sampling frequency causes real performance issues in the production pipeline and requires a large memory, only to marginally improve the estimated value of the recommender system. This is not desirable if the cost of data outweighs the latter improvement. Our analysis suggests that careful consideration should be taken when setting the snapshot frequency in order to mitigate the adverse effects of having to deal with a cumbersome amount of data. Another example is when applying RL to water treatment systems [Chen et al. 2021], where one is faced with choosing a proper sampling step-size - or sampling frequency of the sensor. Sensors that gather data at a higher frequency tend to be more expensive. Our result suggests that opting for the faster sensor without careful consideration may not be the most economical or efficient strategy. Even in scenarios where $h$ is fixed (e.g. 
when a sensor is already in place in an older system), our result provides insights into whether data efficiency can be further improved in existing systems. The implications are two-fold. First, our findings can identify potential areas for improvement to guide future system upgrades. Second, if $h$ is smaller than the optimal value suggested by our method (i.e., the operating frequency of the sensor is too high), it is not necessary to store all samples. Nonetheless, for many sensors used in real-world applications, it is indeed possible to decrease the operating frequency. This insight is particularly useful for edge devices with limited hardware capabilities for storage and computation. In broad terms, our analysis seeks to understand the associated benefits and costs – with the constraint being a fixed data budget – when downsampling a system. **Reference:** Chen, K., Wang, H., Valverde-Perez, B., Zhai, S., Vezzaro, L., Wang, A. (2021). _Optimal control towards sustainable wastewater treatment plants based on multi-agent reinforcement learning._ Chemosphere, Volume 279, 130498. --- Rebuttal Comment 1.1: Title: Thank authors for the response Comment: I would like to thank the authors for answering my questions and clarifying my concerns. I will keep my score and vote for acceptance.
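The "it is not necessary to store all samples" recommendation above can be made concrete with a tiny sketch (illustrative only; `h_opt` stands in for the step-size suggested by the paper's analysis and is a placeholder here):

```python
import numpy as np

def downsample(signal, h_sensor, h_opt):
    """Keep every k-th sample so the effective step-size approximates
    an (assumed known) target step-size h_opt. Placeholder logic."""
    k = max(1, round(h_opt / h_sensor))
    return signal[::k], k * h_sensor

x = np.sin(np.linspace(0.0, 10.0, 1001))        # sensor signal at h = 0.01
kept, h_eff = downsample(x, h_sensor=0.01, h_opt=0.05)
# only one sample in five needs to be logged; the rest can be discarded
```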
False Discovery Proportion control for aggregated Knockoffs
Accept (poster)
Summary: This paper presents a method KOPI, based on the knockoff framework that controls the false discovery _proportion_ instead of the false discovery _rate_. The key idea of the approach is to note that certain summaries of the knockoff statistics are exactly equal in distribution to certain functions of independent Rademacher random variables under the case of all nulls. One can then estimate the FDP via Monte Carlo using draws of Rademachers, allowing for the setting of thresholds to obtain a desired FDP. This approach is then straightforwardly extended to the case of aggregating multiple draws of the knockoffs. The approach is shown to truly control the FDP while maintaining good power on some simulated datasets. Strengths: * The approach is clever and relatively straightforward. * The approach does not rely on any distributional assumptions beyond the ability to generate knockoffs with the usual exchangeability property. * The paper does a good job of introducing the knockoff framework and situating the present developments in that context. * The approach seems well powered and has attractive theoretical guarantees. Weaknesses: * One minor weakness of the paper is that there are a number of potential tweaks to the method that are not explored or presented. While the authors seem to have found choices that appear to work well, they appear somewhat arbitrary without additional context. For example, the choice of using harmonic mean for aggregation seems somewhat arbitrary without comparing to other choices, particularly the obvious choices like arithmetic or geometric mean. Another example is the choice of template. The theoretical results are quite general, but then only a particular template is considered. Typos: * Line 75: is a word missing here: "knockoffs-based inference arbitrary aggregation schemes"? 
* Line 76: "and increases sensitivity" --> "and increased sensitivity" * Line 92: "complementary" --> "complement" * Line 93: "cardinal" --> "cardinality" * Line 140: Proposition 2 is empty and presumably included by mistake. * Line 153: The equations following this line are assuming that $W_j$ is positive, and that should be stated. E.g., if $W_j = -2$ and $W_k = 1$, then $1 = W_k \le -W_j = 2$ but $W_k$ is obviously not smaller than $0$. * Line 154: This line is assuming that $W_1,\ldots, W_P$ are ordered, which they are not according to the notation. Either there should be a "without loss of generality, assume that $W$ is sorted such that $\sigma(W)$ is the identity" or all of the indices in the proof should really be $\sigma_j$ and $\sigma_k$ instead of $j$ and $k$. * Line 184: "is arbitrarily close" should be changed to something like "can be made arbitrarily close by taking $B$ large enough" * Line 191: "consists in finding" --> "consists of finding" * Line 280: "face validity" --> "validity" * Line 308: Saying that the method requires no assumption on the "law of the Knock statistics under the null" is ignoring that the Knockoffs must be exchangeable with the original data, no? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The proof of Theorem 1 essentially bounds $p_0$ by $p$ at one point. Would it be possible to do the usual trick from FDR-type procedures of using a conservative estimate of $p_0$ in order to obtain a more powerful procedure? This is not necessary for the present paper, but I'm curious how difficult such an approach would be. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not believe the present work has any potential negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
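The Rademacher idea summarized in this review can be sketched in a few lines. This is an illustrative reduction, not the KOPI procedure itself: under the global null, the signs of the knockoff statistics $W_j$ are i.i.d. Rademacher and independent of $|W_j|$, so Monte Carlo sign flips give the null distribution of the number of statistics exceeding a threshold. The statistics `W` and threshold `t` below are made-up values.

```python
import numpy as np

def mc_null_counts(W, t, n_draws=10_000, rng=None):
    """Monte Carlo draws of the number of null statistics above t,
    using knockoff sign-symmetry: under the null, the signs of W are
    i.i.d. Rademacher, independent of |W|. Sketch only."""
    rng = rng or np.random.default_rng(0)
    signs = rng.choice([-1, 1], size=(n_draws, len(W)))
    return np.sum(signs * np.abs(W) >= t, axis=1)   # one count per draw

W = np.array([3.1, -0.4, 2.2, 0.1, -1.5, 0.9])      # made-up statistics
draws = mc_null_counts(W, t=2.0)
null_count_95 = np.quantile(draws, 0.95)  # high-probability null exceedance count
```

Thresholds can then be set so that the implied FDP stays below the target level with the desired probability.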
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful comments. We also thank the reviewer for the typos found: they have been fixed in the manuscript. Please find our answers to the points raised below. > One minor weakness of the paper is that there are a number of potential tweaks to the method that are not explored or presented. While the authors seem to have found choices that appear to work well, they appear somewhat arbitrary without additional context. For example, the choice of using harmonic mean for aggregation seems somewhat arbitrary without comparing to other choices, particularly the obvious choices like arithmetic or geometric mean. Another example is the choice of template. The theoretical results are quite general, but then only a particular template is considered. While the theoretical guarantees we obtain hold for all choices of aggregation schemes and templates, these two hyperparameters indeed impact the power of KOPI. To address the reviewer's concern, we have performed an **additional experiment** on simulated data to compare four aggregation schemes: **arithmetic mean, geometric mean, harmonic mean and quantile aggregation [1].** We use the setup described in the main text - i.e. $\alpha = 0.1, q = 0.1$ - and run the simulation 50 times. Importantly, we first check that the FDP is controlled for all types of aggregation and in all settings considered by reporting the bound non-coverage as described in the main text. We use three settings of varying difficulty, parametrized by the correlation level $\rho$:

|              | Harmonic | Arithmetic | Geometric | Quantile aggregation |
|--------------|----------|------------|-----------|----------------------|
| $\rho = 0.5$ | $10$%    | $0$%       | $2$%      | $10$%                |
| $\rho = 0.6$ | $2$%     | $0$%       | $0$%      | $4$%                 |
| $\rho = 0.7$ | $2$%     | $0$%       | $0$%      | $0$%                 |

The FDP is indeed controlled in all cases since non-coverage never exceeds the chosen level $\alpha = 10$%. 
**This is consistent with the theoretical guarantees we obtain in Theorem 3.** We now report the **average power** to benchmark aggregation schemes:

|              | Harmonic | Arithmetic | Geometric | Quantile aggregation |
|--------------|----------|------------|-----------|----------------------|
| $\rho = 0.5$ | **0.91** | 0.77       | 0.87      | 0.90                 |
| $\rho = 0.6$ | **0.83** | 0.58       | 0.77      | **0.83**             |
| $\rho = 0.7$ | **0.72** | 0.39       | 0.61      | **0.72**             |

Note that harmonic mean aggregation **outperforms arithmetic and geometric mean consistently** and performs similarly to quantile aggregation. Regarding the choice of template, we also considered using a linear template - i.e. $\mathbf{t}_k = \frac{\lambda k}{m}$ with $\lambda$ chosen by calibration. We found that using a nonparametric template that mimics the shape of the $\pi$-statistics under the null **consistently outperforms the linear template.** This is consistent with the findings of [2]. Besides aggregation and template choice, we have not introduced additional hyperparameters and we make canonical use of knockoffs. >The proof of Theorem 1 essentially bounds $p$ by $p_0$ at one point. Would it be possible to do the usual trick from FDR-type procedures of using a conservative estimate of $p_0$ in order to obtain a more powerful procedure? This is not necessary for the present paper, but I'm curious how difficult such an approach would be. This is an interesting idea which seems related to the step-down procedure proposed in Algorithm 4.1 of [4]. Following this idea, we have to compute a conservative estimate of the null set $\mathcal{H}_0$ denoted $\widehat{\mathcal{H}_0}$ with $|\widehat{\mathcal{H}_0}| = \widehat{p_0}$ via vanilla KOPI and use it to sharpen the bound obtained in Theorem 1. 
We believe that the implicit bounding of $p$ by $p_0$ that the reviewer describes happens at this step of the proof: >In the case where $\mathcal{H}_0 \subsetneq [[p]]$, false null $\chi_j$ will insert $-1$'s into the process on the nulls, implying that $N_k$ is stochastically dominated by $N^0_k$ Concretely, sharpening this bound requires using: \begin{align} \widehat{N^0_k} = \left| \{ j \in \widehat{\mathcal{H}_0}, \chi^0_j=1 \textrm{ and } \frac{1 + Z^0_j}{p} < t_k \}\right| \end{align} with the notation defined above. However, we are not sure what theoretical guarantees can be obtained using $\widehat{N^0_k}$ instead of $N^0_k$. The problem that arises is that **$\widehat{N^0_k}$ depends on the data** and therefore the rest of the proof does not hold. This discussion about a step-down version has been added to the paper. **References** [1] Meinshausen, N., Meier, L., & Bühlmann, P. (2009). P-values for high-dimensional regression. Journal of the American Statistical Association, 104(488), 1671-1681. [2] Blain, A., Thirion, B., & Neuvial, P. (2022). Notip: Non-parametric True Discovery Proportion control for brain imaging. NeuroImage, 260, 119492. [4] Blanchard, G., Neuvial, P., & Roquain, E. (2020). Post hoc confidence bounds on false positives using reference families. Annals of Statistics. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response to my comments. Hopefully they were helpful -- I believe that the presented results strengthen the claims in the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their prompt response and again for their helpful comments. We remain at the reviewer's disposal in case any additional clarification is needed.
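For concreteness, the four aggregation schemes benchmarked in the rebuttal above can be sketched in a few lines. This is a hedged illustration rather than KOPI's actual implementation: the function name is ours, and KOPI aggregates intermediate knockoff-derived p-values, which may differ in detail.

```python
import numpy as np

def aggregate_pvalues(p, scheme="harmonic", gamma=0.3):
    """Aggregate p-values across draws (axis 0), one value per variable.

    Schemes mirror those benchmarked in the rebuttal; `gamma` is an
    illustrative choice for quantile aggregation (Meinshausen et al., 2009).
    """
    p = np.asarray(p, dtype=float)
    if scheme == "harmonic":
        agg = p.shape[0] / np.sum(1.0 / p, axis=0)
    elif scheme == "arithmetic":
        agg = p.mean(axis=0)
    elif scheme == "geometric":
        agg = np.exp(np.log(p).mean(axis=0))
    elif scheme == "quantile":
        # empirical gamma-quantile, inflated by 1/gamma
        agg = np.quantile(p, gamma, axis=0) / gamma
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return np.clip(agg, 0.0, 1.0)

# Example: 5 draws (e.g. knockoff runs) over 3 variables
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 1.0, size=(5, 3))
for s in ("harmonic", "arithmetic", "geometric", "quantile"):
    print(s, aggregate_pvalues(p, s))
```

The harmonic mean is dominated by its smallest inputs, which is one intuition for why it tracks quantile aggregation in the power table above.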
Summary: I can understand this paper but I don't have sufficient knowledge to make a solid judgment on it, please ignore my review. Strengths: I can understand this paper but I don't have sufficient knowledge to make a solid judgment on it, please ignore my review. Weaknesses: I can understand this paper but I don't have sufficient knowledge to make a solid judgment on it, please ignore my review. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I can understand this paper but I don't have sufficient knowledge to make a solid judgment on it, please ignore my review. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I can understand this paper but I don't have sufficient knowledge to make a solid judgment on it, please ignore my review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Summary: This paper studies an important topic: variable selection. Concretely, the authors proposed a novel method, KOPI, which can theoretically control the false discovery proportion at a pre-specified level. Besides, the authors also conduct lots of experiments based on simulated data and real data to verify the effectiveness of their proposed procedure. Strengths: 1, This paper is well-written, its notations and definitions are clear; 2, The authors provide some necessary theoretical guarantees for their proposed algorithm. Weaknesses: 1, Simulation experiments and real-data experiments are conducted. However, in order to verify the effectiveness of KOPI, more experiments on other real-data datasets are needed; 2, It will be better if the authors can provide an analysis of the false nondiscovery proportion (FNP), a dual quantity of the FDP, or its expectation FNR; see [1]. [1] Genovese C, Wasserman L. Operating characteristics and extensions of the false discovery rate procedure. Journal of the Royal Statistical Society Series B: Statistical Methodology, 2002, 64(3): 499-517. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below. > Simulation experiments and real-data experiments are conducted. However, in order to verify the effectiveness of KOPI, more experiments on other real-data datasets are needed; To address the reviewer's remark regarding real-world datasets, we performed an additional experiment on genomics data. The dataset we used is part of **GEMLeR (Gene Expression Machine Learning Repository)** [2], a collection of gene expression datasets that can be used to benchmark machine learning methods on genomics data. We chose the "Colon vs Kidney" dataset: this is a binary classification dataset where the goal is to distinguish cancerous tissue from two different organs (Colon and Kidney) using gene expression data. This dataset comprises $546$ samples and $10936$ genes. To make the problem tractable for Knockoffs-based methods, we perform dimensionality reduction to select the $546$ genes that have the largest variance. Then, **we run all Knockoffs-based methods 50 times** and report the selected genes.

| | KOPI | Vanilla KO | e-values | Closed Testing | AKO |
| -------------------------- | ------ | ---------- | -------- | -------------- | --- |
| Selected in > 90% of runs | **21** | 0 | 0 | 0 | 0 |
| Selected in > 50% of runs | 22 | 25 | 0 | 0 | 0 |
| Spurious detections (<50%) | **7** | 34 | 20 | 0 | 0 |

We display **stability selection criteria** for 5 Knockoffs-based methods. Note that KOPI displays a **very stable selection set across all runs** with $21$ genes present in $>90$% of runs. **KOPI also avoids most spurious discoveries**, as only $7$ genes are selected less than $50$% of the time, compared to $34$ genes using Vanilla Knockoffs and $20$ using $e$-values aggregation. All other Knockoffs-based methods are powerless in all runs. 
This experiment further shows that KOPI outperforms all other Knockoffs-based methods **on real-world data** in terms of selection stability. > It will be better if the authors can provide an analysis of the false nondiscovery proportion (FNP), a dual quantity of the FDP, or its expectation FNR; see [1]. Analyzing the FNP of KOPI is a very interesting perspective. However, it is challenging for several reasons. A first technical difficulty is that the analysis of the FNP made in [1] relies **on the assumption that the test statistics are independent, uniformly distributed under the null hypothesis, and identically distributed under the alternative hypothesis.** In this line of work, a first asymptotic analysis of the FNP/Power of JER-controlling procedures is made in Section 6.2 of [4]. This requires making relatively strong assumptions about the alternative distribution. An important contribution of the present paper is that **we are not assuming independence between the test statistics or assuming their identical distribution under the alternative.** In contrast, we show that the joint null distribution of the $\pi$ statistics can be sampled from, and our approach is agnostic with respect to their distribution under the alternative hypothesis. Furthermore, the FNP may be difficult to interpret in the context of Knockoffs for various reasons. Indeed, **the reliability of Knockoff statistics may be undermined when input variables are highly correlated [3].** In short, relevant variables that are highly correlated with irrelevant variables may be assigned a negative statistic. This leads to False Negatives for all Knockoffs-based methods. Also, **the FNP is affected by the sparsity of the signal:** Knockoffs-based methods are unable to make rejections if there are too few large positive Knockoff statistics. We have added a paragraph about FNP analysis to the Discussion section of the manuscript. **References** [1] Genovese C, Wasserman L. 
Operating characteristics and extensions of the false discovery rate procedure[J]. Journal of the Royal Statistical Society Series B: Statistical Methodology, 2002, 64(3): 499-517. [2] Stiglic, G., & Kokol, P. (2010). Stability of ranked gene lists in large microarray analysis studies. Journal of biomedicine and biotechnology, 2010. [3] Spector, A., & Fithian, W. (2022). Asymptotically Optimal Knockoff Statistics via the Masked Likelihood Ratio. arXiv preprint arXiv:2212.08766. [4] Blanchard, G., Neuvial, P., & Roquain, E. (2020). Post hoc confidence bounds on false positives using reference families. Annals of Statistics.
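The stability-selection criteria reported in the genomics table above (variables selected in >90% / >50% of runs, and spurious detections) amount to simple frequency counting across repeated runs. A minimal sketch, with hypothetical names of our own:

```python
import numpy as np

def selection_stability(runs, n_vars, hi=0.9, lo=0.5):
    """Tally selection frequencies across repeated runs.

    `runs` is a list of selected-index sets (one per knockoff run);
    `hi`/`lo` mirror the >90% / >50% criteria of the table above.
    Returns (#stable, #majority, #spurious).
    """
    freq = np.zeros(n_vars)
    for sel in runs:
        freq[list(sel)] += 1
    freq /= len(runs)
    stable = int(np.sum(freq > hi))                    # selected in > hi of runs
    majority = int(np.sum(freq > lo))                  # selected in > lo of runs
    spurious = int(np.sum((freq > 0) & (freq <= lo)))  # occasional detections
    return stable, majority, spurious

# Toy example: 4 runs over 5 variables
runs = [{0, 1}, {0, 1, 4}, {0, 1}, {0, 2}]
print(selection_stability(runs, n_vars=5))  # (1, 2, 2)
```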
Summary: In this paper, the authors discuss controls for false discoveries in variable selection. While how to perform FDR control is known, the authors opt to control the FDP, which is a random quantity depending on a specific dataset. The main concept is an upper bound on the FDP (Proposition 1), namely the JER. The Monte Carlo version of the JER is applied to perform FDP control. The authors also propose an aggregated JER to reduce the stochasticity of the JER for variable selection. Experiments show good FDP control while maintaining good power. Strengths: 1. The paper is very well written. 2. FDP (not just FDR) control is an important problem. This paper targets an important issue in our community. 3. The proposed method has clear theoretical support (Theorems 2 and 3), although I have some questions. Weaknesses: It is not entirely clear to me what the delta is between this paper and an important related work, i.e., [4]. From Section 4, it seems that using JER to control FDP has already been proposed. Thus, I wonder how significant the contribution of this work is. For example, has Proposition 1 or a similar version already been proposed in [4]? I would appreciate a short paragraph comparing this work and [4]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am not sure why the JER, not the empirical JER, is studied in Theorem 2. In practice, shouldn't one use the empirical JER with Monte Carlo samples? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below. > It is not entirely clear to me the delta between this paper and an important related work, i.e, [4]. From Section 4, it seems that using JER to control FDP has already been proposed. Thus, I wonder how significant the contribution of this work is. >For example, has Proposition 1 or a similar version already been proposed in [4]? I would appreciate a short paragraph comparing this work and [4]. While the JER framework has indeed been proposed in [4], this work's main contributions differ substantially from [4]: * [4] focuses on marginal testing (e.g. massively multiple t-tests) and not conditional testing. KOPI is the first procedure to combine the JER framework and **conditional testing via the Knockoffs framework and $\pi$-statistics.** * In general, the JER cannot be computed analytically since it requires knowledge of the distribution of the test statistics under the null. In practice [4] relies on permutation schemes to estimate this distribution. This is computationally infeasible in the context of Knockoffs: such a procedure would require computing new Knockoffs for each permutation run. Theorem 1 of this work is, to our knowledge, **the first tractable upper bound on the JER of the $\pi$-statistics or any other Knockoffs-based statistic**. * **KOPI supports aggregation**. Aggregation of test statistics is not considered in [4], which focuses on JER control on $p$-values. In this work, we show that **FDP control stemming from JER control can be made robust using aggregation.** Regarding Proposition 1: indeed this proposition corresponds to Proposition 2.3 in [4] - we simply restate it for completeness and self-containedness and include the proof in the supplementary material. To clarify this, **we have added a sentence in the manuscript** that points to the proposition in [4]. 
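To make the JER-to-FDP connection concrete: once a template $(t_1, \dots, t_K)$ controls the JER, Proposition 1 yields a post hoc bound on the number of false positives in any candidate set, in the spirit of [4]. Below is a hedged sketch with our own indexing and names, not the paper's code:

```python
import numpy as np

def false_positive_bound(S, pvals, template):
    """Post hoc bound on false positives in a candidate set S.

    If the template (t_1, ..., t_K) controls the JER at level alpha,
    then with probability >= 1 - alpha, simultaneously over all S:
        V(S) <= min_k ( #{i in S : p_i >= t_k} + k - 1 ),
    capped at |S|. Indexing and names are illustrative.
    """
    idx = np.asarray(list(S))
    bounds = [int(np.sum(pvals[idx] >= t_k)) + k
              for k, t_k in enumerate(template)]  # k = 0 plays the role of k - 1
    return min(min(bounds), len(idx))

pvals = np.array([0.001, 0.2, 0.5])
# Bound on the number of false positives among {0, 1, 2}
print(false_positive_bound({0, 1, 2}, pvals, [0.01, 0.05]))  # 2
```

Dividing the bound by $|S|$ gives the FDP bound that holds simultaneously over all candidate sets, which is what enables post hoc selection.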
> I am not sure why the JER, not the empirical JER, is studied in Theorem 2. In practice, shouldn't one use the empirical JER with Monte Carlo samples? To reach exact FDP control, the **JER itself** - and not the empirical JER - has to be controlled. While indeed $\mathbf{t}^B_\alpha$ is built using Monte Carlo samples in practice and controls the empirical JER by definition, we need to ensure that **the JER of $\mathbf{t}^B_\alpha$ is controlled to obtain FDP control**. Theorem 2 shows that this is indeed the case up to Monte Carlo error, which can be made arbitrarily small. **References** [4] Blanchard, G., Neuvial, P., & Roquain, E. (2020). Post hoc confidence bounds on false positives using reference families. Annals of Statistics. --- Rebuttal Comment 1.1: Title: Thanks for replying! Comment: > Theorem 2 shows that this is indeed the case up to Monte Carlo error, which can be made arbitrarily small. Yep, this clarifies my confusion. > this work's main contributions differ substantially from [4]: ... This also clarifies my confusion. Thus I suggest the authors include a version of these clarifications in the revision. I continue to lean toward accepting this paper as I believe FDP bounding is an important issue and the tractable upper bound on the JER is interesting. Based on the novelty claims the authors made, I will raise my score by 1. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their prompt response and careful consideration of our answers. We appreciate their decision to raise their score and remain at the reviewer's disposal in case any additional clarification is needed.
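For readers wanting to see the Monte Carlo calibration of $\mathbf{t}^B_\alpha$ in miniature: given $B$ null draws, one scales a template family until the empirical JER reaches the target level $\alpha$. The sketch below uses a linear template and is purely illustrative; the paper's calibration uses a different (nonparametric) template family.

```python
import numpy as np

def calibrate_template(null_p, alpha=0.1, lambdas=None):
    """Largest lambda whose empirical JER stays <= alpha.

    `null_p`: (B, p) Monte Carlo draws of null p-values; the template
    family t_k = lambda * k / p is a linear, purely illustrative choice.
    A draw violates the bound when any sorted p-value falls below t_k.
    """
    B, p = null_p.shape
    if lambdas is None:
        lambdas = np.linspace(0.001, 1.0, 200)
    sorted_p = np.sort(null_p, axis=1)
    ks = np.arange(1, p + 1)
    best = None
    for lam in lambdas:  # empirical JER is monotone in lambda
        jer_hat = np.mean(np.any(sorted_p < lam * ks / p, axis=1))
        if jer_hat <= alpha:
            best = lam
    return best

rng = np.random.default_rng(1)
null_p = rng.uniform(size=(500, 50))
print(calibrate_template(null_p, alpha=0.1))
```

The calibrated template controls the empirical JER by construction; Theorem 2's content is that its true JER is also controlled up to Monte Carlo error.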
Rebuttal 1: Rebuttal: Rebuttal Summary ---- We thank all three reviewers for their time and comments. Here is a summary of the elements we addressed in our answers: * We have added **two experiments** to address the reviewers' concerns on **real-world datasets** and the choice of the aggregation scheme. We use a gene expression dataset and show that **KOPI improves selection set stability** compared to other Knockoffs-based methods while avoiding spurious discoveries. In the second experiment, we benchmark harmonic mean aggregation against other alternatives on simulated data. * We have clarified the distinction between this work's main contribution and the paper which introduces FDP control via JER control [4] for marginal testing. In short, the present work is **the first procedure to combine the JER framework and conditional testing via the Knockoffs framework and $\pi$-statistics.** The main contribution of the paper is obtaining a **tractable upper bound on the JER of the Knockoffs $\pi$-statistics.** This leads to FDP control in the Knockoffs framework, which hadn't been achieved previously. Furthermore, the procedure we propose supports **aggregation of test statistics to robustify inference,** which is not considered in [4]. * We have investigated interesting theoretical points raised by the reviewers, notably False Negative Proportion analysis and a possible step-down version of the proposed procedure. We remain at the reviewers' disposal, should they have any additional questions or remarks. **References** [4] Blanchard, G., Neuvial, P., & Roquain, E. (2020). Post hoc confidence bounds on false positives using reference families. Annals of Statistics.
NeurIPS_2023_submissions_huggingface
2023
Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks
Accept (poster)
Summary: The paper provides a thorough investigation of the generalization behavior of t-NNs for the first time, which closes the gap between the practical success of t-NNs and their theoretical analysis. To be specific, the authors derived the upper bounds for the generalization gaps. The authors also propose that compressing t-NNs with a transformed low-rank structure will result in more efficient adversarial training and tighter bounds. In addition to that, the authors show that adversarial training with GF in highly over-parameterized settings results in t-NNs with approximately transformed low-rank weights, and derive the sharp adversarial generalization bounds in this scenario as well. Strengths: - The paper provides a thorough investigation of the generalization behavior of t-NNs for the first time. - The authors provide the bound for t-NNs' generalization gaps with firm proof. - The authors further explore the generalization bound of t-NNs with a transformed low-rank structure and show that there exists a training framework that will result in approximately transformed low-rank weights. - The work is well motivated, with firm theoretical evaluation. Weaknesses: - Only theoretical evaluation is provided. Some empirical evaluations regarding the approximately transformed low-rank weights are expected. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Instead of using adversarial training with GF in highly over-parameterized settings to train an approximately transformed low-rank t-NN, is it possible to apply some extra regularizations in training to achieve a better low-rank transformation, for example, LoRA? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for your favorable evaluation of our work. Your recognition of our strengths, including the comprehensive investigation into t-NNs' generalization behavior, rigorous derivation of generalization bounds, and exploration of transformed low-rank structures, reinforces our commitment to unleash the potential of t-NNs for machine learning. We would also like to take this opportunity to address the weakness you pointed out and provide answers to the question you raised. **Weakness:** *Only theoretical evaluation is provided. Some empirical evaluations regarding the approximately transformed low-rank weights are expected.* **Response:** Thank you very much for your thoughtful suggestion. While our paper primarily emphasizes theoretical findings, we recognize and appreciate the importance of incorporating numerical evidence to reinforce our conclusions. To that end, we have actively undertaken numerical evaluations, particularly utilizing the effective rank of the M-block-matrix of a weight tensor as a metric for approximate transformed low-rankness in Experiment II. Please review the preliminary numerical assessments presented in the rebuttal section titled "Contributions and Numerical Evaluations," along with the accompanying PDF file. **Question:** *Instead of using adversarial training with GF in highly over-parameterized settings to train an approximately transformed low-rank t-NN, is it possible to apply some extra regularizations in training to achieve a better low-rank transformation, for example, LoRA?* **Response:** Yes, it is possible to apply additional regularizations during training to achieve a better low-rank representation in t-NNs. Instead of relying solely on adversarial training with gradient flow in highly over-parameterized settings, these extra regularizations can potentially promote and enforce low-rankness in the network. 
To validate the concern regarding the addition of an extra regularization term, we performed a preliminary experiment. In this experiment, we incorporated the tensor nuclear norm (Ref. [45] in our paper) as an explicit regularizer to induce low-rankness in the transformed domain. Specifically, we add the tensor nuclear norm regularization to the t-NN with three t-product layers (D=128) in Experiment II with a regularization parameter of 0.01, and keep the other settings the same as Experiment II. We explore how the effective ranks of tensor weights evolve with the epoch number with/without tubal nuclear norm regularization. The initial experimental results are depicted in Figure 8 of the attached PDF in "Contributions and Numerical Evaluations". According to the results in Figure 8, it becomes evident that the introduction of the explicit low-rank regularizer significantly enforced low-rankness in the transform domain of the weight tensors, thereby confirming the suggestion. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. Please consider adding the experiment part to the final version of the paper. I will keep my score unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for recognizing our responses. We will incorporate the experimental results in the final version as per your suggestion.
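For reference, the effective rank used as a low-rankness metric in Experiment II can be computed from the singular-value spectrum. The sketch below uses one common definition (the exponential of the spectral entropy); the exact metric applied to the M-block-matrix in the rebuttal PDF may differ in detail.

```python
import numpy as np

def effective_rank(W, eps=1e-12):
    """Effective rank of a matrix: exp of the Shannon entropy of its
    normalized singular-value distribution (one common definition).
    """
    s = np.linalg.svd(W, compute_uv=False)
    q = s / (s.sum() + eps)
    q = q[q > eps]  # drop numerically-zero mass
    return float(np.exp(-(q * np.log(q)).sum()))

# Rank-1 matrix -> effective rank ~ 1; identity -> ~ n
print(effective_rank(np.outer([1.0, 2.0], [3.0, 4.0])))  # ~1.0
print(effective_rank(np.eye(4)))                         # ~4.0
```

Unlike the exact rank, this quantity varies smoothly with the spectrum, which makes it suitable for tracking *approximate* low-rankness over training epochs.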
Summary: The paper "Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks" considers neural networks with t-product layers, which encode the weights in tensor format equipped with the t-product as the corresponding tensor operation and ReLU activations. The paper is an analytical work that derives upper bounds for the generalization error in t-NNs, using norm bound estimates. Further, the influence of low-rankness of the tensor structures on robust generalization is inspected in the setting of exact low-rankness and approximately transformed low-rankness. Results show that low-rank t-NNs improve robust generalization properties. In particular, the adversarial generalization bound for low-rank t-NNs is shown to scale with the tubal rank of the tensors, instead of their dimension as in the full-rank setting. Additional findings show that in an overparametrized setting t-NNs yield approximately low-rank weights during adversarial learning. Strengths: - Although this is a rather technical paper, the authors manage to present the findings comprehensively by rigorously introducing the relevant concepts with a precise notation. - An extensive Appendix is provided to give an introduction to t-SVDs, an overview of previous work, and technical details. - The theoretical results of the work are discussed and compared with previous works. Weaknesses: - No code and no numerical examples are provided to support the presented results. Although this is a rather theoretical work, a small test case can improve the presentation of this paper tremendously. I think this is a strong paper that gives a multitude of insights. A brief explanation of the intuition behind t-NN layers would also be helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the intuition of the transformed t-product, i.e. what is the reason that the M matrix is needed? What is the special case of M being the identity? 
- Theorem 6: Can this statement be related to "standard" matrix-valued neural networks with low-rank factorizations? - Theorem 10: Are you able to validate the statement in a numerical example? It would be interesting to see how the rank of a layer changes as J gets larger. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations of their work, which manifest in rather loose bounds on the generalization error. They provide a suggestion on how to mitigate this issue in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to express our appreciation for your positive and thorough evaluation of our technical paper. Your acknowledgment of our dedication to presenting the research findings with rigor, the incorporation of an extensive appendix, and the thoughtful comparison with existing works has provided us with profound encouragement. We are devoted to unlocking the potential of t-NNs for machine learning and are fully dedicated to rectifying the weaknesses and addressing the questions you have pointed out. **Weakness 1** (No code or numerical examples). **Response:** Please refer to the initial numerical assessments in the rebuttal section titled "Contributions and Numerical Evaluations." We'll share the code after a thorough numerical evaluation. **Weakness 2** (A small test case to improve the presentation). **Response:** We acknowledge the importance of incorporating a practical example and are exploring the potential to include a relevant illustration that complements our theoretical results. **Weakness 3** (Intuition behind t-NN layers). **Response:** The intuition behind t-NN layers lies in their ability to handle multi-channel data more efficiently compared to traditional deep learning layers. Unlike standard fully connected layers that collapse multi-dimensional data into vectors or matrices, t-NNs maintain the multi-channel structure of the data as tensors. This unique property of t-NNs allows them to capture complex correlations and dependencies in the data more effectively. By preserving the inherent structure of tensors, t-NN layers can better grasp the global context and interrelationships within the data, leading to improved performance in various applications ([25,30,31,40]). Additionally, t-NNs leverage specialized mathematical operations like the t-product and t-SVD in the transformed domain [14], which further enhances their ability to process complex data efficiently. 
The transformed domain allows t-NNs to focus on essential features and patterns in the data while disregarding noise and irrelevant local details, resulting in improved stability and robustness, especially in the face of adversarial attacks [40]. **Question 1** **Q1.1** (Intuition of the transformed t-product, i.e, the need of matrix M) **Response:** The introduction of the transformed domain in the t-product is a pivotal aspect of t-NNs, offering distinct advantages in handling complex data. By using the transform matrix M, t-NNs can map tensors to various domains, like Fourier or DCT. These domains offer unique data representations, aiding t-NNs in leveraging frequency domain low-rankness and global feature attention. Operating in the transformed domain enables t-NNs to capture global features and structures in high-dimensional data, enhancing their performance in various applications. This domain also helps filter noise and irrelevant details, making t-NNs more robust against adversarial attacks. Global feature attention mechanisms further improve t-NNs' pattern identification. Moreover, the transformed domain reduces parameters, enhancing computational efficiency for high-dimensional data. In conclusion, t-NNs benefit from transformed domains to model complex data effectively, enhance robustness, and improve efficiency, making them promising in machine learning. **Q1.2** (M being the identity). **Response:** Indeed, when the transform matrix M is an identity matrix, the transformed domain becomes equivalent to the original domain, and the t-product reduces to the standard parallel matrix multiplication of the frontal-slices of factor tensors. In this scenario, we lose the opportunity to leverage the advantages offered by the transformed domain representations, which is a situation to be avoided in t-NNs. 
Thus, we usually avoid an identity transform matrix M to ensure that t-NNs can exploit the benefits of the transformed domain representations for enhanced performance, robustness, and efficiency in modeling complex data. **Question 2** (Theorem 6 related to "standard" matrix neural networks with low-rank factorizations). **Response:** Yes! When the channel number c=1, the derived bound exactly recovers the corresponding bounds for classical fully connected neural networks. This result demonstrates the compatibility of our approach with traditional neural networks and highlights the versatility of our low-rank parameterization in various settings. Furthermore, our findings reveal that the adversarial generalization bound under low-rank parameterization exhibits a superior scaling behavior compared to standard (full-rank) neural networks. The improved scaling implies that networks with low-rank parameterization may require a smaller number of training examples to achieve the desired accuracy, leading to potential benefits such as reduced energy consumption, storage requirements, and computational cost. **Question 3** (Numerical evaluation for Theorem 10). **Response:** Yes, we have actively begun the numerical evaluations for Theorem 10. We totally agree that it is interesting to see how the rank of a layer changes as J gets larger, and are more than willing to conduct such numerical evaluations. However, we would like to acknowledge that our empirical findings indicate a substantial time investment. Even for a t-NN encompassing merely three t-product layers, an extensive commitment ranging from about 500 to 2,000 hours on our available computational resources is required to comprehensively study the implicit bias phenomena. As such, we have embarked on initial numerical experiments specifically concerning three t-product layers to explore the implicit bias towards transformed low-rankness. 
You can find preliminary numerical results in "Experiment II" in "Contributions and Numerical Evaluations." --- Rebuttal 2: Comment: We sincerely thank you for recognizing and appreciating our work. Your valuable feedback and suggestions have been noted, and we have tried to address all the concerns you've pointed out. As we update our final version, we're guided by your recommendations. We genuinely hope our explanations align with your expectations. If any details need further clarity, please let us know. --- Rebuttal Comment 2.1: Comment: I thank the authors for providing extensive feedback and addressing open questions. Thank you also for including the numerical experiments in the final paper. I will keep my score unchanged to 7 (accept). --- Reply to Comment 2.1.1: Comment: We genuinely appreciate your recognition of our efforts to address the questions and incorporate the numerical experiments. Your feedback is invaluable to us, and we're grateful for it.
Summary: The paper analyzes the generalization ability of t-product layers (t-NNs) by deriving upper bounds on generalization error in standard and adversarial settings. Strengths: The paper advances the theoretical understanding of t-NNs and derives their generalization behavior in two practical settings. Weaknesses: The paper focuses on and is written about t-product layers, referenced as [7, 28] in the first lines of the introduction. The first reference is a patent, and the second is a paper from 2018 (from the same authors as the patent), introducing the concept more or less academically. The submission assumes the reader's familiarity with these concepts right from the start, which is a mistake, and does not attempt to clarify or recap the importance of these concepts up until Sec. 3. From this standpoint, the writing should be improved. The importance of the usage of t-product layers in the Deep Learning community is not supported with sufficient evidence, leaving the question of the practical utility of the studied framework unanswered. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Is Definition-1 (t-product) equivalent to the batch matrix-multiplication operation, where `c` corresponds to the batch dimension? Is Definition-3 (t-SVD) equivalent to the SVD of a block-diagonal representation from Definition 2? Why do these concepts need special terms? What is the advantage of such complex structures over the traditional deep learning layers? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: Discussed in the Conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1** (Writing). **Response:** Thanks for your comment on the writing. Firstly, we would like to address the need for references to [7] and [28]. - We appreciate your highlighting the reference to patent [7] for the scientific rigor and integrity of our work. However, patent [7] (granted in Dec. 2022) has garnered a certain level of attention, evident from citations in two IEEE T-GARS papers (Google Scholar) and four patents (Google Patents). Additionally, we draw attention to instances where scientific papers [R2.1] have cited important patents. Therefore, we respectfully assert that citing [7] does not have any negative consequences. - The arXiv paper [28] introduced t-NNs and presented noteworthy advancements on MNIST and CIFAR-10 datasets, outperforming standard matrix networks. It furnishes compelling evidence of the potential of t-NNs in real-world benchmark tasks. As a result, t-NNs have found extensive applications in machine learning over recent years, as evidenced by [25, 30, 31, 40]. Thus, [28] should also be cited. Secondly, we apologize for any confusion and wish to clarify that we "expect" rather than "strictly assume" readers' familiarity with t-product layers in the introduction. As aptly noted by Reviewer 3 in the Strengths, "Although this is a rather technical paper, the authors manage to present the findings comprehensively by rigorously introducing the relevant concepts with precise notation." We choose to briefly introduce the significance and recent advancements in t-NNs rather than the basic concepts within the introduction, due to the complexity of technical terms associated with t-NNs. To provide a smoother understanding for readers, we then revisit and recap the foundational concepts in subsequent sections. However, we totally understand that this may delay readers' eager comprehension of the concepts of t-NNs. 
To mitigate this concern, we will revise the introduction to provide a clearer and more comprehensive explanation of t-product layers, including the basic ideas on a conceptual level. Additionally, we will clarify and recap necessary notions of t-NNs in Section 2 rather than Section 3. [R2.1] Florsheim E B, et al. Immune sensing of food allergens promotes avoidance behaviour. Nature, 2023. **Weakness 2** (Practical utility of the studied framework). **Response:** We respectfully disagree with this comment. The importance of incorporating t-product layers in the deep learning community is supported by substantial evidence, showcasing their promising advantages across a range of domains. These advantages are clearly demonstrated in applications such as graph data augmentation, dynamic graph learning, code semantics embedding, seismic data processing, and more [25, 30, 31, 40, R2.2]. We reemphasize that t-NNs show great potential in advancing machine learning. They excel in capturing global features and complex patterns through transformed domain representations. The integration of low-rank structures, along with global feature attention, augments resilience and efficiency. Moreover, the utilization of low-rank decomposition streamlines the optimization of parameters, thus reducing storage requirements and enhancing computation speed. [R2.2] Yang J, et al. Toward interpretable graph tensor convolution neural network for code semantics embedding. ACM Transactions on Software Engineering and Methodology, 2023. **Question 1** (T-product vs batch matrix-multiplication operation). **Response:** No, the t-product is not a direct equivalent of batch matrix multiplication in the original domain according to Def. 1. Instead, it corresponds to batch matrix multiplication in the transformed domain. 
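For illustration, here is a minimal numerical sketch of this transformed t-product. This is not the implementation from the paper; the transform M is taken as an arbitrary invertible matrix applied along the third (channel) mode, and tensor shapes are chosen purely for the example.

```python
import numpy as np

def t_product(A, B, M):
    """Transformed t-product of A (n1 x n2 x c) and B (n2 x n3 x c).

    The invertible c x c matrix M transforms the third (channel) mode;
    the product is slice-wise matrix multiplication in the transformed
    domain, followed by the inverse transform.
    """
    A_hat = np.einsum('ijk,lk->ijl', A, M)            # to transformed domain
    B_hat = np.einsum('ijk,lk->ijl', B, M)
    C_hat = np.einsum('ijk,jlk->ilk', A_hat, B_hat)   # slice-wise matmul
    return np.einsum('ijk,lk->ijl', C_hat, np.linalg.inv(M))
```

With M set to the identity, this reduces to plain slice-wise ("batch") multiplication of the frontal slices in the original domain, matching the M-identity special case discussed above; for a general M, the computation happens entirely in the transformed domain.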
Additionally, this unique property of the t-product allows for efficient and effective data and parameter processing by leveraging the advantages of both batch matrix multiplication and the transformed domain. By performing computations in the transformed domain, the t-product enables more complex and intricate operations to be carried out, leading to enhanced performance and versatility in various applications. **Question 2** (T-SVD vs SVD of a block-diagonal representation from Def. 2). **Response:** Yes. Our Eq. (3) reveals that t-SVD is equivalent to the SVD of a block-diagonal representation per Def. 2. **Question 3** (Special terms of concepts). **Response:** The t-product and t-SVD are distinct terms that differentiate them from traditional matrix operations and factorizations. They are vital in t-NNs due to the multi-channel data structure, requiring new tensor-specific mathematical operations. These concepts efficiently manipulate tensors while maintaining their structure, enhancing global correlation capture. The t-product and t-SVD enable specialized tensor operations, addressing tensor-related challenges and unlocking t-NNs' potential for better performance in processing multi-dimensional data. **Question 4** (The advantage of such complex structures over the traditional deep learning layers). **Response:** T-NNs excel over traditional deep learning layers in handling complex multi-channel data. While conventional FC layers are limited to vectors and matrices, t-NNs utilize specialized tensor-centric operations (like t-product and t-SVD) to efficiently capture intricate correlations and preserve multi-channel structures. This approach allows t-NNs to better understand global patterns within the data, leading to enhanced performance across various applications. 
Additionally, t-NNs possess other advantageous features, including global feature attention mechanisms, robustness against adversarial attacks, and parameter compression, further enhancing their capabilities. Leveraging these complex structures enables t-NNs to process high-dimensional multi-channel data more effectively, making them a promising choice for addressing the challenges posed by modern data types and advancing model performance and generalization. --- Rebuttal 2: Comment: Thank you for taking the time to review our manuscript and for sharing your feedback. We've carefully addressed the points you brought up and will make necessary adjustments to our paper. We hope our responses meet your expectations. Should you have further questions or need clarification on any matter, please don't hesitate to inform us. --- Rebuttal Comment 2.1: Title: Acknowledgement Comment: I thank the authors for elaborating on the points raised in the initial feedback. I will keep my score.
Summary: The paper studies the generalization error in the standard and the adversarial settings of t-NNs, neural networks with layers and features parametrized via t-vectors and t-products. Strengths: - The provided generalization bounds for t-NNs show that neural networks with low-rank parameters have the potential for a better generalization. - This type of generalization analysis for tNNs seems new Weaknesses: 1. Although I appreciate this type of analysis for t-NN did not appear before, the t-product is essentially a composition of operations obtained through a suitable block reshaping of the parameter kernel, thus extending the results from standard fully connected linear layers seems like a natural and relatively direct result. In light of this, the novelty of this paper seems limited 2. I would have liked to see a few numerical experiments validating the theoretical findings and the tightness of upper bounds -- also to address the claimed limitations at the end of the paper Technical Quality: 3 good Clarity: 3 good Questions for Authors: - the authors should clarify in the main text how the obtained results differentiate with respect the corresponding bounds for linear fully connected nets (for example the bound in Thm 6 and Thm 10) and should highlight what the main differences in the proof/analysis of these bounds are with respect to the linear case - I think the rank-adaptive approach for selecting the ranks of the network mentioned on line 186 deserves more attention - in particular, i do not really understand in what way rank-adaptivity can be exploited in the analysis provided by Thm 6. See also [x],[y] for alternative recent rank-adaptive strategies for linear layers. 
- The fact that adversarial training leads to low-rank weights and vice-versa reducing parameters has a positive/negative effect of robustness (as mentioned at line 250) is subject to ongoing active research and there is a variety of different observations/results on this subject (see eg [a-d]). I would rephrase here in the light of these papers. In any case, these empirical studies are done for networks in the standard form not for t-NNs. Using that evidence to comment on findings for t-NN seems to support my weakness no. 1 [x] H. Yang, M. Tang, W. Wen, F. Yan, D. Hu, A. Li, H. Li, and Y. Chen. Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification (IEEE/CVF 2020) [y] S. Schotthöfer, E. Zangrando, J. Kusch, G. Ceruti, and F. Tudisco. Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations (NeurIPS 2022) [a] https://openreview.net/pdf?id=SJGrAisIz [b] https://arxiv.org/abs/1912.02386 [c] https://arxiv.org/abs/2306.01485 [d] https://link.springer.com/content/pdf/10.1007/s10994-021-06049-9.pdf?pdf=button Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have added a "limitations" paragraph stating that the obtained generalization bounds may be somewhat conservative, which I agree with. I am not sure I understand the proposed solution to the limitations though. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** (Novelty). **Re:** We appreciate the reviewer for acknowledging the novelty of our analysis for t-NNs. We believe that our analysis can bridge the gap from the existing applications to a more rigorous understanding of t-NNs in machine learning. Notably, t-product's distinction from linear products in traditional FNNs underscores their structural and algebraic dissimilarity. This forms the basis for our unique approach and sets our analysis apart from FNNs. Specifically, the unique structure of t-NNs, including their t-product and transformed low-rankness, precludes the direct adaptation of generalization bound proofs (e.g., Lemma 3 and Thm 6) from [5, 42, 43], designed for FNNs. To address this, we derive several lemmas tailored to the special characteristics of t-NNs. For instance, Lemma 3 involves a t-product-based "peeling argument," implemented by reshaping weight tensors using Lemma 16. This sets the foundation for the crucial Lemma 17, supporting the argument. Furthermore, the proofs of Thm 6 require control over t-product layer outputs and the covering number of low-tubal-rank tensors, achieved through Lemmas 37 and 38, and Lemma 33, respectively. Likewise, proofs for adversarial generalization bounds for t-NNs with approximate transformed low-rank parameterization can't straightforwardly stem from the analysis of [34] for standard generalization of FNNs. Challenges arise due to capacity control mismatches under adversarial attacks on t-NNs. To address this, our proof involves introducing $(\delta, r)$-approximate low-tubal-rank parameterization for capacity measurement, controlling localized Rademacher complexity of the Minkowski difference of adversarial-counter parts of two t-NN classes for Thm 12, and employing low-tubal-rank approximations for precise capacity control in Thm 14. **W2** (Numerical experiments). **Re:** Please review the initial numerical evaluations in rebuttal "Contributions and Numerical Evaluations". 
**Q1** (Differences of results, proof, analysis of the bounds from FNNs). **Re:** Our theoretical results significantly differ from corresponding FNN bounds: - The standard (resp. adversarial) generalization bound in Lemma 3 (resp. Thm 5) differs from the counterpart for FNNs [5] (resp. [42,43]) due to the role of the channel number $\mathsf{c}$ in t-NNs' parameter complexity. Additionally, Thm 5's bound covers broader adversary classes compared to $l_p$-attacks in [42, 43]. - Thm 6's notable divergence from [42, 43] lies in its inclusion of weight low-rankness in the adversarial generalization bound. This aspect hints at the potential for enhanced robustness in generalization. - We introduce a novel concept in our analysis of GF for adversarial training (AT): the implicit bias of approximately transformed low-rankness in t-NNs. This significantly extends and enhances the findings from implicit bias in AT for FNNs in [22], which concentrates solely on convergence to a KKT point with exponential loss. Our research goes further by investigating approximately transformed low-rankness for t-NNs with broader loss functions in AT. - The key distinction in the adversarial generalization bounds of Section 4.3 from non-adversarial bounds for FNNs in [34] is the inclusion of the localized Rademacher complexity of the Minkowski difference between adversarial-counter parts of approximately and exactly low-tubal-rank t-NNs in Thm 12. Our analysis and proofs diverge from linear cases: - Unlike the analysis for FNNs' generalization bounds based on Rademacher complexity in [5, 42, 43], we derive specific lemmas for standard and adversarial generalization bounds in t-NNs. This is due to the unique structure of (low-rank) t-product layers. We reformulate the t-product through an operator-like expression in Lemma 16, paving the way for pivotal Lemma 17, supporting the t-product-based "peeling argument." 
Additionally, we introduce Lemmas 37, 38, and 33 to handle t-product layer output norms and to cover low-tubal-rank tensors. - Proving the implicit bias of GF for AT of t-NNs in Thm 10, specifically the approximately transformed low-rankness, is nontrivial in comparison to the proof in [22] for the implicit bias of adversarial training for FNNs. As we consider more general loss functions for t-NNs in contrast to the exponential loss for FNNs in [22], we first derive a more general convergence result to the direction of a KKT point for t-NNs (Lemma 9), and then go deeper by using a constructive approach to establish the approximately transformed low-rankness in the proof of Thm 10. - Differing from [34], which focuses on standard FNN generalization, our approach delves into t-NNs' adversarial generalization. We achieve this by introducing the $(\delta, r)$-parameterization, bounding localized Rademacher complexity for a Minkowski set in adversarial settings, and using low-tubal-rank approximations for tensor weights. **Q2** (Rank adaptivity for Thm 6). **Re:** The analysis of Thm 6 doesn't require rank adaptivity due to explicit constraints on weight tensor ranks. Yet, we appreciate exploring alternative rank-adaptive methods like [x, y] due to the potential value for practical adversarial training of t-NNs. **Q3** (Empirical studies on robustness and low-rankness for standard NNs rather than t-NNs). **Re:** We recognize that the relationship between weight low-rankness and enhanced robustness is actively researched, yielding diverse observations and results. Direct comparisons should be approached cautiously, as these studies primarily concern standard network architectures and may not seamlessly apply to t-NNs. We'll incorporate insights from [a-d] into our discussion at line 250, highlighting the distinctions between standard networks and t-NNs with careful references. 
Additionally, we're planning to introduce new experiments (e.g., Experiment II in "Contributions and Numerical Evaluations") that employ t-NNs' own results to substantiate our claims. --- Rebuttal 2: Comment: Thank you for the valuable comments and suggestions. We have tried to address each of your concerns in detail and will make revisions to the final version based on your recommendations. May we ask if our responses have adequately addressed all your queries? If there are any points that require further clarification or explanation from us, please let us know. --- Rebuttal Comment 2.1: Comment: Dear authors, thank you for your response. I wish to maintain my score.
Rebuttal 1: Rebuttal: ## **Contributions and Numerical Evaluations** We thank all the reviewers for their valuable comments and suggestions. We first clarify our contributions and novelty again, and then report the initial numerical evaluations as suggested by Reviewers (R1, R3, R4). --- **Contributions & Novelty** With the rise of t-NNs in machine learning, our paper introduces a pioneering theoretical framework for t-NNs, enabling us to comprehend both their standard and robust generalization behaviors for the first time. The main contribution and novelty of this paper lie in an in-depth theoretical analysis of t-NNs, revealing key properties and robustness of this specialized type of neural networks. - *Theoretical Characterization of Generalization Behavior:* Through the introduction of lemmas specifically designed for t-NNs, this paper establishes upper bounds on the generalization error for t-NNs in both standard and adversarial contexts. - *Robustness Analysis of t-NNs:* The analysis shows that t-NNs with exactly and approximately transformed low-rank weights exhibit lower adversarial generalization bounds, highlighting the benefits of transformed low-rank weights in improving robustness and efficiency. - *Impact of Adversarial Learning on Weight Tensors:* The investigation reveals a novel observation that weight tensors in over-parameterized t-NNs tend to exhibit an approximation of transformed low-rankness. - *Influence of Transformed Low-rank Weights on Robust Generalization:* Through the precise derivation of adversarial generalization bounds, the importance of integrating transformed low-rank weights is emphasized as a means to strengthen the robustness of t-NNs. --- **Experiment I** To validate the generalization bound in Thm 6, we have conducted experiments on the MNIST dataset to explore the relationship between adversarial generalization gaps (AGP), weight tensor low-rankness, and training sample size. 
We consider binary classification of 3 and 7, with FGSM attacks of strength 20/255. The t-NN consists of three t-product layers and one FC layer, with weight tensor dimensions of 28×28×28 for $\underline{\textnormal{W}}^{(1)}$ to $\underline{\textnormal{W}}^{(3)}$, and 784 for the FC weight **w**. As an input to the t-NN, each MNIST image of size 28×28 is treated as a t-vector of 28×1×28. Thm 6 emphasizes: (i) lower weight tensor rank leads to a smaller AGP bound, and (ii) the AGP bound diminishes at a rate of $1/\sqrt{N}$ as $N$ increases. We explored this by conducting experiments, controlling the upper bounds $\textnormal{r}$ of the tubal-rank to 4 and 28 for low and full tubal-rank cases, and systematically increasing the number of training samples. Fig. 1 in the PDF presents initial results. The curves indicate that t-NNs with lower rank weight tensors have smaller robust generalization errors. Interestingly, the adversarial generalization errors seem to follow a linear relationship with $1/\sqrt{N}$, which approximately validates the $1/\sqrt{N}$ scaling predicted by the generalization error bound in Theorem 6. --- **Experiment II** We carried out experiments to confirm two theoretical statements related to the analysis of GF-based adversarial training. *Statement 2.1* Thm 10 reveals that, under specific conditions, well-trained t-NNs with highly over-parameterized adversarial training using GF show nearly transformed low-rank parameters. *Statement 2.2* Lem 22 asserts that the empirical adversarial risk approaches zero, and the F-norm of the weights grows unboundedly as \( t \) approaches infinity. In continuation of Experiment I, we focus on binary classification on MNIST under FGSM attacks. 
The t-NN is structured with three t-product layers and one FC layer, with weight dimensions set to D×28×28 for $\underline{\textnormal{W}}^{(1)}$, D×D×28 for $\underline{\textnormal{W}}^{(2)}$ and $\underline{\textnormal{W}}^{(3)}$, and 28D for the FC weight $\textbf{w}$. Our experiments involve setting values of D to 128 and 256, respectively, and we track the effective rank of each weight tensor, the empirical adversarial risk, and the F-norm of the weights as the number of epochs progresses. Since implementing gradient flow with infinitely small step size is impractical in real experiments, we opt for SGD with a constant learning rate and batch-size of 80, following the setting on fully connected layers in [22]. Note that fully observing implicit bias generally requires 10,000 to 40,000 epochs as shown in [22] for FNNs. This would take around 500 to 2,000 hours on our devices. *Due to time constraints in the rebuttal phase, we can only offer preliminary experimental results as initial support for our research statements.* For Statement 2.1, we present preliminary results illustrating the progression of the effective ranks of the M-block-diagonal matrix of tensor weights in Figs. 2 and 3 for the settings D=128 and D=256, respectively. Notably, these results show that the effective ranks decrease as more epochs are executed, thereby confirming the influence of implicit bias on transformed low-rankness, as described in Statement 2.1. For Statement 2.2, we present initial numerical findings depicting the progress of the empirical adversarial risk and the F-norm of the weights in Figs. 4 and 6 for D=128, and Figs. 5 and 7 for D=256, respectively. Unfortunately, due to time limitations, the program was only capable of running for less than 1/10th of the expected epochs. 
Nevertheless, these preliminary results exhibit a consistent pattern with the theoretical descriptions outlined in Statement 2.2 and the numerical results reported in [22] for adversarial training and [23] for standard training of FNNs. Specifically, we observe a decreasing trend in the empirical risk function and an increasing trend in the weight tensor's F-norm, which align with the expected behavior based on our theoretical framework and corroborate the numerical results presented in [22, 23]. Pdf: /pdf/03e5200f92ba9702bd621254c51d05f2cb49e5f5.pdf
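For readers wishing to reproduce the effective-rank tracking described in Experiment II, here is a minimal sketch. It assumes the entropy-based effective-rank measure of Roy & Vetterli (2007); the rebuttal does not state which definition the experiments use, so this is an illustrative stand-in.

```python
import numpy as np

def effective_rank(W):
    """Entropy-based effective rank of a matrix (Roy & Vetterli, 2007).

    NOTE: an assumed definition for illustration. Applied to the
    M-block-diagonal matrix of a weight tensor, it gives a continuous
    proxy for the transformed low-rankness tracked across epochs.
    """
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                      # drop exact zeros before the entropy
    return float(np.exp(-(p * np.log(p)).sum()))
```

The measure approaches the true rank when the nonzero singular values are comparable, and drops toward 1 as a single singular value dominates, which is why a decreasing curve over training epochs indicates an implicit bias toward low-rankness.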
NeurIPS_2023_submissions_huggingface
2023
DreamHuman: Animatable 3D Avatars from Text
Accept (spotlight)
Summary: This is a paper focusing on generating 3D animatable full-body human avatars from text using a pretrained 2D diffusion model and Score Distillation Sampling (SDS). The proposed approach differs from the existing approach in that, instead of directly representing a canonical space using a surface template, e.g. SMPL, it 1) uses the implicit 3D human model to establish the correspondence and condition the canonical representation on the pose parameter, 2) adopts a per-part optimization, and 3) uses a physics-based shading formulation and jointly optimizes the environment lighting. These three technical novelties intend to achieve more plausible deformation for loose clothing, better detail reconstruction for faces and hands, and more realistic colors, respectively. The comparisons with AvatarClip and DreamFusion demonstrate the advantages of the proposed method. Strengths: - substantially better visual quality compared to AvatarClip and DreamFusion, although the former only with limited evidence. - the use of imGHUM seems to improve the diversity of clothing. - the part-based optimization visibly improves the visual quality of the generation. Weaknesses: - Even though the following papers can be considered concurrent, they should be mentioned in the related work. Cao, Yukang, et al. "Dreamavatar: Text-and-shape guided 3d human avatar generation via diffusion models." arXiv preprint arXiv:2304.00916 (2023). Jiang, Ruixiang, et al. "AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control." arXiv preprint arXiv:2303.17606 (2023). - The comparison with AvatarClip is not sufficient. In the supplemental material and Tab 1, results of AvatarClip should be included. - Since the NeRF in canonical space is pose-dependent, it is theoretically more prone to overfitting. How does the method perform for unseen poses? - The fact that the shape parameter $\beta$ can vary is not well explained. 
- The benefit of the shading and optimization of the SH for environment light is not elaborated sufficiently. What is the albedo before shading? How exactly do you model the irradiance (include rendering equation). The training trick with randomly perturbed SH coefficients is not well motivated. If the goal is to improve disentanglement, why not use some smoothness regularization? - The animations are shown in a fixed view. It's hard to judge whether loose clothing such as dresses and jackets deform as claimed from a fixed viewing angle. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It would be great to address my comments above. In particular, I look forward to seeing in the rebuttal: 1. Comparison with AvatarClip 2. Clarification about shading and visualization of the learned albedo 3. Some visual examples of some avatars with loose clothes in walking motion, shown from the frontal view. 4. It would be great to include a user study on the visual quality. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and will address all raised questions. **Weaknesses** - *Concurrent works*: Thanks for the suggestion. We will cite and discuss the suggested references in the final version of the paper. We would like to highlight how our method compares to these 2 approaches. 1. AvatarCraft proposes to learn an animatable 3D avatar, but the avatar is very much tied to the underlying SMPL model and does not include instance-specific pose-dependent non-rigid effects. As a result, the geometry of the generated avatars is very close to the underlying body geometry and cannot model loose-fitting clothing, accessories like hats or dresses (see Fig. 5 and 6 of the paper). DreamHuman however is able to handle these cases much more effectively, as shown in our qualitative results. 2. DreamAvatar can better capture geometric variations beyond the underlying SMPL body model, but has the major disadvantage that it needs to be retrained every time for a new pose which makes it computationally very expensive. In contrast, our model, once trained, can be easily reposed at no extra cost. Additionally, avatars produced using DreamAvatar exhibit geometric and texture artifacts. 3. Overall, by inspecting the quality of the generated avatars, one can see that our method is able to create significantly more realistic avatars, with better texture and geometry, and can be reposed at test time at no major additional cost. - *AvatarCLIP comparisons*: We did not include comparisons with AvatarCLIP on Table 1 because it was trained using CLIP as supervision, so the CLIP metrics would be biased towards methods that optimize CLIP similarity losses. AvatarCLIP explicitly optimizes the CLIP similarity score. Even if a CLIP model with a different architecture is used for training and testing (e.g. ViT B16 vs ViT B32), all these models were still trained on the same data with the same strategy. 
Similar observations have been made in Table 1 of DreamFusion in their comparison with the previous work DreamFields. Nevertheless, as requested we ran AvatarCLIP using the publicly available implementation on the same 160-prompt set and report the numbers: R-Precision=0.855, Top-3=0.962, Top-5=0.981. We will include these in the final version, and also include additional AvatarCLIP results. We also conducted a user study to assess the visual quality as requested (see below). - *Pose generalization*: The results shown in the paper (Fig. 1) and in the animations are on unseen poses. From these we can conclude that our model generalizes well to novel poses. We did not observe overfitting because the 3D pose (MoCap) datasets on which the prior was trained are large-scale and diverse. - *Shape parameter variation*: We optimize the shape parameters as free variables, as the shape is part of the identity (in contrast to the pose). The body shape can be constrained by elements of the prompt, and we want to make sure that the shape is consistent with those elements in the prompted textual description. For example, the text prompt “a bodybuilder wearing a white shirt” implies that the generated person should have a muscular build. We will add this discussion to the paper. - *Spherical Harmonics*: For exact details about the Spherical Harmonics model we use, please refer to our answer to reviewer KMW9. The primary goal of randomizing SH coefficients is to support the geometry learning (similar to DreamFusion’s randomized light direction). Randomizing SH coefficients additionally helps in some cases to decouple albedo and lighting; e.g., a dark shirt can be obtained either in the absence of light or with a dark color. When randomizing the light, the former is no longer possible. However, albedo estimation was not the primary goal and we would like to investigate it further in future work. Thank you for the suggestion. 
- Animations: We plan to release more high-resolution animation results from varying viewpoints in the final version to better showcase how our method handles clothing deformations. All animations in the supplementary material were rendered from the same viewpoint for consistency reasons. We added an unrolled animation for a skirt from a front view in Figure 7 of the rebuttal, as well as a video animation for a dress in the link (see response to AC, as per the conference’s guidelines for submitting videos). **Questions** 1. We added a CLIP-based comparison with AvatarCLIP as well as a user study, as mentioned previously in our response. 2. We added renderings of the albedo in Figure 5 of the rebuttal PDF. Overall the albedo looks plausible. For the shading, please refer to our answer to the previous question. 3. We added a rolled-out animation in the rebuttal PDF (Fig. 7) and also included videos in the response-to-AC section. We will include more results like this in the final version. 4. Following the reviewer's suggestion, we conducted a user study to assess the quality of our results. We used 20 text prompts from the “General Description” category of the AvatarCLIP website. We ran our method on those 20 text prompts and rendered the final results in the rest pose from the front and the side. We did the same for the precomputed meshes that the AvatarCLIP authors provide on their website. We then asked the users to rate the two methods on (a) the perceived agreement between the renderings and the input text, and (b) the perceived visual quality of the generated avatar. The ratings were on a scale from 1 to 5, with 1 meaning “very bad” and 5 “very good”. Our method achieved an average rating of 4.45 for “Agreement with text” and 4.16 for “Visual Quality”. AvatarCLIP scored 2.96 on “Agreement with text” and 2.49 on “Visual Quality”. 
In terms of scale, the user study we carried out is slightly larger than the one in AvatarCLIP (Ours: 20 images with 25 raters; AvatarCLIP: 8 images with 22 raters). We will include the full list of text prompts used in the Supplementary Material. --- Rebuttal Comment 1.1: Title: Satisfied with the answers Comment: Thank you for the explanation. I think the answers on shape parameter variation and on optimizing the environment light SH are important details and should be included in the main paper. I appreciate the comparison with concurrent work. Finally, I agree with other reviewers in urging the authors to release their code. My final rating is accept.
Summary: This paper presents a method to generate animatable 3D human avatars from text. The pipeline is similar to DreamFusion and is built upon the NeRF representation and a diffusion model. However, a key difference is that an imGHUM body model is introduced as a prior, which allows for the construction of a deformable NeRF representation and a 3D animatable human model. This design not only enables animation capabilities but also effectively addresses anthropometric consistency issues. In addition, a semantic zooming loss is proposed to refine details in body regions like the face and hands, resulting in a more photo-realistic overall quality. Quantitative and qualitative comparisons are conducted with state-of-the-art baselines such as DreamFusion and AvatarCLIP. The extensive results demonstrate that the proposed method outperforms previous approaches across all metrics. The visual quality and geometry detail are particularly impressive. Furthermore, the study includes an analysis of different components to highlight their importance within the framework. Strengths: - The paper is well-written and easy to follow. - The overall results are impressive, especially the appearance and the geometry details of the generated 3D human avatars. - Although each component can be seen in previous works like DreamFusion and AvatarCLIP, this work does a good job of putting all losses and modules together properly and achieving promising 3D human avatar modeling. - Extensive ablation experiments are conducted to show the importance of the proposed components. - The proposed semantic zooming loss is interesting and effective; it largely improves the visual quality and helps to generate sharper, higher-quality textures. Weaknesses: - It’s not clear to me how the shape parameters for the imGHUM model are decided during optimization. If they are optimized together with the NeRF model, will this introduce additional training costs? 
- The overall computation cost is not clearly listed and compared. It would be better to report the optimization time and inference time for a single model/text prompt for the proposed method and the baselines. - It would be better to show more quantitative results for the semantic zooming loss, since it is one of the key contributions of this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there any quantitative result for a more direct evaluation of the view consistency and pose dependency of the generated 3D avatar? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - As shown in Figure 7, artifacts can be found in the soldier avatar generated by DreamHuman. The generated avatar has more than two hands, and the method struggles with generating correct accessories. This can be really hard for the current model design, since it is built on the imGHUM body model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and will address all raised questions. **Weaknesses** - *Shape parameters*: The shape parameters are treated as additional parameters of the overall model during optimization, and we compute the gradient of the loss with respect to them as if they were trainable weights of the neural network. This does not introduce any significant computational cost (only 10 additional latent shape parameters), whereas the NeRF MLPs have over 1 million parameters. - *Computation cost*: Training our method for 50K iterations takes around 6 hours per example on a TPU v4 machine. After training we can render the subject in an arbitrary pose at 512x512 resolution in 2.6 seconds, and at 256x256 in 0.67 seconds. The rendering time is similar to DreamFusion's; however, DreamFusion trains for only 15K iterations, which takes about 1.5 hours. We need a larger number of iterations until convergence because we train a dynamic and not a static avatar. AvatarCLIP takes up to 10 hours to train on a single GPU with 32GB of memory. These are all typical times for training NeRF-based architectures. - *Semantic zoom*: We would like to thank the reviewer for their suggestion. We will add more results in the supplementary material. **Questions** - To the best of our knowledge there are no quantitative metrics to assess the view consistency and the quality of the pose-dependent deformations. To evaluate the quality of text-based 3D generations, most methods rely on imperfect 2D metrics, such as the CLIP R-Precision score. However, these are not well suited for measuring the quality of the 3D geometry or the view consistency of the generations, as they only measure the agreement of the rendering with the text prompt. We train a NeRF with no view-dependent effects, so our model is viewpoint consistent by design. The pose consistency relies on pose-sensitive deformations learned from data. 
**Limitations** - Indeed, this is an interesting failure case of our method. In fact, after inspecting the result, we found that although the extra hand is painted on the military uniform, no extra geometry is created. In practice we observed that text-to-image models tend to exhibit certain pose biases, i.e., when prompted to generate an image of a soldier, most generated images include soldiers holding a weapon. In the vast majority of cases, randomization during training helps mitigate this issue, but in rare cases errors like this may occur. In Figure 6 of the rebuttal PDF we show that if we run the optimization again with a different random seed, we obtain a correct result. --- Rebuttal 2: Title: Thanks for your response. Comment: Thank you for the detailed feedback and the new qualitative results, which address my earlier concerns. I believe this is solid work with several innovations. I also agree that making the model accessible for the purpose of reproducibility would serve as a valuable asset for the broader community.
Summary: This work proposes a method for text-driven human avatar generation. It combines an animatable human NeRF with a diffusion model to implement avatar generation and animation. This work produces photorealistic avatars with high-quality details by incorporating a spherical harmonics lighting model and semantic zoom. Extensive experiments demonstrate SOTA performance and the effectiveness of each design in the proposed framework. Strengths: 1. This is the first diffusion-based work that successfully produces photorealistic animatable 3D human avatars. 2. This work shows temporally consistent animation results. I believe this can open up more application possibilities for optimization-based avatar generation methods. 3. The incorporation of the spherical harmonics lighting model can alleviate the long-standing issue of unrealistic, over-saturated colors in text-driven 3D object generation. 4. The semantic zoom loss is simple yet effective at improving the quality of detail regions such as the face, arms, and hands. Weaknesses: 1. imGHUM is designed for the whole body; it contains parameters for hand poses and facial expressions. However, there are no results for the animation of facial expressions and hand poses. It would be better if the authors could provide animation results to show the controllability of these details. I think this would also help prove the necessity of the semantic zoom loss. 2. I believe this work uses a more powerful diffusion model, and imGHUM is not completely open source. These two points will limit access to the proposed model. Is there any plan to release an online demo or interface for users? 3. Although the authors mention the training strategy in the Supplementary Material, there is no clear description of the computation cost of the proposed method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How are the rendering camera poses determined for each part in the semantic zoom? 2. Which diffusion model is used in this work? 
Is this diffusion model finetuned on human body images? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and will address all raised questions. **Weaknesses** 1. *Hands and facial expressions*: Thanks for the very good suggestion. For the rebuttal we have produced examples of renderings with varying facial expressions (Figure 2) and hand poses (Figure 3), and we will include them in the final version. Note that this is similar to varying the shape coefficients (Figure 1 of the Supplementary Material), as we did not train with varying hand poses or expressions. Nevertheless, our model is able to generalize by leveraging the capabilities of imGHUM. 2. *Different diffusion models*: We use the same model as DreamFusion to ensure a fair comparison. Notice, however, that our model is not necessarily more powerful than other public domain models such as Stable Diffusion. We included some example results obtained by replacing Imagen with Stable Diffusion in Figure 4 of the rebuttal PDF. The performance of the two variants is perceptually very similar, thus showing that our model is agnostic to the architecture of the underlying text-to-image diffusion model. Regarding the availability of imGHUM, it is distributed under an academic license. We will strive to make a demo available. 3. *Computation cost*: Our model is trained for 50K iterations, which takes about 6 hours on a TPU v4 machine; this is similar to other competing methods (1.5 hours for DreamFusion for static avatars, 4-10 hours for AvatarCLIP). After training we can repose and render our model very fast: rendering takes 2.6 seconds for a 512x512 image and 0.67 seconds for a 256x256 image. **Questions** 1. *Semantic zoom*: Given a particular body pose and shape, we can get the 3D joint locations for the body part. Given these, we place the camera at a reasonable distance from the body part to minimize perspective distortion effects. 
We also compute a range of focal lengths that ensures that the body part occupies a large enough portion of the image. 2. *Diffusion model*: We use Imagen, a general-purpose text-to-image diffusion model. We did not finetune on human body images. As we show in Figure 4 of the rebuttal PDF, our method works with other diffusion models such as the open-source Stable Diffusion. --- Rebuttal Comment 1.1: Comment: Thank the authors for the answers. The results in the rebuttal file are nice and persuasive. All of my concerns have been addressed, and I will keep my rating.
Summary: This paper proposes a method to generate high-quality and animatable 3D humans from textual input. Strengths: - The results are good, showing a clear improvement compared to previous text-to-3D methods. - The method can be learned without 3D GT. - The ablation study is thoroughly done. Weaknesses: - What is the run-time at inference? - Density loss: I think this can work when the subject is wearing tight clothing. However, wouldn't this confuse the network when the subject is wearing clothing with large deformation (i.e., more gap is present between the body model and the actual geometry)? - What is $w_i$ in Equation 5? - What are the proposal weights $L_p$ in L195? - It is written that 4 renderings are supervised in a single training step (supplementary material L7). Does this mean that 4 randomly selected semantic parts (zoomed-in semantic parts) of the same pose are rendered? Or does this mean that a single semantic part is rendered from different camera poses and body poses? Also, would rendering and supervising fewer than four predictions (e.g., when using a single GPU with less memory) degrade the performance? A discussion on the running environment and performance would be helpful for the readers. - Lack of implementation details, making it difficult to reproduce. Also, no plan for supporting reproducibility has been presented. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the rendering resolution at inference time? - How are the query locations sampled? Are they sampled in free space or in a bounded box? - Which method was used to perform rendering with the spherical harmonics lighting model? - How are the camera parameters (both extrinsic and intrinsic) set? Since there are no known camera parameters for the images generated with diffusion models, I am wondering how the authors set the parameters to render images. - Is there a reason behind choosing mip-NeRF 360 as the backbone? Would using basic NeRF degrade the performance? 
- What is s (the output of imGHUM) exactly? Is it an index of the nearest vertex on the body? - Although the lighting result is much better than in previous work (DreamFusion), it still looks unnatural. What is the reason behind this, and how can it be improved? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The idea is interesting and the results are great. However, there is a concern regarding the reproducibility. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and will address all raised questions. **Weaknesses** - *Runtime*: The rendering time for a 512x512 image is 2.6 seconds, whereas for 256x256 it is 0.67 seconds. These timings were measured on a TPU v3 chip with 8 cores. - *Density loss*: The purpose of this loss is to act as a regularizer and encourage the model to preserve important details such as facial features, as we show in Fig. 4 of the Supplementary Material. Our results in Figure 2 of the main paper show that we can handle loose clothing such as dresses, something that previous work like AvatarCLIP cannot. - *Rendering weights*: The $w_i$ are the weights of the samples along a ray as defined in Eq (1). The formulation we use is the same as in Eq (10) of Ref-NeRF [56]. - *Proposal weights*: We use mip-NeRF 360 as our NeRF backbone, which consists of 2 MLPs: the proposal MLP and the NeRF MLP. The proposal weights are the rendering weights as defined in Eq (1), but for the proposal MLP. $L_p$, as proposed in mip-NeRF 360, is used to supervise the proposal MLP and penalizes the proposal weights when they underestimate the distribution of the NeRF MLP. Please refer to Section 3 of mip-NeRF 360 for a more detailed discussion. - *Training details and environment*: For each view we randomly select camera poses, semantic parts and body poses. The selection probabilities for the semantic parts are listed in Table 1 of the Supplementary Material. We combine each view with a different semantic part and a different pose. We include a discussion of the running environment in Section 1 of the Supplementary Material. Our network can be trained on TPUs or GPUs, and with 16GB of memory we can fit 1 view per device. The model still has acceptable performance when trained with 2 views per step, but it needs more iterations to converge. 
With a smaller number of views, we noticed that it’s harder for our network to learn the dependency of f on $\theta$ (Eq 3), probably because of the higher variance in the gradients, similarly to training a neural network with a small batch size. With 1 view and all other hyperparameters unmodified the optimization becomes unstable. In Figure 1 of the rebuttal PDF we show examples trained with 2 and 4 views. - *Implementation details*: Thank you for pointing this out. We have included the hyperparameters and training strategy in the main paper and the supplementary material. We will expand this section in the supplementary material and include a more detailed algorithm section. As we show in Figure 4 of the rebuttal PDF, our method is generic enough and works with other open-source diffusion models such as Stable Diffusion. **Questions** - *Render resolution*: Since our model is NeRF-based and we trained it with camera randomizations, we are not tied to a particular resolution. In the paper figures we used a resolution of 512x512 pixels. In the Supplementary Material videos we used 256x256 due to document size constraints. - *Query location sampling*: We assume that the scene resides inside a unit sphere centered around the origin and we use an additional scaling factor to make sure the human fits in the unit sphere. - *SH*: We use a simple Spherical Harmonics Diffuse Lighting model. Given the 9 light source coefficients $c_j$, for each pixel in the image we compute the dot product between the 9-element vector of the learnable light coefficients and the vector of the corresponding spherical harmonics basis functions $h_j$ and then we use this to compute the final shaded color given the albedo. More specifically, in our formulation, the shaded pixel value $s_i$ at pixel $i$ with albedo $\alpha_i$ and unit surface normal $(x_i, y_i, z_i)$ is $s_i = \alpha_i \cdot \left(\sum_{j=0}^8 c_j h_j(x_i, y_i, z_i)\right)$. 
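As an illustration, the diffuse SH shading formula above can be sketched in code. This is a minimal NumPy sketch under the stated formulation, using the standard real spherical harmonics normalization constants; the function names are illustrative and not the authors' actual implementation.

```python
import numpy as np

def sh_basis(normal):
    """First 9 real spherical harmonics basis functions h_j evaluated
    at a unit surface normal (x, y, z); the constants are the standard
    band-0/1/2 normalization factors."""
    x, y, z = normal
    return np.array([
        0.282095,                        # l=0, m=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=1
        0.546274 * (x * x - y * y),      # l=2, m=2
    ])

def shade(albedo, normal, coeffs):
    """Diffuse SH shading: s_i = albedo * sum_j c_j * h_j(normal)."""
    return albedo * float(np.dot(coeffs, sh_basis(normal)))
```

For example, with only the DC coefficient $c_0$ nonzero, the shading is the same for every normal, which corresponds to uniform ambient light.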
More details can be found in Spherical Harmonics Lighting: The Gritty Details (https://3dvar.com/Green2003Spherical.pdf). We will include all these details in the Supplementary Material. - *Camera parameters*: We use a reasonable distance from the subject to ensure that we don’t have severe perspective distortion effects. For example, for the full-body shots the camera distance is between 2.5 and 4 meters from the subject, and the camera zoom is chosen such that the person takes up a large portion of the image and is fully visible in most frames. - *NeRF backbone*: mip-NeRF 360 has been shown to achieve better results than NeRF and is computationally cheaper because it needs fewer samples per ray than the standard NeRF. However, any other competitive NeRF method can be used instead. - *imGHUM*: For a given pose θ and a point (x,y,z) in 3D space, imGHUM returns the distance d of the point from the surface along with a semantic code s that associates it with the closest point on the surface of the posed GHUM model. If, e.g., the closest point on the mesh is the tip of the index finger, then s is the 3D position of the tip of the index finger in the GHUM template mesh. The semantic code s is not an explicit vertex id, but rather a continuous surface mapping. We kindly refer the reader to the imGHUM paper for a more detailed explanation. - *Color quality*: It has been reported in previous works (e.g., DreamFusion) that the use of the SDS loss tends to produce saturated colors, and we have observed similar effects. Besides our proposed spherical harmonics estimation strategy, one way to improve the appearance could be to augment the training with an additional loss that attempts to match the statistics of our rendered images to those of natural images. --- Rebuttal Comment 1.1: Title: Response to the author rebuttal Comment: I appreciate the authors for their time and effort put into the rebuttal. The author rebuttal has successfully addressed my concerns. 
I will keep my rating.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable feedback. - Reviewer KMW9 states that our method shows clear improvements over previous methods without the need for 3D ground truth data, and that we have a comprehensive ablation study. - Reviewer urhH notes that our method produces animatable photorealistic 3D avatars with temporally consistent animation results, and appreciates our novel components that improve the overall reconstruction quality. - Reviewer hMhh highlights that our 3D generation results are impressive, appreciates our extensive ablation study and notes that the novel components of our method contribute to achieving higher-quality results. - Reviewer FnWG praises the substantially better visual quality compared to previous methods and the importance of our semantic prompting. We will address the questions raised by each reviewer separately under each review. The summary of the rebuttal is as follows: - We performed extra evaluations against AvatarCLIP and conducted a user study on the visual quality of the results. - We demonstrated that our method also works with open-source diffusion models such as Stable Diffusion, thus addressing potential concerns about reproducibility. - We added extra clarifying qualitative results. We will include all these results in the final version of the paper. Pdf: /pdf/f6db59327136ded16f4daf834d4990f8c4a4b4a8.pdf
NeurIPS_2023_submissions_huggingface
2023
Counterfactual Evaluation of Peer-Review Assignment Policies
Accept (spotlight)
Summary: In this paper, the authors study the counterfactual evaluation of peer-review assignment strategies with randomness. The authors adapt existing off-policy evaluation methods to this specific problem to handle non-positivity, missing reviews and attribution. A framework of off-policy evaluation with different imputation choices is provided. Empirical evaluations are carried out on two datasets to study (1) the effect of randomness and (2) the weights on different similarity components. Strengths: 1. The paper studies an important problem, as online A/B experiments for evaluating peer-review assignments are very expensive and hard to carry out. 2. The authors propose a comprehensive framework based on off-policy evaluation with different choices of imputation methods. Moreover, the authors adapt the existing methods to specific cases for paper review. 3. The authors carry out two case studies using the proposed method on two real-world peer-review datasets. Weaknesses: One major concern is that there is no evaluation of the quality of the counterfactual evaluation. It would be much more convincing if some evaluation could be carried out with the help of randomized A/B experiments or results from existing randomized experiments. Different methods provide quite different bounds. It is hard to pick the proper method without a ground truth provided by an A/B experiment. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see the section above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! We appreciate that you recognize the importance of the problem we consider and the value of our off-policy evaluation framework. > One major concern is that there is no evaluation on the quality for the counterfactual evaluation. It would be much more convincing if some evaluation can be carried out with the help of random A/B experiments or results from existing random experiments. Different methods provide quite different bounds. It is hard to pick the proper method without a ground-truth provided by A/B experiment. Unfortunately, no existing conference has conducted such an A/B test due to numerous significant costs and challenges. Running an A/B test in a real conference for the purpose of evaluating this work either risks substantially harming the review quality (as some reviewers are assigned under a lower-quality policy) or brings substantial costs in terms of extra reviewer time (if each paper is assigned additional reviewers). Even if we did run an A/B test, if both A and B policies were stochastic we would still likely want to use the off-policy evaluation techniques we develop here to let observations under “B” inform estimates of the “A” policy (and vice versa). Another challenge in executing an A/B test, and considering it as ground truth, is due to issues of interference: we would need to split the pool of reviewers/papers into multiple groups, and each experimental arm would likely influence the results of the other leading to biased estimates. This is a common problem in market experiments and an active research area. Given these challenges, it would be far beyond the scope of this submission to deploy an A/B test in a conference, in addition to the contributions already made in this work. Our approach in this work is instead to consider the estimates of review quality obtained under a variety of different assumptions, which we verify to be reasonable on the observed outcomes (Appendix G). 
Where these estimates agree (as they often do), we can draw a conclusion that is likely robust to a violation of any individual assumption; when the estimates disagree, we clearly show how the conclusion depends on which assumption one finds to be most appropriate. We also note that our approach fundamentally relies on randomization in the paper assignment to perform estimation, and thus is not observational in the sense of “observational causal inference”, a context where A/B tests are a relative “gold standard” and a highly desired form of validation. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thanks for the response. I have read the rebuttal and remain at a borderline 4/5 score. I agree with the authors that such an experiment is hard to carry out, but it is still desired as a gold standard to validate the methodology.
Summary: This paper leverages recently proposed strategies that introduce randomness in peer-review assignment—in order to mitigate fraud—as a valuable opportunity to evaluate counterfactual assignment strategies. The paper introduces novel methods for partial identification based on monotonicity and Lipschitz continuity assumptions on the mapping between reviewer-paper covariates and outcomes. The proposed methods are applied to peer-review data from two computer science venues; the authors find that placing higher weight on text similarity results in higher review quality, and that introducing randomization in the reviewer-paper assignment only marginally reduces review quality. Strengths: 1. This paper proposes off-policy evaluation as a less costly alternative to A/B tests that exploits existing randomness to enable the comparison of many alternative policies. 2. The paper is well written and organized. 3. The experiments are sufficient and convincing for the conclusions. Weaknesses: It would be better to compare the proposed off-policy evaluation method with previously common evaluation methods and analyze the effectiveness of this method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is the proposed off-policy evaluation effective? 2. Compared with other evaluations, can the proposed evaluation method find a better assignment policy? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors provide the limitations in Appendix K. 1. The method requires making assumptions: monotonicity and Lipschitz continuity. 2. 
The method considers estimates of the average review quality across all assigned reviewer-paper pairs, analogous to the common sum-of-similarities objective used for reviewer-paper assignment; the analysis does not consider the impact on individual reviewers or papers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback! We are pleased to hear that you found the paper well-written and the experiments convincing. Below we provide responses to the questions posed in the review. > Why the proposed off-policy evaluation is effective? The key idea of our proposed approach to off-policy evaluation is that we can exploit already-existing randomness (and data already generated under that randomness) in deployed reviewer-paper assignment designs. This allows us to use causal inference to answer questions about potential alternative paper assignments without intervening in the deployed assignment (which could potentially result in deploying a worse assignment); such questions cannot be credibly answered through analysis of data from deterministic assignments. This work is the first to propose leveraging randomized paper assignments to perform off-policy evaluation in the peer review setting, and our proposed methods are designed to address the challenges of applying standard techniques here (e.g., large numbers of positivity violations). > Compared with other evaluations, can the proposed evaluation method find a better assignment policy? In our experimental analysis, we do find assignment policies with higher quality than the deployed policies in both venues (Figure 1). In TPDP, increasing $w_{text}$ results in a better assignment; in AAAI, increasing $w_{text}$ and $\lambda_{bid}$ both result in slightly better assignments. We additionally find that randomized policies have a very similar quality to deterministic policies, helping to inform future conference organizers who are considering randomizing their assignments for other reasons (e.g., for the purpose of mitigating fraud). Importantly, the methods we introduce in this work can be used by conference organizers to find the best policy among any class of assignment policies they may have in mind. 
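The core idea described in this rebuttal—reweighting outcomes observed under the randomized logging assignment to estimate the quality of an alternative policy—can be sketched in miniature as follows. This is an illustrative simplification, not the paper's implementation: the function name, the array shapes (reviewers × papers), and the assumption that positivity holds everywhere are all choices made here for the sketch.

```python
import numpy as np

def ips_policy_value(target_probs, logging_probs, assigned, outcomes):
    """Importance-weighted (Horvitz-Thompson style) estimate of the mean
    review quality under a target assignment policy, using outcomes
    observed under the randomized logging policy.

    target_probs  : (R, P) marginal assignment probabilities, target policy
    logging_probs : (R, P) marginal assignment probabilities, logging policy
    assigned      : (R, P) 0/1 realized assignment under the logging policy
    outcomes      : (R, P) observed quality (e.g. self-reported expertise),
                    valid only where assigned == 1

    Assumes positivity: logging_probs > 0 wherever target_probs > 0.
    The paper's contribution is precisely to handle the case where
    positivity fails; this sketch covers only the well-supported case.
    """
    mask = np.asarray(assigned, dtype=bool)
    weights = target_probs[mask] / logging_probs[mask]
    return float((weights * outcomes[mask]).sum() / target_probs.sum())
```

On a toy 2-reviewer, 2-paper instance with a uniform logging policy, the estimator upweights outcomes from pairs the target policy favors, giving an unbiased estimate of the target policy's average quality.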
We address the comparison with other evaluation methods below in response to the comments in the weaknesses section. > It would be better to compare the proposed off-policy evaluation method with previous commonly used evaluation methods, and analyze the effectiveness of this method. As this work is the first to consider off-policy evaluation in this setting, there is only a limited set of baseline methods for comparison. In the causal inference and econometrics literature, the standard approach to estimation in the presence of positivity violations is Manski bounds, which we do consider as one of our methods. Experimentally, we find that the Manski bounds are uninformative on real conference data and are generally unable to distinguish better policies from worse ones. In contrast, our proposed methods result in much sharper estimates by carefully leveraging intuitive assumptions, allowing us to identify policies that improve over the deployed policy. In practice, the current method for choosing the best assignment policy is highly ad-hoc: conference organizers typically generate a few sample assignments under different policies and spot-check a few of the papers’ assigned reviewers (based on their own prior knowledge). Without the techniques in this work, conference organizers have no way to evaluate an alternative, non-deployed policy in terms of the review quality (e.g., as measured by self-reported expertise). Our approach instead allows for the costless evaluation of any number of alternative policies (for which there is statistical support) based on data from the deployed assignment.
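The Manski-bound construction the rebuttal describes—imputing minimal and maximal outcomes for positivity-violating reviewer-paper pairs—can be sketched as follows. This is a hypothetical simplification: the 1-to-5 outcome scale, the array shapes, and the function name are assumptions for illustration, not the paper's code.

```python
import numpy as np

def manski_bounds(target_probs, outcomes, observed_mask, y_min, y_max):
    """Bound the target policy's mean outcome when some reviewer-paper
    pairs have zero probability under the logging policy (positivity
    violations): impute the extreme outcomes y_min / y_max for the
    pairs whose outcomes cannot be estimated.

    target_probs  : (R, P) assignment probabilities under the target policy
    outcomes      : (R, P) outcome estimates, valid where observed_mask is True
    observed_mask : (R, P) True for pairs with statistical support
    """
    lo = np.where(observed_mask, outcomes, y_min)
    hi = np.where(observed_mask, outcomes, y_max)
    total = target_probs.sum()
    return (target_probs * lo).sum() / total, (target_probs * hi).sum() / total
```

As the rebuttal notes, when many pairs violate positivity these bounds are wide (often uninformative), which is what motivates the sharper monotonicity- and Lipschitz-based estimates.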
Summary: The authors consider the problem of evaluating alternative reviewer matching algorithms. They motivate this problem by the cost of running A/B tests. They propose to use off-policy evaluation by exploiting the randomness in review assignments introduced by a fraud-mitigating scheme in recent years. They tackle several main challenges, including positivity and attrition, using imputation, Manski bounds, and improved bounds under monotonicity and continuity assumptions. They apply their method to two CS venues and their findings suggest that randomization does not significantly hurt review quality and higher review quality can be obtained by paying more attention to text similarity. Strengths: - Sections 1 and 2 are beautifully written. - The authors have a clear set of challenges that they seek to address. - The authors run comprehensive experiments that yield interesting takeaways. Weaknesses: - I would suggest stating the assumptions clearly and upfront. For example, they should state that Y_i is not a function of the assignment and state the no-interference assumption. - The challenges subsection of Section 3 is a little messy compared to the rest of the writing. One way the authors could improve this is by introducing an explicit problem statement. At the moment, it's a mix of discussion, problem statement, and assumptions. - Section 4 could be improved organizationally. The authors should clearly delineate the proposed approaches & assumptions that go with each. - The experiment section is strong but could be improved in terms of (i) explanation and (ii) organization of takeaways. For example, the figure is not explained in the text, and it takes some work to understand/parse the notation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The authors consider self-reported expertise and confidence as measures of review quality - I could see these as good, but also noisy and limiting proxies. What do the authors expect are the limitations of this approach? 
- I'm having trouble telling whether the results given in Section 5 are verified? That is, can we tell whether the proposed approaches improve estimation (beyond the fact that the authors' approaches do what they say they will do, e.g., that the assumptions result in tighter bounds)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss a possible limitation in their conclusion. They also discuss their assumptions in-line and refer readers to Appendix G for a discussion on assumption suitability. I do think the authors would benefit from a greater discussion of limitations (particularly if the answer to the second question above is no) and their assumptions in one place. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful comments! Thank you also for your recommendations on the exposition of the results and the organization of the manuscript. We greatly appreciate them and look forward to improving the clarity of the manuscript with these recommendations in mind. >The authors consider self reported expertise and confidence as measures of review quality - I could see these as good, but also noisy and limiting proxies. What do the authors expect are the limitations of this approach? We do agree that the self-reported expertise and confidence measures that we use in this work can be noisy proxies for review quality: for example, reviewer self-reported expertise may be miscalibrated with true expertise. Similar concerns motivated us to report the results of both the reviewers’ self-reported expertise (Section 5) and confidence (Appendix L). The fact that the analyses based on both measures of review quality lead to the same substantive conclusions in the two datasets gives us reason to believe that the findings would be robust to other similar outcome measures. We additionally note that our framework and methods are general, and thus can be applied to any outcome/review quality measure available to conference organizers. Such alternative review quality measures could include asking the authors (or meta-reviewers) to rate the quality of the reviews. However, these measures can also be noisy: the authors’ reports may depend on the overall rating of the paper, and the scores of the meta-reviewers may depend on their seniority. Finally, in practice, many venues do not ask meta-reviewers / ACs to score the reviews they oversee, and when they do those scores are often missing, making off-policy evaluation even more challenging. As each measure provides an evaluation of a slightly different aspect of the review process and notion of review quality, the appropriate proxy to use may depend on the goals of the conference organizers. 
> I'm having trouble telling whether the results given in Section 5 are verified? That is, can we tell whether the proposed approaches improve estimation (beyond the fact that the authors' approaches do what they say they will do, e.g., that the assumptions result in tighter bounds)? Thank you for encouraging us to think further about verification—the problem setting is such that the ground-truth outcomes are unobserved and cannot be used to directly verify the estimates. Without making assumptions about the outcomes for reviewer-paper pairs violating positivity, the Manski bounds are the only applicable estimates, as they result from imputing minimal and maximal outcomes for these pairs (Section 4). Thus, each of our proposed approaches aims to improve the estimation by leveraging some assumption about these positivity-violating pairs (e.g., the monotonicity and Lipschitz continuity assumptions). So, while these assumptions again cannot be directly verified on the unobserved outcomes, there are many ways that we indirectly verify that they are reasonable. First, we carefully assess the suitability of these assumptions on the observed outcomes (Appendices G and E). Second, we find experimentally that a diverse set of assumptions lead to converging conclusions: e.g. the estimates of a variety of parametric models tend to lie within the same region as the estimates based on relatively mild Lipschitz/Monotonicity assumptions. This provides some evidence that these estimates are robust to violations of these assumptions; in contrast, estimates that significantly differ under different assumptions should be viewed more skeptically. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thanks for the response. I have read the rebuttal and remain a borderline 5/6 score. 
As a note: While I understand that it is difficult to evaluate the review setting given the limited metrics available, there are clear limitations to using self-reported expertise and confidence as proxies for review quality. I believe that the authors should clearly acknowledge these limitations in the work and possibly even account for how they could change the findings. I agree that the setting they examine is difficult given data (un)availability and that the framework is general, yet the authors should acknowledge the limitations in the work beyond stating that it is hard to get good data. Similarly, I appreciate that verification is difficult in this setting; please state this and why your analysis is sufficient upfront in Section 5. In general, my main feedback is to be upfront with limitations and assumptions. Doing so makes a work stronger, not weaker. This was a trend in the paper (see my initial review and the paragraph directly above). --- Reply to Comment 1.1.1: Comment: Thank you for the response. We appreciate your feedback and agree that being upfront about the limitations and assumptions of the analyses will make the paper stronger. We attempted to do so in our initial submission within the limited space, and we are happy to revise the manuscript to make the limitations/assumptions more explicit.
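For concreteness, the Lipschitz-continuity assumption discussed in the exchange above can be turned into outcome bounds for an unobserved pair roughly as follows. This sketch is hypothetical: the scalar similarity covariate, the 1-to-5 outcome scale, and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def lipschitz_outcome_bounds(s_query, s_obs, y_obs, L, y_min=1.0, y_max=5.0):
    """Bound the outcome of an unobserved reviewer-paper pair with
    covariate (e.g. similarity) s_query, assuming the outcome function
    is L-Lipschitz in the covariate: for every observed pair j,
    |y(s_query) - y_obs[j]| <= L * |s_query - s_obs[j]|.
    Bounds are clipped to the outcome scale [y_min, y_max]."""
    s_obs = np.asarray(s_obs, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    gaps = L * np.abs(s_query - s_obs)
    lower = max(y_min, float(np.max(y_obs - gaps)))
    upper = min(y_max, float(np.min(y_obs + gaps)))
    # If lower > upper, no L-Lipschitz function fits the observations:
    # the assumed constant L is falsified by the data.
    return lower, upper
```

Smaller L (a stronger smoothness assumption) gives tighter intervals, which is the trade-off between assumption strength and sharpness that the paper's analyses navigate.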
Summary: This work uses the randomness of a recently-implemented peer review paper assignment algorithm in order to perform off-policy evaluation of other (nearby) randomized assignment strategies. It uses multiple methods for imputing missing values and estimating the errors in the estimators of alternative policies' relative value, including fitting proxy values to the data which obey two plausible structural assumptions. The authors perform this evaluation on paper-reviewer matching data from two conferences, and identify practical and actionable recommendations for modifications to the similarity score calculation and assignment algorithm as currently implemented. Strengths: This paper has a clear premise, purpose, and scope, and it is well executed. It introduces structural assumptions into this off-policy evaluation framework which are technically and theoretically interesting and apparently novel. It provides actionable recommendations for improving the implementation of paper-reviewer matching algorithms in a major conference, and a framework for performing similar evaluations going forward. Weaknesses: It seems hard to say how far these recommendations generalize beyond the conferences which are included in the analysis. It would have been nice to have slightly more explicit discussion of how the various uncertainty estimates are generated in the main body of the work. It would have been interesting to see this methodology used to compare the current method to a wider range of others---even deterministic similarity-based approaches---which have seen prior use. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In your discussion of review quality, you mention competing definitions of/proxies for review quality. How does your choice (self-reported expertise and confidence) compare against the other metrics (e.g., auto-generated and meta-reviewer-evaluated), and would you expect your results to be robust to this choice? 

The randomized paper assignment scheme of [17] which you take as the logging policy generates a fractional assignment based on the q constraints and then performs dependent rounding in order to generate an assignment. Since your estimators are linear in these assignments, I presume their means are unaffected by which rounding scheme is used. But what about the variances? (What rounding scheme is used in [17]?) Can you comment on how the choice of rounding scheme might affect the uncertainties as well as your uncertainty bounds, and whether this plays a role in your analysis? Minor comments (no response expected): You might make the definition of review quality which you work with more prominent. The choice of T_i for surrogate quality as well as text weight was confusing (unless these were surrogate text values, though that was not my understanding). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: It would have been great if this work had included more data for analysis, especially from a second large conference. The scope of the recommendations offered seem somewhat limited by conferences which are a part of the analysis. It would be especially interesting to see this analysis performed on multiple years of AAAI data (provided there is a consistent on-policy used) simultaneously. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review—we very much appreciate your feedback and are pleased to hear that you found the paper well-executed, the off-policy evaluation framework technically and theoretically interesting, and the recommendations based on our analyses actionable. > In your discussion of review quality, you mention competing definitions of/proxies for review quality. How does your choice (self-reported expertise and confidence) compare against the other metrics (eg auto-generated and meta-reviewer-evaluated), and would you expect your results to be robust to this choice? We decided to report the results of both the reviewers’ self-reported expertise (Section 5) and confidence (Appendix L) specifically because we wanted to demonstrate that our substantive results are robust to the particular choice of a review quality metric. The fact that the analyses based on both measures lead to the same substantive conclusions in both datasets gives us reason to believe that the findings would be robust to other related outcome measures as well. We also considered using the scores provided by the meta-reviewers; however, for AAAI 85% of the meta-reviewers’ scores were missing, and for the TPDP workshop these scores were not at all collected by the conference organizers. We note that similar to reviewers’ self-reports, the meta-reviewer scores and other alternative measures, e.g., the authors’ assessment of the reviews, come with some subjective biases as well. For instance, the meta-reviewers’ scores may depend on their seniority, and the authors’ reports depend on the overall rating of the paper (see Section 8.2.2 in [4]). Last, we note that our framework and the methods proposed are general and can be applied to any outcome / review quality measure available to conference organizers. [4] Nihar B. Shah. Challenges, experiments, and computational solutions in peer review. Communications of the ACM, 65(6):76–87, 2022. 
>The randomized paper assignment scheme of [17] which you take as the logging policy generates a fractional assignment based on the q constraints and then performs dependent rounding in order to generate an assignment. Since your estimators are linear in these assignments, I presume their means are unaffected by which rounding scheme is used. But what about the variances? (What rounding scheme is used in [17]?) Can you comment on how the choice of rounding scheme might affect the uncertainties as well as your uncertainty bounds, and whether this plays a role in your analysis? That’s an excellent question that we were also concerned about while developing the analysis framework. Your observation that there is a certain “degree of freedom” in how a deterministic assignment is generated given a fractional assignment is correct. The assignment sampling (or rounding) procedure proposed by Jecmen et al. [17] only guarantees that the marginal probabilities are respected and may choose an arbitrary distribution that does so. The covariance structure of this distribution will affect the variance of our estimators. If the off-policy evaluation was premeditated, it would be an excellent idea to devise an assignment sampling procedure that would proactively minimize the variance and maximize the power of the off-policy evaluation estimates. There are two reasons we opted to use the original procedure proposed by Jecmen et al. [17]. First and foremost, it is the procedure that had already been implemented in OpenReview and other peer-review platforms, and was already used by the conference organizers to generate the pre-existing assignment data we analyzed. Thus, efforts to improve this design would not have been useful to us in this work (but it would be an interesting direction for future work). 
Second, in practice, we found that accounting for the positivity violations contributes significantly more to the width of the uncertainty intervals than the variance of the estimates in the overlapping regions (where some form of covariance optimization would come in). This observation motivated us to focus our efforts on mitigating the impact of the positivity violations via our proposed estimation methods based on monotonicity and Lipschitz continuity assumptions. [17] Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, and Fei Fang. Mitigating manipulation in peer review via randomized reviewer assignments. Advances in Neural Information Processing Systems, 2020.
NeurIPS_2023_submissions_huggingface
2023
Policy Optimization for Continuous Reinforcement Learning
Accept (poster)
Summary: Reinforcement learning (RL) is a powerful tool for solving sequential decision making problems but has primarily been formulated for discrete-time Markov decision processes. However, many real-world systems are more naturally expressed in continuous time, and the proper choice of discretization time step may be challenging. Additionally, a controller operating in continuous time may be more suitable for high frequency applications. Prior work has formulated a continuous-time analog to the advantage function, called the q-value. Building on the concept of a q-value, the paper sets out to derive a continuous-time analog to the discounted occupation probability and performance difference lemma, two core concepts in RL. The authors then show how to compute a gradient of this performance difference lemma to give a continuous analog to the policy gradient. Next, the authors form a local approximation of the performance difference which takes an expectation over the current policy rather than the updated one. They provide bounds on the gap between the true performance metric and this local approximation. Then they use this local approximation to formulate a continuous analog to TRPO\PPO. The paper evaluates the continuous-time policy gradient and PPO algorithms on synthetic linear-quadratic stochastic control problems. They also consider a two-dimensional optimal pair trading problem and show their CPPO algorithm performs best. Strengths: - Continuous-time RL is an under-explored area that has application in controlling systems at high frequency and using an adaptive discretization scheme with non-uniform time steps. - This appears to be the first work to propose a continuous-time analog in a stochastic setting to the performance difference lemma and derive a continuous PPO algorithm. - The paper is well organized and clearly written. It does a good job explaining the results and provides enough information to support its claims. 
Weaknesses: - There is no discussion or experimental results which illustrate how the continuous-time formulation is advantageous over discrete-time RL. The proposed algorithms still ultimately require us to discretize. As such, there should be some comparison to existing discrete-time RL methods. The hope would be that formulating the problem in continuous time allows us to achieve better performance under certain choices of step size. Another potential benefit of continuous-time RL is the case of unevenly-spaced observations. Experiments which highlight these benefits would significantly strengthen the paper. - The problems considered are toy examples which do not tell us much about the scalability of the proposed methods. More complex scenarios would make the paper much more convincing. Especially ones in which the continuous-time formulation provides a clear advantage. - The discussion of related work could be better. It is still a little unclear to me how this work is positioned in the continuous-time RL literature. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How do these continuous-time methods compare to their discrete-time counterparts when the time step discretization is uniform? How does the performance gap change with choice of discretization time step? What are the main advantages of this formulation? - How does this approach scale to harder, more realistic problems? - How exactly does the continuous-time policy gradient algorithm in this paper compare to previously proposed approaches? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: There is no discussion of limitations of the continuous-time formulation or their specific algorithms. 
The paper would be stronger if it discussed these. This could include assumptions made in the proofs or the fact that the work assumes the dynamics follow an Ito SDE driven by Brownian motion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments, especially on the experiments. 1. There is no discussion or experimental results which illustrate how the continuous-time formulation is advantageous... A: We have conducted extra experiments to compare the CPG and CPPO to their discrete counterparts. (The results will be included in the revision). Specifically, we discretize the MDP in Example 1, and implement the classical PG and PPO algorithms. Our results show that in time discretization with step size $\delta t=0.1$ and $\delta t= 0.05$, the performance of CPG and CPPO is (at least) comparable to their discrete counterparts; in particular, for $\delta t=0.1$, CPG outperforms PG. We have repeated the experiments for 25 random seeds, and plotted both the average performance line and the error bar. Please refer to the link https://www.dropbox.com/scl/fo/03g1ub7mvis64yucclqzm/h?rlkey=boyq188kpop55hj5ahyiyom16&dl=0. These experimental results indicate that the continuous approach has the potential to outperform its discrete counterparts. While this is quite preliminary, we do plan to do more thorough investigations along this line in the near future. We would like to emphasize that our main objective (and contribution) is to provide a continuous approach (which appears under-developed), with rigorous analyses and provable results, both of which appear to be difficult to do in the discrete setting. On the numerical side, preliminary experimentation (as summarized above) has shown that the continuous approach is at least comparable to (or outperforms) its discrete counterpart. 2. More complex scenarios would make the paper much more convincing... A: As mentioned above (in point 1), our focus here is to develop a continuous approach for policy optimization, supported by rigorous analyses and provable results. 
While numerical studies have shown promising performance of the continuous approach, as highlighted above in point 1, we agree that these are preliminary and limited in scope, and more thorough experimentation is needed. We believe that our proposed algorithms can be used in many scenarios (financial trading, aviation control, etc.) involving large-scale and high-frequency data. It will certainly warrant another independent study (which we do plan to pursue as a next step) to implement our proposed algorithms and evaluate their performance in one such application. 3. The discussion of related work could be better. A: To the best of our knowledge, this is the first paper that studies policy optimization in the continuous and stochastic setting. While this is a notably under-developed area in RL, we have cited (on p.1-2) several related works and discussed how they motivate or relate to our work. 4. How does the performance gap change with the choice of discretization time step... A: We have implemented our proposed algorithms with different (time) step sizes: $\delta t=0.02$, $\delta t=0.05$ and $\delta t=0.1$, and found that the performances are quite similar, suggesting that our CPG algorithm is insensitive to step size; please refer to the link https://www.dropbox.com/scl/fo/mkuqf1nux1ysiwhys9oqn/h?rlkey=8s4ui8i2wknd4za1o11alf1u5&dl=0 for details. 5. Limitation of the Ito-SDE formulation... A: Similar to the classical MDP setting, we need to assume a Markovian and stationary setting in our continuous approach, which is represented by an It\^o-SDE. Other more technical conditions are standard (and very mild), such as requiring the It\^o-SDE to be well-posed, and the continuity and growth conditions on the model parameters. In the revision, we will state these conditions more explicitly.
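The discretized comparison the rebuttal describes (running classical PG/PPO on a time-discretized version of the continuous problem with step size $\delta t$) rests on simulating the Itô SDE on a grid. A minimal Euler-Maruyama rollout for a linear-quadratic instance can be sketched as below; the specific dynamics $dX_t = u_t\,dt + \sigma\,dW_t$, the linear feedback policy, and the quadratic running cost are illustrative assumptions here, not necessarily the paper's Example 1.

```python
import numpy as np

def simulate_lq_rollout(theta, dt=0.1, T=1.0, x0=1.0, sigma=0.2, rng=None):
    """Euler-Maruyama rollout of a scalar controlled SDE
        dX_t = u_t dt + sigma dW_t,   u_t = -theta * X_t,
    accumulating the running quadratic cost (x^2 + u^2) dt over [0, T].
    All dynamics/cost choices are illustrative, not the paper's setup."""
    rng = rng or np.random.default_rng(0)
    x, cost = x0, 0.0
    n_steps = int(round(T / dt))
    for _ in range(n_steps):
        u = -theta * x
        cost += (x ** 2 + u ** 2) * dt          # left-endpoint cost rule
        x += u * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return cost
```

Averaging such rollout costs over many seeds (the rebuttal uses 25) for each step size $\delta t \in \{0.02, 0.05, 0.1\}$ is one way to produce the kind of step-size-sensitivity comparison described above.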
Summary: This paper investigates the continuous reinforcement learning problem. The proposed method is based on the notion of occupation time for policy gradient, which is analogous to the visitation frequency in discrete Markov decision processes. Empirical evaluations are conducted on two example scenarios for two versions of the algorithm (CPG and CPPO). Strengths: - Analyzes an important formulation of continuous time and space in reinforcement learning. - Theoretical analysis is presented to develop a policy gradient counterpart of TRPO/PPO. - The empirical evaluation is presented along with theoretical analysis. Weaknesses: - Lack of comparison with discrete counterpart. - The lack of implementation details (missing hyperparameters, random seeds) makes reproducibility challenging. - The evaluation of the proposed method is limited to only two hand-crafted examples, which hinders a comprehensive understanding of its implications. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In Figures 1 and 2, what does it mean for the distance to the optimum to be some non-zero value? How does this translate to the reward performance? Reward performance is critical to know whether or not the achieved difference in distance is useful. The performance is compared with the proposed two methods, CPG and CPPO (Figure 3), and no baseline is considered. An essential choice would be to use the discrete counterpart (Policy Gradient - PG and PPO). A simple form can be to make the MDP discrete and run PG and PPO. This comparison is even important to understand what kind of problem the continuous RL is useful for. Is the achieved return by CPPO optimal (empirical best) in Figure 3? Implementation details need to be included; what are the PPO-specific hyperparameters (e.g., clipping), for Example 2? For reproducibility, a common practice is using several seed runs to account for variation in the results. 
How many random seeds are used for the experiments in Figures 1 and 2? These missing details make the presented results hard to reproduce. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments, especially on the experiments. 1. Lack of comparison to the discrete counterpart. A: We have conducted extra experiments to compare the CPG and CPPO to their discrete counterparts. (The results will be included in the revision). Specifically, we discretize the MDP in Example 1, and implement the classical PG and PPO algorithms. Our results show that in time discretization with step size $\delta t=0.1$ and $\delta t= 0.05$, the performance of CPG and CPPO is (at least) comparable to their discrete counterparts; in particular, for $\delta t=0.1$, CPG outperforms PG. We have repeated the experiments for 25 random seeds, and plotted both the average performance line and the error bar. Please refer to the link https://www.dropbox.com/scl/fo/03g1ub7mvis64yucclqzm/h?rlkey=boyq188kpop55hj5ahyiyom16&dl=0. These experimental results indicate that the continuous approach has the potential to outperform its discrete counterparts. While this is quite preliminary, we do plan to do more thorough investigations along this line in the near future. We would like to emphasize that our main objective (and contribution) is to provide a continuous approach (which appears under-developed), with rigorous analyses and provable results, both of which appear to be difficult to do in the discrete setting. On the numerical side, preliminary experimentation (as summarized above) has shown that the continuous approach is at least comparable to (or outperforms) its discrete counterpart. 2. The lack of implementation details. A: On p.8, l.279, we have deferred the implementation details to Appendix D. In addition, we plan to post our code on GitHub, so reproducibility will not be an issue. As to the random seeds, we use 25 random seeds to run the algorithms. More details will be added to the revision. 3. The evaluation of the proposed method is limited to... 
A: As mentioned above (in point 1), our focus here is to develop a continuous approach for policy optimization, supported by rigorous analyses and provable results. While numerical studies have shown promising performance of the continuous approach, as highlighted above in point 1, we agree that these are quite preliminary and limited in scope, and more thorough experimentation is needed, which we plan to do in the near future. Indeed, we believe that our proposed algorithms can be used in many scenarios (financial trading, aviation control, etc.) involving large-scale and high-frequency data. It will certainly warrant another independent study to implement our proposed algorithms and evaluate their performance in one such application. 4. Connection of policy distance and reward performance difference A: As we mentioned on p.9, l.282-28, minimizing the KL-divergence (between the iterated policy and the optimal policy) is equivalent to minimizing the distance between the current policy objective and the optimal objective; we referred to Appendix D.1 for further details. In addition, preliminary experiments indicate that our proposed algorithms do converge to a (local) optimum. 5. The performance is compared with the proposed two methods... A: We have now conducted extra experiments and made more comparisons, as summarized in point 1. 6. Implementation details need to be included; what are the PPO-specific hyperparameters (e.g., clipping) for Example 2? A: We did not use the clipping technique. Details regarding the hyperparameters are spelled out in Appendix D (as mentioned on p.9, l. 297). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses and for conducting additional experiments. Based on the updated results and content, I have raised my rating from 4 to 5.
Summary: This paper seeks to answer three research questions: 1) Is there a continuous-time analog of the state occupancy measure? 2) Is there a convenient expression for the performance difference between two policies in the continuous-time setting? 3) Can PPO be adapted to fit the continuous-time setting? In answering these questions, the paper presents a continuous-time occupancy measure, a policy performance difference similar to that of conservative policy iteration and related algorithms, and continuous versions of REINFORCE and PPO. Theoretical results for several properties are given, and experiments demonstrating that the algorithms were able to solve a couple of problems are provided. Strengths: The paper provides a thorough theoretical treatment of defining policy gradients for the continuous-time RL setting. The paper clearly defines and adequately answers its stated objectives. Weaknesses: The biggest area for improvement in this paper is the empirical results. While the main results of this paper are theoretical, new practical algorithms are presented. Thus, they deserve proper evaluation and experimentation to educate the reader on the challenges of using them. For example, there are no experiments illustrating that there were any special difficulties in applying these algorithms to the continuous setting. There should be experiments illustrating how the hyperparameters, particularly those specific to the continuous-time setting, impact the optimization process. Currently, the results only tell us that the algorithms were made to work. Great, but we do not learn anything beyond this trivial result. It is important to develop the reader's understanding of how they can make the algorithms work or at least what can cause failure. Additionally, the adaptive penalty term should also be explored since it deviates from the standard PPO implementation. Does it effectively constrain the distribution throughout learning?
How does the penalty change over time? These are important questions to answer when introducing a new method. Minor quibble: the opening sentence of the introduction is boring. Many papers have used this form, and my eyes glaze over as soon as I read them. Try keeping this motivation brief and tie it immediately to the scope of the work discussed in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the sampling at discretization impact the gradient estimate? - How does the step size need to be adapted to work in the continuous-time setting? Is it sensitive to the scaling of the time horizon or sampling rate? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
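For reference, the adaptive KL-penalty rule used by the standard (discrete-time) PPO paper, which the review contrasts with this submission's variant, can be sketched as below. This is an illustrative sketch with our own names; the paper's continuous-time adaptive penalty may differ in its details:

```python
def adapt_kl_coeff(beta: float, kl: float, kl_target: float,
                   factor: float = 2.0, tol: float = 1.5) -> float:
    """Adaptive KL-penalty update in the style of the original PPO paper.

    beta scales the KL penalty in the surrogate objective; it is increased
    when the observed KL divergence overshoots the target and decreased when
    the policy barely moves, so the constraint tightens or relaxes over training.
    """
    if kl > tol * kl_target:
        beta *= factor      # policy moved too far: penalize harder
    elif kl < kl_target / tol:
        beta /= factor      # policy barely moved: relax the penalty
    return beta
```

Logging `beta` and the measured KL at every iteration directly answers the question of how the penalty evolves over time.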
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions and comments. 1. How does the sampling at discretization impact the gradient estimate? A: We have implemented our proposed algorithms with different (time) step sizes, $\delta t=0.02$, $\delta t=0.05$ and $\delta t=0.1$, and found that the performances are quite similar, suggesting that our CPG algorithm is insensitive to the step size; please refer to the link https://www.dropbox.com/scl/fo/mkuqf1nux1ysiwhys9oqn/h?rlkey=8s4ui8i2wknd4za1o11alf1u5&dl=0 for details. 2. Comparison of our proposed algorithm in the continuous setting with the discrete MDP algorithms. A: We have conducted extra experiments to compare CPG and CPPO to their discrete counterparts (the results will be included in the revision). Specifically, we discretize the MDP in Example 1 and implement the classical PG and PPO algorithms. Our results show that with time-discretization step sizes $\delta t=0.1$ and $\delta t= 0.05$, the performance of CPG and CPPO is (at least) comparable to that of their discrete counterparts; in particular, for $\delta t=0.1$, CPG outperforms PG. We have repeated the experiments for 25 random seeds and plotted both the average performance line and the error bars. Please refer to the link https://www.dropbox.com/scl/fo/03g1ub7mvis64yucclqzm/h?rlkey=boyq188kpop55hj5ahyiyom16&dl=0. These experimental results indicate that the continuous approach has the potential to outperform its discrete counterparts. While this is quite preliminary, we do plan to carry out more thorough investigations along this line in the near future. 3. How does the step size need to be adapted to work in the continuous-time setting? A: As mentioned above, the performance of our algorithms appears to be quite robust to the step size. In the future, we plan to also investigate the sensitivity of the hyperparameters to the environment dynamics (e.g., the time horizon).
--- Rebuttal Comment 1.1: Title: Response to author response Comment: The plots are not useful for comparison since they are all on different graphs. > As mentioned above, the performance of our algorithms appears to be quite robust to step size. What evidence do you have for this? It is not evident to me that one would expect the performance to be robust to changes in the time discretization. --- Reply to Comment 1.1.1: Comment: Many thanks for your further questions and comments. For the plots, we will add new graphs directly evaluating the policy in the revised version, to allow a more precise comparison of algorithm performance. For now, because of time constraints, we have added two new graphs, named "CPG-KL" and "CPG_l2", by concatenating the plots for different time discretizations into one figure, for the KL-divergence and the $\ell_2$ distance respectively. We believe that the KL-divergence plots justify our claim, as the performance of the CPG algorithm appears similar across different time discretizations in our experiments. Please see the link https://www.dropbox.com/scl/fo/mkuqf1nux1ysiwhys9oqn/h?rlkey=8s4ui8i2wknd4za1o11alf1u5&dl=0 for the updated contents. For the step-size issue, we agree that we need further experiments to test the robustness of the algorithm's performance to different choices of step size. We will also investigate how to quantitatively connect the suggested/optimal learning rate (step size) with the model parameters (e.g., sampling rate, time horizon) in future work.
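As context for the step-size discussion in this thread, the role of $\delta t$ is how a continuous-time cumulative cost $\int_0^T r(x_t)\,dt$ is approximated by a left-rule Riemann sum along a sampled trajectory. The following is a minimal sketch on toy Ornstein-Uhlenbeck dynamics of our own choosing (not the paper's environment or reward):

```python
import numpy as np

def left_rule_return(reward, T, dt, rng):
    """Left-rule approximation of J = ∫_0^T r(x_t) dt along one trajectory
    of the toy SDE dx = -x dt + 0.2 dW (illustrative dynamics only)."""
    n = int(round(T / dt))
    x, total = 1.0, 0.0
    for _ in range(n):
        total += reward(x) * dt                       # left endpoint of each cell
        x += -x * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()
    return total

# Averaging over seeds; smaller dt reduces the discretization bias of J_hat.
for dt in (0.1, 0.05, 0.02):
    est = np.mean([left_rule_return(lambda x: -x * x, 5.0, dt,
                                    np.random.default_rng(s))
                   for s in range(200)])
    print(f"dt={dt}: J_hat={est:.3f}")
```

Comparing the averaged estimates across $\delta t$ values on one plot, as the reviewer requests, is what makes a robustness claim checkable.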
null
null
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies
Accept (poster)
Summary: This paper introduces the parallel spiking neuron (PSN) and several variants. The primary benefit of the PSN over existing spiking neurons is its parallel implementation on digital hardware, which brings dozens-fold acceleration on GPUs. The accuracy results also demonstrate the effectiveness of the paper. Strengths: 1. The idea of a parallelized neuron implementation is interesting, and it can be processed efficiently on GPUs. 2. The experiments show good performance. Weaknesses: The authors pinpoint a disadvantage of current SNNs, namely low running speed on GPUs. However, it is known that the GPU is not the ideal device for deploying SNNs; rather, neuromorphic hardware is. The PSN focuses on optimizing SNNs on GPUs; however, GPUs cannot exploit the binary spikes to lower energy consumption. So even if the PSN can accelerate inference on GPUs, there is still no efficiency advantage over ANNs on GPUs. I was wondering whether the optimization on GPUs is really useful, because people can always use ANNs on GPUs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors show an ablation study of the PSN and vanilla SNNs on static image datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Listed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your encouraging comments about the acceleration and accuracy of the PSN family. Our responses to the weakness are as follows.

> I was wondering whether the optimization on GPU is really useful cause people can always use ANNs on GPU.

A typical workflow to use SNNs is:

1. Train SNNs to get high task performance.
2. Prune and quantize the weights of SNNs during or after training.
3. Deploy SNNs to neuromorphic chips.

For the moment, much research focuses on steps 1 and 2. Although the GPU is not the target deployment device for SNNs, its massively parallel computing ability makes it the most widely used device for training SNNs. We believe that the introduction of the PSN decreases the training cost and benefits the SNN community. Meanwhile, although the PSN family is highly efficient on GPUs, it is also compatible with neuromorphic chips. Please refer to our response to the question "How could it map to the hardware?" from reviewer P1JU for more details.

> Can authors show the ablation study of PSN and vanilla SNN on static image datasets?

Thanks for your suggestion. We added ablation experiments on CIFAR10, using the same network as the SNN that classifies CIFAR10 in Table 2. Due to the limited rebuttal period, we only use 128 channels for the SNN and train for 128 epochs. The results are shown in Table R9. Note that $T=4$, and we also train the masked PSN and the sliding PSN with $k=0,1,2,3$.

| Neuron | IF | LIF | PLIF | KLIF | GLIF | PSN |
| --------------- | ----- | ----- | ----- | ----- | ----- | ----- |
| **Accuracy(%)** | 92.24 | 91.86 | 92.24 | 92.47 | 91.90 | 92.95 |

| Neuron\Order | 1 | 2 | 3 | 4 |
| ------------ | ----- | ----- | ----- | ----- |
| Masked PSN | 92.59 | 92.44 | 92.75 | 91.94 |
| Sliding PSN | 90.81 | 91.79 | 92.13 | 92.46 |

**Table R9. Ablation experiments using different spiking neurons on CIFAR10.**

As [1] suggests, spiking neurons without leaks may have better accuracy on static datasets.
Accordingly, we also added the experiment using the IF neuron. Table R9 shows that the accuracy rank on CIFAR10 is `PSN > Masked PSN (k=3) > KLIF > Sliding PSN (k=4) > PLIF = IF > GLIF > LIF`. This rank indicates that the PSN family also performs better than most vanilla spiking neurons on static datasets.
```
[1] Fang, Wei, et al. "Deep residual learning in spiking neural networks." Advances in Neural Information Processing Systems 34 (2021): 21056-21069.
```
Summary: The paper presents an approach to improve the efficiency and accuracy of Spiking Neural Networks by using a dependency method to generate hidden states, resulting in parallelizable neuronal dynamics and a significant increase in simulation speed. Strengths: The authors analyze the impact of removing the reset function from standard charge-fire-reset neuronal dynamics, proving that the parallelization of the spiking neuron can be achieved without it. They also present a general formula by rewriting the neuronal dynamics without a reset and introduce the PSN, a spiking neuron with entirely parallelizable neuronal dynamics. The paper also assesses the PSN family's performance on sequential, static, and neuromorphic data classification tasks, showing that they attain higher accuracy than traditional spiking neurons. Weaknesses: Please see Questions. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Since the proposed models use parallel input of X[t-1], X[t], X[t+1], I am wondering how memory-intensive the models will be for parallel processing, and especially how this changes across the different datasets. Basically, it would be interesting to see a comparison of FLOPs/MACs for the standard vanilla spiking neurons and the proposed parallel spiking neurons. 2. The VGG shows better performance for ImageNet and CIFAR-DVS. The authors show the performance of the VGG model is currently state of the art using vanilla neurons for the ImageNet dataset. It would be great if the authors could report the results of the VGG model with parallel neurons for the ImageNet dataset. I understand the tight time constraint and hope this won't be very difficult, especially because the authors have already shown results for VGG on CIFAR10-DVS. 3. I am also curious how stable these parallel spiking neurons are compared to the vanilla neurons, since the reset is removed. In the presence of input/weight noise, will they still perform as well? 4.
When the authors evaluate the long-term dependencies, it would be interesting to see results on non-image datasets in Supplementary. Mainly because there is a high degree of correlation among the pixels for such image datasets, and I was just wondering how well it would work for other non-image-based datasets. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: It would be great if the authors could include what are the key limitations of this work and what are the trade-offs with the current vanilla spiking neurons. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments. Please refer to "To All Reviewers" for our discussion of the trade-off with vanilla spiking neurons. Our responses to your constructive questions are as follows.

## **Question 1**

Thanks for your valuable question. We have summarized the number of memory readings/writings and the FLOPs of the spiking neurons in Table R5.

| | Memory Readings | Memory Writings | FLOPs |
| ----------- | --------------------------------- | --------------- | ------------------ |
| LIF/PLIF | $5T$ | $3T$ | $9T$ |
| KLIF | $6T$ | $3T$ | $10T$ |
| GLIF | $11T$ | $2T$ | $20T$ |
| PSN | $T^{2} + 2T$ | $T$ | $2T^{2}$ |
| Masked PSN | $\frac{(2T+1-k) \cdot k}{2} + 2T$ | $T$ | $(2T+1-k) \cdot k$ |
| Sliding PSN | $k+1+T$ | $T$ | $(2T+1-k) \cdot k$ |

**Table R5. Counting of the memory readings/writings and FLOPs of different neurons.**

## **Question 2**

Considering that the tuning of hyper-parameters also requires many comparative experiments, we are not able to finish the training of VGG-16 during the rebuttal period.

## **Question 3**

Thanks for your suggestion. We added experiments on input/weight noise for the sequential CIFAR100 classification task. Note that all noise is added during both training and inference.

#### Noised Input

We add Gaussian noise with mean 0 and variance $\sigma^2$ to the inputs $X$. The results are shown in Table R6. At each noise level, the PSN family is still better than the LIF neuron.

| Neuron\\$\sigma$ | 0 (no noise) | 0.1 | 0.2 | 0.3 |
| ---------------- | ------------ | ----- | ----- | ----- |
| LIF | 55.45 | 54.01 | 51.53 | 49.17 |
| PSN | 62.21 | 61.31 | 59.21 | 56.88 |
| Masked PSN | 60.69 | 57.63 | 56.17 | 56.76 |
| Sliding PSN | 62.11 | 59.72 | 56.74 | 55.17 |

**Table R6: The accuracy on sequential CIFAR100 with noised inputs.**
#### Noised Weights

We add Gaussian noise with mean 0 and variance $\sigma^2$ to the weights $W$ and biases $B$ (if any) of all convolutional, batch normalization, and fully connected layers.

| Neuron\\$\sigma$ | 0 (no noise) | 0.001 | 0.005 | 0.0075 | 0.01 |
| ---------------- | ------------ | ----- | ----- | ------ | ----- |
| LIF | 55.45 | 52.26 | 31.79 | 21.48 | 14.98 |
| PSN | 62.21 | 57.56 | 20.56 | 12.49 | 4.67 |
| Masked PSN | 60.69 | 53.58 | 23.16 | 11.79 | 4.58 |
| Sliding PSN | 62.11 | 58.05 | 38.81 | 29.33 | 21.27 |

**Table R7: The accuracy on sequential CIFAR100 with noised weights.**

The results are shown in Table R7. When the noise becomes large, the accuracy of both the LIF neuron and the PSN family drops quickly. The accuracy rank in the large-noise regime is `Sliding PSN > LIF > PSN > Masked PSN`. It is worth noting that both the sliding PSN and the LIF neuron use weights shared across time-steps, while the PSN and the masked PSN use temporal-wise weights. The former may be more robust to noised network weights.

## **Question 4**

Thanks for your advice. We added two kinds of experiments beyond image classification.

### Reinforcement Learning and Control

We evaluate the performance of SNNs with different spiking neurons on reinforcement learning and control tasks. We evaluate four typical tasks from OpenAI Gym with non-image inputs: Ant-v3, HalfCheetah-v3, Hopper-v3, and Walker2d-v3. We use encoding and decoding methods similar to [1]. The results are reported in Table R8. The performance of the PSN and the sliding PSN is much higher than that of the other neurons.
| | Ant-v3 | | HalfCheetah-v3 | | Hopper-v3 | | Walker2d-v3 | | Average Performance Ratio |
| ------------------------------ | ------ | ------- | -------------- | ------- | --------- | ------- | ----------- | ------- | -------------------------- |
| DAN [2] | 5472 | 100.00% | 10471 | 100.00% | 3520 | 100.00% | 4999 | 100.00% | 100.00% |
| PopSAN (current-based LIF [1]) | 4848 | 88.60% | 10523 | 100.50% | 517 | 14.69% | 4199 | 84.00% | 71.94% |
| PopSAN (LIF) | 4991 | 91.21% | 8500 | 81.18% | 2613 | 74.23% | 3751 | 75.04% | 80.41% |
| PopSAN (PSN) | 5210 | 95.21% | 9622 | 91.89% | 3255 | 92.47% | 4248 | 84.98% | 91.14% |
| PopSAN (SlidingPSN) | 5362 | 97.99% | 9849 | 94.06% | 3406 | 96.76% | 4603 | 92.08% | 95.22% |

**Table R8: The performance comparison of spiking neurons on reinforcement learning and control tasks.**

### Speech Recognition

We added experiments on speech recognition. We use the same network structure and data-processing methods as [3]. We achieve 95.56% accuracy, which is higher than the 94.5% of the LIF neuron in [3].

```
[1] Tang, Guangzhi, et al. "Deep reinforcement learning with population-coded spiking neural network for continuous control." Conference on Robot Learning. PMLR, 2021.
[2] Fujimoto, S., H. Hoof, and D. Meger. "Addressing function approximation error in actor-critic methods." International Conference on Machine Learning. PMLR, 2018: 1587-1596.
[3] Pellegrini, Thomas, Romain Zimmer, and Timothée Masquelier. "Low-activity supervised convolutional spiking neural networks applied to speech commands recognition." 2021 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2021.
```

--- Rebuttal Comment 1.1: Title: Reply to the Author's Response Comment: I thank the authors for their detailed response and for clarifying my doubts.
Summary: This paper proposes the Parallel Spiking Neuron (PSN), which generates hidden states that are independent of their predecessors, resulting in parallelizable neuronal dynamics and extremely high simulation speed. The weights on the inputs of the PSN are fully connected, which maximizes the utilization of temporal information. The authors evaluate the PSN family on simulation speed and temporal/static data classification, and the results show the overwhelming advantage of the PSN family in efficiency and accuracy. Strengths: 1. The motivation of parallelizing spiking neurons for high simulation speed is interesting and important for future applications of spiking neural networks. 2. The experiments on large datasets, such as ImageNet, improve the technical soundness of the spiking neural networks. Weaknesses: 1. Since the main goal of the proposed parallel spiking neuron model is to improve the simulation speed, the current experiments do not reflect that advantage over other methods, focusing only on accuracy and firing-rate analysis. 2. The network architecture of the spiking networks in Table 2 is not clear. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Please explain the advantage of the parallel spiking neurons in terms of the "high simulation speed" mentioned in the abstract. 2. What is the whole training and test time of the proposed SNN compared with other methods? 3. Are there any limitations once the parallel spiking neurons are applied to neuromorphic hardware? How could they map to the hardware? 4. What is the network architecture used in Table 2, such as the "VGG"? Please describe the detailed network architecture in Table 2. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: 1. Since the authors claim that the main goal of the proposed parallel spiking neuron model is to improve the simulation speed, the current experiments do not reflect that advantage over other methods, focusing only on accuracy and firing-rate analysis. 2. The network architecture of the spiking networks in Table 2 is not clear. The above limitations have been explained clearly in the following response. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comprehensive comments. Please refer to "To All Reviewers" for responses to **Question 3**. Other point-to-point responses are as follows.

## **Weaknesses 1 and Questions 1, 2**

> Please explain the advantage of the parallel spiking neurons in terms of the "high simulation speed" mentioned in the abstract.

You can refer to Section 4.1 and Figure 3 for experimental results on the training and inference speed.

> What is the whole training and test time of the proposed SNN compared with other methods?

Comparable methods include PLIF, KLIF, and GLIF. These neurons are more complex than the LIF neuron, with additional operations. Thus, we add experiments comparing the speed of the PSN and the LIF neuron: if the PSN is faster than the LIF neuron, it is also faster than the other, more complex neurons. We compare the training and inference speed (ms/batch) of the PSN and the LIF neuron with SEW ResNet-18/34 and $T=4, 8$, which are typical options for training deep SNNs. We use the LIF neuron from SpikingJelly, which provides the fastest implementation. The experiments are performed on an Ubuntu 18.04 server with an Intel Xeon Silver 4210R CPU, an NVIDIA A100-SXM-80GB GPU, and 256 GB of memory.

| | SEW ResNet-18 (T=4) | | SEW ResNet-18 (T=8) | | SEW ResNet-34 (T=4) | | SEW ResNet-34 (T=8) | |
| --------- | ------------------- | --------- | ------------------- | --------- | ------------------- | --------- | ------------------- | --------- |
| | Train | Inference | Train | Inference | Train | Inference | Train | Inference |
| PSN | 45.94 | 10.91 | 47.99 | 19.17 | 60.63 | 16.74 | 71.52 | 30.87 |
| LIF | 60.40 | 10.55 | 98.10 | 18.82 | 94.09 | 17.52 | 149.86 | 30.42 |
| LIF(cupy) | 49.71 | 12.08 | 53.81 | 18.74 | 77.49 | 17.36 | 82.38 | 30.19 |

**Table R4. Comparison of speed (ms/batch) between the SNN using the PSN and the LIF neuron.**

The results are shown in Table R4.
`LIF` is the LIF neuron implemented in PyTorch, and `LIF(cupy)` is the LIF neuron implemented in CuPy. The results show that the SNN using the PSN is much faster to train than the one using the LIF neuron, and the advantage grows as the network scale or $T$ increases. In inference, there is not much difference among the speeds, which may be because the spiking neuron layers are not the speed bottleneck. Compared with `LIF(cupy)`, the speed advantage of the PSN is less pronounced than compared with `LIF`. Note that `LIF(cupy)` fuses all operations across all time-steps into one single CUDA kernel, which minimizes the calling overhead of CUDA kernels, including memory access time and kernel launch time. The PSN is implemented in PyTorch, and its operations include a matrix-matrix multiplication ($WX$), an element-wise subtraction ($H - B$), and a Heaviside function ($\Theta(H - B)$) in the forward pass, or a surrogate function ($\sigma (H - B)$) composed of many math operations in the backward pass, which incurs higher calling overhead than `LIF(cupy)` with its single CUDA kernel. The speed of the PSN can be improved further if its neuronal dynamics, apart from the matrix-matrix multiplication, are fused into a single CUDA kernel. However, considering that the PSN implemented in PyTorch is fast enough, we have not implemented a CuPy backend for the PSN.

## Weaknesses 2 and Question 4

> What is the network architecture used in Table 2, such as the "VGG"? Please describe the detailed network architecture in Table 2.

Sorry for the unclear expression. The detailed network structures in Table 2 are:

- Modified PLIF Net: `{{c256k3s1-BN-PSN}\*2-APk2s2}\*2-Flatten-FC4096-PSN-FC10`
- SEW ResNet: the standard SEW ResNet [1] using the PSN as the spiking neuron layers
- VGG: `{c64k3s1-BN-SPSN}-{c128k3s1-BN-SPSN}-APk2s2-{c256k3s1-BN-SPSN}*2-APk2s2-{{c512k3s1-BN-SPSN}*2-APk2s2}*2-Flatten-DP-FC10`, and `SPSN` is the sliding PSN with $k=2$

```
[1] Fang, Wei, et al.
"Deep residual learning in spiking neural networks." Advances in Neural Information Processing Systems 34 (2021): 21056-21069. ``` --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and the additional experiments. It addresses my concerns sufficiently.
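As an aside for readers, the PSN forward pass described in this rebuttal (one matrix-matrix product $WX$, a subtraction $H - B$, and a Heaviside step, with no loop over time-steps) can be sketched in a few lines. The shapes and names below are our own illustration, not the authors' implementation:

```python
import numpy as np

T, N = 4, 8                          # time-steps, neurons
rng = np.random.default_rng(0)
X = rng.standard_normal((T, N))      # input sequence, one row per time-step
W = rng.standard_normal((T, T))      # learnable time-mixing weights (PSN)
B = np.ones((T, 1))                  # learnable thresholds

H = W @ X                            # hidden states for all time-steps at once
S = (H - B >= 0).astype(np.float32)  # Heaviside step: spike where H >= B

# A masked PSN would instead use np.tril(W) (or a k-banded lower-triangular
# mask) so that H[t] only depends on X[i] with i <= t (no future information).
```

Because every time-step is produced by one matrix product, the whole sequence runs as a single batched GPU operation instead of $T$ sequential kernel launches.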
Summary: This paper removes the reset mechanism from the dynamics of conventional LIF/IF neurons and proposes to reformulate the neuronal dynamics using matrix multiplication instead of the iterative update of the membrane potential. This matrix multiplication can then be computed in parallel to accelerate the training of deep SNNs. The paper makes the strong assumption that hidden states are independent of their predecessors, and proposes the Parallel Spiking Neuron (PSN) and its variants, the masked PSN and the sliding PSN. The whole framework is built on unfolding the computing graph over the latency $T$ and then using a fully connected layer $H = W X$ to replace the neuron dynamics. Strengths: 1. Parallelization: Parallelizing spiking neurons is an interesting topic in SNNs. By removing the reset mechanism, the neuronal dynamics can be reformulated in a non-iterative form. The proposed Parallel Spiking Neuron (PSN) framework allows for parallelized neuronal dynamics, enabling efficient computation across multiple processing units or threads. 2. Utilization of Temporal Information: The PSN uses fully connected weights for the inputs, maximizing the utilization of temporal information and potentially enhancing the model's ability to capture temporal patterns. 3. High Simulation Speed: The PSN framework, with its independent hidden states and parallelizable dynamics, achieves extremely high simulation speed, which is advantageous for real-time applications and large-scale simulations. Weaknesses: 1. Lack of Reset Mechanism: The removal of the reset mechanism in the PSN may limit its ability to handle certain types of dynamics or tasks that rely on precise timing and reset behavior. The authors did not provide a reasonable explanation of why neuronal resetting can be ignored, or of what should be done to compensate for its removal. 2.
Large number of trainable parameters introduced by the new weights: The use of matrix multiplication in the PSN introduces additional trainable parameters in the new weights; the performance improvement could therefore come from using more trainable parameters rather than from the new PSN model itself. With the same number of parameters for both the PSN and LIF, will the new PSN still perform well compared to LIF? 3. Using future information: when calculating $H[t] = \sum_{i=1}^T W_{t,i} X[i]$, the PSN uses future information. The masked PSN and the sliding PSN try to avoid using future information, so they should be preferred when future information must be avoided; however, in the experiments it is not clear which PSN version is used. 4. This paper makes the strong assumption that hidden states are independent of their predecessors, and the Parallel Spiking Neuron (PSN) is proposed based on this. How can the neuron dynamics along the time dimension be described if we remove this hidden-state dependency? Technical Quality: 3 good Clarity: 3 good Questions for Authors: As above in Weaknesses. The whole PSN framework is built on the condition $u(t) < V_{th}$; how can spikes be generated if this is the precondition for the PSN? The lack of dependency between successive time-steps is a concern for me. The whole framework is built on unfolding the computing graph over the latency $T$ and then using a fully connected layer $H = W X$ to replace the neuron dynamics. This does not make sense to me. Can you give a more detailed explanation of this?
The whole PSN framework is built on the condition $u(t) < V_{th}$; it is not clear how spikes can be generated if this is the precondition for the PSN. 2. Using future information: when calculating $H[t] = \sum_{i=1}^T W_{t,i} X[i]$, the PSN uses future information. The masked PSN and the sliding PSN try to avoid using future information, so they should be preferred when future information must be avoided; however, in the experiments it is not clear which PSN version is used. 3. This paper makes the strong assumption that hidden states are independent of their predecessors, and the Parallel Spiking Neuron (PSN) is proposed based on this. How can the neuron dynamics along the time dimension be described if we remove this hidden-state dependency? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments and questions, which are also helpful for other reviewers. Please refer to "To All Reviewers" for responses to **Weaknesses 1: Lack of Reset Mechanism**. Responses to other comments are as follows.

## **Weaknesses 2**

Although the PSN family uses extra trainable parameters, the increase in memory cost is negligible. To explain this point clearly, we list the numbers of parameters of different layers in Table R1. Note that the parameter number of the masked PSN is $T^{2} + T$ during training, and parameters are masked progressively. After training, the parameter number reduces to $\frac{(2T+1-k) \cdot k}{2} + T$ for inference.

| Layer | Description | Params |
| --------------------- | ------------------------------------------------------------ | ---------------------------------------- |
| convolutional layer | $C_{in}$ input channels, $C_{out}$ output channels, $k_{h} \cdot k_{w}$ weight shape | $C_{in} \cdot C_{out} \cdot k_{h} k_{w}$ |
| fully connected layer | $F_{in}$ input features, $F_{out}$ output features | $F_{in} \cdot F_{out}$ |
| PSN | $T$ time-steps | $T^{2} + T$ |
| Masked PSN | $T$ time-steps, $k$ orders | $T^{2} + T$ |
| Sliding PSN | $T$ time-steps, $k$ orders | $k+1$ |

**Table R1. The number of parameters of different layers.**

In lines 207-209, we have shown that using the PSN in deep SNNs causes a negligible increase in parameters. Now let us take the SNNs in Table 1 as a new example. Denoting the number of parameters of the SNN using the LIF neuron, $P(LIF)$, as the baseline, we report the ratios of parameter numbers $\frac{P(neu)}{P(LIF)}$ for the SNNs in Table 1. The results are shown in Table R2. Although $T=32$ is large in these SNNs, the results in Table R2 show that using the PSN family adds no more than 5% extra parameters. 
| Dataset\Neuron | PSN | Masked PSN | SPSN | GLIF | KLIF | PLIF | LIF | LIF wo reset |
| ----------------------- | -------- | ---------- | -------- | -------- | -------- | -------- | ---- | ------------ |
| **Sequential CIFAR10** | 1.014398 | 1.014398 | 1.00045 | 1.077785 | 1.000014 | 1.000014 | 1 | 1 |
| **Sequential CIFAR100** | 1.013777 | 1.013777 | 1.000431 | 1.074431 | 1.000013 | 1.000013 | 1 | 1 |

**Table R2. The ratios of parameter numbers $\frac{P(neu)}{P(LIF)}$ of the SNNs in Table 1.**

> With the same number of parameters for both PSN and LIF, will the performance still perform good using the new PSN compared to LIF?

We can easily build SNNs with fewer parameters. Take the SNN using the LIF neuron for classifying sequential CIFAR100 in Section 4.2 as the baseline. We reduce the number of output/input channels of the first/second convolutional layer by 1, causing a slight reduction in parameters. We train these SNNs with the same hyper-parameters and training options as in Section 4.2, and the results are shown in Table R3. The results indicate that the PSN family still achieves higher accuracy than the SNN using the LIF neuron, even with fewer parameters.

| | LIF (baseline) | PSN | Masked PSN | Sliding PSN |
| --------------- | -------------- | ------ | ---------- | ----------- |
| **Params** | 536548 | 520415 | 520415 | 513254 |
| **Accuracy(%)** | 55.45 | 62.74 | 60.23 | 61.75 |

**Table R3. Comparison on sequential CIFAR100 between LIF and the PSN family with fewer parameters.**

## **Weaknesses 3**

> but in the experiments, it's not clear which PSN version is used.

You can refer to Table 2 and Section 4.3 for more details. In short, we use the PSN for CIFAR10 and ImageNet, and the sliding PSN with order $k=2$ for CIFAR10-DVS.

## **Weaknesses 4**

> How to describe the neuron dynamics along the time-dimension if we remove this hidden state dependency?

You can refer to Section 3.1 for more details. 
Here we summarize that the iterative neuronal dynamics with the neuronal reset removed can be reformulated as non-iterative equations, as Eqs. (5) and (7) show.

## Questions

> The whole PSN framework is built on the condition $u(t)<V_{th}$, not clear how to get spikes if this is the precondition for PSN.

We regret the unclear expression, which may have misled you. In line 151, we assume $H[t] < V_{th}$ as a method to ignore neuronal resetting. In lines 152-158, we argue that this method is meaningless, and that a better method to ignore neuronal resetting is to remove resetting from the neuronal dynamics directly. In the PSN family, $H[t]$ is not restricted to be lower than the threshold $V_{th}$.

> The lack of dependency between successive time-steps is a concern for me.

Denote the initial value of $H[t]$ as $H[-1]$. In the vanilla spiking neuron, the neuronal dynamics is a typical Markov chain with a transfer function $g$ determined by the neuronal dynamics, and

$$
H[t] = g(H[t-1], X[t]) = g(g(H[t-2], X[t-1]), X[t]) = \cdots = g(g(\cdots g(g(H[-1], X[0]), X[1]) \cdots, X[t-1]), X[t]).
$$

Thus, $H[t]$ is actually determined by $X[0], X[1], ..., X[t]$. In the PSN, masked PSN and sliding PSN, $H[t]$ is determined by $X[0], X[1], ..., X[T-1]$; $X[0], X[1], ..., X[t]$; and $X[t-k+1], X[t-k+2], ..., X[t]$, respectively. In conclusion, $H[t]$ in both vanilla spiking neurons and the PSN family is determined by inputs at specific time-steps. The main difference is that the dependency in vanilla spiking neurons is indirect, implemented by a Markov chain, while in the PSN family the dependency is direct, implemented as a weighted sum.

---

Rebuttal Comment 1.1: Title: Response to authors Comment: I would like to thank the authors for providing thorough responses and for their efforts to enhance the paper based on my feedback. The clear and accurate explanations you provided exceeded my expectations, and I appreciate your dedication in addressing my questions.
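The point made in the rebuttal above (that with reset removed, the indirect Markov-chain dependency collapses into a direct weighted sum over inputs) can be checked numerically. Below is a minimal NumPy sketch for a leaky integrator; the leak factor, horizon, and random inputs are illustrative choices, not values from the paper:

```python
import numpy as np

T = 8
beta = 0.9  # leak factor (illustrative, not from the paper)
rng = np.random.default_rng(0)
X = rng.standard_normal(T)

# Iterative (Markov-chain) form: H[t] = g(H[t-1], X[t]) = beta * H[t-1] + X[t]
H_iter = np.zeros(T)
h = 0.0
for t in range(T):
    h = beta * h + X[t]
    H_iter[t] = h

# Direct (PSN-style) form: H = W @ X with a lower-triangular weight matrix,
# so every H[t] is a weighted sum of X[0..t] with no recurrence at all.
W = np.zeros((T, T))
for t in range(T):
    for i in range(t + 1):
        W[t, i] = beta ** (t - i)
H_direct = W @ X

assert np.allclose(H_iter, H_direct)
```

Here the lower-triangular matrix is fixed to the values that reproduce the leaky recurrence exactly; in the PSN these weights are learnable instead.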
Rebuttal 1: Rebuttal: Thanks for all reviewers' valuable comments. We are encouraged that reviewers find the idea of parallelizing spiking neurons interesting and commend the fast simulation speed of the PSN family. Meanwhile, most reviewers are concerned about the hardware implementation of the PSN family and the trade-off with vanilla spiking neurons. Our responses to these questions are as follows.

## Hardware Compatibility

To the best of our knowledge, the behavior of most event-driven neuromorphic chips is closer to step-by-step forward propagation (send input $X[t]$ to the SNN and get output $Y[t]$), in which each event is processed and routed between cores. Thus, the masked PSN and the sliding PSN can be implemented in these chips because they support step-by-step forward propagation. The PSN requires input at all time-steps and cannot work in these chips. However, some heterogeneous chips [1, 2] support layer-by-layer forward propagation (send inputs $X[0], X[1], ..., X[T-1]$ to the SNN and get outputs $Y[0], Y[1], ..., Y[T-1]$), and the PSN has the potential to work in this type of neuromorphic chip. When deploying the PSN family on hardware, memory consumption should also be considered. Please refer to Figure 2 in the main text for details about the memory consumption of parameters. Additionally, the masked PSN and the sliding PSN require a buffer of length $k$ to store $X[t-k+1], X[t-k+2], ..., X[t]$ if the hardware uses step-by-step forward propagation.

```
[1] Kim, Sangyeob, et al. "C-DNN: A 24.5-85.8 TOPS/W complementary-deep-neural-network processor with heterogeneous CNN/SNN core architecture and forward-gradient-based sparsity generation." 2023 IEEE International Solid-State Circuits Conference (ISSCC). IEEE, 2023.
[2] Chang, Muya, et al. "A 73.53 TOPS/W 14.74 TOPS heterogeneous RRAM In-memory and SRAM near-memory SoC for hybrid frame and event-based target tracking." 2023 IEEE International Solid-State Circuits Conference (ISSCC). 
IEEE, 2023.
```

## Trade-off

#### **Disadvantages**

The PSN family lacks the neuronal reset. The neuronal reset is important in neuronal dynamics; as Table 1 shows, the LIF neuron without reset has lower performance on temporal tasks. We summarize the effects of neuronal reset as follows: 1. Avoiding too-high firing rates 2. Clearing (hard reset) or reducing (soft reset) the influence of previous inputs after firing a spike 3. Introducing a nonlinearity during the generation of hidden states

For effect 1, we have shown in Figure 4 and lines 265-269 that the firing rate of the PSN family is higher than that of vanilla spiking neurons, but the increase is minor, and the firing rate is still far from 1.0, so it does not damage accuracy. Meanwhile, the learnable weights and thresholds of the PSN family can also adjust the firing rates directly. Effect 2 works similarly to the gate mechanism in LSTMs. For the moment, the PSN family does not involve such a dynamic mechanism: the PSN simply uses all inputs without filtering, while the masked PSN and sliding PSN use the latest $k$ inputs. Effect 3 is caused by involving $S[t]$ in the neuronal reset, which is generated by the nonlinear Heaviside function, whereas the generation of $H[t]$ in the PSN family is fully linear. Although the PSN family lacks effects 2 and 3, it still works better on some temporal/static data classification tasks, as we have shown in this paper. The PSN family is a prototype of parallelizable spiking neurons. Based on it, effects 2 and 3 can be implemented by introducing a nonlinear gate mechanism. Here let us provide the formulation of a gated PSN. 
For example, we can add an input gate $\mathbf{I}$ and a forget gate $\mathbf{G}$ as

$$
\mathbf{I} = \sigma(\mathbf{W_{I}}\mathbf{X} + \mathbf{B_{I}}), ~~~~~~~~~~~~~~~\mathbf{W_{I}} \in \mathbb{R}^{T \times T}, \mathbf{X} \in \mathbb{R}^{T \times N}, \mathbf{B_{I}} \in \mathbb{R}^{T}
$$

$$
\mathbf{G} = \sigma(\mathbf{W_{G}}\mathbf{X} + \mathbf{B_{G}}), ~~~~~~~~~~~~~~~\mathbf{W_{G}} \in \mathbb{R}^{T \times T}, \mathbf{X} \in \mathbb{R}^{T \times N}, \mathbf{B_{G}} \in \mathbb{R}^{T}
$$

where $\sigma$ is the sigmoid function or the Heaviside function with surrogate gradients. Then the hidden states are generated by

$$
H[t] = G[t] \cdot H[t-1] + (1 - G[t])\cdot I[t] \cdot X[t].
$$

Although $H[t]$ is calculated by an iterative equation, it can still be parallelized by the Parallel Prefix Sum (Scan) algorithm. Defining

$$
p[i] = \prod_{j=0}^{i}G[j],
$$

$$
c[i][j] = \prod_{l=i+1}^{j}G[l] = \begin{cases} \frac{p[j]}{p[i]}, & j \geq i \\\\ 0, & \mathrm{otherwise} \end{cases},
$$

we obtain

$$
H[t] = \sum_{i=0}^{t}(1 - G[i])\cdot I[i] \cdot X[i] \cdot c[i][t].
$$

With CUDA devices solving $H[t]$ at all $t$, the time complexity is $\mathcal{O}(\mathrm{log}(T))$.

#### **Advantages**

The advantages of the PSN family have been discussed in the paper in detail. We summarize them as: 1. Highly parallelizable neuronal dynamics and fast simulation speed 2. Task performance higher than or equal to that of vanilla spiking neurons 3. Ease of learning long-term dependencies, because the connection from any $X[i]$ to $H[j]$ is direct

## Network Structure Symbols

To answer some questions, we need to show the detailed network structure. 
We use the following symbols to represent the network structure in our responses:

- `c2k3s4`: the convolutional layer with output channels `2`, kernel size `3`, and stride `4`
- `BN`: the batch normalization layer
- `APk2s2`: the average pooling layer with kernel size `2` and stride `2`
- `Flatten`: the flatten layer
- `DP`: the dropout layer
- `FC10`: the fully connected layer with `10` output features
- `{}*2`: `2` repeated structures, e.g., `{FC10}*2` is `{FC10-FC10}`
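The gated-PSN parallelization described in the rebuttal above can be sanity-checked with a small NumPy script. This is an illustrative O(T^2) construction rather than the log-time CUDA scan mentioned above, and it assumes all gate values are strictly positive so that dividing cumulative products is well defined; the gate and input values are random placeholders:

```python
import numpy as np

T = 6
rng = np.random.default_rng(1)
X = rng.standard_normal(T)
G = rng.uniform(0.1, 0.9, T)  # forget-gate outputs, in (0, 1) like a sigmoid
I = rng.uniform(0.0, 1.0, T)  # input-gate outputs

# Iterative reference: H[t] = G[t]*H[t-1] + (1 - G[t])*I[t]*X[t], with H[-1] = 0
H_iter = np.zeros(T)
h = 0.0
for t in range(T):
    h = G[t] * h + (1 - G[t]) * I[t] * X[t]
    H_iter[t] = h

# Parallel closed form: p[i] = prod_{j<=i} G[j], c[i][t] = p[t]/p[i] for t >= i,
# H[t] = sum_{i<=t} (1 - G[i]) * I[i] * X[i] * c[i][t]
p = np.cumprod(G)
u = (1 - G) * I * X                   # per-time-step contributions
C = np.triu(p[None, :] / p[:, None])  # C[i, t] = p[t]/p[i] if t >= i, else 0
H_par = u @ C

assert np.allclose(H_iter, H_par)
```

On GPU, the cumulative product and the weighted sum would each be computed with a parallel prefix scan, which is where the $\mathcal{O}(\log T)$ depth comes from.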
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Why think step by step? Reasoning emerges from the locality of experience
Accept (oral)
Summary: This paper aims to investigate, in a controlled toy setup, why zero-shot chain-of-thought reasoning (e.g. prompting a model with "let's think step by step" and letting the model output intermediate reasoning traces before generating the final answer) improves downstream performance of language models on reasoning tasks. The authors hypothesise that reasoning is useful (or even necessary) when there is local structure in the training data. Local structure in the training data here means that local clusters of variables that directly influence each other in a DAG are observed together. The high-level mapping of this onto real-world experience is that we perceive things that are physically and temporally close, but nonetheless can reason over things that are physically and temporally far away. The setup in which the authors investigate this is a Bayes net. Imagine a directed acyclic graph (DAG) of variables, where the task is to predict the probability that one variable (the target) takes a value given that another variable (the input; physically separated from the target variable in the graph) has a certain value. The training setup is such that the learner does not see the target and the input variable together during training, but sees local clusters of variables from the graph. Importantly, these clusters overlap, so for each cluster there is at least one variable that is also in another cluster. The authors then show that a language model that is allowed to freely generate intermediate variables can better predict the target value probability than a model that directly predicts it, but only if the training data is structured in the local way described above. Intuitively, the learner achieves this by generating one of the overlapping variables and then moving between local clusters it has seen during training until it encounters the target variable. 
The authors also show that the learner's performance does not deteriorate with the length of the chains of intermediate variables generated, and that a learner that reasons freely is more data-efficient. The authors take these results to conclude that we can expect zero-shot CoT reasoning to help when a learner has to make inferences over things (concepts, topics, you name it) that did not co-occur directly in training, but can be connected through things that did co-occur. Strengths: This paper is written in an exceptionally clear way; it's a pleasure to read. Additionally, it's one of those rare papers that is able to convincingly connect a very theoretical, toy, controlled result to a real-world phenomenon that we, as of yet, understand poorly. I feel like I understand zero-shot CoT better after reading this paper, and am very excited about follow-up work in this direction. Other strengths:

- the authors have a simple theoretical result that helps shape intuitions before diving into the experimental section
- the authors convincingly apply control conditions to isolate the effect of the locality of the training experience, showing the same thing does not show up when there is no locality or the locality is "wrong" (clusters are from another DAG, meaning they are not actually local)
- the results are very clear; there is a reasoning gap for direct prediction versus reasoning when there is locality in the variables encountered during training
- the authors additionally show benefits of this type of training data structure: data-efficiency
- the authors' conclusion is convincing: reasoning can help when a learner needs to connect concepts that have not directly been seen during training, but those concepts are connected through other concepts that have been seen together

Weaknesses: My main point of weakness with this paper is that the hypothesised connection to actual zero-shot CoT reasoning in SotA LMs, and with it, actual reasoning done by humans, is not described 
explicitly enough. The authors cite Chan et al. (2022) as similar to their work ("Data distributional properties drive emergent .."), but I found the connection between that toy setup and real language slightly more convincing because they use a Zipfian distribution (which the vocabulary in language also follows, to trade off speaker and listener effort due to ambiguity). So my question here is: how exactly does your setup relate to real-world language; in what kind of data might we find these local structures, where variables are connected through intermediate variables but do not directly co-occur; what do the variables refer to in language? Can you give an example? And if the example is topics, can you work that example out a bit more clearly? Don't get me wrong here, I find the connection convincing, but I think the paper can benefit from some explicit reasoning about the connection of the setup to real-world language. In a sense, the setup in the paper works because the model can just generate variables randomly until it encounters the target variable, whereas in real-world reasoning arguably the intermediate reasoning steps are connected through some high-level or abstract similarity. Relatedly, it seems that a very important factor for reasoning as presented in the paper to work is the overlapping clusters; what happens if not every cluster overlaps, or if overlapping variables are dropped out more often? Do you think people can make "reasoning jumps" through language between clusters of variables that do not overlap? To put it differently: perhaps when humans (and language models) do not observe some variable as part of both cluster 1 and cluster 2, they can still "jump" between those clusters due to some abstract similarity between the two (e.g. a latent variable connection instead of an actual overlapping cluster). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Most important questions are listed in weaknesses. 
Other questions that are less important here:

- Should line 138 say it only assigns non-zero probability to **adjacent** pairs and not **non-adjacent** pairs as stated now?
- I think the first control condition can benefit from some more explanation; are the values still from the right Bayes net and only the local structure is wrong? I.e. is it for example a Bayes net like in Figure 1A where the drawn cluster boundaries are not actually local clusters, but the arrows and conditional probabilities are the same?
- Random remark: Negative scaffolding intuitively seems to map onto an idea from the LLM literature where people tried to test the hypothesis that CoT reasoning only works because it allows the model to output extra tokens. This was tested with a control condition where the model generates random tokens before being asked for the answer, which works worse than CoT, meaning that the actual information the model outputs in its reasoning traces is important for performance.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed, but I think the authors should be more explicit about how well this setup maps onto real language models and reasoning, focusing also on the requirement of overlapping clusters, and where the setup simplified things compared to real-world situations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and supportive review. We are glad that the reviewer found our paper to be a pleasure to read and that they feel they understand zero-shot chain-of-thought better having read it. We have responded to the main weakness the reviewer identified, about the connection to actual CoT reasoning, in the main author rebuttal, but we also provide more specific responses below, along with responses to other points the reviewer brought up. > So my question here is; how exactly does your setup relate to real-world language; what kind of data might we find these local structures where variables are connected through intermediate variables but not directly co-occur; what do the variables refer to in language? Can you give an example? And if the example is topics, can you work that example out a bit more clearly? Don't get me wrong here, I find the connection convincing, but I think the paper can benefit from some explicit reasoning about the connection of the setup to real-world language. We will revise the paper to include more discussion of the connection between real-world reasoning and our setting. We will consider a detailed example to make the effect of topic structure clearer. For example, Wikipedia articles tend to mention the capital cities of countries and the climate of cities, but few articles directly talk about the climate of a country’s capital. If we were to ask a model trained on Wikipedia “What is the climate of France’s capital?” it would likely fail to answer directly. However, “France,” “capital,” and “Paris” co-occur frequently in the training set and “Paris” and “Oceanic climate” co-occur. By working through the intermediate reasoning step “Paris is the capital of France”, the language model should be able to answer the question correctly. It might produce a chain of thought like “The capital of France is Paris. The climate of Paris is oceanic. 
So the climate of France’s capital is oceanic.” > In a sense, the setup in the paper works because the model can just generate variables randomly until it encounters the target variable, whereas in real-world reasoning arguably the intermediate reasoning steps are connected through some high-level or abstract similarity. We agree with this point: locality is probably not the only factor making reasoning useful in natural language. Seeing examples of reasoning traces in the training corpus might lead the model to learn reasoning strategies that make use of high-level similarities between topics. Even more importantly, few-shot in-context-learning likely helps models generate relevant variables for CoT. (Here we studied only the 0-shot case.) This is an interesting direction for future research. > Relatedly, it seems like a very important factor for reasoning like presented in the paper here to work is the overlapping clusters; what happens if not every cluster overlaps, or if overlapping variables are dropped out more often? Do you think people can make "reasoning jumps" through language between clusters of variables that do not overlap? To put it differently; perhaps when humans (and language models) do not observe some variable as both a part of cluster 1 and cluster 2, they can still "jump" between those clusters due to some abstract similarity between the two (e.g. a latent variable connection instead of an actual overlapping cluster). Yes, one major difference between natural language and our setting is that natural language allows us to make abstract statements which can connect concrete facts in novel and unexpected ways. For instance, statistical information about predicates may influence joint probabilities of a variety of propositions that include them. 
While the focus of this work was to find a minimal case in which outputting extra information between a “question” and “answer” helps to produce better answers, the question of how abstract knowledge might enable better reasoning is an interesting direction for future work. We will update the discussion to mention this direction. > I think the first control condition can benefit from some more explanation; are the values still from the right Bayes net and only the local structure is wrong? Yes, that is correct. The values of the variables are still taken from the right Bayes net, but the local neighborhoods are based on a different Bayes net with a different structure. We will make this clearer in the paper. > Random remark: Negative scaffolding intuitively seems to map onto this idea from LLM literature where people tried to test the hypothesis that CoT reasoning only works because it allows the model to output extra tokens. Tested with a control condition where they generate random tokens and then ask for the answer and then show that this works worse than CoT, meaning that the actual information the model outputs in its reasoning traces is important for performance. Great point! Simply outputting extra tokens is not helpful in our setting. Step-by-step reasoning only works when the steps are relevant to the prediction task at hand. We will make this connection in the revision. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal, my points are adequately addressed. I think this paper is going to be a valuable insight to the community, and I will argue for its acceptance, but I urge the authors to be more explicit in a final version about the conditions under which this type of intermediate variable generation will be useful (e.g. overlapping clusters) and the distinctions between this setup and natural language. I'll increase my score to an 8.
Summary: The starting point of this paper is the observation that large language models benefit from chain-of-thought reasoning. Namely, when prompted with a reasoning task, LLMs benefit from generating intermediate steps before reaching the final answer. The paper investigates this phenomenon. The authors hypothesize that this is a result of local structures in the training data, where variables that often appear together have a strong influence on each other; thus a model can generate chains of local connections and thereby obtain relations between remote variables that do not often appear together in the training data. To explore this hypothesis, the authors train an LLM on samples from randomly generated Bayes nets and task the model with inferring the conditional probability of one variable in the Bayes net given another. They show that when the model is trained on samples that only include subsets of local variables, generating a chain of conditional probabilities involving adjacent variables predicts the conditional probability relating two remote variables with much higher accuracy than a model that is tasked with predicting the remote relation directly. They refer to this phenomenon as the “reasoning gap”. In contrast, when the model is trained either on the entire set of variables, or on a subset of variables from an irrelevant locality, the reasoning gap vanishes. They also prove a theoretical guarantee in the special case where the Bayes net is a simple chain of variables. Strengths: The topic of chain-of-thought reasoning in LLMs has garnered a lot of attention in the community recently. It is important from both theoretical and practical standpoints to investigate the conditions under which chain-of-thought reasoning is useful. The authors suggest an interesting set of experiments to do that, as well as some theoretical work. At least for the model that was used in the experiments, the results and the conclusions are convincing. 
The paper is well written. Weaknesses: The authors ran extensive experiments, but the evaluation uses only one LLM architecture. To draw conclusions about chain-of-thought reasoning in LLMs in general, I would expect a larger set of model architectures to be part of the experiments. For example, it would be interesting to understand how the performance is influenced by the model’s size. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 139 - shouldn't it be "adjacent" instead of "non-adjacent"? Theorem 3.1 - what is q^hat? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for raising the important issue of architecture choice. We also appreciate that the reviewer finds the topic of chain-of-thought reasoning important and finds our results convincing. We have responded to the reviewer’s point about the choice of architecture, including new results with different architectures, in the main author rebuttal. We respond to other questions below. > Line 139 - shouldn't it be "adjacent" instead of "non-adjacent"? Yes, this was a typo. Thank you for catching it. > Theorem 3.1 - what is q^hat? $\hat{q}$ refers to estimators that are constructed from the raw predictions of the risk minimizer $q$. $\hat{q}$ alone is not defined, but $\hat{q}_D$ and $\hat{q}_S$ are estimators we define. See Section 2.2 for definitions of these different estimators. --- Rebuttal Comment 1.1: Title: My main concern has been addressed Comment: I thank the authors for the thoughtful comments and for addressing my main concern. Now that it has been lifted, I'll be glad to see this paper presented at the conference, so I have raised my rating.
Summary: This paper provides a theoretical analysis of situations in which chain-of-thought reasoning should be helpful. The authors do this by considering the task of predicting variable values in a Bayes net. Specifically, it is a Bayes net where child nodes are a nearly-deterministic function of their parents. During training, the network sees sets of nodes and is asked to predict the value of a target node. At evaluation time, the network is asked to predict a held-out target node based on another held-out node. They compare three approaches: (a) direct prediction, where the target node is predicted immediately; (b) scaffolded prediction, where the model is prompted to predict all the nodes between the source node and target node; and (c) free prediction, where the model chooses itself which intermediate nodes to generate. They find that in Bayes nets where relevant local variables are observed near each other, scaffolded prediction and free generation outperform direct prediction. If the observations shown are not relevant to the task, all approaches do poorly. If observations are not locally structured, scaffolded generation does well, but we don't see a benefit from free generation. They show theoretically that in this setting "chain-of-thought" (i.e. scaffolded generation) produces better predictions, and they back this up with empirical tests. Strengths: Originality/significance: I am not familiar with theory in this area, but this appears to be a new and exciting result. It could be useful for the field by providing people with intuition about what types of datasets are promising candidates to use with chain of thought. Quality/clarity: They present theory and complementary empirical analysis. They consider a few different forms of reasoning and a few different types of observation frequency structures to confirm that their hypotheses hold across these conditions. 
Weaknesses: * It took a long time to understand the intuition behind how the theoretical and empirical results relate to the claims about reasoning in sequence models. Possibly this could be made clearer? * I wish there was an analysis of how accurate the intermediate reasoning steps were in the different conditions (scaffolded vs free generation). * The experiments look at conditions where each variable's value is nearly deterministic based on its parents. I wish there was either an analysis of what happens when this is not true or a discussion of whether the types of natural language tasks where chain of thought is most helpful share this nearly-deterministic property. * The experiments average over 10 Monte Carlo rollouts for the scaffolded and free generation cases. I wish there was an analysis of how chain of thought compares to direct prediction in the case of a single rollout, which is often the case being considered in language models. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Section 3, line 189, should "$p_{obs}$ only assigns non-zero probability to non-adjacent variable pairs" say "adjacent" not "non-adjacent"? I'm not sure I fully understood the significance and takeaways of the analysis and experiments. I've summarized my understanding below, but please correct me if I'm misunderstanding: In the theoretical analysis, my intuitive understanding of the result is that if your training data consists of only adjacent pairs, then when you try to evaluate on non-adjacent pairs a perfect estimator will be very wrong (since those samples are never seen, so it will default to a uniform prior). On the other hand, if you predict probabilities of non-adjacent pairs by chaining a set of probabilities for adjacent pairs, then each term in the chain is in-distribution and you’ll get a better estimate. Is that the correct intuition? 
If so, (a) is there any way to make the intuition behind the result clearer, and (b) could you maybe make a higher analogy with language (e.g. explaining what a “chain” of variables would look like in language, what it means for training data to consist mostly of adjacent pairs, etc.) It seems like reasoning working well consists of two things: (A) the model must be able to produce a reasoning trace which is relevant to the task at hand, and (B) the model must be able to use the reasoning to produce a better final answer. * It seems like the “scaffolded” condition checks if (B) is true, and the “free generation” condition checks if both (A) and (B) are true. The theoretical results suggest that (B) is true but don’t touch on (A). Is there any theoretical reason to think (A) should be true primarily for locally-structured data? Or is the intuition just that locally-structured observations will bias the model towards generating intermediate variables that are easy to predict accurately (because the model is conditioning on local context) and that are relevant to the task at hand? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We appreciate that the reviewer found the paper’s result new and exciting, and that they thought it provides useful intuition about when chain-of-thought is useful. We respond to the weakness about variable values being nearly deterministic given parents in the general author response, as well as the comment about the experiments averaging over 10 Monte Carlo rollouts. We also respond to the remaining questions below. > In the theoretical analysis, my intuitive understanding of the result is that if your training data consists of only adjacent pairs, then when you try to evaluate on non-adjacent pairs a perfect estimator will be very wrong (since those samples are never seen, so it will default to a uniform prior). On the other hand, if you predict probabilities of non-adjacent pairs by chaining a set of probabilities for adjacent pairs, then each term in the chain is in-distribution and you’ll get a better estimate. Is that the correct intuition? If so, (a) is there any way to make the intuition behind the result clearer, and (b) could you maybe make a higher analogy with language (e.g. explaining what a “chain” of variables would look like in language, what it means for training data to consist mostly of adjacent pairs, etc.) Yes, that is exactly the right intuition. The perfect estimator, with respect to the risk defined in the paper, will default to the uniform prior for non-adjacent pairs. When we chain the risk minimizer’s estimates of adjacent pairs, we indeed make use of “in-distribution” pairs and therefore can reduce bias. Regarding improving the clarity of the theoretical result, in the revised version of the paper, we can include a figure that illustrates the theoretical formulation more carefully. 
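The confirmed chaining intuition can be sketched numerically. The following is a minimal illustration with made-up conditional probabilities (not the Bayes nets used in the paper): for a chain A -> B -> C where training covers only adjacent pairs, the held-out conditional P(C | A) is recovered by marginalizing over the intermediate variable B.

```python
import numpy as np

# Hypothetical adjacent-pair conditionals for a binary chain A -> B -> C.
# These are the "in-distribution" quantities a model trained on locally
# structured observations can estimate well.
p_B_given_A = np.array([[0.9, 0.1],    # rows: A=0, A=1; columns: B=0, B=1
                        [0.2, 0.8]])
p_C_given_B = np.array([[0.85, 0.15],  # rows: B=0, B=1; columns: C=0, C=1
                        [0.3, 0.7]])

# Scaffolded ("chain-of-thought") estimate of the held-out pair:
# P(C | A) = sum_B P(C | B) P(B | A), i.e. a product of in-distribution terms.
p_C_given_A = p_B_given_A @ p_C_given_B
```

Each factor in the product is an adjacent-pair conditional that a locally trained model estimates accurately, which is why chaining avoids the bias that direct prediction incurs on pairs never seen together.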
> It seems like reasoning working well consists of two things: (A) the model must be able to produce a reasoning trace which is relevant to the task at hand, and (B) the model must be able to use the reasoning to produce a better final answer. We do not have concrete theoretical results for (A). The intuition behind why (A) happens in practice in free generation is that the relevant variables to reason through tend to be close to the observed variable and the training set consists of local clusters. In practice, free generation generates several variables near the observed variable, only some of which are relevant but which are sufficiently relevant in aggregate. It might be possible to prove analogous theoretical results for (A) but we leave a thorough characterization of free generation for future work. --- Rebuttal Comment 1.1: Title: Concerns addressed! Comment: Raising to 7.
Summary: This work investigates why and how chain-of-thought reasoning works in language models from the perspective of **local structure** in the training data. To this end, this work first proves the hypothesis that there exists a reasoning gap where reasoning through intermediate variables improves inference and then tests the hypothesis by training an autoregressive language model on samples from Bayes nets but only including a subset of variables in each sample, considering estimators including direct prediction, scaffolded generation, and free generation. The key findings are that intermediate steps are only helpful when the training data is locally structured with respect to dependencies between variables and that the combination of locally structured observations and reasoning is much more data-efficient than training on all variables. This work begins with the human practice of step-by-step reasoning and reviews the recent progress of a similar mechanism --- the intriguing chain-of-thought reasoning in large language models. Then the paper asks the question of why step-by-step reasoning helps, which may not only help understand how large language models work but also provide insight into the origins of human reasoning. The basic hypothesis is that chain-of-thought reasoning is useful in language models due to the local structure in the training data. Such a hypothesis is intuitive as human reasoning transcends local bounds, supporting plans and conclusions that span time and space. The meaning of local structure may be interpreted as observations occurring in overlapping neighborhoods of concepts. 
To verify the hypothesis, the authors conduct theoretical analysis and empirical experiments, finding that performing conditional inference by first generating intermediate variables improves the ability of a language model to match true conditional probabilities only when the training data is structured locally with respect to strong dependencies and the intermediate variables are relevant to the relationship between the variables of interest. Finally, the work also provides insights into three aspects: (i) when reasoning helps --- the observation distribution has the correct locality structure; (ii) when reasoning is unnecessary --- the observed and target variables co-occur in the training distribution; (iii) when reasoning fails --- data with the wrong locality structure. Overall, this work studies an important and timely research topic: why and when step-by-step reasoning works in language models. This work provides comprehensive theoretical and experimental results to support the hypothesis. This work also provides useful insights into solving reasoning tasks, as well as dataset construction to amplify the capacity of large language models to perform step-by-step reasoning. Strengths: 1. The topic studied in this work is very important and has attracted increasing interest in the community. This kind of work is useful to advance our scientific understanding of why the intriguing chain-of-thought reasoning works (or fails on some tasks), and helps facilitate future studies on training dataset construction and prompting techniques to amplify the capacity of large language models to solve complex reasoning tasks. 2. The theoretical part clearly formulates the problem. It also introduces effective approaches for estimation, followed by convincing experiments on a real-world language model (though with the smaller-scale GPT-2 instead of the real large language models). 3. 
Many great insights can be found in the paper after taking into account the influence of data complexity: when reasoning works, becomes unnecessary, and even fails. The findings basically align with real-world practice when applying chain-of-thought prompting in different reasoning tasks. Weaknesses: 1. The paper can be improved by taking large language models as the backbone and verifying the hypothesis on real-world datasets where chain-of-thought prompting techniques are widely applied. It would be interesting if this work could provide effective ways to identify the locality of a dataset, which may answer the commonly found yet unresolved question of why chain-of-thought techniques work quite well in arithmetic tasks but fail at some standard natural language understanding tasks like simple classification. 2. The connection and the difference with existing studies can be further clarified. In the introduction part, this work mentioned another term --- "burstiness" --- which is confusing. It is not clear how burstiness and locality differ from each other. Besides, the motivation for choosing locality as the research topic is not clear, either. It would be interesting to see more elaboration on how the hypothesis was derived. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Why does this work choose a smaller version of GPT-2 for experiments instead of a large language model? Scaling laws affect the step-by-step reasoning ability of language models. Small models may commonly fail at step-by-step reasoning compared with direct reasoning. In contrast, large language models may be the better backbone to verify the hypothesis of this work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have provided thoughtful descriptions of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and thoughtful review, and are happy to see that the reviewer found our theoretical analysis and simulation results insightful. We describe how we will address the second weakness in the general author rebuttal. We also respond to the question about the choice of architecture by showing that a similar reasoning gap appears for multiple different architectures. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: I appreciate the authors' clarifications. My concerns have been well addressed.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments on this paper. These comments have informed additional analyses and clarifications. # Architecture Reviewers Jbx9 and tGUi ask about our choice of architecture. Jbx9 asks why we chose a smaller model given that models smaller than GPT-3 often fail at step-by-step reasoning, while reviewer tGUi identifies our use of only one architecture as a weakness. To these points, we first point out that our theoretical analysis applies to any autoregressive empirical risk minimizer. We prove that a reasoning gap should exist between direct prediction and scaffolded generation regardless of the specific architecture (for simple world distributions). We can interpret our learning curve results as evaluating whether a reasoning gap exists at different levels of empirical risk, or equivalently perplexity. The simulated data we trained transformers on is much simpler than natural language, so it is expected that a smaller model is capable of learning the data well enough for reasoning to be beneficial. Figure 1 in the PDF shows the mean perplexity on a validation dataset across Bayes nets vs. the size of the reasoning gap for each checkpoint in the geometrically-sized local neighborhood model. These results show that a model must achieve a certain perplexity before the reasoning gap emerges, but it exists for a range of perplexities achievable by our architecture. We have also run additional simulations in which we train different transformer architectures on geometrically-sized local neighborhoods from the same Bayes nets. We use a smaller model (4,473,600 parameters), a larger model (86,628,864 parameters, the base gpt-2 architecture) and a model with a similar number of parameters, but larger embeddings and fewer layers (39,887,872 parameters). We also use a tiny model which is too small to learn the distribution of the data (8,644 parameters). 
All models except for the tiny model exhibit similar reasoning gaps, suggesting that our findings are not specific to one architecture. Performance across architectures and estimators is reported in Figure 2 of the PDF, which we will include in the appendix of the paper. # Number of samples Reviewer fiMT was interested in seeing results with a single Monte Carlo sample used in the scaffolded and free generation estimators. We have run this analysis for all numbers of samples between 1 and 10 and reported the results in Figure 3 of the PDF. While we used the mean squared error as a practical measure of the error of a language model, we can decompose MSE into bias and variance. The free and scaffolded generation estimators have lower bias, but higher variance, than direct prediction. Free and scaffolded generation have higher MSE compared to direct prediction with one sample, but averaging over several samples (at least 3 in our setting) leads them to have lower MSE than direct prediction. This finding has bearing on when we should expect generating multiple chains of thought, as is done in self-consistency methods (e.g. Wang et al., 2022), to be valuable. The specifics of the bias-variance trade-off depend on the underlying stochasticity of the Bayes net, so different environments might call for different numbers of samples. We will discuss this result in the paper. # Connection to real-world reasoning Reviewers Jbx9, fiMT, and jyTS mentioned that the connection between our setting and real-world reasoning in humans and language models was not clear in the paper. We are ultimately interested in studying the improvement at answering questions resulting from generating intermediate information between a question and its answer. Our setting of inference for held-out pairs of variables in a Bayes net is a minimal case where this happens. 
For example, in a simple Bayes net with three variables, A: it rained last night, B: the grass is wet, and C: mowing the lawn will be difficult, reasoning through the intermediate variable B might be necessary if A and C have not been encountered together directly in training data. We will revise the introduction to make the connection clearer. Reviewer fiMT asks about our choice to favor strong dependencies in generating Bayes nets. We chose to generate data this way to ensure that there are non-adjacent pairs of variables with high mutual information. If we sampled probabilities uniformly, mutual information would decay rapidly with distance and conditional probabilities for held-out pairs would be almost identical to marginal probabilities. Still, our Bayes nets have considerable randomness, as the Beta distribution we use generates conditional probabilities between 0.1 and 0.9 in 32.7% of cases. We expect reasoning to be most useful in environments with strong dependencies, like in math word problems where truth values of statements are deterministic. An alternative way of looking at this is that strong long-range dependencies are themselves a precondition for reasoning; otherwise using the marginal frequency of the conclusion is enough. # Clarifications We will clarify the motivation behind our choice to study locality and how it differs from related ideas like burstiness. While burstiness concerns the distribution of a single class over time, locality is about which classes co-occur with each other. Co-occurrence patterns are relevant because reasoning connects different concepts. We also thank reviewers fiMT and tGUi for identifying typos and unclear points in the paper. In particular, we have fixed the typo on line 138/139 where “non-adjacent” should be “adjacent”. We once again thank the reviewers for taking the time to give detailed feedback on this work. Their comments have substantially improved this paper. 
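As a supplementary illustration of the bias-variance trade-off discussed under "Number of samples" above, here is a toy numerical sketch. All constants are illustrative assumptions, not values measured in our experiments: a deterministic but biased "direct" estimator is compared against an unbiased but noisy "generation" estimator averaged over n rollouts.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7        # hypothetical true conditional probability
direct_bias = 0.15  # direct prediction: deterministic but biased
cot_sigma = 0.25    # one scaffolded rollout: unbiased but noisy

def mse_direct():
    # zero variance, so MSE is just the squared bias
    return direct_bias ** 2

def mse_cot(n_samples, n_trials=200_000):
    # averaging n unbiased rollouts: MSE ~= cot_sigma**2 / n_samples
    est = rng.normal(true_p, cot_sigma,
                     size=(n_trials, n_samples)).mean(axis=1)
    return ((est - true_p) ** 2).mean()
```

With these assumed constants, a single rollout has higher MSE than direct prediction (0.0625 vs. 0.0225), while averaging three rollouts already brings the MSE below it, matching the qualitative pattern described above.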
# References Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., ... & Zhou, D. (2022, September). Self-Consistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on Learning Representations.
NeurIPS_2023_submissions_huggingface
2023
Adaptive Test-Time Personalization for Federated Learning
Accept (poster)
Summary: This paper considers the federated learning setting of adaptive test-time personalization. Traditional test-time adaptation (TTA) can only handle specific target domain distributions, while federated learning requires flexible handling of multiple target domains. Existing TTA methods pre-define which modules to adapt, which limits the application of TTA in federated learning. Therefore, this paper proposes the Adaptive Test-time Personalization algorithm called ATP to automatically decide which modules to adapt and how much to adapt. Strengths: 1. The experiment in Section 3.2 is crucial as it effectively illustrates the inherent challenges encountered by current TTA methods when applied to federated learning, thus offering valuable insights and guiding directions for potential enhancements. 2. The proposed method is simple and easy to implement, and it has achieved impressive performance in the current experiments. Weaknesses: 1. The paper claims that it is the first to propose test-time personalized federated learning. However, this claim is questionable because previous works, such as [1], had already explored test-time personalized tasks in the context of federated learning. [1] Jiang, Liangze, and Tao Lin. "Test-Time Robust Personalization for Federated Learning." ICLR2023 (preprint arXiv:2205.10920 (2022)) 2. I am concerned about the significance of the “supervised refinement” step. If labeled data is available in this step, using the labeled data itself already leaks the distribution about the test samples in TTA tasks (while the test distribution is not known in the TTA setting), which obviously reduces the difficulty of TTA. If labeled data is not available, the proposed method for adaptively learning alpha seems unworkable. 3. The datasets used in the experiments are small-scale. It would be more convincing if experiments were conducted on real-world images, such as DomainNet, with at least a resolution of 224x224. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Figure 2, why does setting "bn.running_mean (m=-0.1)" improve the accuracy of TTA in the presence of label shift, while "bn.running_mean (m=0.1)" significantly harms the accuracy? This seems counterintuitive. 2. The task addressed in this paper is more akin to source-free unsupervised domain adaptation than to test-time adaptation. In my opinion, the focus of most TTA works is on an online setting where the entire test set cannot be obtained at once, and operations are performed on individual test samples to be predicted (as detailed in Section 4.3). 3. See Weakness 2: how important is the step of "supervised refinement"? Can the authors provide any ablation experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no limitations or potential negative societal impacts discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer yJjv, We sincerely thank you for your comprehensive review and insightful feedback on our paper. We appreciate your positive feedback on the crucial experiment and effective algorithm of our work. We would like to address your concerns as follows. ## W1. Comparison to FedTHE > 1. The paper claims that it is the first to propose test-time personalized federated learning. However, this claim is questionable because previous works, such as [15], had already explored test-time personalized tasks in the context of federated learning. We would like to clarify that our proposed TTPFL is substantially different from the setting in FedTHE [15]. FedTHE focuses on improving the model robustness to test-time shifts on seen labeled training clients (i.e., clients that participate in FL training with labeled data), while our algorithm focuses on better generalization to unseen unlabeled testing clients (i.e., clients that neither participate in training nor have labeled data) with different distributions. Our research problems are orthogonal. We discuss our differences in detail in [our response to all reviewers, part 3](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa). ## W2. Significance of supervised refinement > 2. I am concerned about the significance of the “supervised refinement” step. If labeled data is available in this step, using the labeled data itself already leaks the distribution about the test samples in TTA tasks (while the test distribution is not known in the TTA setting), which obviously reduces the difficulty of TTA. Thanks for raising this concern. We would like to clarify that labeled data is only available for training clients, which participate in FL training. Neither the images nor the labels of testing clients are used for supervised refinement. 
After training, each unseen testing client personalizes the global model in an unsupervised manner with the trained adaptation rates and tests the adapted model’s performance. We report the average accuracy on testing clients. [Our response to all reviewers, part 1](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa) could be helpful in further clarifying this issue. ## W3. Large-scale dataset >3. The datasets used in the experiments are small-scale. It would be more convincing if experiments were conducted on real-world images, such as DomainNet, with at least a resolution of 224x224. Thanks for your valuable advice. We further conduct experiments on PACS, a large-scale dataset commonly used in TTA, with resolution $224\times 224$. The result in the [rebuttal pdf, part B](https://openreview.net/attachment?id=PEt62AClVa&name=pdf) shows that ATP consistently outperforms all baselines. ## Q1. Figure 2 > 1. In Figure 2, why does setting "bn.running_mean (m=-0.1)" improve the accuracy of TTA in the presence of label shift, while "bn.running_mean (m=0.1)" significantly harms the accuracy? This seems counterintuitive. Thanks for your insightful question. We used a visual example in Appendix C.4 to explain why different momentum m has opposite effects when adapting to label shift (m in Figure 2 refers to $\alpha$ in Appendix C.4). In brief, adapting the running mean can be seen as a feature aligner/disaligner. - Positive m aligns the intermediate feature distributions for training and testing data, resulting in aligned distributions of predictions. However, since the label distribution is shifted, such alignment introduces negative effects. - Meanwhile, negative m disaligns the intermediate feature distribution, which follows the change in label distribution and improves the accuracy. ## Q2. Source-free unsupervised domain adaptation > 2. The task addressed in this paper is more akin to source-free unsupervised domain adaptation than to test-time adaptation. 
In my opinion, the focus of most TTA works is on an online setting where the entire test set cannot be obtained at once, and operations are performed on individual test samples to be predicted (as the details in Section 4.3). Thanks for your comment. We would like to clarify that our proposed ATP does not obtain the entire test set at once. Instead, - For ATP-Episodic, testing clients process each test batch independently, resembling test-time batch adaptation according to [R1]. - For ATP-Online, testing clients process test batches sequentially like Tent [39], resembling online test-time adaptation. Moreover, we would like to emphasize that our ATP is substantially different from previous works in TTA since ATP learns from multiple FL clients to tackle different types of distribution shifts. [R1] Jian Liang, Ran He, Tieniu Tan. A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts. ## Q3. Supervised refinement > 3. See Weaknesses 2, how important is the step of "supervised refinement"? Can the authors provide any ablation experiments? Thanks for your question. Supervised refinement is important in ATP since it learns a set of adaptation rates specific to the type of distribution shifts among FL clients. Without supervised refinement, the adaptation rates will be initialized as zero, thus ATP becomes identical to “no adaptation”. For ablation studies, we try (1) using constant adaptation rates for all modules, and (2) fitting adaptation rates with a different meta-distribution, e.g., fitted on clients with feature shift but tested on clients with label shift. As shown in [our rebuttal pdf, Table C](https://openreview.net/attachment?id=PEt62AClVa&name=pdf), - When fitted on the *same type* of distribution shift, ATP significantly improves the performance on testing clients and outperforms constant adaptation rates. It verifies the importance of supervised refinement. 
- When fitted with a *different type* of distribution shift, the effectiveness of ATP noticeably decreases. It proves that supervised refinement can learn distribution-shift-specific adaptation rates, which further validates the importance of supervised refinement. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. After reading the response and the opinions of other reviewers, I still have the following questions: The authors claim that the testing domains considered by FedTHE have labeled data, whereas this work does not. This claim is not rigorous because, as the authors stated in their rebuttal, there are training client $i$ and test client $i$ both sampled from the same distribution $\mathcal{P}_i$, then this setting is not substantially different from the setting of FedTHE. What is the difference between the "unseen" clients in this paper and the typical client test set in federated learning? In the usual setup, client training and test sets also come from the same distribution, and only the training set is used during training. --- Reply to Comment 1.1.1: Title: Clarification of non-overlapping training and testing clients Comment: Dear reviewer yJjv, Thanks a lot for your comment. We apologize for the confusion raised from the notation of client index $i$. We would like to clarify that in our TTPFL setting, **training clients and testing clients are non-overlapping**, i.e., there are $N$ training clients $i = 1, \cdots, N$ and $M$ testing clients $j=1, \cdots, M$. **All the training and testing clients have different distributions**, i.e., $\\{\mathcal{P}\_i^{\text{train}}\\}\_{i=1}^N$ and $\\{\mathcal{P}\_j^{\text{test}}\\}\_{j=1}^M$ are all different, while these distributions are sampled from the same meta-distribution $\mathcal{Q}$, i.e., distribution of distributions. 
In this case, for a testing client $j$, its distribution $\mathcal{P}_j^{\text{test}}$ is different from all the training clients' distributions $\\{\mathcal{P}\_i^{\text{train}}\\}\_{i=1}^N$. Therefore, FedTHE cannot be used in our TTPFL setting since there are no labeled data from $\mathcal{P}_j^{\text{test}}$. Thanks again for your comment. We will carefully revise our manuscript to avoid confusion.
Summary: The paper studies test-time personalization in a federated learning setting --- after training on participating clients, the goal is to locally adapt the global model given unlabeled test data. The paper's main idea is to point out that label non-IID and domain non-IID require adaptation on different layers of DNNs, and to propose a novel way to learn the adaptation learning rates of each layer automatically in a data-driven fashion. A simple learning method that alternately does SGD on the DNN parameters and the layer-wise learning rates shows effective improvements in test-time personalization. Strengths: - The method makes the novel observation that, to tailor to different types of non-IIDness, the degrees of adaptation should differ across layers. It is an interesting point. The solution is simple, sound, and effective. - Empirical results are overall satisfying. The experiments provide enough comparisons to centralized TTA algorithms; see some suggestions below. Weaknesses: Although I am positive on this paper, I observe several important concerns. If the authors could address them, my score can be higher. [Major 1] Overclaims: considering TTA in the PFL setting was first introduced in [15]. This is a natural extension of centralized TTA. It is unnecessary and imprecise for this paper to make "test-time personalized federated learning (TTPFL)" a new setting. The definition in L38 about the combination of distribution shifts of labels and styles itself has nothing to do with the federated setting; centralized TTA can have both label and style shifts. [Major 2] Misleading section 4.2. I cannot understand why the proposed refinement has a connection with meta-learning. ClientTrain in Alg 1 is simply a coordinate gradient style optimization. Alternatively, a discussion about hyperparameter optimization should be more relevant. [Major 3] Theorem 5.1 is not informative. It's simply extending the classic FedAvg generalization bound [18] for the adaptive parameters. 
It would be more informative, and a better fit to the context, to discuss the generalization to the new clients of different distributions ''after test-time personalization'' in the TTA sense. Otherwise, I would recommend avoiding this laundry-list theorem. [Minor 1] The datasets are rather small-scale and synthetic. It would be better to include natural federated datasets like FEMNIST or iNaturalist-GEO or large-scale common datasets in TTA (like ImageNet). [Minor 2] Why are FedTHE or FedTHE+ in [15] not compared? [15] seems to be the closest work about TTA in FL. Is the discussion in L76 faithful? It seems to me FedTHE does not need labeled data. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Section 3.2 is about a limitation of TTA, not specifically about PFL. Can the authors discuss why the problem or the solution is dedicated to FL? I suggest adding a discussion around L134. In my opinion, the key is that in PFL we can figure out the ideal adaptive rates by leveraging the training clients who have diverse distributions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 2b9x, Thank you for your detailed and insightful review of our paper. We appreciate your positive feedback on the novelty and empirical results of our work. We have carefully considered your concerns and suggestions, and address them point by point as follows. ## W1. Overclaims > [Major 1] Overclaims: considering TTA in PFL setting was first introduced in [15]. This is a natural extension of centralized TTA. It is unnecessary and imprecise for this paper to make "test-time personalized federated learning (TTPFL)" a new setting. Thanks for your question regarding the comparison between our work and FedTHE [15]. We would like to clarify that although our setting has a similar name to theirs, our problem is substantially different. FedTHE focuses on improving the model robustness against test-time shift on seen labeled training clients (i.e., clients that participate in FL training with labeled data), while our algorithm focuses on better generalization to unseen unlabeled testing clients (i.e., clients that neither participate in training nor have labeled data) with different distributions. We compare our work with FedTHE in detail in [our response to all reviewers, part 1](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa). ## W2. Connection to meta-learning & hyperparameter optimization > [Major 2] Misleading section 4.2. I cannot understand why the proposed refinement has a connection with meta-learning. ClientTrain in Alg 1 is simply a coordinate gradient style optimization. Thanks for raising this question about the connection between supervised refinement and meta-optimization. ClientTrain in Alg. 1 shares similarities with important meta-learning algorithms, e.g., MAML [8]. - During unsupervised adaptation, each client conducts coordinate-gradient-style optimization, which resembles adapting the meta-model on a specific task in MAML (line 6 in their Algorithm 1).
- During supervised refinement, each client updates the adaptation rates to minimize the post-TTA loss, which resembles updating the meta-model in MAML (line 7 in their Algorithm 1). It is important to notice that in line 16 of ClientTrain, we compute the gradient w.r.t. the adaptation rates $\boldsymbol{\alpha}$, not the personalized parameters $\boldsymbol{w}_{ij}$. > Alternatively, a discussion about hyperparameter optimization should be more relevant. We agree with you that a discussion of hyperparameter optimization is also relevant. [R1] first investigated the problem of federated hyperparameter tuning and proposed FedEX, which leverages weight-sharing from neural architecture search to efficiently tune hyperparameters. [R2] introduced FloRA, which addresses use cases with tabular data and enables single-shot federated hyperparameter tuning. While these methods focus on improving the efficiency of hyperparameter optimization, our paper focuses on finding the optimal adaptation rates that benefit test-time personalization. We will include this discussion in the revision of our paper. [R1] Mikhail Khodak, et al. Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing. NeurIPS 2021. [R2] Yi Zhou, et al. Single-shot General Hyper-parameter Optimization for Federated Learning. ICLR 2023. ## W3. Theorem 5.1 > [Major 3] Theorem 5.1 is not informative. It's simply extending the classic FedAvg generalization bound [18] for the adaptive parameters. It will be more informative and fit to the context to discuss the generalization to the new clients of different distributions "after test-time personalization" in TTA sense. Otherwise, I would recommend avoiding this laundry-list theorem. Thanks for your attention to our generalization analysis. We would like to clarify that our Theorem 5.1 actually considers the error rate **after test-time personalization**, not before it.
As defined in Definition B.12 in Appendix B, we consider the error rate of the adapted model $\boldsymbol{w}_{ij}$, instead of the global model $\boldsymbol{w}_G$. Theorem 5.1 aims to show that if ATP achieves a low post-TTA classification error over training clients after refining the adaptation rates, we can expect a similarly low post-TTA classification error on testing clients. ## W4. More datasets > [Minor 1] The datasets are rather small-scale and synthetic. It would be better to include natural federated datasets like FEMNIST or iNaturalist-GEO, or large-scale datasets commonly used in TTA (like ImageNet). Thanks for your suggestion on improving our experimental setup. We further conducted experiments on PACS, a large-scale dataset commonly used in TTA, with resolution $224\times 224$. The results are shown in [the rebuttal pdf, part B](https://openreview.net/attachment?id=PEt62AClVa&name=pdf), where ATP consistently outperforms all baseline algorithms. ## W5. Comparison to FedTHE > [Minor 2] Why are FedTHE or FedTHE+ in [15] not compared? [15] seems to be the closest work about TTA in FL. Is the discussion in L76 faithful? It seems to me FedTHE does not need labeled data. Thanks for your question. We would like to clarify that we did not compare to FedTHE [15] because it requires labeled data. **FedTHE trains a model with two heads (a global head and a personalized head) during FL training in a supervised manner**, and fuses the two heads at test time in an unsupervised manner. When generalizing to unseen unlabeled testing clients, a client can only download the global head but cannot generate its personalized head due to the lack of labeled data. Therefore, FedTHE cannot be used for unseen unlabeled testing clients, which is the target of our paper. In our [rebuttal pdf, part A](https://openreview.net/attachment?id=PEt62AClVa&name=pdf), we design a variant of FedTHE that uses pseudo-labels to train the personalized head.
Our algorithm outperforms this FedTHE variant across different types of distribution shifts.
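The meta-learning-style refinement discussed in the rebuttal above (an inner unsupervised adaptation step, followed by an outer gradient step on the adaptation rates) can be illustrated on a toy scalar problem. Everything below is hypothetical -- quadratic surrogate losses with analytic gradients, not the paper's actual ClientTrain procedure -- but it shows the bilevel structure being debated: the outer loop learns an adaptation rate `alpha` such that one inner step on the unsupervised loss also reduces the supervised loss.

```python
# Toy sketch of the bilevel structure (NOT the paper's ClientTrain):
#   unsupervised loss: u(w) = 0.5 * (w - a)^2   (stand-in for e.g. entropy)
#   supervised loss:   s(w) = 0.5 * (w - b)^2
#   inner step:        w' = w - alpha * u'(w) = w - alpha * (w - a)
#   outer gradient:    ds/dalpha = (w' - b) * dw'/dalpha = -(w' - b) * (w - a)
def refine_alpha(w=0.0, a=1.0, b=0.5, alpha=0.0, lr=0.5, steps=200):
    for _ in range(steps):
        w_adapted = w - alpha * (w - a)          # inner (unsupervised) adaptation
        grad_alpha = -(w_adapted - b) * (w - a)  # differentiate through the inner step
        alpha -= lr * grad_alpha                 # outer update of the adaptation rate
    return alpha, w - alpha * (w - a)
```

With these numbers the learned rate converges to `alpha = 0.5`, so the single unsupervised step moves `w` exactly to the supervised optimum `b` -- the same "learn how strongly to adapt" effect the rebuttal attributes to ATP.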
Summary: This paper proposes a new setting called test-time personalized federated learning (TTPFL) and an Adaptive Test-time Personalization (ATP) algorithm. The authors show the effectiveness of the proposed method over other test-time adaptation methods. Strengths: The paper proposes an Adaptive Test-time Personalization algorithm and shows its effectiveness over other test-time adaptation methods. Weaknesses: 1. The proposed setting is strange and not self-consistent. The authors claim that in this setting 'clients adapt a trained global model in an unsupervised manner without requiring any labeled data.' However, the proposed method involves labeled clients to learn the learning rates for different modules (Figure 3, left). At the least, real-world scenarios should be provided as examples. 2. Missing representative federated learning baselines. This paper does not compare with any federated learning method. If the proposed method uses labeled datasets in the training stage, then many federated learning methods trained on the labeled datasets can serve as baselines. Baselines include FedAvg, FedAvg with fine-tuning, FedProx, FedProx with fine-tuning, Ditto, and pFedMe. 3. Since the proposed method applies different operations to different modules of a model, the model architecture is important to this paper; thus experiments on different model architectures are required. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Generally, I am really confused about the setting in this paper after reading. Please provide comprehensive explanations to correct me if I am wrong, and I would consider re-rating.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer h3BM, Thank you for your insightful review and valuable feedback on our paper. We sincerely appreciate your recognition of the effectiveness of our algorithm. Regarding the weaknesses, we address them point by point as follows. ## W1. TTPFL setting > 1. The proposed setting is strange and not self-consistent. The authors claim that in this setting 'clients adapt a trained global model in an unsupervised manner without requiring any labeled data.' However, the proposed method involves labeled clients to learn the learning rates for different modules (Figure 3, left). At least, real-world scenarios as examples should be provided. Thanks for pointing out this confusion about labeled and unlabeled clients. We would like to clarify that our TTPFL setting has the same data requirement as standard FedAvg [25]. In FedAvg, the global model is trained on training clients with labeled data, and then tested on testing clients with only unlabeled data. Similarly, in TTPFL, during training, the global model and the adaptation rule are optimized over labeled training clients; during testing, each testing client downloads the global model and the adaptation rule, and personalizes the global model with only its unlabeled data. We revised the description of TTPFL according to your suggestion in [our response to all reviewers, part 1](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa). ## W2. More baselines > 2. Missing representative federated learning baselines. This paper does not compare with any federated learning method. If the proposed method uses the labeled datasets for the training stage, then many federated learning methods can be seen as baselines that are trained on the labeled datasets. Baselines include FedAvg, FedAvg with fine-tuning, FedProx, FedProx with fine-tuning, Ditto, pFedMe. Thanks for your suggestion of comparing to FL baselines.
We compared our algorithm to all the baselines you mentioned and summarized the experiment results in [the rebuttal pdf, part A](https://openreview.net/attachment?id=PEt62AClVa&name=pdf). We would like to clarify that although our TTPFL setting requires labeled data for training clients (the same as FedAvg), it does not require any labeled data for testing clients. In contrast, most existing PFL algorithms either focus exclusively on the training clients (e.g., Ditto) or require labeled data for personalization on the testing clients (e.g., fine-tuning, pFedMe). These algorithms impose stronger data requirements on FL systems, and cannot be used in our TTPFL setting. To compare to these baselines, we keep the training phase of these algorithms, while using "pseudo-labels" for personalization on testing clients following [R1]. The pseudo-label is the prediction of the global model. Experiment results in [Table A](https://openreview.net/attachment?id=PEt62AClVa&name=pdf) show that most FL and PFL baselines bring only limited improvement to the testing clients. Meanwhile, our proposed ATP-Episodic outperforms all FL baselines in the TTPFL setting across three types of distribution shifts. [R1] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. Workshop on challenges in representation learning, ICML 2013. ## W3. Different model architectures > 3. Since the proposed method focuses on different operations on different modules of a model, model architecture is important to this paper, thus experiments on different model architectures are required. Thanks for your suggestion to include different model architectures. In our paper, we tried two architectures, ResNet-18 and ResNet-50, which are representative in FL [36,43,44] and TTA [14,39,45]. Experiment results show that ATP is compatible and effective with different model architectures.
Please refer to Appendix C.2 for our experiments with ResNet-50. --- Rebuttal Comment 1.1: Title: Follow-up response regarding different model architectures Comment: Dear Reviewer h3BM, Thanks again for your valuable feedback on our paper. Regarding the question of different model architectures, we would like to provide additional experiment results. Considering that smaller models are used in FL for their smaller communication and computational cost, we test our algorithm with a 5-layer CNN on the CIFAR-10 dataset. Other experimental settings are identical to our CIFAR-10 experiment in Section 6. As shown in Table D below, similar to our results in the paper, our proposed ATP achieves better performance than various baselines across three types of distribution shifts, which further validates the effectiveness of ATP.

Table D: Accuracy (%) under three kinds of distribution shifts on CIFAR-10. We report the average accuracy on testing clients.

| Method | Feature shift | Label shift | Hybrid shift | Avg. Rank |
| ------------- | ------------- | ----------- | ------------ | --------- |
| No adaptation | 64.33 | 69.15 | 61.87 | 7.7 |
| BN | 66.51 | 54.60 | 50.20 | 7 |
| SHOT | 65.70 | 49.93 | 45.89 | 8.3 |
| Tent | 65.57 | 50.02 | 45.72 | 9 |
| T3A | 64.33 | 66.34 | 59.59 | 8 |
| MEMO | 65.64 | 71.77 | 64.16 | 5.7 |
| EM | 61.65 | *76.16* | 67.06 | 5 |
| BBSE | 56.92 | 76.13 | 66.21 | 6.3 |
| Surgical | 64.45 | 73.58 | 65.35 | 5.7 |
| ATP-Episodic | *66.88* | 76.14 | *68.48* | *2.3* |
| ATP-Online | **67.11** | **78.38** | **70.86** | **1** |

--- Rebuttal Comment 1.2: Comment: Thank you for the responses. However, I still have two concerns: (Major) W2: The authors should show results on the training clients and compare with the FL methods in Table A. It is essential to enhance the performance of training clients to ensure the setting is practical.
Specifically, the training clients need to spend computation, communication, and (to a degree) privacy costs to support an FL system. If the proposed method improves the performance of testing clients at the cost of decreasing the performance of training clients, the training clients will refuse to participate. Therefore, it is not sufficient to only show the performance of testing clients. W3: The proposed method requires specific designs for convolutional neural networks with batch normalization. I'm concerned about whether the method can be effortlessly extended to other types of models, such as Transformers. --- Reply to Comment 1.2.1: Title: Response to Reviewer h3BM Comment: Dear Reviewer h3BM, Thanks a lot for your suggestion. We would like to address your concerns as follows. ## W2. Comparison to FL baselines. Thanks for your comment regarding the performance on training clients. We understand your point that training clients should also benefit from the FL algorithm. Following your suggestion, we further show the results on the training clients in Table D below, and compare our proposed ATP to FL methods. As shown in Table D, - ATP also greatly improves the performance of training clients, compared to global FL methods (FedAvg and FedProx). - ATP's performance on training clients is comparable to that of FL methods designed exclusively for labeled clients (FT, Ditto, pFedMe, and FedTHE). Table D. Comparison with FL baselines on CIFAR-10. We report the average accuracy (%) on *labeled training clients*.
| Method | Feature shift | Label shift | Hybrid shift |
| ------------ | ------------- | ----------- | ------------ |
| FedAvg | 69.02 | 72.65 | 60.34 |
| FedAvg + FT | 69.64 | 79.38 | 70.45 |
| FedProx | 68.94 | 72.56 | 60.31 |
| FedProx + FT | 69.57 | 79.47 | 70.73 |
| Ditto | 71.93 | 77.35 | **72.08** |
| pFedMe | 61.75 | 74.91 | 71.74 |
| FedTHE | 69.95 | 79.32 | 70.32 |
| ATP-Episodic | **72.17** | **79.79** | 70.26 |

Despite achieving good performance on training clients, we still want to emphasize that **the purpose of our proposed ATP is not to maximize the performance of labeled training clients**. Instead, **our goal is to achieve better adaptation and generalization to new (i.e., non-participating) unlabeled testing clients**. In real FL systems, the ability to generalize to new clients is of paramount importance [43], as only a small fraction of clients possess labels and participate in the FL training process [16]. ## W3. More architectures Thank you for bringing up this important question. We would like to clarify that our proposed algorithm does not involve designs tailored specifically to convolutional neural networks; rather, it adapts by learning appropriate adaptation rates for individual modules. Regarding batch-normalization-layer adaptation within our framework, it is essential to note that this foundational component is widely adopted in modern neural network architectures, including ResNet and newer transformer-based architectures [R2]. As a result, our ATP approach is versatile and does not depend on any specific backbone architecture, allowing for seamless integration into diverse architectures. In our paper and rebuttal, we used ResNet-18/50 and a ConvNet as backbones, in line with previous studies in FL [36,43,44] and TTA [14,22,39,45].
These backbones have demonstrated remarkable performance on the benchmark datasets we used [39,45], and they remain the most extensively explored backbone architectures in the literature [22,36,39,43,44,45], largely due to their efficiency. We genuinely appreciate your insightful suggestions, and we are enthusiastic about expanding our study to incorporate results from a broader array of backbone architectures in a future revision. [R2] Zhuliang Yao, Yue Cao, Yutong Lin, Ze Liu, Zheng Zhang, Han Hu. Leveraging Batch Normalization for Vision Transformers. ICCVW 2021.
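The pseudo-labeling scheme used for the baseline comparisons in this thread (the pseudo-label is simply the global model's own prediction, following [R1]) can be sketched minimally. The toy linear model and data below are hypothetical illustrations, not the paper's setup:

```python
import numpy as np

def pseudo_label_step(W, X, lr=0.01):
    """One self-training step on unlabeled data: use the current model's
    argmax predictions as pseudo-labels, then take a gradient step on the
    softmax cross-entropy against those pseudo-labels (toy linear model)."""
    logits = X @ W                                       # (n, num_classes)
    pseudo = logits.argmax(axis=1)                       # pseudo-labels
    z = logits - logits.max(axis=1, keepdims=True)       # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    onehot = np.eye(W.shape[1])[pseudo]
    grad = X.T @ (probs - onehot) / len(X)               # CE gradient w.r.t. W
    return W - lr * grad, pseudo
```

Each step sharpens the model's confidence on its own predictions; this is the sense in which the rebuttal's pseudo-label variants of the supervised PFL baselines adapt without true labels.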
Summary: This paper introduces a novel setting in which personalized FL at test time is considered and multiple distribution shifts are involved. A method termed ATP is proposed to solve the challenges posed by this setting; adaptive learning rates are learned for the model. Both theoretical and empirical studies are carried out to demonstrate the effectiveness of ATP. Strengths: 1. The paper is well-organized and easy to follow. 2. The main claims, e.g., the capability of dealing with multiple distribution shifts and the effectiveness of adaptive learning schemes, are well supported by empirical studies. 3. Theoretical analyses of both the convergence and the generalization ability of ATP are provided. Weaknesses: 1. One of the key factors of TTPFL is confusing. Usually, we assume test data are unlabeled and the target of the classification task is to predict the labels of the test data. However, in the summary of TTPFL, it is emphasized that each testing client only has unlabeled data for personalization. It would be better to further clarify this factor. 2. The relation between FL and test-time shift is weak. It seems that the proposed adaptation of trainable parameters, running statistics, and adaptation rates can also benefit centralized test-time shift problems. For FL systems, the unique challenges brought by test-time shift and how they motivate these adaptation solutions are not explicitly demonstrated. 3. Lack of detailed discussion on the difference between this work and the previous study [15]. [15] also considers feature shift, label shift, and a mixture of these shifts in their recent paper. There are various feature shifts in [15], which could also be considered as part of the experimental setting in this work.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Section 4.2, the authors mention the smaller communication costs achieved by ATP; are there any corresponding experimental results that demonstrate this quantitatively? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper does not adequately address its limitations in terms of privacy issues, efficiency, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer A9BH, We would like to express our sincere gratitude for the thoughtful review and constructive feedback provided. We are grateful for your positive feedback on the paper's organization, empirical support, and theoretical analyses. We have carefully considered your comments and address each concern in this rebuttal. ## W1. TTPFL setting > 1. One of the key factors of TTPFL is confusing. Usually, we assume test data are unlabeled and the target of the classification task is to predict the label of the test data. However, in the summarization of TTPFL, it is emphasized that each testing client only has unlabeled data for personalization. It is better to further clarify this factor. Thank you for pointing out this confusion. We emphasized that each testing client only has unlabeled data for personalization, in contrast to traditional PFL algorithms like Per-FedAvg [7], pFedMe [6], and FedTHE [15]. In these algorithms, testing clients have both labeled data (for personalization) and unlabeled data (for evaluation/prediction), which is a stronger requirement than ours. At test time in TTPFL, however, our ATP algorithm only requires unlabeled data for local personalization, utilizing the distribution of the unlabeled data to adapt the global model and achieve better performance. We revised the description of TTPFL according to your suggestion in [our response to all reviewers, part 1](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa). ## W2. Relation between FL and test-time shift > 2. The relation between FL and test-time shift is weak. It seems that the proposed adapting trainable parameters, adapting running statistics, and adapting rates can also benefit centralized test-time shift problems. For FL systems, the unique challenges brought by test-time shift issues and how they motivate these adapting solutions are not explicitly demonstrated. Thank you for pointing out this issue.
We summarize the unique challenges of TTPFL in comparison with centralized TTA as follows: 1. **Various distribution shifts**: Since each FL client collects its data in a distributed manner, the data can exhibit multiple distribution shifts, e.g., a complex combination of feature and label shifts. However, the complexity of distribution shifts has been overlooked in most centralized TTA works. As shown in Subsection 3.2, *most TTA algorithms cannot tackle feature and label shifts simultaneously*. In contrast, our ATP algorithm can learn to tackle different types of distribution shifts, as shown in Subsection 6.1. 2. **Exploiting multiple data sources**: *Centralized TTA considers adaptation from one source domain to one target domain, while our TTPFL includes multiple clients as data sources*. In TTPFL, centralized TTA algorithms adapt a global model that can only exploit training clients' data as a mixed distribution $\mathcal{P}_G = \sum_i p_i \mathcal{P}_i$, ignoring how the clients differ from each other. In contrast, our ATP algorithm learns how each client's distribution $\mathcal{P}_i$ differs from the mixed distribution $\mathcal{P}_G$, and optimizes the adaptation rates accordingly. ## W3. Discussion of FedTHE > 3. Lack of detailed discussion on the difference between this work and the previous study [15]. [15] also considers feature shift, label shift, and a mixture of these shifts in their recent paper. There are various feature shifts in [15], which can also be considered as part of the experimental setting in this work. Thanks for your suggestion of a detailed comparison with FedTHE [15]. Our paper is substantially different from FedTHE [15] in research problem, algorithm, and experiment design. We discuss the differences in detail in [our response to all reviewers, part 3](https://openreview.net/forum?id=rbw9xCU6Ci&noteId=PEt62AClVa).
Specifically, regarding the experiment design, while FedTHE and our paper use similar techniques to construct feature and label shifts, we consider a stronger fusion of the two. In FedTHE, the authors simply mix samples from different shifted distributions, which mitigates the severity of the distribution shifts: for example, a smaller portion of testing samples are corrupted. As a result, "mixture of test" is less challenging than "corrupted local test", as shown in Table 1 of [15]. Noticing this deficiency, we improved the way feature and label shifts are mixed in our paper, making "hybrid shift" more challenging than both feature and label shifts alone (as shown in Table 1 of our paper). ## Q1. Communication cost > In Section 4.2, the authors have mentioned the smaller communication costs achieved by ATP, are there any corresponding experimental results that demonstrate this in a quantitative manner? Thanks for mentioning the advantages of ATP in communication efficiency. Unlike FedAvg [25], which transmits the full model parameters in every communication round, training clients in ATP download the full model only once, and only transmit the adaptation rates during the training process. Therefore, over $T$ communication rounds, ATP transmits only $D+2Td$ floating-point numbers, much fewer than the $2TD$ for FedAvg. Notice that $d \ll D$: for our ResNet-18 experiments on CIFAR-10, $d=102$ while $D=11,181,642$; for our ResNet-50 experiments on CIFAR-100, $d=267$ while $D=23,581,642$. ## Limitations > This paper does not adequately address the limitations in terms of privacy issues, efficiency, etc. We appreciate your comment. Regarding privacy, although our ATP framework shares the same communication protocol as FedAvg while transmitting fewer parameters, we did not further study the privacy of our framework. We believe differential privacy techniques can further protect the privacy of clients in ATP systems. --- Rebuttal Comment 1.1: Title: Thanks for rebuttal.
Comment: Hi Authors, Thanks for your rebuttal that partially solved my concern. I will raise my score to 5.
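The communication-cost claim in the thread above ($D + 2Td$ floats for ATP versus $2TD$ for FedAvg) can be sanity-checked with quick arithmetic. $D$ and $d$ below are the ResNet-18/CIFAR-10 figures reported in the rebuttal; the round count $T$ is an assumed value for illustration only:

```python
def transmitted_floats(D, d, T):
    """Total floating-point numbers sent per client over T rounds."""
    fedavg = 2 * T * D   # full model uploaded and downloaded every round
    atp = D + 2 * T * d  # full model downloaded once, then only adaptation rates
    return fedavg, atp

# D, d from the rebuttal (ResNet-18 on CIFAR-10); T=100 is an assumption
fedavg, atp = transmitted_floats(D=11_181_642, d=102, T=100)
print(fedavg, atp, atp / fedavg)  # ATP sends well under 1% of FedAvg's traffic
```

Since $d \ll D$, the one-time cost $D$ dominates ATP's total, and the gap widens linearly with $T$.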
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to express our sincerest gratitude for your thoughtful and insightful reviews of our paper. We are particularly grateful for the recognitions bestowed upon our work, including - novel and effective algorithm (from all reviewers), - satisfying experiments (from reviewer A9BH, 2b9x, yJjv), and - comprehensive convergence and generalization analysis (from reviewer A9BH). Your insights have been immensely helpful in refining our work. In this response, we address the common questions raised by your reviews. ## 1. Revised description of test-time personalized federated learning (TTPFL) We consider an FL system with $N$ training clients (who participate in FL training, i.e., "seen" during training) and $M$ testing clients (who do not participate, i.e., "unseen"). Each training client $i$ has its labeled dataset $S_i$ with $m_i$ i.i.d. data points drawn from its own underlying distribution $\mathcal{P}_i$, while each testing client $i$ has only unlabeled data drawn from its distribution $\mathcal{P}_i$. The data distribution $\mathcal{P}_i$ is usually different across the clients, and is sampled from a meta-distribution $\mathcal{Q}$, i.e., a distribution of distributions. **Our data requirement is identical to FedAvg [25] and FedSR [26].** During FL training, the global model $w_G$ and the adaptation rule $\mathcal{A}$ (i.e., adaptation rates in ATP) are optimized only over labeled training clients. Testing clients’ data is not used during training. During FL testing, each testing client downloads the global model and the adaptation rule, and personalizes the global model with only its unlabeled data. The testing clients’ labels are never used in FL training or testing. It is important to notice that **our TTPFL setting does not require any labeled data for testing clients**, unlike the setting in traditional personalized federated learning (PFL). 
Most of the existing PFL algorithms either focus exclusively on the training clients [35,19] or require labeled data for personalization on the testing clients [7,5]. These algorithms impose stronger data requirements on FL systems, and cannot be used in our TTPFL setting. Considering the enormous number of testing clients (compared to training clients) in real FL applications [16] and the challenge of generalizing to testing clients [43], we pay special attention to the performance on the testing clients, and **report the average accuracy on testing clients** instead of training clients (as in traditional PFL algorithms). ## 2. Relation between FL and test-time adaptation (difference between TTPFL and TTA) Different from centralized TTA, TTPFL has its own challenges, which TTA algorithms do not fully tackle. 1. **Various distribution shifts**: Since each FL client collects its data in a distributed manner, the data can exhibit multiple distribution shifts, e.g., a complex combination of feature and label shifts. However, the complexity of distribution shifts has been overlooked in most centralized TTA works. As shown in Subsection 3.2, *most TTA algorithms cannot tackle feature and label shifts simultaneously*. In contrast, our ATP algorithm can learn to tackle different types of distribution shifts, as shown in Subsection 6.1. 2. **Exploiting multiple data sources**: *Centralized TTA considers adaptation from one source domain to one target domain, while our TTPFL includes multiple clients as data sources*. In TTPFL, centralized TTA algorithms adapt a global model that can only exploit training clients' data as a mixed distribution $\mathcal{P}_G = \sum_i p_i \mathcal{P}_i$, ignoring how the clients differ from each other. In contrast, our ATP algorithm learns how each client's distribution $\mathcal{P}_i$ differs from the mixed distribution $\mathcal{P}_G$, and optimizes the adaptation rates accordingly. ## 3.
Difference from FedTHE/FedTHE+ [15] Despite the similar name, our paper tackles a problem substantially different from FedTHE's. - **Problem**: FedTHE focuses on improving the model robustness to test-time shift on seen labeled training clients (i.e., clients that participate in FL training with labeled data), while our algorithm focuses on better generalization to unseen unlabeled testing clients (i.e., clients that neither participate in training nor have labeled data) with different distributions. Our research problems are orthogonal. - **Algorithm**: FedTHE trains a model with two heads (a global head and a personalized head) during FL training in a supervised manner, and fuses the two heads at test time in an unsupervised manner. When generalizing to unseen unlabeled testing clients, a client can only download the global head but cannot generate its personalized head due to the lack of labeled data. Therefore, the FedTHE algorithm cannot be used for unseen unlabeled testing clients, which is the target of our paper. Different from FedTHE, testing clients in ATP download the global model as well as the adaptation rates, and adapt the global model locally with their unlabeled data. Labels are not required in ATP testing. - **Experiment**: While FedTHE and our paper use similar techniques to construct feature and label shifts, we consider a stronger fusion of the two. In FedTHE, the authors simply mix samples from different shifted distributions, which mitigates the severity of the distribution shifts: for example, a smaller portion of testing samples are corrupted. As a result, "mixture of test" is less challenging than "corrupted local test", as shown in Table 1 of [15]. Noticing this deficiency, we improved the way feature and label shifts are mixed in our paper, making "hybrid shift" more challenging than both feature and label shifts alone (as shown in Table 1 of our paper). ## 4. New experiments In the rebuttal pdf, we conduct experiments regarding 1.
comparison to FL baselines (including FedTHE), 2. new experiments on PACS, a large-scale dataset, and 3. an additional ablation study. Details are provided in the pdf. Pdf: /pdf/80167af1d603e5aeb546998c27d066cb4f773fa9.pdf
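As a numeric aside on item 2 of Section 2 above (exploiting multiple data sources), here is a hypothetical sketch of the distinction between the mixed distribution $\mathcal{P}_G = \sum_i p_i \mathcal{P}_i$ and per-client distributions; all names and numbers are illustrative, not taken from the paper or the ATP implementation.

```python
import numpy as np

# A global model fit to the mixture P_G = sum_i p_i * P_i sees only the
# weighted average of client statistics; per-client adaptation (as in ATP's
# learned adaptation rates) can additionally exploit each client's deviation
# from that average. Toy values: 3 training clients with 2-D feature means.

client_means = np.array([[0.0, 1.0],
                         [2.0, -1.0],
                         [4.0, 3.0]])
p = np.array([0.5, 0.3, 0.2])            # mixing weights p_i (e.g. data fractions)

global_mean = p @ client_means           # statistic of the mixed distribution P_G
deviations = client_means - global_mean  # per-client information the mixture discards

print(global_mean)   # the average the global model is fit to
print(deviations)    # what per-client adaptation can still exploit
```

The point of the sketch is only that `deviations` is generally nonzero, i.e., the mixture view throws away client-specific structure.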
NeurIPS_2023_submissions_huggingface
2023
Swarm Reinforcement Learning for Adaptive Mesh Refinement
Accept (poster)
Summary: This paper formulates h-adaptive mesh refinement (AMR) as a decentralized partially-observable Markov decision process (Dec-POMDP), and proposes a method, ASMR, with parameter sharing among agents and individual rewards to find refinement strategies. Refinement policies are parameterized by message-passing graph neural networks (MPNs) and trained by Q-learning (specifically, DQN) and proximal policy optimization (PPO). Evaluation was conducted on various 2D elliptic partial differential equations (PDEs) with different domains, showing that the proposed approach is competitive with traditional error-based AMR heuristics and outperforms modified versions of existing learning-based approaches. Strengths: Originality: One main novelty of this paper is the use of independent multi-agent reinforcement learning with individual rewards/returns for AMR. The proposed individual return is a novel solution to the posthumous credit assignment problem that arises in the AMR context, whereby an agent vanishes upon taking a refinement action. It also has the advantage of avoiding the difficulties of the regular multi-agent credit assignment problem that arises from using a single team reward. Quality: Overall, this paper covers most of the aspects of a high-quality applied paper, as it provides sufficient motivation and background for the application and provides a rigorous formulation and empirical evaluation. Clarity: The description of the problem, the proposed method, the notation, and the experimental setup and results are all clearly written. See below for more comments about clarity. Significance: The results in this paper are significant as they provide a path to improving AMR for the finite element method, which is a critical tool in engineering and applied sciences for solving PDEs. Weaknesses: The characterization of the scale of experiments in this paper, in contrast to previous work, should be made more precise.
The abstract of this paper says that previous work "scale only to simple toy examples", which leads readers to expect that this paper deals with real-world problems on large domains. However, the experiments in this paper were conducted only on simple elliptic PDEs on meshes with up to thousands of elements, which appears to be on the same level of complexity and scale as previous work (e.g., AMR on simple hyperbolic equations with up to thousands of elements). The paper's Limitations section also acknowledges this fact, so the abstract and main paper should be made consistent. The proposed approach seems to be extendable to include coarsening actions, but this is not demonstrated in this paper, and hence it is hard to judge whether the proposed approach holds promise if extended. There is existing work that supports coarsening (refs 28,29), so one would expect this to be a requirement for this application area. One would expect that an optimal strategy for the moving heat source problem requires coarsening actions, since regions that were refined due to proximity to the source may need to be coarsened once the source has moved away from them. Experimental evaluation includes randomization over domains, boundary conditions, and initial conditions. However, for trained policies to be useful in applications to real-world problems, one should show that trained policies can work on larger meshes and longer step counts than those seen in training. The proposed method uses global features such as the number of mesh elements and environment time step, which leads to concerns about what would happen if the trained policies are run on test cases where the number of elements and step count go out of the training distribution.
It was confusing to read the phrase "PPO version of VDGN" (line 245) and "VDGN...produce better results with PPO" (line 265), since PPO is a policy optimization method that is fundamentally different from value-based methods, so it is unclear how exactly PPO can be used in combination with VDGN. Upon looking closer, it appears that some of the learning-based baselines described in Section 4 do not match the methods in the previous work (refs 27,28), so it is misleading to label them as such. For the "Argmax" method, (ref 27) trains a stochastic policy on a discrete action space consisting of all the available elements, but the implementation in this paper "predicts a continuous action for each mesh element." It is not clear what that continuous action is and what RL algorithm is used to train this baseline. For the baseline labeled as "VDGN", it appears that the implementation in this paper is actually an on-policy policy optimization method based on PPO, using a decomposed value function. But that is fundamentally different from VDGN in (ref 28), which looks like an off-policy method based on Q-learning. Additional suggestions: - line 37: "shared observations" implies that agents have the same observation, but that is not the case, since each agent is a node in the graph and has a different observation. Perhaps the authors mean to say agents have the same observation space. - It is hard to see what is gained by calling the formulation an "Adaptive Swarm Markov Decision Process", as opposed to the more standard terminology of a Dec-POMDP with parameter-sharing among agents. - line 257: not clear what is meant by "holistic" in this context. Perhaps the authors mean that RL can find optimal strategies for long-term objectives. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Suggestions: - Demonstrate that trained policies generalize to larger meshes and more time steps than those in training.
This is necessary for policies trained on tractable small problems to be deployed in real FEM simulations. - some of the learning-based baselines implemented in this paper do not accurately reflect the methods in the referenced papers, so they should not be described as though they represent those works. - Include empirical evidence that the approach extends to coarsening actions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The statement of limitations should include the absence of empirical results on coarsening, and the lack of other classes of PDEs such as hyperbolic equations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback, particularly noting the innovative use of swarm reinforcement learning for AMR and the comprehensive clarity and rigor with which we presented our findings. We now address individual concerns, including the clarification needed in the experimental section and the accurate representation of related work. > […] the experiments in this paper were conducted only on simple elliptic PDEs on meshes with up to thousands of elements, which appears to be on the same level of complexity and scale as previous work. […] the abstract and main paper should be made consistent > Our experiments use conforming meshes and more refinement levels than existing work. While the considered PDEs are relatively simple, the difficulty of finding good refinements for them is evidenced by the performance of the baseline methods. We provide further discussion in the general answer and will address this distinction in the revision. We also refer to Figure 2 of the accompanying PDF for a refinement with more than $50\,000$ elements. > One would expect that an optimal strategy for the moving heat source problem requires coarsening actions […] Include empirical evidence that the approach extends to coarsening actions. > We solve the Heat Diffusion equation in an inner loop and optimize the mesh with respect to the error of the last step of the equation as described in Appendix B.5. Respecting the movement of the heat source is still important due to the propagation of errors, and this task showcases that our method is able to create a single refined mesh that provides accurate results for the full simulation. We agree that mesh coarsening is important for general AMR strategies and aim to explore this direction further in future work. We believe that iterative refinement from a coarse initial mesh is sufficient for the problems considered in this work, which we discuss in more detail in the general answer. 
> […] one should show that trained policies can work on larger meshes and longer step counts than those seen in training. The proposed method uses global features […] Demonstrate that trained policies generalize to larger meshes and more time steps than those in training. > Figure 2 of the general PDF shows that ASMR, with minor modifications to the training environment and observation space (excluding, in particular, the global information), can generalize to meshes with over $50\,000$ elements on the Poisson problem, achieving a speedup of more than $100$ times when compared to $\Omega^*$. We match the initial element size of this larger mesh to that seen during training, so that the six refinement steps used during training are sufficient. > It was confusing to read the phrase "PPO version of VDGN" (line 245) […] For the "Argmax" method, (ref 27) trains a stochastic policy on a discrete action space consisting of all the available elements […] Some of the learning-based baselines implemented in this paper do not accurately reflect the methods in the referenced papers […] > In Figure 8 in Appendix D.1, we compare VDGN (Ref. 28) with DQN and our “VDGN+PPO” variant. The latter is a policy gradient method and cannot directly apply the value decomposition in the Q function. However, since PPO uses a value function as a baseline to approximate the advantage function, we apply a similar value decomposition there (cf. Line 245). We find in Figure 8 that VDGN (DQN) performs significantly worse than our PPO variant, which is why we use the latter for the main experiments in the paper. *Argmax* (Ref. 27) trains a discrete policy that chooses one of the mesh elements for refinement. This is realized via a categorical distribution over all elements, which is computed via a softmax over the policy outputs for each element. We experiment with both DQN and PPO for our *Argmax* baseline in Figure 8 in Appendix D.1.
For the PPO version, we take the maximum over a continuous scalar per element (cf. Line 239). Since the softmax is order-preserving, this corresponds to acting on a categorical distribution. We directly use a categorical distribution for the DQN variant and apologize that the paper does not state this clearly. In summary, “VDGN+PPO” uses the value decomposition idea of VDGN for its advantage estimation, while the Argmax baseline essentially uses a categorical distribution to select the next element to be refined. We thank the reviewer for pointing out the issues with the phrasing and nomenclature for the baselines and will adapt both in the revision. > line 37: "shared observations" imply that agents have the same observation […]. Perhaps the authors mean to say agents have the same observation space. > We agree that “shared observation space” in Line 37 is technically more correct. We originally chose to use “shared observations” here due to the overlapping receptive fields of the individual agents and will adapt this in the revision. > It is hard to see what is gained by calling the formulation an "Adaptive Swarm Markov Decision Process" […] > We avoid the Dec-POMDP formulation due to agent-wise rewards, changing action and observation spaces, and, crucially, mappings between the agents over time, all of which can be more naturally represented when formulating AMR as a swarm problem. We refer to the general answer for further details. > line 257: not clear what is meant by "holistic" in this context. Perhaps the authors mean that RL can find optimal strategies for long-term objectives > “Holistic” refers to both the optimization of long-term objectives, and to finding an optimal refinement w.r.t. the full receptive field of the agent rather than just the element itself. We hope that this answer addresses the concerns of the reviewer. If any reservations or questions about the paper remain, we encourage the reviewer to reach out to us for further clarification.
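The order-preservation argument above can be checked directly: taking the argmax of per-element scalar outputs selects the same element as taking the mode of the categorical distribution obtained by a softmax over those outputs, because softmax is strictly increasing in each input. A minimal sketch with toy logits (not from the paper):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([0.3, -1.2, 2.5, 0.9])  # one scalar per mesh element (toy values)
probs = softmax(logits)

# Softmax preserves order, so the selected element is identical either way.
assert np.argmax(logits) == np.argmax(probs)
print(int(np.argmax(probs)))  # -> 2
```

This is why the deterministic "max over a continuous scalar per element" and the categorical-distribution view coincide at the mode.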
--- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal, main concerns on complexity claim and lack of coarsening remain; question remains about whether baselines match prior work. Comment: Regarding claim of complexity: I appreciate that the authors ran an additional experiment on more than 50k elements shown in Figure 2 of the rebuttal PDF. However, this single result by itself is not sufficient to justify the way the abstract was written, which makes it sound as though the overall complexity of experiments in this paper is another level higher than previous work. The mostly stationary problems and the smooth nature of heat diffusion considered in this paper are arguably simpler than time-dependent problems where solution features cross over multiple elements between re-mesh steps. Regarding lack of coarsening and the heat diffusion problem: The lack of coarsening is a significant weakness, since it is critical for more complex time-dependent PDEs with rapidly moving features beyond the pedagogical problems used in this paper. The smoothness of the heat diffusion problem used in this paper is not a good representative example of the level of difficulty of time-dependency that needs to be handled in other problems, e.g. Euler equations. Regarding generalization to larger meshes: I appreciate the new result, and I believe this needs to be included in the main paper, because this kind of generalization is a necessary condition for using RL-trained AMR policies in practice. Regarding whether or not the baselines in this paper match exactly with prior work: I appreciate the clarification by the authors. For the baseline labeled "Argmax", is the use of PPO in this paper the same as the use of PPO in ref [27]? In ref [27], the method has a global action space over all elements, and PPO is used as the single-agent RL for this action space. Is this the case for the "PPO implementation of Argmax" in this paper?
Or is the PPO applied independently for each individual element (akin to the decentralized PPO used for the proposed method in this paper)? If applied independently, then this is quite different from ref [27]. For the baseline labeled "VDGN", if the implementation shown in the main paper is the "PPO version" created by the authors, then it is necessary to revise the label names in Figures 4 and 5 to make the distinction clear. Upon closer reading, I see the authors use an MPN for the baseline labeled "VDGN", whereas ref [28] has a graph attention network. This architectural difference alone makes it hard to label the baseline as "VDGN". --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the additional discussion. > I appreciate that the authors ran an additional experiment on more than 50k elements shown in Figure 2 of the rebuttal PDF. However, this single result by itself is not sufficient to justify the way the abstract was written, which makes it sound as though the overall complexity of experiments in this paper is another level higher than previous work. [...] We thank the reviewer for appreciating the additional experiments. Regarding the experiments' complexity, we concentrate on mostly stationary problems, as these problems are important and ubiquitous in engineering [1,2,3,4]. Here, the difficulty lies in finding multiple accurate refinement steps. Our experiments show that all methods can do so for small meshes and shallow refinements (cf. Figures 4 and 5), but that only ASMR consistently provides good refinements for larger meshes. In the revised paper we will temper our claim of increased complexity to clarify the focus on finding multiple accurate refinement steps. The increase in complexity of our experiments with respect to these types of refinement compared to previous work is evident, as the performance of the baselines breaks down with an increasing number of mesh elements.
> The lack of coarsening is a significant weakness, since it is critical for more complex time-dependent PDEs [...] Our work focuses on conforming AMR due to its importance and prevalent use in engineering. While coarsening for conforming meshes is very difficult due to the non-local nature of conforming refinements (cf. Ref. 41), conforming meshes offer more accurate solutions and increased stability of the underlying system, making them the prevalent choice in many applications with stationary solutions [1,2,3]. Our experiments clearly show that our algorithm outperforms existing RL-based algorithms for this crucial use case. We agree that mesh coarsening is an interesting aspect of AMR with non-conforming meshes, and briefly discuss how ASMR can be extended accordingly in Line 165. > Regarding generalization to larger meshes: I appreciate the new result, and I believe this needs to be included in the main paper because this kind of generalization is a necessary condition for using RL-trained AMR policies in practice. We appreciate that the reviewer agrees that the new generalization results are significant. We will gladly include these results and an additional discussion on generalization in the revised main paper. > For the baseline labeled "Argmax", is the use of PPO in this paper the same as the use of PPO in ref [27]? [...] We thank the reviewer for bringing this to our attention. We implemented the Argmax method as described in Ref 27 and found in preliminary experiments that the results are similar to those reported in our paper. In general, the method yields acceptable results on small instances, but does not scale to larger meshes and numbers of refinements. Note that Ref 27 optimized the hyperparameters on a per-task basis, which makes it more difficult to use in practical scenarios. We instead tuned one set of hyperparameters per method across all tasks. > For the baseline labeled "VDGN" [...]
it is necessary to revise the label names in Figures 4 and 5 to make the distinction clear. [...] the authors use an MPN for the baseline labeled "VDGN", whereas ref [28] has a graph attention network. This architectural difference alone makes it hard to label the baseline as "VDGN". We consistently employ MPNs for all RL-learned baselines to ensure a fair comparison between methods. The contribution of VDGN lies in framing AMR as a multi-agent problem and applying a value decomposition to the RL objective. These advancements are independent of the underlying GNN architecture. We show in our experiments that while the value decomposition works well for small instances, it struggles to provide a good reward signal for larger meshes. We expect Graph Attention Networks to show a similar performance to MPNs, and will include them as an ablation in the revision. We will also revise the labels and clarify that the baseline implements the important aspects of VDGN, but differs in some details for the sake of a fair comparison. We want to thank the reviewer again for the continued discussion and will gladly provide further clarifications if requested. [1] Nagarajan, A., & Soghrati, S. (2018). Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation. *Computational Mechanics* [2] Ho-Le, K. (1988). Finite element mesh generation methods: a review and classification. *Computer-Aided Design* [3] Jones, M. T., & Plassmann, P. E. (1997). Adaptive refinement of unstructured finite-element meshes. *Finite Elements in Analysis and Design* [4] Geuzaine, C., & Remacle, J. F. (2009). Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities. *International Journal for Numerical Methods in Engineering*
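The value-decomposition idea discussed in this thread can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the joint value is modeled as a sum of per-agent values, $V_\text{joint} = \sum_i V_i(o_i)$, and in a PPO-style variant such a summed critic can serve as the baseline for advantage estimation.

```python
import numpy as np

def decomposed_value(per_agent_values):
    """Joint value estimate as the sum of per-agent critic outputs."""
    return float(np.sum(per_agent_values))

v_i = np.array([0.5, -0.2, 1.1, 0.3])  # toy critic outputs for 4 element-agents
v_joint = decomposed_value(v_i)

joint_return = 2.0                      # an observed joint return (toy value)
advantage = joint_return - v_joint      # advantage w.r.t. the decomposed baseline
print(round(v_joint, 6), round(advantage, 6))  # -> 1.7 0.3
```

In the off-policy VDGN of ref [28] the same additive structure is applied to Q-values instead, which is the distinction the reviewer is probing.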
Summary: This paper proposes a novel MDP formulation and policy architecture for adaptive mesh refinement. The MDP formulation (ASMDP) defines the components of the MDP in order to account for the changing number of agents across timesteps, and the reward is formulated in a manner to make credit assignment easier and to account for particular aspects of the AMR task (e.g., accounting for the area associated with an element). The paper applies RL algorithms (PPO, DQN) with a graph neural network policy (with a discrete action space of whether or not to refine a particular node). The experiments consider a set of 2D mesh refinement problems and evaluate primarily with top 0.1% error against other learned and oracle baselines, showing that ASMR outperforms the other learned baselines. Strengths: ### Significance - Problem setting: FEM plays an important role in many fields and (given the potential scale of such problems) improvements in accuracy / computational efficiency could have a significant impact on the rate of progress or computation required - Contribution: The proposed problem formulation / algorithm significantly improves performance and reliability on a sample of AMR tasks as demonstrated in the experiments and appendix, which constitutes a significant contribution. ### Originality - Aspects of the MDP formulation (reward function, aspects of the handling of a time-varying number of agents) appear to be novel contributions. - The use of the particular policy architecture appears to be novel (though the use of GNNs for the policy isn't novel) - Extensive experiments investigating the effectiveness of the proposed formulation/algorithm in a variety of AMR settings (demonstrating SOTA performance of the algorithm) ### Quality Approach - The application of RL in this setting is reasonable. The problem formulation / algorithm are formulated reasonably.
Evaluation - The main experimental claim is that ASMR outperforms existing learned methods for AMR, and this is evaluated extensively through experiments across different tasks with appropriate metrics, and the results support the claim. - A number of important additional questions are addressed experimentally by the paper (e.g., ablations, use of top 0.1% error directly as reward, runtime comparison w/ just computing $\Omega^{*}$) ### Clarity - The paper is exceptionally well-written and clear. Figures and tables are explained clearly and contribute to the understanding of the method / experiments. - Sufficient details are given in the appendix to reproduce the paper, and code is provided in the supplement, which appears to be reasonably well-designed / implemented. Weaknesses: ### Originality Algorithm - Ultimately many aspects of the problem formulation / algorithm are based on prior work, and the novel contributions are (1) specific aspects of the problem formulation (e.g., reward function), (2) the application of existing methods to the problem (some of which are novel) (e.g., policy architecture, RL algorithms), and (3) extensive experimental evaluation. ### Quality Algorithm - See questions section Evaluation - Further evaluation of the generalization abilities of the algorithm would be beneficial (e.g., building on D.4 with quantitative metrics) because (I assume) that an RL algorithm for AMR would in practice be applied outside its training distribution. - Relatedly, it’s not clear to me based on the experimental results that practitioners using AMR for FEM would prefer ASMR to the local oracle or local maximum oracle approaches in practice (perhaps ASMR is faster excluding training, but is the reduction in simulation time worth the less consistent performance?). Does increasing the amount of training data (or other parameters) improve the performance of ASMR beyond these oracle baselines across (more of) the tasks? 
### Clarity - The setting / setup of section D.4 on the generalization capabilities is unclear to me Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Problem formulation / algorithm 1. Does it make sense to formulate the problem as a SwarMDP / special case of a DecPOMDP? It seems that in practice (both at training and inference time) the problem is fully observable and decision making is performed in a centralized fashion (which would make it an MDP). Is that not the case? Perhaps this is sort of a semantics question about what constitutes the problem formulation vs solution. 2. Relatedly, assuming the SwarMDP formulation, in what sense is the problem partially observable to individual agents/elements? It seems that each agent makes decisions based on an observation space capturing information about the full state of the system (i.e., the observation graph taken as input seems to fully capture the state of the system). Is that not the case? 3. Given that $\Omega^{\*}$ and $\Omega^{0}$ are available, it seems like the problem could be formulated as a supervised learning problem (SL for AMR is discussed in the related work, but I don’t think the references answer the following question). Given this, why is it necessary to formulate the problem as an RL problem? Is it simply that the mapping from $\Omega^{0}$ to $\Omega^{*}$ is best done in an iterative manner (due to the variable number of nodes) (thereby making it a sequential decision-making problem best addressed by RL)? Evaluation 1. In the runtime comparison of section D.3, it seems like either of the processes being compared could be parallelized effectively on the GPU. Is that not the case? Why is the comparison performed on CPUs? 2. Related to that runtime comparison, what is the additional runtime cost of executing the local oracle approaches? Does it make sense to consider that in the comparison? 3. In figure 14 (right), would you elaborate on why ASMR underperforms the baselines?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses and questions sections Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their detailed evaluation of our paper, particularly emphasizing our method's significant contributions and robust experimental claims. We will now respond to the individual concerns raised by the reviewer, including the generalization capabilities of the method and the importance of framing AMR as a Swarm Reinforcement Learning problem. > Further evaluation of the generalization abilities of the algorithm would be beneficial (e.g., building on D.4 with quantitative metrics) […]. > We recognize the importance of robust generalization for learned AMR methods and show further qualitative experiments in Figure 2 of the general PDF. Initial quantitative evaluations suggest that generalization is consistent across methods for all domains shown in Figure 11, likely due to the local perspective that all methods use. Given ASMR's superior performance, it also seems to perform best in these generalization tests. We will detail these quantitative evaluations in the revision. > perhaps ASMR is faster excluding training, but is the reduction in simulation time worth the less consistent performance? > Figure 10 in the Appendix shows that our method is up to $30$ times quicker than uniform refinement, while the main experiments underscore its improved reliability compared to other RL-driven approaches. To elaborate on this, the large mesh in Figure 2 of the general PDF takes $10$-$12$ seconds to produce, whereas the uniform refinement needs $\sim 20$ *minutes*. Training ML models such as ASMR in early stages of engineering development thus makes it possible to speed up subsequent phases thanks to faster inference times. Good generalization also means trained policies can be applied to future problems, making the training time negligible. > Does increasing the amount of training data (or other parameters) improve the performance of ASMR beyond these oracle baselines across (more of) the tasks?
> Figure 9 in Appendix D.2 illustrates that more training PDEs marginally boost performance. Our preliminary results indicate that larger latent dimensions offer slight performance gains. Since our main goal is to introduce and assess an effective RL framework for AMR, we do not fine-tune the specific architecture to surpass the oracle heuristics. Still, we acknowledge the importance of this optimization for practical usage and will discuss it in more detail in our paper's revised version. > The setting / setup of section D.4 on the generalization capabilities is unclear to me > In Appendix D.4, we apply a singular trained policy to problems outside the training distribution. The policy's training data comprises 100 L Shapes with three-component load functions, as shown in the middle-left part of Figure 11. The rest of Figure 11 depicts ASMR's generalization across various shapes (left to right) and load functions (top to bottom). We'll offer more clarity in our paper's updated version. > Does it make sense to formulate the problem as a SwarMDP / special case of a DecPOMDP? It seems that […] the problem is fully observable […] in what sense is the problem partially observable to individual agents/elements? > We discuss the importance of the adaptive swarm setting in the general answer. While the problem is fully observable, each agent's observation space is localized, with the receptive field depending on the number of message passing steps of the GNN. Each agent can access a shared global vector of limited size (32 in our experiments), keeping the observation space predominantly local. > Given that Ω∗ and Ω0  are available, it seems like the problem could be formulated as a supervised learning problem […] why is it necessary to formulate the problem as an RL problem? 
> We present AMR as an RL problem because of its sequential nature, noting that this additionally allows for non-differentiable refinement and PDE-solving processes, which increases the method's applicability. This approach aligns with existing literature (Refs. 27-29), whereas supervised methods often target error estimators (Refs. 71-72) or specific AMR facets (Refs. 73-74) instead of actual refinement strategies directly. We'll clarify this distinction in our revised related work section. > […] it seems like either of the processes being compared could be parallelized effectively on the GPU. > Since our experiments utilize Scikit-FEM (Ref. 39), which is CPU-only, we perform all runtime tests on a CPU for fairness. In general, both the policy and solving the PDE could be efficiently parallelized on a GPU. Broadly speaking, our model's scalability is approximately linear in the refined mesh's elements, while solving a uniform refinement becomes progressively costlier as the problem size increases. > What is the additional runtime cost of executing the local oracle approaches? > The cost of executing the local oracle heuristics is dominated by the calculation of the reference refinement $\Omega^*$. Since only computing $\Omega^*$ already takes longer than our policy does, we chose not to include them in Figure 10. We will clarify this in the revision. > In figure 14 (right), would you elaborate on why ASMR underperforms the baselines? > In Figure 14, ASMR shows good performance for large meshes but lags behind some baselines for smaller instances. This is mainly because ASMR skips refining larger elements far from the flow's inlet. While these elements don't show a significant maximum error (as seen on the left side of Figure 5), they contribute a considerable total error mass. Many baselines offer relatively uniform refinements, explaining their performance and similarity to the uniform comparison.
Crucially, ASMR recognizes that errors on the left side intensify and accumulate as information flows to the right, explaining its effectiveness for larger element counts. We hope that our answer clarifies some of the concerns of the reviewer, and would be grateful for further communication if any issues persist or new ones arise.
Summary: This paper builds on recent advances in learned adaptive mesh refinement methods to scale to complex physical simulations. Instead of formulating AMR as a reinforcement learning problem with a single agent, the authors formulate AMR as a swarm reinforcement learning problem, in which multiple agents collaborate and share observations with each other. By allowing agents to be split into new agents during the AMR process, a dense reward signal is efficiently fed to the policy function to decide which cells to refine. The experiments demonstrate that the method achieves competitive error against traditional error-based refinement strategies and generalizes well across different problem domains. Strengths: The paper is placed in the current growing literature on ML-based Adaptive Mesh Refinement (AMR), to enable scaling to large and complex systems. The authors propose formulating AMR as a swarm reinforcement learning problem. The paper conducts a wide range of experiments and compares to RL-based baselines. The proposed method outperforms existing RL-based methods in most of the cases. The experiments are thorough, covering all the important ablations I could think of while reading the paper. Weaknesses: While the paper is generally well written, notations introduced in Section 3 are ambiguous. Below I list a few which made the paper harder to understand: * $M^{t,k}$ seems not to satisfy $\forall j,\ \sum_i M^{t,k}_{i,j} = 1$ since it seems to be defined by a Hadamard product. * Although definitions for state and observation are provided, they do not look well-defined; a local view of the observation graph can have infinitely many choices. * The definition of $J_{I}^{t}$ is also ambiguous; the domain of the policy comprises the observation and action spaces, while the reward is defined on the state and action spaces. * What do $V$ and $E$ in the observation graph represent?
As someone not familiar with this field, I found that the subsection on "Systems of Equations" in Section 4 describes very clearly PDE systems defined on various types of (maybe complex) shapes. However, it is still unclear to me how challenging solving AMR for those PDEs is, although they are claimed to be challenging in the introduction section. Can you expand on the details of the PDEs and give some reasons that make solving AMR for those types of PDEs difficult? For the evaluation of the wall-clock time of ASMR, I believe I don't see a plot of the proposed method in the figure in Appendix D.3. If so, it is difficult to compare the proposed method against other baselines. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be interesting to see a study of how the hyperparameters of the MPN (for the policy function) impact the wall-clock time and error estimate. For example, * Do more message passing steps decrease the error estimate and improve the scalability to much larger numbers of mesh elements? * How much does the wall-clock time increase as the number of message passing steps increases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the paper discusses technical limitations in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback. We are pleased to hear that they found our experiments and ablations to be conclusive and convincing. We also thank them for their remarks on the notation in Section 3 and their inquiries about the challenges of solving AMR for specific types of PDEs. > While the paper is generally well written, notations introduced in Section 3 are ambiguous. Below I list a few which made the paper harder to understand > We thank the reviewer for pointing out that the paper is currently unclear in terms of notation. We will clarify it during the revision, and briefly explain the notation in its current form in the following. > $M^{t,k}$ seems not to satisfy $\forall j,\ \sum_i M^{t,k}_{i,j}=1$ since it seems to be defined by a Hadamard product. > $\mathbf{M}^{t,k}$ is defined by the (regular) matrix multiplication $\mathbf{M}^{t,k}=\mathbf{M}^t\mathbf{M}^{t+1}\dots\mathbf{M}^{t+k}$ in Line 150 of the paper. We will clarify that this is a matrix multiplication rather than a Hadamard product in the revision. As the individual matrices satisfy $\forall j,\ \sum_i \mathbf{M}_{ij}=1$, so does their product. > Although definitions for state and observation are provided, they do not look well-defined; a local view of the observation graph can have infinitely many choices. > The state of the system includes the mesh, its boundary conditions and its current solution. The observations encode this state as an observation graph (cf. lines 166-169). The features for this graph depend on the current PDE, mesh and solution, which we detail in lines 222-229. We will adapt the paper to make the connection between the definition of the observation graph and its instantiation clearer. The local observation for each agent stems from its position in the observation graph. Our method is permutation equivariant due to the use of GNNs, ensuring consistent outputs from agents with identical local observations.
We hope that this addresses the concern regarding the “infinitely many choices” and would appreciate the opportunity to provide further clarification if it does not. > The definition of $J_I^t$ is also ambiguous; the domain of the policy comprises the observation and action spaces while reward is defined on state space and action space. > The reward is defined as a function of state and action, while the policy is a function of the deterministic local observations induced by the mesh/graph. The missing link here is the observation function $\xi(s)$. The full equation above line 150 thus reads $J_i^t:=E_{\pi(\mathbf{a}|\xi(\mathbf{s}))}[\sum_{k=0}^{\infty}\gamma^k(\mathbf{M}^{t,k}\mathbf{r}(\mathbf{s}^{t+k},\mathbf{a}^{t+k}))_i]\text{,}$ which we will adapt in the revision. > What do $V$ and $E$ in the observation graph represent? > $V$ and $E$ are the vertices and edges of the observation graph, respectively, and represent individual mesh elements and their neighborhoods. This is briefly explained in Lines 166-167. We will clarify this notation in the revision. > Can you expand on the details of PDEs and give some reasons that make solving AMR for those types of PDEs difficult? > We thank the reviewer for the insightful question and the opportunity to expand on the details of the used PDEs. The general answer discusses which aspects make finding a good AMR strategy difficult for a particular PDE. On a very high level, finding a good refinement strategy for a PDE is difficult if there are certain areas of the domain that are much more interesting for the solution than others. A good AMR strategy must identify these areas and adapt their local resolution to a degree that depends on the local complexity of the problem. > I believe I don't see a plot of the proposed method in the figure in Appendix D.3. If so, it is difficult to compare the proposed method against other baselines.
> Regarding the evaluation of the wall-clock time of ASMR, Figure 10 on Page 22 of the appendix compares our method (represented by solid lines) with the calculation of the reference solution on a uniform refinement $\Omega^*$ (dashed lines). Other baselines are not part of this figure. The other RL-based methods, with the exception of the *Argmax* baselines, have a runtime that is comparable to ours due to their iterative refinement process, but produce significantly worse refinements. Meanwhile, the local oracles use the reference mesh $\Omega^*$, which dominates their runtime. We will add a section in the revised version of the paper to clarify this relationship. > Do more message passing steps decrease the error estimate and improve the scalability to much larger numbers of mesh elements? > Our preliminary experiments with larger network architectures suggest a very small but consistent performance improvement when increasing the latent dimension from $32$ to $64$, whereas more than $2$ message passing steps do not significantly improve performance. For a Poisson problem with approximately $10\,000$ final mesh elements, the total runtime during inference increases by slightly over $10\,\%$ when either the latent dimension is increased to $64$ or the number of message passing steps is increased to $4$. While concrete numbers depend on the sizes of the individual meshes, executing the policy for the above example takes about $25$ to $30\,\%$ of the total runtime of a rollout, whereas calculating the solutions of the intermediate meshes takes up about $40$ to $45\,\%$ of the time. The remaining time is taken up by the construction of the initial mesh and PDE and the calculation of the observation graph. We will provide more detailed results over different mesh resolutions in the revised version. We hope that our answer clarifies some of the details of our method.
Should any issues persist or new ones arise, including ones regarding the notation in Section 3, we would greatly appreciate further communication.
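The agent-mapping algebra discussed in this rebuttal (the column-sum property of $\mathbf{M}^{t,k}$ and the per-agent return $J_i^t$) can be checked with a small numerical sketch. This is our own illustration, not the authors' code: the agent counts, parent assignments, and rewards are hypothetical, and `parent_mapping` is a helper we introduce for the example.

```python
import numpy as np

def parent_mapping(parents, n_prev):
    """Build M^t of shape (n_prev, n_next): column j has a single 1 in the
    row of agent j's parent, so every column sums to 1."""
    n_next = len(parents)
    M = np.zeros((n_prev, n_next))
    M[parents, np.arange(n_next)] = 1.0
    return M

rng = np.random.default_rng(0)

# Hypothetical agent counts over three refinement steps (elements split over time).
counts = [4, 7, 12]
# M^t maps quantities of agents at step t+1 back to their parents at step t.
Ms = [parent_mapping(rng.integers(0, counts[i], size=counts[i + 1]), counts[i])
      for i in range(len(counts) - 1)]

# M^{t,k} = M^t M^{t+1} ... is a (regular) matrix product, and each of its
# columns still sums to 1, as stated in the rebuttal.
M_tk = Ms[0] @ Ms[1]
assert np.allclose(M_tk.sum(axis=0), 1.0)

# Per-agent discounted return: J_i^t = sum_k gamma^k (M^{t,k} r^{t+k})_i,
# with M^{t,0} the identity.
gamma = 0.99
rewards = [rng.normal(size=n) for n in counts]  # r^{t+k} for k = 0, 1, 2
J = rewards[0] + gamma * Ms[0] @ rewards[1] + gamma**2 * M_tk @ rewards[2]
print(J.shape)  # -> (4,): one return per agent at step t
```

The column-sum argument is just $\mathbf{1}^\top (\mathbf{A}\mathbf{B}) = (\mathbf{1}^\top \mathbf{A})\mathbf{B} = \mathbf{1}^\top \mathbf{B} = \mathbf{1}^\top$, which the assertion above verifies numerically.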
Summary: This paper presents a novel framework for adaptive mesh refinement for solving PDEs describing physical systems. The refinement is done as a Markov Decision Process in a Swarm RL setting, with each element of the mesh being an agent in the swarm. The agents' action space is binary, and a learned policy decides whether an element should be refined or not. The system is supervised by a task-agnostic reward. Strengths: 1. The problem of neural AMR for efficient simulation is an important one, and the proposed solution is novel and promising. 2. The ability to have a variable number of agents as the process progresses allows for finer refinements with more degrees of freedom. 3. The proposed refinement strategy doesn't require task-specific heuristics/rewards. 4. The improvement in speed as observed in the supplemental is impressive, especially for Fluid Flow. 5. Code is provided. Weaknesses: 1. While the method handles complex PDEs, it does so only for 2D domains (as observed by the authors). It'd be interesting to see the results in 3D domains. 2. While Message Passing Networks do the job, it might be that Message Passing Attention Networks / Graph Attention Networks as recently proposed might improve the results. I think an experiment in that direction is important to justify MPNs over more recent techniques. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the method be extended/adapted for Mesh Movement instead of AMR? It'd extend the applicability of the proposed framework. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Same as those written in the weakness section - primarily, the results are only shown on 2D domains.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable insight and suggestions, and particularly for highlighting the significance of our work's potential in addressing complex PDEs for efficient simulation. In the following, we want to address the individual points raised by the reviewer. > While the method handles complex PDEs, it does so only for 2D domains (as observed by the authors). it'd be interesting to see the results in 3D domains. > The present work focuses on 2D domains and conforming triangular meshes. We fully agree with the reviewer that extending the approach to 3D domains is an important next step, and briefly touch on the subject in the `Limitations and Future Work' section of the submission. While extending the experiments to 3D is out of the scope of this rebuttal period, we showcase the generalization capabilities of our method to significantly larger 2D domains in the PDF attached to the general answer. > Message Passing Attention Networks/ Graph Attention Networks as recently proposed might improve the results > We thank the reviewer for the suggestion. We use Message Passing Networks instead of e.g., Graph Attention Networks or Graph Convolutional Networks as they are the most general graph neural network architecture and have been used in related work for physical simulations (see e.g., Ref. 32, 37, 38) and learned adaptive mesh refinement (Ref. 27). The contributions of the present work lie in viewing mesh refinement as a swarm problem and providing spatial rewards and appropriate agent mappings for this problem. As such, the method is agnostic to the specific kind of GNN used. Still, we agree with the reviewer that a more optimized network architecture may improve downstream performance, which is important for practical applications of the presented method. We will add a discussion on the choice of network architecture to the revised paper. > Can the method be extended/adapted for Mesh Movement instead of AMR? 
> We thank the reviewer for the intriguing question. Mesh movement operations or R-Refinement could be implemented by extending the action space to include a velocity for each element’s barycenter and adapt the agent mapping based on overlapping element regions. Relatedly, we briefly discuss in lines 163-165 of the paper how the method can be extended to e.g., mesh coarsening operations. We find this to be a promising direction for future work, and provide a more detailed explanation for why this is not done in the present paper in the general answer. We hope that our responses address the reviewer's concerns, especially with respect to future extensions of the method. Should there be any further queries or unresolved issues, we encourage them to reach out to us.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback on our submission. We are delighted to hear that the reviewers found our reward function and agent mapping innovative and useful (Tf6s), the speedup impressive (fwaD), the experiments convincing (ymhS), the improved stability of our method significant (LCc4), and the overall presentation to be of high quality and well-written (qjoP). We want to use this global response to address common concerns. > Can the method generalize to larger and different problems? (Tf6s, LCc4, qjoP) The most common concern raised by the reviewers was that of generalization to larger problems. We agree that it is crucial for the method to scale to larger (Tf6s, qjoP) and different (LCc4) problems and thus provide more significant speedups compared to the oracle heuristics. In Appendix D.4, we illustrate our approach's effective generalization to new load functions and similarly sized domains. To generalize to larger domains, we adjust the observation space by removing global features (as suggested by qjoP), messages, and boundary node distances. We employ data augmentation: modifying training PDEs to mimic larger mesh segments by altering boundary conditions and load functions. The means of the load function are sampled from a centered unit Gaussian, allowing modes outside the mesh. We use domains with random holes (Appendix B.1) for varied initial meshes and apply random Gaussian loads to selected boundary parts as 'inlets'. See examples in Figure 1. We add L2 regularization with a weight of $3\times10^{-4}$ to combat overfitting and omit the per-domain normalization of Lines 181-183. Due to time constraints, we only train for $200$ instead of $800$ iterations, reducing training time to a few hours on a CPU. We emphasize that these changes only affect the observation space and the environment and leave the underlying algorithm unchanged. We evaluate the resulting policy on a larger, spiral-shaped domain.
This domain lies in $(0,20)^2$ compared to the $(0,1)^2$ domains seen during training and uses initial meshes with comparable element sizes. ASMR creates highly refined meshes with tens of thousands of elements, as showcased in Figure 2 in the attached PDF. The shown mesh has $50236$ elements, which is multiple times larger than any refinement shown by an existing RL-based method. Creating and solving this mesh takes $10$-$12$ seconds, whereas computing a uniform reference on the same domain takes over $20$ minutes and significantly more memory, or more than $100$ times as long. The error compared to the uniform mesh lies at $3$-$4\,\%$, depending on the metric, making it useful for most engineering applications. We want to thank the reviewers for pointing us towards these findings and will incorporate these insights into the paper revision. We believe that these additional results significantly strengthen the claims of our paper and the applicability of our method. > Mesh coarsening, 3-D domains and more complex problems are important for this line of work. (Tf6s, fwaD, qjoP) While mesh coarsening is an important aspect of AMR, it is not essential for stationary problems with a unique solution. In the present submission, we prioritize conforming refinements due to their increased precision and numerical stability over non-conforming refinements [1]. For these refinement types, coarsening is complex due to dependencies on neighboring elements, as briefly discussed in lines 159-162. We do not consider 3D meshes here, as including them would not contribute to our core methodology, but plan to include them in future work. We choose problems from diverse fields to evaluate the method across different solutions, but keep the underlying PDEs simple to ensure manageable computational demand. The difficulty of finding good refinements is evidenced by the comparatively poor performance of existing RL-based methods, as mentioned in the previous answer.
> How does the method improve over the baselines, and why are the proposed tasks difficult? (Tf6s, ymhS, qjoP) Our experiments include various elliptic PDEs on conforming triangular meshes with up to $6$ refinement steps across different domains. In contrast, existing work utilizes simpler domains and non-conforming quadrilateral meshes, often with only $2$ refinement levels (Ref. 27, 28) or problems that can be refined from local information (Ref. 29). The challenges in our tasks arise from the need for attention on multiple parts of the mesh (e.g., the Poisson problem) and pinpointed refinements (e.g., the Linear Elasticity problem). This complexity is evidenced by the baseline methods significantly underperforming ASMR. Further, ASMR can provide high-quality refinements for significantly larger instances than any existing RL-based method, as shown in Figure 2 of the provided PDF. We recognize the need to differentiate between PDE complexity and refinement intricacy (see also [1]) and will address this in the revised paper. > Why use a SwarMDP instead of a Dec-POMDP? (LCc4, qjoP) We adapt the SwarMDP framework (Ref. 31) to agent-wise rewards, changing the action and observation spaces to accommodate the varying number of agents, and, crucially, adding mappings between these varying agents over time. While related work (Ref. 28) fits the varying action and observation spaces into a Dec-POMDP through the use of dummy states, viewing the mesh as a swarm system is conceptually simpler, makes the permutation-equivariance of the agents explicit, and allows for a more natural integration of both the agent-dependent reward and the mapping between agents. For the revision, we will add a brief discussion on this framework and its advantages. We want to thank the reviewers again for their time and feedback and appreciate further communication if any concerns remain or appear during the discussion period. [1] Nagarajan, A., & Soghrati, S. (2018).
Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation. *Computational Mechanics*.
NeurIPS_2023_submissions_huggingface
2023
Summary: The article presents an RL-based framework for adaptive mesh refinement in FEM. In this framework, each element in the mesh is perceived as an agent from RL's perspective. The observation for each agent is then constructed using a variety of local and global features relevant to the PDE of interest. The reward function is set up to account for the impact of an agent's action in terms of error reduction and increase in compute cost, and also for future agents that are a result of a decision at the current time (due to mesh refinement). The method is demonstrated to work for a variety of PDEs and to be somewhat competitive with traditional error-based strategies. Strengths: Use of graph neural networks helps tackle unstructured meshes. The agent mapping $M$ helps account for future agents that result from an agent due to mesh refinement. The reward formulation helps account for error reduction and increase in compute cost due to mesh refinement. Weaknesses: While there is some novelty in terms of reward formulation and agent mapping, I find that the biggest weakness of the method is that it is only demonstrated to work for simple problems on meshes with only thousands of elements. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors only talk about how the formulation can be used for de-refinement, but do not actually include it in their method. The authors do not compare their method to existing estimators such as the one by Zienkiewicz-Zhu, which is fairly inexpensive and commonly used. The demonstrated generalizations are for simple PDEs on relatively small meshes.
Thus, I find the use of the phrase "complex simulations" in the abstract arguable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their insightful feedback, particularly on the significance of our reward formulation and agent mapping. We acknowledge their concerns about the scalability of our method and the necessity of a comparison with existing error estimators. In the following, we address each point raised by the reviewer in detail. > While there is some novelty in terms of reward formulation and agent mapping […] > We thank the reviewer for recognizing the innovative aspects of our work. We want to highlight that the novelty of our approach, particularly in the reward formulation and agent mapping, plays a fundamental role in the success of our method. This is evidenced by the performance of our method when compared to the baselines, which mainly differ from our method in these key aspects. > The biggest weakness of the method is that it is only demonstrated to work for simple problems on meshes with only thousands of elements. > We provide new generalization experiments in our general answer and the accompanying PDF that show how ASMR generalizes to meshes with tens of thousands of elements when slightly adapting its training setup. The experiments in the main paper also show that existing RL-based methods struggle on the presented tasks, suggesting that the novel reward formulation and agent mapping in the present work mark an important step towards learning mesh refinement strategies for more complex problems. > The authors do not compare their method to existing estimators such as the one by Zienkiewicz-Zhu. > We sincerely thank the reviewer for suggesting a comparison to additional established methods for AMR, such as the Zienkiewicz-Zhu (ZZ) error estimator. We apply this error estimator to a percentage-based heuristic that refines all elements that are within the top $k\,\%$ of errors, and compare the results with ASMR on the Poisson problem in Figure 3 of the uploaded PDF.
We find that the ZZ estimator is significantly outperformed by ASMR for the Poisson problem when applied as is. Preliminary visualizations suggest that this is likely due to the ZZ error estimate missing entire modes of the load function. To compensate, we also compare to variants that uniformly refine the initial mesh one or two times (ZZ Error ({1,2}x Uniform)) before starting to apply the ZZ-error-based heuristic. When applying two initial refinements, the error estimate is competitive with ASMR on the top $0.1\,\%$ error, but significantly worse on the mean error metric. These results suggest that the ZZ error can provide competitive refinement when applied to suitable meshes, but that ASMR is preferable for general tasks. We will revise the paper to include the ZZ error estimate for all considered tasks. > The authors only talk about how the formulation can be used for de-refinement, but do not actually include it in their method. > We acknowledge that mesh de-refinement is important for general AMR strategies and aim to explore this direction further in future work. In the present submission, we focus on AMR for conforming triangular meshes and consider problems where de-refinement and coarsening operations are not crucial. We provide additional discussion in the general answer. > The method is demonstrated to work for a variety of PDEs and be somewhat competitive with traditional error-based strategies. > While traditional error-based strategies outperform our method in select scenarios, a considerable advantage of our approach lies in its enhanced computational speed. We show in Appendix D.3 that computing a reference mesh, which is required for the local oracles, is $2$ to $30$ times slower than our method. This advantage in computational speed is further amplified when considering larger meshes such as the one shown in Figure 2 of the uploaded PDF.
Here, the refined mesh consists of slightly more than $50\,000$ elements, and computing it is more than $100$ times faster than generating a uniform refinement. We earnestly hope that our clarifications and provided materials address the concerns raised. If there are any lingering reservations or questions about the paper, we encourage the reviewer to reach out to us for further clarification.
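For concreteness, the percentage-based heuristic that the rebuttal pairs with the ZZ estimator can be sketched as follows. This is our own schematic, not the authors' code: `element_errors` stands in for the per-element output of an actual Zienkiewicz-Zhu recovery estimator, which is not implemented here, and the error values are hypothetical.

```python
import numpy as np

def top_k_percent_refinement(element_errors, k=10.0):
    """Mark all elements whose estimated error lies within the top k% for
    refinement. Note that ties at the threshold may mark slightly more than
    k% of the elements."""
    errors = np.asarray(element_errors, dtype=float)
    n_refine = max(1, int(np.ceil(len(errors) * k / 100.0)))
    threshold = np.sort(errors)[-n_refine]  # n_refine-th largest error
    return errors >= threshold  # boolean refinement mask

# Hypothetical per-element error estimates for a 10-element mesh.
errors = np.array([0.1, 0.9, 0.05, 0.3, 0.8, 0.02, 0.4, 0.6, 0.15, 0.25])
mask = top_k_percent_refinement(errors, k=20.0)
print(mask.sum())  # -> 2: the two highest-error elements are marked
```

In an actual AMR loop, this selection step would alternate with re-solving the PDE and re-estimating errors on the refined mesh, as in the iterative process the rebuttal describes.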
On the impact of activation and normalization in obtaining isometric embeddings at initialization
Accept (poster)
Summary: A study of layernorm and Gram matrix isometry is presented in both theory and empirical results. The results explain the effectiveness of transformer (and similar) architectures. Applying layernorm after the activation seems to be the correct strategy when we are interested in the isometry of the output Gram matrix. Strengths: Good theoretical contributions in the field of explaining the effect of layernorm in deep neural networks via the concept of Gram matrix isometry. The authors also added a large number of illustrative empirical results that help to drive home the main points of the paper. Weaknesses: My main issue with the paper is not with the theory, but how it is presented in the main paper. The appendices appear to be a fairly thorough mathematical exposition, but the same rigorous exposition is absent in the main paper. The main paper should be readable without referencing the appendices. Below, I itemize places where I feel more rigour in exposition would be needed: From Theorem 1: "in isometry as a function of fluctuations in norms". What specifically is meant by "fluctuations in norms"? Isn't this statement obvious: "corollary 2 shows that the isometry of layer normalization is deterministic and does not rely on the random weights."? In the setup of Corollary 2, no weight matrix is applied. So how would this question even be raised? A completely different issue is that if you were to compare I(G_l) and I(G_{l+1}), there a weight matrix and activation function are applied. The next sentence about empirical proof is also out of place in this context. In Section 3.1, what is meant by the sentence "which implies orthogonalization of these results."? Orthogonalization of what? It would be good to expand the properties of Def 3 a bit more. Now we only have that it is "non-linearity strength for activations". The paper has half a page of space for extra content. I would appreciate a few paragraphs of extra content in this context. About Eq.
(7), you mention that $G_*^l$ is a sequence that approximates $G^l$. From these paragraphs it is hard to see where the sequence is coming from. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: The gap between the theory (Theorem 4) and what is observed in Fig. 4 is quite big. As future work, is there a clear path to obtaining a tighter bound? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive review of our work. > My main issue with the paper is not with the theory, but how it is presented in the main paper. ... some rigorous exposition is absent in the main paper. Main paper should be readable without referencing to the appendices. We thank the reviewer for this constructive feedback and will address it in the revised manuscript. > ... From Theorem 1: "in isometry as a function of fluctuations in norms". What specifically is meant by "fluctuations in norms"? To address this ambiguity, we will replace "fluctuation of norms" by "variations". > - Isn't this statement obvious: "corollary 2 shows that the isometry of layer normalization is deterministic and does not rely on the random weights."? In the setup of Corollary 2, no weight matrix is applied. So how would this question even be raised? A completely different issue is that if you were to compare $I(G_l)$ and $I(G_{l+1})$, there a weight matrix and activation function are applied. The next sentence about empirical proof is also out of place in this context. This is a great and nuanced observation. We agree with the reviewer that the current phrasing does seem ambiguous. These two paragraphs (lines 124-130) stress that the non-decreasing isometry under `Layer Norm` is not a probabilistic statement, but holds at all times. In order to track and analyze the evolution of isometry through depth, we are fundamentally faced with the following chain $\dots \to x^l \to h^l \to x^{l+1} \to \dots$ where $h^{l} = \sigma(W^l \tilde{x}^l)$ and $x^{l+1} = \text{LayerNorm}(h^l)$. Since the $x^l$'s and $h^l$'s clearly depend on the preceding weights, one may attempt to analyze their isometry by relying on the stochasticity of the weights, and perhaps on their particular distribution (namely, Gaussian weights).
Corollary 2 and the following paragraphs assert that this isometry-preserving property does not rely on such stochastic or mean-field approximations and holds at all times, implying that the isometry of the hidden representations $x^l$ cannot be smaller than the isometry of the hidden activations $h^l$. The empirical evidence in the PDF attached to the general response solidifies that, regardless of other components, normalization layers preserve or enhance isometry: namely, in an MLP at initialization (Figure 2, left) and after training (Figure 2, right), and in pretrained transformers (Figure 3). > - In Section 3.1, what is meant by the sentence "which implies orthogonalization of these results."? Orthogonalization of what? Assume that $\mathcal{I}(h_\ell) = 1$; then the outputs become orthogonal to each other, since the maximum determinant is obtained for orthogonal vectors. We will add more details about the notion of orthogonality and isometry. > - It would be good to expand the properties of Def 3 a bit more. Now we only have that it is "non-linearity strength for activations". The paper has half a page of space for extra content. I would appreciate a few paragraphs of extra content in this context. We acknowledge the need for further elaboration on non-linearity strength. We have a novel experiment confirming that the non-linearity strength predicts the convergence of SGD in early epochs, discussed in the general response. We will include additional paragraphs to explain this concept more thoroughly in the revised manuscript, emphasizing its computation and its influence on training and isometry. > - About Eq. (7), you mention that G_*^l is a sequence that approximates G^l. From these paragraphs it is hard to see where the sequence is coming from. Appendix A.1 elaborates on the link between $G_*^l$ and $G^{l}$.
The sequence $G_*^l$ refers to the mean-field approximation of the Gram matrix dynamics in the infinitely wide network regime, derived based on previous mean-field approaches in [1,2,3]. Here is a breakdown of why $G_*^l$ approximates $G^l$: $G^l$ is the actual Gram matrix of the representations at layer $l$. Given the stochastic weights, the dynamics of this sequence are *stochastic*. The mean-field approximation $G_*^l$ is constructed to represent the average, or mean, dynamics, such that conditionally it holds that $E[G^{l+1} \mid G^l = G_*^l] = G_*^{l+1}$. In contrast to the exact Gram sequence $G^l$, the mean-field Gram sequence $G_*^l$ is *deterministic*. As we increase the width of the neural network, the dynamics of the Gram matrices conditioned on the previous layer become highly concentrated around their mean. This implies that at infinite width, the evolution of the mean-field Gram matrices $G_*^l$ exactly describes the evolution of the Gram matrices $G^l$. > ## Questions: > - The gap between the theory (Theorem 4) and what is observed in Fig. 4 is quite big. As future work, is there a clear path to obtaining a tighter bound? This is an excellent question. A likely path to a tighter bound is establishing a tighter relation between $\gamma(G)$ (Definition 4, appendix) and the isometry gap. **References** [1] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations, 2017. [2] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 2016. [3] Yang, Greg, et al. "A mean field theory of batch normalization." ICLR (2019). --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a thorough rebuttal and clarification of my concerns.
I am quite satisfied with these explanations and I do not have any other issues. --- Reply to Comment 1.1.1: Comment: > I am quite satisfied with these explanations and I do not have any other issues. We greatly appreciate the reviewer's positive evaluation of our work. We also sincerely thank the reviewer for their time and their insightful feedback.
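To make the isometry-gap discussion in this thread concrete, here is a small numerical sketch. The isometry measure is *assumed* to be the ratio of the geometric mean of the Gram eigenvalues ($\det(G)^{1/n}$) to their arithmetic mean ($\mathrm{tr}(G)/n$), which matches the stated extremes (0 for a degenerate matrix, 1 for an orthogonal one); the paper's exact definition may differ. The depth, width, and tanh activation are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def isometry(G):
    # Assumed isometry measure: geometric mean of the eigenvalues of G
    # (det(G)^(1/n)) divided by their arithmetic mean (tr(G)/n).
    # By the AM-GM inequality it lies in [0, 1]: 0 for a degenerate G,
    # 1 iff G is a multiple of the identity (orthogonal representations).
    n = G.shape[0]
    return np.linalg.det(G) ** (1.0 / n) / (np.trace(G) / n)

def layer_norm(X):
    # Per-sample LayerNorm (columns are samples): center the d features,
    # then rescale each column onto the sphere of radius sqrt(d).
    d = X.shape[0]
    Xc = X - X.mean(axis=0, keepdims=True)
    return np.sqrt(d) * Xc / np.linalg.norm(Xc, axis=0, keepdims=True)

d, n, depth = 500, 4, 40
# Nearly degenerate input batch: n almost-identical columns.
X = rng.standard_normal((d, 1)) + 0.1 * rng.standard_normal((d, n))
gaps = []  # isometry gap -log I(G) after each layer
for _ in range(depth):
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    X = layer_norm(np.tanh(W @ X))
    G = X.T @ X / d  # n x n Gram matrix; unit diagonal after LayerNorm
    gaps.append(-np.log(isometry(G)))
# The gap starts large (near-degenerate batch) and decays towards 0,
# illustrating the claimed isometry bias of LayerNorm + non-linearity.
```

With a linear activation in place of `tanh`, the near-perfect correlations between columns persist and the gap stays large, consistent with the thread's point that the decay is driven by the non-linearity.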
Summary: The paper investigates normalization and non-linear activation functions from the perspective of isometry. Strengths: - This paper provides interesting analysis into normalization and non-linear activation from the perspective of isometry. - The paper has provided interesting discussion on future research. Weaknesses: - Overall the paper looks a bit rushed or not polished. There are many typos, for example line 31 "about the the". - The paper may benefit more from slightly larger scale data. For example, using MNIST could be a good way to improve the paper without too much additional work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - One question while reading this paper is, why do we want to preserve the distance and angles between the input data points after mapping them to the feature space? Suppose in image recognition, the desired representation would not preserve the distance and angles in the input data points? - I think the previous question leads to another question which is also discussed in the paper: why would we want to use isometry to measure and explain layers? Despite many interesting points in the paper, I would suggest the authors add more motivation or perhaps cite some past literature to better justify the motivation of this interesting work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Here are some minor comments: 1. I would recommend the authors polish the paper more. For example, for the figures, I recommend the authors add more captions for readers to better follow and understand the message. 2. I would also suggest improving the presentation a bit.
For example, leaving two plots on the last page (page 9) may not be a good idea for the readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments helping us improve the writing. > The paper may benefit more from slightly larger scale data. For example, using MNIST could be a good way to improve the paper without too much additional work. To address the reviewer's concern about the scale of data and to substantiate the validity of our theorems, we repeated the experiments for MNIST (Figure 4) and CIFAR100 (Figure 1), and also on the Wikidataset for large language models (Figure 2), in the PDF attached to the general response. Let us remark that, as the theoretical analysis establishes, the results hold independent of the dataset as long as some mild conditions are met, i.e., the samples in the input batch are linearly independent. > One question while reading this paper is, why do we want to preserve the distance and angles between the input data points after mapping them to the feature space? Suppose in image recognition, the desired representation would not preserve the distance and angles in the input data points? Previous studies demonstrate that isometry at initialization leads to a faster convergence of gradient descent (see the general response). This is further illustrated in Figure 1 of the document attached to the rebuttal response. In this figure, activation functions with a stronger isometry bias (i.e., a larger constant $\beta_0$) lead to a faster convergence of stochastic gradient descent. Remarkably, this is only a matter of initialization: the representations will change after training. > I think the previous question leads to another question which is also discussed in the paper: why would we want to use isometry to measure and explain layers? Despite many interesting points in the paper, I would suggest the authors add more motivation or perhaps cite some past literature to better justify the motivation of this interesting work. We thank the reviewer for raising this lack of clarity on the motivation and background.
Several works have attributed the success of normalization and residual/skip connections in training deep neural networks to faithful signal propagation (also termed dynamic isometry) through depth [1-7]. Here are a few past works that are directly related to the notion of isometry. - The dynamic isometry or signal propagation property postulates that, in order to ensure fast training [1,2], the network output must be sensitive to changes in the input. This hypothesis is employed by [4] to train a 10,000-layer CNN via a proper weight initialization, without skip connections or normalization layers. - In a similar attempt, [5] develops multiple methods to obtain isometry in the transformer architecture that train with a speed comparable to the model with skip and normalization layers. - [6] proposes shaping activations to impose isometry and empirically shows this allows training of very deep neural networks. - [7] designs activation functions that impose isometry, and hence significantly enhance the training of networks with batch normalization. We included an experimental result in the general response demonstrating applications of our results, and will provide an extensive literature review motivating our theoretical study. > # Limitations: > Here are some minor comments: > - I would recommend the authors polish the paper more. For example, for the figures, I recommend the authors add more captions for readers to better follow and understand the message. We thank the reviewer for pointing out this lack of clarity in the captions. We have added key details regarding the data and experimental procedure to each figure. > - I would also suggest improving the presentation a bit. For example, leaving two plots on the last page (page 9) may not be a good idea for the readers. Overall the paper looks a bit rushed or not polished. There are many typos, for example line 31 "about the the". We will improve the writing. Thank you very much for pointing out the typo.
--- **References** [1] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations, 2017. [2] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 2016. [3] Hadi Daneshmand, Amir Joudaki, and Francis Bach. Batch normalization orthogonalizes representations in deep random networks. Advances in Neural Information Processing Systems, 2021. [4] Xiao, Lechao, et al. "Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks." International Conference on Machine Learning, 2018. [5] He, Bobby, et al. "Deep transformers without shortcuts: Modifying self-attention for faithful signal propagation." arXiv preprint arXiv:2302.10322 (2023). [6] Zhang, Guodong, Aleksandar Botev, and James Martens. "Deep learning without shortcuts: Shaping the kernel with tailored rectifiers." (2022). [7] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in Neural Information Processing Systems 30 (2017). --- Rebuttal Comment 1.1: Comment: I thank the author for providing this thorough rebuttal. The additional experiments on larger datasets seem very interesting and promising. Some concern left (which I think is out of the scope of the paper) is whether faster convergence (in the form of decreasing loss) means that the network is trained better. Previous literature and experiments have also demonstrated that a faster drop in loss could lead to overfitting on the training dataset. This could be alleviated by reporting the testing accuracy of the experiments in Figure 1 and Figure 2 of the additional material. The biggest concern is still that the paper, in its current form, does not convey the message too well.
There needs to be a quite substantial amount of work done on polishing the paper (e.g. adding motivation like the author wrote in the rebuttal and changing the formatting and wording of the work). I can only raise my score to borderline accept and I hope the author tries harder to polish this interesting work in later versions. --- Reply to Comment 1.1.1: Comment: > The additional experiments on larger datasets seem very interesting and promising. We sincerely thank the reviewer for increasing their score based on our rebuttal response and for appreciating the experiments. > Previous literature and experiments have also demonstrated that a faster drop in loss could lead to overfitting on the training dataset. This is an excellent point raised by the reviewer; we will add the test loss plots to the camera-ready version. > The biggest concern is still that the paper, in its current form, does not convey the message too well. There needs to be a quite substantial amount of work done on polishing the paper (e.g. adding motivation like the author wrote in the rebuttal and changing the formatting and wording of the work). I can only raise my score to borderline accept and I hope the author tries harder to polish this interesting work in later versions. We would like to point out that our main contribution is the theoretical results about isometry under normalization and activation layers. None of the rebuttal responses imply any major changes to these parts (except for small typos and additional details in captions). The rebuttal response, which was prepared in response to the reviewers, will be added as an additional page. This is permitted under the formatting instructions for NeurIPS, as quoted below: > If your submission is accepted, you will be allowed an additional content page for the camera-ready version. Please let us know if there are any further questions or points that need clarification.
Summary: The paper is a theoretical analysis, in the mean-field regime, of the second-to-last Gram matrix of an MLP. It proves that the presence of layer normalization in conjunction with non-linear activation functions biases the input-output mapping at initialization towards an isometry. Strengths: This paper presents a novel theoretical contribution that aims to enhance our understanding of the interaction between layer normalization and non-linear activation functions at initialization. The proof relies on general assumptions and provides a justification for the utilization of pre-layer-normalized representations in order to prevent training instabilities associated with the collapse of the rank of the input-output Gram matrix in a multi-layer perceptron (MLP) network. Weaknesses: The experimental tests are not well described; for instance, the authors do not specify the datasets used, neither in the main text nor in the captions (see also the questions section). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What kind of input datasets are used in Fig. 2-3-4? 2. Why is the unit sphere in $d$ dimensions denoted as the $\sqrt d$-sphere and not the $d$-sphere? 3. Minor: typo l 146: comma in place of full stop. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not really. Perhaps the main limitation is that the authors limit their analysis to MLP architectures. This makes the results of their paper not immediately applicable to transformers (due to the presence of the attention block), which are of much greater practical interest. Maybe this should be stated as a limitation of the current work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and their constructive comments. > ## Limitations: > - Not really. Perhaps the main limitation is that the authors limit their analysis to MLP architectures. This makes the results of their paper not immediately applicable to transformers (due to the presence of the attention block), which are of much greater practical interest. Maybe this should be stated as a limitation of the current work. This is a great suggestion by the reviewer. Let us remark that Theorem 1 and Corollaries 2 & 3 hold for any architecture and regardless of the weight distribution. Figure 3 in the `Rebuttal PDF` shows that after each Layer Normalization layer in GPT2 (pretrained), the isometry gap decays. Given that the established isometry property of normalization is deterministic, this is a consequence of these theoretical results. > ## Weaknesses: > The experimental tests are not well described; for instance, the authors do not specify the datasets used, neither in the main text nor in the captions (see also the questions section). We thank the reviewer for pointing out these potential points of ambiguity regarding the experimental section. We will address this by adding detailed explanations regarding the experiments. > ## Questions: > - What kind of input datasets are used in Fig. 2-3-4? The input used in these figures was artificially created to have very low isometry (close to degenerate), so that it highlights the effects of the normalization and activation layers. More specifically, the rows of the $d\times n$ input matrix are drawn from $N(0, C)$, where $C$ has a highly skewed eigenvalue distribution. The same holds for Figures 5, 7, and 8. We thank the reviewer for pointing out this missing information in the main text. These details have been added to the captions and main text. > - Why is the unit sphere in $d$ dimensions denoted as the $\sqrt{d}$-sphere and not the $d$-sphere?
$\sqrt{d}$ denotes the radius of the sphere, not its dimensionality. The classical definition of LN projects each sample onto the sphere with radius $\sqrt{d}$, as can be seen below: \begin{align} LN(x) = \frac{x-\bar x}{\sqrt{\frac1d \sum_{i=1}^d(x_i-\bar x)^2}} \implies \|LN(x)\| = \frac{\sqrt{\sum_{i=1}^d(x_i-\bar x)^2}}{\sqrt{\frac1d\sum_{i=1}^d(x_i-\bar x)^2}} = \sqrt{d}. \end{align} > - Minor: typo l 146: comma in place of full stop. Fixed. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. They have addressed my concern regarding the experimental test on the transformer architecture. I recommend that they include this explanation in the appendix if the paper is accepted. I still have a small question about the notation used: I know that $\sqrt d$ is the radius of the sphere where the data is projected after LN. My question was about how the authors denote this sphere. From what I know, and as shown in [1] and [2], the $d$-dimensional sphere is usually called the $d$-sphere. [1] https://en.wikipedia.org/wiki/N-sphere [2] https://mathworld.wolfram.com/Sphere.html --- Reply to Comment 1.1.1: Comment: > the $d$-dimensional sphere is usually called the $d$-sphere. We thank the reviewer for bringing this nuanced notation issue to our attention. We will thus replace these mentions with "$d$-sphere with radius $\sqrt{d}$". > They have addressed my concern regarding the experimental test on transformer architecture. We will make sure to include it in the appendix in the revised manuscript. We again thank the reviewer for this valuable and interesting suggestion. In light of these additional experiments, should you find it fitting, we would be grateful for any potential reconsideration of our score.
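The radius-$\sqrt{d}$ property derived above is easy to verify numerically; here is a minimal sketch (the affine-free LayerNorm and the dimension $d=64$ are our own illustrative choices):

```python
import numpy as np

def layer_norm(x):
    # LayerNorm without learnable affine parameters: subtract the feature
    # mean, then divide by the feature standard deviation.
    xc = x - x.mean()
    return xc / np.sqrt((xc ** 2).mean())

d = 64
rng = np.random.default_rng(0)
x = 3.0 * rng.standard_normal(d) + 5.0  # arbitrary input with nonzero mean
y = layer_norm(x)

# ||LN(x)|| = sqrt(d) for every (non-constant) input; here sqrt(64) = 8.
print(np.linalg.norm(y))
```

The identity holds for every non-constant input, since the normalization factor is exactly $\|x - \bar{x}\| / \sqrt{d}$.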
Summary: This paper studies the isometry of the Gram matrix under the effect of BN, LN and activations at initialization. For BN and LN, some results are obtained. For activations, most are empirical results. Strengths: Studies multiple factors that affect the isometry of the Gram matrix. Weaknesses: 1) The novelty of the theory for BN and LN is small. 2) For activations, the conclusion is weak. It is not strange to see that different activation functions have different effects. But so what, and why? 3) The paper does not justify why isometry is studied. Its connection to neural networks, such as training speed or generalization, should be demonstrated either empirically or theoretically. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time reviewing our work, providing constructive feedback, and pointing out potential points of confusion. > This paper studies the isometry of the Gram matrix under the effect of BN, LN and activations at initialization. For BN and LN, some results are obtained. For activations, most are empirical results. Theorem 4 is a theoretical result on activations and one of our main contributions. This theorem characterizes the isometric properties of activations. The empirical evidence presented is only meant to substantiate and complement this theoretical contribution. > It is not strange to see that different activation functions have different effects. But so what, and why? The paper does not justify why isometry is studied. Its connection to neural networks, such as training speed or generalization, should be demonstrated either empirically or theoretically. We directly address these points in the supplementary experiments PDF attached to the general response (see Figures 1-4). > The novelty of the theory for BN and LN is small. The isometry bias of normalization has been the subject of various theoretical studies, including [1-5]. While [1-4] establish an isometry bias only in a local neighborhood of certain inputs, we prove a global isometry bias for a wide range of inputs. Furthermore, [5] only proves global stability for MLPs with linear activations and BN. Our theoretical analysis derives a clear connection between non-linear activations and isometry. While previous results only characterize this bias for networks with random weights, Theorem 1 proves the isometry result for all networks with normalization layers. --- **References** [1] Yang, Greg, et al. "A mean field theory of batch normalization." ICLR (2019). [2] Li, Mufan, Mihai Nica, and Dan Roy. "The neural covariance SDE: Shaped infinite depth-and-width networks at initialization." Advances in Neural Information Processing Systems (2022). [3] Samuel S.
Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations, 2017. [4] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 2016. [5] Hadi Daneshmand, Amir Joudaki, and Francis Bach. Batch normalization orthogonalizes representations in deep random networks. Advances in Neural Information Processing Systems, 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for increasing their score. We would highly appreciate it if the reviewer could specify which parts of our rebuttal response or which of their concerns remain unresolved.
Rebuttal 1: Rebuttal: We appreciate the constructive reviews that helped us improve the paper. Let us recount our main contributions through excerpts from the reviews. This excerpt from `B3t5` captures the main part of our contribution: > A study of layernorm and Gram matrix isometry is presented in both theory and empirical results... Application of layernorm after activation seems to be a correct strategy when we are interested in the isometry of the output Gram matrix. This view is shared by reviewer `D7v5`, highlighting applications of our result to training instabilities in deep neural networks: > The proof relies on general assumptions and provides a justification for the utilization of pre-layer-normalized representations in order to prevent training instabilities associated with the collapse of the rank of the input-output Gram matrix in a multi-layer perceptron (MLP) network. Furthermore, reviewer `Qj2n` finds our analysis interesting: > This paper provides interesting analysis into normalization and non-linear activation from the perspective of isometry. Reviewers have also raised some concerns, which we have addressed with experiments and references: ## Background and motivation for isometry in the literature Several studies have attributed the success of normalization and residual/skip connections in training deep neural networks to faithful signal propagation (also termed dynamic isometry) through depth [1-7]. The related literature demonstrates the important role of isometry in stabilizing the training of deep neural networks: - The dynamic isometry or signal propagation property postulates that, in order to ensure fast training [1,2], the network output must be sensitive to changes in the input. This hypothesis is employed by [4] to train a 10,000-layer CNN via a proper weight initialization, without skip connections or normalization layers.
- [5] develops multiple methods to obtain isometry in the transformer architecture that train with a speed comparable to the model with skip and normalization layers. - [6] proposes shaping activations to impose isometry and empirically shows this allows training of very deep neural networks. - [7] designs activation functions that impose isometry, and hence significantly enhance the training of networks with batch normalization. ## A supplementary experiment To elaborate on applications of our theoretical results, we supplement our findings with an experiment showing that: **Non-linearity strength $\beta_0$ is a strong predictor of training performance.** In Figure 1 of the attached PDF, we compare the convergence rates of stochastic gradient descent on a Multi-Layer Perceptron (MLP) with various activations for which $\beta_0$ (as defined in Eq. 6) is monotonically increasing. We observe that activation functions with higher values of $\beta_0$ result in accelerated convergence of stochastic gradient descent. It is important to note that $\beta_0$ governs the isometry bias of the activation, as proven in Theorem 4. Consequently, our findings carry direct practical implications, shedding light on the popularity of certain activation functions, such as ReLU, in deep learning. ## Generality of our results for MLPs and transformers Several of the questions raised are related to the generality and broader applicability of our theoretical results. In response, we have provided the following explanations and clarifications: - **Isometry of LayerNorm after training**: Theorem 1 and Corollary 2 are valid in any setting, regardless of the architecture and the distribution of weights. This is demonstrated in Figure 2 of the attached PDF. - **Isometry of LayerNorm in transformers**: The isometry-enhancing property of LayerNorm, as predicted by Corollary 2, is architecture-independent.
Figure 3 of the attached PDF shows the isometry bias of normalization layers for a pretrained GPT2 architecture. - **Broad range of activations in MLPs**: As demonstrated in Figure 2 of the attached PDF, the non-linearity strength accurately predicts the isometry gap decay across a broad range of activations. We hope that with these additional clarifications and new empirical evidence, the reviewers' concerns are addressed. --- **References** [1] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations, 2017. [2] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 2016. [3] Hadi Daneshmand, Amir Joudaki, and Francis Bach. Batch normalization orthogonalizes representations in deep random networks. Advances in Neural Information Processing Systems, 2021. [4] Xiao, Lechao, et al. "Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks." International Conference on Machine Learning, 2018. [5] He, Bobby, et al. "Deep transformers without shortcuts: Modifying self-attention for faithful signal propagation." arXiv preprint arXiv:2302.10322 (2023). [6] Zhang, Guodong, Aleksandar Botev, and James Martens. "Deep learning without shortcuts: Shaping the kernel with tailored rectifiers." (2022). [7] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in Neural Information Processing Systems 30 (2017). Pdf: /pdf/a8011e006779f7cf7edd49228dc91223d17d1241.pdf
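The non-linearity strength $\beta_0$ discussed in the rebuttals is built from the activation's Hermite coefficients (the exact formula in Eq. 6 is not reproduced in this thread). As a hedged sketch of how such coefficients can be computed numerically, here is one way using Gauss-Hermite quadrature; the function name `hermite_coeffs` and the $1/\sqrt{k!}$ normalization are our own choices:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Gauss-Hermite_e quadrature: nodes/weights for the weight exp(-x^2/2).
nodes, weights = hermegauss(80)
weights = weights / sqrt(2.0 * pi)  # normalize to integrate the N(0,1) density

def hermite_coeffs(sigma, kmax):
    # Normalized coefficients c_k = E[sigma(Z) He_k(Z)] / sqrt(k!) for
    # Z ~ N(0,1), where He_k are the probabilists' Hermite polynomials
    # (orthogonal under N(0,1) with E[He_k(Z)^2] = k!).
    cs = []
    for k in range(kmax + 1):
        e_k = np.zeros(k + 1)
        e_k[k] = 1.0  # coefficient vector selecting He_k
        vals = hermeval(nodes, e_k)
        cs.append(float(np.sum(weights * sigma(nodes) * vals)) / sqrt(factorial(k)))
    return np.array(cs)

relu = lambda z: np.maximum(z, 0.0)
c = hermite_coeffs(relu, 4)
# Analytically, ReLU has c_0 = 1/sqrt(2*pi) ~ 0.399, c_1 = 1/2, c_3 = 0.
```

Comparing such coefficient profiles across activations (ReLU, tanh, identity, ...) is one way to see which activations carry mass on higher-order ($>2$) Hermite components, the quantity the rebuttals connect to the isometry bias.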
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies the isometric properties of a randomly initialized neural network. The authors show layer normalization and proper activation functions can mitigate rank collapse. In addition, they also quantify the normalization bias for different types of normalization layers. They use the Hermite expansion of the activation function to highlight the importance of higher order Hermite coefficients in the bias towards isometry. The paper provides theoretical results and empirical evidence to support its findings and discusses the potential implications for future research on neural network architectures and training algorithms. Strengths: (+) This paper provides analysis of the isometry properties using the penultimate Gram matrix in neural networks. It complements previous studies by analyzing the role of layer normalization (while previous work mainly focuses on batch norm). It also proposes a few useful measurements of isometry bias. (+) The authors provide theoretical results and empirical evidence (mainly figure 2 and figure 3) to show that activation and normalization techniques can bias the Gram matrix towards isometry at initialization, which can improve the training dynamics of deep neural networks. (+) The paper discusses the potential implications of the findings for future research on neural network architectures and training algorithms, which can inspire new directions for improving the performance and efficiency of deep learning systems. Weaknesses: (-) I feel this paper is not very coherent between different sections, though possibly because I’m not an expert in this field. I can understand each section but do not find the connection between different sections. (-) The empirical evidence presented in the paper is quite limited. Only very few activation functions are considered and it also assumes the MLPs are of fixed width. The conclusion might be hard to generalize or be helpful to the practice.
(-) The analysis of section 3 and section 4 seems to be two different systems. It does not explain how they are connected and lacks a unified theory of them. (-) There are also a few confusing statements; for example, the authors show higher He coefficients have a negative impact on isometry properties. However, in the abstract, they also state “highlighting the importance of higher order (>2) Hermite coefficients in the bias towards isometry“, which implies higher order Hermite coefficients help isometry. Overall, I think this paper has limited applicability and does not inspire new ways on network design (either normalization or activation functions). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I have some questions regarding the coherence of this paper, for example: 1) what is the motivation of introducing normalization bias in Section 3.2, and how does it relate to the isometry bias? 2) Is definition 1 the definition for ‘isometry’, or a measure of the ‘isometry property’? 3) Does the isometry gap $(-\log \mathcal{I}(M))$ range from 0 to $-\infty$? (It is $\infty$ on line 100.) I’m also curious how the insights from this paper can be used to design more efficient and effective deep learning systems. What are the main assumptions you make that present challenges to applying these insights in practice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This paper is unlikely to have potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time reviewing our work, providing constructive feedback, and pointing out potential points of confusion. *Detailed responses:* > There are also a few confusing statements: for instance, the authors show that higher Hermite coefficients have a negative impact on isometry properties. However, in the abstract, they also state "highlighting the importance of higher order (>2) Hermite coefficients in the bias towards isometry", which implies that higher-order Hermite coefficients help isometry. Our theoretical result aligns with the statement in the abstract. There is a subtle difference between isometry ($\mathcal{I}$) and isometry gap ($-\log\mathcal{I}$): the negative logarithm of the isometry is the isometry gap, which quantifies how far a matrix is from isometry. Thus, Figure 5 shows that $He_2, He_3$ lead to a decay of the isometry gap, which implies higher isometric properties. This is consistent with our main result and the summary stated in the abstract. The extreme values of both isometry and isometry gap are elaborated in lines 97-101 of the main text: isometry lies between $0$ (degenerate matrix) and $1$ (perfect isometry, i.e., orthogonal matrix), while the isometry gap lies between $\infty$ (degenerate matrix) and $0$ (orthogonal matrix, i.e., perfect isometry). According to Theorem 4, higher-order Hermite coefficients impose a faster decay of the isometry gap, hence a faster convergence of the isometry to 1. For example, there is no decay in $-\log \mathcal{I}$ for linear activations. Therefore, the linear activation does not impose isometry, which is experimentally substantiated in Figure 3. > The empirical evidence presented in the paper is quite limited.
Only very few activation functions are considered. To address the reviewer's concerns about the limitations of our theory, we have expanded the empirical results to include the activations `Identity, ReLU, PReLU, Tanh, SiLU (Swish), ELU, GELU, SELU`. Figure 2 in the PDF attached to the general response depicts the isometry across the layers of an MLP for various activations, along with their $\beta_0$ value. As can be seen, the value of $\beta_0$ accurately predicts the isometry gap decay. > I feel this paper is not very coherent between different sections, though possibly because I'm not an expert in this field. I can understand each section but do not find the connections between different sections. The analysis of Section 3 and Section 4 seems to concern two different systems. It does not explain how they are connected and lacks a unified theory of them. Both results characterize isometry across the layers of an MLP. While Section 3 focuses on a single normalization layer, Section 4 investigates the isometry of MLPs with layer normalization and non-linear activations. We will add an outline before Section 3 to highlight the connection between different sections. In Figure 2 attached to the general response, we illustrate how these two results connect and when we can invoke the results of these sections for deep neural networks. We will elaborate on this coherence in the revised manuscript. > and it also assumes the MLPs are of fixed width. The conclusions might be hard to generalize or to apply in practice. The assumption that the MLP has fixed width is merely to ease the notation. In fact, one can readily extend the results to an MLP with variable widths across layers, as long as these widths are sufficiently large (i.e., $1/\sqrt{\text{width}}$ is small). To substantiate this, we have added experiments with variable width (see PDF attached to the general response, Figure 4). > Overall, I think this paper has limited applicability and does not inspire new ways of network design (either normalization or activation functions).
I'm also curious how can the insights from this paper be used to design more efficient and effective deep learning systems? What are the main assumptions you make that present challenges in applying these insights in practice? The general response demonstrates the following applications: - **Predicting training speed with $\beta_0$**: The non-linearity coefficient $\beta_0$ is a strong indicator of training convergence speed (see the general response and Figure 1 of the PDF attached to the general response). - **Explaining the isometry bias of activations**: Theorem 4 and the non-linearity strength $\beta_0$ accurately predict the isometry of a wide range of activations (see Figure 2 of the PDF attached to the general response). - **Explaining the isometry bias of layer normalization in MLPs and transformers**: Theorem 1 and Corollary 2 prove the isometry-enhancing properties of LayerNorm in MLPs (Figure 2, attached PDF) and transformers (Figure 3, attached PDF). > What is the motivation for introducing the normalization bias in Section 3.2, and how is it related to the isometry bias? They both refer to the "isometry bias of normalization." To avoid creating the impression that they are separate concepts, we will replace occurrences with "isometry bias of normalization" in the revised manuscript. We introduced the normalization bias to show that normalization layers impose isometry across the layers of neural networks. When the normalization bias is significantly greater than zero, normalization layers constantly increase the isometry across the layers. Figures 2 and 3 illustrate the consequence of having a high normalization bias: as we observe in Figure 3, a high normalization bias (shown in Figure 2) implies that $-\log I$ decays across the layers of neural networks. > Is Definition 1 the definition of 'isometry', or a measure of the 'isometry property'? It is the definition of isometry. Notably, this use of the term isometry is slightly different from isometric maps, which preserve distances.
> Does the isometry gap $(-\log I(M))$ range from $0$ to $-\infty$? (It is $\infty$ on line 100.) *The isometry gap ranges between $0$ and $\infty$:* Since the isometry is smaller than $1$, its logarithm is non-positive, $\log I(M)\le 0$; hence the isometry gap is non-negative, $-\log I(M)\ge 0$. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. My questions about 1) the inconsistency of the effect of Hermite coefficients between the experiments and the abstract; 2) MLPs of fixed width; 3) the limited activation functions are addressed. I have two follow-up questions: Can you clarify the relationship between the isometry gap, normalization bias, isometry bias, and $-\log(I)$? In my understanding, isometry bias includes the isometry bias of normalization and the isometry bias of activation. Is this correct? I have this question because Figures 2 and 3 both show the isometry bias of normalization while the y-axis is "normalization bias" and "$-\log(I)$" separately. What is the difference between the two figures? Another question is about the identity activation function. I thought the identity function would preserve the isometry property (it preserves distances), but this seems to contradict Figure 3. Is this because the identity activation is not helpful in recovering the isometry gap (although it does not make it worse) while other activations are? --- Reply to Comment 1.1.1: Comment: We highly appreciate the reviewer's response to our rebuttal. > Another question is about the identity activation function. I thought the identity function would preserve the isometry property (it preserves distances), but this seems to contradict Figure 3. Is this because the identity activation is not helpful in recovering the isometry gap (although it does not make it worse) while other activations are? Yes, the reviewer is correct. We thank the reviewer for making this nuanced observation.
Please note that there is a distinction between preserving isometry (which is what the identity achieves) and improving isometry (which is what non-linear activations achieve). We can see this in Theorem 4, equation (5): the isometry gap decays exponentially with depth $\ell$ at rate $\exp(-\ell \log\beta_0)$. Thus, for any activation with non-linear components we have $\beta_0 > 1$ and hence $\log\beta_0 > 0$, while for the identity activation we have $\log\beta_0 = 0$, which affirms your observation. > Can you clarify the relationship between isometry gap, normalization bias, isometry bias, and $-\log(I)$? In my understanding, isometry bias includes the isometry bias of normalization and the isometry bias of activation. Is this correct? The reviewer is correct in assuming that various components of neural networks, such as normalization and activation layers, influence isometry. Let us clarify the definition of these terms and where they are defined in the main text: | Term | Definition | Where defined | Range | | -------- | -------- | -------- | ----- | | Isometry | $\mathcal{I} = \frac{G.M.(\text{eigs})}{A.M.(\text{eigs})}$ | Table 1 | $[0,1]$ | | Isometry gap | $-\log\mathcal{I}$ | Table 1 | $[0,\infty]$ | | Normalization bias | $\frac{\mathrm{Var}(\text{norms})}{\mathrm{mean}(\text{norms})^2}$ | equations 4 & 5 | $[0,\infty]$ | - Isometry bias of normalization: According to Theorem 1, a larger normalization bias causes a larger increase in isometry (or decrease in isometry gap) after passing through the normalization layer: $$ \mathcal{I}(\text{post-normalization Gram}) \ge \mathcal{I}(\text{pre-normalization Gram})(1+\text{normalization bias}) $$ - Isometry bias of activation: According to Theorem 4, the isometry gap of an MLP with activation non-linearity strength $\beta_0$ decays exponentially with rate $1/\beta_0$: $$ -\log\mathcal{I}(\text{Gram matrix of layer $\ell$}) \lesssim \exp(-\ell \log\beta_0) = (1/\beta_0)^\ell $$ > I have this question because Figures 2 and 3 both show the isometry bias of normalization while the y-axis
is "normalization bias" and "$-\log \mathcal{I}$" separately. What is the difference between the two figures? Figure 2 plots the normalization bias, as defined in equations (4) and (5), while in Figure 3 the y-axis shows the isometry gap, as defined in Table 1. > Overall, I think this paper has limited applicability and does not inspire new ways on network design ... We wonder whether the reviewer finds our responses regarding applicability convincing.
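For readers cross-checking the definitions discussed in this thread, the three quantities are simple to compute. The following is our own illustrative sketch (not the authors' code) of isometry as the ratio of geometric to arithmetic mean of the Gram-matrix eigenvalues, the isometry gap as its negative logarithm, and the normalization bias as Var(norms)/mean(norms)^2:

```python
import numpy as np

def isometry(gram):
    """Isometry I(M): geometric mean / arithmetic mean of the eigenvalues of a PSD Gram matrix."""
    eigs = np.linalg.eigvalsh(gram)
    geo_mean = np.exp(np.mean(np.log(np.maximum(eigs, 1e-12))))  # clamp to avoid log(0)
    return geo_mean / np.mean(eigs)

def isometry_gap(gram):
    """Isometry gap -log I(M): 0 for an orthogonal Gram matrix, large when near-degenerate."""
    return -np.log(isometry(gram))

def normalization_bias(rows):
    """Normalization bias Var(norms) / mean(norms)^2, computed over the row norms."""
    norms = np.linalg.norm(rows, axis=1)
    return np.var(norms) / np.mean(norms) ** 2
```

On an identity Gram matrix both means coincide, so the isometry is 1 and the gap is 0, matching the "perfect isometry" endpoint described above.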
Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies
Accept (poster)
Summary: The paper proposes a new approach to adapt robot motion policies for different task conditions. It suggests leveraging the structure of probabilistic policies, specifically Gaussian mixture models (GMMs), and formulating policy optimization as an optimal transport problem. By using the L2-Wasserstein distance between GMMs, the policy updates can be constrained to improve the stability of the optimization process. Additionally, the Bures-Wasserstein manifold geometry is utilized for optimizing the Gaussian distributions of the GMM policy through Riemannian optimization. The proposed method is evaluated on various robotic scenarios and demonstrates better performance in terms of task success rate and low-variance solutions compared to common policy optimization baselines. Strengths: Novel Approach: The paper introduces a new approach to adapt robot motion policies by leveraging the structure of probabilistic policies and formulating policy optimization as an optimal transport problem. This novel perspective can provide insights into enhancing the adaptability and performance of robot motion policies. Utilization of Gaussian Mixture Models (GMMs): The paper focuses on GMMs, a widely used representation for modeling complex motion policies. By exploiting the specific structure of GMMs, the proposed method offers a tailored solution for policy optimization, which can potentially lead to more effective and efficient adaptation of robot motion policies. Consideration of Stability: The incorporation of the L2-Wasserstein distance between GMMs as a constraint in policy updates aims to enhance the stability of the optimization process. This consideration addresses a common challenge in policy optimization algorithms and can contribute to more reliable and consistent results. Riemannian Optimization: The paper leverages the geometry of the Bures-Wasserstein manifold for optimizing the Gaussian distributions of the GMM policy. 
This utilization of Riemannian optimization techniques showcases a sophisticated mathematical framework to refine the policy parameters, potentially leading to improved performance and convergence properties. Experimental Evaluation: The proposed method is thoroughly evaluated on common robotic settings, including reaching motions, collision-avoidance behaviors, and multi-goal tasks. The results demonstrate that the proposed approach outperforms common policy optimization baselines in terms of task success rate and low-variance solutions. This empirical validation strengthens the credibility and practical relevance of the proposed method. Weaknesses: I noticed that this paper does not discuss other GMM-based methods much, such as PMOE [1]. Including a discussion and comparison with existing GMM-based methods would greatly enhance the comprehensiveness and value of your work. Furthermore, I suggest you consider conducting additional experiments to compare your proposed approach with these methods, because comparing only with non-GMM methods, like PPO and SAC, is not fair. [1] Ren J, Li Y, Ding Z, et al. Probabilistic mixture-of-experts for efficient deep reinforcement learning. arXiv preprint arXiv:2104.09122, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your time reviewing our work and the provided suggestion to compare against PMOE, an algorithm we were not aware of. We also appreciate the positive feedback about both the theoretical and practical aspects of our paper. Below we address the main concern of the review: 1. **Discussion and comparison against PMOE**: * In a similar spirit to our paper, PMOE highlights the importance of leveraging a mixture of experts in the policy structure of deep RL methods. However, using GMMs straightforwardly in generic off-policy and on-policy deep RL algorithms introduces a difficulty in their end-to-end training process. This is due to the inherent non-differentiability caused by the optimization of categorical distribution parameters within the GMM optimization procedure. PMOE addresses this problem by proposing a gradient estimator for optimizing the mixture weights. Although PMOE also assumes that the policy $\pi$ is represented by a mixture of Gaussians, *its formulation does not provide an optimal transport perspective to the RL problem as in our paper*. PMOE mainly focuses on formulating feasible gradient updates for both the mixture weights and the Gaussians' parameters, but it does not consider their underlying geometry. In sharp contrast, our approach formulates the policy optimization as a gradient flow according to the Wasserstein distance, which leverages the view that *the set of mixtures of Gaussians can be associated with a Wasserstein metric*, leading to a policy optimization with updates following a Wasserstein gradient flow. Moreover, our approach *exploits the fact that a Gaussian can be identified with the Bures-Wasserstein manifold*, which corresponds to the product manifold $\mathbb{R}^d \times \mathcal{S}_{++}^d$. This allows us to formulate an explicit Euler scheme for the *gradient updates that builds on Riemannian optimization*.
This means that *our update rule does not depend on inexact Riemannian gradient updates* (like natural gradients), and therefore *our method guarantees that the gradient flow stays on the underlying Riemannian manifold*. The PMOE gradient-based updates [R1, Secs. 3.3, 3.4] do not provide such guarantees. * As suggested by the reviewer, we added PMOE [R1] as an additional baseline for our benchmarking experiments. Specifically, we added the PMOE version for both PPO and SAC (i.e., PMOE-PPO and PMOE-SAC). Our implementation is based on the [code](https://github.com/JieRen98/rlkit-pmoe/tree/master/rlkit/torch/PMOEsac) provided by the first author of the paper [R1], and includes several minor changes to comply with recent versions of the code dependencies. The figures in the attached PDF include the results corresponding to the two newly added PMOE methods. In general, PMOE often showed a better performance than the vanilla PPO and SAC baselines, which follows the experimental insights provided in the PMOE paper [R1] regarding the advantages of the probabilistic mixture of experts in deep RL settings. Nevertheless, as shown in the figures, our approach also outperforms both PMOE versions in all three 2D robotic tasks and in the newly added simulated task aimed at testing all the methods in a higher-dimensional setting. The new results corroborate the observations presented in our paper: leveraging the geometric structure of the probabilistic Gaussian mixture yields improved success rates and solutions with reduced variance. [R1] [Ren J, et al. Probabilistic mixture-of-experts for efficient deep reinforcement learning. arXiv, 2021.](https://arxiv.org/pdf/2104.09122.pdf) We will include the above discussion and the updated figures (appearing in the attached PDF) in the revised version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for adding experiments to address my questions within the limited time.
I think I can raise my score based on your descriptions, but where are the detailed updated experiment results? --- Reply to Comment 1.1.1: Title: PDF location Comment: Dear reviewer vzrR, Thank you for having read our rebuttal in such a timely manner. The figures of the experiments are attached to the PDF in the main reply (above) to all the reviewers and AC. The link is [here](https://openreview.net/attachment?id=QuMwbM0knB&name=pdf). Note that we will of course add the experimental details to the Appendix of the revised version of the paper, as we were not allowed to add text to the attached PDF.
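As a concrete illustration of the closed-form distance underlying the Bures-Wasserstein geometry discussed in this thread, here is a small numerical sketch (our own, not taken from the paper's code) of the squared L2-Wasserstein distance between two Gaussians; its covariance term is the squared Bures metric:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, S1, m2, S2):
    """Squared W2 distance between N(m1, S1) and N(m2, S2):
    ||m1 - m2||^2 + tr(S1) + tr(S2) - 2 tr((S1^{1/2} S2 S1^{1/2})^{1/2})."""
    root_S1 = sqrtm(S1)
    cross = sqrtm(root_S1 @ S2 @ root_S1)
    bures_sq = np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    # sqrtm can return a tiny imaginary part for near-degenerate inputs
    return np.sum((np.asarray(m1) - np.asarray(m2)) ** 2) + np.real(bures_sq)
```

For commuting (e.g., diagonal) covariances the Bures term reduces to the squared Frobenius distance between the matrix square roots, which makes small sanity checks easy.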
Summary: This paper proposes to formulate policy optimization as a Wasserstein gradient flow over the Gaussian Mixture Model (GMM) space, which enhances the stability of policy optimization processes. In the proposed GMM policy updates, the means and variances of the Gaussians are optimized through Riemannian gradient descent under the Bures-Wasserstein metric, and the weights of the Gaussians are optimized through the implicit Euler scheme. The paper then demonstrates the effectiveness of the proposed approach by conducting experiments on various robotic tasks in simulation. Strengths: Overall the paper is well-written and well-presented. Discovering more principled approaches to optimize GMM policies to model more complex policy distributions is a meaningful research direction. Weaknesses: My main concern lies in the quality of experiments and empirical evaluations. Specifically, - The three robotic tasks illustrated in the main experiments are rather simple tasks with low-dimensional observation and action spaces. It would be interesting to demonstrate whether the proposed approach can generalize to higher-dimensional tasks that exhibit multimodal solutions. - A more in-depth analysis of why the proposed approach outperforms the baselines (PPO+GMM & SAC+GMM) could be included. Specifically, it would be meaningful to qualitatively compare policy distributions between the proposed approach and the baselines, particularly for the observations where the proposed approach is better than the baselines in modeling policy distributions. This would significantly enrich the insights provided by the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "weaknesses". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are mostly addressed. Another limitation to include is regarding the current empirical evaluations on low-dimensional robotic tasks. Showing the potential for the proposed approach to scale up to higher-dimensional tasks with a multimodal nature would significantly enhance the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your time reviewing our work and the provided suggestions. We appreciate the positive feedback about our paper, and are glad to read that our approach "to optimize GMM policies to model more complex policy distributions is a meaningful research direction"! Below we address the issues raised in the review. 1. **Complexity of robotic settings**: * First, we would like to point out that the three 2D robotic tasks cover common adaptation problems in robot motion policy optimization, namely, (1) adapting a reaching motion to a different target, (2) adapting in the presence of obstacles, and (3) adapting a multimodal task. We acknowledge that these experiments did not fully show the capabilities of our approach. To achieve this, we already provided a first evaluation of our method in the narrow-path task using an off-the-shelf 7-DoF robotic manipulator in simulation (see App. A.6.3). * For this rebuttal, we conducted an additional experiment on the simulated 7-DoF robotic manipulator in the narrow-path task. In this case, the distinction lies in the acquisition of the robot's motion skill, which was learned in the space of robot joint configurations, where the state was $\mathbf{s} = \mathbf{q} \in \mathbb{R}^7$, and the action corresponded to $\mathbf{a} = \dot{\mathbf{q}} \in \mathbb{R}^7$. We controlled the simulated robotic arm via a joint velocity controller at a frequency of 100Hz. The objective of this experiment was to evaluate our approach's performance in adapting robot motion policies within state-action spaces of higher dimensions. As illustrated in Figure 2 within the attached PDF, our method effectively adapts the robot's motion policy, thus ensuring a collision-free skill execution. Note that the narrow-path task specification aligns with the details outlined in Section 4.1 of the original paper.
The observed outcomes distinctly demonstrated our approach's superiority over all benchmarks in this simulated robotic task. This provides further evidence that our approach scales and is proficient in the adaptation of robot motion policies within tasks involving higher dimensions on off-the-shelf robots. 2. **Additional analysis**: Following the suggestions made by this reviewer and other reviewers, we added two baselines to our evaluations, namely, PMOE-PPO and PMOE-SAC, which were proposed in [R1]. As in our approach, PMOE assumes that the policy $\pi$ is represented by a mixture of Gaussians, and it addresses the problem of estimating the gradients needed to optimize the mixture weights. However, PMOE does not take an optimal transport perspective on the policy optimization problem, nor does it account for the geometry arising from the GMM parameters. The results reported in Fig. 1 of the attached PDF show that our approach also outperforms these two newly added baselines, providing further evidence on the importance of considering the geometry of the GMM space. [R1] [Ren J, et al. Probabilistic mixture-of-experts for efficient deep reinforcement learning. arXiv, 2021.](https://arxiv.org/pdf/2104.09122.pdf) **Limitations**: As discussed above, we added an additional experiment showcasing our approach's performance in adapting a motion policy in the narrow-path task, where the GMM-based policy is learned in a $14$-dimensional state-action space. Moreover, we tested 4 different baselines in the same setting. The results reported in Fig. 2 showed that our approach scales to higher-dimensional problems and still outperforms baselines that disregard the geometric structure arising from the policy representation. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the authors' rebuttal! My concerns have been sufficiently addressed and I'm raising my score.
--- Reply to Comment 1.1.1: Comment: Dear reviewer uu8b, We are glad to know that our rebuttal sufficiently addressed your concerns and suggestions. We truly appreciate your timely response and positive feedback.
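The L2-Wasserstein distance between GMMs discussed throughout this review restricts the transport plan to couplings of the mixture components. As an illustrative sketch (our own, not the paper's implementation), it can be computed by solving a small linear program whose costs are the pairwise squared Gaussian W2 distances:

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def gaussian_w2_sq(m1, S1, m2, S2):
    # Squared W2 between two Gaussians: ||m1 - m2||^2 + squared Bures metric.
    r = sqrtm(S1)
    cross = sqrtm(r @ S2 @ r)
    return np.sum((np.asarray(m1) - np.asarray(m2)) ** 2) + np.real(
        np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross))

def gmm_w2_sq(weights1, comps1, weights2, comps2):
    """Squared GMM-restricted W2: optimal discrete transport between components,
    where comps* are lists of (mean, covariance) pairs."""
    K, L = len(weights1), len(weights2)
    cost = np.array([[gaussian_w2_sq(*c1, *c2) for c2 in comps2] for c1 in comps1]).ravel()
    # Marginal constraints: rows of the plan sum to weights1, columns to weights2.
    A_eq = np.zeros((K + L, K * L))
    for i in range(K):
        A_eq[i, i * L:(i + 1) * L] = 1.0
    for j in range(L):
        A_eq[K + j, j::L] = 1.0
    b_eq = np.concatenate([weights1, weights2])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun
```

This is the discrete optimal-transport view: the component-level coupling plays the role of the transport plan, which is what makes trust-region-like constraints on GMM policy updates tractable.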
Summary: The authors investigate Wasserstein gradient flows (WGF) for a Gaussian mixture model policy. The WGF is a principled natural gradient method for updating the parameters since it follows the metric space defined by the Wasserstein-2 divergence. This approach is compared to deep RL methods on some planar control tasks. Strengths: This was a very nicely written paper with excellent presentation. I commend the authors for this. I also think WGF is an interesting research direction for RL. Weaknesses: **Ignores natural gradients in RL** I was very surprised to see this paper almost completely ignores the work on natural gradient methods for RL. While TRPO is cited, it is only cited very vaguely, with no mention of the KL natural gradient it uses. Here are some key references: [1] Amari, S. I. (1998). Natural gradient works efficiently in learning. Neural Computation. [2] A natural policy gradient, Sham M. Kakade, Advances in Neural Information Processing Systems, 2001. [3] Natural actor-critic, J. Peters, S. Schaal, Neurocomputing, 2008. Related to this is the linear programming view of RL, which also uses KL regularization (but uses EM rather than NG): [4] Relative entropy policy search, J. Peters, K. Mulling, Y. Altun, AAAI 2010. [5] A unified view of entropy-regularized Markov decision processes, G. Neu, A. Jonsson, V. Gómez, 2017. And mirror descent as well: [6] Mirror Descent Policy Optimization, Manan Tomar, Lior Shani, Yonathan Efroni, Mohammad Ghavamzadeh, 2020. These methods are basically Equation 5 of this paper, but with the KL rather than the $W_2$ divergence. The interesting question for me would be how these two divergences compare in the context of policy optimization. This seems to be central in the recent paper by Moskovitz et al. but completely absent in this work. I think there has been a lot of work on doing NG methods for GMMs, e.g., Handling the Positive-Definite Constraint in the Bayesian Learning Rule, Lin et al., 2020.
This criticism naturally extends to the choice of baselines in the experiments. There should definitely be a natural policy gradient and an EM-based method (for example MPO, advantage-weighted regression, or actor-critic) to compare the quality of the update. I believe this is quite a big issue as it would require a major overhaul of the paper and experiments. The value of the contribution of the paper drops significantly if you cannot place WGF relative to prior work on natural gradients in RL. **Limited relevancy of GMR policies for NeurIPS** This paper focuses exclusively on GMM / GMR policies, which would also be called 'locally weighted regression' (LWR) back in the day. These models are rarely used because they scale very poorly to high dimensions, since you either need hierarchy, dimensionality reduction, or many, many mixtures (e.g., [7]). [7] Locally Weighted Projection Regression, Vijayakumar et al., ICML 2000. The cited works for GMR used them for learning from demonstrations for low-dimensional tasks. As a consequence, I feel like this work would be better suited for venues such as CoRL or ICRA where these methods may still be used, unless this method can be extended to use WGF on models more relevant to the NeurIPS community. **Misc** Typo: Equation 7 is missing a square root on the right-most squared Bures metric term. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) For the 2D experiments, if the policies are initialized from demonstrations, why is the success rate 0? Are the baselines also initialized with the demonstrations? Also, consider using the *-from demonstrations variant of RL algorithms (e.g., SACfD). 2) I was a bit confused by the action space for the deep RL baselines. It seems the SAC-GMM paper focuses more on a hierarchical RL setup, where the SAC policy defines a GMM policy that runs for several timesteps.
There is no reason why you could not have just replaced the PPO and SAC policies with the GMR policy, since PPO just needs samples and log probabilities and SAC just needs to do the reparameterization trick. The issue with SAC would be that the max-entropy regularization would be too powerful for GMR policies (since the SAC policy is usually clamped to upper-bound the entropy), so you would have to use the KL-regularized version of SAC, e.g., see Iterative Amortized Policy Optimization, Marino et al., NeurIPS 2021. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 1 poor Limitations: The focus on finetuning from demonstration is understated in the title and abstract. It's not clear from the experiments if this method + policy can be used to train a policy from scratch. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. We are very pleased to read that the reviewer found our paper "a very nicely written paper with excellent presentation"! Below we address the issues raised in the review. 1. **Natural gradients (NGs) in RL**: * NGs leverage a metric to locally control the gradient-based updates of the policy parameters. In this sense, we understand that the reviewer identifies a connection with our approach as it leverages the geometric structure of the Wasserstein space of GMMs. A similar connection may arise when analyzing the NG as an approximation to the implicit optimization scheme. Nevertheless, we would like to point out that: (1) our literature review focused on approaches that exploit the *Wasserstein* geometry of the policy distribution (lines 86-96); (2) our paper did review the only work (to the best of our knowledge) that exploits the Wasserstein natural gradient in RL problems [36], which focuses on the differences between the KL divergence and the Wasserstein distance for NG updates in RL, but *without assuming a specific policy structure*. * In our paper we already discussed in Appendix A.4 the connections between the forward and backward discretization in the Bures-Wasserstein (BW) metric, where we explained that the Wasserstein NG, according to the BW metric in Eq. (39), is an approximation of the exact Riemannian gradient descent in Eq. (40), which is derived from a 1st-order approximation of the geodesic. As noted in Appendix A.4, such an approximation has no guarantees that the approximated geodesic stays on the manifold, as is also brought up in [R1], a paper suggested by this reviewer. * As our problem involves policy updates with, for example, positive-definite constraints arising from the covariance matrices, using NG is mathematically flawed as it does not guarantee that the updates stay on the underlying manifold.
Note that the main reason why the method in [R1, Sec. 5.3 "Our Rule as an Inexact RGD Update"] still used an inexact Riemannian gradient descent is the difficulty of computing the exponential map (or geodesic) necessary to calculate the exact Riemannian update. In sharp contrast, we leverage the Riemannian operations over positive-definite matrices identified by the BW geometry as proposed in [49], thus avoiding the use of inexact Riemannian gradient descent updates or approximations of the Riemannian retraction operator. In this way, we also avoid computing numerical estimates of the Fisher information matrix, which is known to break the invariance-to-parametrization properties of NG methods [R2]. * On the experimental front, our paper reported an ablation study in Appendix A.6.2, where we computed the GMM parameter updates following the implicit Euler scheme (Eqs. (41) and (42)). It is worth highlighting that the *Wasserstein NG* approximates such an implicit Euler scheme, as explained in Appendix A.4 (Eqs. (37)-(39)) and detailed in [47]. This ablation study experimentally showed that our exact Riemannian formulation outperforms non-Riemannian methods, suggesting that a proper treatment of the geometry arising from the GMM structure is advantageous over other non-Riemannian approximations. * Although we are aware that part of our discussion appears in the Appendix of the paper and that we did not stress it and refer to it enough in the main text, we believe that our paper did not ignore the relevance and connections between NG and our approach, as suggested by this reviewer. [R1] Handling the positive-definite constraint in the Bayesian learning rule, ICML, 2020. [R2] New insights and perspectives on the natural gradient method, JMLR, 2020. 2. **GMM-based policies relevance**: We respectfully disagree with the reviewer's statement claiming that "GMM-based models are rarely used because they scale very poorly to high dimensions...". 
Note that: * Recent works have shown that it is possible to efficiently train GMMs based on stochastic gradient descent, therefore scaling better to high-dimensional settings (e.g., 30000 dimensions) [R3], which shows that it is possible to use such models in problems of higher dimensionality. * When learning robot motion policies, the curse of dimensionality often arises in perception modules (e.g., images), which usually represent an observation of the state of the task. Therefore, it is common practice to employ deep NNs to learn *low-dimensional* embeddings representing the task state, which are later used to train robot motion policies. Note that GMM-based policies may also leverage these kinds of perception backbones, and thus the corresponding state of the task may still belong to a relatively low-dimensional space. * Note that reviewer **vzrR** brought to our attention the PMOE method, which employs a probabilistic mixture of experts, represented by a GMM, for deep RL settings, and it shows that certain policies are better learned and represented via a mixture of experts. This is just another example of the relevance of such representations. Also, works like [24] show that GMM-based methods are still relevant in the ML community. [R3] Gradient-Based Training of Gaussian Mixture Models for High-Dimensional Streaming Data, Neural Proc Ltrs, 2021. **Q1**: Success is measured w.r.t. the new task requirements, not w.r.t. the task previously learned from demonstrations. All methods started with the same GMM policy learned from demonstrations. Note that neither the baselines nor our method used training methodologies such as experience replay, demonstrations buffer, etc. Such algorithmic improvements are outside the scope of this paper. **Q2**: In our understanding, SAC-GMM is not a hierarchical RL method as it is basically a SAC method whose actions correspond to increments applied to the GMM-based policy parameters. 
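For reference, the Bures-Wasserstein geometry discussed in this rebuttal rests on the closed-form squared 2-Wasserstein distance between Gaussians. A minimal NumPy sketch (our own illustrative functions, not the paper's implementation) is:

```python
import numpy as np

def _sqrtm_psd(A):
    """Principal square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bures_wasserstein_sq(mu1, cov1, mu2, cov2):
    """Squared W2 distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + tr(cov1 + cov2 - 2 (cov1^{1/2} cov2 cov1^{1/2})^{1/2})."""
    s1 = _sqrtm_psd(cov1)
    cross = _sqrtm_psd(s1 @ cov2 @ s1)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross))
```

When the covariances coincide, the distance reduces to the plain Euclidean distance between the means, which is the simplification to L2 terms that the Gaussian assumption buys.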
--- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thanks for replying to my review. **Natural Gradients.** In the main rebuttal the authors state 'As the core of our paper is on Wasserstein gradient flows, we omitted the literature on KL divergence.', however Richemond et al. [30] and Moscovitz et al. [36] both make comparisons to natural gradient with the KL central to their paper's motivation on using the Wasserstein divergence, so it feels like a reasonable request that this submission make a similar acknowledgement and positioning. While the authors do cite Moscovitz et al., they do not explain what a natural gradient is in the main paper, how it compares to the WGF, nor do they mention the KL divergence once. The appendix also does not explicitly mention the KL divergence or natural gradient. I don't think it's fair to expect a reader to make the connection between forward and backward discretization and the natural gradient. It appears the authors have a lot to say about NG with KL vs WGF, so I'm surprised this discussion isn't included in the paper. In this field it is not uncommon that the mathematically more rigorous approach under-performs in practice against a more approximate method, so it would be good to see empirically that the WGF is the better approach. Currently, the complete omission of NG with the KL looks suspicious. **GMR policies.** My comments about GMMs specifically referenced the (linear) GMR policies used in the paper, not GMMs in general, so this weakness still stands. **Q1.** Why are policies pre-trained on demonstrations irrelevant to the task? What is the benefit of this? **Replay Buffers.** Soft actor critic is a deep off-policy reinforcement learning algorithm and therefore a replay buffer is central to its implementation. What have the authors implemented as 'SAC' if no replay buffers are used? The authors say in the main text that they use the SB3 Implementation of SAC, which has a replay buffer. 
**Q2.** If you read section IV.C of SAC-GMM [25], the GMM policy is executed for $N=32$ timesteps and the GMM 'represents a dynamical system controlling the motion in the trajectory space'. The paper doesn't call their method hierarchical but uses the term 'hybrid' instead to the same end. Could the authors clarify what algorithm they have implemented? --- Reply to Comment 1.1.1: Title: Natural gradients Comment: We thank the reviewer for having read our rebuttal and for the timely feedback. Below is our reply. 1. We agree with the reviewer that our Introduction section may be improved by reviewing previous works using the KL divergence in policy optimization, along similar lines as [30, 32, 36], so that our motivation for using the Wasserstein distance is clearer in the context of "regularized objectives" in policy optimization. We plan to include the following short revision in Section 1 (from line 49) of the revised version of the paper (including some minor text changes): "A well-established technique for enhancing the stability of policy optimization involves introducing regularization to the objective function using the Kullback-Leibler (KL) divergence [40, R1, R2]. This regularization mechanism aims at maintaining small changes between successive policy updates. Policy similarity can also be quantified via the Wasserstein distance, as recently proposed in [30, 32, 36]. Unlike the KL divergence, the Wasserstein distance enjoys powerful geometrical, computational, and statistical features [26, 27]. In this paper we exploit such properties in a Wasserstein-regularized objective for GMM-based policy optimization. This allows us not only to see the policy iteration as a Wasserstein gradient flow, as in [30], but also to leverage the Riemannian geometry associated with the GMM space to formulate exact Riemannian gradient descent updates." 
Moreover, we will provide further details in Appendix A.4 so that the connections among our implicit scheme, the approximation with NGs, and the retraction operation in Riemannian manifolds become clearer. We believe that the minor changes proposed above complement and improve the motivation and positioning of our method w.r.t. KL-regularized approaches. [R1] Towards an Understanding of Default Policies in Multitask Policy Optimization. AISTATS, 2022. [R2] Information Asymmetry in KL-regularized RL. ICLR, 2019. 2. We would like to point out that we did not deliberately omit an empirical comparison against NG- or KL-based approaches, and we did not intend our paper to "look suspicious", as this reviewer suggests. In this regard, we would like to highlight that: - Our original benchmark experiments included PPO, which optimizes a KL-regularized objective [44, Eq. 8]. - During our rebuttal, we added a GMM-based formulation of PPO, i.e., PMOE-PPO. - Our ablation study reported in App. A.6.2 is aimed at showing experimental evidence on the difference between our Riemannian approach and the implicit scheme. We would like to emphasize that the natural gradient is an approximation of such an implicit scheme, as discussed in our Appendix and in other related works. Although we did not compare against an NG-based approximation of our method, we still believe that the results reported in our paper and the rebuttal provide empirical evidence of our approach's performance against methods that include the KL divergence (as soft constraints) and an implicit scheme (related to the inexact NG). So, we find the reviewer's argument "the complete omission of NG and the KL looks suspicious" inaccurate and unfair. 3. Note that a method that formulates the Wasserstein NG for multivariate GMMs in policy optimization has not been proposed yet (to the best of our knowledge). 
As pointed out in our previous answer, the work of Moskovitz et al. [36] is the only one where the Wasserstein NG is used in policy optimization, but it does not propose a formulation for GMM policies. Therefore, in order to carry out such a comparison we would need to develop a new policy optimization approach that builds on the use of the Wasserstein NG developed in [R3], and that employs similar numerical estimates as proposed in [36], perhaps independently for each component of the GMM. Still, such an approach would disregard the positive-definite constraint of the Riemannian manifold arising from the covariance matrices (for which approximated projections would then be necessary). In conclusion, we agree with the reviewer that such an approach would be relevant, but it is non-existent in the current literature. We believe this could be an interesting future research direction, although it remains an approximation of our mathematically rigorous method. [R3] Optimal transport natural gradient for statistical manifolds with continuous sample space, Information Geometry, 2020.
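To make the relation debated in this thread explicit, the implicit scheme and its natural-gradient approximation can be written in standard textbook form (our notation, not the paper's exact equations):

```latex
% Implicit (proximal) update of policy parameters with step size \tau:
\theta_{k+1} \;=\; \operatorname*{arg\,min}_{\theta}\; J(\theta) \;+\; \frac{1}{2\tau}\, W_2^2\!\left(\pi_{\theta}, \pi_{\theta_k}\right)
% First-order approximation: the (Wasserstein) natural-gradient step,
% with G(\theta) the corresponding metric tensor:
\theta_{k+1} \;\approx\; \theta_k \;-\; \tau\, G(\theta_k)^{-1}\, \nabla_{\theta} J(\theta_k)
```

As the rebuttal argues, the second (explicit) update need not keep covariance parameters on the positive-definite manifold, which is the authors' motivation for exact Riemannian updates instead.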
Summary: This paper proposes an algorithm for RL in continuous action spaces based on the Wasserstein Gradient Flow formulation of Richemond and Maginnis. By specializing to the case of Gaussian Mixture policies, a simpler formulation is obtained, where the Gaussian part of the parameters (mean and covariance for each mixture component) are learned by optimizing a closed-form quadratic loss term rather than having to compute gradients of the Wasserstein_2 distance numerically. Strengths: Just like in Zhang et al., although taking another route, Wasserstein_2 distance terms simplify to L2 distances thanks to the Gaussian assumption. In itself the algorithm represents a straightforward application of the principles described in Richemond & Maginnis or Zhang et al., all the way to the use of the Sinkhorn algorithm, although the optimization part, framed as a splitting operation over two Riemannian gradient descent problems, is nicely executed. The paper is well written and easy to follow, even if several typos still remain ('Wassertein', 'WFG' instead of WGF...). Empirical results show that the Wasserstein Gradient Flow outperforms baselines not motivated by optimal transport. Weaknesses: However, we do have several concerns regarding the scope and significance of these results. These span three axes. First, the PPO and SAC comparison baselines, while standard and still competitive, are 2017 or 2018 algorithms and would deserve updating. Second, only three robotics environments are considered, and it would be good to include more, in particular standard control domains. Finally, the Future Work section of the paper mentions multiple conceptual possibilities for improving and modifying Algorithm 1; details such as the initialization of the Sinkhorn algorithm used for optimizing the mixture parameters can be absolutely critical in practice. We encourage the authors to implement some of those ideas. 
For these reasons, and being cognizant of the associated compute requirements, we feel the paper would much benefit from a revamped and extended empirical section implementing these ideas and testing them across a wider set of environments. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What can the authors do in order to improve the empirical evaluation? I would be willing to raise my score if some of the above concerns were addressed. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
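The Sinkhorn algorithm this review refers to alternately rescales a Gibbs kernel until the transport plan matches both marginals. A generic, self-contained sketch (illustrative only, with our own naming, not the paper's implementation) is:

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport plan between histograms a and b
    with cost matrix C and regularization strength eps."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)           # uniform initialization of the scaling vector
    for _ in range(n_iters):
        v = b / (K.T @ u)         # rescale to match column marginals
        u = a / (K @ v)           # rescale to match row marginals
    return u[:, None] * K * v[None, :]
```

The uniform initialization of `u` above is exactly the kind of practical detail the review flags as potentially critical; warm-starting the scalings across successive policy updates is one of the refinements the reviewer encourages.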
Rebuttal 1: Rebuttal: We would like to thank you for taking the time to review our work. We are delighted to read that "our splitting operation over two Riemannian gradients is nicely executed"! Below, we address some of the key concerns raised as part of the review. 1. **Baselines**: We would like to point out that PPO and SAC continue to uphold their status as prominently competitive methods in RL, even when applied in the context of robot control policies. For example, [R1] reports a very exhaustive evaluation of standard RL algorithms where PPO and SAC are often among the top 3 best performing methods. A similar observation was made in [R2]. Moreover, it is worth emphasizing that the competitive performance displayed by PPO and SAC renders them enticing choices for real robotic manipulation tasks [R3]. Therefore, even though these methods have their origins dating back 5-6 years, their competitive performance establishes them as foundational baselines in our work. Nevertheless, following the concern raised by this reviewer and the suggestion made by reviewer **vzrR**, we added a new baseline: PMOE [R4], which is a method to train deep RL policies using a probabilistic mixture of experts via GMMs (see Figs. 1 and 2 in the attached PDF, the former is an updated version of Fig. 3 of the original paper). Note that *our approach also outperforms both versions of this newly added baseline*, which provides additional experimental evidence on the importance of considering the geometric structure arising from the space of GMMs and the associated Bures-Wasserstein metric on the GMM parameters. [R1]. [F. Helfenstein, Benchmarking Deep Reinforcement Learning Algorithms. MSc. Thesis, 2021.](https://www.ias.informatik.tu-darmstadt.de/uploads/Team/DavideTateo/felix_thesis.pdf) [R2]. [MushroomRL: Simplifying Reinforcement Learning Research, JMLR, 2021.](https://jmlr.org/papers/volume22/18-056/18-056.pdf) [R3]. 
[Continuous control actions learning and adaptation for robotic manipulation through reinforcement learning. Autonomous robots, 2022.](https://link.springer.com/content/pdf/10.1007/s10514-022-10034-z.pdf?pdf=button) [R4]. [Ren J, et al. Probabilistic mixture-of-experts for efficient deep reinforcement learning. arXiv, 2021.](https://arxiv.org/pdf/2104.09122.pdf) 2. **Robotics environments**: In our paper we reported three different 2D robotic tasks that reflect the most common robot motion generation problems: reaching, obstacle avoidance, and multimodality. Moreover, we also conducted an additional experiment with a simulated 7-DoF robotic manipulator that learned a collision-avoidance behavior in the 3D Cartesian space of the end-effector (see Appendix A.6.3 for details). However, in the spirit of providing further experimental studies as suggested by this reviewer, we tested the same collision-avoidance behavior in the 3D narrow-path setting reported in Appendix A.6.3, with the difference that the robot motion skill is learned in the robot joint space (i.e., the state $\mathbf{s} = \mathbf{q} \in \mathbb{R}^7$ and the action $\mathbf{a} = \dot{\mathbf{q}} \in \mathbb{R}^7$). In this case, the simulated robot was controlled via a joint velocity controller at a frequency of 100Hz. This experiment is aimed at *assessing the capabilities of our approach to adapt robot motion policies in state-action spaces of higher dimensions*. As shown in Fig. 2 in the attached PDF, our approach is able to adapt the robot motion policy so that the robot end-effector safely passes through a narrow path defined by two spherical obstacles (the narrow-path task description is the same as in Sec. 4.1 of the original paper). As observed, it is clear that our approach outperforms all the baselines in this simulated robotic task, providing evidence that our approach scales and is able to adapt robot motion policies in higher-dimensional tasks. 3. 
**Future work ideas**: We agree with the reviewer that the suggested future work directions may improve the performance of our proposed method, although it is worth highlighting that most of them are algorithmic improvements. However, due to time constraints, we decided to focus on improving our comparison against new baselines and on showing the performance of our method in higher-dimensional settings, as reported in the new figures shown in the attached PDF. Note that the newly added experiments support our findings reported in the original paper: leveraging the geometric structure of the probabilistic mixture of Gaussians provides higher success rates and lower-variance solutions. **Questions**: 1. **Improvement on empirical evaluation**: As discussed previously, we added two additional baselines to our comparison and tested our approach in a higher-dimensional robotic setting. These new experiments align with our findings discussed in the original paper, providing strong evidence on the importance of accounting for the geometry of the policy structure in the formulation of policy optimization methods. --- Rebuttal Comment 1.1: Title: Re : Rebuttal and further experiments Comment: Thanks for these clarifications. Having run additional experiments is appreciated and the performance uplift of using the WGF method is now more tangible. Based on these efforts and results I am indeed raising my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We truly appreciate your positive feedback on the additional work we made to improve our paper according to your suggestions. Once more, we are very grateful for your time and feedback provided during the review/rebuttal processes.
Rebuttal 1: Rebuttal: We would like to thank the reviewers and the area chair for their work and feedback on our manuscript. Below we summarize the main points addressed in our rebuttal. We also attach a PDF with two figures showing newly added results for additional experiments as requested by the reviewers. **Rebuttal summary** 1. Following the recommendations from reviewers **zWwA**, **uu8b**, and **vzrR**, we performed the following new experiments: * Tested our approach (and baselines) on a new simulated robotic task of higher state-action dimensionality. Our results show that *our method is able to adapt the learned policy and outperforms all baselines, providing lower-variance solutions and faster convergence*. * Added two new baselines (i.e., PMOE-PPO and PMOE-SAC) based on the method suggested by reviewer **vzrR**. The results show that *our approach still outperforms all methods*, even though the PMOE-based formulations provided a performance gain when compared to the vanilla PPO and SAC versions. 2. We provided a thorough clarification about the position of our paper w.r.t. natural gradient approaches: * We clarified that *we did include in our original submission the only paper (to the best of our knowledge) that uses the Wasserstein natural gradient in RL*. As the core of our paper is on Wasserstein gradient flows, we omitted the literature on the KL divergence. * We indicated that our original submission included a short discussion (with equations) in App. A.4 to explain the connection between our Riemannian optimization and the approximation provided by an inexact natural gradient approach. * We pointed out that our original submission included an ablation study in App. A.6.2 where we compared our method against an optimization solving the implicit Euler scheme (Eqs. 41 and 42), for which a natural gradient method is an approximation. 3. 
We discussed the importance of GMM-based policies, their wide use in robotic applications, and the recent efforts in the machine learning community to scale GMMs to very high-dimensional spaces. We trust that our rebuttal has provided clear explanations for all the questions raised by the reviewers. Finally, we wish to thank the reviewers in advance for their time in reviewing our rebuttal and for their forthcoming involvement in the discussion phase. Pdf: /pdf/0615772b3fd4a1d15209fa938f45b2f4704b47b1.pdf
NeurIPS_2023_submissions_huggingface
2023
PRED: Pre-training via Semantic Rendering on LiDAR Point Clouds
Accept (poster)
Summary: This work incorporates images into point cloud pre-training since images contain richer semantic information. Instead of adopting the back-projection strategy, which cannot handle the misalignment between camera and LiDAR, this paper leverages the neural rendering technique to inject the semantics into the representation learning process. Strengths: (1) This paper is well-written and presents a clear and concise idea. (2) The motivation of leveraging neural rendering to overcome the mismatch between pixels and points is compelling. (3) The experimental results demonstrate consistent improvement on different datasets with varying baselines. And the ablation studies validate the contribution of each component in the pipeline. Weaknesses: (1) The technical contribution is limited. For example, the BEV-conditioned semantic neural rendering strategy and the masking strategy have been studied in previous works. (2) Although the occlusion problem can be alleviated by assigning a reduced weight to occluded points, the rendering process will introduce additional noise since it assigns multiple semantic labels to each point, leading to semantic ambiguity. (3) The rendering process allows every point along the ray to receive gradients, which leads to many irrelevant points being optimized. (4) The authors claim that neural rendering is superior to the point-to-pixel projection, but there is no performance comparison between these two methods in the experiments. I think the latter may also bring an obvious improvement given its unambiguous projections. (5) This work takes the semantic labels as supervision for neural rendering. I think it would be better to use the 2D feature vectors as the supervision. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My major concern lies in the superiority of this rendering process compared to the point painting process. 
While the authors address the occlusion issue, I firmly believe that it can be effectively resolved by accurately determining the object boundaries. By precisely locating the boundaries of objects, the occlusion challenge can be easily mitigated, potentially diminishing the need for a complex rendering approach. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback regarding the paper's clarity, motivation, and experimental results. We understand your concerns and would like to address your points one by one. **Q1: Concerns regarding technical contribution.** **A1:** Our main contribution lies in a novel pre-training framework for outdoor point clouds. By combining neural rendering with point-wise masking and using 2D semantic labels as supervision, we provide a comprehensive solution that effectively addresses the challenges of reconstruction ambiguity and occlusion in point cloud pre-training. The effectiveness of our approach has been demonstrated through improved performance on several downstream tasks. Moreover, to the best of our knowledge, we are the first to introduce a BEV-conditioned semantic neural rendering strategy for point cloud pre-training. --- **Q2: Semantic ambiguity arising from the rendering process.** **A2:** Assigning multiple labels to a single point can be a source of noise. However, our dataset's emphasis on autonomous driving scenarios mitigates this concern. Given the car-centric viewpoint of our images, overlaps between different perspectives remain minimal. For instance, in the nuScenes dataset, the overlap between two views is less than 10%. Consequently, multi-label rendering affects a limited set of points. Future research can further mitigate this by selectively sampling non-overlapping regions, a refinement we'll underscore in our revision. --- **Q3: The rendering process allows every point in the ray to receive the gradients which leads to many irrelevant points being optimized.** **A3:** The rendering process allows for the geometric understanding of the scene. When rendering semantics, we apply the stop-gradient to the signed distance as explained in Line 162. This means that only a small number of significant points with larger weights are primarily optimized by semantic loss. 
In our depth rendering, while we don't curtail gradients to seemingly 'irrelevant' points, these contribute substantially to the model's geometric understanding of the scene. This geometry perception proves pivotal for downstream tasks, like object detection. --- **Q4: Contrasting neural rendering with point-to-pixel projection.** **A4:** Sorry for any oversight. Kindly direct your attention to Table 5 for a comprehensive comparison between neural rendering and point-to-pixel projection. The results illustrate the substantial superiority of our neural rendering approach over the point-to-pixel projection method. --- **Q5: Embracing 2D feature vectors as supervision.** **A5:** We have explored the option of supervising the model with feature vectors extracted from the segmenter's backbone. Through comparative results (shown below) under the experimental settings as detailed in Section 4.4, we find that both supervision signals are effective. | Supervision | PreTrain | mAP | NDS | |:-----------------:|:--------:|:--------:|:--------:| | baseline | ❌ | 61.5 | 68.0 | | semantic label | ✔️ | 64.2$_{+2.7}$ | 69.7$_{+1.7}$ | | feature vector | ✔️ | 64.3$_{+2.8}$ | 69.4$_{+1.4}$ | We ultimately chose to use supervision from semantic labels as it simplifies the computation of the loss function. Specifically, semantic labels are utilized for class-balanced sampling and loss weights, as outlined in Lines 128-135. Nonetheless, the potential of 2D feature vectors remains promising, particularly when sourced from advanced Vision Foundation Models such as SAM [60]. Even though SAM doesn't predict semantic labels, its impressive generalization capabilities could enhance the effectiveness of our method, a prospect worth deeper exploration. [60] SAM: Segment Anything. --- **Q6: Comparing neural rendering to the point painting process.** **A6:** The idea of utilizing accurate object boundaries to address occlusion challenges is indeed intriguing. 
However, in the context of autonomous driving, which involves sparse and incomplete point clouds, accurately determining boundaries becomes challenging due to these constraints. Furthermore, this methodology would require pre-processing object segmentation and 3D boundary computations, potentially increasing system complexity. While we value your perspective, our research leverages neural rendering, effectively managing occlusion without resorting to explicit boundary detection. Nevertheless, exploiting the inductive bias of point clouds to handle occlusion presents a promising direction that we are eager to explore in future efforts. --- Rebuttal Comment 1.1: Comment: After carefully reading other comments, I believe the solution proposed by the authors is fancy but lacks substance. Although the experiments validate the rendering-based approach over the point-to-pixel approach, the experimental justification, relying only on a few metrics of downstream tasks, appears rather weak. I am now inclined to give 5 (borderline accept). --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thanks for your comments. In this work, we present a semantic rendering approach for point cloud pre-training. This approach effectively addresses challenges related to reconstruction ambiguities and occlusions. Our method consistently demonstrates enhancements compared to various baseline methods across a diverse range of datasets and tasks. We will further revise the paper accordingly.
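As background for the rendering discussion in this thread: volumetric rendering composites per-sample quantities along a ray using transmittance-based weights, which is what lets occluded samples receive reduced weight. A conceptual NumPy sketch of compositing per-sample semantics and depth follows (the paper's actual weights are derived from signed distances, NeuS-style, with a stop-gradient on the semantic branch; this illustration with our own function name does not reproduce those details):

```python
import numpy as np

def composite_along_ray(alphas, sem_probs, depths):
    """Alpha-composite per-sample semantics and depth along one ray.

    alphas:    (S,) opacity of each sample along the ray
    sem_probs: (S, C) per-sample class probabilities
    depths:    (S,) depth of each sample along the ray
    """
    # Transmittance: probability that the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    w = trans * alphas                # rendering weights; occluded samples get less
    sem = w @ sem_probs               # rendered per-ray semantics, shape (C,)
    depth = float(np.sum(w * depths)) # rendered (expected) depth
    return w, sem, depth
```

For instance, a fully opaque first sample makes all later weights zero, which is how occluded points behind a surface are down-weighted during supervision.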
Summary: This paper investigates weakly-supervised representation learning for outdoor LiDAR point clouds. To start, the authors point out that the inherent incompleteness of outdoor LiDAR points would reduce the effectiveness of self-supervised representation learning approaches. To mitigate this, the authors propose to use synchronized images as additional signals to supervise the representation learning process. Observing the lack of color information in the point cloud and the mismatch between points and pixels due to occlusion, the authors propose to use neural rendering of pixel semantics as a pre-text task to circumvent the two problems. Specifically, with a slight modification to NEUS’s weighting function, the authors build an implicit neural representation on top of the BEV LiDAR features, positing that good BEV LiDAR features for implicit neural representations could be good representations for any downstream recognition tasks. Extensive experiments have been conducted to show the effectiveness of the approach. Strengths: - Novelty - Using neural rendering as a pre-text task for point cloud representation learning is a novel idea the reviewer has never seen in the literature. - Beautiful figures - Figure 1,2,3 are aesthetically pleasing, with figure 2 clearly illustrating the core problems in representation learning for outdoor LiDAR point clouds. - Strong performance improvement over baselines that do not use images as additional signals. - Thorough experiments - Experiments thoroughly demonstrate the strength of the approach. Weaknesses: Overall, this is a strong paper but the reviewer has to point out three weaknesses — two related to presentation and the other related to baselines. - (Presentation 1) One of the contributions of the paper is to show that images could be valuable signals for pre-training representation. However, it is unclear which baselines in table 1 and 2 actually use additional images as signals. 
Also, it is unclear what type of signals are being used (i.e. raw pixel values vs pixel semantics). - (Baseline) Another contribution of the paper is to show that pixel semantics is a useful signal for pre-training representation. Although the proposed approach did not converge when trying to render raw pixel value (line 101-103), SLidR (table 2) is a baseline that actually leverages color information to form superpixel for pre-training. Given the big difference between the no-pretrain variants for SLidR and ours, it is difficult to judge whether pixel-value is a weaker signal or there is some other difference (such as model architecture or optimization recipes) that is causing SLidR to underperform PRED. - (Presentation 2): Since using pixel semantics is part of the core contributions, it is important for the authors to acknowledge/mention whether there are any overlaps between the pixel semantic label space and the downstream task label space. Without this, it is difficult to tell whether the proposed approach is a self-supervised approach or a weakly-supervised approach. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The reviewer believes this is a good paper. However, the reviewer thinks it is important to properly address the weaknesses (especially on the SLiDR baseline and presentation 2). For pre-rebuttal, the reviewer would give “borderline reject” to implore the authors to properly address the weaknesses. If the weaknesses are sufficiently addressed, the reviewer is more than happy to give this submission a “strong accept”. Suggestions: - To fix the first weakness in the presentation, the reviewer recommends adding a column in table 1 and 2 to indicate whether and the type of pixel signal used. Also, it would be great to emphasize this in section 4.2. - To fix the second weakness in the presentation, the reviewer recommends adding a paragraph indicating label overlap between the pixel semantics and downstream tasks in section 4.1. 
- For the baseline, the reviewer would like to know why the “no-pretrain” SLidR baseline is much worse than the “no-pretrain” OURS baseline in Table 2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Yes, the authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are encouraged by your acknowledgment of our work's novelty and thoroughness in experimentation. We aim to address the concerns raised to offer clearer insights into our methodology. **Q1: Clarity on image signals in Tables 1 and 2.** **A1:** To offer greater clarity, we will introduce a dedicated column in Tables 1 and 2 that indicates which baselines utilize image signals and the specific type of those signals - be it raw pixel values or pixel semantics. Here are rough versions of how the updated tables might look:

Table 1:

| Method | PreTrain | Pixel Signal Used | mAP | NDS |
|:----------------:|:--------:|:-----------------:|:--------:|:--------:|
| CenterPoint | ❌ | - | 56.2 | 64.5 |
| PointContrast | ✔️ | - | 56.3$_{+0.1}$ | 64.4$_{−0.1}$ |
| GCC-3D | ✔️ | - | 57.3$_{+1.1}$ | 65.0$_{+0.5}$ |
| ProposalContrast | ✔️ | - | 57.4$_{+1.2}$ | 65.1$_{+0.6}$ |
| GD-MAE | ❌ | - | 58.1 | 65.6 |
| GD-MAE | ✔️ | - | 58.9$_{+0.8}$ | 66.1$_{+0.5}$ |
| Ours (CenterPoint) | ❌ | - | 61.5 | 68.0 |
| Ours (CenterPoint) | ✔️ | pixel semantics | 64.2$_{+2.7}$ | 69.7$_{+1.7}$ |

Table 2:

| Method | PreTrain | Pixel Signal Used | mAP |
|:-------------------:|:--------:|:------------------------:|:--------:|
| PointRCNN | ❌ | - | 28.74 |
| SECOND | ❌ | - | 51.89 |
| CenterPoint | ❌ | - | 60.05 |
| PointContrast | ❌ | - | 51.89 |
| PointContrast | ✔️ | - | 53.59$_{+1.70}$ |
| SLidR | ❌ | - | 28.80 |
| SLidR | ✔️ | super-pixel & pixel feature | 30.72$_{+1.92}$ |
| ProposalContrast | ❌ | - | 64.24 |
| ProposalContrast | ✔️ | - | 66.32$_{+2.08}$ |
| GD-MAE | ❌ | - | 62.62 |
| GD-MAE | ✔️ | - | 64.92$_{+2.30}$ |
| Ours (CenterPoint) | ❌ | - | 64.28 |
| Ours (CenterPoint) | ✔️ | pixel semantics | 67.41$_{+3.13}$ |

Additionally, we'll underscore the significance of harnessing image signals in Section 4.2, elaborating on how distinct pixel signals like raw pixel values and pixel semantics can profoundly affect model efficacy. 
--- **Q2: Performance discrepancy between 'no-pretrain' SLidR and OURS baselines in Table 2.** **A2:** The distinction between the 'no-pretrain' versions of SLidR and our model originates from their foundational detection frameworks. SLidR employs PointRCNN as its detector, a methodology grounded in point-based detection. In contrast, our approach is designed for a BEV-based detector, which typically yields a superior baseline performance. Even though our baseline has less room for improvement, the boost our method offers over the baseline surpasses that of SLidR. For instance, our method (CenterPoint) achieves a remarkable +3.13 mAP improvement compared to SLidR's +1.92 mAP gain. One more clarification: SLidR's methodology extends beyond the utilization of color information; it incorporates image features for point cloud contrastive learning supervision. These image features are extracted through a pre-trained ResNet employing MoCov2. In contrast, our strategy only uses pixel semantics. We will clarify these points in the revised version of our paper. --- **Q3: Overlap of labels between pixel semantics and downstream tasks.** **A3:** In our revised manuscript's Section 4.1, we will incorporate a detailed overview indicating the label overlaps between the pixel semantics used in our pre-training and those of our downstream tasks. Here is a preliminary look at the label overlap: 1. Pixel Semantics: Our pre-training phase utilizes pixel semantic labels that include 19 classes such as road, sidewalk, building, wall, fence, pole, traffic light, sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, and bicycle. 2. Downstream tasks: - In the nuScenes object detection task, the labels include car, truck, construction vehicle, bus, trailer, barrier, motorcycle, bicycle, pedestrian, and traffic cone. - For the ONCE object detection task, the labels are limited to vehicle, pedestrian, and cyclist. 
- The nuScenes BEV map segmentation task uses labels such as drivable, pedestrian crossing, walkway, stop line, car park, and divider. There's an overlap in labels like 'car', 'bicycle', and 'pedestrian', and our approach is more in line with the field of weakly supervised learning. Furthermore, we have the prospect of integrating Vision Foundation Models (VFMs), like SAM [60], into our framework, capitalizing on their semantic features as the supervision. These models exhibit enhanced generalization capabilities, which could potentially result in further performance improvements for our method. However, given the relatively brief existence of VFMs, there remains some ambiguity regarding whether pre-training with VFMs falls within the realm of self-supervised or weakly-supervised methods. We will discuss these in the revision. [60] SAM: Segment Anything. --- Rebuttal Comment 1.1: Title: Response Comment: The reviewer thanks the authors for the detailed response. The reviewer's concerns are sufficiently addressed and decides to raise the rating to strong accept. --- Reply to Comment 1.1.1: Title: Thanks for your positive feedback! Comment: Thank you very much for the positive feedback! Your constructive comments and suggestions are very helpful in improving our paper quality. Thanks!
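The label overlap described in A3 above can be checked with a small set computation. This is an illustrative sketch using the class lists quoted in the rebuttal; the mapping of 'person' to 'pedestrian' is an assumed naming normalization, not something the paper states:

```python
# Class lists as quoted in the rebuttal (A3).
pixel_semantics = {
    "road", "sidewalk", "building", "wall", "fence", "pole", "traffic light",
    "sign", "vegetation", "terrain", "sky", "person", "rider", "car", "truck",
    "bus", "train", "motorcycle", "bicycle",
}
nuscenes_detection = {
    "car", "truck", "construction vehicle", "bus", "trailer", "barrier",
    "motorcycle", "bicycle", "pedestrian", "traffic cone",
}

# 'person' vs. 'pedestrian' is only a naming difference, so normalize it
# before intersecting (an assumed mapping for this illustration).
normalize = {"person": "pedestrian"}
overlap = {normalize.get(c, c) for c in pixel_semantics} & nuscenes_detection
assert overlap == {"bicycle", "bus", "car", "motorcycle", "pedestrian", "truck"}
```

Six of the ten nuScenes detection classes also appear as pixel-semantic labels, which supports classifying the approach as weakly supervised rather than self-supervised.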
Summary: This paper proposes a novel point cloud pre-training framework, PRED, which leverages the semantic-information consistency between LiDAR point clouds and camera images to improve point cloud pre-training performance. The authors propose (1) a novel semantic rendering module for decoding the semantics from the BEV feature maps and (2) a point-wise masking mechanism to alleviate the reconstruction ambiguity. The proposed pre-training method demonstrates superior performance according to the experiments section. Strengths: (1), To the best of my knowledge, the proposed point cloud pre-training framework with semantic rendering is novel and reasonable. (2), Point cloud pre-training is an important task for academia and industry. (3), According to the experiment section, the proposed framework PRED has achieved superior performance on multiple benchmarks. Weaknesses: (1), Dealing with occlusion is claimed as one of this paper's major contributions and advantages. However, why occluded points are allocated a lower weight is not illustrated clearly in the paper. The authors could add more explanation, analysis, visualization, and evaluation. (2), A pre-trained semantic model is required for the proposed approach. This point should be marked and compared with other methods in Tables 1 and 2. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1), The proposed method relies on well-trained 2D segmentation models, which limits its generalization ability. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: (1) Is there anything special or novel for handling occlusion compared to [46]? If not, then this point should not be highlighted. 
(2) How would the proposed method perform if the 2D segmenter fails? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The recognition of our approach's potential significance in both academic and industrial circles is particularly encouraging. We acknowledge the concerns you've highlighted and would like to offer clarifications: **Q1: Is there anything special or novel for handling occlusion compared to [46]? Why are occluded points allocated a lower weight?** **A1:** Sorry for any confusion. The key point we aim to emphasize in our paper is the importance of addressing occlusion during point cloud pre-training, a factor that has been overlooked in prior works. In this regard, we apply neural rendering, drawing inspiration from and aligning with [46], to effectively handle occlusion with encouraging results. The allocation of a lower weight to occluded points stems from our volume rendering computation, as outlined in Equation 2. For two depth values $t_1$ and $t_2$ that satisfy $d(t_1) = d(t_2)$ and $t_1 < t_2$, we also have $\rho(t_1) = \rho(t_2)$, since the density depends only on the SDF value. Hence $$w(t_1) - w(t_2) = \exp \left(-\int_0^{t_1} \rho(u) \mathrm{d} u\right) \rho(t_1) - \exp \left(-\int_0^{t_2} \rho(u) \mathrm{d} u\right) \rho(t_2) = \left(\exp \left(-\int_0^{t_1} \rho(u) \mathrm{d} u\right) - \exp \left(-\int_0^{t_2} \rho(u) \mathrm{d} u\right)\right) \rho(t_1) > 0,$$ where the final inequality holds because $\rho > 0$ and the transmittance $\exp(-\int_0^{t} \rho(u) \mathrm{d} u)$ is strictly decreasing in $t$. This indicates that, for points with equivalent SDF values, those proximal to the viewpoint are awarded greater weight, resulting in occluded points receiving reduced weight during rendering. We will clarify this in the revised manuscript. --- **Q2: The pre-trained semantic model is required for the proposed approach, which should be marked and compared with other methods in Tables 1 and 2.** **A2:** Your suggestion is well-received. We will address this point in Tables 1 and 2. 
Here are rough versions of how the updated tables might look:

Table 1:

| Method | PreTrain | Pixel Signal Used | mAP | NDS |
|:----------------:|:--------:|:-----------------:|:--------:|:--------:|
| CenterPoint | ❌ | - | 56.2 | 64.5 |
| PointContrast | ✔️ | - | 56.3$_{+0.1}$ | 64.4$_{−0.1}$ |
| GCC-3D | ✔️ | - | 57.3$_{+1.1}$ | 65.0$_{+0.5}$ |
| ProposalContrast | ✔️ | - | 57.4$_{+1.2}$ | 65.1$_{+0.6}$ |
| GD-MAE | ❌ | - | 58.1 | 65.6 |
| GD-MAE | ✔️ | - | 58.9$_{+0.8}$ | 66.1$_{+0.5}$ |
| Ours (CenterPoint) | ❌ | - | 61.5 | 68.0 |
| Ours (CenterPoint) | ✔️ | pixel semantics | 64.2$_{+2.7}$ | 69.7$_{+1.7}$ |

Table 2:

| Method | PreTrain | Pixel Signal Used | mAP |
|:-------------------:|:--------:|:------------------------:|:--------:|
| PointRCNN | ❌ | - | 28.74 |
| SECOND | ❌ | - | 51.89 |
| CenterPoint | ❌ | - | 60.05 |
| PointContrast | ❌ | - | 51.89 |
| PointContrast | ✔️ | - | 53.59$_{+1.70}$ |
| SLidR | ❌ | - | 28.80 |
| SLidR | ✔️ | super-pixel & pixel feature | 30.72$_{+1.92}$ |
| ProposalContrast | ❌ | - | 64.24 |
| ProposalContrast | ✔️ | - | 66.32$_{+2.08}$ |
| GD-MAE | ❌ | - | 62.62 |
| GD-MAE | ✔️ | - | 64.92$_{+2.30}$ |
| Ours (CenterPoint) | ❌ | - | 64.28 |
| Ours (CenterPoint) | ✔️ | pixel semantics | 67.41$_{+3.13}$ |

We hope these amendments will help readers better understand the prerequisites of our method and its comparison with others. --- **Q3: The generalizability of using 2D segmentation models.** **A3:** Our method leans on well-trained 2D segmentation models. However, given their widespread adoption spanning a diverse spectrum of applications, we believe this reliance is justified. Notably, with the recent advancements in Vision Foundation Models (VFMs), we envision leveraging these VFMs to further enhance our method's generalization capabilities. For instance, SAM [60], a currently popular segmenter known for its strong generalization performance, could be integrated into our framework. 
While SAM's predictions exclude semantic labels, its semantic features present an attractive supervisory signal, marking an exciting trajectory for our ensuing endeavors. Moreover, our method has consistently demonstrated its efficacy across a range of datasets, benchmarks, and tasks, underscoring the inherent robustness ingrained within our approach. Nevertheless, we will try to explore the feasibility of utilizing VFMs to further enhance the generalization ability. [60] SAM: Segment Anything. --- **Q4: Implications of potential 2D segmenter failures.** **A4:** In instances where the 2D segmenter fails to accurately predict semantics, a potential impact on the quality of pre-training arises due to the methodology's reliance on 2D semantic labels. However, our extensive empirical evaluations have consistently demonstrated the segmenter's resilience across a diverse array of datasets. This robustness can be attributed to its foundation on the Cityscape dataset, which includes scenes similar to those present in other datasets. Nonetheless, we acknowledge there might be situations where the 2D segmenter could be unreliable. In such cases, our approach incorporates the maximum prediction score as a weighting factor within our loss function, as detailed in Lines 127-129 of our methodology section. This weighting scheme effectively assigns reduced significance to potentially erroneous semantic labels, thereby mitigating the potential consequences of segmentation errors on the overall pre-training process. --- Rebuttal Comment 1.1: Title: Keep my rating Comment: Thanks to the authors for responding to my comments with further comparative studies and analyses. The practice of utilizing 2D output to aid in training 3D tasks is widely accepted. The remaining issue pertains more to the constraint of denoising and enhancing the generalizability of the 2D views, which are restricted. 
The authors have indeed taken into account my comments, yet the novelty of the paper remains somewhat constrained. I intend to maintain my current rating. --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: Thank you for your feedback. In this study, we introduce a semantic rendering approach for pre-training point clouds that effectively tackles the challenges associated with reconstruction ambiguities and occlusions. Across a diverse range of datasets and tasks, our method consistently demonstrates improvements over various baseline methods. We will further revise the paper accordingly.
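The occlusion-weighting argument in A1 of the rebuttal above can also be checked numerically. The following is a minimal sketch with a hypothetical density profile (two equal peaks standing in for two surfaces with the same SDF value), not the authors' implementation:

```python
import numpy as np

# A hypothetical density profile along one camera ray: two equal Gaussian
# peaks stand in for two surfaces with the same SDF value, the first at
# t = 2 (visible) and the second at t = 6 (occluded behind it).
t = np.linspace(0.0, 8.0, 801)
dt = t[1] - t[0]
rho = np.exp(-((t - 2.0) ** 2) / 0.02) + np.exp(-((t - 6.0) ** 2) / 0.02)

# Transmittance T(t) = exp(-integral_0^t rho(u) du), accumulated with the
# left rule, and rendering weight w(t) = T(t) * rho(t).
T = np.exp(-np.concatenate(([0.0], np.cumsum(rho[:-1]))) * dt)
w = T * rho

i1 = int(np.argmin(np.abs(t - 2.0)))
i2 = int(np.argmin(np.abs(t - 6.0)))
assert np.isclose(rho[i1], rho[i2])  # equal densities (equal SDF values)
assert w[i1] > w[i2]                 # the nearer surface gets more weight
```

Because the transmittance factor decreases monotonically along the ray, the occluded peak necessarily receives less rendering weight, matching the inequality derived in A1.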
Summary: This work proposes a new pretraining algorithm for outdoor 3D perception tasks, where images are utilized to provide comprehensive semantic information. The main algorithm is to leverage the semantics of images for supervision through neural rendering. The authors also apply point-wise masking with a high mask ratio to further enhance performance. The pretraining brings notable performance gains on multiple benchmarks. Strengths: 1. The authors provide a novel insight into the exploitation of image semantics by combining an off-the-shelf image segmenter and neural rendering, which I think is well-motivated. 2. Extensive experiments on multiple benchmarks demonstrate the effectiveness of the proposed method. 3. The paper is well-written and easy to follow. The illustrations are straightforward and helpful. Weaknesses: 1. I acknowledge the motivation and the method design of this paper. Different from neural rendering for RGB as NeRF does, the high-level idea of rendering semantics from point clouds is quite novel. However, I have heavy concerns about the use of a Cityscape-pre-trained segmenter, as it is not general enough; e.g., a Cityscape-pre-trained segmenter can perform well on nuScenes and ONCE but not on Waymo. Although the authors explain on line 101 of the appendix why the Waymo experiments do not perform that well, I think the poor generalizability of the pre-trained segmenter may also be a reason. So I suggest replacing the pre-trained segmenter with SAM. 2. This work shares a similar spirit with Ponder [1]. Although [1] targets indoor scenes while this paper targets outdoor scenes, the authors should carefully discuss the intrinsic differences. [1] Ponder: point-cloud pre-training via neural rendering. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part; I would raise my score if the two concerns can be addressed. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback regarding the motivation and results of our research. We understand your concerns about certain aspects of our paper and would like to provide some clarification. **Q1: On the selection of the pre-trained segmenter.** **A1:** We will definitely explore the possibility of replacing the segmenter with SAM. The selection of the pre-trained segmenter on Cityscape was motivated by its contextual relevance to our experimental datasets—all involve autonomous driving scenarios. The diverse array of scenes within Cityscape aligns with those found in other datasets, thus enhancing the segmenter's ability to generalize across a range of autonomous driving datasets. In the case of Waymo, its point cloud exhibits greater density compared to other datasets, attributed to the utilization of 64-beam LiDAR scanning, whereas datasets like nuScenes employ 32-beam LiDARs. Moreover, Waymo only includes images captured from front and side views, whereas images in other datasets encompass a wider range of perspectives. As a result, the utility of these images in enhancing point cloud downstream tasks, such as object detection, remains limited on the Waymo dataset. This observation also provides an explanation for the relatively modest improvements achieved through image-derived semantic information during the pre-training phase. Nonetheless, we agree with you that SAM might exhibit superior generalization performance compared to the segmenter trained on Cityscape. While SAM exclusively predicts object masks without providing semantic labels, an intriguing strategy could involve substituting semantic rendering with semantic feature rendering. In this scenario, SAM could be employed to extract semantic features from images to serve as the supervision. This endeavor indeed holds great promise, and we deeply value your insightful suggestion. 
In future work, we will consider employing Vision Foundation Models like SAM to enhance the generalizability of our pre-training framework. --- **Q2: The intrinsic differences between our work and Ponder.** **A2:** Several fundamental differences indeed exist between our work and Ponder, and it's crucial to emphasize these. - First and foremost, the application domains differ: Ponder focuses on indoor scenes, where point clouds often contain color information, facilitating color-based pre-training supervision. In contrast, our work addresses outdoor environments—specifically, autonomous driving—where point clouds are typically LiDAR-derived and colorless. Consequently, Ponder is not applicable due to the absence of color data. In this context, we propose semantic rendering. Unlike Ponder's color-based rendering, our approach capitalizes on the semantic consistency between point clouds and images, offering a distinct strategy for point cloud pre-training. - Second, the pixel sampling strategies differ: Unlike the one-to-one point cloud-to-pixel correspondence found in densely scanned indoor point clouds, outdoor point clouds exhibit only partial semantic information from the image due to their sparsity. Furthermore, semantic imbalance prevails within the point clouds. Our method addresses these challenges by sampling pixels projected from point clouds with a class-balanced strategy. This stands in contrast to Ponder's approach of random sampling from images. - Lastly, Ponder employs a 3D voxelized feature volume for rendering, whereas we favor a bird's eye view (BEV) feature-based rendering strategy better suited to the unique traits of outdoor settings. We will discuss Ponder in our revision. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I would prefer to see results with SAM. 
Although SAM does not provide semantic labels, could you use "Semantic-Segment-Anything", "Recognize Anything", and/or "SAM+OV-seg" (https://huggingface.co/spaces/facebook/ov-seg)? If the model can be adapted to these models and this is demonstrated during the rebuttal phase, it would make a very strong case for acceptance. My concerns about the differences to Ponder are addressed. Thanks! --- Reply to Comment 1.1.1: Title: Adaptation to SAM-Based Segmenter and Performance Prospects Comment: Thanks for your constructive suggestions. We have implemented the method you suggested, substituting the segmenter with a SAM-based approach. Given the ease of use and time constraints, we opted to employ Semantic-Segment-Anything (SSA) as the segmenter. The results are outlined below. Apart from the segmenter, all other experimental parameters remain consistent with those detailed in Section 4.4.

| Segmenter | PreTrain | mAP | NDS |
|:----------------:|:--------:|:-----------------:|:--------:|
| Baseline | ❌ | 61.5 | 68.0 |
| DeepLabv3 | ✔️ | 64.2$_{+2.7}$ | 69.7$_{+1.7}$ |
| Semantic-Segment-Anything | ✔️ | 64.5$_{+3.0}$ | 69.9$_{+1.9}$ |

Thanks to SAM's strong generalization capabilities and segmentation performance, our method demonstrated further enhancements when paired with SSA as the segmenter. We anticipate that fine-tuning hyperparameters will lead to even more substantial performance improvements.
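The class-balanced pixel sampling mentioned in A2 of this thread (sampling pixels projected from the point cloud with a class-balanced strategy) might look like the following sketch. The labels, the budget of 256 pixels per class, and the helper name are all hypothetical; the paper's exact scheme is not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical semantic labels of the pixels hit by projected LiDAR points;
# class 0 (e.g. road) dominates, mimicking the semantic imbalance the
# rebuttal describes for outdoor scenes.
labels = rng.choice([0, 0, 0, 0, 1, 1, 2], size=10_000)

def class_balanced_sample(labels, n_per_class, rng):
    """Sample up to n_per_class pixel indices from every semantic class,
    so rare classes are not drowned out by frequent ones."""
    picked = [
        rng.choice(np.flatnonzero(labels == c),
                   size=min(n_per_class, int((labels == c).sum())),
                   replace=False)
        for c in np.unique(labels)
    ]
    return np.concatenate(picked)

sample = class_balanced_sample(labels, 256, rng)
counts = np.bincount(labels[sample])
assert counts.max() <= 256  # no class dominates the training batch
```

Random sampling would give class 0 roughly 57% of the batch here; the balanced version caps every class at the same budget.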
NeurIPS_2023_submissions_huggingface
2023
Summary: 1. The paper proposes PRED, a novel pre-training framework for outdoor point clouds that leverages image semantics through neural rendering. The paper addresses the challenges of incompleteness and occlusion in point clouds, which are common in outdoor LiDAR datasets for autonomous driving. 2. The paper uses an encoder-decoder architecture to extract a BEV feature map from the point cloud and render semantic maps from image views, supervised by image segmentation and depth estimation. 3. The paper incorporates point-wise masking with a high mask ratio (95%) to enhance the pre-training performance and avoid losing semantics of small objects. 4. The paper conducts extensive experiments on nuScenes and ONCE datasets, showing that PRED significantly improves various baselines and state-of-the-art methods on 3D object detection and BEV map segmentation tasks. Strengths: 1. The paper presents a novel pre-training framework for outdoor point clouds that integrates image semantics through neural rendering. This is a creative and effective way to address the incompleteness and occlusion issues in point clouds, which are often overlooked by previous pre-training methods. The paper also introduces point-wise masking with a high mask ratio, which is different from the conventional patch-wise or group-wise masking strategies and preserves more semantics of small objects. 2. The paper is technically sound and well-motivated. The paper provides a clear and detailed description of the proposed method, including the encoder-decoder architecture, the semantic rendering pipeline, the loss functions, and the masking strategy. The paper also conducts extensive experiments on two large-scale datasets, nuScenes and ONCE, and compares with various baselines and state-of-the-art methods on 3D object detection and BEV map segmentation tasks. The paper reports significant improvements over previous methods, demonstrating the effectiveness and generality of the proposed framework. 
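The point-wise masking described in the summary (item 3) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation; the point cloud and helper name are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(20_000, 3))  # a stand-in LiDAR sweep, shape (N, 3)

def pointwise_mask(points, mask_ratio, rng):
    """Mask individual points i.i.d. with probability mask_ratio.
    Unlike patch-wise masking, a small object with m points still keeps
    roughly (1 - mask_ratio) * m of them instead of vanishing entirely."""
    keep = rng.random(len(points)) >= mask_ratio
    return points[keep], keep

visible, keep = pointwise_mask(points, 0.95, rng)  # 95% mask ratio
assert abs(keep.mean() - 0.05) < 0.01  # about 5% of points survive
```

The i.i.d. per-point masking is what preserves small-object semantics under the very high 95% ratio: every object loses the same fraction of points in expectation, rather than whole patches disappearing.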
Weaknesses: 1. The paper does not conduct ablation studies on the choice of image segmentation model and its impact on the pre-training performance. The paper uses DeepLabv3 as the image segmenter but does not justify or evaluate this choice. It is unclear how the quality and accuracy of the image segmentation model affect the semantic rendering and supervision. 2. This paper does not report the computational cost or time complexity of the pre-training framework. Semantic rendering involves sampling and aggregating points along multiple rays for each pixel, which may be computationally expensive and memory-intensive. It would be helpful to provide some statistics or benchmarks on the pre-training speed and resource consumption. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see the comments on 'weaknesses'. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Please see the comments on 'weaknesses'. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're gratified by your acknowledgment of our approach's novelty and efficacy. We also value the concerns you've raised, and here's our detailed response: **Q1: Evaluating the Impact of Image Segmentation Model Choices on Pre-training Performance.** **A1:** We acknowledge the importance of examining the choice of image segmentation model. To this end, we conducted ablation studies involving a range of renowned segmentation models, namely PSPNet [58], DeepLabv3 (MobileNet), DeepLabv3 (ResNet101), and SegFormer [59]. Keeping other settings consistent with Section 4.4, our results (as presented below) demonstrated robustness in pre-training performance across these models. DeepLabv3 ultimately emerged as our preferred choice, owing to its impressive performance and user-friendliness. These results will be incorporated into our revision.

| Image Segmenter | PreTrain | mAP | NDS |
|:-------------------------:|:--------:|:--------:|:--------:|
| baseline | ❌ | 61.5 | 68.0 |
| PSPNet | ✔️ | 63.9$_{+2.4}$ | 69.5$_{+1.5}$ |
| DeepLabv3 (MobileNet) | ✔️ | 64.2$_{+2.7}$ | 69.7$_{+1.7}$ |
| DeepLabv3 (ResNet101) | ✔️ | 64.4$_{+2.9}$ | 69.7$_{+1.7}$ |
| SegFormer | ✔️ | 64.3$_{+2.8}$ | 69.9$_{+1.9}$ |

[58] PSPNet: Pyramid Scene Parsing Network.\ [59] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. --- **Q2: Computational cost or time complexity of the pre-training framework.** **A2:** Sorry for overlooking these details earlier; we will incorporate this information into the revised version. For our pre-training scheme, we utilized eight V100 GPUs, each equipped with 32GB memory. Abiding by the configurations detailed in Section 4.1 and Appendix A, and capitalizing on mixed precision training, each GPU consumed approximately 28GB to 30GB of memory. A complete pre-training run spanning 45 epochs on the nuScenes dataset was accomplished in around 32 hours. 
Additionally, training across the ONCE datasets—20 epochs for the small variant, 5 for the medium, and 3 for the large—required approximately 30, 38, and 45 hours, respectively. Since we only sample 768 pixels per scene as a training batch in each iteration, the overall training time and resource consumption are acceptable. --- Rebuttal Comment 1.1: Comment: I would thank the authors for their rebuttal. I will retain my positive score.
Locality Sensitive Hashing in Fourier Frequency Domain For Soft Set Containment Search
Accept (spotlight)
Summary: This paper presents a novel approach, called FourierHashNet, for fast soft set containment search. The key idea is to extend set containment to soft set containment by representing query and document elements as embedded representations, instead of atomic IDs. The authors propose a dominance similarity measure based on hinge distance and transform it into the frequency domain using a Fourier transform. This allows for the efficient use of traditional LSH techniques. Strengths: 1. The significance of the proposed asymmetric dominance similarity measure is emphasized, indicating its critical role in the targeted applications. By transforming the dominance similarity measure into the frequency domain, the authors enable the utilization of traditional LSH methods. This approach not only enhances retrieval efficiency but also demonstrates a better trade-off between query time and retrieval quality. 2. This paper is well-written. It effectively communicates the technical aspects of the proposed method, including the use of the Fourier transform and the learning of data-sensitive hash codes. Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it possible to discuss the potential of FourierHashNet for similarity search with general kernel measures? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors thank reviewer wHYk for the positive feedback. In the general response, we delve into a broader application context of FourierHashNet, particularly for similarity search involving shift invariant functions. Please reach out to us if you have further inquiries or points for clarification.
Summary: Locality-sensitive hash (LSH) functions are mappings from sets of queries and documents to "buckets," in such a way that similar queries and documents are assigned to the same bucket. This paper proposes an asymmetric LSH where the notion of similarity is related to the hinge distance (called dominance similarity, based on the element-wise inequalities between components of two vectors). The method conceptually follows three steps: truncate, transform, and sample. First, the dominance similarity is truncated to a bounded region of the input domain; this is done so that the problem is tractable. Then, the similarity is Fourier-transformed into a form where the contributions from different vectors separate into an inner product. This form is used to sample Fourier representations that are used to compute the LSH in practice. The paper also contains experiments comparing the proposed ALSH with other LSH algorithms on two web log datasets, where the task is to perform vector search under the hinge distance. FourierHashNet outperforms these methods by a sizable margin. To further improve practical performance, the paper proposes a learnable version of the algorithm and conducts ablations to investigate its effect. **Post rebuttal:** The authors provide a general framework for ALSH, of which the hinge distance LSH is a specific instance. This substantially extends the utility and impact of the method over the original proposal. Strengths: **S1: Important topic.** The problem of designing an asymmetric LSH with an exotic collision probability is useful and interesting. There is not much work in this area, but there are several emerging LSH applications (not just search) where this capability will be helpful. For example, the [LGD sampler [NeurIPS 2019]](https://proceedings.neurips.cc/paper_files/paper/2019/file/a1e865a9b1065392ed6035d8ccd072d9-Paper.pdf) uses LSH functions that are sensitive to classification loss functions. 
Some of these are non-symmetric, so new ways to construct asymmetric LSH can correspond to faster optimization routines for more problems. The efficient kernel-matrix multiplication algorithm of Backurs et al. [[ICML 2021]](https://arxiv.org/abs/2102.08341) is based on LSH and currently only works for symmetric kernels - asymmetric LSH could substantially extend this framework. An asymmetric LSH was recently developed for the linear regression/classification loss and was used to learn differentially-private classifiers [[CCS 2021]](https://dl.acm.org/doi/abs/10.1145/3460120.3485255); new ALSH functions will also directly expand the usefulness of this framework. Therefore, ALSH is an increasingly important topic with the potential to impact search, ranking, optimization, numerical linear algebra, differential privacy, and likely other areas. **S2: Novelty.** This paper attacks a technically difficult problem (ALSH for the hinge distance) and introduces some new ALSH techniques to do the design. I checked the math and the results seem to be correct. Note that I did not check details (e.g. the algebra involved in going from (7) to (8), or the derivation of (5) in the appendix). Weaknesses: **W1: Presentation.** The presentation and flow of the argument can be improved. Here are some specific examples: - The "i" used to denote sqrt(-1) should probably be the standard italic i (with a dot) rather than the symbol \i. - It may be more correct to refer to the "hinge distance" as the "hinge quasimetric" or the "hinge divergence" because it is not strictly speaking a metric (it obeys all properties except symmetry). - On line 177, consider mentioning that Equation (3) is just the Fourier Transform of -d(q,x) from Equation (1). It took me a moment to determine why (3) was written as the sum rather than as K iterated integrals. - Equation (4) could be introduced as a "clamped" or "truncated" version of -d(q,x), to make its definition more clear to the reader. 
- The "Fence Sitting" and "bit balance" losses don't seem particularly well-known (the only reference I could find was in "Adversarial Permutation Guided Node Representations for Link Prediction" in [AAAI 2021](https://arxiv.org/abs/2012.08974) - this paper should probably be cited). Therefore, these losses could possibly use some more explanation. - The results figures for the experiments would benefit from better layout to make the graphics larger (e.g. using subplots with tight_layout or similar). - Several components seem out of place. For example, the formal definitions of LSH and ALSH are introduced but are not used to prove that the hash function is indeed an ALSH. Observations about symmetric LSH (e.g. the paper by Chierichetti and Kumar) are mentioned throughout the development of the asymmetric LSH in Section 3, and it might be clearer to collect all these arguments in Section 2. The sampling algorithm is fairly involved and probably merits its own section. **W2: Impact.** The main contribution of the paper is an asymmetric LSH function for a very specific similarity function. This paper could have an impact on the LSH algorithm area and on any area that uses the hinge distance for set search. However, I have some concerns about the applicability of the ideas outside of what seems to be a fairly niche search problem. - **On LSH.** As mentioned above, there are several problems that can benefit from asymmetric LSH. However, it is hard to see which contributions from this paper might be more broadly applicable. It would be much more interesting (and appeal to a wider audience) if this algorithm can be generalized into a broader "truncate - transform - sample" template. Such a template might apply directly to other applications. It is also fairly essential to demonstrate that this is, indeed, an ALSH (that provably satisfies Definition 2.2). 
- **On set-search.** The community seems to be moving away from representing sets as a single vector and toward approaches that examine the pairwise similarity between all elements in the set (e.g. [ColBERT](https://arxiv.org/pdf/2004.12832.pdf), [ColBERT v2](https://arxiv.org/abs/2112.01488), [PLAID](https://arxiv.org/abs/2205.09707), several follow-ups). This approach was originally motivated by the comparatively high cost of performing transformer inference but is now [doing very well](https://arxiv.org/abs/2212.01340) on tasks like passage ranking. Many of the papers cited as applications are more than five years old (pre-2019), so it is not clear whether search with the hinge distance is still a pressing problem. **W3: Experiments.** The baselines in the experiments are not very competitive, leading to a limited evaluation. LSH is a classical, standard technique but it is no longer state-of-the-art for most search/ranking problems. Other approaches (like FAISS-IVF or HNSW) have taken over as the search algorithm of choice. These methods are also compatible with the hinge distance (though, this does require use of the C++ APIs rather than the simpler Python API). They also tend to be an order of magnitude faster than LSH, according to well-established benchmarks ([ann-benchmarks](https://ann-benchmarks.com), see also the NeurIPS 21 competition [big-ann-benchmarks](https://big-ann-benchmarks.com)). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I like this line of research and would be willing to raise my score if some of the following items can be addressed. In particular, I'm looking for evidence that the hinge distance is an important bottleneck and for evidence that the new techniques developed in this paper could be useful more broadly. - Is it possible to generalize the sampling process (bottom of page 5) into a generic algorithm with theoretical guarantees (e.g. on the expectation and possibly variance of the estimate)? 
This would go a long way towards addressing W2, as it would be a result that other works could build off of. - Is element-wise vector inequality the current state-of-the-art for the applications mentioned in the paper? ColBERT (which examines all pairwise relationships between the two sets) and follow-ups have shown very good results in natural language applications, and determinantal point processes are SOTA in many market basket analysis applications (e.g. "Learning Nonsymmetric Determinantal Point Processes" at [NeurIPS 2019](https://proceedings.neurips.cc/paper_files/paper/2019/file/cae82d4350cc23aca7fc9ae38dab38ab-Paper.pdf)). I am less familiar with the knowledge graphs and subgraph isomorphism applications mentioned in the appendix so it might be possible that the problem is very strongly motivated by these areas. I did take a quick look at the references but I did not find this search problem mentioned. - Can this algorithm handle box embeddings (perhaps with some modifications to the algorithm / the embeddings)? Search over box embeddings is a known bottleneck and is one reason why, despite modeling advantages, they have not replaced angular-similarity embeddings in major industrial recommendation pipelines. - Is it possible to prove that this scheme is an ALSH (i.e. satisfies Definition 2.2) for the hinge distance? Due to the truncation in Equation (4), it might be necessary to restrict the domain over which we expect Definition 2.2 to hold (but this would be fine and would not affect the quality of the theoretical result). - Just to clarify - what are the "gold" and "silver" instances on page 6? Is this the same as the gold relevance labels from the datasets (page 7)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Yes - no issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
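The dominance similarity at the center of this review can be made concrete with a small sketch. This is an illustration added for the reader, not code from the paper, and the exact sign convention of the hinge distance is an assumption:

```python
import numpy as np

def hinge_distance(q: np.ndarray, x: np.ndarray) -> float:
    """One plausible form of the hinge (dominance) distance:
    the summed elementwise violations of q >= x. It is zero exactly
    when q dominates x, and it is asymmetric in its arguments."""
    return float(np.maximum(x - q, 0.0).sum())

q = np.array([3.0, 2.0, 5.0])
x = np.array([1.0, 2.0, 4.0])
print(hinge_distance(q, x))  # 0.0: q dominates x elementwise
print(hinge_distance(x, q))  # 3.0: x does not dominate q
```

This asymmetry is exactly why a symmetric LSH cannot capture the similarity, and why separate "query" and "document" hash functions (an asymmetric LSH) are needed.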
Rebuttal 1: Rebuttal: We thank reviewer AbHb for the spectacular feedback. > Proof that FourierHashNet is an ALSH Note that $p(\omega^j_k)\propto|Re(S(\omega _k ^{j}))|+|Im(S(\omega _k ^{j}))|$. Let $I$ be the proportionality constant. Assume $\mathrm{sim}(q,x)>s _m>0$ and $\cos^{-1}$ is $L _{\cos}$-Lipschitz. We have: $||\pmb{F} _q(\pmb{\omega}^{1...M})|| _2^2=||\pmb{F} _x(\pmb{\omega}^{1...M})|| _2^2= \sum _{j\in[M],k\in[K]}\frac{|Re(S(\omega _k^j))|+|Im(S(\omega _k^j))|}{p(\omega^j _k)}=MKI$. We use $\Pr _{g,h}[g(q)=h(x)]=\mathbb{E} _{\pmb{\omega}^{j}}[\Pr _{g,h}[g(q)=h(x)|\pmb{\omega}^j]]$ to write \begin{align} &\Pr _{g,h}[g(q)=h(x)|\pmb{\omega}^j]=1-\frac{1}{\pi}\cos^{-1}\bigg(\frac{\pmb{F} _q(\pmb{\omega}^{1...M})^{\top}\pmb{F} _x(\pmb{\omega}^{1...M})}{||\pmb{F} _q(\pmb{\omega}^{1...M})||\ ||\pmb{F} _x(\pmb{\omega}^{1...M})||}\bigg) \\\\ &=1-\frac{1}{\pi}\cos^{-1}\bigg(\frac{\operatorname{sim}(q,x)}{KI}\bigg) \\\\ &-\frac{1}{\pi}\cos^{-1}\bigg(\frac{\pmb{F} _q(\pmb{\omega}^{1...M})^{\top}\pmb{F} _x(\pmb{\omega}^{1...M})}{MKI}\bigg) +\frac{1}{\pi}\cos^{-1}\bigg(\int _{\omega}\frac{\pmb{F} _q(\pmb{\omega})^{\top}\pmb{F} _x(\pmb{\omega})p(\pmb{\omega}) d \pmb{\omega}}{KI}\bigg)---(A) \end{align} Note \begin{align} &\mathbb{E}\bigg[-\cos^{-1}\bigg(\frac{\pmb{F}_q(\pmb{\omega}^{1...M})^{\top}\pmb{F}_x(\pmb{\omega}^{1...M})}{MKI}\bigg) +\cos^{-1}\bigg(\int _{\omega}\frac{\pmb{F}_q(\pmb{\omega})^{\top}\pmb{F}_x(\pmb{\omega})p(\pmb{\omega}) d \pmb{\omega}}{KI}\bigg) \bigg] \\\\ &\le\frac{L _{\cos}}{KI}\mathbb{E}\bigg|\sum _{j\in[M]}\pmb{F}_q(\pmb{\omega}^{j})^{\top}\pmb{F}_x(\pmb{\omega}^{j})/M-\int _{\omega} \pmb{F} _q(\pmb{\omega})^{\top}\pmb{F} _x(\pmb{\omega})p(\pmb{\omega}) d \pmb{\omega}\bigg| \\\\ &\le\frac{L _{\cos}}{KI\sqrt{M}}\sqrt{K}\bigg(\text{Var}\bigg[\pmb{F}_q(\omega_k)^{\top}\pmb{F}_x(\omega_k )\bigg]\bigg)^{1/2}\le\frac{L _{\cos}}{\sqrt{KM}}---(B) \end{align} The last inequality follows from a bound on the variance via \begin{align} 
&\pmb{F}_q(\omega_k)^{\top}\pmb{F}_x(\omega_k)\\\\ &=\pmb{S}_q(\omega_k)^{\top}\pmb{S}_x(\omega_k)/p(\omega_k)\quad(\text{Eq 9 in paper})\\\\ &=\frac{Re[\pmb{S}(\omega_k)]\cos\omega_k(q[k]-x[k])-Im[\pmb{S}(\omega_k)]\sin\omega_k(q[k]-x[k])}{(1/I)[|Re[\pmb{S}(\omega_k)]|+ |Im[\pmb{S}(\omega_k)]|]}\\\\ &\le I\frac{|Re[\pmb{S}(\omega_k)]|+|Im[\pmb{S}(\omega_k)]|}{|Re[\pmb{S}(\omega_k)]|+|Im[\pmb{S}(\omega_k)]|}=I \end{align} Putting (B) into (A), for a dissimilar pair with $\operatorname{sim}(q,x)\le c\cdot s _m$ we have \begin{align} \mathbb{E}[\Pr\_{g,h}[g(q)=h(x)|\pmb{\omega}^{j}]]\le p_2=1-\frac{1}{\pi}\cos^{-1}\bigg(\frac{c\cdot s _m}{KI}\bigg)+ L_{\cos}/\pi\sqrt{KM} \end{align} Similarly, for a similar pair with $\operatorname{sim}(q,x)\ge s _m$ we have: \begin{align} \mathbb{E}[\Pr\_{g,h}[g(q)=h(x)|\pmb{\omega}^{j}]]\ge p_1=1-\frac{1}{\pi}\cos^{-1}\bigg(\frac{s _m}{KI}\bigg)- L_{\cos}/\pi\sqrt{KM} \end{align} If we ensure \begin{align} M>\frac{4L _{\cos}^2}{K\left[\cos^{-1}\big(\frac{c\cdot s _m}{KI}\big)-\cos^{-1}\left(\frac{s _m}{KI}\right)\right]^2}, \end{align} then we have $p_1>p_2$. This satisfies the condition for ALSH. > *W3: Experiments. ColBERT/FAISS/HNSW vs A/LSH and FourierHashnet* FourierHashNet beats FLORA (Fig 3 in main), which is a very recent (2023) neural retrieval model that outperforms HNSW. In response to your highlighting FAISS-IVF, we compare the retrieval performance of FAISS-IVF variants against FourierHashNet (Fig. 14 and 15 of the rebuttal PDF). FAISS-IVF retrieval suffers because its quantizers, which assign vectors to Voronoi cells, rely on a metric like L2 or IP; these are unsuitable for the asymmetric hinge distance. Also, IVF-style retrieval may fall short in other applications. We will incorporate the following discussion into the manuscript. **Textual entailment** (SNLI, MNLI) is usually presented as *classification*: (text_a, text_b) → {implies, contradicts, none} etc. The most accurate methods inject ([CLS], text_a, [SEP], text_b) into an early-cross-interaction transformer, and predict the label from the output [CLS] embedding. 
The associated *retrieval* problem is, given text_b and a large corpus, to quickly find the top-K corpus text_a that are most likely to imply text_b. Early cross-interaction precludes effective indexing/hashing. Lai+Hockenmaier propose asymmetric late interaction: separately obtain embeddings of text_a and text_b, and force the former to elementwise dominate the latter (by fine-tuning the transformer). Thanks to our work, the dominance distance is now hashable. **Box embeddings** and **order embeddings** are used to model type hierarchies in knowledge graphs (KGs) and object hierarchies in images. If type t1 is a subtype of t2, we expect their order embeddings to follow elementwise dominance, or the box embedding of t1 to be contained in that of t2. They can be used in our setup; see the global response. > *In set-search, elementwise cross-interaction better than single set embeddings?* For applications other than retrieving soft supersets of the query set (as is the case in ColBERT-type passage retrieval), ColBERT and follow-ups may not be the best choice: * ColBERT uses a symmetric term-to-term similarity, which, unlike the hinge distance, is inadequate for detecting subgraph isomorphism (see Table 16 in the rebuttal PDF). * If we are looking for subsets (rather than supersets) of the query, then our hinge distance continues to work. But ColBERT can now assign multiple query atoms to the same passage atom. * If the query (text or graph) is large, ColBERT (using IVF) will still hit many FAISS clusters and thus amass a large number of corpus items to score. This can be controlled better using ALSH in Fourier space. For these reasons we believe ALSH for the hinge distance remains very strongly motivated. >*"gold" and "silver"* Instances with gold labels carry ground-truth relevance labels. After training with gold labels, silver labels are predicted from hinge distances. These are used to train the hashing protocol. 
>*broader application of "truncate - transform - sample" template.* Please refer to general response. --- Rebuttal Comment 1.1: Comment: Wow, thank you! This is an incredible response and I have raised my score by 2 points. **Regarding proof of ALSH:** I am happy to see a proof of ALSH, though the $L_{\cos}$-Lipschitz condition effectively restricts the range of $p_1$, $p_2$ for which the ALSH guarantee holds (because $d / dx \arccos(x) \to -\infty$ as $x \to 1$, so really high $p_2$ is a problem). Otherwise I was able to verify the proof. Your result is particularly nice because it shows how to set $M$ (which determines the time and space complexity of the hash). **Regarding general framework:** This is fantastic! My only question here is whether the ALSH-style $p_1, p_2$ guarantees can be extended to the general framework, perhaps by assuming some additional (smoothness?) conditions on $a(\mathbf{q} - \mathbf{x})$. This might let you get guarantees for hinge distance, ColBERT, box embeddings etc as specific instantiations (note that this is possibly out of scope, potentially something to consider in follow-up work if it is nontrivial / not easy). **Regarding experiments:** Thank you for conducting an evaluation with FAISS. This is a solid result and it increases my confidence in the method. It might be possible to use hinge distance for the voronoi clustering / quantization too, but this is likely beyond what a practitioner would be expected to do in adapting FAISS for their specific use case. **Regarding graph baselines:** I still think it is important to compare with a graph-based index (HNSW or one of the many follow-ups: DiskANN, SpeedANN, EFANNA - they are all variations of the same core algorithm, so you would only really need to compare against one). Even though FLORA > HNSW and FourierHashNet > FLORA, this *does not* imply that FourierHashNet > HNSW because the experiment setups are much different. 
FLORA is a semantic hashing method that attempts to learn the representation space (embeddings) alongside the hashing function - the advantage of FLORA is likely a better data representation and not a faster search (LSH loses to HNSW in evaluations where the metric is not learnable). The FLORA evaluation also implements HNSW in a very nonstandard way (instead of a distance metric, they use a "semantic relevance" score defined by a neural network). HNSW will likely do much better for the hinge distance over pre-trained embeddings because this obeys the triangle inequality. I do recognize that it would be difficult to run this during the rebuttal period (due to time constraints, and also because `nmslib/Hnswlib` is hard to extend - though in this case, adding hinge distance is probably do-able just by inheriting from `SpaceInterface`). But it's an important point to make. However, it wouldn't substantially change my review - I think your truncate-transform-sample framework is significant, even if it loses to HNSW, because of the many aforementioned applications of asymmetric LSH. **Regarding revision:** Please do your best to integrate the general framework / ALSH proofs into the next version of the paper. For readers coming from the hashing / LSH community, the framework will be the most interesting part of the paper. **Minor points / follow-up questions:** - In the revision, it would be great to have a table of all the new similarities that are now LSH-able, to highlight the impact. - It would also be nice (but non-essential) to see the empirical collision probability (take 100 FourierHashNet hashes and plot the average collision rate between points against the ground-truth hinge distance). - For the new experiment, is the wall-clock speedup against FAISS also ~2x? Hashing / graph algorithms are sometimes slower than cluster search, even though they perform fewer distance calculations, due to their memory access pattern and cache locality effects. 
To summarize, your response completely changed my view on this paper. All of the important points are addressed and I plan to argue in favor of acceptance. Great job! --- Reply to Comment 1.1.1: Comment: We once again extend our sincere gratitude to the reviewer for their insightful and appreciative feedback. > I do recognize that it would be difficult to run this during the rebuttal period (due to time constraints, and also because nmslib/Hnswlib is hard to extend - though in this case, adding hinge distance is probably do-able just by inheriting from SpaceInterface). But it's important point to make. However, it wouldn't substantially change my review - I think your truncate-transform-sample framework is significant, even if it loses to HNSW, because of the many aforementioned applications of asymmetric LSH. As per your suggestion, we extended the `SpaceInterface` class of `nmslib/Hnswlib` to implement HNSW for hinge distance. In order to track the number of distance computations performed by HNSW during retrieval, we used a counter inside `fstdistfunc_`. We searched across different values of `M`, `ef` and `ef_construction`, and tracked the number of distance computations against corresponding MAP values. |#calls to `fstdistfunc_`| MAP | Method | |:----------------------:|:----:|:--------------:| | 839 | 0.17 | HNSW | | 1162 | 0.53 | FourierHashNet | | 1549 | 0.43 | HNSW | | 1668 | 0.51 | HNSW | | 1846 | 0.58 | FourierHashNet | | 2578 | 0.59 | HNSW | | 3529 | 0.71 | HNSW | | 3926 | 0.72 | FourierHashNet | | 4773 | 0.77 | HNSW | | 5347 | 0.74 | FourierHashNet | | 6694 | 0.81 | HNSW | The table (perhaps better viewed as a scatter) presents our study on MSWEB dataset with 10734 corpus items. We observe that FourierHashNet LSH has an edge over HNSW in the regime of fewer distance computations, with a MAP of 0.53 using 1162 distance computations. 
However, when allowed more distance computations, HNSW outperforms FourierHashNet with a MAP of 0.71 in 3529 computations, and a MAP of 0.77 in 4773 computations. (We count the number of pseudo-distance computations as a surrogate for real time, to avoid non-determinism in measurements and low-level implementation differences. It was not possible to *exactly* equalize the number of distance computations performed by HNSW and FourierHashNet by tuning their respective hyperparameters. HNSW has many performance-tuning parameters; we will present a more complete exploration of this space in the updated manuscript.) > In the revision, it would be great to have a table of all the new similarities that are now LSH-able, to highlight the impact. Thanks! We will make sure to do so. > It would also be nice (but non-essential) to see the empirical collision probability (take 100 FourierHashNet hashes and plot the average collision rate between points against the ground-truth hinge distance). | Hinge distance ⟶ | >1e+1 | [1e+1,1e+0] | [1e+0,1e-1] | <1e-2 | |:-------------------------:|:-----:|:-----------:|:-----------:|:-----:| | #Buckets↓ | | | | | | 2^5 | 0.024 | 0.08 | 0.24 | 0.32 | | 2^7 | 0.005 | 0.04 | 0.15 | 0.21 | | 2^9 | 0.001 | 0.02 | 0.1 | 0.14 | We present the empirical collision probabilities for some randomly sampled embedding pairs. The embedding pairs are sampled such that their hinge distances are at varying orders of magnitude. The columns of the table indicate the distances in decreasing order, while the rows indicate the number of buckets in the hash tables as dictated by the hashcode lengths. > For the new experiment, is the wall-clock speedup against FAISS also ~2x? Hashing / graph algorithms are sometimes slower than cluster search, even though they perform fewer distance calculations, due to their memory access pattern and cache locality effects. 
We analyze the wall-clock times, for various numbers of comparisons (K), for both FAISS-IVF and FourierHashNet. For meaningful benchmarking, we implement FourierHashNet utilizing Falconn's C++ LSH APIs, while comparing against FAISS-IVF's C++ code. Our study confirms that FourierHashNet's ~2x speedup against FAISS-IVF, earlier reported in terms of number of comparisons, also holds in terms of wall-clock time. In the final draft, we will add a scatter plot for illustration.
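As background for the collision-rate discussion above: for the signed-random-projection hashes $g,h$ used in the ALSH proof, a single hash bit collides with probability $1-\theta/\pi$, where $\theta$ is the angle between the two feature vectors. A self-contained sketch (with made-up 8-dimensional vectors standing in for the Fourier features) that checks this empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def srp_bits(v, planes):
    """One sign bit per random hyperplane (signed random projection)."""
    return planes @ v > 0

f_q = rng.normal(size=8)  # stand-in for the query's Fourier feature vector
f_x = rng.normal(size=8)  # stand-in for the document's Fourier feature vector

planes = rng.normal(size=(100_000, 8))
empirical = np.mean(srp_bits(f_q, planes) == srp_bits(f_x, planes))

cos_sim = f_q @ f_x / (np.linalg.norm(f_q) * np.linalg.norm(f_x))
theoretical = 1.0 - np.arccos(cos_sim) / np.pi
# The two agree up to Monte Carlo error of order 1/sqrt(#planes).
print(empirical, theoretical)
```

This is the generic angular-LSH fact the proof builds on; FourierHashNet's contribution is the construction of the feature vectors whose angle encodes the hinge distance.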
Summary: This paper studies a new search problem called vector dominance (or set containment), where the authors provide strong motivation from various real-world applications. They present a new approach named FourierHashNet along with a fresh asymmetric vector dominance distance to address this problem. Through extensive experimentation, the study confirms the effectiveness of the proposed distance measure and demonstrates the superior performance of FourierHashNet compared to established LSH and ALSH baselines. Strengths: - The authors study a new yet significant problem, which contains many well-motivated applications regarding text, image, and graph retrieval. - They propose an effective distance measure customized for the problem they looked at. - The FourierHashNet is interesting, and it is reasonable to combine learning to hash together with Fourier features. - They conduct extensive experiments to confirm the effectiveness of FourierHashNet. Weaknesses: Overall, in my opinion, this is a well-written paper in various aspects. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please further proofread and polish the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work does not appear to have any negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors thank reviewer 2JDE for the positive feedback. We will undertake thorough proofreading and polishing for the final version of the manuscript.
Rebuttal 1: Rebuttal: # Truncate-transform-sample paradigm ### General Framework Indeed, our algorithm can be generalized to include a wide variety of scoring functions, including box-embedding-based volume scores, facility-location scores used by ColBERT, etc. Our framework extends to any shift-invariant scoring function of the form $a(\pmb{q} -\pmb{x})$, which remains unchanged if $\pmb{q}$ and $\pmb{x}$ are both shifted by the same vector $\pmb{\delta}$. Assuming bounded values $||\pmb{q}|| _{\infty} \le q _{\max}$ and $||\pmb{x}|| _{\infty} \le x _{\max}$, we define a truncated function $\operatorname{sim}(q,x) = s(\pmb{q} -\pmb{x}) = a(\pmb{q} -\pmb{x}) -a _{\min}$ that is active only within the specified bounds and zero otherwise (in our case, $a = -d(q,x)$ and $a _{\min} = -KT$). This allows an absolutely convergent (because of truncation) Fourier transformation: \begin{align} S(\pmb{\omega})=\frac{1}{(2\pi)^K} \int_{\pmb{t} \in \mathbb{R}^K}s(\pmb{t}) e^{-i \pmb{ \omega}^\top \pmb{t} } d \pmb{ t } \end{align} and the inverse Fourier transform of $s(\pmb{q} -\pmb{x})$ as: \begin{align} s(\pmb{q} -\pmb{x}) & = \int_{\pmb{\omega} \in \mathbb{R}^K} S(\pmb{\omega}) e^{i \pmb{ \omega}^\top (\pmb{q} -\pmb{x}) } d \pmb{ \omega} = \int_{\pmb{\omega} \in \mathbb{R}^K} \pmb{S}_q(\pmb{\omega}) ^{\top} \pmb{S}_x(\pmb{\omega}) d\pmb{\omega} \end{align} Here, \begin{align} \pmb{S}_q( \pmb{\omega}) {=} \Big[\text{Sign}(Re(S(\pmb{\omega})))\sqrt{|Re(S(\pmb{\omega}))|} \big[\cos(\pmb{\omega}^\top \pmb{q}), \sin(\pmb{\omega}^\top \pmb{q})\big], \text{Sign}(Im(S(\pmb{\omega}))) \sqrt{|Im(S(\pmb{\omega}))|}\big[-\sin(\pmb{\omega}^\top \pmb{q}), \cos(\pmb{\omega}^\top \pmb{q}) \big]\Big] \end{align} and \begin{align} \pmb{S}_x( \pmb{\omega}) {=} \Big[ \sqrt{|Re(S(\pmb{\omega}))|} \big[\cos(\pmb{\omega}^\top \pmb{x}), \sin(\pmb{\omega}^\top \pmb{x})\big], \sqrt{|Im(S(\pmb{\omega}))|}\big[\cos(\pmb{\omega}^\top \pmb{x}), \sin(\pmb{\omega}^\top \pmb{x}) \big]\Big] \end{align} The above expressions are similar to Eq (7) in our paper, 
where they were defined for each component frequency $\omega_k$, thanks to the decomposability of the score function as a sum of independent scores across dimensions $(s(\pmb{q}-\pmb{x}) = \sum _{k=1} ^K s(q[k]-x[k]) )$. In contrast, here, we show that the setup can be extended to generic (shift-invariant) scoring functions which need not be decomposable as a sum across dimensions. However, we can define a similar distribution $p(\pmb{\omega})$ over the vector $\pmb{\omega}$ and obtain $s(\pmb{q} -\pmb{x}) = \mathbb{E} _{p(\pmb{\omega})} [ \pmb{S}_q( \pmb{\omega})^{\top} \pmb{S}_x( \pmb{\omega})/p(\pmb{\omega})]$. ### ColBERT E.g., given a query $q = (q _1,.., q _m)$ and one corpus item $x = (x_1,...,x_n)$, ColBERT computes the (facility-location-based) similarity score between these two sets as \begin{align} \operatorname{sim}(q,x) =\sum _{i =1} ^m \max _{j \in [n]} a(q _i,x _j) \end{align} If $a$ is a shift-invariant score $a(\pmb{q} _i-\pmb{x} _j)$, say, one inversely related to the Euclidean distance, then we can write its soft surrogate \begin{align} \operatorname{sim}(q,x) &=\frac{1}{\lambda} \sum _{i =1} ^m \log \left( \sum _{j\in [n]} \exp(\lambda a(\pmb{q} _i-\pmb{x} _j) )\right) \end{align} Note that the function $\operatorname{sim}(q,x)$ is a shift-invariant function of the form $s(\pmb{q}-\pmb{x})$. ### Box Embeddings The box-embedding-based volume score can be expressed as a shift-invariant score $a(\pmb{q} - \pmb{x})$. Here, the query and corpus items are expressed as boxes denoted by $(\pmb{z} _q, \pmb{Z} _q )$ and $(\pmb{z} _x, \pmb{Z} _x )$ respectively (the lower and upper corner coordinate vectors). The hard intersection between $q$ and $x$ is then the box $(\pmb{z}, \pmb{Z})$ where $\pmb{z} _{q,x}= \max ( \pmb{z} _q, \pmb{z} _x )$ and $\pmb{Z} _{q,x} = \min ( \pmb{Z} _q, \pmb{Z} _x )$. 
Then the score between $q,x$ is measured as: \begin{align} \operatorname{sim}(q,x) = \prod _{k = 1} ^K [\pmb{Z} _{q,x}[k]- \pmb{z} _{q,x}[k]] _+ \hspace{3cm} ---(S) \end{align} We will show that there exist embeddings $\pmb{q}$ and $\pmb{x}$ for which $\operatorname{sim}(q,x) = a( \pmb{q} - \pmb{x})$. We first note that $\max(x,y) = x + (y-x) _+$ and $\min(x,y) = y - (y-x) _+$. Using them, we have $\pmb{z} _{q,x} = \pmb{z} _{q} +( \pmb{z} _{x}-\pmb{z} _{q}) _+$ and $\pmb{Z} _{q,x} = \pmb{Z} _{x} -( \pmb{Z} _{x}-\pmb{Z} _{q}) _+$. Thus, Eq. (S) is written as \begin{align} \operatorname{sim}(q,x) = \prod _{k = 1} ^K [ \pmb{Z} _{x} -\pmb{z} _{q} -( \pmb{z} _{x}-\pmb{z} _{q}) _+ -( \pmb{Z} _{x}-\pmb{Z} _{q}) _+ ] _+ [k] \end{align} If we represent $\pmb{q} = [\pmb{z} _{q} , \pmb{z} _{q} ,\pmb{Z} _{q} ]$ and $\pmb{x} = [\pmb{Z} _{x}, \pmb{z} _{x}, \pmb{Z} _{x} ]$, then we have $\operatorname{sim}(q,x) = \prod _{k = 1} ^K [A _1 ( \pmb{q} -\pmb{x} ) - [A _2 ( \pmb{q} -\pmb{x} ) ] _+ -[A _3 ( \pmb{q} -\pmb{x} )] _+] _+$ where $A _1 = -[\mathbb{I}, 0 ,0], A _2 = -[ 0, \mathbb{I} ,0]$ and $A _3= -[0,0,\mathbb{I}]$. Thus $\operatorname{sim}(q,x)$ is shift-invariant with respect to $\pmb{q}$ and $\pmb{x}$, and we can extend our algorithm to the box-embedding setup. # Citations [Box and Order Embeddings] T Chheda et al. "Box Embeddings: An open-source library for representation learning using geometric structures". arXiv:2109.04997 [FLORA] K Doan et al. "Asymmetric Hashing for Fast Ranking via Neural Network Measures". SIGIR 2023. arXiv:2211.00619 [NeuroMatch] R Ying et al. "Neural Subgraph Matching". arXiv:2007.03092 [BERT-INT] X Tang et al. "BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment". IJCAI 2020. [SNLI] S Bowman et al. "A large annotated corpus for learning natural language inference". EMNLP 2015. [MNLI] A Williams et al. "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference". NAACL 2018. Pdf: /pdf/6592dbe0c4ddf969e4eda4cd69d978ddb018493b.pdf
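The transform-then-sample step of the framework can be sanity-checked on the textbook special case of a nonnegative spectrum, where the Re/Im sign bookkeeping disappears. The sketch below (our own illustration under that simplifying assumption, not the paper's actual estimator) uses the Gaussian similarity $s(\pmb{t})=\exp(-\|\pmb{t}\|^2/2)$, whose spectrum is itself a Gaussian density, so sampling frequencies from the spectrum turns $s(\pmb{q}-\pmb{x})$ into an expected inner product of per-point features:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 200_000  # dimension and number of sampled frequencies

q = rng.normal(size=K)
x = rng.normal(size=K)

# Bochner-style identity: s(q - x) = E_{w ~ N(0, I)}[cos(w.(q - x))]
#                                  = E[cos(w.q)cos(w.x) + sin(w.q)sin(w.x)],
# i.e. an inner product of features that depend on q and x separately.
w = rng.normal(size=(M, K))
feat_q = np.concatenate([np.cos(w @ q), np.sin(w @ q)])
feat_x = np.concatenate([np.cos(w @ x), np.sin(w @ x)])
estimate = feat_q @ feat_x / M

exact = np.exp(-np.sum((q - x) ** 2) / 2)
print(estimate, exact)  # agree up to O(1/sqrt(M)) Monte Carlo error
```

In the general framework above the spectrum can change sign or be complex, which is why $p(\pmb{\omega})$ is taken proportional to $|Re(S)|+|Im(S)|$ and the signs are folded into the features.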
NeurIPS_2023_submissions_huggingface
2023
Homotopy-based training of NeuralODEs for accurate dynamics discovery
Accept (poster)
Summary: This paper proposes a novel method for training NeuralODEs, based on synchronization and homotopy optimization. They show that the addition of the synchronization module can smooth the loss landscape, on which homotopy optimization can be applied to enhance training. The new training method achieves competitive results in convergence speed and interpolation and extrapolation accuracy when compared with other baseline methods, especially for long training data. In addition, they demonstrate the robustness of the method experimentally. Strengths: 1. The method of homotopy optimization has been introduced into the training of various neural networks, but this paper combines the idea of homotopy optimization with synchronization, and proposes a method that is very suitable for training NeuralODEs on dynamical-system-related tasks. 2. The new training method can effectively improve the efficiency of network training and has the potential to alleviate a series of problems caused by irregular loss landscapes. 3. Compared with other methods, this method places no restrictions on the model structure or parameters, so it is more general. 4. The experiments are valid. Weaknesses: 1. The explanations in this paper are all experimental and lack rigorous theoretical support. For example, it is unclear whether the Hessian trace of the loss, after adding the coupling terms for homotopy optimization, can be proven to be strictly bounded by that of the traditional method. 2. The authors only test the proposed method on low-dimensional toy models. 3. It is mentioned in the paper that there are many stabilized models that improve training by limiting the expressivity of the model, but these methods are not compared experimentally with the new method to verify its advantages. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is it possible to add some theoretical discussion on the smoothness of the loss landscape? 2. 
Could you provide experimental results on more complex real-world datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and the constructive feedback. We have added the responses to both of the reviewer's questions in the global response above. We look forward to additional discussions with the reviewer. We also add a bit more theoretical detail about synchronization below: Let's say that our system under consideration is $$d\mathbf{x}(t)/dt=\mathbf{F}(\mathbf{x}(t)),$$ where $\mathbf{x} \in \mathbb{R}^m$ is an m-dimensional vector. We transform the state vector into a scalar input variable of the form $u(t)=\mathbf{K}^T\mathbf{x}(t)=K_1x_1(t)+K_2x_2(t)+...+K_mx_m(t)$, with $\mathbf{K}$ a constant column vector. The output subsystem is then written as $$d\mathbf{y}(t)/dt=\mathbf{F}(\mathbf{y}(t))-\mathbf{D}(v(t)-u(t))$$, where $v(t)=\mathbf{K}^T\mathbf{y}(t)$ and $\mathbf{D}=(D_1, D_2, ..., D_m)^T$ is an m-dimensional constant vector. Observe that if $\mathbf{y}(t)=\mathbf{x}(t)$ is plugged into Eq. (2), the equation is satisfied, meaning that synchronization of the systems is possible for the combined system, Eqs. (1) and (2). Synchronization can be guaranteed by evaluating the largest Lyapunov exponent of the subsystem Eq. (2) with respect to the trajectory $\mathbf{y}(t)=\mathbf{x}(t)$ and checking that it is negative. Consider infinitesimal deviations of $\mathbf{y}(t)$ from $\mathbf{x}(t)$, $$ \mathbf{y}(t)=\mathbf{x}(t)+\delta\mathbf{y}(t). $$ From (2), $$ d\delta\mathbf{y}(t)/dt=[\partial\mathbf{F}(\mathbf{y})/\partial\mathbf{y}\mid_{\mathbf{y}=\mathbf{x}}-\mathbf{DK}^T]\delta\mathbf{y}.
$$ Here, if we regard $[\partial\mathbf{F}(\mathbf{y})/\partial\mathbf{y}\mid_{\mathbf{y}=\mathbf{x}}-\mathbf{DK}^T]$ as a matrix $\mathbf{A}$ and $\delta\mathbf{y}$ as an error vector $\mathbf{e}$, then Eq. (3) can be rewritten as: \begin{equation} d\mathbf{e}/dt=\mathbf{A}\mathbf{e} \end{equation} The solution of this differential equation is given by \begin{equation} \mathbf{e}(t)=e^{\mathbf{A} t}\mathbf{e}(0) \end{equation} where $\mathbf{e}(0)$ is the initial value of $\mathbf{e}(t)$. We can always make $\mathbf{e}(t)$ converge to zero by choosing a proper value of $\mathbf{K}$, so that the eigenvalues of $\mathbf{A}$ all have negative real parts. Connecting this result to the Hessian of the loss is a bit tricky at the moment, as the matrix $\bf A$ depends on the parameters as well, and we are continuing research on this front. We suspect it might be the case that it is impossible to state a global smoothness claim for the loss landscape. However, we believe that by drawing on the discussion of homotopy from topology, it may be possible to weakly assert the local smoothness of the loss manifold. The main general idea in topology is to study spaces which can be continuously deformed into one another. This idea is given mathematical substance by the introduction of homeomorphisms. If we take two topological spaces $T_1$ and $T_2$, then a map $\alpha$ from $T_1$ to $T_2$: \begin{equation} \alpha : T_1 \rightarrow T_2. \end{equation} is called a homeomorphism if it is both continuous and has an inverse which is also continuous. The notion of homotopy is inspired, as is that of homeomorphism, by the more informal or intuitive notion of deformation. While homeomorphism generates equivalence classes whose members are topological spaces, homotopy generates equivalence classes whose members are continuous maps. Take two continuous maps $\alpha_1$ and $\alpha_2$ from a space $X$ to a space $Y$: \begin{equation} \begin{split} \alpha_1 : X \rightarrow Y, \\ \alpha_2 : X \rightarrow Y.
\end{split} \end{equation} Then the map $\alpha_1$ is said to be homotopic to the map $\alpha_2$ if $\alpha_1$ can be deformed into $\alpha_2$; in precise mathematical terms, there exists a continuous map \begin{equation} F:X \times [0, 1] \rightarrow Y, \end{equation} such that $F$ satisfies \begin{equation} \begin{split} F(x, 0)=\alpha_1(x), \\ F(x, 1)=\alpha_2(x). \end{split} \end{equation} In other words, as the real variable $t$ in $F(x, t)$ varies continuously from 0 to 1 in the unit interval $[0, 1]$, the map $\alpha_1$ is deformed continuously into the map $\alpha_2$. Homotopy is an equivalence relation and it divides the space of continuous maps from $X$ to $Y$, which is written $C(X, Y)$, into equivalence classes. Since a homeomorphism is also a continuous map, these equivalence classes are unchanged under homeomorphism of $X$ or $Y$. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Although no strict theoretical proof has been given, the above explanation justifies the rationale of the method to a certain extent. However, its extrapolation prediction on more complex real-world systems is poor, which suggests that its applicable scenarios are limited. For the above reasons, I maintain the previous score.
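The linear-stability argument in the rebuttal above can be checked numerically. A minimal sketch, under the assumption of diagonal coupling $\mathbf{DK}^T=k\mathbf{I}$ (so the eigenvalues of $\mathbf{A}$ are exactly those of the Jacobian shifted by $-k$); the choice of the Lorenz system and the evaluation point are illustrative, not taken from the paper:

```python
import numpy as np

# Jacobian of the Lorenz system at state (x, y, z), standard parameters.
def lorenz_jac(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [x, y, -beta]])

J = lorenz_jac(1.0, 1.0, 25.0)

# Error-dynamics matrix A = J - k*I for diagonal coupling; the largest
# real part of its eigenvalues gives the local growth/decay rate of delta-y.
rates = {}
for k in (0.0, 30.0):
    A = J - k * np.eye(3)
    rates[k] = float(np.max(np.linalg.eigvals(A).real))
    print(f"k = {k:4.1f}: largest Re(eig(A)) = {rates[k]:+.2f}")
```

With $k=0$ the point is locally unstable (a positive eigenvalue), while the $k=30$ shift pushes every real part negative, so infinitesimal deviations $\delta\mathbf{y}$ decay locally, consistent with the convergence condition stated above.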
Summary: Training neural ODE models on long sequences of data from a dynamical system is difficult. The authors argue empirically that this is due to a poorly conditioned loss landscape leading to difficulties in optimization. To rectify this difficulty the authors use tools from the literature on synchronization (which gives conditions for when two dynamical systems will converge based on an error-correction coupling term). They provide a homotopy between the synchronized dynamical system and the original one and argue that the optimization dynamics will be nice on the synchronized system and progressively less nice along the homotopy to the original system. However, by gradually training the parameters and moving along the homotopy the method “ratchets up the difficulty” in a way that makes the ultimate optimization problem easier. They demonstrate the success of the system on a collection of commonly used dynamical systems taken from the neural ODE literature when compared to vanilla SGD and a multiple-shooting algorithm that handles the longer time horizons by breaking them up. The authors show that their method achieves better accuracy, robustness, and convergence than the baselines on these problems. Strengths: The work combines the idea of synchronization with the standard Neural ODE training to improve the performance on long-horizon data. Since “long-horizon” data is not that long when training Neural ODEs, this is a welcome development. The idea of synchronization is intuitive and theoretically justified as well as empirically demonstrated to be useful through the loss landscape Hessian. The empirical results are not broad but they are deep; the authors study many variations of the problem on a small number of dynamical systems (3) against a small number of baselines (2). However, the method performs well under noise, sparsity, and long data on these, with better test performance achieved more quickly.
Weaknesses: The theoretical explanation is a little bit lacking. I would love to know more about the properties of the dynamical system as it is more or less synchronized. If there were traces of such in an image I think it would be intuitive. It took me a while to work out that as $\lambda \rightarrow \infty$ any setting of the parameters is optimal, and I think it might be good to explicitly say / visualize that. Any more detailed theoretical statements about this method would also be welcomed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How is K chosen? I couldn’t find it in the main paper, and the idea is intuitive even if $K=I$, but I’d like to know. Would this method work for control systems as well? Seems like the counterfactual nature of the problem could be interesting. Confidence: 4: You are confident in your assessment, but not absolutely certain.
Rebuttal 1: Rebuttal: We thank the reviewer for the interest in our paper and the positive comments. We have listed the answers to the reviewer's questions below, and have grouped the answers to some of the questions that occurred more commonly across multiple reviewers in the global response above. > The theoretical explanation is a little bit lacking. I would love to know more about the properties of the dynamical system as it is more or less synchronized. If there were traces of such in an image I think it would be intuitive. We had originally tried to illustrate the effect of coupling strength on the dynamics trajectories through Figure 10 in Section B of the Appendix (please look at the appendix.pdf file in the zip file of our Supplementary materials as it is the updated version compared to the one at the end of the main paper pdf). In the figure, we perturb the coefficients of the Lotka-Volterra and Lorenz systems, couple them to a reference trajectory, then plot the resulting dynamics trajectory as well as the loss landscape as a function of coupling strength. We hope this can provide some additional information about our work. We would also love to add any additional visualizations the reviewer would find useful and look forward to the reviewer’s comments. > It took me a while to work out that as $\lambda \rightarrow \infty$ any setting of the parameters is optimal and I think it might be good to explicitly say / visualize that. In our paper, we wanted to convey this point through the upper right panel of Figure 2, where we drew the loss landscape for increasing values of the coupling strength. We will update this part to explicitly mention that the flat landscape for large values of $k$ means that any setting of the parameters is equally optimal. Also, to better visualize the flatness of the landscape, we will adjust the y-axis limits as well. > How is K chosen? I couldn’t find it in the main paper and the idea is intuitive even if K=I but I’d like to know?
We apologize for the confusion. In our work, the matrix $\bf K$ and the scalar algorithm hyperparameter $k$ are related to each other by ${\bf K}=k\bf I$, where $\bf I$ is the identity matrix. We would like to expound on this by clarifying that this is not the only way to construct the coupling matrix – any choice of $\bf K$ can achieve synchronization provided that the matrix $\frac{\partial \bf U}{\partial \bf u}-\lambda\bf K$ is always negative definite along the trajectory ${\bf u}(t)$. However, naively constructing $\bf K$ in a different manner results in additional hyperparameters corresponding to each of the independent elements of $\bf K$. Therefore, we decided to reduce $\bf K$ to a single scalar value to minimize the number of hyperparameters in our algorithm. That being said, we do think that exploring different parametrizations of $\bf K$ is an important future direction of our research. In systems where the orders of magnitude of the different degrees of freedom differ significantly, it can be more intuitive to use ${\bf K}=\mathrm{diag}(k_1, k_2, \dots)$. Also, in the chaos synchronization literature, [1] reports that ${\bf K} = {\bf B}{\bf V}^T$, where ${\bf B}, {\bf V}$ are constant vectors, can be tuned to achieve synchronization even when the parameters of the coupled systems differ greatly. > Would this method work for control systems as well? Seems like the counterfactual nature of the problem could be interesting. We thank the reviewer for the very interesting discussion point. We believe that our method can work on control systems, but the added complexity of the problem will require additional experimentation and theoretical studies to accurately confirm our hypothesis.
To elaborate, applying our homotopy method to control systems is straightforward, since everything can be kept the same with only the NeuralODE slightly modified to accept the control signal ${\bf v}(t)$: $$ \frac{d\mathbf{u}}{dt}=\mathbf{U}(t,\mathbf{u},\mathbf{v};\theta)-\mathbf{K}(\mathbf{u}-\hat{\mathbf{u}}) $$ where $\hat{\mathbf{u}}$ is the interpolant constructed from the measured time series data ${\bf u}_{data}(t_i)$. As for whether this scheme will actually result in improved training, we can take some hints from the synchronization literature. In [2], synchronization and homotopy optimization were applied to estimate the coefficients of ODE systems that contain time-dependent forcing terms. Since such forcing terms can be viewed as control terms, this study shows that it is indeed possible to use the homotopy method to discover control systems and outlines the possibility of using our homotopy method to train NeuralODEs on such systems as well. [1] G. A. Johnson et al., *Phys. Rev. Lett.* **80**, 18 (1998). [2] R. Manikantan et al., *Mathematics* **8**, 7 (2020).
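The effect of the coupling term $-\mathbf{K}(\mathbf{u}-\hat{\mathbf{u}})$ discussed above can be illustrated with a minimal toy sketch (our own illustrative setup, not the paper's code): a Lotka-Volterra model with slightly perturbed parameters is coupled, via ${\bf K}=k\bf I$, to a cubic-spline interpolant of "data" generated from the true system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Lotka-Volterra vector field; all parameter values below are illustrative.
def lv(t, u, a, b, c, d):
    x, y = u
    return [a * x - b * x * y, -c * y + d * x * y]

# Generate "measured data" from the true system and build the interpolant u_hat.
t_data = np.linspace(0.0, 10.0, 201)
data_sol = solve_ivp(lv, (0.0, 10.0), [1.0, 1.0], args=(1.5, 1.0, 3.0, 1.0),
                     t_eval=t_data, rtol=1e-9, atol=1e-9)
u_hat = CubicSpline(t_data, data_sol.y.T)

# Model with perturbed parameters, coupled to the data:
# du/dt = F(u; wrong params) - k * (u - u_hat(t)).
def coupled(t, u, k):
    return np.asarray(lv(t, u, 1.4, 1.1, 2.8, 0.9)) - k * (u - u_hat(t))

errs = {}
for k in (0.0, 20.0):
    out = solve_ivp(coupled, (0.0, 10.0), [0.5, 1.5], args=(k,),
                    t_eval=t_data, rtol=1e-9, atol=1e-9)
    errs[k] = float(np.max(np.abs(out.y - data_sol.y)))
    print(f"k = {k:4.1f}: max |u - u_data| = {errs[k]:.3f}")
```

With $k=0$ the wrong-parameter model drifts away from the data, while $k=20$ keeps its trajectory pinned near the interpolant despite the wrong parameters and initial condition, which is the mechanism behind the smoothed loss landscape during homotopy training.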
Summary: The authors present a new method for training Neural Ordinary Differential Equations (NeuralODEs), a modeling approach that merges neural networks with the paradigm of differential equations from physical sciences. While NeuralODEs offer significant potential for extracting dynamic laws from time series data, they often suffer from long training times and subpar results. In this paper, the authors propose a training method built on synchronization and homotopy optimization. Their results demonstrate that the new method achieves competitive or even better training loss, often requiring fewer epochs compared to traditional model-agnostic techniques, and also exhibits better extrapolation capabilities. Strengths: 1. The paper addresses a notable challenge in the field of NeuralODEs: the long training times and subpar results. Their proposed solution is both innovative and logical, introducing a novel concept in the NeuralODE literature. 2. The authors demonstrate the advantages of their method convincingly, showing that it improves the training loss and requires fewer epochs for training. Weaknesses: 1. The method's extension to high-dimensional systems or partially observed states is yet to be explored, limiting its current applicability. 2. The paper could have benefited from more real-world application examples, demonstrating the utility of the method beyond the context of benchmark experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the efficacy of homotopy and multiple-shooting methods change with the size of the NeuralODE model? Is it possible to find a balance between model size and training method for optimal performance? 2. What are the effects of data sparsity and noise on the performance of different training methods for NeuralODEs, particularly the homotopy training method? 3. How can this methodology be extended to high-dimensional systems or systems with partially observed states? 4. 
What are the practical implications and potential applications of this training method in various industries or scientific disciplines? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper mainly uses synthetic datasets for testing the performance of training algorithms. The findings might not generalize to real-world datasets, which can exhibit unique complexities and noise structures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments as well as the detailed questions. We have listed the responses to the questions below and grouped the responses to some of the more common questions in the global comment above. > How does the efficacy of homotopy and multiple-shooting methods change with the size of the NeuralODE model? Is it possible to find a balance between model size and training method for optimal performance? While more experiments comparing the two methods are needed for a final confirmation, we found in our experiments that the relative efficacy of the two methods was not very sensitive to the model size. Both methods returned orders-of-magnitude improvements compared to vanilla training, albeit with the homotopy method resulting in better extrapolation performance than the multiple-shooting method. Continuing the discussion, however, we do think there is a different factor that can change the relative efficacy of the two methods - the length of the training data. This is because the multiple-shooting method can be implemented to compute all of the shooting segments in parallel in time [1] - resulting in a time complexity proportional to the length of one segment - whereas the homotopy method scales with the length of the total data. Therefore, for short to moderately long time series data, the homotopy training method tends to be more effective than multiple shooting: even though a single epoch of the homotopy method can take longer than one of multiple shooting, its faster convergence and improved extrapolation quality make homotopy the more attractive option. For very long sequences, however, the multiple-shooting method can be more economical as its reduced time complexity starts to outshine the improved convergence speed of the homotopy method. This is a very interesting direction of research that we would like to pursue in the future.
Another potentially interesting direction is to use the homotopy method and the multiple-shooting method at the same time - that is, subdivide the data into segments and apply synchronization to each - as the two frameworks are compatible with each other. This has the potential to reap the benefits of both methods at the cost of an increased number of hyperparameters (the sum of the hyperparameters of the two methods). > What are the effects of data sparsity and noise on the performance of different training methods for NeuralODEs, particularly the homotopy training method? Thank you for your question. Regarding the performances of the different training methods with respect to data sparsity and noise, we would like to refer the reviewer to the last paragraph of Section 7.1 (corresponding to the third and fourth columns of Figure 3) and Section G.3 in the appendix, with emphasis on Figures 17-20 (please look at the appendix.pdf file in the zip file of our Supplementary materials as it is the updated version compared to the one at the end of the main paper pdf). First, regarding noise, we find that the vanilla method is very sensitive, with the model predictions degrading rapidly with increasing noise. In comparison, both the homotopy and the multiple-shooting methods are much more robust, although for very large noise amplitudes our homotopy method prevails. An interesting point here is that large amounts of noise can introduce high-frequency artifacts into the cubic spline interpolant used in homotopy training (Figure 20, rightmost column), but the corresponding training results (Figure 19, rightmost column) are not affected by these artifacts. As for data sparsity, we find that the multiple-shooting method suffers the most as data sparsity increases. This is intuitive: because the multiple-shooting method subdivides the time series data, it works well in the large, dense data regime where each shooting segment is given enough training data to elucidate the system dynamics.
Another unexpected phenomenon we observed is that the vanilla method tends to work better as the data sparsity increases. This hints at another potential training improvement strategy where one first trains a NeuralODE with an undersampled version of the data, then subsequently trains the model on denser and denser versions of the data to arrive at the final training result. Finally, we find that our homotopy method is quite robust to data sparsity - model predictions do degrade as data sparsity is increased, but the trained model predictions remain competitive with those from vanilla training. As the homotopy method produces such results at an accelerated training time, we believe the homotopy method is still better in this situation. > What are the practical implications and potential applications of this training method in various industries or scientific disciplines? Our homotopy training method is valuable in multiple scientific disciplines where the goal is to discover governing equations underlying data or to create a surrogate dynamics model that provides reliable forecasts of the data. One example is [2], where the authors introduced a gray-box model to predict the temperature of the ocean as a function of depth. The neural network component in this hybrid model serves to account for the missing physics that are not captured by traditional physics-based modeling. To obtain a proper surrogate model, an effective training method is necessary, and this is where our homotopy method shines. In particular, our results on the gray-box models for the Lotka-Volterra system (main paper, Figure 4) show that even though significant prior knowledge of the governing equation was added to the NeuralODE model, vanilla training was not able to fully capitalize on this and instead returned models that performed significantly worse than their black-box counterparts. Our homotopy method can help resolve this problem, thus aiding in creating proper surrogate models. [1] M.
Stefano et al., Differentiable Multiple Shooting Layers, *NeurIPS* **34** (2021). [2] A. Ramadhan et al., *arXiv:2010.12559* (2023). --- Rebuttal Comment 1.1: Comment: The authors have substantially addressed most of my concerns. I will keep my score.
Summary: The paper proposes a homotopy-based training method for NeuralODE models, in particular for cases with long sequences of training data. Comprehensive experimental results demonstrate the effectiveness of the proposed method. Strengths: 1. The proposed method is based on the homotopy method, a classic global optimization method with a sound theoretical foundation. 2. Comprehensive experimental results. Good results when compared with the baseline methods. 3. The paper explains key concepts and results well. Weaknesses: 1. The argument on the loss function landscape lacks theoretical insights. The connection is established by empirical results. 2. Although the overall idea of using homotopy is good, some details are not properly addressed. See the questions and limitation part for more details. 3. A few key references related to homotopy methods are not cited. The references by JH He really put homotopy methods in the mainstream of scientific computing: https://scholar.google.com/citations?user=tzM7c2cAAAAJ&hl=en Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. I question the statement around Line 196, on the usage of cubic splines and its potential consequences. Cubic spline is a relatively high-order method that may be inaccurate when the underlying dynamic system is **stiff** (or there are widely spread time constants). Will the cubic spline interpolation be able to handle responses from _stiff_ systems? 2. I question the convergence of the homotopy method when a constant $\Delta \lambda$ is used. The introduction of homotopy to the original ODE effectively increases the order of the ODE by 1, and transforms it into a DAE. The authors should address whether there is the potential of non-convergence, and when it happens, how to address it. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No negative societal impact perceived. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and the very interesting questions. We list our answers below, and have grouped the common questions in the global comment. > A few key references related to homotopy methods are not cited. The references by JH He really put homotopy methods in the mainstream of scientific computing: https://scholar.google.com/citations?user=tzM7c2cAAAAJ&hl=en Thank you for kindly pointing us to key references regarding the homotopy method. We will update the manuscript to include these references in the related works section. > I question the statement around Line 196, on the usage of cubic splines and its potential consequences. Cubic spline is a relatively high-order method that may be inaccurate when the underlying dynamic system is **stiff** (or there are widely spread time constants). Will the cubic spline interpolation be able to handle responses from *stiff* systems? We thank the reviewer for bringing up an interesting discussion point. To address the reviewer’s comment, we created simulation data from Robertson’s chemical reaction system [1] and constructed a cubic spline interpolant from it. The results are displayed in Figure 2 of the pdf linked in the global comment. They show that in this case, cubic spline interpolation does return satisfactory results, except for a slight ripple after the first peak in the y variable. Still, we do agree that on some time series datasets, such as those from more pathological stiff systems or data with large amounts of noise, cubic spline interpolation may return sub-par results. However, our experience seems to indicate that our training method is relatively robust to the quality of the interpolant.
To support this point, we would like to guide the reviewer to Figure 19 and Figure 20 of our appendix (please look at the appendix.pdf file in the zip file of our Supplementary materials as it is the updated version compared to the one at the end of the main paper pdf), where we plot trained NeuralODE predictions, as well as the cubic spline interpolants used in the training, for increasing noise in the data. We find that when the noise amplitude becomes large, the cubic spline interpolant displays high-frequency artifacts (Figure 20, rightmost panel). However, the corresponding model predictions (Figure 19, rightmost panel) show no signs of this artifact carrying over, and still outperform all other baseline methods. > I question the convergence of the homotopy method when a constant $\Delta \lambda$ is used. The introduction of homotopy to the original ODE effectively increases the order of the ODE by 1, and transforms it into a DAE. The authors should address whether there is the potential of non-convergence, and when it happens, how to address it. We thank the reviewer for bringing up another important point. We agree with the reviewer that our homotopy training method does not guarantee convergence to the global minimum. The reason is that the homotopy optimization method, while effective, is still a local optimization method. One possible resolution to this problem is to use the HOPE algorithm from [2], which is a combination of homotopy optimization and particle swarm optimization. By employing multiple "particles" to explore the parameter space, the algorithm has a much higher chance of converging to the global optimum at the cost of increased computation. Finally, regarding the reviewer’s comment about the homotopy turning the original ODE into a DAE of one order higher, we are less familiar with DAEs and are unsure if this is indeed the case.
The reason behind our confusion is that in the original ODE, the states evolve with time $t$, whereas the homotopy parameter changes with the training epochs. Therefore, we are not sure if our situation can be cast into the typical form of DAEs, given by $$ \frac{d\mathbf{u}}{dt}=\mathbf{f}(t, \mathbf{u}, \mathbf{v}),\quad 0=\mathbf{g}(t, \mathbf{u}, \mathbf{v}). $$ We would love to discuss this point further and learn from the reviewer. [1] H. H. Robertson, The solution of a set of reaction rate equations, *Numerical Analysis: An Introduction*, pp. 178–182, Academic Press, London (1966). [2] D. M. Dunlavy and D. P. O'Leary, *Sandia Reports* SAND2005-7495 (2005). --- Rebuttal Comment 1.1: Title: Cubic Spline and others Comment: Thanks to the authors for the responses and extra experiments. - Stiffness I am glad the authors are exploring the Robertson equation in the scipy package. However, in my view it is not the most extreme case. A good example can be found in the MATLAB discussion of stiff ODEs, such as the stiff van der Pol equation with $\mu=1000$. I was wondering if the cubic spline can handle the extremely sharp transition around $t=750$. (Search for *matlab*, *solve-stiff-odes* to find the webpage.) - Convergence I understand the authors' argument. I will come back to this issue again before the discussion period ends. --- Reply to Comment 1.1.1: Title: Additional results on the van der Pol equation Comment: We thank the reviewer for kindly pointing us to the van der Pol equation, as well as the relevant references. We have replicated the code in the MATLAB documentation in Python and have used our cubic spline code to generate the interpolants.
We present the results in the following anonymous link: https://drive.google.com/file/d/1xOZ3UnpUKmkB_X7ICuB226pgAprfTHK4/view?usp=sharing The results show that the cubic spline does incur some Gibbs-phenomenon-like high-frequency artifacts at the waveform edges, but the severity of these artifacts is not that large, as witnessed by the zoomed-in view in the right panel of the figure attached in the link above. We would also like to mention our noise studies on the Lotka-Volterra system (Figures 19 and 20 of our appendix: please look at the appendix.pdf file in the zip file of our Supplementary materials instead of the one at the end of the main paper pdf), where we found that oscillatory artifacts in the cubic spline do not carry over to the trained model predictions. We are happy to continue the discussion with the reviewer.
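The stiff-system interpolation check discussed in this thread can be reproduced with a short self-contained sketch (our own setup, not the authors' code): sample Robertson's stiff system at log-spaced times, fit a cubic spline, and measure its error between the sample points against a reference solve.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Robertson's stiff chemical kinetics benchmark.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
            3e7 * y2 ** 2]

# Log-spaced "measurement" times resolve both the fast transient and the slow decay.
t_data = np.logspace(-5, 4, 200)
sol = solve_ivp(robertson, (1e-6, 1e4), [1.0, 0.0, 0.0], method="Radau",
                t_eval=t_data, rtol=1e-8, atol=1e-12)
spline = CubicSpline(sol.t, sol.y.T)  # vector-valued interpolant u_hat(t)

# Evaluate the interpolant at geometric midpoints and compare to a reference solve.
t_mid = np.sqrt(t_data[:-1] * t_data[1:])
ref = solve_ivp(robertson, (1e-6, 1e4), [1.0, 0.0, 0.0], method="Radau",
                t_eval=t_mid, rtol=1e-8, atol=1e-12)
max_err = float(np.max(np.abs(spline(t_mid) - ref.y.T)))
print(f"max between-node interpolation error: {max_err:.2e}")
```

With sampling adapted to the dynamics, the interpolant stays accurate between nodes here; the failure mode the reviewer raises appears when sharp transitions fall between sample points, as in the van der Pol experiment above.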
Rebuttal 1: Rebuttal: 1. Can our algorithm be applied to potentially partially observed high-dimensional data? We believe that our algorithm can be applied to higher-dimensional problems, provided that a couple of challenges due to the increased dimensionality are addressed. Depending on the nature of the data, we believe there are two possible strategies. The first strategy is to use a latent NeuralODE model, where a NeuralODE is used to model the latent state dynamics ${\bf z}(t)$ via $\frac{d{\bf z}}{dt}={\bf U}_1(t, {\bf z};{\bf\theta}_1)$, with the latent state mapped to the observables ${\bf u}(t)$ by ${\bf u}(t)={\bf U}_2({\bf z}(t); {\bf\theta}_2)$. Depending on the available prior knowledge, ${\bf U}_2$ can either be a fixed equation, a parametrized equation, or a neural network. Coupling between the dynamics and the data can be achieved by the following augmentation: $\frac{d{\bf z}}{dt}={\bf U}_1(t, {\bf z};{\bf\theta}_1)-{\bf K}({\bf u}-{\bf \hat u})$, where $\bf \hat u$ is the interpolated version of the observed data. This type of coupling has been reported in the ODE synchronization literature, an example being [1]. We expect this scheme to perform well in the NeuralODE setting too, provided that the observable $\bf u$ contains sufficient information about the latent state $\bf z$. The second strategy to tackle observations from high-dimensional systems is to perform a phase space reconstruction using the observable time series, then train a simple NeuralODE on the reconstructed phase space data. In this case, our training method remains the same since we have access to all of the states of the reconstructed phase space. This idea was explored in [2], where the authors used temperature time series data from a weather station in Jena, Germany to train recurrent neural networks.
As temperature corresponds to a scalar observable from an unknown high dimensional dynamical system, the data was detrended, denoised, then subjected to phase space reconstruction using time delay embedding. 2. Training on real world data We thank the reviewer for the suggestion. To address the reviewer’s concerns and to continue the discussion in our previous response, we trained a NeuralODE on a sub-sampled version of this phase space reconstructed temperature dataset (to reduce computational burden) using our homotopy-based method. We present the results in Figure 1 of the attached pdf. We find that our model can predict the reconstructed dynamics well in the interpolation interval, but fails in the extrapolation regime. This is likely due to an insufficient portion of the high dimensional phase space being sampled by the training data, and should be improved with additional training data. Still, this result shows the feasibility of using our method on real world data, and we plan to continue exploring in this area. 3. Relationship between K and k We apologize for the confusion. In our work, the matrix $\bf K$ and the scalar algorithm hyperparameter $k$ are related to each other by ${\bf K}=k\bf I$, where $\bf I$ is the identity matrix. 4. Pdf: /pdf/8cb096dff522ff446a637fa39c9166b6b690fbef.pdf
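To make the coupling augmentation above concrete, here is a minimal self-contained sketch (all names are illustrative: a harmonic oscillator stands in for the trained NeuralODE vector field, a forward-Euler step for the ODE solver, and an analytic `interp_u` for the cubic-spline interpolant of the data):

```python
import numpy as np

# Toy stand-in for the NeuralODE vector field U_1(t, z; theta_1):
# a simple harmonic oscillator whose solution the "data" follows.
def model_rhs(t, z):
    return np.array([z[1], -z[0]])

# Analytic stand-in for the cubic-spline interpolant u_hat(t) of the
# observed data (in practice this is built once from the time series).
def interp_u(t):
    return np.array([np.cos(t), -np.sin(t)])

def coupled_euler(z0, t_grid, k=1.0):
    """Integrate dz/dt = U_1(t, z) - K (z - u_hat(t)) with K = k * I.

    The observable is taken to be the full state here, so the coupling
    term nudges the trajectory toward the interpolated data.
    """
    K = k * np.eye(len(z0))
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dz = model_rhs(t0, z) - K @ (z - interp_u(t0))
        z = z + (t1 - t0) * dz
        traj.append(z.copy())
    return np.array(traj)

t = np.linspace(0.0, 5.0, 501)
# Even from a wrong initial condition, a larger coupling strength k
# pulls the trajectory onto the data more strongly.
traj = coupled_euler([2.0, 0.0], t, k=5.0)
```

Setting `k=0` recovers the uncoupled (vanilla) integration, while increasing `k` draws the trajectory toward the interpolated data even from a poor initial state.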
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a new training method for Neural Ordinary Differential Equations (NeuralODEs) that aims to improve their performance in extracting dynamical laws from time series data. The proposed method is based on synchronization and homotopy optimization, which does not require changes to the model architecture. The authors demonstrate that their method achieves competitive or better training loss while often requiring less than half the number of training epochs compared to other techniques. Furthermore, models trained with their method display better extrapolation capabilities. Strengths: [+] The paper is well-organized and clearly written, which makes it easy to follow and understand the proposed method. The introduction to the relevant background is sufficient, and the presentation of the experimental part is well-structured. [+] The proposed method is well motivated, and some visualizations play a significant role in understanding the problems with traditional methods and the performance of the newly proposed method. [+] The newly proposed method is very easy to integrate with models. In fact, the new method does not change the model but only changes the training process. It is simple to use and has excellent results. [-] Considering that there are five main hyperparameters, if there are targeted sensitivity experiments in the experiments to demonstrate how the selection of hyperparameters affects the experimental results to some extent, the experimental results might be more convincing. [-] The few systems involved in the experimental part are low-dimensional ODE systems. Can the proposed algorithm be applied to higher-dimensional problems? What new challenges might be encountered? [?] What is the relationship between the K in the algorithm framework and the hyperparameters: Coupling strength (k)? [?] It would be better if there is an analysis of computational complexity or a comparison of computation time with the baseline algorithm. 
Weaknesses: see above Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and suggestions. We have written our responses to some of the common questions in the global comment above, and have listed responses to the rest of the reviewer's questions below. > Considering that there are five main hyperparameters, if there are targeted sensitivity experiments in the experiments to demonstrate how the selection of hyperparameters affects the experimental results to some extent, the experimental results might be more convincing. While we agree with the reviewer, due to limited computational resources, we were only able to sweep a subset of the full hyperparameter space while running our experiments. Still, to provide some more information, we have included training curves for our preliminary runs where we individually varied some of the hyperparameters in Figure 3 of the pdf attached in the global comments. In the case of the homotopy parameter decrement ratio $\kappa$ and the number of homotopy steps, a suboptimal choice causes the train loss to jump in the later stages of the optimization due to the loss landscape changing too drastically after a homotopy step. In the case of the learning rate, larger values cause the train loss to decrease rapidly in the initial stages, but are also more likely to get stuck in a bad local minimum later on during training. For the coupling strength, larger values cause the training loss to start from a smaller value and facilitate stable optimization. However, using too large a value causes the coupled train loss to behave completely independently of the true mean squared error due to the insensitivity of the train loss to the model parameters. > It would be better if there is an analysis of computational complexity or a comparison of computation time with the baseline algorithm. First, regarding the computational complexity, our homotopy algorithm has almost identical complexity to the vanilla training of NeuralODEs. 
This is because our algorithm only changes the differential equation to be solved (by adding a coupling term to the model). The major factors in computational complexity are how the differential equation is solved and how the gradients are calculated (direct backpropagation or adjoint method-based techniques), which were kept identical for both the homotopy and vanilla methods in our experiments. One difference between homotopy and vanilla training arises from the calculation of the cubic spline interpolant. However, this is calculated only once prior to training, and the resulting interpolations are cached for use during training. Therefore, this only adds a negligible constant overhead to the total algorithm. While this was not a direction explored in our paper, in the case of multiple shooting, the computational complexity can be reduced if one follows the approach of [1]. In this implementation, all of the multiple shooting segments are solved in parallel, bringing down the computational complexity by a factor equal to the number of multiple shooting segments used. However, we would like to clarify that this does not always guarantee a shorter training time, as there is a competition between each epoch of the multiple shooting method taking a shorter time and our homotopy method requiring far fewer total epochs to converge. The use of adaptive solvers (which we describe below) further complicates this comparison, and further investigation is required for a final verdict. Regarding the computation time, we want to preface this point by saying that we did not plan for such an analysis at the time of our experiments, and thus our implementations are unfortunately not streamlined for speed. In the case of the homotopy method, some of the cubic spline coefficients were recomputed each epoch, and our multiple-shooting implementation was not parallelized in time. 
Furthermore, we had used multiple machines to run some of the longer experiments in parallel, which made it difficult to provide a comprehensive computation time benchmark. We apologize for this inconvenience and will improve on this point in future works. With this, we present the computation time benchmarks for the Lotka-Volterra training results in Table 1 of the attached pdf in the global comment. We see that while our homotopy method does take a longer time per epoch than the other methods, the time to reach the best epoch is much shorter due to the much smaller number of epochs required to train the model. A point worth stressing is that since we used 5 segments for the multiple shooting training, the expected speed-up from parallelizing the multiple shooting algorithm is about 5x. However, our homotopy method takes less than this time to converge, indicating that our method is competitive even with an optimized, parallelized version of the multiple-shooting algorithm. Finally, we want to comment on one factor, other than the algorithm time complexity, that can greatly affect the computation time. This is the use of the adaptive ODE solver for solving the NeuralODE. As adaptive solvers evaluate the ODE a variable number of times in order to reach given tolerance values, a method that has a better per-epoch time complexity can show worse computation times if the solver evaluates the ODE more times for that method. This number of function evaluations is itself dependent on the form of the differential equation - i.e. the parameters of the NeuralODE. Therefore, a training method that guides the NeuralODE through a more favorable region of the parameter space will result in fewer function evaluations throughout training, which can reduce the total training time even if the method’s per-epoch time complexity is worse than its competitors'. [1] S. Massaroli et al., Differentiable Multiple Shooting Layers. NeurIPS 34 (2021). 
--- Rebuttal Comment 1.1: Comment: Thanks for the author's response. I will keep my positive score.
null
null
null
null
null
null
Efficient Learning of Linear Graph Neural Networks via Node Subsampling
Accept (poster)
Summary: The work tackles the problem of scaling GNNs used for regression tasks to large graphs by subsampling given nodes. The technique consists of first performing node sub-sampling to estimate the leverage scores of $AX$ and then performing leverage score sampling on $AX$. The authors show that this technique is a good spectral approximation to the full $AX$ computation, but avoids the $O(n^2d)$ matrix multiplication cost. Strengths: The paper tackles the interesting and practically useful problem of scaling GNNs to large graphs. The proposed technique has useful theoretical guarantees and as far as I am aware it is a novel approach. The approximation to the regression problem is interesting as it has useful spectral guarantees and could be of general interest from an algorithmic perspective outside of the GNN community. Weaknesses: There are a few weaknesses of the work, some of which the authors point out. The most glaring one is the fact that this work targets only linear GNNs. While analysing non-linear models is clearly more challenging, it would be interesting to have some comments or results for non-linear models with easier-to-study non-linearities (such as ReLU). I found following the different run-times a bit challenging. It may be useful to have a summary table that compares the various computational/memory complexities for the different techniques used in the experiments in Section 5. As this problem is a practical one, I believe Section 5 could be stronger. While the authors argue that the matrix multiplication is a bottleneck, modern GPUs are extremely optimized for exactly these sorts of operations, and as a consequence it may be the case that the full matrix multiplication is still faster than sampling. As such, I would be very interested to see some results on the runtimes of the various techniques. Further, it would be interesting to test regimes in which the graph is too large to fit in a GPU and sampling is absolutely necessary. 
It may also be useful to run experiments with non-linear GNNs using the proposed sampling technique. I also spotted some typos and formatting issues in the manuscript (line 60, line 95). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: While the work mentions linear GNNs, it seems to be the case that the work only considers one-hop linear GNNs. While a multi-layer linear GNN is still a linear one, do these results apply to multi-hop linear GNNs as well? It would be useful to have more details on the number of layers and exact model used in section 5 as well. Is there a reason in figure 2 (b) and (d) that MSE with uniformly sampled $AX$ is not included in the experiments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors acknowledge a main limitation that is the fact that this work only studies linear GNNs. A further limitation is that while the authors acknowledge the improved theoretical run-times of their technique, they do not give practical wall-clock runtimes for the experiments. While the technique may be theoretically faster, in practice many operations such as matrix operation are extremely optimized. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1 & W2. Lack of run-time comparison: Thank you for pointing out this issue. Please see the common response and the additional experiments in the attached pdf above. W3. Extension to nonlinear GCN: Please see the common response. W4. Some typos (line 60, line 95): Thanks for pointing out these typos. We fixed those. Q1. Extension to the more-than-two-layer case: We thank the reviewer for pointing out that we did not discuss this properly. Note that without non-linearities, considering a multi-layer network is equivalent to performing an additional matrix multiplication between $A$ and the output of the previous layer. Since our proposed scheme yields a spectral approximation guarantee for matrix multiplication, we believe that an analysis similar to that of Algorithm 1 should be possible for this extension to obtain a $(1+\epsilon)^{L}$-approximation guarantee, where $L$ denotes the number of layers. Since the approximation guarantee comes with high probability, a union bound could be used to obtain a high probability guarantee for the multi-layer case (since the order of error for each approximate matrix multiplication is the same). If the paper is accepted, we will include a discussion on this extension. Q2. Is there a reason in Figures 2 (b) and (d) that MSE with uniformly sampled AX is not included in the experiments? Thank you for pointing out the need for clarification. Figures 2b and 2d are magnified views of Figures 2a and 2c, respectively, and hence the uniformly sampled curve does not appear within the axis limits. L1. The authors acknowledge a main limitation which is the fact that this work only studies linear GNNs. A further limitation is that while the authors acknowledge the improved theoretical run-times of their technique, they do not give practical wall-clock runtimes for the experiments. While the technique may be theoretically faster, in practice many operations such as matrix operation are extremely optimized. 
We agree that the focus on linear GNNs is a limitation of this work, but we view our work as studying a specific regime where existing theoretical tools allow the derivation of theoretical guarantees. We also agree that practical wall-clock run-times for the experiments are important here. We have run additional experiments to provide wall-clock runtimes for the experiments and also empirical results on nonlinear GNNs. Please see the common response above and the attached pdf file for detailed experiment results. --- Rebuttal Comment 1.1: Comment: Thank you for the additional information and further experiments. I think the work is an interesting direction and I feel like the experiments with the wall-clock/memory time are indeed promising. The extension to multi-hop GNNs would definitely add to the analysis. I have increased my score accordingly. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their helpful discussion, feedback, and updated score. Please let us know if you have further questions.
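To make the two-step structure of the discussed sampling scheme concrete, here is a simplified numpy sketch (sizes, names, and the uniform approximate-product estimator in step 1 are illustrative simplifications, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m1, m2 = 500, 5, 100, 100

# Toy dense weighted adjacency, node features, and regression targets.
A = rng.random((n, n))
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def leverage_scores(M):
    """Row leverage scores of M via a thin QR factorization."""
    Q, _ = np.linalg.qr(M)
    return (Q ** 2).sum(axis=1)

# Step 1: estimate the leverage scores of AX from a uniformly
# subsampled approximate product, so the full O(n^2 d) multiplication
# is never formed. (The scaling keeps the product estimate unbiased;
# it does not change the scores.)
cols = rng.choice(n, size=m1, replace=False)
AX_est = (n / m1) * (A[:, cols] @ X[cols, :])
scores = leverage_scores(AX_est)

# Step 2: sample m2 rows of AX proportionally to the estimated scores
# and solve the reduced (reweighted) least-squares problem.
p = scores / scores.sum()
rows = rng.choice(n, size=m2, replace=True, p=p)
S = 1.0 / np.sqrt(m2 * p[rows])           # standard reweighting
AX_rows = (A[rows, :] @ X) * S[:, None]   # only m2 rows of AX are formed
w, *_ = np.linalg.lstsq(AX_rows, y[rows] * S, rcond=None)
```

Only `m1` columns of $A$ (step 1) and `m2` rows of $AX$ (step 2) are ever touched, which is the source of the claimed savings over the exact $O(n^2 d)$ product.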
Summary: This paper proposes a method to overcome the computational difficulties often encountered when using graph neural networks (GNNs) for large-scale datasets, by subsampling in the adjacency and feature matrices. By assuming a two-layer linear GNN for the regression problem and performing subsampling based on the leverage score, the authors show that the MSE of the reduced problem generated by their proposed method is bounded from above by the true MSE times $(1+\epsilon)$. Strengths: Originality & Significance: The proposed method basically follows the common idea in subsampling algorithms for linear regression in a standard setting; the same is true for the derivation of the upper bound of the reduced problem MSE. However, in the current problem, the regression coefficients are multiplied with the product of the adjacency matrix and the feature matrix, and the computation of the product itself is expensive and should be avoided. A new twist is introduced at this point: the proposed algorithm breaks the problem into two steps; in the first step the leverage score is approximately estimated by subsampling the rows of the adjacency matrix and the corresponding columns of the feature matrix; in the second step, the reduced adjacency matrix is generated according to the estimated leverage score. In this way, the direct computation of the product is avoided. I think this idea is interesting. Subsampling algorithms with theoretical guarantees are currently rare in this field, and this makes the proposed method valuable. The originality and significance of the proposed method are considered to be thus high. Quality: The explanation of the algorithm and the derivation of the theoretical guarantee are very clear. The numerical experiment is also convincing. The quality of the paper is thus high. Clarity: The presentation is nice and clear, except that the references are missing in the main manuscript (they are in the supplementary material). 
I think the explanations of the numerical experiment result should be more detailed, though the important parts are understandable. Some typos are also found. Although there are small flaws like these, the overall clarity is high. List of typos found: Line 95: discussio -> discussion Line 186: unnecessary space seems to exist before ``Algorithm''. Line 225: unnecessary space seems to exist before ``Algorithm''. Line 226: Appendix Appendix G -> Appendix G Line 245: 4039 -> 4039 nodes Line 248: 100),in -> 100), in Weaknesses: - The linear GNN is not employed in practice, and the extension to nonlinear GNN is desired. I think it is at least possible to apply the proposed algorithm to the nonlinear case, although there is no theoretical guarantee in that case. Some more discussion or experimental results in such a nonlinear case are welcome. - The extension to the more-than-two layer case is also nontrivial. Some more discussions are welcome. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Concerning to the experimental section, how about the computational time in practice? The proposed method comes from the computational issue and hence it is better to show the computational time/cost as the result. - Figures 2b and 2d are the magnified views of 2a and 2c, respectively. Is this right? If so, it is better to mention this point. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors address their research limitations well. I think there is no concern about the potential societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1. Extension to nonlinear GCN: We thank the reviewer for suggesting interesting directions for future work. We leave comments in the common response section and provide additional experimental results for nonlinear GCN in the attached pdf. W2. Extension to the more-than-two-layer case: We thank the reviewer for pointing out that we did not discuss this properly. Note that without non-linearities, considering a multi-layer network is equivalent to performing an additional matrix multiplication between $A$ and the output of the previous layer. Since our proposed scheme yields a spectral approximation guarantee for matrix multiplication, we believe that an analysis similar to that of Algorithm 1 should be possible for this extension to obtain a $(1+\epsilon)^{L}$-approximation guarantee, where $L$ denotes the number of layers. Since the approximation guarantee comes with high probability, a union bound could be used to obtain a high probability guarantee for the multi-layer case (since the order of error for each approximate matrix multiplication is the same). If the paper is accepted, we will include a discussion on this extension. Q1. Concerning to the experimental section, how about the computational time in practice? The proposed method comes from the computational issue and hence it is better to show the computational time/cost as the result. Please see the common response and additional run-time experiments described in the attached pdf. Q2. Figures 2b and 2d are the magnified views of 2a and 2c, respectively. Is this right? If so, it is better to mention this point. Yes, that is correct. We will add a sentence to mention that point. --- Rebuttal Comment 1.1: Comment: I thank the authors' responses and additional experiments. These completely address my issues and I am satisfied. I also have read the discussions between the other reviewers and the authors. 
Although there seem to be some objections against the simple problem setting by the authors, which is different from the practical settings where GNNs are applied, I think, in my opinion, the authors are providing reasonable responses against the criticisms from the reviewers, from the theoretical viewpoint. Thanks to these efforts, I think the paper increases its value. Accordingly, I would like to raise the score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your acknowledgment of our paper and the updated score. Based on the discussion and feedback, we will keep polishing the revision to a better version.
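The collapse of a multi-layer linear GCN into repeated multiplications by $A$ can be sketched in a few lines (toy shapes; `W1` and `W2` are illustrative weight matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 4
A = rng.random((n, n))             # weighted adjacency
X = rng.standard_normal((n, d))    # node features
W1 = rng.standard_normal((d, d))   # layer-1 weights
W2 = rng.standard_normal((d, 1))   # layer-2 weights

# A two-layer GCN with no nonlinearity between the layers...
H = A @ (A @ X @ W1) @ W2

# ...collapses to A^2 X (W1 W2): each extra layer is one more
# multiplication by A, which is why a per-multiplication spectral
# guarantee compounds to (1 + eps)^L over L layers.
H_collapsed = (A @ A) @ X @ (W1 @ W2)
```

By associativity the two expressions are the same product, so the only approximation incurred per layer is in the multiplication by $A$.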
Summary: The authors proposed a sampling method to train graph neural networks efficiently. However, the current implementation of graph neural networks is based on sparse matrix multiplication and thus the complexity is not O(n^2d). The authors only analyze the complexity theoretically without experimental support. Both major issues weaken the contribution. Strengths: 1. Theoretically, the proposed method is efficient compared with the dense implementation of graph neural networks. 2. The experiments are conducted on large-scale datasets with different variants of the method. Weaknesses: 1. The statement about the complexity of the graph neural network is not correct. The current implementation is based on sparse multiplication, so the complexity is not O(n^2d). 2. The computational time comparison is missing, which is essential for this work. 3. There is a lack of baselines; it only compares with its own variants. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the computational time of this approach compared with sparse matrix multiplication? 2. Besides MSE, how about applying this to real semi-supervised classification tasks? Will this lead to the same prediction accuracy? 3. Is it possible to use this for link prediction tasks when dropping edges for validation? Is there still any theory guarantee in this case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1. The statement about the complexity of the graph neural network is not correct. The current implementation is based on sparse multiplication, so the complexity is not O(n^2d). We thank the reviewer for pointing this out. The main motivation for our graph subsampling procedure is settings where the adjacency matrix is dense (with $\Theta(n^2)$ non-zero entries), and sparse multiplication would still require time $O(n^2d)$. The reviewer is correct that, whenever $A$ is sparse, sparse matrix multiplication could avoid the $O(n^2d)$ complexity. We will correct the paper to be clear on this point. W2. Lack of run-time comparison: We thank the reviewer for pointing out this issue. Please see the common response and the attached pdf file for further experiments. W3. Comparison with other baselines: Thanks for the suggestion. Please see the common response and the attached pdf file for details. Q1. What is the computational time of this approach compared with sparse matrix multiplication? We believe that comparing our proposed algorithm directly with sparse matrix multiplication would not be a fair comparison, since our algorithm includes a multiplication step after reducing the size of the graph adjacency matrix via leverage score sampling; hence our algorithm could potentially be combined with sparse matrix multiplication. For this reason, even when adopting sparse matrix multiplication, we would expect a tendency similar to that demonstrated in Table 1 in the attached pdf; that is, the computation time of our approach will be shorter than that of the regression using the exact multiplication of $AX$. Q2. Besides MSE, how about applying this to real semi-supervised classification tasks? Will this lead to the same prediction accuracy? We thank the reviewer for suggesting interesting directions for future work. 
Our proposed sampling scheme (based on approximating the leverage score) and the theoretical guarantee we made are tailored for the node-level regression task in the training phase. The reason we focused on this specific setting was so that we could take advantage of the theoretical results on leverage score sampling and linear regression in order to obtain theoretical results for GNN training using a subsampled graph. In particular, we are helped by the fact that in the linear GCN setting, the optimal solution for the linear regression has the closed form $((AX)^{\top}(AX))^{-1} (AX)^{\top} \mathbf{y}$, where $A$, $X$, and $\mathbf{y}$ denote the adjacency matrix, data matrix, and the labels respectively. For graph link prediction and classification tasks (under possibly semi-supervised settings), however, since the optimal solution cannot be expressed in such a closed form (due to non-linearities), our proposed algorithm does not extend to classification problems in a straightforward way, and similarly our theoretical guarantee may no longer hold. That being said, we agree that both node and edge classification problems are interesting. For future work, we want to study whether our same subsampling process can have any guarantee for a node classification task. For more general tasks, we believe that we need sampling strategies other than leverage score sampling, and hence we may need other tools for developing a theoretical guarantee for sampling algorithms. Q3. Is it possible to use this for link prediction tasks when dropping edges for validation? Is there still any theory guarantee in this case? That is an interesting question. As mentioned above, our proposed sampling scheme is a node subsampling scheme that selects some representative nodes based on leverage scores and uses all their edges, and is tailored for predicting labels (continuous values) of each node. 
It is not straightforward to apply our algorithm to predicting links between nodes nor to extend the theoretical results to that setting. However, standard GNN approaches to link prediction still rely on message passing on the graph, requiring the computation of $AX$, and could benefit from a subsampling technique similar to the one we proposed in order to reduce the computational costs. But since link prediction is a fundamentally different inference task, we will need to find an alternative to the leverage score sampling-based approach that we considered. This is an interesting direction for future work. --- Rebuttal Comment 1.1: Title: Thank the authors for the effort to address the questions Comment: Thank the authors for the effort to address the questions! But currently, I will hold my score: 1. The assumption is that the graph is dense, which is too strong and weakens its potential application in real cases since most graph structures are sparse. A densely connected graph means that most of the nodes are similar, and maybe the graph neural network is not even useful in such a case, e.g., consider the extreme case when the elements of the adjacency matrix are all-ones. 2. The authors didn't provide the wall time of the sparse multiplication. The primary concern is how dense the graph should be to make the proposed method useful. It is essential to know when is the proposed method faster than the sparse multiplication. 3. The current method only works for limited tasks/structures, further weakening the contribution. --- Reply to Comment 1.1.1: Title: Additional response to comment 1 Comment: C1. The assumption is that the graph is dense, which is too strong and maybe the graph neural network is not even useful in such a case, e.g., consider the extreme case when the elements of the adjacency matrix are all-ones.” Thank you for the further comments. First, we would like to highlight that the graph we consider is weighted. 
In other words, the $a_{ij}$ entries of the adjacency matrix are non-negative real values, and not restricted to $\{0,1\}$. This can be used to capture the strength of the connection between nodes $i$ and $j$. Depending on the weights, a dense matrix with different entry values is not necessarily similar to a matrix with all ones and may not correspond to a graph where most of the nodes are similar. An adjacency matrix with non-negative real values could represent, for example, a pairwise similarity matrix and, in such a case, one would expect a fairly dense matrix (even if many of the entries are close to zero but strictly positive). In addition, our definition of "dense" is just that the number of non-zero entries scales as $n^2$, but it could still be a small percentage of the edges. For example, an Erdos-Renyi graph with constant edge probability $p$ will have a number of edges that scales as $n^2$. Lastly, although our theoretical guarantee requires that the adjacency matrix is dense, we would like to highlight that our proposed sampling algorithm works reasonably well empirically, even for sparse graphs. Referring to Figure 2(c) and Figure 3(c) (in the supplementary material), the proposed algorithm outperforms the baselines that we considered for the ego-facebook dataset, where the graph is sparse. Hence, we believe that our algorithm may offer benefits in many real-world scenarios.
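The Erdos-Renyi point is easy to check empirically; here is a small numpy sketch (the helper `er_edge_count` is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def er_edge_count(n, p):
    """Sample an undirected Erdos-Renyi graph G(n, p); count its edges."""
    upper = rng.random((n, n)) < p
    return int(np.triu(upper, k=1).sum())

# With constant p (here 10%), the edge count grows as ~ p * n(n-1)/2,
# i.e. Theta(n^2) non-zero entries, even though only a small fraction
# of all possible edges is present.
densities = {n: er_edge_count(n, 0.1) / (n * (n - 1) / 2)
             for n in (100, 200, 400)}
```

The empirical density stays near the constant `p` as `n` grows, so the number of non-zero entries scales quadratically while the matrix remains mostly zero.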
Summary: The paper presents an efficient GNN training solution through node subsampling and leverage score sampling, which is proven to be efficient in learning a regression model with bounded entry access and running time. Strengths: * The proposed technique leads to a provably efficient approach for GNN training, with potentially many real applications. * Experimental study on real graph data validates the proposed technique on real-world graphs, showing significant improvement over baseline sampling strategies. Weaknesses: * It is interesting to see the proposed approach for GNN training, as the efficiency of GNNs on big graphs is a headache for many real-world problems. However, although they take different approaches, various GNN models use the idea of subsampling, and they are not well compared in this paper. For example, GraphSage performs neighbor subsampling and GraphSaint subsamples subgraphs from a big graph. The authors should compare the proposed approach with those different sampling approaches to show the pros and cons. * The experimental results show the significant efficiency of the proposed method. However, since it is for GNN training, evaluating only MSE draws some concerns, as if MSE were the only metric GNNs should be concerned about. If not, the authors should expand the experimental study for a comprehensive understanding/evaluation. * Compared to MSE, it is critical to see how much typical GNN machine learning tasks would be impacted by the subsampling. For example, graph link prediction, node classification, label prediction, etc. This paper does not include evaluation from those perspectives. * Although the big-O notation of complexity analysis is quite helpful, an experimental study of efficiency should be provided, showing whether the training time is also significantly reduced with comparable performance.
* It is a bit concerning that, for studying GNN efficiency, the authors selected two small graphs from OGB, although much bigger graphs exist. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1. Comparison with other baselines (e.g., GraphSage and GraphSaint) and the pros and cons: We thank the reviewer for the suggestion. Please see the comments in the common response section and the attached pdf for additional experiments. W2. MSE was the only evaluation criterion: Since we focused on a regression task, we believe that the MSE is a natural performance metric. Furthermore, our theoretical guarantee is in terms of the L2 error in the regression task, and the main goal of our experiments was to validate the theoretical guarantees. In future work, once we consider other learning tasks, such as node classification, different performance metrics may be needed. W3. Applicability of our proposed algorithm to graph link prediction and node classification: We thank the reviewer for suggesting interesting directions for future work. Our proposed sampling scheme (based on approximating the leverage score) and the theoretical guarantee we made are tailored for the node-level regression task in the training phase. The reason we focused on this specific setting was so that we could take advantage of the theoretical results on leverage score sampling and linear regression in order to obtain theoretical results for GNN training using a subsampled graph. In particular, we are helped by the fact that in the linear GCN setting, the optimal solution for the linear regression is given by the closed-form expression $((AX)^{\top}(AX))^{-1} (AX)^{\top} \mathbf{y}$, where $A$, $X$, and $\mathbf{y}$ denote the adjacency matrix, data matrix, and the labels respectively. For graph link prediction and classification tasks, however, since the optimal solution cannot be expressed in such a closed form (due to non-linearities), our proposed algorithm does not extend to classification problems in a straightforward way, and similarly our theoretical guarantee may no longer hold.
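For concreteness, the closed-form least-squares solution for the linear GCN mentioned above can be sketched on synthetic data (our illustration; the dimensions and random data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5

A = rng.random((n, n))           # dense weighted adjacency matrix
X = rng.standard_normal((n, d))  # node feature matrix
y = rng.standard_normal(n)       # node-level regression labels

M = A @ X  # aggregated features; computing this fully costs O(n^2 d)

# Normal-equations solution ((AX)^T (AX))^{-1} (AX)^T y
w_star = np.linalg.solve(M.T @ M, M.T @ y)

# Sanity check against a generic least-squares solver.
w_lstsq, *_ = np.linalg.lstsq(M, y, rcond=None)
assert np.allclose(w_star, w_lstsq)
```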
That being said, we agree that both node and edge classification problems are interesting. For future work, we want to study whether our same subsampling process can have any guarantee for a node classification task. For more general tasks, we believe that we need different sampling strategies other than leverage score sampling, and hence we may need other tools for developing a theoretical guarantee for sampling algorithms. W4. Lack of run-time comparison: We thank the reviewer for pointing out this issue. We leave comments in the common response section and provide additional experimental results in the attached pdf. W5. Experiments on two small graphs: We would like to first highlight that our main contribution lies in providing an efficient sampling algorithm, with a theoretical guarantee, for learning linear GNNs with a large number of nodes (please see the common response for details). We agree that we included experiments on rather small graphs, so we have run further experiments to provide wall-clock runtimes for the experiments and also empirical results on nonlinear GNNs for various sizes of graphs. We refer to the common response above and the attached pdf file for detailed experiment results. We will include additional experiments on large graphs in the main draft if the paper is accepted. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. The detailed response helps clarify some ambiguity in the original paper and also improves the soundness via the comparison with additional baselines. I will update the scores accordingly. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback and helpful discussion. Based on the discussion and feedback, we will keep polishing the revision to a better version. We are also happy to engage in further discussion if there are any remaining questions. We look forward to seeing an updated score, as you mentioned.
--- Reply to Comment 1.1.2: Title: Dear Reviewer S2HF Comment: Dear Reviewer S2HF, I hope this message finds you well. We deeply appreciate your insightful comments on our manuscript and your positive feedback on our response. We are also glad that the ambiguities have been clarified and the soundness of our theoretical result has been strengthened through our response and further experiments. We are making efforts to integrate our responses and further experimental results into a revision of our manuscript. We genuinely hope that our detailed responses illuminate the potential and significance of our research. If you believe our responses have addressed your concerns, would you kindly consider showing further support by re-evaluating the score? This is a friendly reminder that the score does not seem to have been updated yet (although you mentioned you would update it). Such an endorsement would greatly increase the chance of this paper being accepted. Warm regards and our deepest gratitude for your time and expertise.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. For weaknesses/questions raised by multiple reviewers, we provide common responses. For the remaining comments, we provide point-by-point responses below. We would like to first highlight what we see as the main contribution of our paper. Our paper provides an efficient sampling algorithm for learning linear GNNs with a large number of nodes with a theoretical guarantee. Specifically, for a two-layer linear GNN with adjacency matrix $A \in \mathbb{R}^{n \times n}$ and data matrix $X \in \mathbb{R}^{n \times d}$, we show that it is possible to learn the model accurately while observing only $O(nd \log(n))$ entries of $A$. This in turn yields a run-time of $O(nd^2\log n)$, avoiding the computation time $O(n^2d)$ of using the full matrix multiplication $AX$. This reduction in run-time complexity is particularly significant when the number of nodes $n$ is large and the feature dimension is much smaller compared to $n$ (i.e., $d \ll n$). We would also like to highlight that this result holds even when the graph adjacency matrix and the data matrix are dense in which case sparse matrix multiplication would not help much. Hence the proposed algorithm bears practical importance as well. While we view our main contribution to be of a theoretical flavor, we agree with several of the reviewers' comments requesting additional experimental validation, which we discuss next. W1. Lack of run-time comparison: We note that the experiments in the original submission focused on the tradeoff between the number of observed entries and regression accuracy. In the attached pdf, we provide the computation time comparison for the end-to-end training process for a regression task. In particular, we compared the wall-clock time of performing a regression task with full $AX$ computation and that with Algorithm 1 (our proposed algorithm) that uses partial observations of $A$ and $X$. 
For a fair comparison, we use the same regression solver and the same specification of 48 cores of an x86_64 processor with 503.74GB memory. The results (see Table 1 in the attached pdf) show that our proposed scheme requires orders of magnitude less wall-clock time than the regression with exact $AX$ computation when the graph is large. In particular, for the “ogbn-arxiv” dataset (from Open Graph Benchmark), having 17K nodes and 1M edges, our algorithm runs about 40x faster than the regression with the exact computation of $AX$. In addition to run-time, we expect our proposed approach to offer significant improvements in terms of memory usage. As a simple experiment on memory usage, in Table 2 in the attached pdf, we provide the peak memory usage comparison for the same datasets during the sampling and regression steps of the algorithm. In the best case, the proposed algorithm requires 1414× less memory than the algorithm using the exact computation of $AX$. W2. Comparison with other baselines (e.g., GraphSage and GraphSaint) and the pros and cons: Thanks for the suggestion. In the original submission, we focused on comparisons with two baselines (full computation of $AX$ and reduced computation via uniform sampling) to validate and highlight the conceptual contributions of our approach. But we agree that comparing our approach to popular methods such as GraphSage and GraphSaint is helpful. The pros and cons of our algorithm versus the other baselines (GraphSage and GraphSaint) are summarized below: 1. Both our algorithm and the baselines use some form of subsampling of graphs, motivated by the large computational/storage complexity of running GNNs with large-scale graphs. However, our algorithm comes with a theoretical guarantee of a $(1+\epsilon)$-approximation to the optimal MSE (for the specific setting of a two-layer linear GCN), while only observing $O(nd \log n)$ entries of $A$.
GraphSage and GraphSaint, to the best of our knowledge, do not provide similar theoretical guarantees. 2. To obtain the performance guarantee, our algorithm employs a more complicated sampling strategy based on leverage score sampling, which leads to a non-uniform subsampling of the graph. GraphSage uses uniform sampling for choosing the neighborhood of a node when performing feature aggregation. Furthermore, we demonstrate comparisons between our sampling approach and the sampling strategies utilized in (1) GraphSage and (2) GraphSaint (not the tools themselves), combined with the same regression solvers. Figure 2 in the attached file demonstrates the MSE of our proposed algorithm, the baselines we have considered, and GraphSage and GraphSaint. Figures 2(a) and 2(b) demonstrate that the sampling ideas from GraphSage and GraphSaint do not seem to outperform our proposed sampling technique (tailored for the graph-weighted linear regression). W3. Extension to nonlinear GCN: We thank the reviewers for suggesting interesting directions for future work. As illustrated in Figure 2(d) in the attached pdf, we have run additional experiments to demonstrate our proposed algorithm's efficacy on nonlinear GCNs. Using a ReLU activation function and one hidden layer, Figure 2(d) plots the mean squared error as a function of the observation budget, shown as a percentage of the number of observed nodes in the graph. Interestingly, we observe that running the non-linear GCN with our proposed sampling technique has similar performance to the one with exact $(AX, \mathbf{y})$, as the budget increases. Also, Algorithm 1 reduces the error by 60% relative to regression with uniformly sampled $(AX, \mathbf{y})$ when the budget is at 5%. Besides, comparing to Figure 2(c), we observe that although our proposed algorithm is designed to optimize the MSE on linear GCNs, it works even better in the non-linear case.
It would be interesting to see whether a slight modification of our node subsampling technique will lead us to prove a theoretical guarantee. Pdf: /pdf/db4eb987ba4449bbcb1eec86ed7a57bf10bad30d.pdf
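To make the sampling idea above concrete, here is a minimal sketch (ours, on synthetic data) of leverage-score row sampling for the regression on $M = AX$. For simplicity the sketch computes the leverage scores of $M$ exactly, whereas the paper's Algorithm 1 only approximates them from partial observations of $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, s = 2000, 5, 200  # nodes, feature dimension, sampled rows

A = rng.random((n, n))
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
M = A @ X
y = M @ w_true + 0.01 * rng.standard_normal(n)

# Exact leverage scores of M via a thin QR factorization (they sum to d).
Q, _ = np.linalg.qr(M)
lev = (Q ** 2).sum(axis=1)
probs = lev / lev.sum()

# Sample s rows with probability proportional to leverage, with the
# usual importance-sampling rescaling by 1 / sqrt(s * p_i).
idx = rng.choice(n, size=s, replace=True, p=probs)
scale = 1.0 / np.sqrt(s * probs[idx])
w_hat, *_ = np.linalg.lstsq(M[idx] * scale[:, None], y[idx] * scale, rcond=None)
```

With a sample size well above the feature dimension, as here, the subsampled solution `w_hat` lands close to the full-data solution.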
NeurIPS_2023_submissions_huggingface
2023
Continuous-time Analysis of Anchor Acceleration
Accept (poster)
Summary: The paper extends the continuous-time analysis of the anchor ODE in [55] to involve a more general choice of the coefficient $\beta(t)$ and shows that the choice in [55] is in some sense optimal. They then go on to show: - correspondence with the discrete anchoring schemes APPM/FEG/AEG - Anchor ODE converges to a minimal $\ell_2$-distance from initialization (like APPM [51]) - Tightness of continuous-time rates for anchor ODE - Convergence of APPM for a more general choice of stepsize than $1/(k+2)$. - faster convergence under $\mu$-strong monotonicity for anchor ODE by choosing $\beta(t)$ dependent on $\mu$. - convergence with adaptive stepsize for both anchor ODE and APPM (single-valued) that recovers the rates for monotone and strongly monotone+Lipschitz Strengths: Overall an impressive work. It seems interesting that one can adapt to both the smoothness parameter and the strong monotonicity modulus. The work seems fairly complete; I only leave a few comments/remarks below. Weaknesses: - My main concern is why we consider $\beta(t)$ other than $1/t$ in the first place. The proof seems much more involved, but what does it buy us, considering $1/t$ is already in some sense optimal? My understanding is that if only $1/t$ were considered, then [55] and [34]/[Theorem 18](https://large-scale-book.mathopt.com/LSCOMO.pdf) already prove convergence for anchor ODE and APPM respectively. At least clarifying this would be helpful. - The work contextualizes the results but usually quite late. Overall, contextualize w.r.t. existing work upfront and motivate the sections. E.g., elaborate on [55] in l. 20 and explicitly say "adapt to the strong monotone modulus" in l. 221. Explicitly compare Theorem 5.1 with existing results for APPM.
Comments: - In terms of limitations, maybe mention that the convergence results for the discretized schemes are only for implicit schemes (APPM) - I would mention the existing discretization results up front (in contrast, [55] only considers GD with anchoring and does not get $1/k^2$) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can we obtain convergence for explicit schemes such as AEG/FEG? - It seems very interesting that you can adapt both to L and $\mu$. Would this work for an explicit scheme? - Why distinguish between Halpern and APPM in Fig. 2 if they are equivalent? - What is $M$ in the $y$ axis of Figure 2? - FEG generalizes to cohypomonotone problems. Can anchor ODE handle cohypomonotonicity? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: (see comments) Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and constructive review; we're glad that the reviewer found our paper to be "impressive" and "complete". We would like to start by sharing the most interesting observation we've gained while addressing your questions. **Q5.** The anchor ODE introduced in our paper does not handle cohypomonotonicity. However, we thought about this question and found a novel ODE that handles cohypomonotonicity. We are excited to share it here. Consider the ODE
\begin{align*}
\dot{X} = - \left( 1 - \frac{\rho}{t} \right) \mathbb{A}(X) - \frac{1}{t} ( X - X_0 ) + \rho \frac{d}{dt} \mathbb{A}(X).
\end{align*}
Define the Lyapunov function as
\begin{align*}
V(t) = \left( \frac{t^2}{2} - t \rho \right) \left\| \mathbb{A}(X) \right\|^2 + t \left\langle \mathbb{A}(X), X-X_0 \right\rangle.
\end{align*}
By differentiating, we have
\begin{align*}
\dot{V}(t) &= \left( t - \rho \right) \left\| \mathbb{A}(X) \right\|^2 + \left( t^2 - 2 t \rho \right) \left\langle \frac{d}{dt} \mathbb{A}(X), \mathbb{A}(X) \right\rangle \\
&\quad + \left\langle \mathbb{A}(X), X-X_0 \right\rangle + t \left\langle \frac{d}{dt} \mathbb{A}(X), X-X_0 \right\rangle + t \left\langle \mathbb{A}(X), \dot{X} \right\rangle \\
&= t^2 \left\langle \frac{d}{dt} \mathbb{A}(X), \left( 1 - \frac{2 \rho}{t} \right) \mathbb{A}(X) + \frac{1}{t} ( X-X_0 ) \right\rangle + t \left\langle \mathbb{A}(X), \left( 1 - \frac{\rho}{t} \right) \mathbb{A}(X) + \frac{1}{t} ( X - X_0 ) + \dot{X} \right\rangle \\
&= t^2 \left\langle \frac{d}{dt} \mathbb{A}(X), - \dot{X} - \frac{\rho}{t} \mathbb{A}(X) + \rho \frac{d}{dt} \mathbb{A}(X) \right\rangle + t \left\langle \mathbb{A}(X), \rho \frac{d}{dt} \mathbb{A}(X) \right\rangle \\
&= - t^2 \left( \left\langle \frac{d}{dt} \mathbb{A}(X), \dot{X} \right\rangle - \rho \left\| \frac{d}{dt} \mathbb{A}(X) \right\|^2 \right) \\
&\le 0.
\end{align*}
The last inequality holds since $\mathbb{A}$ is $\rho$-cohypomonotone. Now from $V(t) \le V(0) = 0$, we have
\begin{align*}
\frac{t^2}{2} \left\| \mathbb{A}(X) \right\|^2 &\le t \left( - \left\langle \mathbb{A}(X), X-X_0 \right\rangle + \rho \left\| \mathbb{A}(X) \right\|^2 \right) \\
&= t \left( - \left\langle \mathbb{A}(X), X-X_\star \right\rangle + \rho \left\| \mathbb{A}(X) \right\|^2 + \left\langle \mathbb{A}(X), X_0-X_\star \right\rangle \right) \\
&\le t \left\langle \mathbb{A}(X), X_0-X_\star \right\rangle \\
&\le t \left\| \mathbb{A}(X) \right\| \left\| X_0-X_\star \right\|.
\end{align*}
Rearranging, we have $\left\| \mathbb{A}(X) \right\| \le \frac{2}{t} \left\| X_0-X_\star \right\|$, and thus we conclude
\begin{align*}
\left\| \mathbb{A}(X) \right\|^2 \le \frac{4}{t^2} \left\| X_0-X_\star \right\|^2.
\end{align*}
**Q1.** Yes, it is obtainable. We can show that explicit schemes such as EAG/FEG converge to the optimum point closest to the starting point. As proved in Theorem 3.5, we know that convergence to the closest optimum point is a property of continuous anchor schemes. On the other hand, as observed in Appendix D.4, we know explicit schemes like EAG/FEG also correspond to our continuous scheme in the limit of vanishing stepsize. Thus we expect the trajectories of the anchor-based algorithms to converge to the closest optimum for proper stepsizes. This expectation is true and is formalized in [71, Theorem 1]. **Q2.** It's an intriguing question, and it seems to be a nontrivial issue. We believe that this could be an interesting direction for future work. **Q3.** The label 'Halpern' in Figure 2 corresponds to the method in Theorem 5.1 with varying $p>0$ and $\gamma>0$, whereas APPM corresponds to the method in Line 71, the specific case of $p=1$ and $\gamma=1$ in Theorem 5.1, which is the optimal choice of $p$ and $\gamma$.
Figure 2 shows that even if APPM is the method with the optimal $p$ and $\gamma$ for the (merely) monotone condition, it may not be optimal for each instance. The choice $p=1.5$ and $\gamma=2.0$ may not even converge under the monotone condition (see Table 2), but it outperforms APPM for the specific instance in our experiment. **Q4.** In the experiment section, we solve a compressed sensing problem in a decentralized manner using PG-EXTRA [59]. PG-EXTRA can be understood as a proximal point method with respect to a certain metric defined by a matrix $M$ [54], and the corresponding monotone operator is actually monotone in $\langle \cdot, \cdot \rangle_M$, where $\langle x, y \rangle_M = x^\intercal M y$ and $\|x\|_M = \sqrt{x^\intercal M x}$. Therefore, the performance measure for this problem should be $\|\tilde{\mathbb{A}} x\|_M$ rather than $\|\tilde{\mathbb{A}} x\|$ (Euclidean norm), and our theoretical guarantee applies to this new metric. **W1. Why do we consider $\beta(t)$ other than $\frac{1}{t}$ in the first place?** Through the analysis with general $\beta(t)$, we obtain a more refined understanding of anchor acceleration, and using this understanding, we were able to design our adaptive method. Also, as in the answer to Question 3, the choices besides $p=1$ and $\gamma=1$ may be suboptimal when the operator is merely monotone, but the other choices may be useful when the operator satisfies stronger properties. **W2, C1, C2.** Thank you for your valuable comments. In the later version, we will add the point in the limitation section that our work is for implicit schemes. Additionally, we will explicitly mention the prior comparable results and the motivations up front. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response of the authors – I think it would be valuable to include the continuous-time treatment of cohypomonotonicity if space allows. I've raised my score accordingly.
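As a self-contained numerical illustration (our construction, not part of the rebuttal), one can integrate the anchor ODE $\dot{X} = -\mathbb{A}(X) - \frac{1}{t}(X - X_0)$, i.e. the $\rho = 0$ case of the ODE in the rebuttal above, for a monotone but non-contractive operator and watch $\|\mathbb{A}(X(t))\|$ shrink:

```python
import numpy as np

# A monotone (skew-symmetric) operator A(x) = R x: a pure rotation field.
# The plain flow x' = -A(x) merely rotates and never converges; the anchor
# term -(1/t)(x - x0) is what drives the residual ||A(X(t))|| to zero.
R = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def anchor_flow(x0, t0=1.0, T=50.0, h=1e-3):
    """Forward-Euler integration of x' = -R x - (1/t)(x - x0)."""
    x, t = x0.copy(), t0
    while t < T:
        x = x + h * (-R @ x - (x - x0) / t)
        t += h
    return x

x0 = np.array([1.0, 0.0])
xT = anchor_flow(x0)
residual = float(np.linalg.norm(R @ xT))  # ||A(X(T))||, roughly O(1/T)
```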
Summary: This paper conducts a continuous-time analysis of an acceleration method called "anchoring", where the main contributions are four-fold. The authors provided a unified analysis of the convergence rate of anchor acceleration, which includes both the constant and adaptive cases. Then the authors presented an adaptive method for anchor acceleration that is inspired by their analysis and achieves faster convergence rates than the constant method. After this, the authors proved that the adaptive method is robust to noise and can handle non-convex optimization problems. Finally, the authors provided numerical experiments that demonstrate the effectiveness of the adaptive method on various optimization problems. Overall, the paper provides a valuable contribution to the field of optimization and is in general well-written and well-presented. Strengths: This paper provides a comprehensive analysis of the convergence rate of anchor acceleration, which includes both the constant and adaptive cases. The paper provides a clear and concise presentation of the theoretical analysis; I checked the technical proofs of all results, with a detailed focus on Theorem 3.1 [Section E.5] and Theorem 6.2 [Section H.2], and they are solid. The idea of using the factor $\frac{2 \mu}{e^{2 \mu t}-1}$ is new and reasonable, since for $\mu$-strongly convex objectives, as $\mu \to 0^+$ it is consistent with the standard $\frac{1}{t}$ anchoring rate (the analysis provided is significantly more general, which should be honored). The adaptive method presented in the paper is also interesting; it achieves faster convergence rates than the constant method and is robust to noise and non-convex optimization problems. The paper also clearly presented numerical experiments which demonstrate the effectiveness of the adaptive method on various optimization problems.
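The consistency claim above can be verified directly (a one-line check we add for completeness): expanding the exponential for small $\mu$,

```latex
\begin{align*}
\frac{2\mu}{e^{2\mu t} - 1}
  = \frac{2\mu}{2\mu t + 2\mu^2 t^2 + O(\mu^3)}
  = \frac{1}{t} \cdot \frac{1}{1 + \mu t + O(\mu^2)}
  \;\longrightarrow\; \frac{1}{t}
  \quad \text{as } \mu \to 0^+.
\end{align*}
```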
Weaknesses: The paper assumes a certain level of mathematical background, which might be difficult to follow for some readers unfamiliar with the literature. Further, the paper does not seem to provide a comparison of the adaptive method with other state-of-the-art optimization methods. In addition, the paper does not provide a detailed discussion of the limitations of the adaptive method and areas for future research. I have not checked, but the assumptions of the adaptive method presented in the paper might be more stringent than required. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are there some mismatches in the definition of Lyapunov functions at various places? For instance, the $V(t)$ definition in Line 133 [Corollary 3.3], when pinning $\beta = \frac{1}{t}$, and the last in Line 147 [last but one display of Section 3.1] differ by a factor of 2. These are minor, but I do encourage the authors to check them carefully. Can you provide examples of applications where anchor acceleration might be particularly useful? I understand that anchor acceleration has been discovered to be an acceleration mechanism for minimax optimization and fixed-point problems, but would anchor acceleration be useful in other optimization problems as well? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This is a theoretical paper and admits no negative social impacts, to my best knowledge. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are pleased that the reviewer found our paper to "provide a valuable contribution to the field of optimization". Especially, we're sincerely grateful that the reviewer thoroughly engaged with the technical proofs and highlighted the potential applicability of the ideas to more generalized cases. **Weakness: Further, the paper seems not to provide a comparison of the adaptive method with other state-of-the-art optimization methods.** We haven't provided a detailed comparison because (i) there are no other adaptive methods to compare our method with, and (ii) we already compared against state-of-the-art non-adaptive methods. To the best of our knowledge, our adaptive method is the only purely adaptive method for the implicit-method monotone inclusion setup, in that it does not require any line search or backtracking procedure. (One can compare our method with that of [23], which requires backtracking of some sort.) Furthermore, the non-adaptive schemes we compare our methods with (APPM, EAG, FEG) are all guaranteed an optimal convergence rate with a matching lower bound. We will make sure to further emphasize this point. Thank you for pointing this out. **Weakness: In addition, the paper does not provide a detailed discussion of the limitations of the adaptive method and areas for future research. I have not checked, but the assumptions of the adaptive method presented in the paper might be more stringent than required.** We first want to clarify that our adaptive method only requires the standard assumption of maximal monotonicity, and it is guaranteed to have at least the same convergence rate as the state-of-the-art algorithm APPM. However, with additional assumptions, the adaptive method can be faster. This is the first adaptive method of its kind (an implicit method adapting to monotonicity or strong monotonicity without any line searches or inner loops), and we derive it from continuous-time analysis.
This adaptive method will certainly have limitations, and we expect it to be possible to relax the assumptions guaranteeing linear rates. This is a very interesting direction for future work, and we view our contribution as finding the first adaptive method of this kind and providing a proof of concept that this type of adaptivity is possible. **Q1.** Thank you for pointing out this typo. The Lyapunov function in Line 147 has to be divided by a factor of 2. We will correct this in our revision. **Q2.** The fixed-point iteration subsumes convex optimization algorithms such as ADMM, PDHG, Condat-Vũ, and three-operator splitting, and the anchor mechanism can provide acceleration for these algorithms. For a specific task, anchor acceleration is practically useful in detecting infeasibility for constrained optimization problems [Park \& Ryu, 2023], and in accelerating value iteration for dynamic programming and reinforcement learning [Lee \& Ryu, 2023]. - J. Park and E. K. Ryu, Accelerated Infeasibility Detection of Constrained Optimization and Fixed-Point Iterations. ICML'23. - J. Lee and E. K. Ryu, Accelerating Value Iteration with Anchoring. arXiv preprint. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' informative response, and maintain my current score.
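Since the Halpern iteration is the discrete-time face of anchoring discussed here, a minimal sketch (ours, with an arbitrary nonexpansive map) may help: $x_{k+1} = \frac{1}{k+2} x_0 + \left(1 - \frac{1}{k+2}\right) T(x_k)$, whose fixed-point residual decays at roughly an $O(1/k)$ rate:

```python
import numpy as np

# A nonexpansive map with a unique fixed point at the origin: rotation
# by a fixed angle. Plain Picard iteration x_{k+1} = T(x_k) only rotates;
# the vanishing anchor coefficient 1/(k+2) makes the residual shrink.
theta = 0.9
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def halpern(x0, iters):
    """Halpern iteration x_{k+1} = (1/(k+2)) x0 + (1 - 1/(k+2)) T x_k."""
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * (T @ x)
    return x

def residual(x):
    """Fixed-point residual ||x - T x||."""
    return float(np.linalg.norm(x - T @ x))

x0 = np.array([1.0, 0.0])
r_100 = residual(halpern(x0, 100))
r_1000 = residual(halpern(x0, 1000))
```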
Summary: The paper focuses on the analysis of anchor acceleration, a recently discovered acceleration mechanism for minimax optimization and fixed-point problems. The authors provide tight and unified analyses to characterize the convergence rate of anchor acceleration and present an adaptive method inspired by continuous-time analyses. The contributions include a differential inclusion model of anchor acceleration, well-posedness analysis, convergence rate analysis with a power-law anchor coefficient, and an adaptive method for choosing the anchor coefficient. Strengths: Given the limited understanding of the anchor acceleration mechanism compared to Nesterov acceleration, the authors aim to provide a formal and rigorous treatment of the anchored dynamics through continuous-time analyses. They also seek to gain insight into the anchor acceleration mechanism and its accelerated convergence rate. The authors present a differential inclusion model of anchor acceleration and establish its well-posedness. They analyze the convergence rate of the anchor ODE with a power-law anchor coefficient. The trade-off between the vanishing speed and the contracting speed of the anchor term is discussed. The authors also provide a proof outline of Lemma 3.4 and Theorem 3.1 to derive the convergence result. Additionally, the study presents an adaptive method for choosing the anchor coefficient based on continuous-time analyses. Weaknesses: - Novelty of applying continuous analysis to anchor acceleration: It would be helpful for the authors to clarify the motivation behind applying continuous analysis techniques specifically to the problem of anchor acceleration. Given the existence of continuous analysis techniques, what is the significance of applying them to anchor acceleration? This clarification would strengthen the novelty of the paper. 
- Importance of anchor acceleration: The paper could benefit from providing a clear explanation of why anchor acceleration is important and how it differs from other acceleration mechanisms, such as Nesterov acceleration. This would help readers understand the relevance and practical implications of anchor acceleration. - Technical challenges of applying continuous analysis: Can you elaborate on the specific technical challenges encountered when applying continuous analysis to the anchor acceleration problem? Can you discuss any unique aspects or complexities involved in analyzing the convergence properties of anchor acceleration using continuous-time techniques? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you provide more details on the motivation behind using anchor acceleration and how it differs from other acceleration mechanisms, such as Nesterov's acceleration? - Can you provide more insights into the choice of the anchor coefficient $\beta(t)$ and how it affects the convergence rate of the algorithm? - Can you provide more details on the assumptions made in the analysis and how they may affect the applicability of the proposed method to real-world problems? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not seem to discuss the limitations adequately; they do mention that carrying out more advanced analyses for the anchor ODE is an interesting direction for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the valuable feedback and thoughtful questions. The answers to your questions are as follows. **Importance of anchor acceleration, difference between Nesterov's acceleration (W2, Q1) :** The main difference between Nesterov acceleration and anchor acceleration is that they are optimal methods for **different settings**. Nesterov acceleration is optimal in convex minimization problems, whereas anchor acceleration is optimal in minimax problems and fixed-point problems. For minimax problems, an anchor-based method called EAG [70] was the first algorithm to achieve an $\\mathcal{O} \\left( \\frac{1}{k^2} \\right)$ rate, and FEG [39] improved the convergence rate by a constant factor and generalized the setting. For fixed-point problems, the Halpern method of [42] and OC-Halpern [51] are provably exactly optimal [51], and equivalently APPM [34] and OS-PPM [51] are proven to be exactly optimal for monotone inclusions. All aforementioned methods use anchor acceleration. Therefore, we believe that anchor acceleration is the primary acceleration mechanism for these settings (which are not convex minimization), and their optimal use in discrete-time algorithms motivates our work. **Novelty of applying continuous analysis to anchor acceleration, technical challenges of applying continuous analysis (W1, W3) :** We believe the Lyapunov function we introduce in Corollary 3.3 represents a non-trivial technical challenge that we overcame.
Specifically, we introduce the Lyapunov function of the form \\begin{align*} V(t) &= \\frac{C(t)^2}{2} \\left( \\left\\| \\mathbb{A}(X(t)) \\right\\|^2 + 2\\beta(t) \\left\\langle \\mathbb{A}(X(t)) , X(t) - X_0 \\right\\rangle + ( \\beta(t)^2 + \\dot{\\beta}(t) ) \\left\\| X(t)-X_0 \\right\\|^2 \\right) \\\\ &\\quad - \\int_{t_0}^{t} \\frac{d}{ds} \\left( \\frac{C(s)^2\\dot{\\beta}(s)}{2} \\right) \\left\\| X(s)-X_0 \\right\\|^2 ds, \\end{align*} which is very different from the Lyapunov functions used for Nesterov acceleration [Su et al., 61, 62]: \\begin{align*} \\mathcal{E}(t) = \\frac{2t^2}{r-1} ( f(X(t)) - f^{\\star} ) + (r-1) \\left\\| X(t) - x^\\star + \\frac{t}{r-1} \\dot{X}(t) \\right\\|^2. \\end{align*} In the analysis of optimization algorithms, the construction of an appropriate (continuous- or discrete-time) Lyapunov function is often the main technical challenge, and we argue that discovering this Lyapunov function and using it to characterize the continuous-time dynamics is a novel contribution. Using this Lyapunov function, we obtained convergence rates for the generalized coefficient $\\beta(t) = \\frac{\\gamma}{t^p}$ with $\\gamma>0$ and $p>0$ as in Table 1. This result provides us with a more refined understanding of the anchor mechanism. These results also extend to the discrete setup, as Section 5 confirms this correspondence. We then provide analysis for strongly monotone conditions as well (Section 6). The insights from these analyses culminate in the design of the adaptive method of Section 7, which we believe to be useful in practice due to its adaptability to both monotonicity and strong monotonicity. **Insights into the choice of the anchor coefficient $\\beta(t)$ (Q2) :** We answer this question by summarizing Section 3 of the paper. As the name 'anchor' suggests, the anchor term pulls the trajectory towards the anchor. Intuitively speaking, a sufficient amount of pull by the anchor leads to contracting behavior of the dynamics, and hence to convergence.
However, the anchor should eventually vanish, since our goal is to converge to an optimum $X_\\star$ making $\\mathbb{A}$ zero, not to the anchor. Therefore, the vanishing speed of the anchor coefficient $\\beta(t)$ should be fast enough for the flow to safely converge to the optimum. Proper balancing between the 'necessity of the anchor' and the 'vanishing anchor' leads to fast convergence, and the choice $\\beta(t) = \\frac{1}{t}$ for a merely monotone $\\mathbb{A}$ can be derived from this intuition. **Applicability of assumption (Q3):** One of the most significant applications of anchor acceleration is minimax optimization. There are several papers using anchor acceleration for minimax optimization that have been published in ML venues, such as EAG [70, ICML 2021] and FEG [39, NeurIPS 2021]. Our work considers the problem of efficiently making the norm of monotone operators small. The saddle differential operator $G_L = (\\nabla_x L,\\, - \\nabla_y L)$ of a convex-concave minimax function $L \\colon \\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}$ is a monotone operator, so the anchor scheme is applicable. The point where $G_L$ outputs the zero vector is the saddle point of $\\min_x\\max_{y} L(x,y)$, thus minimizing $\\left\\| G_L(x,y) \\right\\|^2$ solves this minimax optimization problem. We thank the reviewer for the clarifying questions. We hope we have addressed the reviewer's primary concern about the significance of anchor acceleration. If so, we kindly ask the reviewer to consider raising the score. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification. I have raised my score accordingly.
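The anchored dynamics discussed in this thread can be sketched numerically. Below is a minimal illustration (our sketch under assumed parameters, not the paper's code): the anchored flow X'(t) = -A(X(t)) - (gamma/t)(X(t) - X_0) is integrated with forward Euler for the saddle operator of L(x, y) = x*y, whose unique zero is the origin; the step size and horizon are illustrative choices.

```python
import numpy as np

# Sketch of the anchored flow  X'(t) = -A(X(t)) - (gamma/t) * (X(t) - X_0)
# for the saddle operator of L(x, y) = x*y, i.e. A(z) = (z_2, -z_1).
# A is monotone (skew) with its unique zero at the origin.

def A(z):
    return np.array([z[1], -z[0]])

def anchored_flow(z0, gamma=1.0, t0=1.0, T=200.0, h=1e-3):
    """Forward Euler integration of the anchored dynamics."""
    z, t = z0.astype(float), t0
    for _ in range(int((T - t0) / h)):
        z = z + h * (-A(z) - (gamma / t) * (z - z0))
        t += h
    return z

z0 = np.array([1.0, 1.0])
z_end = anchored_flow(z0)
# ||A(X(t))|| shrinks roughly like O(1/t); the un-anchored flow
# X' = -A(X) would merely rotate z0 at constant norm forever.
```

The un-anchored flow conserves the norm for this skew operator, so it is the 1/t anchor term that drives the operator norm to zero, matching the beta(t) = 1/t intuition in the rebuttal.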
Summary: The paper analyzes the dynamics of anchor acceleration using a differential inclusion. It derives a convergence rate that depends on the anchor coefficient and shows that the rate is tight using a certain instance. By discretizing the differential inclusions, the authors derive an algorithm that generalizes the state-of-the-art APPM algorithm and provide an analysis of its convergence rate. In addition, the authors propose an algorithm that adaptively varies the anchor coefficient and show its performance both theoretically and empirically. Strengths: - Compared to existing papers that provide analysis of anchor methods through continuous-time analysis, this paper provides better convergence rates in a more general setting. - The adaptive anchor acceleration method in Section 7 is interesting, and the method is inspired by the analysis of continuous-time dynamics. It is a meaningful example of the potential of ODE analysis for optimization algorithm design. Weaknesses: - In terms of algorithms and their theoretical analysis, the contribution of this paper does not appear to be significant. In my understanding, the main result of this paper is deriving a known convergence rate in a different way, and proposing a different algorithm that achieves the same rate as the known one. There is insufficient discussion of how the results of this paper are superior to existing studies. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Since A is treated as a set-valued operator in "Monotone and set-valued operators" of Section 1.1, I understood that Ax in the equations in Lines 39 and 41 is a set. Then those equations involve the inner product of a point and a set in Euclidean space. Such an inner product is not clear to me. Additionally, in Line 50, A is assumed to be a differentiable operator, but the definition of the differentiability of set-valued operators is nontrivial. So, I guessed that A here is a single-valued operator. Is it correct? 
If so, it would be good to clarify it. - Theorem 2.2 states the uniqueness of the solution of the differential inclusion (6) rather than a differential equation. Is this a valid claim? It seems to me that there is more than one solution depending on which of the elements in A(X(t)) in equation (6) is chosen at each time t. Adding a comment on this point would facilitate the reader's understanding. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable review. We're glad that you found our adaptive anchor acceleration method in Section 7 to be a "meaningful example of the potential of ODE analysis for optimization algorithm design". We address your questions and feedback as follows. **Q1.** - As mentioned by the reviewer, $\\mathbb{A}x$ is a set. The definition of the inner product between a point and a set is clarified in Line 40, which is referenced from [Ryu \& Yin, 54]. We will replace 'i.e.' in Line 40 with another expression, such as 'which means that', to avoid confusion. Thank you. - We failed to mention that the definition of a differentiable operator includes single-valuedness. As you mentioned, $\\mathbb{A}$ in Line 50 is single-valued, and we will clarify this issue. Thank you for pointing this out. **Q2.** Yes, it is a valid claim, and we provide the proof in Section B.1 of the appendix. Recall that in Line 59, the solution was defined as a function $X\\colon [0,\\infty) \\to \\mathbb{R}^n$ that satisfies a certain condition, and in the case of Theorem 2.2, it is: *"absolutely continuous and satisfies (6) for $t\\in(0,\\infty)$ almost everywhere, with initial condition $X(0)=X_0$."* The claim of Theorem 2.2 is that if a function $X \\colon [0,\\infty) \\to \\mathbb{R}^n$ satisfies these conditions, it is unique as a function. Note that this fact holds true regardless of whether $\\mathbb{A}(X(t))$ is a set. To provide further clarification, for the unique solution $X$, the element of $\\mathbb{A}(X(t))$ for which $\\dot{X}(t) \\in -\\mathbb{A}(X(t)) - \\frac{\\gamma}{t^p} (X(t)-X_0)$ holds is uniquely determined for each $t>0$ up to measure zero. Once the absolutely continuous function $X\\colon [0,\\infty) \\to \\mathbb{R}^n$ is determined, its derivative $\\dot{X}$ is defined almost everywhere and is unique up to measure zero.
As a consequence, the selection defined in Line 61, $\\tilde{\\mathbb{A}}(X(t)) := -\\dot{X}(t) - \\frac{\\gamma}{t^p} (X(t)-X_0)$, is determined almost everywhere and is unique up to measure zero as well. A standard reference that rigorously establishes the well-posedness of such differential inclusions is [33]. **Weakness.** Through the convergence analysis for the generalized anchor coefficient $\\beta(t)$, we were able to obtain a more refined understanding of the role of the anchor coefficient, which further motivated our adaptive method. Theorem 3.1 and the results in Table 1 give the intuition that (i) the vanishing speed of $\\beta(t)$ must be fast enough in order not to slow down the convergence, and (ii) the optimally balanced choice under the monotone condition is $\\frac{1}{t}$. This intuition is strengthened by checking the tightness and the correspondence to the discrete setup in Sections 4 and 5. However, Theorem 6.2 shows that the linear rate is attainable for a $\\mu$-strongly monotone operator with $\\beta(t)$ vanishing linearly fast, but not with $\\beta(t) = \\frac{1}{t}$ (Line 196). The overall results of Sections 3 to 6 lead to another message: "the anchor coefficient should be chosen to adapt to the operator's property." This motivates our adaptive method. Moreover, by observing the common term $\\Phi(t)$ appearing in Proposition 3.2 and Proposition 6.1, we drew inspiration for our adaptive method, which keeps $\\Phi(t) = 0$. Again, we thank the reviewer for the constructive feedback. We believe we have addressed the reviewer's concern about the validity of Theorem 2.2. If so, we kindly ask the reviewer to consider raising the score. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response. The response addressed my concern. Regarding Q2, I had inadvertently overlooked the condition "absolutely continuous" in Line 59. I now agree with you that the claim of Theorem 2.2 is valid.
I also understand that the observation that "the anchor coefficient should be chosen to adapt to the operator's property" was obtained through the ODE analysis, and this observation is also a contribution of the paper. My score has been modified.
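As a discrete-time companion to the ODE discussion in this thread: the classical Halpern iteration x_{k+1} = lam_k * x_0 + (1 - lam_k) * T(x_k) with lam_k = 1/(k+2) is the basic anchor-type scheme for a nonexpansive map T. The sketch below is a generic illustration (not the paper's adaptive method); the rotation map, dimension, and iteration count are our assumptions.

```python
import numpy as np

# Generic Halpern (anchor) iteration for a nonexpansive map T:
#   x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),   lam_k = 1/(k + 2).
# T is taken to be a plane rotation, nonexpansive with Fix(T) = {0}.

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    return R @ x

def halpern(x0, iters=2000):
    x = x0.astype(float)
    for k in range(iters):
        lam = 1.0 / (k + 2)          # anchor coefficient pulling toward x0
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

x0 = np.array([1.0, 1.0])
x_end = halpern(x0)
residual = np.linalg.norm(x_end - T(x_end))  # fixed-point residual
```

The fixed-point residual ||x_k - T(x_k)|| is known to decay at an O(1/k) rate for this choice of lam_k, mirroring the beta(t) = 1/t anchor coefficient in continuous time.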
Rebuttal 1: Rebuttal: # Common Response First and foremost, we thank all reviewers for their time and feedback on our paper. We are pleased to see that most of the reviewers found our contributions to be valuable, especially the adaptive anchor method that we obtained using the insight gained through the continuous-time analyses. Although one reviewer expressed concerns regarding novelty, we believe this is perhaps due to a misunderstanding, and we clarify that Nesterov and anchor acceleration are acceleration mechanisms for different, disjoint settings, so they are not in competition with each other. We provide further details of the distinction in the individual response, and we hope this resolves the misunderstanding.
NeurIPS_2023_submissions_huggingface
2023
Preconditioning Matters: Fast Global Convergence of Non-convex Matrix Factorization via Scaled Gradient Descent
Accept (poster)
Summary: This paper considers the low-rank matrix factorization problem (LRMF). Recent work provided global convergence for gradient descent on LRMF starting from small random initialization and small learning rate, but the convergence rate there depends on the matrix condition number. This paper considers a variant of gradient descent where the update is rescaled akin to steepest descent, called ScaledGD (and its alternating version, AltScaledGD). The main result of this paper is proving that ScaledGD (and AltScaledGD) converges at a faster rate, with no dependence on the matrix condition number. Moreover, AltScaledGD converges even without small initialization and small step-size. Strengths: The main theoretical results of this paper are quite solid. ScaledGD (and AltScaledGD) are very reasonable algorithms for solving the LRMF problem, and this paper proves that they converge at a fundamentally faster rate than plain GD on the LRMF problem. Weaknesses: The presentation is pretty good, although it does seem written in a rush in many places. The main weakness of this paper is its motivation. - The low-rank matrix factorization problem can be solved very efficiently using specialized matrix algorithms, so if one studies simple general algorithms like gradient descent for LRMF, the end goal should not be to use it for LRMF, but rather to gain some understanding of the behavior of the simple algorithm on more complicated optimizations like neural networks. But while gradient descent is feasible on more complicated problems, I do not see how to implement ScaledGD (or AltScaledGD) on neural network optimization problems, or on any optimization problem where a specialized algorithm can already do better. Technical Quality: 3 good Clarity: 3 good Questions for Authors: To address the concerns in the 'weaknesses' box, please defend the motivation for studying ScaledGD (or AltScaledGD).
Could these algorithms reasonably be implemented on nonlinear neural network optimization problems (or on optimization problems beyond matrix factorization where they could be the state-of-the-art algorithm)? Can you provide experimental evidence that the algorithms here perform significantly better than GD (or AltGD) when the underlying matrix is not exactly low-rank, but just approximately low-rank? Of course, any algorithm for LRMF should remain competitive when the matrix M is not exactly low-rank. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's positive evaluation of our work, and we also thank you for your valuable and constructive suggestions. As for your concerns, we make detailed responses as follows. We would deeply appreciate it if you could raise your score should our responses resolve your concerns. **Q1: The motivation for studying ScaledGD/AltScaledGD on LRMF and could these algorithms be implemented on nonlinear neural network optimization problems?** **A1**: The gradient descent algorithm is of great significance in non-convex optimization problems, as studied by recent works Chi et al. [2019], Li et al. [2019a], Ye and Du [2021]. ScaledGD is a simple extension of GD that scales the gradient with a preconditioning matrix at minimal computational overhead, while it resolves the convergence barrier of GD for ill-conditioned non-convex optimization problems such as low-rank matrix recovery (LRMR) Tong et al. [2021]. ScaledGD is also computationally more efficient than the current SOTA algorithm for the LRMR problem, as analyzed in Tong et al. [2021]. While the current convergence analysis of ScaledGD for LRMR in Tong et al. [2021] is only local (with spectral initialization), it is more important to study the global convergence of ScaledGD for the non-convex LRMR problem. Our work makes the first attempt to study the global convergence of ScaledGD and AltScaledGD for asymmetric LRMF, and our results can be directly extended to the general low-rank matrix recovery problem with the help of conditions such as RIP for matrix sensing. The ultimate goal of studying ScaledGD is to use it for more non-convex optimization problems such as the training of deep neural networks. We show our recent results of ScaledGD and AltScaledGD on more challenging optimization problems, such as deep linear and nonlinear networks, in Figure 1 and Figure 2 of the PDF file in the Author rebuttal part.
Specifically, we consider the following deep linear model: $$\min_{W_3, W_2, W_1} f(W_1, W_2, W_3) := \frac{1}{2} ||W_3W_2W_1 - M||_F^2$$ with the ScaledGD iteration given by: $$W_1^{k+1} = W_1^{k} - \eta (W_2^{k\top} W_3^{k\top}W_3^k W_2^k)^{-1} \textcolor{red}{{{\nabla} _{W_1}}f}$$ $$ W_2^{k+1} = W_2^{k} - \eta (W_3^{k\top} W_3^k)^{-1} \textcolor{red}{{{\nabla} _{W_2}}f} (W_1^kW_1^{k\top})^{-1}$$ $$W_3^{k+1} = W_3^{k} - \eta \textcolor{red}{{{\nabla} _{W_3}}f}(W_2^k W_1^kW_1^{k\top} W_2^{k\top})^{-1} $$ and the nonlinear model: $$\min_{W_2, W_1} f(W_1, W_2) := \frac{1}{2} ||W_2\sigma(W_1) - M||_F^2$$ with the ScaledGD iteration given by: $$W_1^{k+1} = W_1^k - \eta {G^k} \odot \left( (W_2^{k\top} W_2^k)^{-1} (H^k\odot \textcolor{red}{{{\nabla} _{W_1}}f})\right)$$ $$W_2^{k+1} = W_2^k - \eta \textcolor{red}{{{\nabla} _{W_2}}f }\left( \sigma(W_1^k) \sigma(W_1^k)^{\top}\right)^{-1}$$ where $G^k$ is a matrix with $G_{i,j}^k = \frac{\partial \sigma(W_{1_{i,j}}^k)}{\partial W_{1_{i,j}}^k}$, $H^k$ is a matrix with elements $H_{i,j}^k = 1/G_{i,j}^k$, $\eta$ is the step-size and $\sigma(\cdot)$ is a piece-wise linear function such as Leaky ReLU. It can be seen from Figure 1 that ScaledGD converges much faster than GD and its convergence rate is independent of the condition number $\kappa$ of the target matrix $M$, whereas GD converges at different rates for different $\kappa$. Figure 2 shows the result of ScaledGD/AltScaledGD for the nonlinear network problem. Figure 2 (a) plots the loss curves of ScaledGD and GD under different condition numbers $\kappa$, and Figure 2 (b) plots the zoomed-in curves of GD from Figure 2 (a). It can be seen from these figures that ScaledGD converges much faster than GD even for this nonlinear network model, and the convergence rate of ScaledGD is also independent of $\kappa$, while in contrast GD converges at different rates under different $\kappa$.
In Figure 2 (c), we compare ScaledGD with AltScaledGD; it can be seen that both ScaledGD and AltScaledGD converge linearly to the global minimum, while AltScaledGD converges faster than ScaledGD. Please refer to the PDF file for the figures. **Q2: Provide experimental evidence to show that the algorithms here perform significantly better than GD when the underlying matrix is not exactly low-rank.** **A2**: We have tested the proposed methods on real data sets such as MSI data and video data, which are not exactly low-rank but are approximately low-rank. We show the experimental results in Figure 3 and Figure 4 in the attached PDF file in the author rebuttal part. It can be seen from these figures that even for real data sets that are not exactly low-rank, ScaledGD and AltScaledGD still converge much faster than vanilla GD. --- Rebuttal 2: Title: Request for discussion Comment: Dear reviewer Cp2y, We understand that reviewing is a time-consuming task and we want to express our gratitude for your dedication. Since the end of the discussion period is approaching, if you have any further concerns please feel free to let us know and we would be pleased to discuss them with you. We would greatly appreciate it if you could raise your score should our responses resolve your concerns. Thank you very much! --- Rebuttal 3: Comment: Dear reviewer, Since the end of the discussion period is approaching, if you have any further concerns please feel free to let us know and we would be pleased to discuss them with you. Thank you very much! --- Rebuttal Comment 3.1: Comment: I have read all the comments, and I will maintain my score.
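For concreteness, here is a minimal sketch (our illustration, not the authors' code) of the preconditioned alternating update for f(L, R) = 0.5 * ||L R^T - M||_F^2 in the spirit of AltScaledGD. With step size eta = 1, each preconditioned half-step reduces to an exact least-squares update, which makes the insensitivity to the condition number of M transparent; the dimensions, spectrum (kappa = 100), and random initialization are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 30, 30, 3
# Exactly rank-r target with condition number kappa = 100.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
M = U @ np.diag([100.0, 10.0, 1.0]) @ V.T

def alt_scaled_gd(M, r, iters=10, eta=1.0, seed=1):
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((M.shape[0], r))
    R = rng.standard_normal((M.shape[1], r))
    for _ in range(iters):
        # L-step: gradient (L R^T - M) R preconditioned by (R^T R)^{-1}.
        L = L - eta * (L @ R.T - M) @ R @ np.linalg.inv(R.T @ R)
        # R-step uses the freshly updated L (the "alternating" part).
        R = R - eta * (L @ R.T - M).T @ L @ np.linalg.inv(L.T @ L)
    return L, R

L, R = alt_scaled_gd(M, r)
rel_err = np.linalg.norm(L @ R.T - M) / np.linalg.norm(M)
```

With eta < 1 the half-steps are genuine preconditioned gradient steps rather than exact least-squares solves; eta = 1 is shown because it makes the kappa-independence easiest to see.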
Summary: This paper considers the problem of low-rank matrix recovery using preconditioned gradient descent. Traditionally, although GD can be used to solve this problem, it becomes extremely slow when the problem is ill-conditioned. Recently a method called ScaledGD was proposed that makes GD immune to ill-conditioning. However, previously, the theoretical analysis of ScaledGD required spectral initialization, and this work extends that analysis to small or moderate random initialization. The authors also analyze an alternating version of ScaledGD, which they call AltScaledGD, and also prove global convergence for this algorithm. Strengths: Previously, algorithms like ScaledGD that make GD immune to ill-conditioning relied on an initial point close to the ground truth. Although this can be achieved through a method known as spectral initialization, it can be very expensive. This work strengthens previous results by showing that a *random* initialization can also achieve linear convergence. Similar results have been established for vanilla GD with small initialization. Here the authors extend such results to ScaledGD, and show that a moderate random initialization also works. To me this is the main novelty of this work. Weaknesses: The main weakness that prevents me from giving this work a higher score is the organization of the technical sections. In the introduction the authors claim that they prove global convergence of ScaledGD with moderate initialization. However, in section 3.1 the main results are only stated for the **rank-1** case. Again, in section 4, the authors claim that they will present the theoretical analysis for the rank-1 case. In the appendix, however, the authors present Theorem 3, which is the moderate initialization case for the rank-d case. But the proof of Theorem 3 refers to some results in section 4, which was presented for the rank-1 case.
As a result, it is hard to decipher which parts of the rank-1 case generalize directly to the rank-d case, and which parts need more work. Overall I think the organization of the technical sections is confusing and needs significant improvement. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your valuable comments on our work. As for your concerns, we give a detailed response below, which we hope helps you fully understand our work. Please feel free to let us know if you have any further concerns. We would deeply appreciate it if you could raise your score should our responses resolve your concerns. **Q1: The organization of the technical sections should be improved; some of the results are only stated for rank-1 and the proof from rank-1 to rank-d is confusing.** **A1**: The overall picture of the technical sections is that we first present the main results (rank-1 for ScaledGD and rank-d for AltScaledGD) in section 3. Then, to give a proof guideline, we present the proof sketch of the rank-1 case of both ScaledGD and AltScaledGD in section 4, as the rank-1 case is easy to understand and follow. Finally, in the Appendix we present more detailed proofs of both the rank-1 and rank-d cases based on the proof sketch in section 4. Since the proof of the rank-d case follows the proof sketch of the rank-1 case with some moderate changes of the lemmas and inequalities from rank-1 to rank-d, we refer to the proof sketch of the rank-1 case as the proof guideline for Theorem 3 in the Appendix. We are sorry for any confusion caused by our proof organization. Note that ScaledGD is closely related to GD but has quite different convergence properties compared to GD; if $rank(M)=1$, ScaledGD becomes exactly GD but with varying step-size. Therefore, to make a concise and clear comparison between ScaledGD and GD, we present the convergence results of ScaledGD for the rank-1 case (the preconditioning reduces to a number rather than a matrix). Then for AltScaledGD, to fully understand the convergence property we present the results for general rank-d in section 3.2; therefore section 3 contains all our main results.
Once again, we thank the reviewer for your valuable comments; we will carefully revise the current manuscript according to the suggestions provided by the reviewer to further enhance the writing. --- Rebuttal 2: Title: Request for a discussion Comment: Dear reviewer beQc, We understand that reviewing is a time-consuming task and we want to express our gratitude for your dedication. We value your expertise and opinion, and hope that you can take time to discuss with us if you have any further concerns. We would greatly appreciate it if you could raise your score if our responses have addressed your concerns. Thank you very much! --- Rebuttal Comment 2.1: Comment: I thank the authors for the clarifications. However, I feel this organization is very confusing, even after the clarification. If I understand correctly, the main results of this paper are: rank-1 convergence for ScaledGD and rank-d convergence for AltScaledGD? In other words, the authors do not prove global convergence for rank-d ScaledGD, is this correct? --- Reply to Comment 2.1.1: Comment: Thank you for your response. Studying the rank-1 case of an optimization algorithm can help us gain more insight into and understanding of the theoretical analysis, and this approach has been widely used in low-rank matrix recovery. We do not provide the results of ScaledGD for rank-d but instead provide the results of AltScaledGD, since the results of ScaledGD follow directly from Theorem 1 and Theorem 2, and the corresponding proof is straightforward given all the available results (please refer to our discussion below). **1. Why do we provide the convergence results for rank-1?** ScaledGD is closely related to GD; specifically, it becomes exactly GD but with varying step-sizes (implemented by the scaling number) when $rank (M) = 1$.
Since the comparison of ScaledGD and GD in the rank-1 case is easy to understand and follow, we first presented the convergence results of ScaledGD for rank-1, which is similar to [1] in dealing with the global convergence of GD. [1] Simon S Du, Wei Hu, and Jason D Lee. Algorithmic regularization in learning deep homogeneous models: layers are automatically balanced. Advances in Neural Information Processing Systems, 31:382–393, 2018. **2. Why is the proof of ScaledGD for rank-d trivial?** We have provided detailed comparisons and analysis (the proof sketch) of the convergence of ScaledGD and AltScaledGD for rank-1 (section 4) for easier understanding. Based on the proof sketch of the rank-1 case, the proof of the convergence of AltScaledGD for rank-d follows easily by extending the inequalities and lemmas from rank-1 to rank-d. With all these in hand, proving the results of ScaledGD for rank-d is straightforward by following the proof guideline of its rank-1 case together with the corresponding lemmas and results from the proof for AltScaledGD. If the reviewer is concerned that the results provided in section 3.1 (for rank-1) make the organization of the paper confusing, we can simply replace the results in Theorem 1 and Theorem 2 with their rank-d counterparts (our results for rank-1 can be directly generalized to rank-d) and append the aforementioned proof, which only needs minor revision of the current Appendix. Once again, thanks for your comments and response. --- Reply to Comment 2.1.2: Comment: Dear reviewer, So far, your main concern has been that the results provided in section 3.1 are for rank-1 rather than for rank-d. Yet, the overall picture and organization of this paper are clear. Meanwhile, presenting results for the rank-1 case is common practice in low-rank matrix recovery.
In our work, given the rank-1 results/proof of ScaledGD and the rank-d results/proof of AltScaledGD, the proof of ScaledGD for rank-d, which we omitted in the initial submission, follows readily from the existing lemmas and inequalities along the proof sketch. Following the reviewer's comment, we would like to revise the corresponding part of section 3.1, **which only needs minor revision of the current manuscript.** We hope that replacing the results of Theorem 1 and Theorem 2 with their rank-d counterparts, as given for AltScaledGD, can address your concern. Thank you very much.
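The rank-1 observation in this thread ("ScaledGD becomes exactly GD but with varying step-size") can be verified directly: for rank-1 factors, the preconditioners (R^T R)^{-1} and (L^T L)^{-1} are just the scalars 1/||R||^2 and 1/||L||^2. A small sketch with illustrative dimensions and random data (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
L = rng.standard_normal((n, 1))    # rank-1 factors stored as n x 1 matrices
R = rng.standard_normal((n, 1))
M = rng.standard_normal((n, n))
eta = 0.2

E = L @ R.T - M                    # residual of f = 0.5 * ||L R^T - M||_F^2
grad_L = E @ R                     # gradient with respect to L

# General ScaledGD step for the L factor ...
scaled_step = eta * grad_L @ np.linalg.inv(R.T @ R)
# ... equals a plain GD step with the adaptive scalar step eta / ||R||^2.
gd_step = (eta / (R.T @ R).item()) * grad_L

assert np.allclose(scaled_step, gd_step)
```

The same identity holds for the R factor with the scalar 1/||L||^2, which is why the rank-1 analysis can be phrased entirely in terms of GD with varying step sizes.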
Summary: This paper studies low rank matrix factorization, which is an important topic with many applications in machine learning. The main challenge of this problem comes from the non-convexity of the objective function; in particular, this objective function can be non-smooth. As a consequence, global convergence remains a difficult question in this area. The most recent approximate-global-minima convergence guarantee depends on quite a few parameters, and this, especially the small initialization and small learning rate, might reduce the practical application potential of the convergence results. To address this issue, this paper shows that preconditioning helps in accelerating the convergence and that scaled gradient descent converges to approximate minima within an improved number of iterations. Strengths: The problem in this paper is well motivated and studied. The main strength of this paper is that the convergence result does not sensitively depend on the condition number or a small learning rate, which improves the application potential of the current algorithm and convergence analysis. Another strength is that this paper provides some variants of the main algorithms and simultaneously provides comparable convergence rates, which might be of independent interest. Weaknesses: 1. The main weakness of this work is the lack of real-data experiments. Although the theoretical analysis is complete and technical, matrix factorization is a powerful method in real applications. 2. The convergence rate is theoretically improved; however, the practical impact of a matrix factorization algorithm is inevitably tied to real-data experiments. It is not clear how the algorithm works on real data sets, which makes it difficult to judge the contribution of this theoretically sound paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It is not quite obvious to me why the initialization can be completely random.
Since the objective function is non-convex, different initial points can still converge to different local minima, and this might cause a difference in the resulting output. 2. Why does the saddle-avoidance phase hold for random initialization? I might miss some point, but one of the contributions of this paper is that the initialization can be random, and, at least intuitively, this should cause some technical challenges in analyzing the saddle-avoidance phase. Could you please provide some intuition on the proof in lines 157-176 of the Appendix, so that it is clearer that saddle avoidance can be achieved with random initialization? 3. Another question on the theoretical side: what is the main technical challenge in proving the convergence of alternating scaled GD, compared to the main algorithm? 4. Line 71, why is near-zero initialization an issue? How does it affect specific ill-conditioned LRMF problems? 5. Line 78, are the results of Tong et al. local or global? The presentation causes confusion. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: As stated before, the main limitation of this paper is the experiments; it is not convincing that the current theoretical results are powerful enough to have a strong impact in applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for the valuable comments and constructive suggestions. Here we respond to your concerns one by one. We would greatly appreciate it if you would raise your score should our responses resolve your concerns. **Q1: How do the algorithms work on real datasets?** **A1**: ScaledGD and AltScaledGD work not only on simulated data but also on real datasets such as multi-spectral image (MSI) data and video datasets. We provide a PDF file in the author rebuttal section and report the experimental results of GD, ScaledGD, and AltScaledGD on MSI and video datasets in Figure 3 and Figure 4. In these figures, we can see that ScaledGD and AltScaledGD converge much faster than GD on both the MSI and video datasets. Specifically, the relative error of ScaledGD/AltScaledGD is less than $10^{-2}$ for the MSI data "flower" and less than $10^{-5}$ for the MSI data "Simu_Indian" within $500$ iterations, while the relative errors of GD are still very large after $2000$ iterations. The experimental results are consistent with the theoretical results provided in the main paper. We will also add these real-dataset experiments to the Appendix of our revision. Meanwhile, our follow-up work indicates that ScaledGD/AltScaledGD work well for more applications; please refer to our response to Q1 of reviewer Cp2y and to Figure 1 and Figure 2 in the PDF file, which show that ScaledGD/AltScaledGD work well for neural network models. Overall, this conference paper focuses on proving the global convergence of ScaledGD and AltScaledGD for the non-convex LRMF model and on analyzing the descent trajectory of the algorithms. The results are significant improvements over the work of Tong et al. 2021 on the analysis of ScaledGD. More applications of ScaledGD and AltScaledGD and their convergence analysis are ongoing work. 
**Q2: Why can the initialization be completely random?** **A2**: For non-convex optimization, if the objective function satisfies the strict saddle property (SSP), that is, every saddle point has a descent direction, then gradient-based algorithms converge to local minima [1]. Low-rank matrix factorization, as well as low-rank matrix recovery, satisfies the SSP; meanwhile, the objective function has a benign landscape (all local minima are global minima) [2]. Therefore, even with random initialization, ScaledGD/AltScaledGD is able to converge to the global minima. However, existing works do not provide a global convergence rate for ScaledGD/AltScaledGD on asymmetric LRMF; we are the first to give a detailed convergence rate analysis of ScaledGD and AltScaledGD with random initialization. [1] Lee, J. D., et al. Gradient descent only converges to minimizers. In COLT, 2016. [2] Ge, R., et al. Matrix completion has no spurious local minimum. In NeurIPS, 2016. **Q3: Provide some intuition on the proof in lines 157-176 for saddle avoidance.** **A3**: The saddle-avoidance phase (lines 157-176 in the Appendix) is analogous to the rank-1 case analyzed in Section 4.1.2, lines 295-305 of the main paper, which guarantees that ScaledGD does not get trapped at the saddle point. Specifically, for general Gaussian initialization with the scale of the initial variables $U_0$ and $V_0$ greater than some constant $c_{init}$, according to Lemma 4 the norms of the matrices $U$ and $V$ decrease toward the saddle point $U = 0$, $V = 0$ until the inequality $\langle U_k V_k^{\top}, M \rangle \geq \| U_k V_k^{\top}\|_F^2$ is satisfied. 
The inequality is crucial to our analysis: once it is satisfied, Lemma 4 shows that the norm of the matrix $U_k V_k^{\top}$ tends to increase, as shown in Figure 4 of the main paper, and meanwhile Lemma 5 and Lemma 6 in the main paper show that the angle between the subspace of $U_k$ ($V_k$) and that of $U^*$ ($V^*$) decreases linearly, which reveals that ScaledGD has escaped the saddle region (since in the rank-1 case the saddle point is the zero matrix, with norm 0). The proof in lines 157-176 of the Appendix shows that after the initial phase we have $\langle U_k V_k^{\top}, M\rangle \geq \tau_k \|U_k V_k^{\top}\|_F^2$, and in the saddle-avoidance phase the variable $\tau_k$ increases from 1/2 to 1, so that the inequality $\langle U_k V_k^{\top}, M\rangle \geq \|U_k V_k^{\top}\|_F^2$ is fulfilled. Once $\tau_k$ reaches 1, the algorithm enters the linear convergence phase as analyzed in lines 177-185 of the Appendix. **Q4: What is the technical challenge in proving the convergence of alternating scaled GD?** **A4**: The proof of convergence for AltScaledGD is similar to that for ScaledGD; the only difference is that term ④ in Eq. (12) limits the learning rate $\eta$ to be smaller than a constant $c_{\eta}$, whereas there is no corresponding term ④ in Eq. (25); therefore the learning rate can be set as $0\leq \eta \leq 1$ in AltScaledGD. In consequence, AltScaledGD enjoys a larger learning rate and a faster convergence rate than ScaledGD, as specified in lines 318-336 of the main paper. **Q5: Why is near-zero initialization an issue?** **A5**: Near-zero initialization, like spectral initialization, limits the initialization to a local region near certain points, which introduces a strong initialization bias. For the theoretical analysis of non-convex optimization, one is interested in the global convergence property of an optimization algorithm starting from any random initialization (without initialization bias), rather than from a local region around certain points. 
We prove that ScaledGD and AltScaledGD are not sensitive to the scale of the initialization; they converge fast under both general random initialization and near-zero initialization. **Q6: Are the results of Tong et al. local or global?** **A6**: The results of Tong et al. are local, since they rely on spectral initialization (initialization in a local region around the global minima). In contrast, our results are global, as our initialization is random and without initialization bias. We will revise our statement in the main paper. --- Rebuttal Comment 1.1: Comment: Thanks for the additional experiments and answers to my questions; I have no questions at this point. I have raised my score since my concerns about real-data experiments have been addressed. --- Reply to Comment 1.1.1: Title: Acknowledgment Comment: We sincerely appreciate the time and effort you spent providing us with your response. Your support has made a significant difference in our work, and we are confident that it will lead to a stronger final product.
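For readers unfamiliar with the update rule, the preconditioned iteration and the saddle-escape criterion $\langle U_k V_k^{\top}, M\rangle \geq \|U_k V_k^{\top}\|_F^2$ discussed in A3 can be sketched in a few lines of NumPy. This is only an illustrative toy under assumptions: the ScaledGD update is the standard one from Tong et al., and the dimensions, seed, and step size are hypothetical rather than taken from the paper's experiments.

```python
import numpy as np

# Hypothetical rank-1 toy instance (not the paper's code or data).
rng = np.random.default_rng(0)
m, n = 50, 40
M = rng.standard_normal((m, 1)) @ rng.standard_normal((n, 1)).T  # rank-1 target

# Completely random Gaussian initialization -- no spectral or near-zero scheme.
U = 0.1 * rng.standard_normal((m, 1))
V = 0.1 * rng.standard_normal((n, 1))
eta = 0.3  # assumed step size

escaped_at = None
for k in range(300):
    P = U @ V.T
    # Saddle-escape criterion from the rebuttal: <U_k V_k^T, M> >= ||U_k V_k^T||_F^2.
    if escaped_at is None and np.sum(P * M) >= np.linalg.norm(P) ** 2:
        escaped_at = k
    R = P - M
    # ScaledGD: each gradient is right-multiplied by the other factor's Gram inverse.
    U_next = U - eta * R @ V @ np.linalg.inv(V.T @ V)
    V = V - eta * R.T @ U @ np.linalg.inv(U.T @ U)
    U = U_next

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Once the criterion is met, the relative error decays linearly, matching the two-phase picture (saddle avoidance, then linear convergence) described in A3.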
Summary: This paper considers the classical problem of low-rank approximation. In particular, given a matrix $M\in \Re^{m \times n}$ with rank $d$, we want to find $U\in \Re^{m\times d}, V\in \Re^{n\times d}$ that minimize $f(U,V) = \|UV^\top - M\|_F$ (i.e., $U,V$ are low-rank matrices whose product approximates $M$). The problem is very well studied and classical. Obviously, one can perform classical gradient descent on $U,V$ to optimize $f$ (although this is not desirable because it relies on the condition number of $M$). It is shown that the local minima of $f$ do not misbehave, so gradient descent works fine. However, in the literature, there are two "gradient scaled" iterative variants of gradient descent that "scale" the gradient by right-multiplying it by appropriate matrices. This is well studied in the literature, and the two variants considered in the paper are ScaledGD and AltScaledGD (where AltScaledGD is similar to ScaledGD but "alternates" the scaling matrix multiplied by the gradient). There has been a lot of work studying the theoretical convergence of these two algorithms. The latest is the 2021 work by Ye and Du, which obtains linear convergence under very strong assumptions (technically, they are not "assumptions", but the convergence rate is not clean at all). In this paper, the authors give an elegant proof that both ScaledGD and AltScaledGD converge linearly for the low-rank approximation problem. This is the first "clean" analysis (in the sense that all the constants are clear) of the two algorithms. Strengths: - The paper is cleanly written and reads well. - The analysis is quite elegant and sheds a lot of light on the intuition of why these algorithms work well and do not rely on the condition number of $M$. - Some of the lemmas proven along the way might be of independent interest (for example, while I am not directly involved in this line of research, one of the lemmas proven will probably be very useful for my next paper). 
The proofs are well written (although there are some rough patches). Weaknesses: - Some areas are rushed in the main paper. Some steps need to be written explicitly instead of using "it can be deduced", which is frustrating to the reader and is used a couple of times by the authors (when it really shouldn't be). I've noted some examples below, but please avoid doing this and write the steps explicitly. I understand this would take longer to write, but it makes the reading process much more pleasant. - The author(s) could spend a bit more time explaining the intuition behind some of the inequalities they derive. The proofs sometimes seem very mechanical, and in other areas seem to shed a lot of intuition on what is happening. It would be nice if this were standardized so that the overall picture of the proof is explained first, before churning out the math. For example, some of the proofs in Section 3 are quite heavy and go through the math without explaining why said computations are being made. It's only at the end, after rereading, that you understand what was happening. ------------------------------------------------------------------ Line 80: AltScaledGD is not introduced earlier. Please introduce it earlier. Line 108: Scaled -> scaling Line 117: "converges much faster than gradient descent while ther" ==> "converges much faster than gradient descent. However, ...." Line 120-122: "The local convergence .. problem (1)." This is an unnecessary sentence. Remove or rephrase. Line 154: Theorem 1, I might be missing something, but where is the dependence on $\sigma$ in the Theorem 1 statement? Line 216: "It can be deduced that..." Don't do that; please spell it out explicitly (i.e., substitute in Eq. (2) and expand). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 154: Theorem 1, I might be missing something, but where is the dependence on $\sigma$ in the Theorem 1 statement? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Lines 346-351. Well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's careful review, constructive suggestions, and positive feedback. The following are our responses to your concerns. We would greatly appreciate it if you would raise your score should our responses resolve your concerns. **Q1: Write some of the proof steps explicitly.** **A1**: We will carefully check our proofs and rewrite them with more detail so that they are easier to read. Thank you for your kind suggestions. **Q2: Explain the overall picture of the proof and the intuition behind some of the inequalities.** **A2**: To help readers understand the proofs of the main results in Section 3, we provided a proof sketch of the main results for the rank-1 case in Section 4, since it is easy to follow and understand (the proof sketch of AltScaledGD for the rank-1 case is similar to that of ScaledGD). The proof of the rank-d case follows the rank-1 sketch. Specifically, the error $\| U_k V_k^{\top} - M\|_F$ is upper bounded by four terms for ScaledGD, as in Eq. (12), and by three terms for AltScaledGD, as in Eq. (25); all the remaining lemmas and inequalities guarantee that each of the upper-bound terms decreases linearly. To this end, the inequality $\langle U_k V_k^{\top}, M \rangle \geq \| U_k V_k^{\top}\|_F^2$ is crucial to our analysis: once it is satisfied, by Lemma 4, Lemma 5, and Lemma 6 the upper-bound terms in Eq. (12) and Eq. (25) decrease linearly, which establishes the linear convergence of ScaledGD and AltScaledGD. Once again, we thank the reviewer for the valuable and constructive suggestion. We will revise Section 4 and the Appendix in the revision so that the overall picture of the proof is clear and easy to follow. **Q3: Some typos need to be rectified and the writing needs to be improved.** **A3**: We will carefully proofread the entire manuscript to ensure that all writing mistakes are rectified. 
**Q4: Where is the dependence on $\sigma$ in the Theorem 1 statement?** **A4**: Thank you for your careful reading and for pointing this out. $\sigma$ determines the scale of the initialization and, correspondingly, the initial error $\|U_0 V_0^{\top} - M\|_F$, which further determines how long the initial phase lasts. Specifically, $\sigma$ is contained in $T_1$ as $T_1 = O(\ln \frac{\sigma d}{\delta})$; since $\sigma$ is only a constant ($O(1)$) compared to the problem dimension and other parameters, we omitted it in our previous submission. We will revise the corresponding part in the revision. --- Rebuttal Comment 1.1: Title: Acknowledgment. Comment: The authors answered my questions sufficiently and promised to address the changes in later revisions. Another point is that they promised that the constants will be made more explicit in the paper. With those in mind, I do not mind raising my score by one. --- Reply to Comment 1.1.1: Comment: We are genuinely grateful for the time and effort you took to review our work and respond to our rebuttal. With the help of your thoughtful review, we have been able to enhance the quality of our work, ensuring that it meets or exceeds the expected standards. Your attention to detail has helped us refine our writing and bring greater clarity to our message.
Rebuttal 1: Rebuttal: Dear AC and reviewers, Thank you so much for your valuable comments; we truly appreciate the time and effort you have taken to review our work. We are also glad that the reviewers found our work valuable and gave us positive feedback. Your feedback is very important to us, and following the reviewers' comments, we have revised the manuscript carefully and responded in detail to all of your concerns (**please refer to our rebuttal for each reviewer for more detailed responses**). As this paper focuses on the theoretical side, proving the global convergence of ScaledGD and its variant AltScaledGD on the basic model of the low-rank matrix recovery problem, reviewers may wonder whether ScaledGD/AltScaledGD can be used for more advanced non-convex optimization problems such as nonlinear neural networks. Using ScaledGD/AltScaledGD for deep learning optimization is a very interesting research direction, and we are still working on this problem. We provide a one-page PDF file showing some experimental results on this problem. Specifically, Figure 1 and Figure 2 of the attached PDF file show the experimental results of ScaledGD/AltScaledGD on deep linear and nonlinear networks, which certify that ScaledGD/AltScaledGD converge much faster than vanilla GD. Although ScaledGD/AltScaledGD work well for deep networks, their convergence analysis requires new theoretical tools and is quite challenging compared to the convergence analysis for LRMF; we will present those results in future work. Meanwhile, we also tested ScaledGD/AltScaledGD on real datasets in Figure 3 and Figure 4 of the PDF file. On real datasets, the target matrix is not exactly low-rank but approximately low-rank, with a very large condition number $\kappa$. The experimental results show that ScaledGD/AltScaledGD still work well on real datasets compared to GD. 
Although the proof of our results is somewhat lengthy and requires several important lemmas and inequalities, we have provided the overall picture of the proof via the proof sketch for the rank-1 case in Section 4. The proof for the rank-1 case is easy to follow and understand. The proof for rank-d directly follows the rank-1 sketch, with moderate changes to the lemmas and inequalities from rank-1 to rank-d; we therefore leave that proof to the Appendix. In this paper, we first present the main results in Section 3 and provide an overall picture of the proof in Section 4; detailed proofs are provided in the Appendix, so the overall structure of the manuscript is clear. In the revision, we will further improve the organization of this paper to make it easier to read. Once again, thank you for taking the time to review our work and provide valuable insights; we will take all of the reviewers' suggestions into consideration for future improvements. We would also be glad to discuss further with all the reviewers if you have any remaining concerns. Pdf: /pdf/888364357714cfdbdfe7d9d80bee89181b3f221a.pdf
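To make the speed comparison with vanilla GD concrete, here is a minimal hypothetical NumPy sketch (not the experiments in the attached PDF): assuming the update $U \leftarrow U - \eta\,(UV^{\top} - M)V(V^{\top}V)^{-1}$ with the factors updated alternately, setting $\eta = 1$ makes each half-step a least-squares refit of one factor, so the iteration makes rapid progress even on an ill-conditioned target, while vanilla GD needs a tiny step size. The problem sizes, seed, and step sizes below are illustrative assumptions.

```python
import numpy as np

# Hypothetical ill-conditioned rank-2 target with condition number kappa = 100.
rng = np.random.default_rng(1)
m, n, d = 60, 50, 2
A = np.linalg.qr(rng.standard_normal((m, d)))[0]
B = np.linalg.qr(rng.standard_normal((n, d)))[0]
M = A @ np.diag([100.0, 1.0]) @ B.T

def rel_err(U, V):
    return np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)

U = 0.1 * rng.standard_normal((m, d))
V = 0.1 * rng.standard_normal((n, d))
Ug, Vg = U.copy(), V.copy()  # copies for the vanilla-GD baseline

for k in range(100):
    # AltScaledGD with eta = 1: refit U with the current V, then V with the new U
    # (each half-step solves a least-squares problem, so progress is kappa-free).
    U = U - (U @ V.T - M) @ V @ np.linalg.inv(V.T @ V)
    V = V - (U @ V.T - M).T @ U @ np.linalg.inv(U.T @ U)
    # Vanilla GD: the step size must shrink with kappa, so progress is slow.
    Rg = Ug @ Vg.T - M
    Ug, Vg = Ug - 1e-4 * Rg @ Vg, Vg - 1e-4 * Rg.T @ Ug
```

After this loop the preconditioned iterate is essentially exact while the GD baseline has barely moved, which mirrors the GD-vs-ScaledGD/AltScaledGD gap reported above.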
NeurIPS_2023_submissions_huggingface
2023
ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections
Accept (poster)
Summary: ARTIC3D is a self-supervised framework that reconstructs 3D articulated shapes and textures of animals from sparse and noisy online images. It uses a skeleton-based surface representation and 2D diffusion priors to enhance the input images and guide the 3D optimization. It also enables realistic animation by fine-tuning the rendered shape and texture under rigid part transformations. ARTIC3D outperforms prior methods in terms of shape and texture fidelity, robustness to occlusions and truncation, and pose transferability. The authors also introduce E-LASSIE, an extended dataset with noisy web images, to evaluate model robustness. Strengths: * ARTIC3D is a self-supervised framework that can reconstruct 3D articulated shapes and textures of animals from sparse and noisy online images, without relying on any pre-defined shape templates or per-image annotations. This makes it scalable and adaptable to different animal species and poses. * The method leverages 2D diffusion to enhance the input images by removing occlusions and truncation, and to extract semantic features and 3D skeleton initialization. This improves the quality and robustness of the 3D outputs, as well as the efficiency and stability of the optimization process. * Usage of diffusion-guided 3D optimization to estimate shape and texture that are faithful to the input images and consistent across different viewpoints and poses. It also introduces a novel technique to calculate more stable image-level gradients via diffusion models, which enhances the convergence and robustness of the optimization. * ARTIC3D produces realistic animations by fine-tuning the rendered shape and texture under rigid part transformations, which preserves the articulation and details of the 3D shapes. It also enables explicit pose transfer and animation, which are not feasible for prior diffusion-guided methods with neural volumetric representations. 
Weaknesses: * The manuscript is not easy to follow for anybody who is unfamiliar with LASSIE and Hi-LASSIE. This work is a step forward from Hi-LASSIE, using Stable Diffusion to avoid several pitfalls with respect to optimization and image pre-processing. * ARTIC3D depends on the 3D skeleton initialization from Hi-LASSIE [38], which may be inaccurate for occluded or truncated animal bodies, resulting in unrealistic part shapes. It also struggles with fluffy animals with ambiguous skeletal configurations, such as sheep, which pose challenges in skeleton discovery and shape reconstruction. * The reconstruction results are not significantly better than Hi-LASSIE's, in most cases about 1% better; however, it is not clear from the manuscript whether this is statistically significant or not. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * The usage of SD improves texture quality on a perceptual level. However, as seen in many figures (e.g., 3 and 4 in the supplement, 3 in the main paper), it mainly hallucinates the texture to appear realistic while drifting further from the reference image. For example, the reconstructed elephant looks nothing like the image; similarly the kangaroo, tiger, and zebra. The work is presented as reconstruction work, and as such, diverging from the images cannot be considered a proper reconstruction. Have you examined a way to mitigate this texture drift? * The user studies on animation are flawed. A 55% preference from 100 users means that ARTIC3D's animations are only slightly better than a rigid transform, which is close to a random choice. Could you explain in detail what sample size was used for the user study? Were the examples cherry-picked prior to the user study? I find the user study explanation to lack a lot of key details. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: --- **"More details of LASSIE and Hi-LASSIE are needed"** We thank the reviewer for the feedback and will add more details to the preliminaries section (3.1) in the manuscript. --- **"3D skeleton from Hi-LASSIE may not work for occluded/truncated images"** Our 3D skeleton initialization is performed on the reference image after input preprocessing with DASS, which can effectively complete occluded or truncated animal bodies. That is, our skeleton is more robust and accurate than Hi-LASSIE's in partial-body cases. Fluffy animals are a common challenge in our ill-posed problem setting. However, we are able to produce a reasonable reconstruction of sheep, as shown in supplemental Figure 8, which is a considerable improvement over prior works. --- **Reconstruction performance gain** As shown in Table 1 of the main paper, ARTIC3D achieves a 0.6-1.7% PCK gain on the clean LASSIE images and a 1.9-3.7% PCK gain on the noisy E-LASSIE images compared to Hi-LASSIE. This demonstrates a consistent advantage of ARTIC3D over Hi-LASSIE on all animal classes, especially in occlusion/truncation cases. Moreover, the CLIP similarity evaluation in Table 3 also shows consistently favorable textured reconstructions by ARTIC3D. For more detailed discussion, please see "Contribution beyond LASSIE / Hi-LASSIE" in the General Response above. --- **Unfaithful texture from Stable Diffusion** Please see "Unfaithful texture from Stable Diffusion" in the General Response above. --- **Details of user study** In our user study, we randomly select 3 examples per animal class from the E-LASSIE and Pascal-Part datasets. More details of the user study will be added to the manuscript. As shown in the video and discussed in supplemental Sec. 2.3, rigid transformation produces static texture during motion and undesirable gaps around joints, and per-frame DASS outputs contain sharp details but are flickering and temporally inconsistent. 
We propose T-DASS as an alternative to find a better tradeoff between high-fidelity details and temporal smoothness. Despite the small gap, the user study still shows an overall preference for T-DASS, considering both realism and temporal consistency. Please also note that all three methods (rigid transformation, DASS, and T-DASS) are our contributions, which enable animation from a **single image without any annotations**, something no prior method can achieve. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. --- Rebuttal Comment 2.1: Title: Please let us know whether all questions have been addressed Comment: Dear Reviewer, As we are approaching the midpoint of the discussion period, we would like to confirm whether we have successfully addressed the raised concerns in your review. Should any lingering issues require further attention, please let us know as early as possible so we can answer them soon. We appreciate your time and effort in enhancing the quality of our manuscript. Thank you
Summary: This paper proposes a method to reconstruct the shape and texture of articulated objects from noisy web image collections. To achieve this, ARTIC3D proposes a diffusion-based 2D image enhancement module, DASS, and then reconstructs the shape and texture maps using Hi-LASSIE. Moreover, to improve the animation results, ARTIC3D introduces a T-DASS module for animation fine-tuning. Experiments on the E-LASSIE dataset show that this method can produce high-fidelity animation results from noisy inputs. Strengths: --ARTIC3D can directly reconstruct the shapes and texture maps of articulated objects from noisy web image collections, greatly increasing the robustness of Hi-LASSIE. --The proposed DASS and T-DASS modules are novel, intuitive, and effective. --The paper is well-written and easy to follow. Weaknesses: --The texture map obtained by ARTIC3D is not good enough. To solve this problem, ARTIC3D relies on the diffusion-based DASS module and the animation fine-tuning module to achieve high-fidelity animation and novel-view rendering results. However, this may lead to 3D inconsistency. How does this method solve this problem? --Although enhanced by T-DASS, the animation results are still blurry and the texture moves over time. Also, the T-DASS module seems to make the results blurrier than directly using the DASS module. --The reconstructed texture and shape are not faithful to the input image in some cases. The DASS module changes the shape and appearance of the input images. Moreover, the images generated by this module may lose their 3D consistency. Technical Quality: 3 good Clarity: 3 good Questions for Authors: --ARTIC3D optimizes the textured images per instance (L220-211). However, the appearance of a single animal may also vary with the lighting conditions. How does this method deal with this problem? --ARTIC3D utilizes the T-DASS module to enhance animation. 
It would be great if the paper could also include some discussion of rendering speed and other relevant computational costs. --The T-DASS module may lead to 3D inconsistency. How does this method solve this problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: --The shapes and textures generated by this method are inaccurate and blurry, and are not faithful to the input images in some cases. --The DASS module may lead to 3D inconsistency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: --- **3D inconsistency and unfaithful texture from Stable Diffusion** Most diffusion-based 3D generation methods like DreamFusion [21] only rely on the 2D diffusion prior to produce 3D shapes and texture, and thus they are prone to inconsistent outputs from different views (e.g. multiple faces on one animal body). ARTIC3D, on the other hand, combines the 2D diffusion prior, semantic correspondence, and 3D geometric priors (silhouettes from multiple views/poses, 3D part surface and pose regularizations, etc), which largely mitigates the 3D inconsistency issue. For more discussion on texture reconstruction, please see “Unfaithful texture from Stable Diffusion” in the General Response above. --- **Blurry animation results with T-DASS** As discussed in supplemental Sec. 2.3, rigid transformation produces static texture during motion and undesirable gaps around joints, and per-frame DASS outputs contain sharp details but are flickering and temporally inconsistent. We propose T-DASS as an alternative to find a better tradeoff between high-fidelity details and temporal smoothness. Although the tradeoff might be suboptimal, we argue that T-DASS makes a good contribution considering that the animation is created from a **single image without any annotations** and that no prior method can achieve this. --- **Lighting conditions** Considering that our problem setting is highly ill-posed (sparse, unannotated, and uncorrelated images), we do not explicitly model lighting and thus the texture image incorporates both albedo and lighting information of the input image. Note that our diffusion-based DASS module can recover natural texture from poorly illuminated or shadowed images. --- **Rendering speed and computational costs** To render 30 frames at 512x512 resolution with rigid part transformation, it takes roughly 2-3 minutes on a single GTX 1080 GPU. 
Fine-tuning the animation frames with the DASS module for 300 iterations increases the total rendering time to 8 minutes. With the T-DASS module, the total fine-tuning + rendering time is roughly 10 minutes. Since the 2D flow fields and temporal consistency loss are computed on the low-resolution (64x64) feature maps, the T-DASS module marginally increases the rendering time by 2 minutes per 30 frames to enforce temporal consistency. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. --- Rebuttal Comment 2.1: Title: Please let us know whether all questions have been addressed Comment: Dear Reviewer, As we are approaching the midpoint of the discussion period, we would like to confirm whether we have successfully addressed the raised concerns in your review. Should any lingering issues require further attention, please let us know as early as possible so we can answer them soon. We appreciate your time and effort in enhancing the quality of our manuscript. Thank you
Summary: This paper proposes a new framework, named ARTIC3D, to address the task of 3D reconstruction of articulated shapes and texture from noisy and few images. It is based on pre-trained diffusion models. Specifically, the authors use a novel decoder-based accumulative score sampling (DASS) to replace score distillation sampling (SDS), which has been used in many other frameworks to calculate pixel gradients, in order to use diffusion priors more efficiently. In addition, they extend the LASSIE dataset with more annotated data, called E-LASSIE, which could be useful for future work, especially for evaluating the robustness of 3D reconstruction models. Strengths: (1) Good writing. (2) Plentiful experiments to prove the effectiveness of their framework and the necessity of each module. (3) Novelty of the method (DASS) to better implement 2D diffusion priors in 3D reconstruction tasks, especially when big and clean datasets are not available. Detailed techniques are elaborated in the method section, including how to use them in preprocessing noisy input images, shape and texture optimization, and animation fine-tuning. (4) Extension of the LASSIE dataset with more annotated images to a new dataset, E-LASSIE, which could be useful for future research in this area, especially for evaluating the robustness of 3D reconstruction models. Weaknesses: (1) This framework is heavily based on LASSIE, in both the method and the required input. Specifically, this framework needs the 3D skeleton from a pre-trained LASSIE model, and it also shares many components with LASSIE's. But as mentioned in the Strengths part, the authors propose a new method to better incorporate diffusion models in 3D reconstruction, and they propose a new dataset. (2) Though the title claims "learning from noisy web image collections", the noise actually only involves truncation and occlusions.
There are also many other types of noise that have not been explored, including illumination variations, overly small instances, multiple instances, etc., which are common in web images. They also use DINO-ViT for foreground-background segmentation, which is acceptable but tricky, since noisy backgrounds are also an important and common difficulty when dealing with web images. Technical Quality: 3 good Clarity: 3 good Questions for Authors: n/a Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have already been elaborated clearly in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: --- **Contribution beyond LASSIE / Hi-LASSIE** Please see “Contribution beyond LASSIE / Hi-LASSIE” in the General Response above. --- **“There are many other types of noises that are not explored”** We thank the reviewer for the feedback and will revise the wording in the manuscript. Please note that ARTIC3D can deal with general noise beyond occlusion/truncation to some extent as long as the DINO-ViT feature clusters are robust. For instance, several images in the E-LASSIE dataset are either greyscale, poorly illuminated, or contain small instances and noisy backgrounds, as shown in the examples through this [anonymous link](https://www.dropbox.com/scl/fi/ykuubnkgn1j5emo6d4tl9/noisy_images.png?rlkey=mfzegdj9az9t5q0wl34ql27xv&dl=0). Although not within the scope of this paper, automatic filtering of web images is an important next step towards real-world application. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. --- Rebuttal Comment 2.1: Title: Please let us know whether all questions have been addressed Comment: Dear Reviewer, As we are approaching the midpoint of the discussion period, we would like to confirm whether we have successfully addressed the raised concerns in your review. Should any lingering issues require further attention, please let us know as early as possible so we can answer them soon. We appreciate your time and effort in enhancing the quality of our manuscript. Thank you
Summary: This paper introduces an articulated 3D shape reconstruction method from noisy web images with the help of diffusion models. The authors use a diffusion method to enhance the noisy input images to get clean reference 2D images and masks. Then, skeleton-based surface representations are optimized from the reference images. Finally, fine-tuning improves the animations of the reconstructed shapes. Strengths: 1. The overall narrative of the paper is sound and readable. 2. The authors propose to produce clean input reference images from noisy web images using diffusion models as a preprocessing step. 3. The authors propose Decoder-based Accumulative Score Sampling (DASS) to improve efficiency and reduce artifacts. 4. The authors designed a fine-tuning step to allow better animation of the reconstructed objects. Weaknesses: 1. The reconstruction part heavily relies on previous works like LASSIE [39] and Hi-LASSIE [38]; it seems that the authors did not contribute very much to the core algorithm in this reconstruction task. Diffusion models helped improve the results, but most of the contribution is in the data cleaning and preprocessing. 2. Diffusion models help in image preprocessing, but Stable Diffusion also produces results from its knowledge based on the text prompt. Therefore we can observe that some textures of the reconstructed objects are different from the reference input images, even for the unoccluded parts. 3. Figure 2 is not clear enough to show the whole workflow of the proposed method; it is hard to relate the DASS module to the shape and texture optimization. 4. Lacking ablation studies, it is unclear whether the distilling 3D reconstruction in Section 3.4 is working; without the distilling 3D reconstruction part, the reconstruction technique would mostly rely on LASSIE [39] and Hi-LASSIE [38], as I mentioned above. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
How does the diffusion-guided optimization of shape and texture perform? It would be nice to see an ablation study on this part to distinguish this method from LASSIE [39] and Hi-LASSIE [38]. 2. How do you deal with the extra noise or overcorrection from the diffusion model? The preprocessed image may look like an average animal of the species from the diffusion models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors mentioned some limitations, but not enough for me, such as the possible noise introduced by Stable Diffusion as mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: --- **Contribution beyond LASSIE / Hi-LASSIE** Please see “Contribution beyond LASSIE / Hi-LASSIE” in the General Response above. --- **Unfaithful texture from Stable Diffusion** Please see “Unfaithful texture from Stable Diffusion” in the General Response above. --- **“Figure 2 is not clear enough”** We thank the reviewer for the feedback and will update Figure 2 in the manuscript. We also provide a more detailed overview figure through this [anonymous link](https://www.dropbox.com/scl/fi/7abzk19c7cunubatnsp84/illustration.png?rlkey=hnlzefqxwvxf75hboktjhelw1&dl=0) and will add it to the supplemental document. --- **Ablation studies on distilling 3D reconstruction** Our ablation study of individual components is shown in Table 1 in the supplemental document, and we further report the detailed PCK gain (compared to Hi-LASSIE) in Table-T3 below. Note that the proposed DASS loss for 3D reconstruction brings a 1.0-2.1% PCK gain over Hi-LASSIE without input preprocessing. **Table-T3**: PCK@0.05 on the E-LASSIE image sets (higher is better). | Method | Input enhance. | $L_{dass}$ | Elephant | Giraffe | Kangaroo | Penguin | Tiger | Zebra | | :----------- | :-----------: | :-----------: | :-----------: | :-------: | :-----------: | :---------: | :-----: | :-------: | | Hi-LASSIE | | | 37.6 | 54.3 | 31.9 | 41.7 | 57.4 | 60.1 | | ARTIC3D | | v | 38.8 (+1.2) | 56.1 (+1.8) | 34.0 (+2.1) | 42.7 (+1.0) | 58.5 (+1.1) | 61.9 (+1.8) | | ARTIC3D | v | | 39.0 (+1.4) | 57.3 (+3.0) | 34.6 (+2.7) | 43.4 (+1.7) | 58.5 (+1.1) | 62.4 (+2.3) | | ARTIC3D (full) | v | v | 39.8 (+2.2) | 58.0 (+3.7) | 35.3 (+3.4) | 43.8 (+2.1) | 59.3 (+1.9) | 63.0 (+2.9) | --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. Most of my concerns are resolved. I would raise my rating to 6. --- Rebuttal 2: Title: Please let us know whether you have additional questions after reading our response Comment: We appreciate your reviews and comments.
We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period.
Rebuttal 1: Rebuttal: # General Response We thank the reviewers for the constructive feedback. We address the common concerns in the General Response and specific comments in the individual response to each reviewer. --- ### **Contribution beyond LASSIE / Hi-LASSIE** While ARTIC3D deals with the same reconstruction task and leverages the skeleton/shape representation as in LASSIE and Hi-LASSIE, there are several key differences that allow ARTIC3D to handle occluded/truncated images and produce detailed texture in novel views. At a high level, LASSIE and Hi-LASSIE focus on **using geometry priors to learn detailed articulated shapes**, whereas ARTIC3D proposes to **incorporate generative 2D diffusion priors in a more challenging scenario with noisy images**. We emphasize that **combining such 3D geometry and 2D diffusion priors** is challenging, and that the proposed DASS module can effectively improve the results in all 3 stages (input preprocessing, 3D reconstruction, and animation). As shown in Table 1 in the manuscript and Table-T1 below, Hi-LASSIE+ (Hi-LASSIE with the common SDS loss naively applied) improves PCK on E-LASSIE images by only 0.1-1.2%, while ARTIC3D achieves a 1.9-3.7% PCK gain. Our CLIP similarity evaluation in Table 3 also shows consistently favorable textured reconstructions by ARTIC3D in different views. Moreover, our ablation study (supplemental Table 1 and Table-T2 below) demonstrates that the DASS loss for 3D reconstruction leads to a 1.0-2.1% PCK gain over Hi-LASSIE without input preprocessing. We believe these PCK gains of ARTIC3D compared to prior works are considerable. **Table-T1**: PCK@0.05 on the E-LASSIE image sets (higher is better).
| Method | Elephant | Giraffe | Kangaroo | Penguin | Tiger | Zebra | | :----------- | :----------: | :-------: | :------------: | :---------: | :-----: | :------: | | Hi-LASSIE | 37.6 | 54.3 | 31.9 | 41.7 | 57.4 | 60.1 | | Hi-LASSIE+ | 38.3 (+0.7) | 54.8 (+0.5) | 32.8 (+0.9) | 41.8 (+0.1) | 57.7 (+0.3) | 61.3 (+1.2) | | ARTIC3D | 39.8 (+2.2) | 58.0 (+3.7) | 35.3 (+3.4) | 43.8 (+2.1) | 59.3 (+1.9) | 63.0 (+2.9) | **Table-T2**: PCK@0.05 on the E-LASSIE image sets (higher is better). | Method | Input enhance. | $L_{dass}$ | Elephant | Giraffe | Kangaroo | Penguin | Tiger | Zebra | | :----------- | :-----------: | :-----------: | :-----------: | :-------: | :-----------: | :---------: | :-----: | :-------: | | Hi-LASSIE | | | 37.6 | 54.3 | 31.9 | 41.7 | 57.4 | 60.1 | | ARTIC3D | | v | 38.8 (+1.2) | 56.1 (+1.8) | 34.0 (+2.1) | 42.7 (+1.0) | 58.5 (+1.1) | 61.9 (+1.8) | | ARTIC3D (full) | v | v | 39.8 (+2.2) | 58.0 (+3.7) | 35.3 (+3.4) | 43.8 (+2.1) | 59.3 (+1.9) | 63.0 (+2.9) | --- ### **Unfaithful texture from Stable Diffusion** Due to the highly ill-posed nature of our problem setting, there exists a tradeoff between realism and faithfulness to the input images, especially for unseen/occluded surfaces. For instance, it is unclear whether a colored or black-and-white output is preferred given a greyscale/poorly illuminated image, or whether we should preserve the noisy texture (small occlusions like dirt, water splashes, or shadows caused by rough surfaces/objects) in the original image. Since dense surface visibility is hard to obtain, we optimize shape and texture in a way slightly biased towards realism (detailed and clean texture that resembles the animal class) in this paper. In our ablative analysis of the DASS module (Figure 3), we show that **we can control the realism-faithfulness tradeoff** by tuning the diffusion timestep $t$ and guidance weight $w_g$. Specifically, larger $t$ and $w_g$ allow DASS to hallucinate shape and texture that are not present in the original image.
In addition, one can enforce a more faithful reconstruction by setting a higher weight on the texture reconstruction loss $\alpha_{text}$ (L223). We also show additional visual results on the tradeoff through this [anonymous link](https://www.dropbox.com/scl/fi/hshbqmf2gemq0ou32qa7f/faithfulness.png?rlkey=m6qt98hk5kuvuqoeoz96l2o3q&dl=0). Although we acknowledge that the current tradeoff is not optimal and some example outputs are not faithful to the input images, our evaluation of image-to-image CLIP similarity in Table 3 shows that ARTIC3D outputs are still generally more faithful to the input images across different views compared to the existing methods. We believe that ARTIC3D is a good first step in this novel setting, and automatically finding the best tradeoff for each animal class/instance forms an interesting direction for future work.
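As context for the PCK numbers reported in the tables above: PCK@0.05 counts a predicted 2D keypoint as correct when it lies within 5% of a normalization length (typically the image or bounding-box size) of the ground-truth annotation. Below is a minimal sketch of the metric on toy inputs; the authors' actual evaluation code may normalize differently, and the keypoint values here are invented for illustration.

```python
import numpy as np

def pck(pred, gt, size, threshold=0.05):
    """Percentage of Correct Keypoints: fraction of predicted keypoints
    whose Euclidean distance to the ground truth is within
    `threshold * size`, where `size` is a normalization length
    (e.g. the image side or the longer bounding-box side)."""
    pred = np.asarray(pred, dtype=float)   # (K, 2) predicted keypoints
    gt = np.asarray(gt, dtype=float)       # (K, 2) ground-truth keypoints
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= threshold * size))

# Toy example: 3 of 4 keypoints fall within 5% (25.6 px) of a 512-pixel image.
gt = [[100, 100], [200, 200], [300, 300], [400, 400]]
pred = [[110, 110], [205, 195], [290, 310], [460, 460]]
print(pck(pred, gt, size=512))  # 0.75
```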
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Adversarial Counterfactual Environment Model Learning
Accept (spotlight)
Summary: An accurate environment dynamics model is crucial for various downstream tasks, such as counterfactual prediction, off-policy evaluation, and offline reinforcement learning. Currently, these models are learned through empirical risk minimization (ERM) by step-wise fitting of historical transition data. However, the authors first show that, particularly in the sequential decision-making setting, this approach may catastrophically fail to predict counterfactual action effects due to the selection bias of behavior policies during data collection. To tackle this problem, the authors introduce a novel model-learning objective called adversarial weighted empirical risk minimization (AWRM). AWRM incorporates an adversarial policy that exploits the model to generate a data distribution that weakens the model's prediction accuracy, and subsequently, the model is learned under this adversarial data distribution. They implement a practical algorithm, GALILEO, for AWRM and evaluate it on two synthetic tasks, three continuous-control tasks, and a real-world application. The experiments demonstrate that GALILEO can accurately predict counterfactual actions and improve various downstream tasks, including offline policy evaluation and improvement, as well as online decision-making. Strengths: 1. This paper addresses the problem of accurate environment dynamics model learning, which has wide impact on many downstream tasks, like counterfactual prediction, off-policy evaluation, and offline reinforcement learning. This is a very important and meaningful research topic. 2. This is the first research on faithful dynamics model learning in sequential decision-making settings like RL, which demonstrates the novelty of this paper. 3. The analysis of the challenges brought by the conventional empirical risk minimization method is deep and insightful. In particular, the authors use a vivid example to illustrate them.
Based on this, the transition to the proposal of the adversarial weighted empirical risk minimization objective is smooth, which strongly supports the subsequent Generative Adversarial Offline Counterfactual environment model learning (GALILEO) method. 4. The experiments in this work are sufficient and persuasive. The authors conduct experiments on two synthetic tasks, three continuous-control tasks, and a real-world application. They first verify that GALILEO can make accurate predictions on counterfactual data queried by other policies. Then, they demonstrate that the model learned by GALILEO is helpful for several downstream tasks. Weaknesses: 1. It would be better if the authors added several more baselines to further validate the superiority of the proposed GALILEO method. 2. The description of the method part is deep and comprehensive, but the authors could consider making it easier to understand if possible. 3. Some typos need to be fixed in the future version, like the subtitle of Section 5.3. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I am a little curious about the details of the metrics and downstream tasks used in the experiments, for example, the AUUC, value gap, regret@1, and rank correlation metrics and the details behind the off-policy evaluation experiments. Could you please consider adding some corresponding explanations in the future version? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the Weaknesses and Questions above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
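For intuition on the AWRM objective summarized above, here is a small self-contained sketch. It is not the paper's GALILEO algorithm: the learned adversarial policy is replaced by a crude multiplicative-weights adversary over the offline samples (with a capped density ratio for stability), and the environment model by a linear regressor. Even in this toy form, adversarially reweighted ERM tends to reduce the worst-case prediction error that plain ERM incurs on selection-biased data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline data with selection bias: the behavior policy rarely takes
# large actions, and the true next-state depends quadratically on the
# action while the model class is only linear.
a = np.clip(rng.normal(0.0, 0.3, 4000), -1.0, 1.0)   # biased action samples
y = a ** 2                                           # "true dynamics" (noise-free)
X = np.stack([np.ones_like(a), a], axis=1)

def weighted_fit(w):
    """Weighted ERM: weighted least squares for the linear model."""
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

erm = weighted_fit(np.ones_like(y))

# Crude AWRM-style loop: the "adversary" multiplicatively upweights samples
# the current model predicts badly; the model refits under those weights.
w = np.ones_like(y)
coefs = []
for _ in range(300):
    coef = weighted_fit(w)
    coefs.append(coef)
    w *= np.exp(0.5 * (X @ coef - y) ** 2)   # shift mass toward high-error samples
    w = np.minimum(w, 100.0)                 # capped ratio keeps the game stable
    w /= w.mean()
awrm = np.mean(coefs, axis=0)                # average play approximates equilibrium

# Worst-case squared error over the full action range [-1, 1].
grid_a = np.linspace(-1, 1, 201)
worst = lambda c: float(np.max((c[0] + c[1] * grid_a - grid_a ** 2) ** 2))
print(worst(erm), worst(awrm))
```

The plain ERM fit hugs the densely sampled region near a = 0 and is badly wrong at the rarely taken extreme actions, while the adversarially reweighted fit spreads its error more evenly over the whole action range.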
Rebuttal 1: Rebuttal: We express our gratitude for your constructive feedback and the time you dedicated to evaluating our paper. Your insightful remarks help us in refining our work and emphasizing its importance in the reinforcement learning domain. **Clarify Experiments and Metrics:** We understand the concerns regarding the metrics and downstream tasks utilized in our experiments. While the main body of the paper had space constraints, we have provided a comprehensive explanation of these metrics and tasks in Appendix H.6.1 and Appendix H.7. To make this information more accessible, we will emphasize this section prominently in the main text and provide a direct link, ensuring that readers can easily navigate to it. **Baselines Justifications:** Thank you for raising the point about the selection of baselines. Our choices were deliberate and comprehensive for verifying the algorithm: - **SL**: A representative standard algorithm for ERM, which is commonly used for model learning in current offline model-based RL algorithms. - **IPW**: Chosen for its standing as a benchmark in counterfactual modeling for WERM through IPS, it helps elucidate how the AWRM objective compares to and advances beyond existing techniques. - **SCIGAN**: SCIGAN can be viewed as a subset of GALILEO for optimizing AWRM; using SCIGAN provides a more distinct reference for assessing the effectiveness of our approach. It is also a good baseline given its rigorous testing on TCGA benchmarks [1]. In response, we will provide a more detailed explanation of our baseline choices at the beginning of Section 5 to ensure absolute clarity for our readers. We believe the baseline selection is representative and comprehensive, and we hope the above clarification resolves your concerns. We commit to introducing GAIL as an additional baseline. It is important to note, however, that the conventional GAIL method focuses on policy learning rather than environment model learning.
Given the inherent instability of GAIL-style algorithms, we cannot guarantee optimal results within the constrained timeframe of the rebuttal and discussion period. **Method Description:** We genuinely appreciate your feedback regarding the method's description. Our intention was to provide an exhaustive treatment, which, in hindsight, might have resulted in a dense exposition. In light of your observations, we are inclined to simplify certain sections for better clarity. In conclusion, we sincerely thank you for helping us improve our paper's quality. We believe that addressing these points will make the paper more accessible and insightful for the community. We are optimistic that these changes will elevate our work and make it a valuable contribution to the field. --- Rebuttal Comment 1.1: Comment: Thanks for your explanations. I suggest you mention this content in the main text to help readers better understand the paper. My previous concerns have all been addressed.
Summary: The paper presents a novel method for improving the accuracy of environment dynamics models for counterfactual prediction, off-policy evaluation, and offline reinforcement learning. Currently, these models learn via empirical risk minimization (ERM), which the authors show can lead to failures in counterfactual action prediction due to selection bias during data collection. To address this, the authors introduce adversarial weighted empirical risk minimization (AWRM), where an adversarial policy weakens the model's prediction accuracy to encourage improvement. They implement this approach via an algorithm named GALILEO, which is evaluated on synthetic tasks, continuous-control tasks, and a real-world application. Results show that GALILEO can accurately predict counterfactual actions and improve several downstream tasks. Strengths: 1. The concept of adversarial weighted empirical risk minimization (AWRM) is a novel idea that brings together ideas from adversarial training and reinforcement learning. The use of an adversarial policy to manipulate the model's data distribution and improve its learning process is an innovative approach. 2. The paper provides a theoretical foundation for AWRM and shows its implementation through the GALILEO algorithm. The authors also conduct a variety of tests to assess GALILEO's performance, including synthetic tasks, continuous-control tasks, and a real-world delivery platform. 3. The paper is generally well-written, and the use of figures to illustrate key concepts also enhances its clarity. Weaknesses: 1. The authors didn't cover the preliminaries and related work adequately. I'm not too familiar with counterfactual modelling techniques, and think the authors didn't present enough to situate their work. 2. Justifications for choosing the three baselines are missing. Why not choose the more recent GAIL-based methods? 3. Typo: line 320 Downstream 4.
Formatting issue: The upper margins on pages 2, 7, 8, and 9 seem too small. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see above. Could you please better situate your work and justify the choice of baseline methods? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors didn't address limitations or broader societal impacts in the paper. But I didn't see any ethical concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We'd like to genuinely express our gratitude for your thoughtful feedback and the time you've invested in reviewing our work. We've taken your concerns to heart and have attempted to address them as follows. **1. Contextualizing the Work:** We acknowledge the feedback on the need for a more detailed presentation of the preliminaries and related works. We did provide information on counterfactual modeling techniques in Appendix F, including previous techniques based on WERM through IPS and structural causal models for counterfactual inference. However, we understand that this material might not have been sufficiently prominent. To address this, we commit to elaborating on these concepts within the main sections of the paper (potentially in Section 3) to ensure a more comprehensive and upfront presentation for all readers. **2. Baseline Justifications:** Thanks for the comment. Our choice of baselines was indeed deliberate: - **SL**: A representative standard algorithm for ERM, which is commonly used for model learning in current offline model-based RL algorithms. - **IPW**: Chosen for its standing as a benchmark in counterfactual modeling for WERM through IPS, it helps elucidate how the AWRM objective compares to and advances beyond existing techniques. - **SCIGAN**: An adversarial model learning technique that can be regarded as a partial implementation of GALILEO for optimizing AWRM. While both GAIL and SCIGAN can be seen as partial implementations of GALILEO, the results from SCIGAN gave us a clearer benchmark against which we could measure our algorithm's success. We opted for SCIGAN over GAIL because it is well-tuned in [1] and has been tested on TCGA benchmarks. We appreciate the point raised about the potential inclusion of GAIL-based methods.
We will further clarify our choice of baselines in the paper, ensuring readers understand the rationale behind our selections at the start of Section 5. We believe the baseline selection is representative and comprehensive, and we hope the above clarification resolves your concerns. We commit to introducing GAIL as an additional baseline. It is important to note, however, that the conventional GAIL method focuses on policy learning rather than environment model learning. Given the inherent instability of GAIL-style algorithms, we cannot guarantee optimal results within the constrained timeframe of the rebuttal and discussion period. **Technical Corrections** We acknowledge the typographical error and formatting issue. We assure you that these will be rectified in our revised submission, and we'll double-check the document to preempt any similar issues. In wrapping up, we reiterate our gratitude for your invaluable feedback. We believe these insights will undeniably enhance our paper's clarity and depth. Should you have further queries or need additional clarifications on our revisions, we are open to discussion and more than willing to engage. [1] Estimating the effects of continuous-valued interventions using generative adversarial networks. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Most of my concerns are addressed. I've modified my rating accordingly. Cheers. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your constructive feedback and are glad to hear that our response addressed most of your concerns. Thank you for your time and consideration.
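To make the contrast between the SL (ERM) and IPW baselines discussed above concrete, here is a small self-contained illustration under simplified assumptions (a toy problem of my own construction, not the paper's implementation): inverse-propensity weights reweight selection-biased behavior data toward a uniform target action distribution, so weighted ERM approximates the best model under the target distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Selection bias: the behavior policy draws actions from N(0, 0.3), but we
# want the model to be accurate under a uniform target policy on [-1, 1].
sigma = 0.3
a = rng.normal(0.0, sigma, 50000)
a = a[np.abs(a) <= 1.0]        # target density is zero outside [-1, 1]
y = a ** 2                     # true outcome; the model class below is linear

def gauss_pdf(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Inverse propensity score weights: target density / behavior density.
# (Ignoring the truncation constant only rescales all weights, which does
# not change the weighted least-squares fit.)
w_ips = 0.5 / gauss_pdf(a, sigma)

def fit(w):
    """Weighted ERM (weighted least squares) for the linear model."""
    X = np.stack([np.ones_like(a), a], axis=1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

erm, ipw = fit(np.ones_like(a)), fit(w_ips)

# Compare MSE under the *target* (uniform) action distribution: the IPW fit
# lands near the best linear fit under the target, plain ERM does not.
a_t = np.linspace(-1, 1, 1001)
mse = lambda c: float(np.mean((c[0] + c[1] * a_t - a_t ** 2) ** 2))
print(mse(erm), mse(ipw))
```

The usual caveat applies: IPS weights have heavy tails where the behavior density is small, which is exactly the variance issue that motivates going beyond plain IPW.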
Summary: This paper considers the problem of environment modeling. An adversarial method is proposed in which an adversarial counterpart is trained to exploit the model and generate a data distribution that weakens the model's prediction accuracy; the model is then trained under this adversarial data distribution with a weighted empirical risk minimization objective. Experiments are conducted on synthetic, control, and real-world tasks. Strengths: The paper is clearly written. The illustrative examples are clear and motivate the problem well. Applying IPS and the surrogate optimization step is interesting. I also agree that conservative or pessimistic offline model-based RL methods often try to limit policy exploration, which might make it hard to obtain an accurate environment model. Weaknesses: The complexity and stability of the adversarial method are a concern, since two discriminators must be learned. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It is known that training GAN-like methods suffers from instability, and the hyperparameters often need to be carefully tuned. The proposed method involves training two discriminators, and I am worried about the stability of the optimization step in Eq. (6) and whether it is prone to collapse. 2. It is mentioned that in HalfCheetah, all policies just keep the cheetah standing. I think similar problems might also exist for other tasks, while they do not show up since reward curves or numbers are compared more often without assessing how well the policies really do in these tasks. Despite this, could any other intuitive information on the performance of the environment modeling be provided for a specific task, instead of focusing on the rewards only? For example, in HalfCheetah, which transitions are more likely to have a larger discrepancy? Minor: Title of Section 5.3, ‘Ddownstream’ -> ‘downstream’ Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude for your thoughtful feedback on our submission. We've taken your concerns to heart and have attempted to address them in the following manner. **Addressing Stability Concerns of GALILEO:** 1. **Instability of training two discriminators:** The reviewer is right in observing that GALILEO, much like other GAN-style algorithms, can be challenging to tune. Primarily, the challenge lies in striking a balance between the learning rates of the discriminator and the generator, a known issue in the GAN framework. However, we found that the two-discriminator setup did not introduce further complications in this respect. In our practice, we simply keep the two discriminators uniform in learning rate, update frequency, and network structure, and then the GALILEO algorithm can learn the model stably. 2. **Ablation and Implementation Techniques:** We appreciate your concerns about the stability and effectiveness of the introduced techniques. We have discussed crucial techniques for the GALILEO implementation in Appendix E. Further, ablation studies centered on these techniques can be found in Appendix H.3. 3. **Code Open-sourcing:** To foster transparency and facilitate reproducibility, we will open-source our code, which shares exhaustive details of the algorithm. 4. **Addition to Limitations:** We agree that instability is an important issue of GAN-style algorithms, and we will mention this limitation in Section 6. Investigating it further is also part of our future work. **Algorithm Performance Beyond Reward Metrics** In light of your comments, we decided to provide a more comprehensive view of GALILEO's efficacy by delving into its sample efficiency and policy behavior on downstream tasks. We will supplement our results with trajectory visualizations for asymptotic policies obtained from various models.
Interestingly, we found that policies trained via IPW and SL either remain standing or fall backward in all three environments, whereas those from GALILEO and SCIGAN demonstrate forward movement in Walker2d and Hopper and keep standing in HalfCheetah. In the revised version, we will add these visualization results to Appendix H. **Minor Corrections:** We acknowledge the typographical error you highlighted in Section 5.3 and will rectify it in the final version of the paper. In conclusion, we appreciate the time and effort you have invested in reviewing our work. We have endeavored to address the concerns raised, and we hope our explanations and additions provide clarity. We are optimistic that these clarifications improve the overall quality and impact of our contribution.
Summary: The paper proposes a model-learning approach for counterfactual prediction (CP), off-policy evaluation (OPE), and offline reinforcement learning (ORL). The authors introduce the adversarial weighted empirical risk minimization (AWRM) objective to facilitate learning models that accurately evaluate target policies. Additionally, they present the GALILEO algorithm, a generative adversarial training method that approximates the data distribution induced by the optimal adversarial policy. Strengths: + The paper effectively addresses the problem of learning accurate models for CP, OPE, and ORL, which is particularly significant in domains with costly data collection. + The authors provide a comprehensive discussion of the impact of selection bias on CP, which serves as motivation for their objective, AWRM, and the GALILEO algorithm. Weaknesses: - However, it is unclear how novel the weighted version of empirical risk minimization (ERM) is compared to prior research. The main contribution lies in the adversarial aspect. Therefore, I recommend revising the introduction to emphasize and motivate the adversarial part. - The derivations appear reasonable. However, including a brief discussion of Algorithm 1 could enhance the paper's readability. - While the experimental results are generally ok, one notable limitation is that the authors only consider three tasks from the D4RL and DOPE benchmarks. In addition, the proposed approach does not outperform other methods on one of these tasks (HalfCheetah). To convincingly demonstrate the efficacy of their algorithm, the paper should include more tasks in the evaluation. Additionally, the constraint of limiting the time horizon to {10, 20, 40} is very strong and lacks proper motivation. - Furthermore, the proposed algorithm lacks comparison with state-of-the-art algorithms for the D4RL benchmark, such as the one mentioned in https://openreview.net/pdf?id=VYYf6S67pQc. 
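To make the weighted-ERM versus adversarial-weighting distinction raised in the review concrete, here is a minimal, hypothetical sketch. The function names and the error-proportional capping scheme are my own illustrative assumptions, not the paper's GALILEO algorithm (which uses generative adversarial training); this only shows the general shape of the AWRM idea, i.e., weighting the risk by the worst-case distribution for the current model.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_erm_loss(preds, targets, weights):
    # Weighted empirical risk: per-sample squared error reweighted,
    # e.g. by inverse propensity scores in the non-adversarial case.
    return float(np.mean(weights * (preds - targets) ** 2))

def adversarial_weights(preds, targets, cap=5.0):
    # Toy "adversarial" reweighting: upweight the samples the current
    # model fits worst, capped so no single weight dominates, then
    # renormalized to mean 1 so losses stay comparable.
    errs = (preds - targets) ** 2
    w = np.minimum(cap, errs / (errs.mean() + 1e-8))
    return w / w.mean()

preds = rng.normal(size=100)
targets = rng.normal(size=100)
w = adversarial_weights(preds, targets)
plain = weighted_erm_loss(preds, targets, np.ones(100))
adv = weighted_erm_loss(preds, targets, w)
# Weights rise with error, so the adversarially weighted risk is at
# least the unweighted risk.
assert adv >= plain
```

Since the weights are a nondecreasing function of the per-sample error, the weighted risk upper-bounds the plain empirical risk; minimizing it pushes the model to fit the regions a chosen (here error-seeking) distribution emphasizes.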
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Table 1, it is stated that the returns are normalized to lie between 0 and 100; however, there are negative values in the HalfCheetah column. Is this a typo? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
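On the Table 1 question above: the usual D4RL convention (assumed here; the paper may define its columns differently, and the authors' rebuttal clarifies which column is normalized) anchors 0 at a random policy's return and 100 at an expert policy's return, so a policy worse than random yields a negative "normalized" value. The reference scores in the example are illustrative placeholders, not the benchmark's exact constants.

```python
def d4rl_normalized_score(score, random_score, expert_score):
    # D4RL-style normalization: 0 maps to the random policy's return,
    # 100 to the expert policy's return. Values below the random
    # baseline come out negative, which is one way negative entries
    # can appear in a nominally 0-100 column.
    return 100.0 * (score - random_score) / (expert_score - random_score)

assert d4rl_normalized_score(-280.0, -280.0, 12135.0) == 0.0
assert d4rl_normalized_score(12135.0, -280.0, 12135.0) == 100.0
assert d4rl_normalized_score(-400.0, -280.0, 12135.0) < 0.0
```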
Rebuttal 1: Rebuttal: Thank you for the time and effort you have put into reviewing our paper. We appreciate your feedback and would like to address the concerns you raised as follows: 1. **Experimental Evaluation**: 1. *Limiting the Time Horizon to {10, 20, 40}*: As detailed in Line 328, our motivation for this choice was to *rigorously* examine the models' capabilities. By excluding certain tricks that constrain policy exploration and risky-region learning, which are commonly used in offline model-based RL algorithms like MOPO, we can fully exploit the learned models with a standard RL algorithm. However, in this setting we observed a large compounding error in the 1,000-step rollout, causing all algorithms to fail to learn a reasonable policy. To better verify the effect of the models on policy improvement, we opted for the smaller horizons of {10, 20, 40}. We found that even in the 40-horizon setting there are already significant gaps between the policies learned in the real environment and in the dynamics model, so we did not scale the horizon up further. 2. *Choice of Tasks from the D4RL and DOPE Benchmarks. To convincingly demonstrate the efficacy of their algorithm, the paper should include more tasks in their evaluation.* As mentioned in Line 263, we opted for the three `medium` datasets in D4RL and DOPE explicitly because these are the only datasets within the benchmark exhibiting selection bias. The other datasets in the benchmark were collected without an evident selection bias, owing to the use of mixed policies during data collection. In addition, we have also incorporated two synthetic benchmarks comprising 18 tasks and a real-world application to demonstrate the effectiveness of our proposed method. **We believe these comprehensive experiments significantly validate our approach, and we hope they aren’t overlooked.** 3. 
*Performance on the HalfCheetah Task*: As elucidated in Lines 334 to 341, we found certain challenges unique to the HalfCheetah dataset that led to the observed results. All the derived policies, including ours, had the cheetah either standing stationary or moving backward, which means that all policies actually fail to complete the task, even though IPW achieves slightly better scores. 4. *Comparison with State-of-the-Art Algorithms:* We recognize the importance of benchmarking against state-of-the-art methods. Due to the distinct settings of our D4RL experiments, direct comparisons are challenging. However, we acknowledge the related work highlighted by the reviewer and commit to incorporating it into Appendix F. We understand the reviewer's concerns about the missing evaluation on standard offline RL benchmarks. **We would like to emphasize that our experiments are designed to evaluate the effects of selection bias and the ability of the learned models to predict counterfactual actions, rather than to purely pursue SOTA policy performance on offline RL benchmarks.** We will revise the experiment section of the paper to clarify this point. 2. **Concern with Table 1**: In Table 1, only the column `avg. norm.` uses normalized returns; all other scores are raw returns. 3. **Regarding the Novelty of WERM and AWRM**: Thanks for the suggestion. We agree that there may have been some confusion regarding the emphasis on the weighted version of ERM (WERM). Our primary contribution is the adversarial objective, AWRM. The introduction of the weighted version of ERM served mainly as a scaffold leading into the discussion of AWRM, ensuring a smoother narrative flow. We will revise the introduction to stress the importance and novelty of the adversarial aspect of our contribution. 4. 
**Discussion about Algorithm 1 could enhance the paper's readability:** Thanks for the valuable suggestion; we will revise the last paragraph of Section 4.3 to improve the readability of the implementation of Algorithm 1. In closing, we are grateful for your insights and will undertake the necessary revisions to address the mentioned concerns. We believe that our contributions offer a valuable perspective in the domain of reinforcement learning and hope that the clarifications provided here assuage any reservations. --- Rebuttal Comment 1.1: Comment: Thank you for the comments. Some of my concerns have been addressed and I have increased my score. --- Reply to Comment 1.1.1: Comment: We're grateful for your insightful feedback on our work. We will revise the paper following your suggestions.
Rebuttal 1: Rebuttal: We would like to extend our heartfelt appreciation to the esteemed reviewers for their invaluable feedback and insightful comments. Their constructive input has undoubtedly enhanced the quality of our manuscript. The common strengths highlighted across the reviews are encouraging, and we are grateful for the recognition of our efforts: **Clarity of Presentation**: Multiple reviewers (E1XE, h4bm, wiCk, qQGz) emphasized that the paper is well-articulated, with clear illustrations and examples. The structure, as highlighted by E1XE, made the derivation of AWRM easy to follow, while the use of figures as pointed out by wiCk added to the paper's clarity. **Novelty and Significance of Methodology**: The introduction of the adversarial weighted empirical risk minimization (AWRM) was recognized as innovative and novel by reviewers So7i, wiCk, and qQGz. The method effectively addresses significant challenges in the domain, including the problem of selection bias as mentioned by So7i. **Comprehensive Experiments**: Reviewers E1XE, wiCk, and qQGz lauded the comprehensive experimental evaluation provided, noting the wide range of tasks, including synthetic, continuous control, and real-world applications. Particularly, the experiments showcased that GALILEO can accurately predict counterfactual actions and significantly enhance downstream tasks. **Depth of Analysis**: The in-depth analysis of the challenges posed by conventional empirical risk minimization methods and the subsequent development of our proposed method was highlighted by qQGz as being insightful and backed by strong rationale. We sincerely thank the reviewers for recognizing these strengths, and we have diligently addressed other feedback and suggestions to further refine our work.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces an adversarial training approach to model learning that improves performance on counterfactual data that may differ widely from the data used to train the model. This is particularly relevant when training from offline data and expecting the model to generalize when deployed later. The paper extensively presents an adversarially weighted empirical risk minimization objective drawing inspiration from inverse propensity scoring/weighting. A practical algorithm, GALILEO, is introduced and extensively tested across synthetic, continuous control, and real-world data. Strengths: The paper clearly lays out the advantages of unbiased and accurate counterfactual models in a wide array of use cases in RL. The authors do this to contrast with the major limitations of a majority of model-learning approaches that use supervised learning to perform empirical risk minimization. This is a complete and well written paper. The development of the proposed method is clearly justified with sufficient grounding in the formal exposition of the equations. 
The derivation of AWRM was easy to follow with the structure put in place by the authors. The included experiments clearly lay out the intended contribution of the proposed approach, that GALILEO provides a more accurate and counterfactually correct model. Impressively, this improved model is shown to have clear benefits for downstream performance in tasks beyond the “pre-training” task. Weaknesses: I’m not sure I agree that biased real-world offline data is a catastrophic problem. Obviously it creates major difficulties for model learning, but perhaps we can focus on identifying better approaches to using expert data than expecting an RL agent to “figure it out” when there aren’t reliable demonstrations of counterfactuals. There is likely a good reason why alternative or “exploratory” behaviors are not represented in the data. See Fatemi, et al. (2021; NeurIPS) and Killian, et al. (2023; TMLR) for a formulation of how we may more adequately think about risk and decision making in such environments. The spacing between paragraphs and definitions+equations is really tight. This makes the paper difficult to read. I understand that this was likely a response to the NeurIPS template and page limitations, but it should be fixed. While the downstream performance of MBRL methods using GALILEO models is promising, I wish more analysis had been done on the learning dynamics and performance in these downstream tasks regarding sample efficiency and deviations from the behavioral data / optimal policies in these environments. Do the GALILEO agents have predetermined action sequences that they exploit early on, or are they flexible to changes in domain/task? >Fatemi, Mehdi, et al. "Medical dead-ends and learning to identify high-risk states and treatments." Advances in Neural Information Processing Systems 34 (2021): 4856-4870. >Killian, Taylor W., Sonali Parbhoo, and Marzyeh Ghassemi. 
"Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning." Transactions on Machine Learning Research (2022). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: For the MuJoCo experiments it is unclear between Section 5.1 and 5.2 what the set-up is. It appears that the models are trained with the “medium” datasets but are evaluated on “expert” and “medium-replay” datasets. Is this correct? For Figure 5b and 5c, what is meant by “update steps” on the horizontal axis? Is this an evaluation of model performance on the test data while training on the training data? This should be made more clear. I was at first inclined to believe that the models were being fine-tuned on the test data… Title for Section 5.3 has typo: “Ddownstream” I was curious why Causal Curiosity (Sontakke, et al 2021ICML) wasn’t used as a baseline? The paper is included among the References but there is no mention of this work in the Related Work section. I would imagine that it would be a relevant baseline to a counterfactually driven model learning approach like GALILEO. > Sontakke, Sumedh A., et al. "Causal curiosity: Rl agents discovering self-supervised experiments for causal representation learning." International conference on machine learning. PMLR, 2021. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: As stated by the authors, there are several simplications to the modeling process which may be a cause for the deviation in GALILEO performance when applied to downstream tasks. Additionally, as mentioned in the “Weaknesses” section, there is an assumption that counterfactual modeling (on action sets outside the support of the dataset) is admissible. 
This may eliminate the use of GALILEO among safety critical domains, which would be a majority of real-world settings where model-learning could be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed feedback on our paper. Your constructive comments are greatly appreciated. We would like to address the concerns and questions you raised as follows: 1. **Concern about Offline Data Bias and other potential solutions in Fatemi, et al. [1] and Killian, et al. [2]:** We genuinely appreciate the meaningful references you shared. We believe that these two solutions are suitable for different scenarios. The works [1,2] focus on dead-end discovery, intending to prevent agents from navigating towards potential terminal states, which makes them highly apt for risk-sensitive scenarios with explicit definitions, such as medical contexts. However, in some other tasks there isn't a clear dead-end region: the profits or rewards of the task are continuous in the action space. To develop an effective policy, we inevitably have to handle the negative influence of biased data on dynamics-model predictions, so the two lines of work cater to different sets of applications and scenarios. Take the BAT task we explored as an example: for any order, raising the allowance increases its acceptance intention, and the optimization focus lies in devising a budget allocation policy that maximizes the system's overall order intent. In this scenario, no order is truly a "dead end"; there are only differences in the speed of acceptance. To develop an effective policy, we have to learn an accurate dynamics model so that we can accurately allocate budgets among different targets. In the revised version, we will add these related studies to Appendix F. 2. **Learning Dynamics and Performance Analysis:** Thank you for the suggestion. To provide a more comprehensive view, we will include supplemental results in the revision, including (1) the return curves of the algorithms in the dynamics models and the testing environments; and (2) visualizations of the trajectories of the asymptotic policies learned by different models. We summarize our findings as follows: 1. 
The RL algorithm requires roughly 4e5 samples to find the optimal policies in the dynamics models learned by these algorithms. However, after about 0.5e5 samples of policy learning in the dynamics models, the policies' performance, evaluated in the deployment environments, becomes stable. 2. Interestingly, policies trained via IPW and SL either remain standing or fall backward in all three environments, whereas those from GALILEO and SCIGAN demonstrate forward movement in Walker2d and Hopper, and keep standing in HalfCheetah. 3. **Omission of Causal Curiosity in the Comparison:** We did mention this work in Appendix F, recognizing its value in another thread of applying causal inference techniques to RL. In those studies, the researchers consider the transition function to be relevant to some hidden noise variables, where the concept of causal factors in [3] is a special instance of the hidden noise variable. Those studies focus on reconstructing the representation of the noise variable, or on discovering and estimating the effects of the noise variables. Our study, like previous studies using IPS, focuses on unbiased causal-effect estimation for actions in an offline dataset collected under behavior policies with selection bias. In this branch of studies, we consider the environment-model learning problem only in the fully observed setting, and thus the **hidden noise variable does not exist**. In the revised version, we will split the references into a main-body part and an appendix part, and will move this citation to the appendix references. 4. **Minor Typos and Tight Spacing:** Thank you for highlighting this. We commit to correcting these in the revised version for a more reader-friendly presentation. 5. **Questions Regarding the MuJoCo Experiments:** - **Q1: Do the GALILEO agents have predetermined action sequences that they exploit early on, or are they flexible to changes in domain/task?** No. 
None of the agents employ predetermined action sequences prior to deployment. - **Q2: It appears that the models are trained with the “medium” datasets but are evaluated on “expert” and “medium-replay” datasets. Is this correct?** Your understanding is correct: the models are trained on the "medium" datasets and assessed on the "expert" and "medium-replay" datasets. We will clarify this in the revision. - **Q3: For Figure 5b and 5c, what is meant by “update steps” on the horizontal axis? Is this an evaluation of model performance on the test data while training on the training data? This should be made more clear. I was at first inclined to believe that the models were being fine-tuned on the test data…** Your reading of "update steps" is accurate. We load all datasets, train the model on the medium dataset, and periodically evaluate its performance on the other datasets. We will elucidate this further in the revised manuscript. 6. **Limitations in Safety-critical Domains:** We agree with the observation that in highly safety-critical domains where no risks are allowed, the mentioned works [1,2] might be more appropriate. GALILEO, on the other hand, is applicable in contexts where safety is essential but certain types of risk are permissible. We believe this distinction clarifies GALILEO's positioning and application scope in the field. We will include this discussion in Section 6. In conclusion, your thoughtful feedback has guided us in identifying areas for enhancement and clarification. We believe that these revisions will significantly improve our paper and more effectively convey the contributions and applicability of GALILEO. [1] Medical dead-ends and learning to identify high-risk states and treatments. [2] Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning. [3] Causal curiosity: RL agents discovering self-supervised experiments for causal representation learning. 
--- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: After reviewing the author response, I am satisfied and my concerns have been addressed. I have no further questions. However, one point of clarification about the dead-end discovery approaches. From my understanding of the work, the notion of "dead-end" is unknown a priori, and the formulation of how the value functions are trained helps to identify possible dead-end regions in the state-action space. True, these methods do depend on the definition of a clear negative outcome that would ideally be avoided, but this does not necessarily mean that the suboptimal decisions or regions of the state space are also known. My suggestion of these works is not to point out an important weakness in the submitted paper but to perhaps raise some point of discussion about the claim that biased real-world datasets present irrecoverable limitations. Perhaps, as motivated in part by the insights derived for AWRM, dealing with complex data sources requires alternative approaches to derive helpful insights. Altogether, I'm pleased with the current state of this paper and do feel that it would be of high interest to the MBRL community. It should certainly be published. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your appreciation of our work. We would like to further clarify the point made in the previous discussion about the studies of dead-end discovery. What we would like to say is that dead-end discovery is more suitable for applications where avoiding the terminal state is crucial to completing the task, e.g., health care. In these applications, we can use the formulation of the value function, e.g., $Q^*_D$ [Fatemi, et al.], to discover the risky regions and help the policy training directly, instead of learning dynamics models. However, in some applications it is not very important to avoid reaching the terminal state. 
For example, in the BAT task, none of our intervention actions in the process of fulfilling orders will truly lead the deliverymen to a terminal or ``dead'' state; the only differences are in the acceptance time and delivery time. What we really care about is allocating the budget to reduce the system's overall order-taking time. In such applications, we have to make accurate predictions of the effects of actions on the order-taking rate in order to find an effective policy.
Formalizing locality for normative synaptic plasticity models
Accept (poster)
Summary: Many biologically plausible learning algorithms have been developed in recent years, but much less effort has been devoted to comparing them and understanding how they could yield experimentally testable hypotheses. The present work introduces a framework that allows comparing existing algorithms. In particular, the authors introduce the concept of Sp-locality, which highlights the quantities that need to be available to synaptic plasticity, the p part focusing on quantities made available by neural computations and the S part on additional learning signals. Different groups of plasticity rules arise from the proposed analysis, helping to formulate empirically falsifiable hypotheses. **Score increased from 6 to 7 after the rebuttal**. Strengths: The paper is technically sound, introducing formal definitions of locality and thoroughly examining their implications. It is well written, with examples and illustrative figures facilitating reader comprehension of the newly introduced concepts. The analysis conducted allows a straightforward comparison of various learning rules. Furthermore, the authors transparently outline the framework's limitations. These points make the paper valuable to the NeurIPS community, particularly due to the lack of prior work on the question it addresses. Weaknesses: Some of the properties deriving from Sp-locality do not seem particularly aligned with what one could picture as biological plausibility. As this may result from a misunderstanding on my side, I address these points in the question section. I am happy to improve my score once those questions are addressed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: **Note:** all the following questions are based on my current (incomplete) understanding of the paper. 
In a feedforward neural network, a plasticity rule for $W^l_{ij}$ (synaptic weights between layer $l-1$ and $l$) that depends on $r^{l-1}$ (the activity of the **entire** layer $l-1$) and on $r^{l}_j$ is considered $p$-local. Is this compatible with any notion of biological plausibility? This point makes me wonder whether the notion of $p$-locality is fully aligned with locality in physical space. What are the main differences between the proposed approach and an alternative one focusing on the computational graph of neural computation? Such an approach would, for example, not suffer from the FF issue described above. It would start from the neural computations performed and end up with the learning rules. Of course, one downside is that it now depends on the details of neural computation. How would the Sp-locality framework extend to algorithms attacking the temporal credit assignment problem, e.g. e-prop of Bellec et al. 2020 or FOLLOW of Gilra and Gerstner 2017? Finally, which conclusions does the framework enable that a more vanilla approach, one that just analyzes the learning rules “manually”, does not achieve? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
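As a concrete reference for the $p$-locality question above, here is a minimal numpy sketch (my own illustration, not code from the paper) of the kind of update the review describes for a fully connected feedforward layer: the change to $W^l_{ij}$ may access the postsynaptic activity and the presynaptic layer's activity, but no downstream error signal. The layer sizes and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
r_prev = rng.normal(size=5)       # activity of the entire layer l-1
W = rng.normal(size=(3, 5))       # weights W^l from layer l-1 to layer l
r_post = np.tanh(W @ r_prev)      # activity of layer l

# Classic pre-times-post Hebbian update: dW[i, j] depends only on the
# postsynaptic activity r_post[i] and the presynaptic activities r_prev,
# the dependence structure the p-locality question is about. All inputs
# are summed into the membrane potential, so access to the whole
# presynaptic vector mirrors the all-to-all connectivity of an MLP.
eta = 0.01
dW = eta * np.outer(r_post, r_prev)
W += dW
assert dW.shape == W.shape
```

A rule that additionally consulted, say, activities two layers downstream would require extra communicated signals, which is where the S part of Sp-locality enters.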
Rebuttal 1: Rebuttal: Thank you for your feedback, here we will do our best to clarify the points that you raised. In some cases we will refer to the General Comments [GXX] if your question was also raised by other reviewers. Regarding your question about whether or not p-locality is fully aligned with our intuitions about locality in space: in our example from Section 2.4, we do see that updates in a MLP network are dependent on all neurons in the preceding layer. However, this is because MLP networks are all-to-all connected. If a neuron in layer $l-1$ were not directly synapsing onto & communicating with neuron $r_i$, then no synaptic updates would be able to depend on it under p-locality. We do see that some update rules depend on ALL presynaptic neurons however, rather than only the particular synapse’s presynaptic input, $r_j$. This is acceptable, however, because in this example all neuronal inputs are being summed to contribute to the membrane potential $V_i$. Within neuroscience it is well-known that synapses onto the same neuron can interact and share information (i.e. heterosynaptic plasticity, a well documented phenomenon), and if the update depends on the sum of all activity in particular, then it actually coincides quite well with our basic intuitions about spatial locality in neurons (since the sum of activity will be reflected in back-propagating action potentials in the dendrites). However, as we do note in considerable detail in Appendix F, when unrealistic or over-simplified networks are used, p-locality does not necessarily correspond to our physical, biological notions of locality. One benefit of p-locality is that the more realistic you make your biophysical neuron model, the more realistic the p-locality constraints become (which is why the examples in Section 2.4 and Appendix D produce sets of allowed variables under p-locality that closely correspond with physical, biological intuitions). 
For a discussion of the relationship between our formulation and one focusing on the computational graph, please refer to response [G3]. This framework certainly can (and does) account for algorithms that do temporal credit assignment. For example, models that support general DAG architectures (in particular REINFORCE and Wake-Sleep) can also be used for learning through time, and these temporal network architectures do not disrupt our proof in any way because they are subsets of general DAG models (see Fig. 1b). We will add a discussion of these matters and highlight some additional specific algorithms, including BPTT and several of its approximations (RFLO, e-prop), plus FOLLOW (Gilra & Gerstner 2017)--the derivations for these algorithms are very similar to the one provided for backpropagation (Section 3.3 and Appendix E.9). For a discussion of the relationship between our approach and a more ‘vanilla,’ manual approach, please refer to response [G4]. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed answers. Most of my questions are answered. The one about the feedforward example is not yet totally solved. I understand the authors’ argument as follows. Synaptic rules of the form pre-synaptic x post-synaptic values are a specific example of Sp-local rules for this kind of model but Sp-local rules are not limited to that, heterosynaptic plasticity being such an example. Is that correct? Given that this prediction is different from what a connection / computational graph approach would give, I encourage the authors to discuss this point in detail in the next version of their paper. Given that the authors answered my question, I am increasing my score to 7. The paper is definitely technically very strong and provides non-intuitive answers to an important question. --- Reply to Comment 1.1.1: Title: Reply to Reviewer GWia Comment: Thank you for the increase in score! 
Your understanding is correct, that Sp-local rules are more general than just pre-synaptic x post-synaptic updates, with heterosynaptic plasticity being an important example. We will make sure to add a discussion of this to our revised manuscript.
Summary: The paper describes a systematic and mathematically formal method for categorizing learning algorithms into different types and degrees of locality. It also applies the method to certain such algorithms and determines their type of locality. Strengths: The authors chose to work on a topic that is indeed very important, and in a field that has been very much in need of systematic evaluation. The claims of biological plausibility and locality in the recent literature have been increasing and widening, so a method to validate such claims has been necessary. The fact that the paper does not merely provide a verbal definition of types of plasticity but rather it attempts to make it mathematical is very interesting. The framework is mostly reasonable, judging by its results when applied to the categorization of specific learning rules. Weaknesses: The weakest aspect of the framework presented here is that it falls victim to one of the issues that it intends to resolve. More specifically, algorithms such as equilibrium propagation or predictive coding require that the network reach a global equilibrium through multiple forward and backward propagations before a training example completes its updating action on the network parameters. This is arguably less local than even standard backpropagation, as it requires not only that global signals propagate through the network once, but indeed multiple times. This has been recognized in recent works (https://iopscience.iop.org/article/10.1088/2634-4386/aca710, https://openreview.net/forum?id=8gd4M-_Rj1) and this recognition has even led to a formal proof that the time-complexity of these algorithms is lower-bounded by that of backpropagation (https://arxiv.org/abs/2304.02658). Therefore, this type of locality is clearly not the same as that of, for example, Contrastive Divergence for Boltzmann machines or Deep Belief Networks which do not presuppose such a global and multilayer equilibrium. 
In turn, even in the cases where such an equilibrium requires iteration only within individual layers and where greedy layer-wise training is possible, e.g. in Boltzmann machines and contrastive Hebbian rules, the final weight update cannot be said to be equally local to that of simpler Hebbian rules that do not require such iterative processes or feedback, and complete the update truly locally in a single timestep. The distinction of these categories of locality is crucial. Disregarding these in a paper that claims to resolve such confusions risks reinforcing and perpetuating them instead. The above issue of the paper might have been caused by the fact that many of the simplest and most local learning rules are not normative but rather are based on heuristics, whereas the presented framework is concerned with normative rules. However, there do exist such rules that are normative, e.g. the type of STDP in Nessler et al., NIPS 2009/Plos Comp. Biol. 2013; the work of Pehlevan & Chklovskii, NIPS 2015; and SoftHebb in Moraitis et al., Neuromorph. Comput. Eng. 2022. In fact, what appears to be the current state of the art in biological plausibility of deep learning and in performance of bio-plausible deep learning (Journé et al., ICLR 2023) is one of the above normative rules, namely SoftHebb. Therefore, I recommend that the authors adjust their framework to account for the substantial differences in locality between the three categories of algorithms that are exemplified by (a) Equilibrium Propagation, (b) Contrastive Divergence/Boltzmann Machine, and (c) SoftHebb or STDP. This should be accompanied by making sure that examples of all these categories are included in Table 1. As a minor note, the paper's Table 1 mentions Boltzmann Machines as an algorithm, but BMs are models. I believe Contrastive Divergence is meant by the authors, and if so it should be corrected. 
More generally, the fact that the current formalization suffers from similar issues as other less formal claims in the literature, reveals that perhaps the mathematics in this case are superfluous, since its conclusions depend mainly on the assumptions, which in turn depend on the categories that the authors intend to achieve. In other words, I suspect that the main benefit of a systematic categorization could be achieved without mathematical terms. Could the authors please comment on this? What do they regard as added value from the use of mathematics instead of mere natural language and logical arguments in this case? Related to this, the paper would be strengthened significantly if both the framework and the resulting categories of locality could be described in a dedicated paragraph in natural language, with an intuitive explanation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: "We subsequently use this framework to distill testable predictions from various classes of biologically plausible synaptic plasticity models that are robust to arbitrary choices about neural network architecture" This is a quote from the abstract. Could the authors please elaborate on this? What exactly are the testable predictions? Perhaps this refers to the last paragraph of the paper, but that paragraph is not entirely clear to me, and it also seems like it does not make any specific prediction. If the authors could address these issues both in the revised manuscript and in their rebuttal here, I am open to re-evaluating the submission. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is no discussion of the paper's limitations in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their very detailed comments! We will respond to your specific comments here, and for questions asked by multiple reviewers, we will refer you to the appropriate response in the General Comments section [GXX]. All of our comments below will be incorporated into our subsequent draft. We agree with the reviewer that a good framework for analyzing locality should contend with the difference between algorithms that require MCMC sampling and those that do not. If, as the reviewer suggests, our framework masked this distinction that would indeed be problematic. However, that is not the case. Indeed, our definition of Sp-locality requires one to determine the architecture required for locality, and some architectures will not require MCMC (such as a DAG), while others will (such as an energy-based model). Thus, Sp-locality does not ignore the question of whether MCMC sampling is required or not. We will be sure to clarify this in our revisions. However, a related though slightly different question that the reviewer raises is how many samples of MCMC are required by an algorithm for effective training. Here, we agree with the reviewer that our framework does not fully account for this. But, it does partially. Indeed, we should clarify the similarities and differences between Contrastive Divergence, Equilibrium Propagation, and more typical Hebbian rules in this respect. In particular, we note that the Nessler et al. NIPS 2009/Plos Comp. Bio 2013 papers construct their learning algorithm as a very close approximation of the generalized Expectation Maximization (GEM) algorithm, which is as a consequence $p_m$-local (see Table 1), where $p_m$ is defined by the generative model used in these papers. 
While generative models typically have neither analytically tractable nor biologically plausible inference distributions, by making a careful choice of generative model, these authors are able to perform exact inference through a type of winner-take-all neural circuit. As a consequence, they do not need to rely on slow MCMC dynamics or MAP inference, and are also able to avoid offline ‘sleep’ phase sampling (this is why this model does not require information about a global clamping variable $\gamma$). This sets this family of algorithms apart from both Contrastive Divergence and Equilibrium Propagation, and our framework captures this. Similarly, SoftHebb derives its locality via very similar mechanisms without requiring sampling from multinomial variables, and again, this can be identified via our formalisms. We will add these models to Table 1, and will note their relationship to the GEM algorithm in the appendices. But, in line with the reviewer’s point, it is true that Contrastive Divergence does not require full mixing for MCMC sampling, unlike Equilibrium Propagation, and yet this fact does not affect the p-locality of the Contrastive Divergence learning rule versus Equilibrium Propagation or Predictive Coding (though it does affect the supported architectures for the learning rule, see column 2, Table 1). This raises an interesting possibility: would it be possible to extend our framework to not only distinguish models that require MCMC sampling from those that don’t, but also to distinguish models that require only a little bit of MCMC sampling from those that require a lot? We think this is a fascinating potential direction for future work building on our framework here. We will add a note on limitations highlighting this point to our paper, and emphasize that future work should address this distinction. 
To ensure that we are not adding to any confusion in the literature we will also add a note in column 2 Table 1 to emphasize that Contrastive Divergence is somewhat more flexible than predictive coding and equilibrium propagation in terms of its time complexity, because it requires only K-Step MCMC sampling where K is a small integer. Further, thank you for catching our misuse of terminology. We will change our use of ‘Boltzmann Machine’ to ‘Contrastive Divergence’ in Table 1 and in the appendices. For a discussion of the relationship between our approach and a more manual/linguistic approach, please refer to response [G4]. We provide a detailed explanation of the different Sp-locality categories, as well as their experimental predictions in General Comments [G2]. For a broad description of the approach in general, we refer to Lines 45-59 of the main text. We will use these to expand and clarify the final paragraph in the Discussion section. --- Rebuttal Comment 1.1: Title: Remaining concern Comment: I would like to thank the authors for their clarifications. These are helpful, and the promised changes would bring important improvements. However, my remaining concern is an essential unresolved part of my main concern from my earlier comment. It is the following. I believe that the "p_m-local" category is too broad if it considers as equally local e.g. (a) simple hebbian learning over a single layer and in a single timestep, and (b) predictive coding that involves propagating over multiple layers and in many timesteps for each learning example. Clearly, learning of type (a) is much more local than learning of type (b). The framework should cover this difference, but it does not, as it groups both very local and very non-local models in "p_m". Again, the reason why I focus on this is because I anticipate that this will continue perpetuating such severe misunderstandings in the literature. 
I presume that the authors might not be able to formalize this in the time remaining for this paper. In that case, what are the other exact changes that the authors will make to the manuscript to emphasize the existence of these important differences in locality? --- Reply to Comment 1.1.1: Title: Response to additional concerns Comment: Thank you for your response! Though predictive coding and SoftHebb learning do require similar information for their parameter updates, and are consequently both $p_m$ local, you are correct that the network architectures themselves used in these two different algorithms are very different (predictive coding requires the computation of a dynamic equilibrium that takes many time steps, whereas SoftHebb and similar rules require a single time step in a winner-take-all circuit). The way that we have handled these differences in our formalism is to note in the second column of Table 1 the network architectures required for an algorithm to satisfy a given $Sp$-locality constraint. To indicate the various implausible aspects of predictive coding, we have noted that the algorithm requires MAP estimation through equilibrium dynamics, and have also indicated that the algorithm requires weight symmetry. For SoftHebb and its variants, we will specify that the $p_m$-locality has been proven for a winner-take-all circuit. Furthermore, we will provide more details about these circuits in the supplemental material where we prove the networks’ $Sp$-locality properties. We will also take great care to emphasize in our discussion that $Sp$-locality can only be informative when one also considers the types of network architectures supported by a given algorithm (in this case MAP estimation vs. winner-take-all dynamics). This will be added in addition to the limitation mentioned in our previous comment, that our current framework does not distinguish between the amount of time required for MCMC sampling across different algorithms.
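To make the K-step distinction discussed in this thread concrete, here is a minimal CD-k sketch for a binary RBM (our own illustrative code, not the paper's formalism; the function names and mean-field reconstruction are our assumptions). The update contrasts local pre x post statistics between a data-clamped phase and a k-step reconstruction phase, rather than requiring a fully mixed sample from the model distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd_k_update(W, v0, k=1, lr=0.01):
    """One CD-k step for a binary RBM with weight matrix W (visible x hidden).

    Only k Gibbs sweeps starting from the data are used, rather than
    sampling from the fully mixed model distribution. The update is a
    difference of pre x post correlations between the data-clamped
    phase and the k-step reconstruction phase.
    """
    h0_prob = sigmoid(v0 @ W)            # hidden probabilities, clamped phase
    v, h_prob = v0, h0_prob
    for _ in range(k):
        h = (rng.random(h_prob.shape) < h_prob).astype(float)
        v = sigmoid(W @ h)               # mean-field reconstruction of visibles
        h_prob = sigmoid(v @ W)
    return W + lr * (np.outer(v0, h0_prob) - np.outer(v, h_prob))

W = rng.normal(scale=0.1, size=(4, 3))
v_data = np.array([1.0, 0.0, 1.0, 0.0])
W_new = cd_k_update(W, v_data, k=1)      # same shape as W
```

Note how each weight's update involves only its own pre- and post-synaptic statistics, plus the global phase signal (clamped vs. free) that determines the sign of each correlation term.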
Summary: The paper introduces a general definition of locality of updates in neural networks, which is an important requirement of biological plausibility. Variants of locality requirements allow to describe classes of networks and algorithms. Several important algorithms are analyzed through this lens. Strengths: Definition of locality (and variants) is novel (to my knowledge), considerations are mathematically sound. The paper outlines several patterns of locality in existing algorithms, which deserves interest. Weaknesses: The need for the general definition of locality is not substantiated sufficiently. Was that need already expressed in the literature? Were there other attempts? Is there an example where such definition would resolve a controversy? Whenever a model is formally defined, its biological plausibility (which of course goes beyond locality) is up for debate. However within the model the information available for parameter updates must be clear from the definition (or I can't think how it can not be). Therefore I am not convinced that suggested framework will help "guide claims of biological plausibility", as the paper says. It is not quite convincing that introducing probabilities is worth the trouble. After all, the answer to locality question is always binary: yes or no. It seems that definition through graphs (directed or undirected) would be sufficient in overwhelming majority of cases, and whenever the probabilities are already defined, the graphical model can be referred to. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors claim there is lack of clarity about locality for some models. Can you give an example? Definition 2.1: line 87, what are you trying to say by underscoring the word 'direct'? what would be indirect? Fig. 1b Node r_j for t_2 is not a child or coparent of W_{ij} but not excluded. Coloring in figures can be guessed, but better be explained in words. 
Eq.4 How would the definition 2.2 work when $h$ is not differentiable, e.g. ReLU? line 239 A(p(X, \Theta)) = f(Z) - it looks like the update function is somehow fully determined by the probability distribution? Please define the meaning of A(p(...)) here and in theorems 3.1, 3.2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No problem here Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we thank the reviewer for their detailed feedback on our manuscript. We will make sure to add relevant material in relation to our responses below in our subsequent draft. For a discussion of the current state of the field, and an explanation of what our framework adds to discussion surrounding experimental validation of existing models, please refer to our general reviewer responses [G1] and [G2]. But, to expand on these points a little bit here, there are two key reasons that these formal mechanisms were needed. First, in contradiction to the reviewer’s intuition, it is not the case that the information required for model updates is always apparent based simply on the definition of the learning rule. Consider, for example, the predictive coding algorithm. The update rules themselves appear to be purely local, and at first blush, they provide the impression of clear biological plausibility. Yet, when analyzed under the lens of Sp-locality, we see that these apparently local equations depend on an architectural assumption of maximum a posteriori gradient descent dynamics, which is biologically questionable. Thus, one benefit of Sp-locality is precisely that it forces us to actually contend with the assumptions being made to achieve locality. Without this formal tool it can be easy for researchers to convince themselves that something is “biologically plausible” because it is local, while ignoring the assumptions required to achieve that locality. Second, our framework based on probabilities is useful beyond a basic verbal description because it provides a formalized and standardized method for describing the biological plausibility of a given plasticity model–beyond this, it also abstracts away details that are not useful for comparison across different network architectures and algorithms (see [G2]). For a full discussion of our particular choice to use probability distributions to ground our definition of locality, please see [G3]. 
Here we will specifically justify our choice to use stochastic—rather than deterministic—computation graphs. More than half of the algorithms listed in Table 1 were originally derived in terms of probabilistic networks. Given that the probabilistic formulation is more general (we can always look at no-noise asymptotes for the deterministic case), it makes sense to work in probabilistic terms. Further, the probability distribution provides the cleanest mechanism for deciding how to structure the edges and nodes of the computation graph, as we describe in [G3]. Regarding "Fig. 1b Node $r_j$ for $t_2$ is not a child or coparent of $W_{ij}$ but not excluded": This is a good catch, thank you! It is a coparent for $W_{ij}$ at the next time step--we'll add a line that implicitly connects it to the next time step to clarify this. We will also explain the color coding in more detail. We only need our update f in Eq. 2 to be differentiable at a single point and for that derivative to be nonzero, so for a ReLU this would be fine (it's differentiable and nonzero for all x > 0). In this case our assumption of differentiability is not so important. The Heaviside function is a better example, because at the only point at which it does depend on a variable, the derivative is not well-defined. In this case, there is no clear way for the derivative operation to detect a functional dependence. A way to relax our assumption of differentiability would be to require that there are two values of $Z_i$ such that $f(Z_{i1}) \neq f(Z_{i2})$ on the left side. This would not disrupt anything for S-locality, but would potentially disrupt our proofs for the properties of p-locality, so we chose to require differentiability throughout. Another alternative would be to use a function that very closely approximates the Heaviside function but preserves its differentiability (e.g. $\mathrm{sigmoid}(kx)$, where $k \gg 1$). Clarifying the meaning of $\mathcal A(p(X, \Theta)) = f(Z)$. 
The notation here indicates that an algorithm $\mathcal A$ receives a probabilistic network architecture $p(X, \Theta)$, and outputs a parameter update function $f(Z)$. This update function changes if the probability distribution changes. For instance, the REINFORCE update Eq. 6 changes if we change our choice of nonlinearity $h$ which is used to parameterize our probability distribution over network states. One of our highly nontrivial results is that we can show the REINFORCE update will be Rp-local no matter what choice of probability distribution p we make. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed, thank you. I think the work should be made available to the community, where the discussion will most likely continue. The score upped to 7.
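As an illustration of the Rp-locality claim in this thread, here is a minimal sketch (our own hypothetical parameterization, not the paper's model: a single Bernoulli neuron with spiking probability $\sigma(w \cdot x)$). The score-function gradient makes each weight's update depend only on its pre-synaptic input, post-synaptic quantities, and the global scalar reward:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def reinforce_update(w, x, reward, lr=0.1):
    """One REINFORCE step for a single Bernoulli neuron.

    The spiking probability is p = sigmoid(w @ x); the score-function
    gradient of log P(s) w.r.t. w is (s - p) * x. The update for each
    weight w_i therefore uses only the pre-synaptic input x_i, the
    post-synaptic quantities (s, p), and the global scalar reward.
    """
    p = sigmoid(w @ x)
    s = float(rng.random() < p)          # sample post-synaptic activity
    grad_logp = (s - p) * x              # d log P(s) / d w
    return w + lr * reward * grad_logp

w = np.zeros(3)
x = np.array([1.0, -1.0, 0.5])
w_new = reinforce_update(w, x, reward=1.0)
```

Changing the nonlinearity used to parameterize the spiking probability changes the functional form of `grad_logp`, but not the set of variables the update depends on — which is the architecture-independence of Rp-locality described above.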
Summary: The authors propose "Sp-locality" as a formal definition of locality in models of synaptic plasticity, as well as two intermediate definitions, S-locality and p-locality. S-locality explicitly enumerates the set of variables which directly participate in a synaptic update for each synapse. p-locality replaces the explicit set S with the Fisher information of that variable (conditioned on all other variables to eliminate indirect dependencies) with respect to the synapse. Finally, Sp-locality expands this by measuring influence as defined by p-locality, additionally allowing a dependence of the synaptic update on an explicit set of variables as with S-locality (although eliminating the need for an exhaustive enumeration). Finally, the authors apply these definitions to a number of well-known models of synaptic plasticity, delineating four major classes of models and suggesting experimental predictions. Strengths: I am unaware of similar work attempting to organize plasticity rules according to their locality, although I am interested to see what other taxonomies have been suggested. The paper is exceptionally well written, building up the concept of Sp-locality from simpler intuitive concepts, providing examples and intuitions, and expanding on these in the Appendices. The authors classify a wide range of models using their framework and show useful mappings between the graphical model and their definitions of locality. Overall, this paper is a useful initial effort to formally categorize "local" plasticity rules, and is a valuable conceptual contribution and starting point for follow-up work in this direction. Weaknesses: - Although the authors do a thorough job of applying their framework to a diversity of plasticity models, I am curious about further contextualization of the work within meta-topic of "taxonomy of plasticity rules" - As the authors point out, p-locality does not imply biological plausibility (nor vice versa). 
Sp-locality is an even weaker constraint, as it allows for not only p-locality but also dependence on an explicit subset S of network variables. These definitions punt the problem of "biologically plausible plasticity" to the problem of defining "biologically plausible architecture," which is again often defined intuitively and ad-hoc. As such, the practical utility of this framework is unclear to me. - It is unclear to me how this framework adds power to experimental predictions beyond the ones made by the individual plasticity rules and their ad-hoc appeals to biological plausibility. I would be interested to see more discussion about this with explicit examples of predictions which can be made with this framework and followup experiments that can be performed (even conceptually, disregarding their practical feasibility), or even a retrospective "post-diction" of a known experimental phenomenon which could have been predicted by this framework. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Is there other work attempting a conceptually similar grouping of plasticity rules based on the information used by synapses? - Can these definitions account for temporal locality or lack thereof? e.g. how would backpropagation through time or its biologically plausible approximations fit into this framework? - Is there a mapping between p-locality and S-locality? i.e. given a p-local rule (for a particular architecture), is there always a way of defining S-locality, and vice versa? (Is this even a useful mapping to make?) - Are there plasticity models that do not fit into this framework (excepting the trivial cases in which S=Z)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors carefully analyzed the limitations of their definitions, although as far as I can tell these limitations severely handicap the practical utility of the proposed framework. Due to the basic science nature of this work, potential for direct societal impact is minimal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very comprehensive review; we will elaborate here on your criticisms and requests for clarification. All of our comments below will be incorporated into our subsequent draft. For our response on previous related work, please see general comment [G1]. It is a genuine issue that our framework does not answer the question: “which network architectures are biologically plausible?”. Whether a mathematical framework could (or should) be constructed that would answer this question is unclear; arguably, that question can only be answered by careful anatomical and neurophysiological investigations. However, it's worth noting that Sp-locality does not completely punt the question of “biological plausibility” to that of “biologically plausible architecture”. A proof of Sp-locality necessarily comes with the class of models that the locality properties have been proven for (see Table 1). This helps us to make clear the generality & restrictions of various local learning algorithms and facilitate their comparison. In the ideal case, an algorithm would be Sp-local for any network architecture, because this would guarantee that an algorithm at least does not restrict a modeler to using networks that violate important biological plausibility assumptions. Of course, if an algorithm is only Sp-local for a given type of architecture then one is still left with the question, “is that type of architecture biologically plausible?”. But, critically, Sp-locality forces us to contend with this fact: that is its practical utility. Without it, researchers can (and often do) mask the biologically implausible architecture assumptions they are making when they claim that their algorithm is biologically plausible because it is “local”. Put another way, the utility of Sp-locality here is to eliminate any uncertainty as to the architectural assumptions being implicitly made in claims of locality. 
For our response on experimental predictions made by our framework, please refer to general comment [G2]. This framework certainly can (and does) account for algorithms that do temporal credit assignment. For example, models that support general DAG architectures (in particular REINFORCE and Wake-Sleep) can also be used for learning through time, and these temporal network architectures do not disrupt our proof in any way because they are subsets of general DAG models (see Fig. 1b). We will add a discussion of these matters and highlight some additional specific algorithms, including BPTT and several of its approximations (RFLO, e-prop), plus FOLLOW (Gilra & Gerstner 2017)--the derivations for these algorithms are very similar to the one provided for backpropagation (Section 3.3 and Appendix E.9). There is always a mapping from a p-local rule to an S-local rule for a given network architecture–to see that this is true, we need only recognize that for every p-local rule, there is a finite set of allowed variables for each parameter. We may take $S_k$ for parameter $k$ to be equal to this set, thus demonstrating that the rule is also S-local. Of course, S-locality is in general radically less concise than p-locality, because without the machinery p-locality provides, it is not possible to automatically and cleanly generate a set of allowed variables, which would otherwise have to be done manually. It is also radically less useful because the set S is architecture-specific, and cannot be used for proving general properties of learning algorithms as we do in Table 1. By contrast, there is not always a mapping from an S-local rule to a p-local rule. This is because p-locality itself makes a hard commitment to an intuitive notion of locality defined on probability distributions, and does not accommodate additional variables that violate this intuitive notion. To our knowledge, there are no normative plasticity models that do not fit into our framework. 
In fact, we have been unable to find normative plasticity models that do not fit into the four broad categories outlined in Table 1. However, it is worth noting that p-locality becomes less useful for non-normative (e.g. phenomenological or mechanistic) plasticity rules, where models are normally highly architecture-specific, and there is less value in constructing an architecture-general notion of locality, because in these cases there is no proposed generic optimization algorithm (e.g. REINFORCE) producing different plasticity rules for different neural network architectures. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their careful responses to my questions. I have raised my score from 5 to 6 and look forward to reading the extended discussion in the final version, particularly the comments regarding biologically plausible architectures and normative plasticity models, which has helped me better understand the scope of this work.
Rebuttal 1: Rebuttal: #### General comment We would like to thank the reviewers for their extremely helpful feedback. Here, in the global reply, we will address critiques that were raised by multiple reviewers. Importantly, we are grateful that most reviewers recognized the importance of a formal framework to study learning algorithm locality. We believe our work is an important first step in that direction, and that it has the potential to be used widely. We also note that our manuscript is considerably improved thanks to excellent reviewer feedback. As such, we believe it is timely, and that it is worthy of sharing with the NeurIPS community. [G1] Previous work. First, in our introduction we will cite other papers that have attempted to construct taxonomies of locality in the past and clarify our unique contributions. To our knowledge, no previous study has attempted to construct a mathematical framework for locality, but some papers have reviewed locality as a whole and discussed potential ways to test differences between different proposed normative plasticity rules. In particular, the four p-locality categories that we uncover with our analysis loosely correspond to those identified in Lillicrap et al. 2020. But, our mathematical framework provides additional useful distinctions, e.g. it identifies target-based, generative learning algorithms like Wake-Sleep as distinct from error-based learning algorithms derived from backpropagation, because these algorithms derive their locality from very different principles and make different assumptions about the nature of feedback. In addition, Marschall et al. 2020 complements our approach by discussing constraints on the allowed computations, memory, and temporal complexity of bio-plausible temporal credit assignment algorithms and taxonomizes existing approaches accordingly. These amount to constraints on the network architecture itself, and constraints on the types of operations allowed in computing updates (e.g. 
matrix-vector multiplication), whereas we focus on allowable variables for updates, agnostic to the update’s functional form. [G2] Clarifying experimental predictions. We discuss this point briefly on Lines 351-55, but will elaborate here (and in our revised discussion). Critically, Sp-locality abstracts away details that are not important for testing predictions and helps identify those that are. For example, suppose we are working with the feedforward network model of Section 2.4. Sp-locality tells us that the REINFORCE (Eq. 6) and Wake-Sleep algorithms (Eq. 8) are equivalent with respect to their ‘p’ in p-locality. This tells us that the set of allowed local variables under p-locality (e.g. pre- and post-synaptic information) are not the best targets for experimental testing between these algorithms, and instead the focus must be on the variables included in ‘S’. Turning to this, we have identified four different kinds of Sp-locality, which vary based on the choice of set S (S = the reward signal $R$, the empty set, the global clamping variable $\gamma$, or the error feedback signal $e_i^{l}$). REINFORCE is of the first kind, and thus predicts that a global reward signal will modify synaptic plasticity. In contrast, Wake-Sleep is of the third kind, and thus predicts that there is a global clamping variable $\gamma$ that switches the network between different modes of synaptic plasticity independently of reward. Therefore, across these different Sp-locality classes we have very different prescriptions for what to look for experimentally in terms of signals that regulate synaptic plasticity globally. [G3] Probabilistic versus Graphical locality. Several reviewers have suggested that we might alternatively formulate locality in terms of the graph of computations performed by a given network, rather than in terms of a probability distribution over network states. 
However, within each probabilistic node in our formulation there are several computations being performed (linear matrix + pointwise nonlinearity in the MLP case). A graphical approach would have to arbitrate which collection of computations counts as a ‘node’ on the graph. There are several different ways to do this: one way is equivalent to our approach, where the graph is taken to be a minimal DAG or UG which characterizes the distribution p. We demonstrate that this graph can be used to read out the allowed variables under p-locality (Properties 2.1 and 2.2), and we find that the reviewers are certainly correct that these properties are easier to use than the basic definition of p-locality itself. However, the definition of p-locality is still required to justify these properties and to unify them across different graph types (DAG vs. UG). Other approaches for defining the appropriate computation graph would be problematic, primarily because they would be more arbitrary, and consequently could not be used to prove general locality properties of normative learning algorithms. [G4] Language versus math. With any mathematical framework, as several reviewers have noted, it is critical to ask whether natural language would have sufficed. In our case, the principal added value of Sp-locality is that it disentangles the locality properties of an algorithm from the specific network architecture used in the model, by exploiting mathematical insights into how algorithms actually achieve locality (Properties 2.5-2.7). This allows us to focus on properties of normative plasticity algorithms as a whole, without nitpicking details about their particular instantiations for particular model choices, something that could not have been done without our mathematical formalisms.
NeurIPS_2023_submissions_huggingface
2023
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout
Accept (poster)
Summary: The paper proposes FLuID (Federated Learning using Invariant Dropout) to address the straggler problem in FL. Due to the presence of system heterogeneity, straggling nodes (nodes with resource constraints) become a bottleneck in the FL training process. In this paper, the authors propose FLuID to address this problem. First, it identifies invariant neurons: neurons that quickly optimize and remain relatively stable for the remainder of training. Second, it identifies the straggling devices participating in the training. This information is used to “prune” the global model and send a subset model to the straggling devices in order to efficiently utilize the available computation and communication resources. Strengths: + The straggler experiments included setups conducted using mobile devices. + The experiments show the proposed invariant dropout speeding up execution time relative to ordered dropout and random dropout. Weaknesses: - It appears most of the experiments involved only 5 devices. While it is commendable that the experiments involved hardware, it is not clear that the accuracy margins would hold as the number of clients increases, as is expected in practical FL environments. The scaled-up experiments involving 50 to 100 clients could easily be scaled up to 1000 - 3000 clients in FEMNIST, for instance. This might require an FL simulation environment, but it would give a good sense of how the proposed scheme yields improved training time and accuracy. - FL usually requires client sampling in each round. This is necessary as the system scales, and not all clients can participate in each round. I didn’t see the client sample ratio discussed. Thus I assume, in your experiments, all clients are participating in each training round. If that’s the case, how will client sampling in each round affect the dropout scheme? 
It is not clear if the invariant dropout scheme will maintain its performance margin if client sampling is incorporated into the training process. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How scalable is the proposed approach with an increasing number of clients (clients can easily be scaled up to 1000 - 3000 clients in FEMNIST, for instance)? - Will the invariant dropout scheme maintain its performance if client sampling is incorporated into the training? Clarification on this and further experiments would be helpful. - The authors should enhance the plot labels in Fig. 5 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - Scalability is a major limitation that is not well discussed or experimentally evaluated. - Sampling during the training process needs to be addressed clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments, which have enabled us to add scalability analysis and sampling to the paper. **Question 1: Scalability** We scale FLuID to **1000 clients** with the FEMNIST dataset for 500 global training rounds. We run with a **client sampling ratio of 10%**, as used by prior works in the federated learning space such as FjORD. We run these experiments on a private cloud cluster of Intel Xeon Silver CPUs and NVIDIA Tesla V100 GPUs. We emulate 20 clients on each machine using the Flower framework, as detailed in the scalability studies in Section 6.1. We remain consistent with our other evaluations in Section 6.1, identifying the slowest 20% of clients as stragglers. At each global round, we randomly sample 10% of the available clients for the training round. FLuID maintains a record of the stragglers' cohort among all available clients. FLuID monitors the training times of the clients within the sampled group after every global training round. It updates the straggler record if any changes or new stragglers are identified. Below, we present the accuracy results against each sub-model size for Invariant Dropout and the baseline techniques. Invariant dropout maintains a better accuracy profile than the baselines even when scaled up to 1000 clients while incorporating client sampling. We will add these experimental results in the final paper to showcase the efficacy of FLuID as it scales to a large number of devices for the evaluated datasets.

Dropout Method | r=0.95 | r=0.85 | r=0.75 | r=0.65 | r=0.40
---------------|--------|--------|--------|--------|-------
Random | 87.9 | 87.5 | 87.5 | 86.9 | 85.7
Ordered | 87.8 | 88.0 | 87.5 | 87.3 | 87.0
Invariant | 88.1 | 88.2 | 88.0 | 87.7 | 87.2

**Question 2: Client sampling** At any point in training, FLuID is capable of recalibrating stragglers and supports dynamic changes during runtime. This allows FLuID to easily incorporate sampling in its process, as showcased by the new results presented here. 
If certain clients become unavailable or device properties change at any moment, FLuID can take that into account, as discussed in Section 6.1 and Figure 4b. However, all clients participated in each training round for the experiments included in the paper. We scale FLuID to execute on 1000 clients in a system that emulates each client on the server described in Section 6 of the paper. We perform the experiments with the FEMNIST dataset with a client sampling ratio of 10%, as used by prior works in the federated learning space such as FjORD. We will add these experimental results **as presented above** in the final paper to showcase the efficacy of FLuID to incorporate client sampling as the number of devices grows. **Question 3: Enhancing Fig. 5** We will enhance all the plot labels to ensure they are legible.
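The sampling and straggler-tracking loop described in this rebuttal (sample 10% of available clients, mark the slowest 20% as stragglers, refresh the record as new timings arrive) can be sketched as follows. The function and variable names are hypothetical, not the FLuID API; this is a minimal illustration, not the authors' implementation.

```python
import random

def sample_and_track(client_times, straggler_frac=0.2, sample_ratio=0.1):
    """One global round: sample a fraction of available clients, then
    refresh the straggler record from the latest measured training times.

    client_times: dict of client id -> last measured per-round training
    time (hypothetical structure for illustration).
    """
    clients = list(client_times)
    k = max(1, int(sample_ratio * len(clients)))
    sampled = random.sample(clients, k)

    # Straggler record: the slowest straggler_frac of all known clients,
    # recomputed as new timings arrive after each round.
    ranked = sorted(clients, key=lambda c: client_times[c], reverse=True)
    stragglers = set(ranked[:int(straggler_frac * len(clients))])
    return sampled, stragglers
```

In a real deployment the timings in `client_times` would be updated with the wall-clock measurements of each sampled group before the next round recomputes the record.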
Summary: The paper focuses on the issue of mitigating stragglers in a heterogeneous FL environment through dynamic load balancing, introducing a technique called Invariant Dropout and an adaptive training framework called FLuID. Invariant Dropout dynamically creates customized sub-models that include only the neurons exhibiting significant changes over a certain threshold. Through experimental evaluation, the paper shows that this approach mitigates performance overheads caused by stragglers while also achieving higher accuracy compared to the state-of-the-art technique - Ordered Dropout. Strengths: - Up to 18% speedup in performance - Up to 1.4 pp improvement in accuracy over state-of-the-art Ordered Dropout - Evaluation is detailed - on 3 models and datasets and compared with two techniques: Random and Ordered - Improves on previous work, which has drawbacks such as incurring training bias, creating performance-centric sub-models, or entirely reconstructing the sub-model Weaknesses: - Accuracy gains compared to Ordered and Random across datasets and models, while statistically significant, may not justify the additional complexity of this system. - There is scope to improve the presentation - In lines 40-47, two sentences are repeated - Figure 2b has no label on the x axis Technical Quality: 3 good Clarity: 2 fair Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: In addition to the weaknesses mentioned above, the authors note two limitations - Overhead to handle stragglers and maintain system performance, which may increase with changes in straggler performance - Currently the system only uses pre-defined sub-model sizes mapped to straggler performance, which keeps the framework lightweight, but for varied edge devices, fine-grained sub-model determination will be required. The impact of this fine-grained approach on the overhead needs to be measured to check for its suitability on different edge devices. If the overhead increases, the performance improvements noted here may diminish. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We answer the most pressing questions first, followed by the remaining questions. **Weakness 1: Accuracy Gains of FLuID** We have implemented FLuID as a lightweight system with minimal overheads. We empirically observe that the FLuID calibration process takes significantly less time (less than 5%) than the actual training time. Our evaluation takes these overheads into account while reporting the performance benefits. As we observe more heterogeneity in the system of devices, frameworks like FLuID will play a crucial role in mitigating performance bottlenecks. **Limitations** FLuID can scale to support more sub-model sizes without significantly increasing overhead. As discussed in Section 5, line 205, the sub-model size selection process is straightforward: FLuID simply chooses the sub-model size closest to the inverse of the required speedup. Therefore, even with the addition of more sub-model sizes, the processing overhead does not increase significantly. Notably, on real mobile phones, we have observed that the FLuID calibration process takes less than 5% of the training time. **Weakness 2: Improving the presentation** We will clean the text to remove the repetition and ensure all figures have labels.
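The selection rule described above (pick the pre-defined sub-model size closest to the inverse of the required speedup) can be sketched in a few lines. The candidate sizes below are illustrative, taken from the sub-model ratios evaluated elsewhere in the rebuttals; a real deployment may use a different set.

```python
def pick_submodel_size(required_speedup, sizes=(0.95, 0.85, 0.75, 0.65, 0.40)):
    """Choose the pre-defined sub-model size closest to 1/speedup.

    `sizes` is an illustrative set of candidate ratios, not necessarily
    the ones used by FLuID in deployment.
    """
    target = 1.0 / required_speedup
    return min(sizes, key=lambda s: abs(s - target))
```

For example, a straggler needing a 1.3x speedup maps to a target ratio of about 0.77, so the 0.75 sub-model is selected; the cost of this lookup stays negligible even as more candidate sizes are added, which matches the rebuttal's point about overhead.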
Summary: The FLuID authors tackle the straggler problem in Federated Learning, where a central model is trained across a set of heterogeneous devices. This problem is particularly challenging when performance capabilities at training time are actually variable. This variable heterogeneity at training time requires a mechanism that can change the model sent to a client per round, in order to create a control mechanism that mitigates the clients’ straggler effect. FLuID achieves this with a proposed Invariant Dropout method. The Invariant Dropout technique quite simply refers to identifying neurons whose updates have become minimal (close to invariant) over some number of rounds. This observation is used to decide to exclude such neurons (or “drop” them) from training for a given client. Neurons can be ranked based on the extent to which they are “invariant”, and a line can be drawn as a function of computation budget to determine how many neurons to drop. FLuID addresses several technical challenges: 1. identifying invariant neurons 2. identifying stragglers 3. determining a subset of the model weights to send to each client/straggler FLuID is able to match or exceed accuracy across several datasets. Performance includes real-world mobile client evaluation. Straggler effects are mitigated using Invariant Dropout. Strengths: + FLuID tackles the quintessential problem of stragglers in FL, which is very practical and pervasive + FLuID avoids using asynchronous aggregation techniques, which jeopardize model convergence. It sticks with synchronous aggregation + the dropout technique is well known in DNN training and has been shown to improve performance, when random dropout is used + The dynamic nature of invariant dropout is effective and can adapt to dynamically changing client compute (or communication) capacity. 
Support for this dynamic heterogeneity is rare in the literature + the paper is well written + the experimental methodology is well executed Weaknesses: * the idea is very simple; some could consider the adaptation of dropout from previous literature to the FL setting incremental? I’m certainly familiar with this idea and surprised this hasn’t been published yet. * this paper compares to different types of dropout as baselines, but doesn’t consider other published baselines, such as SuperFed [1]. [1] SuperFed, https://arxiv.org/abs/2301.10879, 26 Jan 2023 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * It would be interesting to get your perspective on the comparison between FLuID and SuperFed [1]. SuperFed proposes sending clients subgraphs of the central model, which is isomorphic to structured dropout. SuperFed does NOT track neuron invariance, of course, but the idea of using structured dropout to send a smaller model to the client is there. In its conclusion, SuperFed mentions “adjusting .. for client resource constraints (e.g., compute, bandwidth, memory capacity)”. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes, credit to authors for pointing out limitations: minimal overhead to handle stragglers and degradation of system perf in the worst case when the dynamics of client capacity variability causes instability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We provide the comparison against SuperFed below: Both SuperFed and FLuID propose sending a subset of a global model to edge clients, and both frameworks can send sub-models of varying sizes to each client. However, these two frameworks differ significantly in two main aspects: their objectives and the approach used to form and distribute the subnets to clients. Firstly, the objective of SuperFed is to co-train a global "family of models" in a federated fashion, which ultimately reduces the cost of training multiple global models. On the other hand, FLuID aims to mitigate the performance bottleneck caused by stragglers in federated learning while training a single global model. As a result, these differences in objectives lead to distinct design approaches. **In SuperFed, all clients receive subnetworks of different sizes, while in FLuID, most clients train on the full global network, and only a smaller percentage of clients (stragglers) train with sub-models.** Secondly, the formation and distribution of sub-models also exhibit significant differences between the two frameworks. SuperFed focuses on optimizing the spatial and temporal distribution of subnetworks, ensuring that each subnetwork receives equal exposure to all data partitions. Additionally, it keeps track of the number of times each client has been assigned to the smallest and largest subnetworks. During each round, SuperFed utilizes this information to assign the smallest, largest, and random subnetworks to the clients. **In contrast, FLuID takes a different approach, as pointed out by the reviewer, by tracking neuron invariance to generate sub-models of sizes specifically optimized for the computational capabilities of the straggler device.** We find SuperFed to be highly interesting, focusing on co-training model families in a cost-efficient federated fashion. 
The insights obtained from FLuID could prove valuable for the future advancement of SuperFed. By considering system heterogeneity and individual device capabilities in the load balancing and subnetwork distribution techniques, these frameworks can achieve improved performance and significant savings in training costs.
Summary: The paper presents a framework, FLuID, for cross-device federated learning, where some of the clients are “stragglers”. The training time of these stragglers is significantly higher, hence they dictate the overall training time. FLuID uses Invariant Dropout to dynamically reduce the stragglers’ training time, hence alleviating their overall FL training time degradation. Strengths: The authors tackle a very important problem in cross-device federated learning. Clients are usually edge and mobile devices that are running several processes and applications at the same time. Therefore, the training time varies as compared to dedicated servers used in cross-silo FL setups. Thus, proposing a dynamic approach to identify the stragglers and the appropriate dropout rate is of major importance in this domain. Weaknesses: While this paper targets a very important and relevant problem in the FL domain, I have concerns regarding the proposed framework, as detailed below: * How does FLuID measure the training time of the stragglers, and how can it distinguish between stragglers due to high training time and stragglers due to high upload/download latency? FLuID targets the former, while the latter requires a different solution than proposed in this paper. * Some of the design decisions/details lack further explanation. For instance, the setting of T_target and th (see more about it in the Questions section). * The number of invariants is bounded (at any given training round for any given model), while “Speedup” (as defined in the paper in line 197) is generally unbounded (depending on the stragglers’ training times). Therefore, FLuID might result in an aggressive dropout rate, and thus diminishing returns. * The evaluation is insufficient: (1) It includes small and relatively “easy” datasets to learn, as well as small ML models. (2) The obtained accuracy improvement is quite negligible. (3) Some of the evaluation setup information is missing. 
Mostly the distribution of the stragglers' training time, see also question (1) in the Questions section. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: (1) Lines 196-198: “T_target is set equal to the training time of the ‘next slowest client’ … Thus speedup ensures optimal utilization of available clients”, can you explain this argument? How does such a setting of T_target ensure optimal utilization of the clients, regardless of the stragglers’ training time distribution? (2) Line 200: how does the server measure the training time of clients? The server is aware of the end-to-end latency of given clients that generally depends on the client’s (1) downlink bandwidth and latency, (2) training time, and (3) uplink bandwidth and latency. (3) Line 11 in Algorithm 1 is unclear: what’s “inv”? (4) The discussion about setting th in lines 209-219 is unclear, is it a heuristic? Does it have any theoretical properties/guarantees? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We answer the questions in order. **Weakness 1: Training time of stragglers** FLuID considers the end-to-end latency, upload/download latency, communication time, and training time of the device to determine if it is a straggler. However, similar to prior works in the area, such as Federated Dropout (Caldas et al., 2018) and FjORD (Horvath et al., 2021), the FLuID infrastructure is built to reduce the compute and communication load by only training on sub-models. FLuID focuses on identifying stragglers based on hardware processing capabilities. This approach effectively decreases the computation load on the device, focusing on training a sub-model and reducing the communication load by synchronizing fewer parameters. We empirically observe that download/upload latencies are similar and **do not pose significant bottlenecks** for any device or dataset. For both the CIFAR10 and Shakespeare datasets, the download/upload time accounts for less than 3% of the end-to-end training latency. Similarly, in the case of FEMNIST, even with relatively short actual model training times, the download/upload time remains within 15% of the end-to-end latency. FLuID assigns T_target as the "next slowest client's training time." This choice minimizes non-straggler idle time. FLuID is flexible enough to accommodate various T_target values. However, setting T_target lower than the "next-slowest client's" training time offers no gain, as non-stragglers cannot accelerate. Conversely, setting T_target above it leads to longer idleness and suboptimal performance. **Weakness 3: Unbounded Speedup** FLuID enforces a minimum model size to ensure accuracy is not significantly impacted. Consequently, in scenarios where the straggler's performance is exceptionally slow, FLuID might not **completely** eliminate the performance bottleneck caused by the straggler. 
Nevertheless, even in cases with high skewness in performance, FLuID will still alleviate some performance bottlenecks and reduce idle time for non-stragglers. Throughout our evaluations, we maintain a lower bound for the model size at 20%. **Weakness 4-(1) Evaluation datasets** We evaluated FLuID on established models, datasets, and settings as used by prior works in the federated learning space such as Federated Dropout (Caldas et al., 2018), FjORD (Horvath et al., 2021), Adaptive Gradient Sparsification for Efficient Federated Learning (Han et al., 2020), PruneFL (Jiang et al., 2022), and FLANC (Mei et al., 2022). **Weakness 4-(2) Impact of FLuID** The primary goal of FLuID is to address the performance bottlenecks caused by stragglers in a federated learning scenario without compromising the model's accuracy. Compared to the state-of-the-art federated dropout techniques, FLuID offers up to 18% speedup in training time. FLuID introduces a novel dropout technique that considers neuron updates and performs dropout on invariant neurons to achieve this. This approach, moreover, results in accuracy improvements of up to 1.4 percentage points. **Weakness 4-(3) Distribution of the stragglers' training time** We apologize for the confusion on the evaluation setup. Figure 2a shows the per-epoch training time distribution across the mobile devices in log scale. The standard deviation between the training times of each client is 0.5, 22, and 21 seconds for FEMNIST, CIFAR10, and Shakespeare, respectively. We will add these details to the paper. **Question 1: Setting of T_target** We will include the following explanation in the paper. FLuID aims to determine the optimal training time for straggler devices to efficiently mitigate non-straggler idle time in federated learning. To achieve this, we set a maximum training window while minimizing model dropout, which effectively manages straggler impact without sacrificing accuracy. 
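The T_target rule (next-slowest client's training time), the speedup definition from line 197 of the paper, and the 20% model-size floor described above can be combined into a short sketch. The function and variable names are assumptions for illustration, not the paper's actual code, and the sketch assumes a single straggler that is the slowest client.

```python
def straggler_submodel_ratio(training_times, straggler, floor=0.2):
    """Sub-model ratio for a straggler.

    T_target is taken as the next-slowest client's training time,
    speedup = straggler_time / T_target, and the ratio 1/speedup is
    clipped at `floor` (the 20% lower bound on model size) so accuracy
    is not sacrificed for extremely slow devices.
    """
    t_straggler = training_times[straggler]
    t_target = max(t for c, t in training_times.items() if c != straggler)
    speedup = t_straggler / t_target
    return max(floor, 1.0 / speedup)
```

With training times of 10, 20, and 40 seconds, the slowest client needs a 2x speedup and therefore receives a half-size sub-model; a client ten times slower than T_target would be clipped at the 20% floor rather than given an aggressively small model.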
Notably, FLuID can enhance accuracy by up to 1.4 percentage points compared to leading federated dropout techniques. **Question 2: Training time measurements** The **reported training time** is each client's **actual wall clock training time**. The server's end-to-end training latency is also measured by taking the time difference between when it sends the global model and when it receives training results from each client. Thus, this latency includes the upload/download latency, training time, and communication time. We will clarify these details in the paper. **Question 3: Line 11 in Algorithm 1 “inv”** The “inv” is a typo; it should be the variable IN from line 4 and line 16 of the algorithm. We will fix this in the final version of the paper. **Question 4: Threshold selection** The threshold selection is guided by a heuristic approach rooted in preliminary results and backed by evaluation data. We will clarify this in the paper. The design of FLuID is inspired by preliminary results regarding the characteristics of invariant neurons and their impact on model accuracy. We have presented such preliminary results in Appendix A.1, Figure 6. These results show the number of invariant neurons across the training process. After only 30% of the training rounds are completed, 15%-30% of the neurons exhibit invariance across the CIFAR10, FEMNIST, and Shakespeare datasets. A neuron is invariant if its update is below the threshold value. Different models exhibit distinct characteristics concerning the magnitude of neuron updates. Therefore, choosing different threshold values will lead to different numbers of neurons being categorized as invariant. In Appendix A.2, we showcase the threshold sweep and observe its impact on the invariant neurons observed in the training process. Intuitively, the percentage of invariant neurons increases as the threshold value increases. 
Table 3 shows the percentage of neurons classified as invariant as the threshold value increases, along with the overall training accuracy of the model trained on FEMNIST with a sub-model size of 0.75 for the stragglers. --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns. However, they aren't addressed in the submission, hence it's crucial to revise the paper in case of acceptance accordingly. Based on that assumption, I'm upgrading my ratings to 5. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments. We will incorporate all the clarifications in the final version of the paper.
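The invariance criterion described in the threshold discussion (a neuron counts as invariant when its update magnitude falls below the threshold) can be sketched per layer as follows. Using the max absolute change over a neuron's incoming weights as the update magnitude is an assumption for illustration; the paper's exact norm may differ.

```python
import numpy as np

def invariant_neurons(w_old, w_new, threshold):
    """Indices of output neurons whose weight update is below `threshold`.

    The per-neuron update magnitude is taken as the max absolute change
    across that neuron's incoming weights (an illustrative choice).
    """
    delta = np.abs(np.asarray(w_new) - np.asarray(w_old)).max(axis=1)
    return np.where(delta < threshold)[0]
```

Raising the threshold classifies more neurons as invariant, which matches the trend described for the Appendix A.2 threshold sweep.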
NeurIPS_2023_submissions_huggingface
2023
Learning better with Dale’s Law: A Spectral Perspective
Accept (poster)
Summary: The authors present a simulation study that investigates gradient-based optimisation of RNNs that obey Dale’s Law, i.e. networks with neurons that are strictly excitatory or inhibitory at initialisation and during training. In particular, the authors disentangle the effect of enforcing Dale’s Law during training (synaptic weights cannot flip sign) and the initial spectral properties that result from initialisation with separate excitatory and inhibitory populations of neurons. Strengths: The paper investigates an important open question: What are the origins of the performance gap between RNNs that do / do not obey Dale’s Law? Through a large set of simulation studies and numerical analysis, the authors show in a convincing way that the performance gap does not solely stem from enforcing Dale’s Law during training (and thus restricting the solution space by enforcing signs of weights) but also significantly depends on how networks are initialised and parametrised. Initialising RNNs with columns of positive / negative weights to create populations of excitatory / inhibitory neurons (“ColEI networks”) causes a skewed and multi-modal singular value spectrum, leading to the well-known effect of exploding neural activations / gradients that hampers training and performance. The authors further provide empirical evidence that the Normalised SVD Entropy provides an easy-to-compute predictor of network performance before the onset of training. Further, the authors provide analytical intuition for their simulation results in the discussion. The paper is well structured, written and accessible. The simulation details are explained in great detail, which is very helpful for understanding the research question, results and potential limitations. The figures are generally clear and accessible. 
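The Normalised SVD Entropy mentioned in this review can be computed directly from the singular values of a weight matrix. The sketch below assumes the standard definition (Shannon entropy of the normalized singular-value distribution, divided by its maximum); the paper's exact convention is not reproduced here.

```python
import numpy as np

def normalized_svd_entropy(W):
    """Entropy of the singular-value distribution of W, scaled to [0, 1].

    A uniform spectrum gives a value near 1; a spectrum dominated by a
    few large singular values (as in skewed ColEI initialisations)
    gives a value closer to 0.
    """
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()                       # normalize to a distribution
    p = p[p > 0]                          # drop zeros before taking logs
    return float(-(p * np.log(p)).sum() / np.log(len(s)))
```

A dense Gaussian initialisation scores near 1, while a low-rank-dominated matrix scores much lower, which is the sense in which this quantity can predict trainability before any training occurs.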
Weaknesses: - My main concern is that, as written in Appendix 5.3, the learning rate of DANNs is scaled independently for E and I weights to balance the impact and updates of E and I populations. As ColEI networks seem not to be trained with a similar scaling scheme, this has the potential to explain away or confound results and observations. I am happy to revise my score if this is adequately addressed. - The authors base their analysis on ColEI networks that are initialised following [5]; however, the authors do not provide insight (through simulations or analytically) into whether there generally can or cannot exist a weight initialisation scheme for ColEI networks which does not lead to a skewed and multimodal spectrum of singular values. Thus, the results seem to be limited to a specific initialisation scheme and not to ColEI networks in general. - The authors apply gradient clipping during network training. This may significantly reduce the effects of skewed, multimodal singular value spectra and outliers in the singular value spectrum. As the authors try to disentangle the effects of enforcing Dale’s Law and the spectral properties of initial recurrent weight matrices, gradient clipping may confound results and conclusions. - If I understand correctly, recurrent weight matrices in DANNs are parametrised by a linear combination of three weight matrices. As such, it seems like DANNs have three times more free parameters than ColEI networks. If true, this would pose the question of whether comparisons between DANNs and ColEIs as presented in figure 6 are fair. Minor: - In figure 2D it is difficult to see what is going on – maybe log-axis and a higher alpha value for the scatter point’s colour would help? 
- In figure 2B it looks like the networks haven't been trained until convergence - Missing "in" in line 94 - Broken reference in line 110 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) As written in appendix 5.3, the learning rate of DANNs is scaled independently for E and I weights to balance the impact of E and I and their updates. As ColEI networks seem not to be trained with a similar scaling scheme: How does the IE weight scaling influence simulation results and conclusions, in particular with respect to changes in EI ratio and network size? 2) In line 109 the authors write "[…] the activation variance did not scale with depth […]". Did the authors control for mean and variance shift across layers and increasing depth? It would help to better understand the three different training regimes if there were a plot visualising the mean and variance shift at initialisation across layers and depth for random data for RNNs, DANNs and ColEIs for varying EI ratios. 3) Related to 2 – In order to better understand the learning dynamics and in order to detect anomalies, a plot that visualises the mean and variance of synaptic weights and biases and how they evolve during training would be helpful. Do you expect the mean and variance of the weight matrices and biases within the E and I population to evolve similarly for RNNs, DANNs and ColEIs? 4) Related to 2 – Judging from the learning trajectories (e.g. in Figure 2B/C), to me, it looks like ColEI networks have a significantly higher initial error (at t=0) – however, since the y-axis is cut, I might be wrong. If the initial errors are different, why are they different? 5) In line 116 the authors write "[…] each row of W^IE was initialised as the mean row of W^EE […]". What does that mean? Do all entries in the row have the same value? 6) In Figure 3 it looks like ColEI networks have a large variance in performance.
Do you have an explanation or intuition for why some of the initialisations fail and others succeed? 7) From equation 3, I conclude that DANNs have 3x more free parameters in the recurrent part of the network than ColEIs. This makes comparison tricky, especially when it comes to network size. What does performance look like as a function of free parameters (instead of the number of neurons in the hidden layer)? 8) I would assume that spectrum transplants alter the EI balance. Did you correct for that effect? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed limitations of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to the reviewer for their thoughtful review. We are pleased they share our perspective that this paper investigates an important, unanswered question: the origins of the performance gap between EI and standard RNNs. Furthermore, it is our desire that this paper contributes to a broad spectrum of computational neuroscience research, so we are pleased that you find it well written, accessible and clear. Please see the general comment for the references list. **Weaknesses:** > My main concern is... …I am happy to revise my score if this is adequately addressed. In short, the scalings used for DANN models are specific to that architecture, not to the parameters being E or I, and they do not apply to the ColEI network. In [1], the learning rates for DANN I parameters are scaled based on an analysis of how much an update to each parameter type (Wee, Wei, Wie) will change a layer's output distribution, as quantified by the KL divergence of the layer's output before and after the update. In a ColEI network, the E and I parameters have the same impact on the network's output function (ColEI E and I parameters are like DANN Wee parameters), hence we did not originally run experiments with such scalings. However, we agree this is a valid scientific concern, and have run additional ColEI experiments on sequential MNIST with outgoing (I->I, I->E) and incoming (E->I) I parameter-learning scaled as in DANNs (Rebuttal Fig 4). We find that this scaling impairs learning, in line with our theoretical intuitions. We will add the figure to the appendix. We hope this addresses this important concern. >The authors base their analysis on ColEI... Thanks for this comment. To our knowledge, there is no better method for initializing ColEI networks [2, 3]. Nonetheless, we share the reviewer's suspicion that an initialization exists for ColEI networks that results in better learning.
As such, one contribution of our work is providing a potential foundation for this (currently) unknown method of initialization. Our results imply that if one can initialize ColEI networks with better spectral properties, then they should train well. We will add to the discussion highlighting this point. >The authors apply gradient clipping… Thank you for raising this. We missed an opportunity to highlight just how problematic the standard ColEI spectrum is! Gradient clipping is a common technique for training RNNs [4] and we applied it under the assumption that it would benefit ColEI networks. Indeed, we find that without gradient clipping ColEI networks perform very poorly; please see Rebuttal Figure 3. >If I understand… DANNs do have more parameters than ColEI networks, but not as many as 3 times. Wei and Wie^T are both of dimension #hidden × #inhibitory. So for the case of 10% inhibitory units (Fig 6) there are only 20% more parameters. This is a relatively small difference, but if the reviewer wishes we can run and include experiments in which the total number of free parameters is equal. However, in this case the number of hidden units and the dimensions of the hidden-to-hidden connectivity matrix would no longer be the same between DANNs and ColEI networks, and therefore these simulations would be unfair in a different way. Minor: >In figure 2D … We'll add Fig 2d with zoomed-in axes to the appendix (Rebuttal Fig 6A). >In figure 2B … Please see Rebuttal Fig 6B, we have increased the number of parameter updates from 5 to 18 M. **Questions** Thank you for this list, we will update the camera-ready version to address all of these points. >As written in appendix 5.3… Please see our comment in the weaknesses section. If desired we can include experiments with different ratios and network sizes, but we predict the ColEI performance will track the trends of the current results but be worse.
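To illustrate the kind of spectrum pathology discussed in this thread, here is a minimal sketch. The half-normal magnitudes and the balancing rescale are assumptions for illustration, not the exact initialisation scheme of [5]:

```python
import numpy as np

def colei_matrix(n, frac_e=0.8, seed=0):
    """Column-signed EI matrix: E columns strictly positive, I columns
    strictly negative, with I columns scaled so excitation and inhibition
    cancel on average (half-normal magnitudes are an assumption here)."""
    rng = np.random.default_rng(seed)
    n_e = int(n * frac_e)
    W = np.abs(rng.standard_normal((n, n)))
    W[:, n_e:] *= -n_e / (n - n_e)  # flip sign of and rescale the I columns
    return W

W_colei = colei_matrix(200)
W_rnn = np.random.default_rng(0).standard_normal((200, 200))  # unsigned baseline

s_colei = np.linalg.svd(W_colei, compute_uv=False)
s_rnn = np.linalg.svd(W_rnn, compute_uv=False)

# The signed column structure carries a rank-one mean component that shows up
# as a large outlier singular value; the unsigned Gaussian matrix has none.
outlier_ratio_colei = s_colei[0] / s_colei[1]
outlier_ratio_rnn = s_rnn[0] / s_rnn[1]
```

Under these assumptions the ColEI outlier ratio comes out several times larger than the unsigned baseline's, matching the skewed, multimodal spectrum the review describes.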
>In line 109 the authors… Here, we just meant that we adopt the same general initialization strategy as found in [5,6]. Regarding differences in hidden layer mean and variance over timesteps: we expect that there will indeed be differences, but that these will be another fingerprint of the spectral properties. >Related to 2 – In order… We expect there will be differences but expect it will be unclear how they relate to spectral properties. We are happy to discuss this and the previous point further if the reviewer wishes. >Related to 2 – Judging… All untrained networks have the same initial error, but the first point in the plot is the error after 100 updates, i.e. the error is not logged at the very beginning of training. >In line 116 the authors… We apologize for an error in our text. We will change the text to "each row of W^EI was initialized as the mean row of W^EE". By this we mean that W^EI_ji = (1/n) \sum_k W^EE_ki for all j, and I unit equivalence is then broken by the W^IE weights. Therefore all entries in the columns of W^EI have the same values. >In Figure 3… Thank you for highlighting this aspect of ColEI training; we referenced it in a previous draft of the text and will reinsert this observation in the updated text. Our intuition is that the variance in ColEI networks is due to them having fewer activity modes with appropriate singular values for learning computations over time (the motivation behind SVD entropy), and therefore small differences at initialisation can strongly impact learning. We also find that without gradient clipping ColEI networks are highly unstable, likely due to the distribution of singular values at initialisation [4], and this can also lead to variable performance given small differences at initialization. >From... Please see our response to this issue above. >I would… The spectrum transplant experiments convert EI RNNs into networks without separate populations of E and I units.
We didn’t correct for this because to do so would require changing the spectral properties of the donor networks, and we would not be directly testing the impact of the different network spectral properties. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their thorough reply. Most of my concerns have been adequately addressed. Q2: While I agree that some of the differences in the mean and variance of activations would result from spectral properties, vanishing and exploding activations (and as a result, gradients) can also simply be the result of weights having a too small / large initial scale. Optimising the scale to ensure that the mean / variance stays as stable as possible at initialisation would be a simple way to dissociate the two. Q8: Did you consider evaluating how strongly the transplant experiments alter the EI balance, or at least to flag in the text that it does? --- Reply to Comment 1.1.1: Comment: We’re happy that our previous response managed to adequately address the majority of the reviewer’s concerns. Again, we would like to reiterate our thanks for their thorough and helpful review. We hope our answers below adequately address the reviewer’s final points. Finally, we encourage the reviewer to update their score in line with the degree of their increase in confidence. > I would like to thank the authors for their thorough reply. Most of my concerns have been adequately addressed. Q2: While I agree that some of the differences in the mean and variance of activations would result from spectral properties, vanishing and exploding activations (and as a result, gradients) can also simply be the result of weights having a too small / large initial scale. Optimising the scale to ensure that the mean / variance stays as stable as possible at initialisation would be a simple way to dissociate the two. 
Though we agree that regulating the mean and variance of activations is potentially useful in RNNs, the scale of the weights is part of what determines the spectral properties of the weight matrices, namely, the spectral radius, $\rho$ (Rajan & Abbott, 2006; Bordenave & Chafaï, 2012). Thus, in Figure 2D when we swept over different $\rho$ values, we were in fact sweeping over weight initialization scales. Note that ColEI networks are always worse than the best performing RNNs, no matter the scale of the weights. This strongly suggests that there is no scale of weights that would allow ColEI to perform as well as regular RNNs without fixing the other spectral properties. In fact, the $\rho$ = 1.5 scale used in Song et al. (2016), and throughout the rest of the figures, is selected in part to help keep the initial activations in an appropriate range and avoid vanishing and exploding activations/gradients as much as possible. We hope this clarifies the matter, but if we have misunderstood the reviewer’s point, please let us know. >Q8: Did you consider evaluating how strongly the transplant experiments alter the EI balance, or at least to flag in the text that it does? Thank you for re-raising this, we agree that it is valuable to evaluate and report any impact of these experiments on network EI balance. We therefore ran simulations and evaluated empirical balance at initialization (i.e. the mean of the sum of incoming weights to each neuron) and did not find a statistically significant difference. For a ColEI network with 10% inhibitory neurons and 1000 hidden units, the mean balance across units was -0.0006 with std. deviation 1.12, and for ColEI with the RNN spectrum it was 0.0696 with std. deviation 0.88 (t test, p=0.12). We will highlight this potential caveat and include these results in the text.
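The "empirical balance" measure used above (the mean of the sum of incoming weights to each neuron) is simple to compute. Below is a sketch under an assumed balanced column-signed initialisation; the exact ColEI scheme and the numbers from the rebuttal are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_e = 1000, 900  # 1000 hidden units, 10% inhibitory, as in the rebuttal's example

# Hypothetical ColEI-style weights: positive E columns, negative I columns,
# with I columns scaled so excitation and inhibition cancel on average.
W = np.abs(rng.standard_normal((n, n)))
W[:, n_e:] *= -n_e / (n - n_e)

# Empirical balance: the sum of incoming weights to each neuron (row sums),
# summarised by the mean and standard deviation across units.
balance = W.sum(axis=1)
mean_balance, std_balance = balance.mean(), balance.std()
```

With a balanced construction the mean balance sits near zero while individual units fluctuate around it, which is the pattern the reported statistics reflect.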
Summary: The authors apply Dale's law (that neurons provide exclusively excitatory or inhibitory outputs) to RNNs with two different architectures: ColEIs, which are the "straightforward" way of applying sign constraints per neuron, and recurrent DANNs, based on an architecture with two layers per recurrent step. They compare singular value spectra, network sizes and excitation/inhibition ratios in all three architectures and analyze how they affect performance. They conclude that the reason why DANNs perform as well as RNNs, while ColEIs perform worse, is largely due to differences in the SVD distribution (as measured by Normalised SVD Entropy). Strengths: Dale's law is an important and ubiquitous property of biological neural networks, but its consequences have not been thoroughly explored. It is interesting that a simple architecture can enforce Dale's law while remaining trainable, and while keeping many properties of unconstrained RNNs. This architecture features feedforward inhibition, a biologically plausible motif. Understanding the consequences of such biological structure is a key topic of Neuro-AI. The work offers some nice empirical results, both in performance and in spectral properties. The writing and figures are clear, and good efforts are made to provide explanations and interpretations. Weaknesses: While being advertised as a biological neural network, the architectural constraints that make the recurrent DANN trainable may be the very same ones that make it biologically unrealistic. For instance, there is no direct recurrent inhibition (I to I), and there cannot be direct reciprocal connectivity between E and I units. There is not much mathematical analysis of the results, so the insights are somewhat limited. Clearly there are differences in the spectral distributions, but why these appear and, most importantly, how they are "directly responsible" for decreased performance is not deeply discussed.
There is some discussion of the low-rank nilpotent component of ColEI matrices, but there is more related literature to connect to, most notably from the Ostojic group (no, this reviewer is not from that group). For example, Mastrogiuseppe and Ostojic 2018 and Schuessler et al. 2022, in addition to the Shao and Ostojic 2022 paper the authors do cite (now evidently published in PLoS Comp Bio). Many of the analyses are done only on the sequential MNIST task, which is not particularly natural. Claims are being made on the basis of small differences in errors, and some qualitative analysis. The small differences suggest that the networks are not being pushed very far to demonstrate their inductive biases. Where did the ρ=1.5 and ρ ~= 1/sqrt(3) come from in Section 3.1? How were the sign constraints implemented? Minor: watch out for punctuation errors and typos. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are the inhibitory units in a DANN strictly linear? This seems necessary if the DANN weight matrix is actually an exact reparameterization of the RNN weight matrix, as advertised (L95). Either way, this is unclear. And this has important implications for the story: if the DANN is an exact reparameterization, then isn't it trivial that RNN and DANN spectra are identical? Why the variance of line 115? Why ρ=1.5 and ρ ~= 1/√3 in Section 3.1? Can you explain the 3-layer architectures used in the Sequential MNIST figures? Do the eigenvalue and singular value visualizations of Figures 5,6 include values from weights from all layers? If so, are there any differences between the eigen/singular values between layers? Line 200: Is initialization the only difference between a ColEI network without sign constraints and a standard RNN? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful review. Please see the general comment for the references list. > **Strengths**: Dale's law is an important and ubiquitous property... Many thanks! **Weaknesses:** >While being advertised… Thank you for highlighting this aspect of DANNs. We will add a note to the text that the lack of I-I connections in DANNs is not biologically realistic and that this could be changed in future work to make them more realistic; this is an important point of clarification. However, we want to highlight that the central motivation of this paper is to understand why EI networks can fail to train well, and our data support the conclusion that the spectral properties of the weights are the main reason. Importantly, this central conclusion is not dependent on the specific DANN formulation being completely biologically faithful. The reviewer is correct that adding I-to-I connectivity would increase the biological faithfulness of recurrent DANNs. But the spectral properties would be the same, because fast recurrent inhibition provides a "winner-take-all" mechanism, which would simply sparsify the activity. Therefore, for simplicity, we did not explore this aspect in our work. But the reviewer's point holds, and we will be sure to highlight this biological infidelity of DANNs in our camera-ready manuscript. We think it is an exciting direction of future research. One thing: we are unsure what the reviewer means by "there cannot be direct reciprocal connectivity between E and I units", because the absence of I-I connections should not affect reciprocal E-I connectivity. >There is not much… We agree with the reviewer that our work is largely an empirical analysis rather than a mathematical analysis of the origins of poor learning in ColEI networks. We provide a discussion of the low-rank nilpotent matrices mainly to provide intuition.
However, we would strongly contest that the empirical nature of our work means the insights are limited. As far as we know, the insights we present here through our experiments are novel, and this is the first work that directly associates spectrum pathology with poor ColEI learning performance. Overall, this work is a major shift in the field's understanding of why recurrent networks of E and I units perform poorly. We also wish to distinguish our study from prior works like Mastrogiuseppe and Ostojic 2018 and Schuessler et al. 2022. First, our work focused on RNNs respecting Dale's law whereas the mentioned works did not. Also, the mentioned works mainly focused on exploring the relationship between low-rank connectivity, (low-rank) network dynamics and the network computations they support, rather than on understanding the learning performance of E-I RNNs. In contrast, in this work we aim to directly link the singular value spectrum with the learning performance of E-I RNNs when we train them with gradient descent. Past research also mainly looks at the spectral properties of random networks, with little focus on learning performance [7,8]. Therefore, we hope this work can lay the foundation for future research on optimizing network performance without breaking biological constraints. But we agree the mentioned works are relevant and we are happy to cite them. >Many of the analyses … Thank you for highlighting this. We agree it is very important to verify that our results and analyses are consistent across data distributions, especially more naturalistic ones given this work's connections to neurobiology. As a result of your comment we have included a new naturalistic image task and also ran the analyses presented in Figures 5 and 6 on the Penn Treebank. Please see the general response for more details. We believe these experiments substantially improve the paper; thank you again for suggesting this.
>Where did the ρ=1.5 and ρ ~= 1/sqrt(3) come from in Section 3.1? ρ = 1.5 is from [2]; 1/√3 comes from PyTorch's default initialization for RNNs. >How were the sign constraints implemented? Sign constraints are implemented via projected gradient descent. After a parameter update, outgoing parameters from E or I neurons are clamped to be all positive or all negative, respectively. **Questions:** >Are the inhibitory units… Thank you for raising this. As noted in [1], the nonlinearity used for inhibitory units in DANN is ReLU. The subtlety here is that the post-activation from the previous layer of DANN is non-negative due to ReLU, rendering the pre-activation of inhibitory units of DANN non-negative as well (because Wie is a non-negative matrix). Therefore, ReLU and linear activation functions are equivalent for the inhibitory units in DANN. The expression in line 95 is a simplification of the subtlety explained above. In addition, we wish to emphasize that the equivalence of RNN and DANN spectra is nontrivial because both the low-rank matrix Wei*Wie and the fully positive matrix Wee in DANNs can lead to outliers in the spectrum at initialization [9] and must be carefully chosen in order to cancel each other out. Rebuttal Fig. 5 shows the spectrum of a poorly initialized DANN weight matrix even though all elements of the net-effect matrix W = Wee - Wei*Wie have mean zero. >Why the variance of line 115? Why ρ=1.5 and ρ ~= 1/√3 in Section 3.1? Please see above and also the footnote in the main text (page 5) for the correspondence between variance and ρ. >Can you explain the 3-layer architectures… By three layers we mean that there are 3 RNN modules stacked on top of each other. >Do the eigenvalues …between layers? All layers have the same number of recurrent units and are initialized with the same random initialization scheme; therefore they are the same. >Line 200: Is initialization …RNNs? Yes, correct!
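The equivalence discussed in this thread (that ReLU and linear activations coincide for DANN inhibitory units because their pre-activation is already non-negative) can be checked numerically. The matrix names follow the rebuttal's Wee/Wei/Wie convention; the sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_inhib = 64, 8

def relu(x):
    return np.maximum(x, 0.0)

# Non-negative DANN parameter matrices (sizes are illustrative assumptions).
Wee = np.abs(rng.standard_normal((n_hidden, n_hidden)))  # E -> E
Wei = np.abs(rng.standard_normal((n_hidden, n_inhib)))   # I -> E
Wie = np.abs(rng.standard_normal((n_inhib, n_hidden)))   # E -> I

h = relu(rng.standard_normal(n_hidden))  # previous post-activation, non-negative

# The inhibitory pre-activation Wie @ h is non-negative, so ReLU is the identity:
assert np.allclose(relu(Wie @ h), Wie @ h)

# Hence the layer acts linearly through the net-effect matrix W = Wee - Wei @ Wie:
out_two_stage = Wee @ h - Wei @ relu(Wie @ h)
out_net_effect = (Wee - Wei @ Wie) @ h
assert np.allclose(out_two_stage, out_net_effect)
```

This is the sense in which the DANN weight matrix reparameterizes an RNN weight matrix: on non-negative inputs the two-stage computation collapses to a single effective matrix whose spectrum can be compared with a standard RNN's.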
Summary: The paper investigated the problem of why ColEI networks have impaired learning, and the authors experimentally found that, rather than the sign constraint, the spectral properties of the weights at initialization contribute the most; they further experimentally showed that the E/I ratio and network size change the spectral properties and thus lead to different learning performance. Additionally, they showed that DANNs in RNN form show similar performance to normal RNNs in 3 different tasks. Strengths: **Originality** This is a follow-up work on applying DANN to RNN. I appreciate the detailed discussion on why it is hard to train E-I separated RNNs. **Quality** The effectiveness of DRNN is tested in three different tasks, which is great! And all hyperparameter selections are listed in the appendix. The discussion on initialization spectral properties contains proper ablation experiments and an extended discussion on E-I ratio and network size. **Clarity** The paper is clearly written, with details in the appendix. Some parts of the results explanation can be confusing (see the second question). **Significance** I believe the paper will be of interest to the computational neuroscience community. Weaknesses: - The discussion of initialization spectral properties is only done for sequential MNIST. Does the observation on SVD entropy hold across data distributions? I'm a bit concerned about how generalizable the results are (for details see questions). - For the sign-constrained training, how are the gradients rectified? Set to zero? If set to zero, it may lead to the silent unit problem. Did the authors exclude the possibility that sign-constrained RNNs learn worse due to an increased number of silent units? - Typo: line 110 citation Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My main concern is on the spectral discussion - Does it extend to other tasks with different data distributions? - Why does the clusteredness of singular values matter for learning?
The intuition currently given in the paper seems to mainly pertain to large singular values; yet it is shown that ColEI networks with large singular values can learn well. Then what is the intuition behind SVD entropy tracking performance? This lack of intuition is what prompted me to ask the first question, as it is not immediately clear to me how generalizable the current results are. And thus a current score of 5. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful review. Please see the general comment for the references list. >**Weaknesses** The discussion of initialization spectral properties is only done for sequential MNIST. Does the observation on SVD entropy hold across data distributions? I'm a bit concerned about how generalizable the results are. Thank you for raising this. We agree it is very important to verify that SVD entropy is predictive of performance across different data distributions. We have therefore run more experiments (Penn Treebank and the new naturalistic image task) to verify the generalizability of our results. Please see the global response and the Rebuttal Figures. Thank you for this comment, it has helped us improve the paper. >For the sign-constrained training, how are the gradients rectified? Set to zero? If set to zero, it may lead to the silent unit problem. Did the authors exclude the possibility that sign-constrained RNNs learn worse due to an increased number of silent units? We would like to clarify that we do not rectify the gradients. Sign constraints are realized using projected gradient descent, i.e. after a parameter update via gradient descent as normal, parameters are then clamped to be all positive or all negative depending on whether they are E or I. We will make this clear in the methods section. In response to the reviewer's second comment, we investigated the distribution of weights and found that the sparsity of the weight matrices for ColEI networks and regular RNNs is very similar, making silent units from sign constraints unlikely to be an issue. This observation also aligns with our observation that sign constraints have a minimal effect on learning performance. > **Questions:** My main concern is on the spectral discussion. Does it extend to other tasks with different data distributions? Please see our response to this question above.
> Why does the clusteredness of singular values matter for learning? The intuition currently given in the paper seems to mainly pertain to large singular values; yet it is shown that ColEI networks with large singular values can learn well. Then what is the intuition behind SVD entropy tracking performance? This lack of intuition is what prompted me to ask the first question as it is not immediately clear to me how generalizable the current results are. And thus a current score of 5. Thank you for communicating this point. We originally shared the reviewer's puzzlement, as larger ColEI networks have even larger maximum singular values (σ_max) and yet train better. Indeed, this observation led us to propose normalized SVD entropy as a candidate metric for tracking performance and spectrum pathology. As explained in the main text (lines 279 - 287), the intuition behind SVD entropy tracking performance is that the bulk of the singular values also matters [10,11]. Put another way, for the large networks, although there are a small number of directions with a large singular value, the bulk of the singular values are smaller, which helps ameliorate the problem of exploding gradients by rotating the activity away from the directions corresponding to the large singular values. As such, to understand learning performance we require a metric that captures the overall effect of the entire singular value spectrum, rather than just a few singular values, which we do using the SVD entropy metric. Thanks to the reviewer's comment, we realize that the current manuscript may benefit from describing this insight earlier in the results, rather than in the discussion (lines 279 - 287) after the results are presented. We will also note that the additional experiments with the Penn Treebank and the naturalistic images further demonstrate the validity of using SVD entropy to track ColEI performance. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and added experiments!
My major concern with generalizability is now resolved. With the added experimental results, I believe the paper is now stronger and have thus increased my score accordingly. > Intuition explanation The rebuttal made the authors' point very clear! I'd suggest substituting part of lines 279 - 287 with the writing in the rebuttal. > Sign constraint implementation I understand they are clamped, but clamped to what value specifically? Say SGD updates w_ij from 0.1 to -0.1; is it then clamped to 0 or 0.1 or some other positive value? If it's clamped to 0, then the problem of silent units stands. Also, to check for the silent unit problem, I'd check for activity sparsity instead of weight sparsity. I would encourage the authors to make the first part of my question clear in the final version. --- Reply to Comment 1.1.1: Comment: We would like to express our gratitude again for your insightful feedback. We will change the main text as suggested and hope our responses below adequately address the reviewer's remaining concerns. - To clarify the sign constraint implementation: yes, if $w_{ij}$ is initialized as 0.1 and SGD attempts to change it to -0.1, then we clamp it at 0. But if the update given by another mini-batch attempts to change $w_{ij}$ back to a positive value, e.g. 0.05, then $w_{ij}$ will change to 0.05 as normal. - To exclude the possibility of a silent unit problem due to sign constraints, we ran experiments on the sequential MNIST task using ColEI networks with/without sign constraints, and we checked whether there were any silent units every 100 updates during training. Interestingly, we did not find any silent units in any checkpoint, regardless of sign constraints. Our intuition is that to be a silent unit in an RNN, a neuron must be inactive for all datapoints and across all timesteps, whereas in feedforward networks silent units only need to be inactive for all datapoints. Therefore, intuitively it should be much harder to have silent units in RNNs than in feedforward networks.
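The clamping rule spelled out in this exchange can be sketched in a few lines. The function name and scalar form are illustrative, not the authors' implementation:

```python
def clamped_sgd_step(w, grad, lr, sign):
    """One projected-gradient step for a sign-constrained weight: take the
    usual SGD step, then clamp back to the allowed half-line at 0.
    sign is +1 for excitatory (w >= 0) and -1 for inhibitory (w <= 0)."""
    w = w - lr * grad
    if sign > 0:
        return max(w, 0.0)
    return min(w, 0.0)

# An excitatory weight pushed toward -0.1 is clamped at 0 ...
w = clamped_sgd_step(0.1, grad=2.0, lr=0.1, sign=+1)
assert w == 0.0
# ... but a later update in the positive direction recovers it as normal.
w = clamped_sgd_step(w, grad=-0.5, lr=0.1, sign=+1)
assert w == 0.05
```

Note that the clamp does not freeze the weight permanently: a weight parked at 0 can still move away from the constraint boundary on a later mini-batch, which is why clamping alone need not create silent units.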
Summary: This paper presents a comparison between the performance of standard RNNs and two classes of models: the ColEI network and the DANN, which incorporate Dale's Law with the constraint that units in a circuit should be excitatory or inhibitory but not both. The DANN achieves similar performance to a non-signed RNN, but ColEI shows inferior performance. The paper demonstrates that the spectral properties of the recurrent weight matrix at initialization have a more significant impact on network performance than sign constraints, which may explain why some forms of EI network learn better than others. Strengths: - This paper uses analytical approaches from machine learning to explore network models that adhere to biological principles, which may inspire the development of more biologically realistic models that also have good performance. - A novel contribution is the introduction of normalised SVD entropy as a measure of spectrum pathology at the initialization stage which predicts the final performance of a network before training. - It also includes extensive experiments that test three RNN architectures on three different tasks with progressive difficulty levels. Weaknesses: This work builds upon existing research about EI networks with incremental advancements, offering interesting insights into how the spectral properties of the recurrent weight matrix at initialization may impact network performance. However, the practical application for designing EI networks that perform well requires further clarification. It is unclear to me whether DANNs can be effectively improved with this insight, and even with a more appropriate spectrum, ColEI falls short of standard RNN or DANN performance.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Considering the biological motivation of having dedicated excitatory and inhibitory units, it would be valuable to provide a deeper validation or interpretation of the results in the context of neurobiology. For example, could the finding about the different effects of changing the ratio of E/I units on ColEI networks and DANNs offer insights into the biological relevance of these two models to the neural circuit in the brain? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While the focus on RNNs is relevant to the paper's objectives, it would be helpful to briefly acknowledge potential implications for other types of networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
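The normalised SVD entropy highlighted as a strength in this review can be sketched in a few lines. The paper's exact normalisation is not spelled out here, so this common variant (Shannon entropy of the normalised singular values, divided by $\log n$ so a flat spectrum scores 1 and a concentrated one near 0) is an assumption:

```python
import numpy as np

def normalized_svd_entropy(W, eps=1e-12):
    """Entropy of the singular-value distribution of W, scaled to [0, 1]."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / (s.sum() + eps)
    return -np.sum(p * np.log(p + eps)) / np.log(len(s))

rng = np.random.default_rng(0)
n = 100
healthy = rng.standard_normal((n, n)) / np.sqrt(n)       # standard RNN-style init
pathological = np.outer(rng.standard_normal(n),
                        rng.standard_normal(n))           # rank-1: concentrated spectrum
print(normalized_svd_entropy(healthy))       # close to 1: flat, healthy spectrum
print(normalized_svd_entropy(pathological))  # close to 0: concentrated spectrum
```

Under this definition the metric can be read off a weight matrix at initialization, which is what makes it usable as a before-training predictor of performance.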
Rebuttal 1: Rebuttal: We thank the reviewer for their review. Please note that the reference numbers refer to the references list in the general comment. >**Weaknesses:** However, the practical application for designing EI networks that perform well requires further clarification. It is unclear to me whether DANNs can be effectively improved with this insight, and even with a more appropriate spectrum, ColEI falls short of standard RNN or DANN performance. Thank you for this comment; we agree that adding a paragraph to the discussion that clarifies and expands on the practical applications of our results to future work would improve the paper. First, the novel insight that the spectral properties are the main issue for EI network learning vs standard networks can indeed be applied to DANNs. We can improve the DANN spectrum by carefully selecting $W_{ee}$, $W_{ei}$, and $W_{ie}$ to approach an orthogonal net-effect matrix $W = W_{ee} - W_{ei} W_{ie}$ with singular values ≈ 1, which has been shown to be helpful for RNNs [10,11]. We have data showing improved performance on sequential MNIST from this strategy. We did not include this direction in the original draft due to the page limitation and our focus on explaining poor training in ColEI networks, but we can add it to the appendix if helpful. Second, it is true that ColEI networks fall short of standard RNN and DANN performance, even with hyperparameter choices that make the spectrum more appropriate (i.e. larger sizes and ratios of E to I). However, the insight that it is the spectrum that underlies the poor ColEI performance is a foundational one. We hope our work serves as the basis and inspiration for an alternative initialization for ColEI networks that improves their learning. We will add a comment in the discussion highlighting this as a future research direction.
Finally, another exciting future direction is to hybridize DANNs and ColEI networks, as DANNs will remove the far-outlying singular values of the ColEI spectrum and will have the effect of enabling sign changes in the effective recurrent weight matrix (which still makes a minor contribution to poor ColEI performance). This hybridization would also be satisfying from a neuroscientific standpoint, as the inhibitory neurons in ColEI networks can be thought of as modeling different inhibitory populations (e.g. SOM+ and CCK+ cells) than the fast inhibitory populations modeled by DANNs (PV+ cells). Again, we omitted this discussion originally due to the page limit but believe this is a very promising direction and would be happy to add some discussion of it. > **Questions:** Considering the biological motivation of having dedicated excitatory and inhibitory units, it would be valuable to provide a deeper validation or interpretation of the results in the context of neurobiology. For example, could the finding about the different effects of changing the ratio of E/I units on ColEI networks and DANNs offer insights into the biological relevance of these two models to the neural circuit in the brain? We agree, and will strengthen the neurobiological focus for the camera-ready version of the paper. Related to our response above, we think that the two models have slightly different neurobiological interpretations, as the populations of inhibitory neurons that they naturally model are different: ColEI RNNs are a more general model of I cells, whereas DANNs specifically model fast PV+ interneurons. Interestingly, and to the reviewer’s point, our finding that ColEI networks learn better with more inhibitory cells (a smaller E/I ratio), and that DANNs learn well even with very large E/I ratios, fits with the small number of PV+ cells compared to the total pool of inhibitory cells (~20%) [12] in real neural circuits.
Furthermore, recent work has found that the human and macaque cortex contains approximately three times as many inhibitory interneurons as mice [13], which is consistent with our finding that a more balanced ratio of E to I results in a better spectrum and better learning for ColEI networks. >**Limitations:** While the focus on RNNs is relevant to the paper's objectives, it would be helpful to briefly acknowledge potential implications for other types of networks. Thank you for pointing this out. We will edit the text to highlight how we expect these insights to apply to all EI networks: e.g. feedforward MLPs, convolutional networks, and potentially spiking neural networks.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort in providing a set of positive, high-quality, and helpful reviews. We would like to take this opportunity to reiterate the main contribution of our work. We investigate the following question: why do recurrent neural networks (RNNs) that have separate excitatory (E) and inhibitory (I) neurons often fail to train as well as regular RNNs? We present evidence that strongly supports the theory that the traditional column E-I network design is handicapped not by sign constraints but because its synaptic weight spectrum is multimodal and dispersed, unlike that of standard RNNs. We also show that there are E-I networks based on fast inhibition (Dale’s ANNs or DANNs) that have spectral properties akin to standard RNNs, and these networks train well. This work substantially shifts our understanding of the origins of poor learning in column EI networks, and suggests that, contrary to popular assumptions, the imposition of sign constraints on a neural network is not the main impediment to good training. Here we communicate the more general and most substantial improvements we have made as a result of reviewer feedback: * We have added a new naturalistic image categorization task and verified that all our results and analyses apply in this new setting (see “Natural object recognition task description” below and Rebuttal Fig 1). Consistent with our previous findings, spectrum transplant experiments decrease/increase error by ~40%, whereas adding/removing sign constraints accounts for only ~0.3% and 8% error differences, respectively (Rebuttal Fig 1C & Table 1). * Similarly, we have extended the sequential MNIST results presented in Fig 5, 6 to the Penn Treebank dataset, which is both more challenging and more naturalistic (as it is a natural language processing dataset). Again, results here are similar to sequential MNIST; we find that normalized SVD entropy is anti-correlated with perplexity.
* We have run additional experiments without gradient clipping, which showcase the problematic spectral properties of ColEI networks even more clearly. We believe that these additions attend to some of the most pressing questions raised by the reviewers and strengthen the conclusions of the paper. We also hope each of the individual, more directed responses adequately addresses the additional reviewer comments, and we look forward to the discussion in the next phase. Due to the new character limit per response we have had to prioritize brevity for some of the discussion points, so please do not hesitate to request further clarification. ## Natural object recognition task description: Following Majaj & Hong, et al., 2015, we designed the following naturalistic object recognition task for RNNs: at the first time step, the input to the RNN is a flattened achromatic image of one of 64 natural objects from 8 categories (e.g., Fruits, Faces) with a random natural scene background. The network then receives no inputs and must retain information about the image over 5 timesteps to output the classification result at the last time step. The task presents two main challenges: retaining the information over time without additional inputs, and correctly mapping different natural objects, like raspberry and watermelon, to the same category, such as Fruits. Please see Majaj & Hong, 2015 for more details. https://www.jneurosci.org/content/jneuro/35/39/13402.full.pdf ## References: [1] Cornford et al., 2021 [2] Song et al., 2016 [3] Yang & Wang, 2021 [4] Pascanu et al., 2012 [5] He et al., 2015 [6] Glorot & Bengio, 2010 [7] Rajan & Abbott, 2006 [8] Harris et al., 2023 [9] Benaych-Georges & Nadakuditi, 2012 [10] Arjovsky et al., 2016 [11] Le et al., 2015 [12] Bezaire & Soltesz, 2016 [13] Loomba et al., 2022 Pdf: /pdf/1f62347222cee867ddfb06f775f83f2f8384f8a9.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning Invariant Molecular Representation in Latent Discrete Space
Accept (poster)
Summary: This paper presents a new graph neural network architecture and objective function that encourages models to identify features that are invariant to distribution shifts in the data. The proposed method, iMoLD, performs invariant feature extraction in the latent embedding space and leads to improved performance across an extensive set of molecular property prediction tasks. Strengths: - The presented idea is novel and leads to improved performance across a variety of datasets and tasks. - Experimentation is extensive with good results. - The ablation analysis in Section 5.3 and sensitivity analysis from Appendix C are useful. Weaknesses: #### **Incorrect definitions in Section 3.1** - I believe there is some issue in the notation of Section 3.1. Specifically, the definitions of $P_{train}, P_{test}, P_{all}$ as collections of distributions mean that they are not themselves valid probability distributions. I believe some re-normalization would be required here. --- #### **Use of term “Discrete Latent space” is unclear** - Why do the authors claim they have a **“discrete”** latent space? The residual connection between $\mathbf{H}$ and the quantized representation means that embeddings are continuous. Additionally, the element-wise gating to create $\mathbf{H}^{\mathrm{Inv}}$ and $\mathbf{H}^{\mathrm{Spu}}$ means the model does not have a discrete latent representation. --- #### **Unclear elements about the learning objective** - The notation in Equation (11) is confusing. Specifically, what is the dimensionality of the $\tilde{\mathbf{z}}_i^{\mathrm{Inv}}$? Are you concatenating multiple batch samples from $\mathbf{z}^{\mathrm{Spu}}$ to $\mathbf{z}_i^{\mathrm{Inv}}$ or just one random one? - There seems to be an inherent tension between the residual connection and the commitment loss $\mathcal{L}_{\mathrm{cmt}}$. That is, if this loss were perfectly minimized, then the residual connection would be negated.
- The role of $\gamma$ in the scoring regularization is not well described. --- #### **Baseline presentation is confusing** - It seems that the authors are conflating baselines in terms of loss objectives and in terms of model/architecture designs. It would be good to clarify which baselines rely on the same architecture but have different objectives (e.g. ERM) and which constitute an entirely different modeling scheme (e.g., CIGA). For the baselines that simply differ in objective, it would be good to also make explicit (could go in Appendix) if any model / architecture adjustments were also applied. --- #### **Other minor comments** - In line 147, the notation for edges $\mathcal{E}$ is overloaded, since the same variable is used to denote environments in Section 3.1. - At the end of Section 5.4 (lines 341-345), the authors seem to be mixing the meaning of low/high in terms of whether low = “good” or low = ”bad”. - $D$ and $Score$ should be defined explicitly in Figure 4 caption. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Q1) It is not clear to me why vector quantization (VQ) is the right “bottleneck” to use here. Other than restricting the model’s expressivity, which can be done in other ways such as weight regularization, why is VQ particularly suited for this setup? Q2) Why is the stop gradient applied in equation 12? Is this simply for computation efficiency / stability? If so, this should be made explicit in the text. Q3) For the GOOD-PCBA experiment, why is average precision (vs. average accuracy, recall, or ROC-AUC) used? Q4) I know that there is an extensive sensitivity analysis in the appendix, but what are the hyperparameter configurations for the reported results in the main text (Tables 1 and 2)? Are the “best” iMoLD models sensitive to hyperparameter choice or do you see a general trend as to which configurations perform best? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: - The current methodology does seem quite intricate with many loss terms that are justified in a somewhat ad hoc manner. - There is no real discussion of limitations / potential pitfalls relative to previous work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### (Q1) Distribution definition The $P_{train}$, $P_{test}$ and $P_{all}$ given in the manuscript are in the form of collections of distributions, not distributions themselves, and need to be renormalized to avoid ambiguity. We will clarify that in the manuscript. ### (Q2) The claim of discrete space The term "discrete space" in the title highlights the Vector Quantization (VQ) operation within the methodology, which we use to tackle the OOD challenge for molecules; this is a contribution of this study. We agree that the residual connection between the embedding H and its quantized representation is continuous. Indeed, we did not mean that the final representation is discrete. In line 185 we indicate that the final representation is a combination of both continuous and discrete components. We will refine this claim in the final version. ### (Q3) Some unclear elements - In equation (11), the dimensionality of $\widetilde{\mathbf{z}}^{\mathrm{Inv}}_i$ is $2d$. To derive $\widetilde{\mathbf{z}}^{\mathrm{Inv}}_i$, we first shuffle a batch of $\mathbf{z}^\mathrm{Spu}$ randomly, then concatenate ${\mathbf{z}}^{\mathrm{Inv}}_i$ with the corresponding counterpart $\mathbf{z}^\mathrm{Spu}$ in the shuffled batch. - The effect of the commitment loss is to restrain the input $\mathbf{h}$, preventing it from deviating excessively from the embeddings $\mathbf{e}$ within the codebook. It is crucial for $\mathbf{e}$ to be in close proximity to $\mathbf{h}$ to serve effectively as a quantized representation. When the commitment loss is perfectly minimized, the distributions of $\mathbf{e}$ and $\mathbf{h}$ overlap. It is important to note that the residual connection cannot be fully eliminated: a residual gap persists due to the inherent limitations imposed by the finite size of the codebook.
- $\gamma$ is used to constrain the size of the selected invariant features. We employ $\frac{<\mathbf{J},\mathbf{S}>\_{F}}{|\mathcal{V}| \times d}$ to calculate the ratio of identified invariant features. To prevent an overabundance or scarcity of invariant features, we set a threshold $\gamma$, which drives the model to select an invariant-feature ratio that closely approximates the predefined value of $\gamma$. ### (Q4) The baseline presentation Thanks for this advice; we will add a table to Appendix illustrating the loss objective and model architecture design for each baseline. ### (Q5) Other minor comments Thank you for pointing out these issues; we will revise the manuscript correspondingly. ### (Q6) Why VQ is chosen We choose VQ for the following main reasons: 1. The citations [56, 57, 58] in lines 122-123 provide theoretical analyses that demonstrate how VQ can enhance noise robustness. 2. Intuitively, VQ acts as a bottleneck. When the input is perturbed by distribution shifts, discretization is a potent mitigator of such noise, keeping the output unaffected, thereby enhancing model generalization and alleviating the easy-overfitting issue caused by distribution shifts. 3. Certain weight regularization methods, such as IRM, do not perform well when confronted with the molecular OOD problem, as shown in Tables 1 and 2; their performance is even worse than the ERM baseline. ### (Q7) The stop-gradient in equation 12 The stop-gradient operation is used to prevent collapse. This technique is presented in a self-supervised learning method (citation [36] in line 217). We will clarify this in the manuscript. ### (Q8) Why average precision is used in PCBA Due to the extremely unbalanced classes (only 1.4% positive labels), we use Average Precision (AP) as the evaluation metric.
### (Q9) The hyperparameter configuration and the hyperparameter sensitivity The chosen hyperparameters are detailed in the uploaded PDF file in the global response. Through parameter sensitivity analysis experiments, we observe that iMoLD outperforms ERM within a reasonable range of parameters. We also observe some general trends through hyperparameter analysis, for example, the performance tends to first improve and then slightly decrease as the codebook size increases, the standard deviation of performance increases as $\lambda_1$ increases, and the standard deviation is the largest when $\lambda_3$ is zero, indicating that the commitment loss in VQ can increase the performance stability. --- Rebuttal Comment 1.1: Title: Thank you for the detailed response. Comment: I appreciate the detailed response. I do not have any additional comments or questions at this time. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable feedback and unwavering support throughout the review process.
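The shuffle-and-concatenate construction described in (Q3) of the rebuttal above can be sketched compactly. Shapes, names, and the use of a single permutation per batch are ours, for illustration only:

```python
import numpy as np

def make_pairs(z_inv, z_spu, rng):
    """Concatenate each invariant representation z_inv_i with the spurious
    representation of a randomly shuffled batch element, giving
    2d-dimensional vectors as in the rebuttal's description of Eq. (11)."""
    perm = rng.permutation(len(z_spu))
    return np.concatenate([z_inv, z_spu[perm]], axis=1)

rng = np.random.default_rng(0)
batch, d = 4, 8
z_inv = rng.standard_normal((batch, d))
z_spu = rng.standard_normal((batch, d))
z_tilde = make_pairs(z_inv, z_spu, rng)
print(z_tilde.shape)  # (4, 16)
```

Each row pairs one invariant vector with exactly one randomly drawn spurious vector from the batch, matching the answer that a single shuffled counterpart (not multiple samples) is concatenated.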
Summary: While significant advances have been made in molecular representation approaches, conventional approaches typically assume that data sources are independent and sampled from the same distribution. However, molecules in real-world drug development often show different characteristics, which might be from a different distribution. This issue is called the out-of-distribution (OOD) problem. OOD challenges the generalization capability of molecular characterization methods and can degrade the performance of downstream tasks. Unlike previous studies' "first-separation-then-encoding" approach, this study proposes a "first-encoding-then-separation" molecular graph representation paradigm. Specifically, the authors first employ a GNN to encode the molecules and then employ a residual vector quantization module to alleviate the overfitting of the training data distribution while preserving the expressiveness of the encoder. Then, they score molecular representations using another GNN that measures the contribution of each dimension to the target in the latent space, thus clearly distinguishing between invariant and spurious representations. Finally, the authors propose a self-supervised learning objective that encourages the recognition of invariant features and effectively preserves label-relevant information while discarding environment-relevant information. The authors conducted experiments on real-world datasets. The experimental results show that the proposed method outperforms the SOTA methods. Strengths: - Unlike the traditional "first-separation-then-encoding" approach, the authors propose a "first-encoding-then-separation" paradigm that uses an encoding GNN and a scoring GNN to identify invariant features from a graph. The authors use the residual vector quantization module to make a balance between the model's expressivity and generalization. 
The quantization is used as a bottleneck to strengthen generalization, and the residual connection complements the model's expressivity. Moreover, the authors design a self-supervised invariant learning objective to facilitate the precise capture of invariant features. This objective is generic, task-independent, and applicable to a variety of tasks. - The model is clearly described. - The authors conducted comprehensive experiments on 18 real-world datasets. The experimental results show that the proposed model achieved stronger generalization against SOTA baselines. - Code has been released and will be valuable for future related research. Weaknesses: - There are some typos in the paper. For example, a lack of space before references in line 31. - OOD is repeatedly defined in lines 29 and 90. In addition, please use "OOD" for "out-of-distribution" where it appears later in the text, such as lines 130 and 353. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors show 3D visualization graphs representing the extracted features, as in Fig. 4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### (Q1) Typos Thanks for pointing out these typos; we will revise them accordingly in the updated version. ### (Q2) 3D visualization We provide the 2D and 3D visualizations of extracted features together in the uploaded PDF file in the global response. The 3D visualization results show similar characteristics to their 2D counterparts. For instance, the distribution pattern of ERM representations displays increased discreteness, and both ERM(+VQ) and ERM(+RVQ) representations show partially extended flows. In contrast, iMoLD yields a more uniform and consistent representation. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' effort in improving the paper. No other questions. --- Reply to Comment 1.1.1: Comment: We are truly grateful for your valuable comments and unwavering support.
Summary: This paper proposes a molecular self-supervised learning method for out-of-distribution generalization. The authors introduce a "first-encoding-then-separation" framework to learn invariant features. To do so, the authors design a discrete latent space with VQ-VAE. The experimental results show that their method improves over previous baselines in various out-of-distribution downstream tasks. Strengths: - The paper is well written and easy to understand. - The pre-training objective to separate invariant and spurious features seems to make sense to me. Weaknesses: - The complexity of the proposed method is high. The loss function contains several tunable parameters, and the ablation study (in Figure 5) shows that the performance is quite dependent on the choice of hyperparameters. - It seems vague why a discrete latent space is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How are the hyperparameters chosen in Tables 1 and 2? - Is there specific intuition why a discrete latent space is useful for out-of-distribution molecular representation learning? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### (Q1) Hyperparameters The results presented for the baseline models within the benchmark are derived from an exploration of their respective hyperparameters. Similarly, though limited by computational resources, we searched over a partial set of our method's hyperparameters and empirically set the rest to fixed values. The ranges of the hyperparameters are described in Appendix B.2, and the chosen hyperparameters are detailed in the uploaded PDF file in the global response. ### (Q2) Intuitive understanding of why a discrete latent space In this work, we leverage VQ to discretize continuous representations into discrete ones. For every input representation, VQ looks up and fetches the nearest neighbor in a pre-defined codebook and takes it as the output. Intuitively, VQ acts as a bottleneck. When the input is perturbed by distribution shifts, discretization mitigates this noise and keeps the output unaffected. Therefore, the discretization can enhance model generalization and alleviate the easy-overfitting issue caused by distribution shifts. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I do not have other comments or questions at this time. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive comments and recognition of the efforts we put into addressing your concerns. Your contribution to our work is highly valued and greatly appreciated.
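The nearest-neighbour lookup described in (Q2) can be sketched as a minimal single-codebook VQ step; the paper's residual vector quantization module is more elaborate, and the codebook size and dimensions here are arbitrary:

```python
import numpy as np

def vector_quantize(h, codebook):
    """Return, for each row of h, its nearest codebook entry (Euclidean)."""
    # squared distances between every input row and every code
    d2 = ((h[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return codebook[d2.argmin(axis=1)]

rng = np.random.default_rng(0)
codebook = rng.standard_normal((32, 16))  # K = 32 codes of dimension 16
h = rng.standard_normal((5, 16))          # continuous representations
q = vector_quantize(h, codebook)          # discrete bottleneck
h_final = h + q                           # residual keeps expressivity
```

Because small perturbations of `h` usually map to the same codebook entry, the quantized part of `h_final` is unchanged by them, which is the noise-mitigation intuition given in the rebuttal.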
Summary: The paper presents an invariant and robust representation learning approach for molecules to improve the out-of-distribution generalization performance of predictive models. Specifically, they first map the molecule to a latent representation and then perform a separation step in which they separate the latent representation into invariant and spurious parts. They also propose using residual vector quantization on the latent representation to avoid over-fitting while preserving the expressive power of the encoder. Strengths: 1. The paper tries to address an interesting problem. 2. The proposed idea is novel. 3. They included a detailed ablation study which helps identify the effectiveness of each component. Weaknesses: 1) The experimental results, when compared to the baselines, do not show noticeable improvement. 2) Some more details in the experiment section would be helpful, for example in Figure 4. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) Intuitively, what is the difference between applying the Frobenius norm directly to the S matrix versus the regularization defined in equation (14)? 2) It is a bit confusing to use "h" in equation (2) and equation (12) to represent different meanings. 3) It is not clear what effect the discrete term Q(h) has on the learned final node representation H', since H' is the sum of the discrete and continuous representations. Is it possible for the model to completely ignore the discretization step and focus only on the continuous representation? 4) In line 191, it is mentioned that "It is worth noting that our separation is not only performed at the node dimension but also takes into account the feature dimension in the latent space." I didn't quite understand this. Could you provide more explanation? 5) Could you explain the intuition behind what S is learning in equation 8? Essentially, it seems S is just reweighing every element in H'.
Intuitively, for the parts of H' that are not very important/invariant/part of the main motif, S should be low so that those elements mainly contribute to the spurious representation, and vice versa. But how does the model enforce this? 6) I'm not sure if the learned high-level representation can be seen as the sum of the invariant and spurious representations. In other words, can we really break down the abstract learned representation of such a complex structure into invariant and spurious parts? Would each of these components eventually represent some substructures if decoded? 7) The paper states that the model learns a discrete latent representation, but according to equation 6, the continuous representation is added back to the discretized representation. Can we still claim that the final learned representation is discrete? 8) It would be very helpful to provide a brief explanation of how the dataset is split, how the out-of-distribution is represented in the training/test/validation data, and what the terms "covariates" and "concepts" refer to in Table 1. This would provide context, especially for readers who are not familiar with the dataset. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper did not discuss the limitations of the work, and there is no potential negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### (Q1) The difference between the Frobenius norm and the regularization in Equation (14). The $<\mathbf{J},\mathbf{S}>\_F$ in Equation (14) is the sum of the elements of $\mathbf{S}$ (its Frobenius inner product with the all-ones matrix $\mathbf{J}$), which for a nonnegative $\mathbf{S}$ coincides with its entrywise 1-norm. However, this sum merely accumulates the elements of the matrix, and the range of its value varies with the size of the matrix. To confine this variability to an appropriate scope, we therefore employ $\frac{<\mathbf{J},\mathbf{S}>\_F}{|\mathcal{V}| \times d}$ to constrain the quantity between 0 and 1 regardless of matrix size. Intuitively, this is the ratio of identified invariant features (as delineated in Equation (9) for the invariant and spurious feature separation). To prevent an overabundance or scarcity of invariant features, we set a threshold $\gamma$, which drives the model towards selecting a proportion of invariant attributes that approximates the predefined value of $\gamma$. ### (Q2) Confusion of notation Thank you for pointing this out; we will substitute the "h" in equation (12) with a different notation. ### (Q3) The effect of the discretization VQ discretizes the continuous representation. Intuitively, it plays the role of a bottleneck, constraining the expressive ability of the neural network. This quality of VQ prevents overfitting on the training distribution and improves generalization, as discussed in lines 119-124. However, VQ also potentially leads to under-fitting because of its limited expressivity. Thus we consider both continuous and discrete representations to strike a balance between generalization and expressivity. The experimental results in Table 3 verify the effectiveness of our approach: both discretization and the residual connection contribute to the performance, and removing the residual connection leads to a significant performance degradation on PCBA, while removing discretization has a more serious impact on HIV.
### (Q4) Explanation of node dimension and feature dimension For the representation matrix $\mathbf{H}^\prime \in \mathbb{R}^{|\mathcal{V}|\times d}$, the node dimension corresponds to the rows, spanning from 1 to $|\mathcal{V}|$, while the feature dimension corresponds to the columns, spanning from 1 to $d$. The score matrix $\mathbf{S}$ has the same size as $\mathbf{H}^\prime$, and an element-wise product of $\mathbf{S}$ and $\mathbf{H}^\prime$ yields the invariant features. This separation is therefore applied to each element of the representation matrix, covering both the node and feature dimensions. ### (Q5) What S is learning and how to enforce this The element at position $(i,j)$ in $\mathbf{S}$ denotes the contribution weight of its counterpart at $(i,j)$ in $\mathbf{H}^\prime$ to the invariant representation. For a less important/invariant element in $\mathbf{H}^\prime$, the corresponding element in $\mathbf{S}$ should be low, and vice versa. To ensure the model produces precise scores as well as invariant representations, we design a task-agnostic self-supervised invariant learning objective, as described in Section 4.3, and use it together with the task prediction loss to jointly optimize the model. ### (Q6) Separation into invariant and spurious parts Identifying invariant parts is a viable strategy for the OOD issue, since these invariant attributes are exclusively associated with the labels and remain unaffected by environment shifts. The parts remaining after extracting these invariant attributes are termed the "spurious" components. As shown in Figure 1, preliminary studies follow a "first-separation-then-encoding" paradigm, which first divides the graph into invariant and spurious substructures explicitly, and then encodes each separately. We argue this practice is suboptimal for extremely complex and entangled molecule graphs.
And the detailed motivation is illustrated in the global response. Thus we propose a "first-encoding-then-separation" paradigm, that encodes molecules first and then identify invariant features in the latent space. As mentioned by the reviewer, we cannot ensure complete separation of the invariant and spurious components in the abstract learned representation. This is primarily due to the complex and entangled characteristics of real-world molecule graphs. We can solely rely on analyzing the experimental results to show the enhanced separation achieved, as evidenced by its superior performance compared to other baseline methods. Decoding the invariant and spurious parts in the latent space into structural space is a worthy direction for further investigation, and it is believed that this can provide interpretability to the model. We leave this as future work. ### (Q7) The claim of discrete representation Sorry for the confusion. We did not mean the final representation obtained is discrete. In line 185 we indicate that the final representation is a combination of both continuous and discrete components. We use "discrete space" in the title to emphasize the Vector Quantization (VQ) operations to address molecule OOD problem, which is one of the contributions of this work. We will refine this claim in the final version. ### (Q8) Explanation of dataset We explain "covariate" and "concept" shift in lines 141-145. And we provide details of different distribution shifts in Figure 2 and Appendix A.2. We will provide an explanation of how the dataset is splited in Appendix.
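As a concrete illustration of the element-wise separation discussed in (Q4) and (Q5) above, the following numpy sketch shows how a score matrix can split a representation into invariant and spurious parts (a hedged reconstruction, not the authors' code; the `(1 - S)` form for the spurious part and all shapes are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, d = 5, 8
H = rng.normal(size=(num_nodes, d))   # node representation matrix H'
S = rng.uniform(size=(num_nodes, d))  # score matrix, same shape as H'

# The element-wise (Hadamard) product weights every entry of H'
# individually, so the separation acts on both the node dimension
# (rows) and the feature dimension (columns).
z_inv = S * H            # invariant features
z_spu = (1.0 - S) * H    # remaining, "spurious" part (our assumption)

assert np.allclose(z_inv + z_spu, H)  # the two parts recompose H'
```

The final assertion makes the Q6 point explicit: the spurious part is simply whatever remains after extracting the invariant part.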
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable and affirmative comments on our submission. First of all, we want to clarify our technical contributions. The proposed method consists of three main technical contributions: 1. _**The first-encoding-then-separation paradigm.**_ Preliminary studies follow a "first-separation-then-encoding" paradigm, where the graph is first divided into invariant and spurious substructures, and then each part is encoded separately. We argue that this paradigm is suboptimal for extremely complex and entangled molecules, as illustrated in lines 48-53, since intricate molecular properties usually cannot be determined by analyzing only a subset of molecular structures. Citations [29,30] in line 53, which discuss the necessity and importance of considering the molecule as a whole when studying molecular physicochemical properties, substantiate this motivation. In more detail, molecules are composed of atoms, and atoms bond through clouds of electrons to form covalent bonds. Typically, chemical properties are expressed through electrons, which can conduct across the backbone. The surface potential, polarity, and other chemical properties of some substructures are affected by the charges of atoms in other substructures. Explicitly separating the molecular structure therefore entails a loss of information about inter-substructure interactions. Hence, distinguishing between invariant and spurious parts in the structural space is suboptimal for molecules, and we propose a "first-encoding-then-separation" paradigm to make the distinction in the latent representation space. 2. _**The RVQ module.**_ Since we do not divide the molecule in the structural space but encode the whole molecule directly, the obtained representation will be disturbed by distribution-shift noise when the environment changes.
To alleviate this issue, we propose to use VQ to improve the generalization ability by discretizing the continuous representation. However, we observe that VQ also limits the model's expressivity and potentially leads to under-fitting. To address this concern, we equip VQ with a residual connection to strike a balance between generalization and expressivity. 3. _**The task-agnostic self-supervised invariant learning objective.**_ Following the Invariance Principle, we focus on the causal factors that remain invariant to distribution shifts while overlooking the spurious parts. To accurately separate invariant features from spurious features, an invariant learning objective is needed. We have analyzed that the downstream tasks related to molecules are diverse, including regression and single- and multi-label classification. However, the existing invariant learning objectives cannot be applied to certain molecular tasks/datasets (marked with "/" in Table 1). Therefore, we propose the task-agnostic self-supervised invariant learning objective, which is independent of the downstream task and makes our method applicable to various tasks. In the following sections, we present a detailed point-by-point response to the questions raised. Pdf: /pdf/a996782ab3c057077725e68d46b5929934224c11.pdf
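The RVQ idea from contribution 2 — discretize via nearest-codebook lookup, then add the continuous representation back through a residual connection — can be sketched roughly as follows (a minimal illustration; the codebook size, the Euclidean nearest-neighbour lookup, and the omission of straight-through gradient estimation are our assumptions, not the paper's exact Equation (6)):

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 8))  # 16 learnable code vectors of dim 8
h = rng.normal(size=(5, 8))          # continuous node representations

# Vector quantization: snap each row of h to its nearest code vector.
# This acts as a bottleneck that limits expressivity but can improve
# generalization under distribution shift.
dists = ((h[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (5, 16)
h_q = codebook[dists.argmin(axis=1)]                           # (5, 8)

# Residual connection: adding the continuous representation back
# trades off the bottleneck's generalization against expressivity.
h_out = h_q + h
```

In a trained model the codebook would be learned jointly with the encoder; here it is random purely for shape-checking.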
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a new approach to obtain robust molecular representations through a first-encoding-then-separation method. The proposed method utilizes a graph neural network (GNN) as a molecule encoder and applies a residual vector quantization module to modify the representation. Additionally, a scoring GNN is employed to separate the resulting representations into spurious and invariant categories. The learning process involves a contrastive self-supervised learning (SSL) loss and task prediction losses. Experimental results on three molecule-related benchmarks demonstrate the superiority of the proposed method over traditional debiasing techniques and recent methods designed specifically for molecule debiasing. Ablation studies and visualization techniques are conducted to provide further insights and analysis. Strengths: 1. The proposed first-encoding-then-separation approach is novel. 2. Experiments on various datasets have shown that the proposed method is able to achieve better results. Weaknesses: 1. The motivation is not clear. Why is the first-encoding-then-separation approach reasonable? 2. The reason for combining the different components is also not clear, making the technical contribution weak. The current version reads like a straightforward combination without sufficient insight or understanding of the problem. For example, why do we need an RVQ module in the molecule representation? 3. It is not clear why the proposed method has the ability to mitigate spurious biases to achieve better OOD results. Is there any theory to support that? 4. The experiments are not sufficient. For example, only improved results have been demonstrated, without sufficient analysis. In the ablation study, the effects of the different modules are not consistent across datasets, making the technique seem ad hoc. Though some visualizations have been provided, they are not sufficient to support the claim that the proposed method obtains better invariant features.
What does a uniform distribution mean here? Is a uniform distribution equivalent to good features? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### (Q1) Motivation for "first-encoding-then-separation" In contrast to previous works, we adopt a novel paradigm that encodes the whole molecule first and then performs separation in the latent representation space. The motivation for this approach is detailed in the global response. ### (Q2) Technical contribution We explain the technical contributions in detail in the global response. Our analysis shows that dividing in the structural space is unsuitable. Instead, we advocate the "first-encoding-then-separation" paradigm, supported by the RVQ module, which refines the encoded representation. Further, we recognize that the downstream tasks related to molecules are diverse, and some existing methods cannot be applied to all of them. To this end, we propose a "task-agnostic self-supervised invariant learning objective" that is applicable to all tasks. We believe our method was devised after analyzing the intrinsic differences between molecules and other graph data, with sufficient insight and understanding of the problem. ### (Q3) Theory for mitigating spurious biases Our approach is theoretically supported. Notably, both the Invariance Principle and the VQ theory lend support to our model, contributing to its enhanced generalization performance. 1. _Invariance Principle._ Our method follows the Invariance Principle, which guides us to focus on the causal factors that remain invariant to distribution shifts while overlooking the spurious parts. Formally, our objective can be represented as $\min I({z}^{inv}, {z}^{spu})$ and $\max I({z}^{inv}, y)$, where ${z}^{inv}$ and ${z}^{spu}$ are defined in Equation 10 to represent invariant and spurious features, respectively. To practically achieve $\min I({z}^{inv}, {z}^{spu})$, we leverage $\max I({z}^{inv}, \widetilde{{z}}^{inv})$ as an approximation, as outlined in Equation 12. 2.
_VQ._ The robustness of discrete representations to distribution-shift noise is theoretically validated in citations [56, 57, 58]. [56] Discrete-valued neural communication. NeurIPS'21. [57] Discrete key-value bottleneck. ICML'23. [58] Adaptive discrete communication bottlenecks with dynamic vector quantization. AAAI'23. ### (Q4) Experiments We conduct comprehensive experiments in Section 5 and the Appendix, including performance comparisons on 2 benchmark datasets (Table 1 and Table 2), an ablation study (Table 3), visualization (Figure 4) and hyperparameter analysis (Figure 5). We explain the ablation study and the visualization here to address your questions. ***Ablation study.*** In the ablation study, we explore 3 groups of model variants. The first two groups involve the sequential removal of individual modules. The third group focuses on replacing our proposed task-agnostic self-supervised invariant learning objective, $L_{inv}$ in Equation 16, with alternative counterparts. Specifically, "w/ $L_{CIGA}$" and "w/ $L_{GREA}$" denote the substitution of $L_{inv}$ with the objective function of CIGA [1] and GREA [2], respectively. The "/" in Table 3 indicates that CIGA's objective function is incompatible with the PCBA dataset, since it is suited only to single-label classification while PCBA is a multi-label classification task. This is not a case of "different modules are not consistent on different data", but rather highlights that certain approaches are specific to particular tasks. Our approach, in contrast, has a broader applicability that encompasses a range of tasks, setting it apart from task-specific alternatives. ***Visualization.*** In the visualization, we use two approaches to evaluate the goodness of features: - _Distance-based Evaluation._ We measure distances between features in class-specific training and validation sets to assess invariant feature quality, given that features of the same class should be nearer. Illustrated in Fig.
4 by the titles "D(Y=0)" and "D(Y=1)", these distances correspond to data labeled 0 and 1 in the training/validation sets. Notably, our method achieves the smallest distance, affirming the robustness of our invariant features to environment shifts. - _Inference Score Analysis._ We compute inference scores for both training and validation sets, depicted as "Score(train)" and "Score(val)" in the Fig. 4 titles. These scores represent performance on the respective sets. We observe that the other methods cannot achieve the highest performance on both sets, while our method can. This demonstrates the ability of our method to avoid overfitting and achieve the best generalization ability. We employ the above two approaches to assess the quality of the features. The uniform distribution that emerges from our model is a noted phenomenon, yet it does not serve as a definitive criterion for evaluating the quality of the features. We use the term "uniform distribution" to mean that the features are located and spread uniformly in the latent space, with few isolated clusters. As depicted in Fig. 4, the visualization outcomes of ERM, ERM(+VQ) and ERM(+RVQ) exhibit discernible clusters, unlike those obtained through our method. [1] Learning causally invariant representations for out-of-distribution generalization on graphs. NeurIPS'22. [2] Graph rationalization with environment-based augmentations. KDD'22. --- Rebuttal 2: Comment: We sincerely appreciate your increased rating and recognition of the efforts we put into addressing your concerns. Your contribution to our work is highly valued and greatly appreciated.
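The surrogate $\max I(z^{inv}, \widetilde{z}^{inv})$ mentioned in (Q3) of this rebuttal is commonly realized with a contrastive (InfoNCE-style) loss; the generic sketch below is our assumption about the flavour of such an objective, not the paper's exact Equation 12:

```python
import numpy as np

def info_nce(z, z_tilde, tau=0.2):
    """Generic InfoNCE-style surrogate for maximizing I(z, z_tilde):
    each row of z should be most similar to the matching row of z_tilde
    and dissimilar to the other rows (the in-batch negatives)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_tilde = z_tilde / np.linalg.norm(z_tilde, axis=1, keepdims=True)
    logits = z @ z_tilde.T / tau                  # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))     # cross-entropy on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z)                     # perfectly aligned views
loss_random = info_nce(z, rng.normal(size=(8, 16)))
assert loss_aligned < loss_random                 # alignment lowers the loss
```

Minimizing this loss pushes the two views of the invariant representation together, which is one standard way to approximate mutual-information maximization.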
Learning to Group Auxiliary Datasets for Molecule
Accept (poster)
Summary: This paper studies an interesting and meaningful problem, namely, how to make the best use of available molecule data to achieve superior performance on the prediction of molecular properties. This paper first conducts an extensive study on many widely used benchmark datasets and discovers some interesting patterns. Then the authors propose a routing-based method that considers both task similarity and molecule similarity to predict the best combination of auxiliary datasets for each target dataset. Strengths: 1. The studied problem is interesting and meaningful because it is a very common scenario in the drug discovery pipeline. 2. The paper is clearly written and easy to follow. 3. The empirical study is well designed and conducted, and the proposed method is well motivated. Weaknesses: 1. The computation cost will be high if there are many tasks involved. 2. Some datasets studied are essentially formed by multiple properties (e.g. Tox21, Clintox). It would be better if each of these properties were treated as a single task, instead of being considered together. 3. This paper studies many datasets, and therefore it would be better if the authors could give a brief introduction to each of the datasets (e.g. what property is the dataset about?). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What does $\mathbb{z}$ in Eqn. (1) mean? Is it the representation for a batch or the whole dataset? 2. Why is the learnable parameter of $g_m(\cdot,\cdot)$ not in Eqn. 7? 3. Is the proposed method sensitive to the threshold in Sec. 4.3? 4. In addition to task similarity and structural similarity, what are the other possible factors determining whether an auxiliary task is helpful or not? I hope the authors can discuss this. 5. What is the difference between search-based methods and group-based methods (line 237)? 6. As shown in Fig. 6, FreeSolv and ESOL are not quite helpful to each other.
This seems strange --- FreeSolv is about Hydration Free Energy (HFE) and ESOL is about water solubility. Theoretically, these two properties should be strongly correlated as HFE is an important factor in water solubility. Do you have any comments on this result? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments on our paper. We will address your concerns point by point. ``` Q1: Computation cost will be high if there are many tasks involved. ``` **Response**: It is indeed true that handling numerous datasets, especially when including large auxiliary sets, can consume significant computational resources. However, our proposed method alleviates this by automatically grouping the datasets with the routing mechanism in a single model. This approach leads to improved efficiency compared with the other baseline models, as demonstrated in Fig. 5(b). ``` Q2: Some datasets studied are essentially formed by multiple properties (e.g. Tox21, Clintox). It would be better if these properties are treated as one single task, instead of considered together. ``` **Response**: Thanks for your great suggestion! The current model only considers dataset-level information, so it cannot fully exploit the sub-dataset-level information in terms of task or structure. We will explore this in our future work. ``` Q3: This paper studies many datasets, and therefore it would be better if the authors can give a brief introduction to each of the datasets (e.g. what property is the dataset about?). ``` **Response**: Good suggestion! An introduction can be found at [MoleculeNet](https://moleculenet.org/). We will include it in the next version. ``` Q4: What does 𝕫 in Eqn. (1) mean? Is it the representation for a batch or the whole dataset? ``` **Response**: It denotes the representation of the input batch. We apologize for any confusion and will make a revision in the next version. ``` Q5: Why the learnable parameter of $g_m(\cdot,\cdot)$ is not in Eqn. 7? ``` **Response**: Sorry for any confusion. $\alpha_m$ is produced by $g_m(\cdot,\cdot)$ and we only present the former in Eqn. 7 to simplify the expression. We will clarify the relationship between $\alpha_m$ and $g_m(\cdot,\cdot)$ in the revised version of the paper.
``` Q6: Is the proposed method sensitive to the threshold in Sec. 4.3? ``` **Response**: In our experiments, we found that setting a high threshold, e.g., 0.8, may lead to unstable selection results, as it potentially filters out most auxiliary datasets in the first or second round. A low threshold, like 0.2, requires more iterations for a more refined grouping. We have shown the learning curves of each iteration in Appendix D.3 with a threshold of 0.6, from which a stable selection process can be observed. ``` Q7: In addition to task similarity and structural similarity, what are the other possible factors determining whether an auxiliary task is helpful or not? I hope the authors can discuss this. ``` **Response**: Thanks for the suggestion! Besides the task and the structure, one significant factor is the strategy employed in training the model with multiple auxiliary datasets. Here we simply merge the datasets and sample them with equal probability, which can potentially be extended. For example, selectively training different portions of the model parameters with specific datasets might lead to improved performance. We will thoroughly discuss this in the revised version of our manuscript. ``` Q8: What is the difference between search-based methods and group-based methods (line 237)? ``` **Response**: The distinction between search-based methods and grouping-based methods lies in two aspects: 1. Criterion: Grouping-based methods utilize a learnable metric to evaluate the affinity between datasets, such as the task-embedding similarity of Task2vec. Conversely, search-based methods assess affinity by directly testing performance or by integrating fundamental features, such as similarities in fingerprint features. 2. Selection process: Grouping-based methods select the top-K auxiliary datasets based on the defined criterion.
Search-based methods explore the auxiliary datasets through a breadth-first approach, retaining a candidate set at each level of exploration. ``` Q9: As shown in Fig. 6, FreeSolv and ESOL are not quite helpful to each other. This seems strange --- FreeSolv is about Hydration Free Energy (HFE) and ESOL is about water solubility. Theoretically, these two properties should be strongly correlated as HFE is an important factor in water solubility. Do you have any comments on this result? ``` **Response**: Although these two datasets share similar tasks, the difference in molecule structure distribution can still result in performance degradation, as demonstrated in Fig. 7(a)(b). Besides, a similar task does not always lead to better performance; in fact, the structure distribution of the ESOL dataset does not exhibit a high correlation with the performance gain, as shown in Fig. 3(b). --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank the authors for their reply. I have no more questions and would like to keep my rating unchanged. --- Reply to Comment 1.1.1: Title: Response to the reviewer sjEd Comment: We appreciate your consideration and thoughtful feedback! The paper will be revised based on your suggestions and comments.
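The threshold-based iterative grouping discussed in the response to Q6 above might look roughly like the following sketch (hypothetical: the stopping rule, the omitted per-round re-estimation of gate scores, and the example score values are all our assumptions; only the dataset names come from the benchmarks discussed in the paper):

```python
def iterative_selection(scores, threshold=0.6, max_rounds=10):
    """Hypothetical sketch of threshold-based auxiliary-dataset filtering:
    each round drops datasets whose gate score falls below the threshold,
    until the kept set stops changing. `scores` maps dataset name -> gate
    score; in the real method scores would be re-estimated every round."""
    kept = dict(scores)
    for _ in range(max_rounds):
        filtered = {k: v for k, v in kept.items() if v >= threshold}
        if filtered.keys() == kept.keys():
            break  # selection has stabilized
        kept = filtered
    return sorted(kept)

aux = {"tox21": 0.81, "toxcast": 0.65, "sider": 0.32, "clintox": 0.70}
print(iterative_selection(aux, threshold=0.6))
# ['clintox', 'tox21', 'toxcast']
```

This also illustrates the Q6 observation: a threshold of 0.8 would keep only one dataset here, while 0.2 would keep all four and need further rounds to refine the grouping.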
Summary: The authors address the problem of determining, in a transfer learning, meta-learning, or few-shot learning setting involving molecules, which datasets are most useful for providing positive auxiliary information without damaging model performance on the task of interest. The paper proposes a method, MolGroup, to identify the datasets which, if provided as auxiliaries to a model, most increase the score on a target dataset. The method makes use of a routing mechanism to select the optimal auxiliaries, and demonstrates improvement of both GIN and Graphormer models on some of the 11 target datasets. Strengths: The method is novel and well motivated, with a routing mechanism and bilevel optimization that have not previously been applied in this setting. The method does find auxiliary datasets which provide a relative improvement on the target datasets, where the gains are small but nonetheless present across the board. In addition, the method is not computationally infeasible, and the mention of the efficiency of the method as measured by wall-clock time is valuable. The experiments are comprehensive and carefully performed, taking into account SOTA modelling techniques when comparing final performance. Weaknesses: It is rather unclear initially whether the authors propose to calculate an affinity score based upon calculated task-embedding and fingerprint-distribution differences between datasets, or to learn an affinity score as the gating score. Lines 65-68 in particular are unclear on this point. Clarification would be very useful at this point in the manuscript. The assertion that fingerprint and task-embedding similarity are strictly different measures is questionable. In practice, task embeddings are highly dependent on the distribution of input features, regardless of labels, and are therefore rather similar to fingerprint-embedding similarities. An earlier justification for this assertion in the manuscript would be very helpful.
While Figure 3 demonstrates that these correlation measures between auxiliary and target dataset are themselves correlated with the relative improvement of performance, it is not especially convincing that the two correlation measures are distinct. For instance, some tasks show positive correlation for one measure and negative for another, while others do not. It is not clear how to interpret this information. Expansion around these plots would be valuable. It would be useful to see whether the method works on another domain outside of molecular property prediction -- have the authors considered this? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: A number of questions are raised in the "weaknesses" section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors do not discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback on our work. We will address your main concerns point by point. ``` Q1: It is rather unclear initially whether the authors propose to calculate an affinity score based upon calculated task embedding and fingerprint distribution differences between datasets, or learn an affinity score as the gating score. Line 65-68 in particular are unclear on this point. Clarification would be very useful at this point in the manuscript. ``` **Response**: We apologize for any confusion and will make a revision in the next version. In our work, we propose to learn the gate scores with a routing mechanism to quantify the affinity between datasets. Thank you for bringing this to our attention. ``` Q2: The assertion that fingerprint and task embedding similarity are strictly different measures. In practice, task embeddings are highly dependent on the distribution of input features, regardless of labels, and therefore are rather similar to fingerprint embedding similarities. An earlier justification for this assertion in the manuscript would be very helpful. ``` **Response**: Thank you for the suggestion! In fact, the task embedding is obtained using a GIN that operates on extracted node and edge features rather than directly on fingerprint features. This inherently reduces the correlation between fingerprint and task-embedding similarity, as evidenced by the substantial difference in their Pearson coefficients (0.16 vs. 0.06). We will include this discussion in the next version to prevent any confusion. ``` Q3: While Figure 3 demonstrates that these correlation measures between auxiliary and target dataset are themselves correlated with relative improvement of performance, it is not especially convincing that the two correlation measures are distinct. For instance, some tasks show positive correlation for one measure and negative for another, others do not. It is not clear how to interpret this information.
Expansion around these plots would be valuable. ``` **Response**: Thanks for your suggestions! Here are some examples using asymmetric KL divergence as our similarity measurement: 1. For the datasets Esol and Freesolv, the fingerprint similarity is 0.0212, while the task similarity is 0.4084. Their tasks are both related to water solubility but structural distributions vary substantially. 2. For the datasets Tox21 and SIDER, the fingerprint similarity is 0.8852, while the task similarity is 0.0062. The result reveals that the two datasets share similar structure distribution, yet their tasks differ significantly. We will incorporate more discussion in the revised version. Besides, we have included some case studies in Section 5.3, which shows the learned structure affinity scores among the datasets with the toxicity-related task (Tox21, ToxCast, and ClinTox). ``` Q4: It would be useful to see whether the method works on another domain outside of molecular property prediction -- have the authors considered this? ``` **Response**: Thanks for your suggestion! We believe that MolGroup can be potentially extended into other biomedical domains, such as protein and single-cell data, where data distribution varies and label annotations are costly. We will keep exploring it in our future work. ``` Limitations: The authors do not discuss limitations. ``` **Response**: The limitation is included in the Conclusion section. We apologize for any oversight and will ensure greater clarity in the revised version.
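The asymmetric KL divergence used as the similarity measurement in the response to Q3 above can be computed as in the following sketch (a generic illustration over discrete histograms, e.g. of fingerprint features; how the paper maps divergence onto its reported similarity scores is not specified here, and the example distributions are made up):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Asymmetric KL divergence D(p || q) between two discrete
    distributions. eps avoids log(0) for empty histogram bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Identical distributions give (near-)zero divergence; the measure is
# asymmetric, so D(p || q) generally differs from D(q || p).
p, q = [0.9, 0.1], [0.5, 0.5]
assert kl_divergence(p, p) < 1e-9
assert abs(kl_divergence(p, q) - kl_divergence(q, p)) > 0.1
```

The asymmetry is what lets the measure distinguish "auxiliary covers target" from "target covers auxiliary", which matters when judging dataset affinity in one direction.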
Summary: The paper addresses the challenge of limited annotations in small molecule datasets and proposes a method called MolGroup to identify auxiliary datasets that can benefit the target dataset when jointly trained. MolGroup utilizes a routing mechanism optimized through a bi-level optimization framework to separate dataset affinity into task and structure affinity. The proposed method demonstrates its effectiveness in predicting the optimal combination of auxiliary datasets for each target dataset and outperforms existing baselines. Strengths: 1. The paper provides a clear motivation for the problem of limited annotations in small molecule datasets and the challenges associated with incorporating auxiliary datasets. This highlights the practical relevance of the research. 2. The proposed MolGroup method is well-explained and builds on the insights obtained through empirical analysis. Particularly, the preliminary study on the relative improvement and the similarities is interesting. 3. The extensive experiments demonstrate the efficiency and effectiveness of MolGroup. Weaknesses: 1. The Pearson coefficients presented in Figure 3 are relatively low, all below 0.5. This raises doubts about the claim that the combination of task and structure leads to better performance. There is a potential risk of negative transfer that could negatively impact the main task's performance. It would be better to provide further insights or explanations to address this concern. 2. Given that the authors propose to use meta learning to strengthen the learning process, it is suggested that an ablation study be conducted to demonstrate the effectiveness of this approach. Comparing the performance with and without meta learning would provide a clearer understanding of its contribution to the proposed MolGroup method. 3. In line 187, the authors propose to assign learnable embeddings for the tasks. Why not use Task2vec to generate the task embedding, as is done in Section 3? 
The reasoning behind this choice is not adequately explained. It would be beneficial for the authors to provide a justification for this decision and discuss any potential implications or advantages of using learnable embeddings over Task2vec. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback on our work! We will address your concerns point by point. ``` Q1: The Pearson coefficients presented in Figure 3 are relatively low, all below 0.5. This raises doubts about the claim that the combination of task and structure leads to better performance. There is a potential risk of negative transfer that could negatively impact the main task's performance. It would be better to provide further insights or explanations to address this concern. ``` **Response**: A single metric (fingerprint or task similarity) indeed doesn't correlate strongly with the performance gain. However, **the primary purpose** of this analysis is to demonstrate that integrating both structural and task information yields a stronger correlation with performance gains, rather than to propose a novel metric to predict affinity. Besides, compared with these single metrics, the proposed MolGroup can better capture the affinity between datasets in terms of task and structure, as demonstrated in Tables 1 and 2 and Fig. 7(a). Combining both sources of information leads to a more comprehensive understanding of the affinity between datasets. We will add more analysis in the revised version. ``` Q2: Given that the authors propose to use meta learning to strengthen the learning process, it is suggested that an ablation study be conducted to demonstrate the effectiveness of this approach. Comparing the performance with and without meta learning would provide a clearer understanding of its contribution to the proposed MolGroup method. ``` **Response**: Thanks for your suggestion! We have presented the learning curves of the affinity scores without utilizing the bi-level framework in Fig. 5(a). The result shows a homogeneous distribution among the auxiliary datasets, indicating that the model struggles to distinguish the affinities of the auxiliary datasets without bi-level training.
We will conduct more analysis to provide a more comprehensive understanding in future work. ``` Q3: In line 187, the authors propose to assign learnable embeddings for the tasks. Why not use Task2vec to generate the task embedding, as is done in Section 3? The reasoning behind this choice is not adequately explained. It would be beneficial for the authors to provide a justification for this decision and discuss any potential implications or advantages of using learnable embeddings over Task2vec. ``` **Response**: As demonstrated in Fig. 3(a) and Tables 1 and 2, Task2vec exhibits a weak correlation with the performance gain (0.05 Pearson coefficient). Additionally, the performance of the model trained with the auxiliary datasets selected by Task2vec is poor. We will add more discussions in the revised version. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: My concerns are addressed and I would like to increase my score to 6. --- Reply to Comment 1.1.1: Title: Response to the reviewer NGKd Comment: We greatly appreciate your insightful comments and suggestions, which have significantly improved our paper.
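The Q1 response above argues that combining structure and task similarity correlates more strongly with performance gains than either metric alone. The following is a purely synthetic sketch of that claim (all numbers are made up; this is not the paper's data or code), illustrating how the combined-vs-single Pearson comparison could be checked:

```python
# Synthetic illustration (not the paper's data): when the true gain depends
# on both task and structure similarity, the combined metric correlates
# more strongly with the gain than either single metric does.
import numpy as np

def pearson(x, y):
    # Pearson correlation coefficient between two 1-D arrays.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

rng = np.random.default_rng(0)
n = 200
task_sim = rng.uniform(0, 1, n)     # stand-in for Task2vec similarity
struct_sim = rng.uniform(0, 1, n)   # stand-in for fingerprint similarity
# Made-up "ground truth": the gain depends on both signals, plus noise.
gain = 0.5 * task_sim + 0.5 * struct_sim + rng.normal(0, 0.15, n)

c_task = pearson(task_sim, gain)
c_struct = pearson(struct_sim, gain)
c_both = pearson(0.5 * (task_sim + struct_sim), gain)
print(c_task, c_struct, c_both)  # combined correlation is the largest
```

Of course, on real dataset pairs the individual coefficients can be low (as the reviewer notes), which is consistent with a combined score still being the most informative of the three.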
Summary: The paper proposes MolGroup, a dataset grouping method designed to aid molecule property prediction. Motivated by preliminary empirical analysis, MolGroup separates the dataset affinity into task and structure affinity, and uses a routing mechanism to quantify the affinity between a pair of datasets. The routing mechanism is optimized through a bi-level optimization framework. Experiments on 11 target molecule datasets show that MolGroup yields a ~4% increase across two architectures. Strengths: **Originality** The separation of task and structure affinity is novel. The application of bi-level optimization for quantifying dataset affinity is novel. **Clarity** The paper is well-structured and easy to follow. Weaknesses: **Quality** In Fig. 3a, I don't think we can draw the conclusion that structure and task affinities are compensatory. If we remove the outliers, the points seem randomly distributed, which means that the affinities and the relative improvements are not correlated. This phenomenon might significantly undermine the subsequent arguments. **Significance** I agree that molecular property prediction is a very important task with limited data. But if putting this much effort and data results in a mere 4% performance increase, how do you convince the research community that we shall continue in this direction? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could you elaborate/reiterate on how you train the final model? Why do you not use the routing mechanism in your final model (I suppose it is due to performance issues)? 2. Please address the quality and significance issues stated above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I do not see any significant, unreported negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the insightful comments! We will address your concerns point by point. ``` Q1: In Fig. 3a, I don't think we can draw the conclusion that structure and task affinities are compensatory. If we remove the outliers, the points seem randomly distributed, which means that the affinities and the relative improvements are not correlated. This phenomenon might significantly undermine the subsequent arguments. ``` **Response**: Thanks for your suggestions! The outliers are the cases of dataset pairs with significant improvements or degradations, such as pairs involving the FreeSolv dataset. Here we exclude all the datasets exhibiting huge performance changes, i.e., FreeSolv, qm8, and qm9, and the regression curves are included in the attachment. It can be observed that the combination of structure and task information still exhibits a stronger correlation compared to using either one individually, which is consistent with our analysis shown in the paper. Besides, the advantage of applying both task and structure affinity can be further demonstrated by the case studies in Section 5.3, where the structure affinity score explains why datasets with similar tasks cannot benefit from each other. We will add more analysis in the revised version. ``` Q2: I agree that molecular property prediction is a very important task with limited data. But if putting this much effort and data results in a mere 4% performance increase, how do you convince the research community that we shall continue in this direction? ``` **Response**: Actually, improving small molecule datasets by a 4% gain is non-trivial and challenging, as can also be observed in previous state-of-the-art methods [1,2]. These methods employ large parameter spaces and additional information (i.e., 3D structures and large pretraining datasets) to achieve relative improvements of 4%-6%, comparable to ours. 
Moreover, our proposed method has the additional advantage of efficiency compared with search-based methods, as illustrated in Figure 5(b). The grouping results are model-agnostic and hold for various backbone models, as demonstrated by Tables 1 and 2. We would like to highlight that the model trained with the grouped dataset can still benefit from other techniques like pre-training. [1] Zhou, G., Gao, Z., Ding, Q., Zheng, H., Xu, H., Wei, Z., ... & Ke, G. (2023). Uni-Mol: a universal 3D molecular representation learning framework. [2] Rong, Y., Bian, Y., Xu, T., Xie, W., Wei, Y., Huang, W., & Huang, J. (2020). Self-supervised graph transformer on large-scale molecular data. *Advances in Neural Information Processing Systems*, *33*, 12559-12571. ``` Q3: Could you elaborate/reiterate on how you train the final model? Why do you not use the routing mechanism in your final model (I suppose it is due to performance issues)? ``` **Response**: We first merge all the selected auxiliary datasets together and train an initial model on this merged dataset. Note that the initial model, i.e., the final model, doesn't employ the routing mechanism, which is only utilized for grouping the auxiliary datasets. The data is sampled from each dataset with equal probability during training. Our proposed routing mechanism can indeed be used to train the final model on all the datasets and adjust the influence between datasets in an end-to-end manner. But such a method suffers from two limitations: 1. Training a model with more datasets will consume significantly more computational resources and time. 2. Our empirical study shows that the model trained without the negative auxiliary datasets significantly outperforms the model trained with negative datasets, even when the latter is equipped with a routing mechanism. 
It can be demonstrated in the following table:

| GIN | BBBP(↑) | toxcast(↑) | tox21(↑) | esol(↓) | freesolv(↓) |
| --- | --- | --- | --- | --- | --- |
| Only-target | 0.6662(±0.0284) | 0.6069(±0.0102) | 0.7423(±0.0057) | 1.5635(±0.0408) | 3.8421(±1.5796) |
| MolGroup | 0.6836(±0.0163) | 0.6391(±0.0058) | 0.7566(±0.0044) | 1.4028(±0.0372) | 3.1166(±0.2790) |
| Final Model with routing mechanism | 0.6683(±0.0307) | 0.5706(±0.0114) | 0.6937(±0.0180) | 3.0400(±0.1383) | 5.4833(±1.3547) |

MolGroup can be considered a **hard version** of the final model including the routing mechanism. Rather than incrementally eliminating negative training signals, it directly filters out negative datasets. Furthermore, a final model trained with the routing mechanism fails to yield grouping results that can be transferred to other models. --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: Thank you for your clarifications. Overall, I think your paper is solid but contains too many arbitrary choices to be considered elegant. E.g., the method is based on the usefulness of structure and task affinities. The former metric is based on molecular fingerprints, which could be viewed as an additional data modality. It is possible that combining GIN/Graphormer with fingerprints could yield simpler models with comparable improvement (this is purely a hypothesis, of course). Therefore, I maintain my initial evaluation. --- Reply to Comment 1.1.1: Title: Response to the reviewer Comment: Thanks for your response! We'd like to clarify that the fingerprint feature is **only** used for grouping datasets, and is not incorporated during the final model training. Tables 1 and 2 in our paper show that adding FP features to beam search cannot consistently improve results compared to the original search. This suggests that the FP features do not guarantee better performance. Here we compare the performance of MolGroup with the model trained with FP features, which is shown in the following table. 
It can be found that the FP features fail to consistently improve the performance across all the datasets (Only-target vs Only-target+FP). Besides, our proposed MolGroup can outperform or match the performance of the model trained with FP features in most cases.

| GIN | BBBP(↑) | toxcast(↑) | tox21(↑) | esol(↓) | freesolv(↓) |
| --- | --- | --- | --- | --- | --- |
| Only-target | 0.6662(±0.0284) | 0.6069(±0.0102) | 0.7423(±0.0057) | 1.5635(±0.0408) | 3.8421(±1.5796) |
| Only-target+FP | 0.6624(±0.0169) | 0.6121(±0.0081) | 0.7278(±0.0076) | 1.2309(±0.1014) | 2.2646(±0.2444) |
| MolGroup | 0.6836(±0.0163) | 0.6391(±0.0058) | 0.7566(±0.0044) | 1.4028(±0.0372) | 3.1166(±0.2790) |
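The Q3 response in this thread describes training the final model on the merged group of selected datasets, with data sampled from each dataset with equal probability. A minimal sketch of that sampling scheme (the helper name and toy data are our assumptions, not the authors' code):

```python
# Minimal sketch (our reading of the described procedure, not the authors'
# code): draw each batch from one dataset chosen uniformly at random, so
# small target datasets are not drowned out by large auxiliary ones.
import random

def equal_probability_batches(datasets, batch_size, n_steps, seed=0):
    """datasets: dict name -> list of examples. Yields (name, batch)."""
    rng = random.Random(seed)
    names = list(datasets)
    for _ in range(n_steps):
        name = rng.choice(names)  # each dataset is equally likely per step
        data = datasets[name]
        batch = [rng.choice(data) for _ in range(batch_size)]
        yield name, batch

# Toy usage: a small target set merged with a much larger auxiliary set.
toy = {"target": list(range(10)), "auxiliary": list(range(1000))}
counts = {"target": 0, "auxiliary": 0}
for name, _ in equal_probability_batches(toy, batch_size=4, n_steps=2000):
    counts[name] += 1
print(counts)  # roughly balanced despite very different dataset sizes
```

The design choice matters because, under size-proportional sampling, a 10-example target set would contribute ~1% of the batches in this toy mix.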
Rebuttal 1: Rebuttal: We thank the reviewers for noting that we propose a novel method (VvkN,ohh4) to address a meaningful problem (B1V2,VvkN,sjEd) with a clear motivation (VvkN,NGKd,ohh4,sjEd), and that the paper is well-written and easy to follow (VvkN,NGKd,ohh4,sjEd). We further summarize our key contributions as follows: 1. We study a new angle to improve the performance of molecule datasets with limited annotations by utilizing auxiliary datasets with high affinity. The strategy is compatible with other training strategies such as pretraining. 2. We conduct a study to analyze how different molecule datasets affect each other in terms of task and structure similarity. This investigation has led to some interesting findings, such as the compensatory relationship between structure and task, and the performance gains achieved by integrating both aspects. 3. A routing-based molecule grouping method optimized by a bi-level learning framework is proposed, which achieves SOTA performance across 11 molecule datasets. Besides, we include the regression curves between relative improvement and the task/structure similarity measurements, excluding the datasets with huge performance changes, in the attachment (reviewer VvkN). Pdf: /pdf/9221ba35c8520ba3bd66a54f745d581047e2ea93.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper investigates how different molecule datasets affect each other's learning, considering both task and structure aspects. It proposes a routing-based molecule grouping method to calculate the affinity scores of each auxiliary dataset based on the graph structure and task information, and selects the auxiliary datasets with high affinity. The selected datasets are then combined and fed into the downstream model. Experiments show a large improvement for GIN/Graphormer trained with the selected group of molecule datasets. Strengths: 1. This paper studies how to select additional auxiliary molecule datasets to improve the prediction performance on the target dataset. Annotated molecule datasets are difficult to obtain, and naively introducing auxiliary molecule datasets may result in negative transfer. This paper investigates an interesting problem: how to properly select auxiliary molecule datasets. The authors design a dataset grouping method for molecules, which considers both task and structure aspects. 2. The authors design a routing mechanism to quantify the affinity between two datasets and a bi-level optimization framework to update the routing mechanism through meta-gradients. The proposed routing function can comprehensively measure the affinity from two perspectives: task and graph structure. The bi-level optimization uses the parameters updated by the auxiliary dataset as a signal to guide the learning of the routing mechanism. 3. Experimental results show strong improvement over the baseline methods. The proposed method outperforms all the baseline methods and consistently improves the performance of the backbone model, with an average relative improvement of 6.47% across all datasets. Extra analysis of the proposed method is presented. Weaknesses: 1. The authors did not study how auxiliary molecule datasets impact different models. Is the proposed method model-agnostic? 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is the proposed method model-agnostic? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments on our paper! Here we additionally present the performance of GCN (#layer=2, hidden dim=300, dropout rate=0.5) with different grouping methods in the following table, where a consistent improvement can be observed. Such a phenomenon also aligns with the performance of GIN and Graphormer shown in Tables 1 and 2 in our paper, demonstrating that the grouping can be transferred to different models. We appreciate your suggestion and will conduct a more comprehensive evaluation in our future work.

| GCN | BBBP(↑) | ToxCast(↑) | Tox21(↑) | ESOL(↓) | FreeSolv(↓) |
| --- | --- | --- | --- | --- | --- |
| Only-target | 0.6309(±0.0136) | 0.6253(±0.0086) | 0.7427(±0.0061) | 1.4720(±0.0277) | 3.3941(±0.2063) |
| TAG | 0.6159(±0.0123) | 0.6164(±0.0037) | 0.6996(±0.0058) | 1.5028(±0.0423) | 2.7482(±0.2939) |
| MolGroup | 0.6369(±0.0084) | 0.6290(±0.0062) | 0.7527(±0.0054) | 1.3745(±0.0297) | 2.5496(±0.2700) |
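Several of the reviews and rebuttals above refer to the bi-level optimization that updates the routing/affinity weights via meta-gradients: an inner step updates the model on auxiliary data, gated by an affinity weight, and an outer step adjusts that weight using the gradient of the resulting target loss. The following is a toy one-dimensional sketch of this idea (our reading of the scheme, using a finite-difference meta-gradient; not the authors' implementation):

```python
# Toy bi-level sketch (illustrative, not the authors' code): an affinity
# weight w gates the auxiliary gradient in the inner update; the outer
# (meta) step moves w to reduce the target loss after that update.
import numpy as np

def inner_update(theta, grad_aux, w, lr=0.1):
    # Gated SGD step on the auxiliary objective.
    return theta - lr * w * grad_aux

def target_loss(theta, x_t, y_t):
    # Simple least-squares target task: predict y = theta * x.
    return float(np.mean((theta * x_t - y_t) ** 2))

def meta_gradient(theta, grad_aux, w, x_t, y_t, eps=1e-4, lr=0.1):
    # Finite-difference estimate of d L_target(theta'(w)) / d w.
    lo = target_loss(inner_update(theta, grad_aux, w - eps, lr), x_t, y_t)
    hi = target_loss(inner_update(theta, grad_aux, w + eps, lr), x_t, y_t)
    return (hi - lo) / (2 * eps)

theta, w = 0.0, 0.5
x_t = np.array([1.0, 2.0, 3.0])
y_t = 2.0 * x_t  # target optimum: theta = 2
# A "helpful" auxiliary task whose gradient points toward the target optimum.
grad_aux = 2 * np.mean((theta * x_t - y_t) * x_t)
g = meta_gradient(theta, grad_aux, w, x_t, y_t)
w_new = w - 0.01 * g
print(g < 0, w_new > w)  # helpful auxiliary data -> affinity weight grows
```

For an unhelpful auxiliary task, the meta-gradient would instead be positive and the affinity weight would shrink, which is the behavior the routing mechanism relies on.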
null
null
null
null
null
null
Grounding Neural Inference with Satisfiability Modulo Theories
Accept (spotlight)
Summary: This paper proposes SMTLayer, a layer that incorporates SMT solvers (Z3 in this case) into neural networks. The layer itself is not differentiable. The forward and backward passes of SMTLayer are derived thoroughly. Experiments show that this innovation results in overall more robust, interpretable, and efficient architectures on some tasks where symbolic reasoning is heavily relied on. Strengths: - The idea itself is straightforward and the derivations seem well-presented. - The experiments do confirm the claims, and outperform baseline methods by large margins. ### 12 Aug update: Rebuttal updates and modifications are satisfactory. Bumping up the rating. Weaknesses: Only some minor weaknesses regarding writing: - Do not say "[30] showed that ...". If a citation plays a grammatical role, use the authors' names instead. - ~~Some claims can be made more concrete~~: - ~~Line 13: "that are robust to certain types of covariate shift" what types of covariate shift exactly?~~ - ~~Line 65: "four diverse applications," what applications, how diverse exactly?~~ Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper mentioned a few things as future work, but should do well to comment more on the limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful review; we address the reviewer's questions as follows. > Some claims can be made more concrete. Line 13: "that are robust to certain types of covariate shift" what types of covariate shift exactly? **A**: In Section 5, we study MNIST addition and Visual Algebra under covariate shift. As discussed in 5.1, we train on 10%, 25%, 50%, and 75% of the possible combinations of digit pairs, and test on 100% of them. For visual algebra, we train on samples where $a$ and $b$ are the same digit, and $x$ is drawn uniformly from the odd numbers between 0 and 9; we test on samples where $a, b$, and $x$ are drawn uniformly. In both of these cases, the training and test data are distributed quite differently. Performing well on the test data requires putting the encoded domain knowledge to use effectively. > Continued: Line 65: "four diverse applications," what applications, how diverse exactly? **A**: We describe and evaluate our method on four tasks, detailed in Section 5.1: MNIST Addition, Visual Algebra, Liar's Puzzle, and Visual Sudoku. They are diverse in that they cover both vision and language tasks, ranging from a simple logical problem with few variables (e.g., addition) to more complex ones (Visual Sudoku). > The paper mentioned a few things as future work, but should do well to comment more on the limitations of the proposed approach. **A:** Please take the following as a discussion of limitations, which will be reflected in the writing of the camera-ready version. Firstly, a key limitation arises from the necessity for the neural network to interact with the theoretical framework using Boolean vectors. This mandates the discretization of continuous values, potentially resulting in suboptimal inference outcomes. Such discretization can also overcomplicate the theory, leading to further potential performance setbacks. 
Secondly, the choice of neural architecture paired with the SMTLayer plays a significant role in determining the quality of representation learned and the final performance, as outlined in Section 5.3. There is a need for more comprehensive research in this area to fully grasp this occurrence and propose potential solutions or modifications. Lastly, the integration of an SMT solver into a neural network implies a heavier reliance on computations traditionally designated for CPU cores. This can pose a bottleneck, particularly in batched setups. Here, tasks which could be executed concurrently are now being semi-serialized due to the implementation of the proposed method. This not only impacts the speed but also overall computational efficiency. It would be beneficial for the paper to delve deeper into these limitations, offering more insight and potentially suggesting ways to overcome these challenges. Let us know if we are able to address your concerns (especially on the limitation side of the work). We are happy to respond to any follow-up questions. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the response and clarify what I meant earlier about making the claims more concrete: For key parts of the paper like abstract and contributions, I think it helps the reader to understand if the texts are self-contained. I understood what the robustness and the diversity refer to after reading the paper, but thought that it might be helpful to have the details explained in the abstract and the contributions. This is why my points are in the "weakness" and not "question" category. The limitation response is satisfactory. I'm willing to adjust my rating if authors can make the writing style modifications. --- Reply to Comment 1.1.1: Comment: Thanks for your fast response to our rebuttal. Yes we see that some ambiguity in the abstract and the contribution may confuse the reader and more details would help to clarify our arguments. 
We are working to improve the writing and will update the draft when we are given a chance to do so.
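The covariate-shift setup described in the rebuttal above (training on a fraction of the possible digit-pair combinations for MNIST addition, then testing on all of them) can be sketched as follows (the helper is hypothetical, not the paper's code):

```python
# Hypothetical sketch (assumed setup, not the authors' code): construct a
# covariate-shifted split by training on a fixed fraction of the 100
# possible digit pairs for MNIST addition and testing on all of them.
import random

def split_digit_pairs(train_fraction, seed=0):
    pairs = [(a, b) for a in range(10) for b in range(10)]
    rng = random.Random(seed)
    rng.shuffle(pairs)
    k = int(train_fraction * len(pairs))
    return pairs[:k], pairs  # (train pairs, test pairs = all combinations)

train_pairs, test_pairs = split_digit_pairs(0.25)
print(len(train_pairs), len(test_pairs))  # 25 100
```

Under this split, many digit pairs appear only at test time, so performing well requires generalizing through the encoded addition constraints rather than memorizing pair-label combinations.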
Summary: This paper studies neuro-symbolic learning tasks on weakly supervised setting (i.e., lacking direct label supervision of neural networks). To incorporate symbolic knowledge into training, this work integrates SMT solvers into the forward and backward passes of a deep network layer. The key idea is to establish the surrogate gradient of SMT solver-based reasoning, allowing for the back-propagation of this SMT layer. Experimental evaluations on four tasks demonstrate the improvement over several existing approaches. Strengths: - The paper presents a well-motivated and easy-to-implement method for incorporating symbolic knowledge into network training. - The formalization derives some good theoretical results, and has proofs presented in the appendix. Weaknesses: - Theorem 2 requires both the hypothesis set and the loss function to be convex, which is impractical. Moreover, it would be better to decompose this theorem into two parts. - The first part is to discuss the convergence of the SGD algorithm. This only requires Lipschitz and smoothness assumptions. - The second part is to analyze the properties of the grounding hypothesis under convex assumptions. - The exactly-one assumption made by Theorem 2 is quite vacuous. From my understanding, with this assumption, one can derive the label supervision by using SMT solver (or correct me if not). Moreover, this assumption avoids the discussion of shortcuts problem (i.e., how to distinguish different satisfying assignments) in such neuro-symbolic learning paradigm [1, 2]. - Some related work is missing. For instance, [3] uses SMT solvers and incorporates MCMC sampling to support network training. Additionally, it is suggested to compare some differentiable logic methods [4, 5]. Particularly, [4] also ensures the satisfaction of symbolic constraints in inference stage. [1] Zenan Li, Zehua Liu, Yuan Yao, Jingwei Xu, Taolue Chen, Xiaoxing Ma, Jian Lu. 
Learning with Logical Constraints but without Shortcut Satisfaction. [2] Marconato, Emanuele, Stefano Teso, and Andrea Passerini. Neuro-Symbolic Reasoning Shortcuts: Mitigation Strategies and their Limitations. [3] Zenan Li, Yuan Yao, Taolue Chen, Jingwei Xu, Chun Cao, Xiaoxing Ma, Jian Lu. Softened Symbol Grounding for Neuro-symbolic Systems. [4] Nicholas Hoernle, Rafael Michael Karampatsis, Vaishak Belle, Kobi Gal. MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks. [5] Zhun Yang, Joohyung Lee, Chiyoun Park. Injecting Logical Constraints into Neural Networks via Straight-Through Estimators. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Some citations are duplicate, e.g., [31] and [32], [24] and [25]. - Algorithm 2 is unclear to me. For example, in Line 2, how to compute the BCE loss for $\text{sign}(z)$ in $\\{-1,0,1\\}$. Moreover, the gradient of $\text{sign}(z)$ still vanishes almost everywhere? - The authors claim that $\ell(\hat{y}, y^*) \leq \ell(y, y^*)$ (Line 211), how to derive this property? - How does the approach avoid the shortcut problem? For example, in the MNISTAdd task, if the image ``4+7=11`` is incorrectly recognized as ``3+8=11``, the surrogate gradient may reinforce this incorrect prediction instead of correcting it. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the limitations of the approach have been partially presented. The work does not have negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thorough and insightful review. We address the reviewer's concerns & questions as follows. > The exactly-one assumption made by Theorem 2 is quite vacuous. From my understanding, with this assumption, one can derive the label supervision by using SMT solver (or correct me if not). Moreover, this assumption avoids the discussion of shortcuts problem (i.e., how to distinguish different satisfying assignments) in such neuro-symbolic learning paradigm. **A:** This comment is very insightful and gave us an idea to restructure the proof. Condition (1) just puts some structure on the sample and label sets. Condition (3) is the standard condition needed for convergence of SGD. Condition (2) is needed to establish a one-to-one correspondence between labels and constants in Z, and is needed for using techniques for convergence of SGD, so we can minimize over grounding terms. In other words, assumption (2) allows us to "roughly speaking" move seamlessly between labels and constants in Z. So, if there are some other weaker technical conditions for convergence of SGD, they can easily replace condition (3). Relaxing condition (2) is an interesting open problem. We will clarify this in the revision of the paper. We also give the key ideas in the main paper. > Some related work is missing. For instance, [3] uses SMT solvers and incorporates MCMC sampling to support network training. Additionally, it is suggested to compare some differentiable logic methods [4, 5]. Particularly, [4] also ensures the satisfaction of symbolic constraints in inference stage. **A:** Thank you for bringing these to our attention; we will incorporate them into our related work. We do compare the performance of our approach with that of Ahmed et al. (2023), which we see as a method based on differentiable logic. The results are shown in Table 1 (right). 
The reason that we chose this as our point of comparison is that the authors used the same set of Sudoku instances to benchmark their work as we consider in our evaluation, and their publicly-available code made it straightforward to reproduce on our hardware. We are happy to include other comparisons, but we are concerned that the code we are able to find for [4] does not have Sudoku constraints, and it is unclear how feasible it is to encode them given the DNF restriction on domain theories in that framework. > In Algorithm 2, for example, in Line 2, how to compute the BCE loss for $sign(z)$ in {-1, 0, 1}. Moreover, the gradient of $sign(z)$ still vanishes almost everywhere. **A:** We are taking the gradient w.r.t. $z[i]$ instead of $sign(z[i])$ in Algorithm 2 (Line 9), so the gradient does not vanish here. We will clarify this in the writing. > The authors claim that $\ell(\hat{y}, y^\star) \le \ell(y, y^\star)$, how to derive this property? **A:** This is elaborated further in the proof of Theorem 2, which is given in the supplementary material; it is far from obvious that this is where such an explanation would be, and we will provide a pointer in the main text in future drafts. Let $y^\star$ be an output that achieves smaller loss. Our claim is essentially that the sign of $\hat{y}$ computed on line 3 of both algorithms must be equal to that of $y^\star$ in each dimension. Because the outputs of SMTLayer are always Boolean-valued, this is sufficient, and it follows from two facts:
- At any coordinate $i$ where $y[i] \ne y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = \mathsf{sign}(y)[i]$.
- At any coordinate $i$ where $y[i] = y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = -1 \cdot \mathsf{sign}(y)[i]$.

This reasoning doesn't depend on the specifics of $\ell$, and it holds even if the loss is not simple cross entropy. 
This also illustrates why the same reasoning applies if the SMTLayer is embedded deep in the network, as the remainder of the network can be seen as comprising part of $\ell$. > How does the approach avoid the shortcut problem? For example, in the MNISTAdd task, if the image 4+7=11 is incorrectly recognized as 3+8=11, the surrogate gradient may reinforce this incorrect prediction instead of correcting it. **A:** This is a great question, and one that we believe requires followup work to investigate. In section 5, we compare the representations learned by SMTLayer versus those learned by Scallop. We found that even when training data is impoverished, the representation learned by SMTLayer is correct with respect to the domain theory on most instances (~99%), whereas Scallop’s is not (50%). We view this as evidence that in these cases our approach successfully avoided the shortcut problem, and our conjecture is that the gradients computed in our backward pass are more sparse, as they are derived from a minimal unsat core. We find that ablating the experiment, replacing minimal unsat core computation with one that instead finds _any_ (non-minimal) set of changes that satisfy constraints, leads to incorrect representations in these cases. We also point out that our approach does not solve (or claim to solve) this problem in all cases. As we report in the same subsection of section 5, ablating the visual algebra experiments by changing the network’s architecture (monolithic versus segmented) also leads to incorrect representations, and similar end-to-end performance as Scallop. So while SMTLayer seems to improve on past work in this regard, understanding precisely why is important future work. --- Rebuttal Comment 1.1: Title: Reply Comment: Thank you for addressing my concerns. I have raised my score to 6.
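The two sign facts cited in the rebuttal above can be checked numerically for a simple margin-style loss $\ell(y, y^\star) = -\sum_i y^\star[i]\, y[i]$ on outputs in $\{-1, +1\}$ (this particular loss is our illustrative choice, not necessarily the one used in the paper):

```python
# Small numeric check (our illustration, not the authors' code) of the two
# sign facts: the gradient's sign equals sign(y[i]) at mismatched
# coordinates and -sign(y[i]) at matched ones, so descending it flips
# exactly the disagreeing bits.
import itertools
import numpy as np

def grad_margin_loss(y, y_star):
    # d/dy of -sum_i y_star[i] * y[i]; constant in y for this loss.
    return -np.asarray(y_star, float)

ok = True
for y in itertools.product([-1, 1], repeat=3):
    for y_star in itertools.product([-1, 1], repeat=3):
        g = grad_margin_loss(np.array(y, float), y_star)
        for i in range(3):
            expected = y[i] if y[i] != y_star[i] else -y[i]
            ok = ok and (np.sign(g[i]) == expected)
print(ok)  # True: both sign facts hold at every coordinate
```

Exhausting all 64 pairs of length-3 sign vectors confirms both facts for this loss, which is the structure the Theorem 2 argument relies on.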
Summary: This work proposes to incorporate SMT constraints in the neural network to encode domain knowledge. Specifically, it proposes an unsat-core-based approach and a MaxSMT-based approach for differentiable training in the presence of SMT constraints. Empirical evaluations on several benchmark problems are presented. Strengths: The proposed methods in this work come with theoretical guarantees on convergence as well as significantly better performance than several baseline approaches. Weaknesses: - The description of the proposed method, especially Sec 4, is mostly in English with the math hidden behind it, which can be confusing for readers who want to know the technical details. I put some of the technical questions I have in the question part. - The proposed method seems limited to classification tasks, without discussion of its generalization. - For the experiments, only the performance of the unsat-core-based algorithms is presented, and there are no empirical evaluations of the MaxSMT-based algorithm. The evaluation would be more comprehensive if both qualitative and quantitative comparisons of the two proposed approaches were discussed. Minor: - The symbol at Line 104 is not defined. - The double quote at Line 251 is not properly typeset. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - At Line 38, it says that the learning of the compatible representation can be done "without requiring label supervision". Still, this work seems to focus on supervised learning. Can the authors elaborate on what they mean by "without requiring label supervision" to clear up the confusion? - Throughout this work, the loss is always assumed to be binary cross-entropy. I wonder how much this method generalizes to other losses. More technical questions: - How are the constraints in the MNIST addition problem defined? In Fig 1, it is unclear why the label is represented by five digits. - From Sec 4.1, there is a conversion from Boolean vectors to continuous values.
I wonder how the number of bits in the vector is decided and whether it affects predictive performance. - How is the amended output $\hat{y}$ computed, is it Boolean or continuous, and how is the inequality at Line 211 guaranteed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thorough review. We address the reviewer's questions as follows. > The proposed method seems limited to classification task, without discussions on its generalization. Throughout this work, the loss is always assumed to be binary cross-entropy. I wonder how much this method generalizes to other loss. **A:** Because the goal of both SAT and MaxSAT is to optimize for the assignment of variables, cross entropy is the most natural choice here, as it is in other related work. However, there is no technical limitation that would prevent us from using a type of loss other than cross entropy. This is elaborated further in the proof of Theorem 2, which is given in the supplementary material; we recognize that it is far from obvious that the explanation would be found there, so we will provide a pointer in the main text in future drafts. Let $y^\star$ be an output that achieves smaller loss. Our claim is essentially that the sign of $\hat{y}$ computed on line 3 of both algorithms must be equal to that of $y^\star$ in each dimension. Because the outputs of SMTLayer are always Boolean-valued, this is sufficient, and it follows from two facts: - At any coordinate $i$ where $y[i] \ne y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = \mathsf{sign}(y)[i]$. - At any coordinate $i$ where $y[i] = y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = -1 \cdot \mathsf{sign}(y)[i]$. This reasoning does not depend on the specifics of $\ell$, and it holds even if the loss is not simple cross entropy. This also illustrates why the same reasoning applies if the SMTLayer is embedded deep in the network, as the remainder of the network can be seen as comprising part of $\ell$. > For the experiments, only the performance of the unsat core based algorithms is presented and there is not empirical evaluations of the MaxSMT based algorithm presented.
> The evaluation would be more comprehensive if both qualitative and quantitative comparisons of the two proposed approaches are discussed. **A:** This is correct. Because MaxSMT requires solving a discrete optimization problem, it is *significantly* more costly than SMT with unsat core tracking enabled. Because we found that the MaxSMT backward pass did not lead to better results on any of our benchmarks, we did not run the full battery of experiments on it. We are more than happy to provide these numbers for the sake of comprehensiveness in future versions of the paper. > At Line 38, it says that the learning of the compatible representation can be done "without requiring label supervision". Still, this work seems to focus on supervised learning. Can the authors elaborate more on what they mean by "without requiring label supervision" to clear the confusion? **A:** By “without requiring label supervision”, we are referring to the fine-grained labels on an internal representation that would be required to learn, for example, the representations studied in Concept Bottlenecking [Koh et al. 2023]. For example, for learning Visual Sudoku, supervised learning without SMTLayer (or SATNet [Wang et al. 2019]) requires labels of individual digits, rather than just the solution to a set of Sudoku puzzles, to learn a dedicated digit classifier first. Our work sidesteps the need to manually break a problem apart in this fashion and obtain intermediate labels to supervise on, and instead allows learning a predictor end-to-end from just labels on the targeted task. We will clarify this when revising the paper. > How are the constraints in the MNIST addition problem defined? In Fig 1, it is unclear why the label is represented by five digits. **A:** Figure 1 uses a binary encoding of digits, because SMT solvers support constraints involving integers that are encoded this way.
However, only four bits per digit are necessary to represent the inputs, while 5 are needed to represent the output. We attempted to simplify our presentation of this theory by uniformly representing numbers as 5-bit integers, and apologize if this caused confusion. We will make this clearer in the text by discussing the encoding. > From Sec 4.1, there is a conversion from Boolean vectors to continuous values. I wonder how the number of bits in the vector is decided and whether it affects predictive performance. **A:** To make sure we understand this question: first, continuous values provided by the layers below SMTLayer are converted into a vector of Boolean values by taking their sign. SMTLayer’s output is computed by taking a vector of Boolean values and making them “continuous” by letting components be -1 for False entries in the vector, and +1 for True entries. The number of bits in each vector is determined by the constraints (domain theory) in the SMTLayer. For visual algebra, for example, there are four digits given as input to the theory, and one digit as output (the solution to the equation). Each digit can be represented in four bits, so the input must be 16 bits, and the output 4 bits. If one added more bits on either end, it would not impact performance, as the theory would not have a use for them. If one used fewer bits, it would mean that the theory was operating over, e.g., 3-bit integers, which would be unable to represent all of the digits depicted in training and test instances. > How is the amended output $\hat{y}$ computed, is it Boolean or continuous, and how is the inequality at Line 211 guaranteed? **A:** $\hat{y}$ is Boolean and constructed at Line 3 in Algorithm 2 (and Algorithm 4), which reverses or retains the sign of each element in $y$ based on the gradient. Please refer to our response to the first question in our rebuttal, where we further elaborate on how $\hat{y}$ is constructed.
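To illustrate the conversions described above, a small sketch (function names are ours; the sign/±1 convention follows the answer, while the little-endian bit order is an assumption for illustration):

```python
import numpy as np

def to_boolean(z):
    # continuous activations from the layers below -> Boolean vector, by sign
    return z > 0

def to_continuous(b):
    # Boolean SMTLayer output -> "continuous": False -> -1.0, True -> +1.0
    return np.where(b, 1.0, -1.0)

def from_bits(bits):
    # little-endian Boolean vector -> integer, as the theory would read a digit
    return sum(int(b) << i for i, b in enumerate(bits))

z = np.array([0.3, -1.2, 2.0, 0.7])   # 4 continuous values for one 4-bit digit
print(to_continuous(to_boolean(z)))   # [ 1. -1.  1.  1.]
print(from_bits(to_boolean(z)))       # 13  (bits 1,0,1,1 little-endian)
```

The 4-versus-5-bit point follows from ranges: a single digit in 0..9 fits in 4 bits, while the sum of two digits can reach 18 and therefore needs 5 bits.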
Please let us know whether this addresses your concerns; we are happy to respond to any follow-up questions. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarifications. I'll keep my score.
Summary: The authors implement SMTLayer, which integrates an SMT solver into a differentiable module, suitable for use with deep learning. SMTLayer takes a vector of floating-point values as input, and produces a vector of floating-point values as output. This input vector is cast to Boolean values, and the output of SMTLayer is a set of Boolean assignments to variables such that some formula $\phi$ is satisfiable. The formula is not learned: it represents a constraint that is specified as part of the training task. In order to calculate a gradient, the problem is run in reverse -- given a gradient on the outputs, SMTLayer will again solve for satisfiability, attempting to find an alternate set of inputs and outputs such that the outputs have lower loss, and $\phi$ is satisfiable. The authors provide two different SMT solvers for the forward pass, and two different mechanisms for computing gradients on the backwards pass. They then demonstrate that SMTLayer can be used to solve challenging problems (e.g. MNIST Sudoku puzzles) with far greater accuracy than existing SOTA. Strengths: This paper is very well written, and it addresses an important problem -- namely how to incorporate symbolic and logical computations into deep neural nets in a way that is principled, interpretable, and differentiable. The experiments are well-designed, the results are compelling, and the authors plan to make the code available as open source. I believe this to be an important paper with potentially high impact. Weaknesses: There were a few areas that I did not quite understand -- see questions below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: On the backwards pass, SMTLayer must take the gradient of the output $y$, and use it to construct an alternative output $\hat{y}$ with lower loss. Could you elaborate a bit on how this is done?
What happens if SMTLayer is embedded deep within another neural network, so that you have a gradient on $y$, but that gradient does not come from a simple cross-entropy loss function? (E.g., there are a bunch of additional NN layers between $y$ and the loss function.) Do you sample several alternative options for $\hat{y}$, in order to find one that works? Note that I am not intimately familiar with the details of SMT solvers. What is the typical cost of running an SMT solver, compared to the rest of the neural network? Does the solver dominate training time? Can you think of additional areas where an SMT solver could potentially be applied? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do not discuss negative societal impacts, but IMO, any impacts are likely to be positive -- incorporating interpretable logical constraints into models seems like an improvement over SOTA with respect to most issues of AI alignment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thorough review. We address the reviewer's questions as follows. > Can you elaborate how gradients are computed in the SMTLayer. **A:** This is elaborated further in the proof of Theorem 2, which is given in the supplementary material; we recognize that it is far from obvious that the explanation would be found there, so we will provide a pointer in the main text in future drafts. Let $y^\star$ be an output that achieves smaller loss. Our claim is essentially that the sign of $\hat{y}$ computed on line 3 of both algorithms must be equal to that of $y^\star$ in each dimension. Because the outputs of SMTLayer are always Boolean-valued, this is sufficient, and it follows from two facts: - At any coordinate $i$ where $y[i] \ne y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = \mathsf{sign}(y)[i]$. - At any coordinate $i$ where $y[i] = y^\star[i]$, $\mathsf{sign}(\partial_y\ell(y, y^\star))[i] = -1 \cdot \mathsf{sign}(y)[i]$. This reasoning does not depend on the specifics of $\ell$, and it holds even if the loss is not simple cross entropy. This also illustrates why the same reasoning applies if the SMTLayer is embedded deep in the network, as the remainder of the network can be seen as comprising part of $\ell$. > What happens if SMTLayer is embedded in another network, so gradients on $y$ are not derived from cross entropy? **A:** We use cross entropy (CE) as an example in the algorithms because we are mostly dealing with assignment problems in this paper, where CE is a natural choice for the loss function, as it is in other related literature. There is no fundamental limitation that would prevent us from using other types of losses and/or more layers between the output of SMTLayer and the ground-truth labels, as Theorem 2 (and its proof in the supplementary material) ensures one can construct meaningful gradients over SMTLayers. Please refer to our response to the question **Can you elaborate how gradients are computed in the SMTLayer.
**. > What is the typical cost of running an SMT solver, compared to the rest of the neural network? Does the solver dominate training time? **A:** For the applications studied in Section 5, the solver is typically able to find a solution very quickly. Notice the SMT line in Table 1 (right): Sudoku is the most complex theory that we studied, and it takes 0.05 seconds on average for Z3 (the particular solver we used) to find a solution. The runtime does depend on the complexity of the theory, and there is no straightforward way to predict how long an SMT solver will take, for example, when there are hundreds or thousands of constraints operating over a similar number of variables. However, we are not at the moment aware of a learning task that would benefit from such complex constraints. The main performance bottleneck that comes from incorporating an SMT solver on moderately complex theories is parallelism. Solvers run on CPU cores, which will not be as numerous as CUDA cores, so large batches will likely need to be serialized to some extent depending on available resources. Note that our current prototype does not parallelize solver calls, so the performance figures reflected in Table 1 will likely stand to improve from further engineering effort. > Can you think of additional areas where an SMT solver could potentially be applied? **A:** Some of the areas where SMT solvers have been useful include program analysis and verification, planning, scheduling, control, model-based design, and systems biology, to name a few. Many of these are also areas where learning has shown good potential, and we are very excited to continue exploring how these tools can be used together to solve interesting problems in all of these domains! Let us know if you have more questions or comments, and we are happy to respond. --- Rebuttal Comment 1.1: Title: Further questions... Comment: I'm not quite sure you answered my question...
> Let $y^{\star}$ be an output that achieves smaller loss... Okay, but how do you get $y^{\star}$? If the SMTLayer feeds directly into a cross-entropy loss function, then you have ground truth for what $y$ --- the output of the SMTLayer --- is supposed to be, and thus can trivially find a $y^{\star}$ with lower loss. However, if the SMTLayer is embedded deep within another neural network with some arbitrary loss function (i.e. there are many NN layers between $y$ and the loss), then all you have is $y$ and a gradient on $y$. Since $y$ is boolean-valued, simply applying the gradient with a small learning rate (as is usual for gradient descent) will yield a non-boolean $y^{\star}$ that is very close to the original $y$, and unlikely to differ in sign. So you probably need to sample different possible boolean outputs, perhaps informed by the gradient, and re-run the latter half of the NN to see if the loss goes down, which is highly non-trivial. Am I missing something here? It's perfectly fine to say: "in our implementation the SMTLayer must be connected directly to the loss function, and only some losses are supported," I just want to be clear about it. --- Reply to Comment 1.1.1: Title: Clarification Comment: > Since $y$ is boolean-valued, simply applying the gradient with a small learning rate (as is usual for gradient descent) will yield a non-boolean $y^{\star}$ that is very close to the original $y$, and unlikely to differ in sign. That is absolutely correct. We compute $\hat{y}$ via $\mathsf{sign}(y) - 2 \times \mathsf{sign}(\partial_y \ell)$, rather than the usual gradient descent update. Because $y$ and $\hat{y}$ are boolean-valued, we only care about the sign of the gradient, and not its magnitude. We will clarify the writing around this point, as it is subtle. We also want to be clear that we have only done experiments that use cross-entropy loss, with the SMT layer connected at the top.
While we believe that the same underlying reasoning applies to other losses and configurations of the layer within a network, we do not claim to show that this is effective in practice on any benchmarks.
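As a concrete sketch of the update described in the clarification above (a minimal illustration in NumPy; `amend_output` is our name, and we take an outer sign so the result lands back in {-1, +1}):

```python
import numpy as np

def amend_output(y, grad):
    # sign(y) - 2*sign(grad), then sign again:
    # flips y[i] where the gradient's sign agrees with y[i] (loss pushes against it),
    # retains y[i] where the gradient's sign opposes it.
    return np.sign(np.sign(y) - 2.0 * np.sign(grad))

y    = np.array([ 1.0, -1.0,  1.0])
grad = np.array([ 0.3, -0.2, -0.5])   # agrees, agrees, opposes
print(amend_output(y, grad))          # [-1.  1.  1.]
```

Because only the sign of the gradient enters the update, the magnitude concern raised in the comment does not arise: a vanishingly small gradient with the "wrong" sign still flips the corresponding coordinate.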
NeurIPS_2023_submissions_huggingface
2023
TexQ: Zero-shot Network Quantization with Texture Feature Distribution Calibration
Accept (poster)
Summary: This paper starts from an interesting viewpoint: the synthetic samples of previous ZSQ methods usually fail to model texture features similar to those of real data. As a result, the authors suggest retaining the texture features. First, they synthesize calibration images with a Laws texture feature energy preservation loss. Then, the calibration images are used to provide mean and variance centers for each class, which are then used to guide the generator to synthesize samples. Finally, they propose using a mixup strategy to further augment the synthetic data. Strengths: 1, The viewpoint is interesting and persuasive. Also, the results do demonstrate the effectiveness of their method. 2, Sec 5.1 provides much meaningful discussion, which makes this paper more convincing. Weaknesses: 1, To be honest, the writing of this paper is not good enough and I get lost in many parts. For example, Line 171, ‘Q always faces linear decisions when inferencing’ is very strange. I can't understand the meaning behind it. In Line 166, $\mu_l(\bar{x} \mid \bar{y} = k)$ and $\sigma_l(\bar{x} \mid \bar{y} = k)$ should be the calibrating mean and variance centers, since the calibration set is $C = \{(\bar{x}, \bar{y})\}$ in Line 160. 2, Why do the authors want to use a calibration center in Step 2: Synthetic samples generation? The paper lacks some discussion. 3, The principle behind calibration set generation and synthetic samples generation is not clear. Why do you still need synthetic samples, even though you already have the calibration set? I know such a way could give better performance. The authors should provide an ablation study using only the calibration set. 4, The role of Eq 6. The K set only selects top-k texture elements that account for more than 80%. However, the images are randomly initialized from Gaussian noise. I am wondering if the distribution of the K set stays the same for each synthetic image. For example, R5R5 fits Gaussian noise very well and may be selected for every image.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1, Line 193, what is $\bar{D}$? 2, It actually uses synthetic data from the generator and from direct optimization. It may be an unfair comparison. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do provide a discussion about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **We would like to thank the reviewer for the thorough and thoughtful review of the paper. In the following, we address each point raised.** We not only respond to or clarify the points but have also made corresponding changes to the main text to make them clearer to future readers. --- **Q1: Line 193, what is $\bar{D}$?** **A1: $\bar{D}$ denotes the synthetic sample set $\bar{D} = \{(\bar{x}, \bar{y})\}$. The definition is presented in Section 3.1.2 Data synthesis (line 117).** --- **Q2: It actually uses synthetic data from the generator and from direct optimization. It may be an unfair comparison.** **A2: Zero-shot quantization (ZSQ) requires strictly avoiding real data. In our work, both the synthetic dataset $\bar{D}$ and the calibration dataset $C$ are synthesized, so the comparison conforms to the fairness requirements of ZSQ.** --- **Q3: Line 171, ‘Q always faces linear decisions when inferencing’ is very strange. I can't understand the meaning behind it.** **A3: We have rewritten this paragraph and hope it is clearer.** Line 171: Our motivation lies in the fact that the quantized model Q always faces samples with soft labels during inference, so we hope to synthesize samples with soft labels, too. --- **Q4: Line 166, where $\mu_l(\bar{x} \mid \bar{y} = k)$ and $\sigma_l(\bar{x} \mid \bar{y} = k)$ should be the calibrating mean and variance center since calibration set $C = \{(\bar{x}, \bar{y})\}$ in Line 160.** **A4: We checked the expression and found no problem, but we would like to make this point clearer to future readers.** * In the synthetic sample generation stage, we use layered BNS alignment to constrain the generator to synthesize samples that fit the calibration centers. Therefore, Eq. 8 includes the calibrating center and the BNS of synthetic samples. In other words, with this method, texture feature distribution knowledge is transferred from the calibrating center to the synthetic samples.
* We will add the superscript $^{\bar{D}}$ to the variables involved in the synthetic samples to clarify the data source. --- **Q5: Why do the authors want to use a calibration center in Step 2: Synthetic samples generation? The paper lacks some discussion.** **A5: We will supplement the relevant details in Section 3.2.2 (line 155).** * The introduction of calibration centers is a strategy for information transfer, which is also done in FDDA [12] and ClusterQ [17]. * The principle is that the batch normalization statistics (BNS) of a convolutional neural network are highly correlated with the class, so introducing a calibration center in Step 2 means transferring the information of the calibration center extracted in Step 1 to the generator. --- **Q6: The principle behind calibration set generation and synthetic samples generation is not clear. Why do you still need synthetic samples, even though you already have the calibration set? I know such a way could give better performance. The authors should provide some ablation study when only using the calibration set.** **A6: These are commonly used information transfer strategies, and we will supplement the detailed motivation in Section 3.3.2 (line 185) for readers' ease of understanding:** * Distillation (optimization) [14] and generation (generator) [13] are two commonly used methods for synthesizing data. The distillation scheme directly optimizes Gaussian noise to synthesize samples, which takes a long time but has fewer information transfer links, so the knowledge is more accurate. The generation scheme requires training a generator. It is suitable for mass data generation, but the increase in information transfer links is not conducive to accurate knowledge transfer. * Our work combines the best of both schemes. We use the optimization method to synthesize accurate calibration centers for each class and introduce the calibration centers into the generator to synthesize samples in batches, taking both accuracy and speed into account.
* In addition, another possible scheme is to directly apply texture feature constraints to the generator. However, applying the calibration loss to the generator results in slow iterations and homogeneous samples. So, this option was not adopted. We conducted ablation studies on calibration samples and synthetic samples. **The three sample synthesis schemes and their results are shown in Table G. The scheme adopted in this paper is the optimal solution.** Due to the character limit, ***Table G is displayed in the global response area.*** --- **Q7: The role of Eq 6. The K set only selects top-k texture elements that account for more than 80%. However, the images are randomly initialized from Gaussian noise. I am wondering if the distribution of the K set stays the same for each synthetic image. For example, R5R5 fits Gaussian noise very well and is selected for each image.** **A7: The K sets are similar when the Gaussian noise is initialized, but they eventually become different, especially across different classes of images.** * Many studies [8-10] have shown that texture features are conducive to classifying images, so the final K sets vary across different classes of images. * R5R5 is well suited to Gaussian noise, so it is selected by most images in the initial stage and acts as a noise-reduction function. ``` [8] Filtering for texture classification: A comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1999. [9] Image Classification Using Laws' Texture Energy Measures. 1987. [10] Texture and shape biased two-stream networks for clothing classification and attribute recognition. CVPR. 2020. [12] Fine-grained data distribution alignment for post-training quantization. ECCV. 2022. [13] Generative low-bitwidth data free quantization. ECCV. 2020. [14] Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization. CVPR. 2022. [17] ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization.
arXiv. 2022. ``` --- Rebuttal Comment 1.1: Comment: The authors solved my problems and I am willing to raise my rating.
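For readers unfamiliar with the Laws masks discussed in A7, here is a minimal sketch using the standard Laws 1D kernels (Laws, 1980); this is illustrative only, and TexQ's exact filtering and energy computation may differ:

```python
import numpy as np

# Standard Laws 1D kernels: level, edge, spot, wave, ripple.
L5 = np.array([ 1,  4, 6,  4,  1])
E5 = np.array([-1, -2, 0,  2,  1])
S5 = np.array([-1,  0, 2,  0, -1])
W5 = np.array([-1,  2, 0, -2,  1])
R5 = np.array([ 1, -4, 6, -4,  1])

def laws_mask(a, b):
    # 5x5 2D mask as the outer product of two 1D kernels (e.g. R5R5)
    return np.outer(a, b)

def texture_energy(img, mask):
    # valid-mode 2D correlation followed by mean absolute response
    h, w = img.shape
    resp = np.array([[np.sum(img[i:i+5, j:j+5] * mask)
                      for j in range(w - 4)] for i in range(h - 4)])
    return np.mean(np.abs(resp))

rng = np.random.default_rng(0)
noise = rng.normal(size=(32, 32))
flat  = np.ones((32, 32))
# R5R5 responds strongly to high-frequency noise and not at all to flat
# regions, consistent with it being selected early for Gaussian noise.
assert texture_energy(noise, laws_mask(R5, R5)) > texture_energy(flat, laws_mask(R5, R5))
```

Since R5 sums to zero, any R5-derived mask gives zero response on constant regions, which is why its energy share can only stay large while the synthetic images remain noise-like.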
Summary: TexQ is a novel zero-shot quantization (ZSQ) method that addresses the limitations of conventional synthetic samples in retaining texture feature distributions. It achieves state-of-the-art results in ultra-low bit width quantization, with a significant accuracy increase compared to existing methods on ImageNet. Strengths: Strengths are shown below: 1. This paper explores an important direction motivated by privacy and security concerns. 2. The performance on the benchmark datasets is promising. 3. The paper is well-organized. Weaknesses: 1. The architectures used, ResNet-18 and MobileNet-V2, may be insufficient. What about the most commonly used ResNet-50? 2. The datasets used for validation, CIFAR and ImageNet, may be insufficient. What about other tasks, such as detection on COCO? Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors could refer to the Weaknesses. Additionally, I also have some open questions to discuss. 1. The authors could discuss how to ensure the generalization of the method, since the calibration is conducted on a specific dataset. 2. Could the method be used in the Transformer structure? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Thank you for your positive review and constructive comments. We have performed some of the suggested experiments and will include them in the camera-ready version.** --- **Q1: Maybe the structures used are insufficient with ResNet-18 and MobileNet-V2. What about the most commonly used ResNet-50?** **A1: Thank you for your advice. We supplement the experiments on ResNet-50, and the results are shown in Table D.** * **We continue to show strong performance on ResNet-50; for example, we lead AdaDFQ by 7.64%/2.34% Top-1 accuracy in the 3-bit/4-bit cases**. * For our experiments, we follow the settings of Section 4.1, except that we raise the weights of the L-BNS loss and the BNS alignment loss to ensure the convergence of the model. --- #### **Table D. Top-1 accuracy (%) results of ResNet-50 on ImageNet. WBAB indicates that the weights and activations are quantized to B bits.** | Methods | W4A4 | W3A3 | |---|---|---| | GDFQ (ECCV 2020) | 54.16 | 0.31 | | ZAQ (CVPR 2021) | 53.02 | - | | ARC (IJCAI 2021) | 64.37 | 1.63 | | Qimera (NeurIPS 2021) | 66.25 | - | | ARC+AIT (CVPR 2022) | 68.27 | - | | AdaSG (AAAI 2023) | 68.58 | 16.98 | | AdaDFQ (CVPR 2023) | 68.38 | 17.63 | | **TexQ (Ours)** | **70.72** | **25.27** | --- **Q2: Maybe the datasets verified are insufficient with Cifar and ImageNet. What about the others tasks, such as CoCo of detection.** **A2: Generally, most zero-shot quantization works [12-16] in the research community are validated on CIFAR-10/100 and ImageNet, as results on these datasets are persuasive and convincing.** * For our work, we are the first to propose the idea of texture calibration and validate it, which is the focus. * Expanding to downstream tasks will be a new research direction, which will also be the direction of our future exploration.
--- **Q3: The authors could discuss how to ensure the generalization of the method since the calibration is conducted on a specific dataset.** **A3: This is an open question worth exploring.** * In our work, we apply the same texture feature distribution calibration method (including hyperparameters) across datasets of different image sizes and scales. The strong results show that our method is capable of generalization, even though these settings are not optimal for every dataset. * Further considering a specific dataset, the following two aspects can be explored. First, trainable texture feature filters can be used to extract specific features on a specific dataset. Second, adaptive methods can be introduced to balance the multiple losses. --- **Q4: Could the method be used in the Transformer structure?** **A4: The proposed calibration idea can be transferred to other computer vision models, including Transformer structures.** * However, CNNs and Transformers currently require different quantizers, and the quantization framework we adopted does not support Transformer-structure models. * More importantly, our main contribution is to propose the idea of texture calibration in zero-shot quantization and verify it. We will introduce a quantizer for the Transformer structure in future work. --- ``` [12] Zhong, Yunshan, et al. Fine-grained data distribution alignment for post-training quantization. European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [13] Xu, Shoukai, et al. Generative low-bitwidth data free quantization. Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16. Springer International Publishing, 2020. [14] Zhong, Yunshan, et al. Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [15] Qian, Biao, et al.
Rethinking data-free quantization as a zero-sum game. arXiv preprint arXiv:2302.09572. 2023. [16] Qian, Biao, et al. Adaptive Data-Free Quantization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. ``` --- Rebuttal Comment 1.1: Comment: Thank the authors for their explanation. All my concerns are addressed. I support accepting this paper from my perspective, so I decided to increase my score to 7.
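As background for the WBAB notation used in Table D above (e.g., W4A4 means 4-bit weights and activations), the underlying operation is uniform quantization to B bits. The sketch below is our own generic illustration of a naive asymmetric quantizer with an explicit clip (the kind of quantizer these rebuttals refer to), not the paper's exact Eq. 1:

```python
import numpy as np

def asymmetric_quantize(x, bits=4):
    """Simulate B-bit asymmetric uniform quantization of a float tensor.

    Scale and zero-point are derived from the tensor's min/max range;
    round-to-nearest plus an explicit clip keeps integers in [0, 2^bits - 1].
    """
    qmax = 2 ** bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / qmax if x_max > x_min else 1.0
    zero_point = round(-x_min / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)  # clip + round
    return (q - zero_point) * scale  # de-quantized (simulated) values

w = np.array([-1.0, -0.25, 0.0, 0.5, 1.0])
w_q = asymmetric_quantize(w, bits=4)
```

With a min/max-derived scale, the reconstruction error of every value stays within half a quantization step (scale/2), which is why lower bit widths (W3A3) lose noticeably more accuracy than W4A4.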
Summary: This work proposes TexQ, which aims to retain the texture information of the synthetic samples in zero-shot quantization. A texture feature energy distribution calibration method is applied to the synthesized samples, and mixup knowledge distillation is introduced to improve the diversity of the synthetic samples. Extensive experiments with ResNet and MobileNet on the CIFAR and ImageNet datasets prove the effectiveness of this method.

Strengths:
* The paper is well-organized and easy to follow. Fig. 3 is an excellent visualization of the proposed system architecture.
* Measuring the texture feature distribution of the quantization input samples is novel. The comparison (Fig. 4/5) of synthetic samples and natural images is insightful and may help future research on ZSQ.
* The experiment results are convincing, and the high accuracy (especially at 3 bits) demonstrates the effectiveness of TexQ.

Weaknesses:
* The introduction of the concept "LAWS texture feature energy" needs to be improved. The example given in Eq. 5 is not straightforward. Visualization of features extracted using $E_5, S_5, W_5, R_5$ would be better. Why these "texture" features are important to retain should also be discussed further.
* There are too many artificial weighting coefficients $\alpha_i, i \in [1,5]$ balancing the different loss functions. The authors claim in Sec. 4.1 that they "empirically" select the values. An ablation study or sensitivity analysis on these parameters would better prove that the proposed loss is solid and that the better performance does not come from a grid search of the parameters.
* The method is only validated on CNNs. Experiment results on Vision Transformers would be a plus.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
* Most previous work uses the term "data-free quantization (DFQ)" instead of "zero-shot quantization (ZSQ)." I want the authors to make sure that these two terminologies are identical.
* What is the actual (visual/physical/signal) meaning of the features extracted using the filters $E_5, S_5, W_5, R_5$?
* Can TexQ be transferred to language tasks without "texture" in the data?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors include a discussion of the problems to be solved in Section 5.2, which is great. One crucial limitation I would like to raise is that this work is built upon the concept of "texture" and is **only meaningful for visual samples**. The application to other tasks with different modalities (language, speech, etc.) would be limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Thank you for providing new ideas for straightforward visualization and for the interesting question on transfer to language tasks. We would like to address the concerns below:**

---

**Q1: The introduction of the concept "LAWS texture feature energy" needs to be improved. The example given in Eq. 5 is not straightforward. Visualization of features extracted using $E_5, S_5, W_5, R_5$ would be better. Why these "texture" features are important to retain should also be discussed further. What is the actual (visual/physical/signal) meaning of the features extracted using the filters?**

**A1: We agree to provide more details to introduce the concept of "LAWS texture feature energy".**

* More background will be provided in the Introduction and Methods (Section 3.2.1) to introduce the concept of "LAWS texture feature energy", including the details of the LAWS texture feature filters [8] and their applications [9-10]. This will help future readers to understand our work.
* Visualization is a good idea. A straightforward visualization of the extracted texture features will be provided in Figure 5 (line 127), after the concept of "LAWS texture feature energy" is introduced.
* In addition, we will add the actual signal meaning and function of the filters in Section 3.2.1 (line 126). $E_5, S_5, W_5, R_5$ are a series of texture feature filter mnemonics standing for Edge, Spot, Wave, and Ripple. A filtered feature map reflects the degree of matching between the texture and the filter. Complex texture feature distributions are composed of simple texture elements, so it is reasonable to introduce basic texture filters to characterize the texture feature distribution of images. These discussions will be supplemented in the camera-ready version.

---

**Q2: There are too many artificial weighting coefficients balancing the different loss functions. The authors claim in Sec. 4.1 that they "empirically" select the values.
An ablation study or sensitivity analysis on these parameters would better prove that the proposed loss is solid and that the better performance does not come from a grid search of the parameters.**

**A2: Thank you for this suggestion. We agree that the adopted hyperparameter settings are not globally optimal. We adopted the settings of relevant studies [12-14] to avoid time-costly grid searches.**

* More importantly, we focus on verifying the texture feature distribution calibration method, and empirical settings are conducive to a fair comparison with work of the same type.
* Experiments show that our idea works even when we configure the hyperparameters empirically, which is enough to verify the idea.

---

**Q3: Can TexQ be transferred to language tasks without "texture" in the data?**

**A3: That's an interesting question.**

* In computer vision, texture is an important feature of images. Is there any similar feature in fields such as language or speech? We think the answer is yes.
* We agree with the constructive comment made by reviewer yCFw that we can further introduce trainable filters to achieve automatic feature extraction. With such methods, our approach holds promise for applications in other fields.

---

**Q4: The method is only validated on CNNs. Experiment results on Vision Transformers would be a plus.**

**A4: Thank you for this suggestion; the ideas of our work can be transferred to other visual models.**

* Unfortunately, the quantization framework adopted in this paper does not include a quantizer for ViT.
* The emphasis of this paper is to put forward the idea of texture calibration and verify it. We will take Vision Transformers into consideration in subsequent work.

---

**Q5: Most previous work uses the term "data-free quantization (DFQ)" instead of "zero-shot quantization (ZSQ)."
I want the authors to make sure that these two terminologies are identical.**

**A5: The terminologies "data-free quantization (DFQ)" and "zero-shot quantization (ZSQ)" are used interchangeably in the research community.** We will clarify these terms in Related Work (line 86) and adjust the expression in the camera-ready version.

---

```
[8] Randen, Trygve, and John Hakon Husoy. Filtering for texture classification: A comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1999.
[9] Gillett, Will D. Image Classification Using Laws' Texture Energy Measures. 1987.
[10] Zhang, Yuwei, et al. Texture and shape biased two-stream networks for clothing classification and attribute recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[12] Zhong, Yunshan, et al. Fine-grained data distribution alignment for post-training quantization. European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[13] Xu, Shoukai, et al. Generative low-bitwidth data free quantization. Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16. Springer International Publishing, 2020.
[14] Zhong, Yunshan, et al. Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
```

---

Rebuttal Comment 1.1: Title: Thanks for the replies from the authors, a follow-up Comment: I have gone through all the replies from the authors, and some of my concerns and questions are well addressed (Q4, Q5).

A1 follow-up: In the rebuttal, every author can submit a PDF file containing the images, but I did not see one in the authors' rebuttal. It would be good if the visualization in A1 were provided.

A2 follow-up: I quickly checked the papers [12-14], and they do not have as many weighting coefficients balancing different terms.
I still encourage the authors to conduct an ablation study or sensitivity analysis on these parameters (especially $\alpha_{1,..,5}$).

A3 follow-up: For the "texture" in NLP, a concept somewhat analogous to texture in vision could be the distribution of linguistic elements within a text, such as the arrangement of words, phrases, and syntactic structures. This arrangement can provide insights into the text's style. I also agree that trainable filters are a good direction for future work and can help interpretability.

---

Reply to Comment 1.1.1: Title: Response to follow-up A1, A2 Comment: ### We have sent you the anonymous visualization material via official comment to the ACs **(see the top of the page)**, containing Figure A (Visualization of features extracted with LAWS filters) and Figure B (Influence of the trade-off parameters).

**Visualization of the LAWS texture features**

Figure A shows the feature maps extracted by the LAWS texture feature filters.

* It can be observed that different types of images contain different dominant texture features. For example, cracks mainly contain edge features; cloth mainly contains high-frequency points; while trees with many leaves include V-shapes, high-frequency points, and edge features.
* We will supplement Figure A in the camera-ready version, after introducing the concept of "LAWS texture feature energy" (line 127).

**Sensitivity analysis on trade-off parameters**

We display the influence of the trade-off parameters in Figure B. For the experiments, we follow the settings of Section 4.3, Ablation study (Hyperparameters).

* In "Step 1: Calibration set generation", the $α_1$ and $α_2$ from Eq. 10 balance the different losses in the optimization of the calibration samples. We performed a preliminary search for the trade-off parameters when designing this loss. In the grid search, we can see that the optimal configurations of these two parameters are $α_1$ = 2 and $α_2$ = 10.
* In "Step 2: Synthetic sample generation", the $α_3$, $α_4$, and $α_5$ from Eq. 12 balance the losses used to update the generator for synthetic samples. In the grid search, we can see that the optimal configurations of these three parameters are $α_3$ = 0.4, $α_4$ = 0.02, and $α_5$ = 1.8. The configuration is proportionally consistent with the previous study FDDA (Fig. 4 in [12]: $α_2$ = 0.2, $α_4$ = 0.01, $α_3$ = 0.9). That is, grid search results are consistent under similar frameworks.

| $α_1$ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| Acc. (%) | 42.28 | 48.93 | 50.68 | 49.94 | 49.66 |

| $α_2$ | 0 | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| Acc. (%) | 43.52 | 47.57 | 50.68 | 49.60 | 49.50 |

| $α_3$ | 0 | 0.2 | 0.4 | 0.6 | 0.8 |
|---|---|---|---|---|---|
| Acc. (%) | 48.38 | 50.02 | 50.68 | 49.86 | 49.90 |

| $α_4$ | 0 | 0.02 | 0.04 | 0.06 | 0.08 |
|---|---|---|---|---|---|
| Acc. (%) | 45.73 | 50.68 | 50.41 | 50.01 | 49.35 |

| $α_5$ | 0 | 0.6 | 1.2 | 1.8 | 2.4 |
|---|---|---|---|---|---|
| Acc. (%) | 47.12 | 48.98 | 49.85 | 50.68 | 49.76 |

***

```
[12] Zhong, Yunshan, et al. Fine-grained data distribution alignment for post-training quantization. European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
```
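To make the Laws filters discussed above concrete, here is a minimal NumPy sketch based on the standard Laws texture-energy construction [8-9] (our own illustration, not the authors' code): a 2-D mask is built as the outer product of two 1-D mnemonic kernels, and the texture energy map is a local average of the absolute filter response.

```python
import numpy as np

# Laws' 1-D kernels (mnemonics: Level, Edge, Spot, Wave, Ripple).
KERNELS_1D = {
    "L5": np.array([1, 4, 6, 4, 1], dtype=float),
    "E5": np.array([-1, -2, 0, 2, 1], dtype=float),
    "S5": np.array([-1, 0, 2, 0, -1], dtype=float),
    "W5": np.array([-1, 2, 0, -2, 1], dtype=float),
    "R5": np.array([1, -4, 6, -4, 1], dtype=float),
}

def conv2d_same(img, kernel):
    """Naive 'same' 2-D cross-correlation with zero padding (no SciPy needed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def laws_energy(img, row="L5", col="E5", window=15):
    """Filter with the 2-D mask row^T (x) col, then average |response| locally."""
    mask = np.outer(KERNELS_1D[row], KERNELS_1D[col])
    response = np.abs(conv2d_same(img, mask))
    box = np.ones((window, window)) / window ** 2
    return conv2d_same(response, box)  # texture energy map

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # a synthetic vertical edge
energy = laws_energy(img, "L5", "E5")  # L5E5 responds strongly near the edge
```

Because every non-L5 kernel sums to zero, constant regions produce zero response, so the energy map highlights exactly the kind of edge/spot/wave/ripple structure the mnemonics name.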
Summary: The paper points out that there is a strong dependency between the performance of CNNs and the texture features of the dataset. To extend this concept to the quantization field, the paper adopts calibration samples that are trained with manually designed texture filters. In addition to the synthetic samples generally used in ZSQ works (generated by a network trained on the Batch Normalization layers' statistics of a full-precision model), the paper exploits calibration samples to quantize a model without the original dataset. Furthermore, the paper applies mixup data augmentation to improve the quantized model's performance.

Strengths:
- This paper is well-motivated and easy to follow.
- The paper adopts the concept of texture features in zero-shot quantization for the first time.
- The paper demonstrates the proposed method well with several formulas and figures.

Weaknesses:
- It seems costly to generate calibration samples along with synthetic samples when quantizing a neural network.

typo:
- In the 215th row, presnted -> presented

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- In equation 1, is only a rounding operation applied to obtain quantized integers, without a clip operation?
- The paper designs texture filters manually. Is it possible to obtain texture filters through training?
- Calibration samples are obtained without a generator. Is it possible to generate calibration samples with another generator, or with the synthetic sample generator, by applying the texture feature energy distribution calibration loss?
- How expensive is it to generate both kinds of samples, compared to other works that generate synthetic samples only?
- Qimera [1] executed several experiments with mixup and cutmix, similar to the 'Mixup knowledge distillation module' described in 3.2.3, claiming that superposed latent embeddings work better than mixup and cutmix. Can a comparison with those methods be provided?

[1] Choi et al. Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples. In Conference on Neural Information Processing Systems. 2021

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Previous ZSQ works that exploit Batch Normalization layers' statistics for generating synthetic samples are hard to apply to transformer-based models. It is worth analyzing whether the proposed method can be applied to those models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **We thank the reviewer for the helpful reviews that will help strengthen our paper. Our replies are as follows:**

---

**Q1: In equation 1, is only a rounding operation applied to obtain quantized integers, without a clip operation?**

**A1: Thank you for reminding us that we omitted the clip operation in Eq. 1; we have revised it.**

---

**Q2: The paper designs texture filters manually. Is it possible to obtain texture filters through training?**

**A2: This is indeed a promising point.**

* With the manually designed texture feature filters, we are the first to introduce the idea of texture calibration and realize it in quantization for better performance.
* Trainable filters would allow automatic feature design, facilitating adaptation to different datasets and tasks. They will be explored in subsequent work and will hopefully improve performance.

---

**Q3: Calibration samples are obtained without a generator. Is it possible to generate calibration samples with another generator, or with the synthetic sample generator, by applying the texture feature energy distribution calibration loss?**

**A3: These are two possible options. Considering the time cost, we choose to directly extract calibration samples from the model.**

* Introducing another generator would increase the time and information transfer cost of training it.
* Applying the calibration loss to the generator results in slow iterations and homogeneous samples.

So, these two options were not adopted.

---

**Q4: How expensive is it to generate both kinds of samples, compared to other works that generate synthetic samples only?**

**A4: Generating calibration samples requires additional time, which is acceptable for offline PTQ. We generate the calibration set (one calibration sample per class) on an NVIDIA GeForce RTX 3090 GPU.**

* For images of size 3×32×32, the generation speed is about 10 seconds per image.
For images of size 3×224×224, the generation speed is about 20 seconds per image.

* Taking CIFAR-10 as an example, generating the whole calibration set takes 10 × 10 seconds = 100 seconds ≈ 1.67 minutes. Parallel processing to accelerate this process is possible.

---

**Q5: Qimera [11] executed several experiments with mixup and cutmix, similar to the 'Mixup knowledge distillation module' described in 3.2.3, claiming that superposed latent embeddings work better than mixup and cutmix. Can a comparison with those methods be provided?**

[11] Choi et al. Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples. In Conference on Neural Information Processing Systems. 2021.

**A5: This is a very meaningful question. We will address your concerns with several experiments.**

* First of all, we need to clarify that **the applications of the "Mixup knowledge distillation module" and of "Superposed latent embeddings/Mixup/Mixcut" are different.** The "Superposed latent embeddings/Mixup/Mixcut" methods in Qimera [11] generate **samples with mixed labels to fine-tune the quantized model with a cross-entropy loss**. Different from them, we apply the **mixed samples to knowledge distillation with a KL-divergence loss** between the pre-trained model (teacher) and the quantized model (student) (see Eq. 13 in Section 3.3.3), **without using mixed labels to fine-tune** the quantized model with cross-entropy.
* Why do superposed latent embeddings (Qimera) work better than mixup and cutmix? We observed that traditional mixup and cutmix lead to inaccurate labels. To be specific, the true label (the inference result of the pre-trained model) of a mixed image is inconsistent with its mixed label. Using such mixed labels to fine-tune the quantized model with a cross-entropy loss tends to be disastrously misleading. A simple experiment, described below, makes this more straightforward. We randomly performed Mixup or Mixcut on 1000 synthetic images and their labels.
Similarly, we warm up the generator for 50 epochs and use superposed latent embeddings to generate 1000 synthetic images. Then, we count the consistency between the mixed labels and the real labels. As shown in Table E, we found that only 34.4% of the Mixup or Mixcut samples were correctly labeled, while Qimera's samples reached a higher rate of 66.1%. However, it can still be seen that the labels produced by the above methods are not accurate enough, so we do not recommend using such mixed labels to fine-tune quantized models. Note that we do not use mixed labels to fine-tune the model and therefore do not suffer from such problems.

---

#### **Table E. Comparisons with mix methods on correct label rate (4-bit MobileNet-V2 on ImageNet)**

| Method | Correct label | Incorrect label | Correct rate |
|---|---|---|---|
| Mixup or Mixcut | 344 | 656 | 34.4% |
| Superposed latent embeddings (Qimera) [11] | 661 | 339 | **66.1% (Best)** |

---

* Finally, we would like to compare the effects of superposed latent embeddings/Mixup/Mixcut in our framework. We conduct the comparison on the 4-bit MobileNet-V2 case on ImageNet. As shown in Table F, we find that the above three methods have similar effects. Our proposed "Mixup knowledge distillation module" is slightly ahead of the other two; it takes advantage of decoupling from the generator to fuse entire images, maximizing sample diversity.

---

#### **Table F. Comparisons with mix methods for KL-divergence distillation (4-bit MobileNet-V2 on ImageNet)**

| Method | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|
| No augmentation | 66.21 | 72.49 | -6.28 |
| Mixup knowledge distillation module (Ours) | **67.07 (Best)** | 72.49 | -5.42 |
| Mixcut | 67.01 | 72.49 | -5.48 |
| Superposed latent embeddings (Qimera) [11] | 66.89 | 72.49 | -5.60 |

---

Rebuttal Comment 1.1: Comment: Thank the authors for the answers. I have a question about Table G in the global response.
It is obvious that using both kinds of samples maximizes the performance of quantized models. By the way, in Table G, the experiment with calibration samples only shows better results than that with synthetic samples. Because synthetic samples are generated to have a distribution similar to the original dataset, they are likely more helpful for quantization than calibration samples. Can the authors' analysis of the results be provided?

---

Reply to Comment 1.1.1: Title: Reply to Reviewer yCFw Comment: **We thank the reviewer for the insightful review of our paper and greatly appreciate the issues raised. Below we provide a detailed analysis of Table G and hope to make it clear.**

***

**Analysis of the "only synthetic samples" case**: Removing the calibration samples means removing the calibration method, so the synthetic samples are uncalibrated.

**Details:**

* It should be noted that synthetic samples are generated by the generator G, which is constrained with ${\mathcal{L}^{G}}$ in Eq. 12 (Section 3.3.2, line 185), where $\mathcal{L}^G_{L-BNS}$ (Eq. 8) and $\mathcal{L}^G_{D-BNS}$ (Eq. 11) introduce the calibration centers of the calibration samples: $\mu_l^C(\hat{x_c}|\hat{y_c}=k)$ and $\sigma_l^C(\hat{x_c}|\hat{y_c}=k)$.

$$\mathcal{L}^G=\mathcal{L}^G_{CE}+\alpha_3 · \mathcal{L}^G_{BNS}+\alpha_4 · \mathcal{L}^G_{L-BNS}+\alpha_{5} · \mathcal{L}^G_{D-BNS} \text{ } (12)$$

* Thus, with the calibration samples removed, $\mathcal{L}^G_{L-BNS}$ (Eq. 8) and $\mathcal{L}^G_{D-BNS}$ (Eq. 11) no longer function. At this point, the synthetic samples lose the calibration information and thus cannot retain a texture distribution similar to the original dataset.

***

**Analysis of the "only calibration samples" case**: The small number of calibration samples causes overfitting.

**Details:**

* In Step 1 (line 179), calibration samples capture the preferred texture of each class from the full-precision model, which is helpful for quantization.
However, with only 1 calibration sample per class, the quantized model tends to overfit.

* To address this issue, in Step 2, the generator G generates synthetic samples centered on the calibration samples and perturbed with Gaussian noise through $\mathcal{L}^G_{D-BNS}$, thus alleviating the overfitting issue and improving performance.

To this end, it is obvious that both kinds of samples are essential, and with both we can maximize the performance of quantized models.

***

We would be more than happy to further engage with the reviewer at any time during the discussion period to clear up remaining issues, and we also appreciate the reviewer's willingness to re-evaluate our paper if the concerns are sufficiently addressed.
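To summarize the distinction drawn in A5 of this thread: mixed samples are used only as distillation inputs, and no mixed labels enter a cross-entropy loss. A toy NumPy sketch of that idea follows, with small linear heads standing in for the full-precision teacher and the quantized student; this is our own illustration of the mechanism, not the authors' Eq. 13 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixup(x1, x2, alpha=1.0):
    """Blend two whole samples; note no mixed label is produced or needed."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2

def kd_kl_loss(teacher_logits, student_logits):
    """KL(teacher || student) between softened predictions on the same input."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy stand-ins: the "student" is a slightly perturbed copy of the "teacher",
# mimicking the small drift introduced by quantization.
W_teacher = rng.normal(size=(8, 4))
W_student = W_teacher + 0.05 * rng.normal(size=(8, 4))

x_mixed = mixup(rng.normal(size=8), rng.normal(size=8))
loss = kd_kl_loss(x_mixed @ W_teacher, x_mixed @ W_student)
```

Because the target distribution comes from the teacher's own forward pass on the mixed input, the label-inconsistency problem measured in Table E never arises.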
Rebuttal 1: Rebuttal: ## **[Global Response] Tables for supplementary experiments**

************************************************

### **Table A. Comparisons with MixMix and KW (4-bit MobileNet-V2 on ImageNet)**

| Method | Settings | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|---|
| Ours | 1 model, all layers in 4 bits | **67.07 (Best)** | 72.49 | -5.42 |
| MixMix [1] | 3 models, all layers in 4 bits | 64.01 [1] | 72.49 | -8.48 |
| KW [3] | 1 model, first & final layers in 8 bits | 66.07 [3] | 71.88 | -5.81 |

[1] Mixmix: All you need for data-free compression are feature and data mixing. ICCV. 2021.
[3] The knowledge within: Methods for data-free model compression. CVPR. 2020.

---

### **Table B. Comparison with Genie on the same quantizer (4-bit MobileNet-V2 on ImageNet)**

| Synthetic data | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|
| Ours | **67.07 (Best)** | 72.49 | -5.42 |
| Genie (CVPR 2023) [2] | 65.28 | 72.49 | -7.21 |
| AdaDFQ (CVPR 2023) | 65.41 | 72.49 | -7.08 |

[2] Genie: Show Me the Data for Quantization. CVPR. 2023.

---

### **Table C. Comparisons of different synthetic image constraints (4-bit MobileNet-V2 on ImageNet)**

| Method | Settings | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|---|
| Ours | Texture calibration, unrestricted tensor range | **67.07 (Best)** | 72.49 | -5.42 |
| Ours+clamp | Texture calibration, restricted tensor range of [-1, 1] | 66.26 | 72.49 | -6.23 |
| Genie [2] | Restricted tensor range of [-1, 1] | 65.28 | 72.49 | -7.21 |

[2] Genie: Show Me the Data for Quantization. CVPR. 2023.

---

### **Table D. Top-1 accuracy (%) results of ResNet-50 on ImageNet.**

| Methods | W4A4 | W3A3 |
|---|---|---|
| GDFQ (ECCV 2020) | 54.16 | 0.31 |
| ZAQ (CVPR 2021) | 53.02 | - |
| ARC (IJCAI 2021) | 64.37 | 1.63 |
| Qimera (NeurIPS 2021) | 66.25 | - |
| ARC+AIT (CVPR 2022) | 68.27 | - |
| AdaSG (AAAI 2023) | 68.58 | 16.98 |
| AdaDFQ (CVPR 2023) | 68.38 | 17.63 |
| **TexQ (Ours)** | **70.72** | **25.27** |

---

### **Table E. Comparisons with mix methods on correct label rate (4-bit MobileNet-V2 on ImageNet)**

| Method | Correct label | Incorrect label | Correct rate |
|---|---|---|---|
| Mixup or Mixcut | 344 | 656 | 34.4% |
| Superposed latent embeddings (Qimera) [11] | 661 | 339 | **66.1% (Best)** |

[11] Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples. NeurIPS. 2021.

---

### **Table F. Comparisons with mix methods on TexQ (4-bit MobileNet-V2 on ImageNet)**

| Method | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|
| No augmentation | 66.21 | 72.49 | -6.28 |
| Mixup knowledge distillation module (Ours) | **67.07 (Best)** | 72.49 | -5.42 |
| Mixcut | 67.01 | 72.49 | -5.48 |
| Superposed latent embeddings (Qimera) [11] | 66.89 | 72.49 | -5.60 |

[11] Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples. NeurIPS. 2021.

---

### **Table G. Three possible sample synthesis schemes for TexQ (4-bit MobileNet-V2 on ImageNet)**

| Method | Acc. of quantized model | Acc. of pre-trained model | Acc. loss |
|---|---|---|---|
| Calibration samples + Synthetic samples | **67.07 (Best)** | 72.49 | -5.42 |
| Only synthetic samples | 65.42 | 72.49 | -7.07 |
| Only calibration samples | 66.04 | 72.49 | -6.45 |
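For readers less familiar with the BNS-style losses that recur in these tables (BNS, L-BNS, D-BNS), the core idea in the GDFQ line of work is to match the statistics of synthetic-sample activations to the batch-norm running statistics stored in the pre-trained model. The following is a minimal NumPy sketch of the basic alignment term (our illustration of the general idea, not the authors' exact Eq. 8/11):

```python
import numpy as np

def bns_alignment_loss(features, bn_mean, bn_var):
    """GDFQ-style BNS term: penalize the gap between the batch statistics of
    synthetic-sample features and the BN running statistics of the FP model.

    features: (N, C) activations at one layer; bn_mean/bn_var: (C,) running stats.
    """
    mu = features.mean(axis=0)
    var = features.var(axis=0)
    return float(np.sum((mu - bn_mean) ** 2) + np.sum((var - bn_var) ** 2))

rng = np.random.default_rng(0)
bn_mean, bn_var = np.zeros(16), np.ones(16)
matched = rng.normal(0.0, 1.0, size=(4096, 16))   # roughly matches the BN stats
shifted = rng.normal(2.0, 1.0, size=(4096, 16))   # mean shifted away from them
loss_matched = bns_alignment_loss(matched, bn_mean, bn_var)
loss_shifted = bns_alignment_loss(shifted, bn_mean, bn_var)
```

Minimizing such a term drives the generator toward samples whose per-layer statistics look like real training data, which is exactly what the calibration-center variants (L-BNS, D-BNS) refine per class.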
NeurIPS_2023_submissions_huggingface
2023
Summary: They suggest a zero-shot quantization method that retains the detailed texture feature distribution and introduce a mixup knowledge distillation module to diversify the synthetic samples used for fine-tuning.

Strengths: They identified a new feature to consider when generating synthetic data for quantization.

Weaknesses: They should compare their work with MixMix [1], Genie [2], and KW [3]. The authors only empirically showed their superiority, i.e., it lacks intuitive or mathematical explanations. The authors should give more reasons. The images they generated appear somewhat too poor in quality to argue that they have captured the texture feature distributions; please see the synthetic images in [1], [2], [3].

[1] Li, Yuhang, et al. "Mixmix: All you need for data-free compression are feature and data mixing." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[2] Jeon, Yongkweon, Chungman Lee, and Ho-young Kim. "Genie: Show Me the Data for Quantization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Haroush, Matan, et al. "The knowledge within: Methods for data-free model compression." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. Please provide more detail on why the texture feature distribution calibration is important when generating synthetic data.
2. PTQ methods such as AdaRound, AdaQuant, and BRECQ can employ synthetic data to quantize models. How about adapting these post-training quantization schemes to your method?
3. I expect to see more evaluation on various models in order to demonstrate the superiority of your work.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It lacks a literature survey.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We would like to thank the reviewer for the thoughtful reviews that will help strengthen our paper. In the following, we address each individual question in detail. **Due to the character limit, all tables for the supplementary experiments are displayed in the global response area.**

***

**Q1: Please provide more detail on why the texture feature distribution calibration is important when generating synthetic data.**

**A1: We agree to provide more details to make this clearer for future readers.**

* For intuitive explanations, we appreciate the comments of reviewer F97y. We will provide a straightforward visualization of the texture features extracted by the filters in Figure 5, after introducing the concept of "LAWS texture feature energy" (line 127). In addition, we will supplement the actual signal meaning of the filters in Section 3.2.1 (line 126).
* For the literature survey and mathematical explanations, we will supplement Related Work (line 71) on the importance of texture features and the processing details of texture features.
* Much research [4-6] has identified the importance of texture features for CNNs. [4] found that texture representations could capture the statistical characteristics of images for CNNs. [5] showed that classic CNNs were unable to recognize sketches in which textures are missing and only shapes are left. Similarly, [6-7] validated that CNNs are biased towards textures rather than shapes; for example, ResNet-50 shows a 77.9% texture bias [6].
* Texture feature extraction is a common method in natural image processing; see [8] for details. Studies [9-10] have shown that image texture features are conducive to image classification and are class-separable. We observe that quantized networks suffer accuracy loss on images that lose texture features (Figure 2). Further, we introduced the quantitative indicators of LBP and LAWS texture feature energy to visually demonstrate the texture feature gap between synthetic and real images (Figures 1, 5).
Results show that plugging this gap is beneficial to quantization.

**Q2 and weakness: They should compare their work with MixMix [1], Genie [2], and KW [3]. PTQ methods such as AdaRound, AdaQuant, and BRECQ can employ synthetic data to quantize models. How about adapting these post-training quantization schemes to your method?**

**A2: We have also noted the studies [1-3]. We provide comparisons with MixMix [1], Genie [2], and KW [3] below. However, it should be noted that MixMix [1] and KW [3] use looser quantization settings and have no open-source code, and Genie [2] is not the same type of work as ours in terms of feature extraction, which makes a fully fair comparison impossible.**

* As for MixMix [1], it focuses on the generalization of the synthetic dataset, and **3 pre-trained models** were used for distilling (Section 5, [1]). However, general **ZSQ allows only 1 pre-trained model** to extract data and quantize itself. As for KW [3], they quantized the **first & final layers and the 1x1 convolution layers in 8 bits** (Tables 1-3, [3]). However, most works, including ours, quantize **all layers to the same target bits**. Even under this unfair comparison, our method exhibits superior performance. As shown in Table A, **we still lead MixMix by 3.06% Top-1 acc. and lead KW by 0.39% in acc. loss in the 4-bit MobileNet-V2 case on ImageNet.**
* Genie [2] is a different type of work. It focuses on a new quantizer (Section 3.2, [2]) and introduces PTQ (AdaRound and BRECQ) for fine-tuning. However, we focus on verifying the idea of texture calibration rather than improving the quantizer, even though we might achieve better accuracy by optimizing it. Thus, a naive asymmetric quantizer is adopted to maintain consistency with work of the same type. To reach a fair comparison, we adopted 1000 synthetic samples of Genie [2] and conducted the experiment on the same asymmetric quantizer. Results in Table B show that Genie performs similarly to contemporaneous work.
**We lead Genie by 1.79% top-1 acc. with the same quantizer in the 4-bit MobileNet-V2 case.** One of the factors is that Genie ignores class information, whereas we retain class features with dynamic texture calibration. * On the issue of image visualization, MixMix [1], Genie [2] and KW [3] generate images that conform to human vision. However, our goal is not to generate beautiful images, and we even drop the [-1, 1] scale limit of the image tensor to expand the feature representation space. We tried methods such as L2-norm regularization and clamping/clipping to generate smooth, visually plausible images, but they showed no benefit for quantization. Relevant experimental results are shown in Table C. **Q3: I expect to see more evaluation on various models in order to convince the superiority of your work.** **A3: We provide our results for ResNet50 on ImageNet in Table D.** * **We continue to show advanced performance on ResNet50; for example, we lead AdaDFQ by 7.64%/2.34% top-1 acc. in the 3/4-bit cases**. We will supplement this result in Table 2. * For our experiments, we follow the settings of Section 4.1, except that we raise the weights of the L-BNS loss and the BNS alignment loss to ensure the convergence of the model. ``` [1] Mixmix: All you need for data-free compression are feature and data mixing. ICCV. 2021. [2] Genie: Show Me the Data for Quantization. CVPR. 2023. [3] The knowledge within: Methods for data-free model compression. CVPR. 2020. [4] Texture synthesis using convolutional neural networks. NeurIPS. 2015. [5] On the performance of GoogLeNet and AlexNet applied to sketches. AAAI. 2016. [6] ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv. 2018. [7] BiasBed-Rigorous Texture Bias Evaluation. CVPR. 2023. [8] Filtering for texture classification: A comparative study. TPAMI. 1999. [9] Image Classification Using Laws' Texture Energy Measures. 1987.
[10] Texture and shape biased two-stream networks for clothing classification and attribute recognition. CVPR. 2020. ``` --- Rebuttal Comment 1.1: Title: Thanks for the detailed response. Comment: Thank you for the response. However, my main concerns still stand, as follows: zero-shot "quantization" eventually has to pursue higher accuracy when quantizing models. In this regard, much of the literature related to zero-shot quantization, including your work, uses and relies on an outdated quantizer from GDFQ, which is a point that is not acceptable to me. When given only the pre-trained models, practically, users would prefer the quantizer showing better accuracy in a relatively short time. Thus, the authors need to prove their scheme on the latest quantizers such as AdaRound, BrecQ, and QDrop (post-training quantization schemes). According to the paper (Genie), ZeroQ, one of the early works, also showed very good performance when combined with such a PTQ scheme, which would be an example showing PTQ is more suitable for ZSQ. Since MixMix and Genie compared various approaches using BrecQ (which is open-sourced), I would like to encourage the authors to prove the performance of their works by using BrecQ as a quantizer. --- Reply to Comment 1.1.1: Title: Reply to Reviewer zKQV Comment: **We thank the reviewer for the time and effort in our paper and greatly appreciate the issues raised.** We address the reviewer's concerns below and prove the performance by adopting BrecQ as the quantizer. * Table H shows the results for ResNet18/ResNet50/MobileNetV2 on ImageNet. We continue to show advanced performance on these models. The other models in GENIE did not open-source their configurations on BrecQ and cannot be re-implemented. * To this end, our method is shown to achieve advanced performance in both ZSQ and PTQ. This again verifies the proposed texture calibration method. *** **Table H Top-1 acc.
(%) of ZSQ methods with BRECQ quantizer (Single model and W4A4 case on ImageNet)**

| Method | MobileNetV2 | ResNet-18 | ResNet-50 |
|---|---|---|---|
| Full Precision (W32A32) | 72.49 | 71.08 | 77.00 |
| ZeroQ+BRECQ | 49.83 | 69.32 | 73.73 |
| KW+BRECQ | 59.81 | 69.08 | 74.05 |
| IntraQ+BRECQ | 63.78 | 68.77 | 68.16 |
| Qimera+BRECQ | 58.33 | 67.86 | 72.09 |
| GENIE-D+BRECQ | 64.68 | 69.70 | 74.89 |
| **TexQ+BRECQ** | **64.94** | **69.84** | **74.96** |

However, it should be noted that our method is not optimized for PTQ fine-tuning; we focus on synthesizing samples with class texture calibration for ZSQ distillation. We would like to thank the reviewer again for the valuable feedback. We will do our best to answer any outstanding issues during the discussion period.
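For context on the "naive asymmetric quantizer" referenced throughout this thread, a minimal min-max asymmetric uniform quantizer can be sketched as follows. This is an illustrative assumption of the setup, not the authors' exact implementation; the function name and rounding details are ours.

```python
import numpy as np

def asymmetric_quantize(x, n_bits=4):
    """Quantize x with a min-max asymmetric uniform scheme at n_bits,
    then dequantize back to floats (simulated quantization)."""
    qmin, qmax = 0, 2 ** n_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (qmax - qmin) if hi > lo else 1.0
    zero_point = np.round(-lo / scale)               # offset so lo maps near qmin
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale                  # dequantized values

w = np.array([-1.0, -0.25, 0.0, 0.5, 1.0])           # toy weight tensor
w_q = asymmetric_quantize(w, n_bits=4)               # 4-bit simulated weights
```

Because the zero point is learned from the data range rather than fixed at the midpoint, such a quantizer handles asymmetric weight/activation distributions; the reconstruction error per element is bounded by half the step size.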
Learning Adaptive Tensorial Density Fields for Clean Cryo-ET Reconstruction
Accept (poster)
Summary: The authors address the problems of denoising and tomographic reconstruction, specifically in the context of cryo-electron tomography (cryoET), in which 3D structures (e.g. of proteins) are reconstructed from a tilt-series of 2D projections. CryoET holds much promise for the elucidation of biological structures with atomic or near-atomic resolution in situ but faces significant challenges in denoising and reconstruction. While a good deal of work exists to address both of these challenges, the authors here introduce a novel method using tensorial density fields that jointly solves the reconstruction and denoising problems, a first. Another innovation is the introduction of an isotropic Fourier regularization term in their loss function, which serves to ameliorate issues with streaking artifacts. In sum, the authors make a significant contribution to cryoET reconstruction with applications to structural biology and other fields in which cryoET is used. Strengths: Originality: The authors here present the first (to my knowledge) method in which tomographic reconstruction and denoising are jointly solved for cryoET. The authors employ an innovative architecture and pipeline that enables unprecedented computational speedup for large, data-intensive cryoET datasets. Finally, the authors incorporate a novel regularization term (isotropic Fourier prior) that helps deal with artifacts and yields better results than state-of-the-art approaches for cryoET. Quality: The authors deliver significant improvements on existing methods for denoising, reconstruction, and computation time for cryoET datasets. Their method is validated on both synthetic and real data, demonstrating applicability. The code is efficiently implemented and results in significant improvements to computational efficiency. Clarity: The paper is written exceptionally clearly.
The authors provide extensive information on existing methods for denoising and clearly state the ways in which their method differs from and innovates on previous ones. The authors also delineate conditions under which their method performs better or worse. Significance: New methods for the effective reconstruction and denoising of cryoET data promise to make this powerful technique more accessible, with the potential for new discoveries in structural biology and other fields. As a method that jointly addresses reconstruction and denoising efficiently, the authors' method makes a significant contribution. The method or representational architecture might also be applied to other tomography modalities. Weaknesses: Clarity: Although minor, some of the figures could be presented more clearly. Quality: The authors mention that the issues they solve in the context of cryoET are general to many tomographic reconstruction problems. While true in principle, the authors do not demonstrate that their method works well in other modalities compared to the state-of-the-art. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Figures 4 and 7: Increase the font sizes of the axes for better readability. Adjust some of the lighter line colors for better readability. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes, limitations are clearly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recommendation. We thank Reviewer 4 for their valuable feedback, and we will make the necessary changes to the font sizes and colors of the graphs.
Summary: This paper proposes a field-based method for 3D image reconstruction in Cryo-EM, which is a challenging task due to strong measurement noise and ill-posedness. The proposed method, **TensorDF**, has the following features: 1) it combines an implicit representation and a quadtree, where each node corresponds to a feature tensor, 2) it can automatically update the quadtree structure during training, and 3) it includes three regularizers, namely total variation, boundary consistency, and a penalty in the Fourier space, to improve the reconstruction accuracy. Experimental validations on both simulated and real datasets are presented. **TensorDF** is compared with several baseline methods and yields superior performance. The authors also discuss how the hyperparameters of TensorDF are selected, and demonstrate the robustness of **TensorDF** under different noise levels. Strengths: 1. Clear presentation of the proposed method. 2. Solid experiments with good performance. 3. Thorough supplementary material (ablation study, more visual examples, and discussion). Weaknesses: I do not see any obvious weakness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors further explain the factorization of each feature map $p_i^m$, namely $V_x$, $V_y$, $V_z$, and $M_{xy}$, $M_{yz}$, $M_{xz}$? I am not clear on what is trainable and what is not. 2. Although an MLP is not scalable to large volumes, the network itself can help regularize the final reconstruction (see [1]). Perhaps not using an MLP makes the additional regularizers necessary? 3. A follow-up question is how well/poorly will TensorDF w/o all regularizers perform? This can show if a quadtree structure can impose some regularization. [1] Recovery of continuous 3D refractive index maps from discrete intensity-only measurements using neural fields. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation of the proposed method is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: a1: All the Vs and Ms are trainable variables. The Vs and Ms act as a factorization of the volume: our target is to optimize the volume, and it is represented as a product of the Vs and Ms. The decoder is also optimized by default. a2: The reviewer is correct that the absence of the MLP necessitates the additional regularizers. If we use a direct explicit representation, we encounter many local minima, which is why we need to regularize the model. However, the explicit representation has the advantage of being fast. Moreover, even with random initialization, it produces stable reconstructions. a3: We appreciate the reviewer's suggestion to perform comparisons that illustrate the quadtree regularization effect in our approach. Indeed, the quadtree structure has more benefits than just speeding up the computation. It also allows us to have more local matrix representations in the XY and XZ planes, compared to TensoRF, which can only have global planes for the whole scene. This can reduce the noise and misrepresentation in the reconstruction. That is why our method achieves a significant improvement in quality with a 50 percent reduction in parameter size. We provide in the rebuttal.pdf file some additional experiments to illustrate the impact of the quadtree as a regularizer. Since the TensoRF baseline already incorporates a TV prior and does not require a boundary consistency prior, we compare our approach to: TensoRF w/o TV prior, TensoRF, and TensoRF with the Isotropic Fourier Prior.
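To make a1 concrete, here is a minimal numpy sketch of a TensoRF-style vector-matrix (VM) factorization, in which all the Vs (rank-one vectors) and Ms (rank-two matrices) would be trainable. The array names, rank, and dimensions are illustrative assumptions, not the paper's exact code.

```python
import numpy as np

def vm_density(Vx, Myz, Vy, Mxz, Vz, Mxy):
    """Reconstruct a density volume from vector (Vs) and matrix (Ms) factors.

    volume[i,j,k] = sum_r Vx[r,i]*Myz[r,j,k]
                  + sum_r Vy[r,j]*Mxz[r,i,k]
                  + sum_r Vz[r,k]*Mxy[r,i,j]
    """
    vol = np.einsum('ri,rjk->ijk', Vx, Myz)   # X vector x YZ matrix
    vol += np.einsum('rj,rik->ijk', Vy, Mxz)  # Y vector x XZ matrix
    vol += np.einsum('rk,rij->ijk', Vz, Mxy)  # Z vector x XY matrix
    return vol

rng = np.random.default_rng(0)
R, n, m, p = 2, 4, 5, 6                       # rank and scene dimensions n*m*p
Vx, Vy, Vz = rng.normal(size=(R, n)), rng.normal(size=(R, m)), rng.normal(size=(R, p))
Myz, Mxz, Mxy = rng.normal(size=(R, m, p)), rng.normal(size=(R, n, p)), rng.normal(size=(R, n, m))
vol = vm_density(Vx, Myz, Vy, Mxz, Vz, Mxy)
```

The parameter count is O(R(n+m+p) + R(nm+np+mp)) instead of O(nmp) for a dense grid, which is the compression the rebuttal alludes to.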
Summary: In their paper, the authors present a learning-based framework that tackles the challenges faced in reconstructing 3D structures from tilt-series cryo-Electron Microscopy (cryo-EM) data. Cryo-EM is a powerful imaging technique known for its ability to achieve near-atomic resolutions. However, it is not without its drawbacks, including missing-wedge acquisition, large data size, and high noise levels. To address these challenges, the authors introduce an innovative approach that utilizes an adaptive tensorial-based representation for the 3D density field of the scanned sample. The framework consists of several key components. First, a quadtree structure is optimized to partition the volume of interest effectively. Then, a vector-matrix factorization technique is employed to learn the tensor representation of the density field within each node. To further enhance the reconstruction quality, the authors incorporate a loss function that combines a differentiable tomographic formation model with three regularization terms: total variation, a boundary consistency constraint, and an isotropic Fourier prior. This allows the authors to generate high-quality 3D tomograms and query density information at any location using the learned representation. The authors demonstrate the superiority of their framework over existing methods using both one synthetic and one experimental dataset. They claim that their framework not only enhances the quality of reconstructions but also reduces computation time and memory footprint. Overall, the paper presents a novel framework that addresses critical challenges and provides potential improvements in cryo-EM reconstruction. Strengths: *Adaptive Tensorial Representation:* The framework utilizes an adaptive tensorial-based representation for the 3D density field, which should in theory allow for efficient partitioning of the volume of interest.
This adaptive approach ensures that the representation is tailored to the specific characteristics of the sample, enhancing the accuracy of the reconstruction. *Comprehensive Regularization:* The authors incorporate multiple regularization terms, including total variation, boundary consistency constraint, and an isotropic Fourier prior. This comprehensive regularization scheme should help mitigate the challenges associated with missing-wedge acquisition, large data size, and high noise levels. It should promote smoothness in the reconstructions and improve the overall quality. *Querying Flexibility:* The learned representation enables querying of the density at any location within the reconstructed volume. This flexibility is valuable for further analysis and examination of specific regions of interest, providing researchers with a detailed understanding of the 3D structure. Weaknesses: The motivations behind the Coordinate-based representation (CBR) in this study remain unclear. While the compressed nature of CBR suggests it may possess desirable regularizing properties for this application, the specific effects of this representation have not been thoroughly explored by the authors. It remains uncertain whether CBR offers any data efficiency advantages or exhibits any invariances that could benefit the reconstruction process. To gain more insights into the robustness of the CBR representation, it would be beneficial for the authors to conduct ablation studies that combine alternative representations with the proposed loss functions. Such experiments would shed light on the effectiveness and reliability of CBR, further elucidating its potential contributions to the overall framework. Reproducibility is of concern, since the authors have not provided access to their code or shared example reconstructions. 
In addition to expanding on the theoretical details of the method (see above), it would be beneficial for them to present more examples beyond just one simulated and one experimental dataset. Including a wider range of examples would enhance the comprehensiveness and generalizability of their findings, providing a more robust evaluation of their method's capabilities. As an application paper, it is worth considering the suitability of the chosen simulated data in this study. The authors should aim to evaluate their method on a more relevant dataset that provides ground truth information. It is crucial to assess the performance of the proposed method in a scenario where accurate and relevant reference data is available for comparison. The paper acknowledges challenges such as the missing-wedge problem, but it lacks results that directly address this issue. Furthermore, there is no mention of colored noise or the point-spread function, which are important factors in cryo-EM reconstruction. The authors assert that their method offers "faster optimization and better feature recovery." However, there is a lack of benchmarks or evidence showcasing the claimed faster optimization. The statement that reconstructions take "less than a day" sounds excessively slow when compared to several existing methods that achieve reconstruction within a few minutes. To strengthen their claims, the authors should provide specific benchmark comparisons demonstrating the improved optimization speed of their method. Additionally, it would be beneficial for them to address the computational efficiency aspect and compare their results with other methods that achieve faster reconstruction times. This would provide a clearer understanding of the advantages and limitations of their proposed approach. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. What are the motivations behind the specific formulation of the Coordinate-based representation (CBR) in this study?
What are the data efficiency advantages or invariances exhibited by CBR that could benefit the reconstruction process? 2. Could the authors present more examples beyond one simulated and one experimental dataset to enhance the comprehensiveness and generalizability of their findings? 3. How does the method address challenges like the missing-wedge problem, colored noise, and the point-spread function? 4. Can the authors provide benchmarks or evidence demonstrating the claimed faster optimization? 5. What is the reason behind representing the "one-layer decoder network" ($\mathcal{D}$) as a function instead of a matrix multiplication? Could you provide clarity on its parameter size? 6. Could you please explain the definitions of the Vs and Ms variables in equation (4)? Additionally, what is the output dimensionality of these variables? 7. Can you clarify which parameters are trained? 8. Can you provide information on the memory footprint of the proposed framework? How does it impact computational resources and efficiency? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: 1. Lack of clarity and exploration of the motivations and effects of the Coordinate-based representation (CBR). 2. Absence of access to code and example reconstructions, affecting reproducibility. 3. Insufficient number of examples, limiting the comprehensiveness and generalizability of the findings. 4. Potential limitation in the suitability of the chosen simulated data, emphasizing the need for evaluation on more relevant datasets with ground truth information. 5. Failure to directly address challenges like the missing-wedge problem, colored noise, and the point-spread function in cryo-EM data. 6. Lack of benchmarks or evidence supporting the claim of faster optimization in comparison to existing methods. 7.
Need for specific benchmark comparisons and a focus on computational efficiency to substantiate claims and understand the advantages and limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: a1: Recently, several works [1,2,3,4] have demonstrated the superiority of Coordinate-based representations in solving tomographic problems, especially in missing-wedge scenarios, compared to traditional methods. So there is excellent evidence for the superiority of CBRs in that regard. However, existing approaches are aimed at X-ray projection data, and do not deal with the high level of noise that we encounter in Cryo-ET reconstruction, which is the main issue we want to address in this paper. In our comparison, we selected SART+TV as the baseline among traditional approaches since it performs better than the widely used Weighted Filtered Back-Projection (WFBP) [5]. Our comparison shows better reconstruction results for our approach than SART+TV. Moreover, this traditional approach requires at least 128 GB of memory (whether CPU or GPU) for reconstructing a 4K-size dataset, while we can run our CBR-based method on a 48 GB GPU with less CPU memory usage. a2: We conducted experiments on one simulated dataset and two real datasets in the main paper: EMPIAR 10643-40 (HIV-1 viruses) and EMPIAR 10751 (HEK cell). In the supplement, we added more experiments on another series of HIV-1 viruses (EMPIAR 10643-51). These datasets are representative of the typical data encountered in structural biology, with respect to noise distribution and feature structures. With our approach, learning is performed for each individual dataset, so we do not face overfitting or generalization issues in this case. However, we will include more experiments with new datasets in the supplement to further demonstrate our method. Moreover, it is important to note that cryo-ET does not provide data with ground truth information, which limits our validation options. a3: [3,4] show that CBR-based methods can deal with the missing-wedge problem quite well compared with SART+TV and FBP.
The point-spread function (PSF), or contrast transfer function (the Fourier transform of the PSF), is corrected during the preprocessing step using IMOD. For some datasets in the EMPIAR databank, the CTF is already corrected and the projections are aligned. In the revised version we will make it clear that the CTF should be corrected as a preprocessing step. We also performed some experiments using the approach of [6] to partially compensate for the CTF, but we did not get any clear improvement with our datasets. This means that the preprocessing resulted in a good CTF correction. By running our approach on real captured data from the EMPIAR databank, we have shown that the reconstructed tomogram is less noisy than the results from the baseline approaches. Specifically, we computed two metrics (CNR and ENL) that evaluate the contrast improvement and the denoising effect. We also evaluate the intensity profiles along a line that contains virus spikes, to show that our approach has a better distinction between spikes and background. In summary, these experiments show that our approach does a better job dealing with the colored noise in the real data. a4: We tested the optimization time of the different methods. Ours and I-NGP converge in a similar time (around 2 hours for the 1K dataset). [6] takes several days. SART+TV needs around 4 hours, with a Tomosipo-based [7] implementation. Please refer to Part C of the supplementary. a5: The decoder network consists of a single layer that applies a non-linearity function to the model. This non-linearity enhances the performance of the model in comparison to simple matrix multiplication, as demonstrated by [8] for similar tasks. The decoder has only 0.005 M parameters, which is much smaller than the total number of 28.2 M parameters in the model. a6: Vs and Ms correspond respectively to rank-one (vector) and rank-two (matrix) tensor components. The output dimensionality of these variables depends on the dimension decomposition.
For example, for a scene with the following dimensions: n * m * p, Ms^(X,Y) has a dimensionality of n * m, while Vs^(Y) has a dimensionality of m. We will add the dimensionality information in the revised version. a7: In our framework, we optimize the vector and matrix factors Vs and Ms for all nodes, as well as the decoder D. a8: We only consider the GPU memory here. It depends on the batch size setting (the number of samples processed before the model is updated). A larger batch size usually requires more memory, but also speeds up the computation. Therefore, we try to use a relatively large batch size to fully utilize the GPU memory capacity, such as 44 GB out of 48 GB available on our cards. [1] Sun, Yu, et al. "Coil: Coordinate-based internal learning for tomographic imaging." IEEE Transactions on Computational Imaging 7 (2021): 1400-1412. [2] Liu, Renhao, et al. "Recovery of continuous 3D refractive index maps from discrete intensity-only measurements using neural fields." Nat Mach Intell 4, 781-791 (2022). [3] Rückert, Darius, et al. "Neat: Neural adaptive tomography." ACM Transactions on Graphics (TOG) 41.4 (2022): 1-13. [4] Zang, Guangming, et al. "IntraTomo: self-supervised learning-based tomography via sinogram synthesis and prediction." ICCV 2021. [5] Li, Lun, et al. "Compressed sensing improved iterative reconstruction-reprojection algorithm for electron tomography." BMC Bioinformatics 21.6 (2020): 1-19. [6] Kniesel, Hannah, et al. "Clean implicit 3d structure from noisy 2d stem images." CVPR 2022. [7] Hendriksen, Allard A., et al. "Tomosipo: fast, flexible, and convenient 3D tomography for complex scanning geometries in Python." Optics Express 29.24 (2021): 40494-40513. [8] Karnewar, Animesh, et al. "Relu fields: The little non-linearity that could." ACM SIGGRAPH 2022 Conference Proceedings. 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. a1. Kindly address question 1 with a response that directly pertains to the inquiry.
While presenting improved results, it remains crucial to provide theoretical underpinnings for your assertions due to the substantial limitations evident in your evaluation. a2. "so we do not face overfitting or generalization issues in this case" It is absolutely possible to overfit to the data, leading to a representation and mapping that might not generalize to held-out data in each dataset. The authors need to consider how this affects the quality of the learnt representation. "cryo-ET does not provide data with ground truth information" Considering this, the incorporation of a more representative synthetic dataset becomes imperative. Please refer to my initial review on this topic. a3. The ability to fully correct for CTF zero-crossings is restricted, emphasizing the necessity of elucidating how your proposed approach manages this challenge. Given the current shortfall in experimental evaluation, it's essential to establish a theoretical foundation showcasing how your method addresses the concerns outlined in question 3. In light of the authors not having sufficiently addressed the core points raised in the initial review, my initial score remains unchanged. The primary issues revolve around the lack of theoretical substantiation for the presented claims and the absence of comprehensive experimental assessment. --- Reply to Comment 1.1.1: Title: Answer to reviewer tcrC additional concerns Comment: Thank you for your time and reply. We believe our rebuttal has addressed most of the concerns (5+ of 8), e.g., by providing evidence of faster optimization. a1. We hope we addressed the data efficiency advantages in question 1. Compared with traditional representations, CBR methods provide a continuous mapping from the coordinates to the densities. As in many other fields of machine learning, theoretical models for neural fields lag far behind the state of the art of methods deployed in practice.
We do not believe that this constitutes reasonable grounds for rejecting work, especially in the face of overwhelming evidence for the performance of neural fields, demonstrated by both our experiments and the earlier works on neural fields for missing-wedge tomography mentioned in the first rebuttal and cited in the paper. Besides, we evaluate our method with different metrics, such as 3D PSNR and 3D SSIM on the synthetic dataset, and CNR, ENL, and profile analysis on real datasets in the paper. We also analyze the main parameters, like the tensor dimensions and feature size. a2. We use the synthetic dataset from [1], which also targets cryo-ET reconstruction using a neural representation and considers random densities and shapes for cryo-ET simulation. It can help distinguish between different methods. In this regard, this data is representative of the features and noise levels found in real cryo-ET data. In addition, we used three real datasets to verify our method. a3. As mentioned in the original rebuttal, we treat the CTF correction as a preprocessing step to our method, using established tool chains routinely used in cryoET reconstruction. Specifically, we use IMOD [2] for this step. Please refer to [3] for analytical and theoretical analysis. [1] Kniesel, Hannah, et al. "Clean implicit 3d structure from noisy 2d stem images." CVPR 2022. [2] Mastronarde, David N., and Susannah R. Held. "Automated tilt series alignment and tomographic reconstruction in IMOD." Journal of structural biology 197.2 (2017): 102-113. [3] Xiong, Quanren, et al. "CTF determination and correction for low dose tomographic tilt series." Journal of structural biology 168.3 (2009)
Summary: The work combines a quad-tree structure with a low-rank tensorial representation to adaptively model cryo-EM volumetric density for the reconstruction problem. The quad-tree structure is updated by merging or splitting to encourage uniformity in the area of each node, while the feature tensors for each node are obtained by outer products of learnable vectors and matrices. For the optimization, three regularization terms are used: total variation on vector and matrix features, boundary constraints to ensure points on the edge of two neighboring nodes have similar features, and the Isotropic Fourier Prior. The latter penalizes outliers in horizontal/vertical Fourier coefficients and thus mitigates axis-aligned artifacts. A two-stage, coarse-to-fine optimization with downsampled projections helps avoid overfitting to noise. Through experiments on synthetic data, the effects of the tensor and feature dimensions, as well as robustness to noise, are studied. On real data, the proposed method is compared with SOTA and related implicit representation learning. Strengths: 1. The introduction to the problem is well written. It provides the necessary background to understand the challenges in tilt-series tomography. 2. The work combines the spatial partitioning idea with a tensorial-based representation to obtain a compact representation of the density field. 3. Two simple yet effective regularizers, the boundary constraint and the Isotropic Fourier prior, are cleverly used to mitigate artifacts, and their impact is supported with evidence. Weaknesses: 1. Based on my understanding, the proposed method builds upon TensoRF by locally (rather than globally) defining the tensorial representation in an adaptive quad-tree structure, while adding regularizations to avoid artifacts and impose smoothness. Some ablation studies are missing that examine the sole effect of each of these additions, e.g. qualitative and quantitative results on TensoRF (global representation) with all the new regularization terms.
Currently, it is not clear how much of the improvement in denoising metrics (or qualitative results) is due to the quad-tree vs. the regularization terms. In other words, the quad-tree is claimed to be an important contribution, but its effect is not fully studied. 2. The Isotropic Fourier regularization term requires a Fourier transform, which can be expensive and slows down the computation, although coarse sampling helps reduce the computation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. According to L229-233, the optimization is divided into two stages. In the first stage, both the quad-tree and the tensorial representations inside each node are updated. The question is: when a node is subdivided, how are the tensorial representations of the new nodes initialized? Are they related to the tensorial representation of the parent node? What about the case when some nodes are merged together? 2. How much of the optimization time in the first stage is spent on quad-tree updates vs. tensorial representation updates? How expensive is this discrete optimization problem? How do you set the hyper-parameters, such as the threshold for STD, for this step? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Apart from the above questions and concerns, I have a suggestion: you might be able to replace the Isotropic Fourier Prior with another one that can be computed more efficiently in real space (rather than Fourier space). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
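The vector–matrix style representation the summary describes (per-node features built from outer products of learnable vectors and matrices, in the spirit of TensoRF) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the rank, grid size, random parameters, and integer-index lookup (instead of learned parameters and bilinear/trilinear interpolation) are all simplifications.

```python
import random

random.seed(0)
R, G = 4, 8  # rank and per-node grid resolution (illustrative values)

# One component per rank r: a matrix over the XY plane and a vector along Z.
# The dense G^3 grid is never materialised.
M_xy = [[[random.random() for _ in range(G)] for _ in range(G)] for _ in range(R)]
v_z = [[random.random() for _ in range(G)] for _ in range(R)]

def density(ix, iy, iz):
    """Rank-R vector-matrix feature: sum_r M_r[ix, iy] * v_r[iz]."""
    return sum(M_xy[r][ix][iy] * v_z[r][iz] for r in range(R))

params_lowrank = R * (G * G + G)  # parameters actually stored
params_dense = G ** 3             # what a dense voxel grid would need
print(density(1, 2, 3), params_lowrank, params_dense)
```

The point of the decomposition is in the last two lines: a rank-$R$ factorization stores $R(G^2+G)$ numbers instead of a dense $G^3$ grid, which is what makes many local per-node representations affordable.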
Rebuttal 1: Rebuttal: a1: In our implementation, we start with a fixed number of nodes (for example, 70, as mentioned in the paper) and initialize each tensor representation with random values between -0.5 and 0.5. After each iteration, some nodes may become inactive because they are subdivided or merged with others. Then, we map the currently active nodes to the initial 70 nodes. This mapping has no geometrical considerations. Thus, the tensor representations from one iteration of the quad-tree update to another are not related. However, after only a few iterations of the tensor-representation optimization we converge effectively, demonstrating our representation's robustness to initialization. a2: In the first stage of optimization, around 10% of the total optimization time is dedicated to quad-tree optimization. Then, we fix the quad-tree structure and optimize the tensorial representation. This step accounts for around 85% of the total optimization time. The STD calculation takes around 5 minutes and represents less than 5% of the overall computation time. The discrete optimization takes less than 5 seconds. We do not use a threshold for STD; the update takes into account the STDs of all current nodes to decide which should be divided, merged, or kept the same. However, we impose a limit on the total number of nodes (set to 70 in the paper). We also add discrete constraints similar to [1]: we define three binary variables for each node, ws, wk, and wm, which indicate whether the node should be subdivided, kept, or merged. We ensure that the sum of these variables is one for each node. We enforce that the total number of nodes does not exceed a fixed limit. Then, we minimize the objective function, which is the sum of the products of the binary variables (ws, wk, and wm) and the STD of each node. We solve this problem using mixed integer programming with OR-Tools [2]. [1] Julien N. P. Martel, David B. Lindell, Connor Z. Lin, Eric R.
Chan, Marco Monteiro, and Gordon Wetzstein. Acorn: Adaptive coordinate networks for neural scene representation. ACM Trans. Graph., 40(4), 2021. [2] Laurent Perron and Vincent Furnon. OR-Tools. Answer to limitations: We appreciate your insight and will keep it in mind for our future work. We agree that using a feature extractor or a filter in real space could be beneficial for our design. We tried using a pre-trained VGG model as a feature extractor, but it did not work well because it was trained on ImageNet, which differs from our dataset and feature space. The Isotropic Fourier Prior does not consume much memory after our optimization. We use several techniques to reduce memory usage, such as releasing the memory of intermediate computations and the sampling strategy. The computation time is still relatively high, but we think it is worth it for the improved quality of the results. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions. Unfortunately, I see no reply to the discussed weaknesses, so concerns remain regarding the validation of the design choices of the method. I believe the work at its current stage lacks the suggested ablation studies. I will decrease my score to Borderline Reject. --- Reply to Comment 1.1.1: Title: Replying to Reviewer bANQ Comment: Dear bANQ, Thank you for the kind reminder. Actually, we did run the experiments regarding your first weakness concern, 'TensoRF (global representation) with all the new regularization terms', during the first review round. The results are listed in our reply to Reviewer 9Yvf and attached as rebuttal.pdf in the main rebuttal reply; the questions are similar. We are sorry that we did not specifically reply here. It would be appreciated if you could check the results for further consideration. Regards, Here we attach the reply for question 3 from Reviewer 9Yvf.
'a3: We appreciate the reviewer's suggestion to perform comparisons illustrating the quad-tree regularization effect in our approach. Indeed, the quad-tree structure has more benefits than just speeding up the computation. It also allows us to have more local matrix representations in the XY and XZ planes, whereas TensorDF can only have global planes for the whole scene. This can reduce noise and misrepresentation in the reconstruction, which is why our method achieves a significant improvement in quality while using 50 percent fewer parameters. We provide in the rebuttal.pdf file some additional experiments to illustrate the impact of the quad-tree as a regularizer. Since TensorDF (TensoRF in the original paper) already incorporates a TV prior and does not require the boundary-consistency prior, we compare our approach to: TensorDF without the TV prior, TensorDF, and TensorDF with the Isotropic Fourier Prior.'
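The node-update step described in a2 above (one action per node among subdivide/keep/merge, a node budget, and an STD-weighted objective) is solved in the rebuttal with mixed integer programming via OR-Tools. The stand-in below replaces the MIP solver with brute-force enumeration on a tiny instance; the action weights, node-count deltas, and STD values are invented purely for illustration:

```python
from itertools import product

# Invented per-node intensity STDs: high STD = fine detail = should be split.
stds = [0.9, 0.1, 0.5, 0.05]
NODE_BUDGET = 8                                   # cap on nodes after the update
COST = {"split": 0.0, "keep": 0.5, "merge": 1.0}  # made-up action weights
DELTA = {"split": +3, "keep": 0, "merge": -1}     # simplified node-count change

def update_cost(plan):
    # objective: sum over nodes of (action weight) * (node STD), so splitting
    # a high-STD node is cheap while merging it is expensive
    return sum(COST[a] * s for a, s in zip(plan, stds))

best = None
for plan in product(COST, repeat=len(stds)):      # exactly one action per node
    if len(stds) + sum(DELTA[a] for a in plan) > NODE_BUDGET:
        continue                                  # node-budget constraint
    if best is None or update_cost(plan) < update_cost(best):
        best = plan
print(best, update_cost(best))
```

Under these made-up weights the high-STD nodes get split and the low-STD ones merged, which is the qualitative behaviour the rebuttal describes; the real implementation solves the same kind of program with a MIP solver instead of enumeration.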
Rebuttal 1: Rebuttal: We include in the rebuttal.pdf file some additional experiments to answer question 3 of Reviewer 3. Pdf: /pdf/b8f5892e232898894d34c778b6b177f6e51a643e.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Meta-Learning Adversarial Bandit Algorithms
Accept (poster)
Summary: This paper considers learning several adversarial tasks with different loss functions simultaneously, hoping to attain better task-averaged regret if the tasks are "similar" enough (e.g., the optimal actions of all tasks concentrate on a small subset). A general meta-learning framework is derived by deploying three different algorithms as meta-learners to optimize the hyper-parameters of the base learners (OMD). The framework is then applied to MABs and BLOs, providing various high-probability or expected regret bounds. Strengths: 1. The studied problem is well motivated. According to the authors, while the stochastic variant of this meta-learning problem is well studied, the adversarial version tackled by this paper has never been solved. 2. The algorithmic idea is well illustrated and thus the framework is easy to understand. 3. The framework is applied to various base learners to prove its effectiveness, including MABs with implicit exploration or guaranteed exploration and BLOs with self-concordant barriers. Weaknesses: 1. The design of the meta-learner does not look particularly exciting. Given the expression of $U_t$ in Eq. (3), it appears unsurprising that the adopted optimizers (FTL, EWOO, and MW) can be applied to their respective objectives. In other words, it seems that the technical contribution of this paper is not solid enough -- the analyses of the meta-learners and base-learners are both not novel. However, I do feel the overall result is interesting. 2. I am unsure whether the task-similarity measure can generalize over the "support" of the optimizers. Consider a special case where most of the tasks share the same optimizer but there are a few (say $\sqrt T$) outliers; would the bound scale with $\sqrt[4]{T}$ or $\mathcal O(1)$? In this case, if a meta-learner successfully rules out those outliers, the bound should scale as $\mathcal O(1)$; but using the sparsity $s$ would only give $\sqrt[4]{T}$.
Intuitively, as $H_\beta$ is taken at the average of all the $\hat x$'s, such a distribution will be very different from a uniform one (i.e., from having $\sqrt T$ optimizers each associated with $\sqrt T$ tasks), though I'm not sure whether the current framework can capture this. 3. (minor) The bounds in the main text contain so many terms that they are hard to interpret. Informal expressions (like Eq. (13) or Eq. (19)) could be shortened using $o_T(1)$ to highlight the main terms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses 1 & 2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Clearly stated Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
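For concreteness, the inner learner in the MAB instantiation is OMD with a Tsallis regularizer, whose $\beta \to 1$ limit recovers the classical Exp3 update (exponential weights with importance-weighted loss estimates). Below is a minimal Exp3 sketch, not the paper's algorithm; the toy loss sequence, horizon, and step-size tuning are purely illustrative:

```python
import math
import random

def exp3(loss_fn, d, m, eta, rng):
    """Exp3: exponential weights over arms with importance-weighted estimates."""
    L_hat = [0.0] * d                 # cumulative estimated losses per arm
    realized = 0.0
    for t in range(m):
        w = [math.exp(-eta * L) for L in L_hat]
        z = sum(w)
        p = [wi / z for wi in w]      # sampling distribution over arms
        a = rng.choices(range(d), weights=p)[0]
        loss = loss_fn(t, a)          # only the played arm's loss is revealed
        L_hat[a] += loss / p[a]       # unbiased importance-weighted estimate
        realized += loss
    return realized

d, m = 5, 2000
# toy loss sequence in [0, 1]: arm 2 is best, the rest oscillate around 0.6
loss_fn = lambda t, a: 0.2 if a == 2 else 0.6 + 0.1 * math.sin(t + a)
eta = math.sqrt(2 * math.log(d) / (m * d))  # standard Exp3 tuning
cum_loss = exp3(loss_fn, d, m, eta, random.Random(0))
best_arm_loss = sum(loss_fn(t, 2) for t in range(m))
print(cum_loss - best_arm_loss)  # realized regret against the best fixed arm
```

On this toy instance the realized regret stays well below the worst-case $\tilde O(\sqrt{dm})$ scale of the single-task baseline, which is the quantity the meta-learning framework aims to improve on average across similar tasks.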
Rebuttal 1: Rebuttal: Thank you for your positive review; we hope to address your questions below. 1. [*[...] Given the expression of $U_t$ in Eq. (3), it appears unsurprising that the adopted optimizers (FTL, EWOO, and MW) can be applied to their respective objectives. [...] the analyses of meta-learners and base-learners are both not novel. However, I do feel the overall result is interesting.*] - While the meta-learning component of our analysis indeed builds upon existing work in the full information setting, we view the applicability of FTL and EWOO as “unsurprising” only in light of this previous work (as $U_t$ can be non-convex in $\mathbf{x}$ and non-Lipschitz in $\eta$). Furthermore, the applicability of MW does not follow from past work since it is not obvious that the first term on the RHS of (3) is Lipschitz in $\theta$; in fact, proving it requires setting-specific analysis (c.f. Lemmas B.1 & D.2). Finally, it does require substantial technical effort (c.f. Section C) in order to prove that these meta-algorithms can adapt to the similarity of true (rather than empirical) task optima, which past work does not consider because it has full information. 2. [*I am unsure whether the task similarity measure can generalize over the "support" of the optimizers. Consider a special case where most of the tasks share the same optimizer but there are a few (say $\sqrt T$) outliers, would the bound scale with $\sqrt[4]T$ or $\mathcal O(1)$? In this case, if a meta-learner successfully rules out those outliers, the bound should scale by $\mathcal O(1)$; but using the sparsity $s$ would only give $\sqrt[4]T$.*] - Thank you for the interesting question. While you are correct that just substituting $1+\sqrt[4]T$ for $s$ yields an undesirable bound, we *can* use our framework to obtain a result that is better than the single-task baseline of $\mathcal O(\sqrt{dm})$. 
In particular, in the setting you describe $H_\beta$ can be bounded by $\tilde{\mathcal O}(1+d^{1-\beta}T^{-\beta/2})$ (c.f. the bottom of this response), so for sufficiently large $T$ the bound will be $\tilde{\mathcal O}(\sqrt m)+o_T(poly(m,d))$, as desired. The caveat here is that the rate in $T$ will be quite slow. We think this is a useful analysis of our result, highlighting the advantage of not simply assuming a small set of optimal arms, and will include it (or a more general version of it, e.g. for $T^p$ outliers, $p\in[0,1]$) in revision. 3. [*[...] Informal expressions (like Eq. (13) or Eq. (19)) can be shortened using $o_T(1)$ to highlight the main terms.*] - In general we agree, although note that keeping the rates in $T$ can be useful for comparison with other works, e.g. Azizi et al. [10]. ### Bound on $H_\beta$ in the presence of $\sqrt T$ outliers Suppose all but $\sqrt T$ tasks have optimal action $a^\ast\in[d]$. Then applying Claim A.1 and the mean-as-minimizer property of Bregman divergences we have that $$\begin{align} H_\beta(\hat{\bar x}) &=-\psi_\beta(\hat{\bar x}) \\\\ &=\frac1T\sum_{t=1}^T\psi_\beta(\hat x_t)-\psi_\beta(\hat{\bar x}) \\\\ &=\frac1T\sum_{t=1}^TB_\beta(\hat x_t||\hat{\bar x}) \\\\ &=\min_{x\in\triangle_d}\frac1T\sum_{t=1}^TB_\beta(\hat x_t||x) \\\\ &\le\min_{\delta\in(0,1)}\frac1T\sum_{t=1}^TB_\beta(\hat x_t||(1-\delta)e_{a^\ast}+\delta 1_d/d) \\\\ &=\min_{\delta\in(0,1)}\frac1T\sum_{t=1}^T\frac1{1-\beta}\sum_{a=1}^d((1-\delta)1_{a=a^\ast}+\delta/d)^\beta-\hat x_{t[a]}^\beta+\frac{\beta(\hat x_{t[a]}-(1-\delta)1_{a=a^\ast}-\delta/d)}{((1-\delta)1_{a=a^\ast}+\delta/d)^{1-\beta}} \\\\ &=\min_{\delta\in(0,1)}\frac1T\sum_{t=1}^T\sum_{a=1}^d((1-\delta)1_{a=a^\ast}+\delta/d)^\beta-\frac{\hat x_{t[a]}^\beta}{1-\beta}+\frac{\beta\hat x_{t[a]}^\beta}{(1-\beta)((1-\delta)1_{a=a^\ast}+\delta/d)^{1-\beta}} \\\\ &\le\min_{\delta\in(0,1)}\delta^\beta d^{1-\beta}+\frac\beta{(1-\beta)T}\sum_{t=1}^T\sum_{a=1}^d\frac{\hat 
x_{t[a]}^\beta}{((1-\delta)1_{a=a^\ast}+\delta/d)^{1-\beta}} \end{align}$$ For non-outlier tasks we have $a=a^\ast$ so the last summation over $a$ is at most $1/(1-\delta)^{1-\beta}$, while for outlier tasks it is at most $(d/\delta)^{1-\beta}$. Since we have $T-\sqrt T\le T$ of the former and $\sqrt T$ of the latter we have the bound $$H_\beta\le\min_{\delta\in(0,1)}\delta^\beta d^{1-\beta}+\frac\beta{(1-\beta)(1-\delta)^{1-\beta}}+\frac{\beta(d/\delta)^{1-\beta}}{(1-\beta)\sqrt T}$$ Assuming $\beta\in[\frac1{\log d},\frac12]$ this means $H_\beta=\tilde{\mathcal O}(1+d^{1-\beta}T^{-\beta/2})$. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I decide to keep my recommendation unchanged.
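The bound derived above is easy to check numerically. Assuming $H_\beta$ is the Tsallis entropy $H_\beta(x)=\frac{1}{1-\beta}\left(\sum_a x_a^\beta-1\right)$ (so one-hot optima have zero entropy, consistent with the first lines of the derivation), the sketch below builds $T$ one-hot empirical optima with $\sqrt T$ outliers and checks that the entropy of their mean stays far below the uniform-distribution value and within the claimed $1+d^{1-\beta}T^{-\beta/2}$ scale; the dimensions and outlier placement are illustrative:

```python
import math

def tsallis_entropy(x, beta):
    # H_beta(x) = (sum_a x_a^beta - 1) / (1 - beta); zero for one-hot x
    return (sum(xa ** beta for xa in x) - 1.0) / (1.0 - beta)

d, T, beta = 50, 10_000, 0.5
k = math.isqrt(T)                # sqrt(T) outlier tasks

# mean of T one-hot optima: T - k tasks on arm 0, outliers spread cyclically
mean = [0.0] * d
mean[0] = (T - k) / T
for i in range(k):
    mean[1 + i % (d - 1)] += 1.0 / T

H_mean = tsallis_entropy(mean, beta)
H_unif = tsallis_entropy([1.0 / d] * d, beta)
claimed_scale = 1 + d ** (1 - beta) * T ** (-beta / 2)
print(H_mean, H_unif, claimed_scale)
```

With $d=50$, $T=10^4$, $\beta=1/2$ the contaminated mean has entropy roughly 1.4 versus roughly 12.1 for the uniform distribution, matching the rebuttal's point that a vanishing fraction of outliers inflates $H_\beta$ only by an $o_T(1)$ term.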
Summary: This paper focused on online meta-learning with bandit feedback, and developed and applied a meta-algorithm for learning to initialize and tune bandit algorithms, obtaining task-averaged regret guarantees for both MAB and linear bandits. Specifically, a meta-algorithm was developed for learning the variants of OMD, which was further applied to OMD with the Tsallis regularizer. Furthermore, the meta-algorithm was adapted to the adversarial BLO problem by setting the regularizer to be a self-concordant barrier function. Strengths: - This paper is the first to consider meta-learning under adversarial bandit feedback. - A meta-algorithm was designed for learning the variants of OMD, which can simultaneously tune the initialization and other hyperparameters. - Strong theoretical performance guarantees were presented for the proposed algorithm. - The paper is well written and easy to follow, though it is heavy in theory. Weaknesses: - This paper mainly focused on the pure adversarial setting. As the authors mentioned, there are extensive works studying the stochastic setting, and given the popularity of "best-of-both-worlds" settings in the community, can you envision the major challenges of generalizing the solutions to the best-of-both-worlds setting, from both the algorithmic and performance-analysis perspectives? - As the paper considered applications of the proposed algorithms, it may be interesting to do some case studies with real-world applications, rather than only theoretical ones. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - This paper mainly focused on the pure adversarial setting.
As the authors mentioned, there are extensive works studying the stochastic setting, and given the popularity of "best-of-both-worlds" settings in the community, can you envision the major challenges of generalizing the solutions to the best-of-both-worlds setting, from both the algorithmic and performance-analysis perspectives? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review; we hope to address your questions and concerns below. 1. [*[C]an you envision what is the major challenges to generalize the solutions to the best-of-both-worlds settings, from both the algorithmic and performance analysis perspectives?*] - One challenge is that the online-within-online setting has two parts, each of which can be either stochastic or adversarial, giving rise to four settings that need to be supported. In order to prove that the solution is “best” for each of these settings, four lower bounds will need to be proved first. Additionally, dependence on the true optima-in-hindsight (Section 3.2) will require a best-arm identification procedure that is best-of-both-worlds and simultaneously works alongside the regret-minimization algorithm. 2. [*As the paper considered the applications of the proposed algorithms, it may be interesting to do some case studies with real-world applications, rather than only on theoretical applications.*] - We agree that it will be interesting to develop novel real-world algorithms that are inspired by this theoretical framework. However, our goal in this work was theoretical depth and breadth, with the application of our framework to four different bandit algorithms (Sections 3.1, 3.2, 4, and D.3) across three different geometries (simplex, sphere, and polytope). Thus we view real-world applications as out of scope for us due to constraints on space and the need for conciseness, and leave it for future work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification.
Summary: In this paper, the authors consider an online meta-learning problem in the adversarial online-within-online partial-information setting, where a learner faces $T$ tasks of $m$ rounds each and can exploit similarity between tasks to achieve low regret. First, the authors propose a meta-algorithm which optimizes hyperparameters of the inner-loop algorithm (OMD). As instances of this algorithm, the authors consider OMD for MAB and BLO (bandit linear optimization). Then, they prove several upper bounds on the average regret across $T$ tasks in terms of the similarity of the optimal parameters across tasks. Strengths: 1. While existing papers considered online-within-online meta-learning in the stochastic or full-information setting, this paper considers the adversarial bandit-feedback setting, which is practically important. 2. Although the proposed algorithm is based on one from the full-information setting, there is novelty due to the bandit feedback (e.g., estimation of the true $H_\beta$). 3. The authors provide regret upper bounds under several settings, and the results seem valid. Weaknesses: 1. There is no discussion of lower bounds. Therefore, it is not clear whether the proposed method is optimal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the related work section, the authors compare their algorithm to algorithms for dynamic regret optimization, but they discuss only the number of switches. There are algorithms whose regret bounds involve the total variation of the environment (e.g. $\Delta$ in Wei, Chen-Yu, and Haipeng Luo. "Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach." Conference on Learning Theory, 2021.). Are these algorithms related to the problem setting in Sec 3.2? Is it possible to provide a comparison? 2. In the stochastic setting, is there a known lower bound? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review; we hope to address your questions and concerns below. Weaknesses: 1. [*There is no discussion on lower bounds. Therefore, it is not clear whether the proposed method is optimal.*] - While we agree that lower bounds can be informative, our goal in this paper was to demonstrate the potential benefits of meta-learning across a broad number of settings, including four different bandit algorithms (Sections 3.1, 3.2, 4, and D.3) and three different geometries (simplex, sphere, and polytope). See also our response to your question on stochastic lower bounds below. Questions: 1. [*In the related work section, the authors compare their algorithm to algorithm for dynamic regret optimization, but they discuss only the number of switches. There are algorithms whose regret bounds involve total variance of the environment (e.g. $\Delta$ in [Wei & Luo (COLT 2021)].). Are these algorithms related to the problem setting in Sec 3.2? Is it possible to provide a comparison?*] - Thank you for the reference, which we will add to our related work. Note that Wei & Luo (2021) study several different stochastic settings (from MAB to MDP), whereas we focus on the adversarial setting; thus it is not obvious how to generalize their $\Delta$ to our setting, as it quantifies changes in the loss distributions over time. From looking at the dynamic regret bound they highlight in the abstract—$\tilde{\mathcal O}(\min\{\sqrt{LT},\Delta^\frac13T^\frac23\})$, where $L$ is equal to the number of changes in distribution ($T$ in our setting), $T$ corresponds to the number of rounds ($mT$ in our setting), and $\Delta$ is a type of distributional path length—it seems that their algorithms will have task-averaged regret $\tilde{\mathcal O}(\min\{\sqrt m,\Delta^\frac13m^\frac23/T^\frac13\})$. This will do well if the average change in distributions across tasks is sublinear in $T$ (i.e. 
$(\Delta/T)^\frac13=o_T(1)$); on the other hand, if the environment switches between a small set of different tasks (the sparse setting where our MAB algorithm does well) then $(\Delta/T)^\frac13=\Theta(1)$ and their approach will not improve upon the baseline of $\tilde{\mathcal O}(\sqrt m)$. Even in this case, of course, their algorithm has the advantage of not needing to know the location of the task switches. 2. [*In the stochastic setting, is there a known lower bound?*] - There are known lower bounds for some specific multi-task stochastic bandit settings [1,2,4,5], but to our knowledge the only one in the sequential setting across tasks is in the full information adversarial setting [3]. [1] Azizi, Kveton, Ghavamzadeh, Katariya. *Meta-learning for simple regret minimization.* AAAI 2023. [2] Cella, Pontil. *Multi-task and meta-learning with sparse linear bandits.* UAI 2021. [3] Khodak, Balcan, Talwalkar. *Provable guarantees for gradient-based meta-learning.* ICML 2019. [4] Simchowitz, Tosh, Krishnamurthy, Hsu, Lykouris, Dudik, Schapire. *Bayesian decision-making under misspecified priors with applications to meta-learning.* NeurIPS 2021. [5] Yang, Hu, Lee, Du. *Impact of representation learning in linear bandits.* ICLR 2021. --- Rebuttal Comment 1.1: Comment: Thank you for clarifications. I would like to keep the current scores.
Summary: The paper proposes an online mirror descent approach for online meta-learning with adversarial bandit feedback, utilizing FTL for the initialization, EWOO for the step-size, and MW for regularizer-specific parameters. The proposed method is applied to two widely adopted applications: multi-armed bandits (MAB) with the Tsallis regularizer and bandit linear optimization (BLO) with self-concordant barrier function regularizers. The authors provide a theoretical analysis of the asymptotic task-averaged regret for MAB and BLO. Strengths: The paper is well written and covers an important topic in the community. I checked some proofs, and they seem correct. Weaknesses: While the paper is theoretical in nature, it would be beneficial to include some empirical results to further validate the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The reviewer is not very familiar with this topic and would like to see the comments and feedback from other reviewers before asking further questions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be nice to provide empirical results (it is not mandatory, and the paper's contribution is still valuable without them). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review; we are happy to answer additional questions later. With respect to your concern about experiments: our goal was theoretical depth and breadth, with the application of our framework to four different bandit algorithms (Sections 3.1, 3.2, 4, and D.3) across three different geometries (simplex, sphere, and polytope). Thus we view experimental contributions as out of scope for us due to constraints on space and the need for conciseness. --- Rebuttal Comment 1.1: Comment: I have read the authors' response and other reviewers' comments. I am glad to keep my score.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their helpful reviews, which among other things have led to several useful references and an analysis of the robustness of our result to outliers (c.f. [our response to Reviewer fExn](https://openreview.net/forum?id=r6xGZ0XL2g&noteId=hvQFk8uNIu)); we plan to incorporate these into the revision. We hope to address any follow-up questions in the discussion. To summarize the contributions of our paper: we develop a meta-algorithm for meta-learning the parameters of adversarial bandit methods and apply it to four such procedures (Sections 3.1, 3.2, 4, and D.3) across three different geometries (simplex, sphere, and polytope). This is the first analysis of meta-learning for adversarial bandits, and it yields provable performance improvements for sequences of multiple similar tasks, where performance is measured by task-averaged regret and task similarity is measured by distances between task optima induced by the specific within-task algorithm being studied. As an example, in the case of $T$ multi-armed bandit tasks with $m$ rounds and $d$ arms each where only $s\ll d$ of the arms are ever optimal for any task, the average regret of our approach is $O(\sqrt{sm\log d})$ (ignoring lower order terms in $m$ and $T$), compared to the usual MAB regret of $O(\sqrt{dm})$.
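The closing comparison in this summary — task-averaged regret $O(\sqrt{sm\log d})$ under $s$-sparsity versus the single-task $O(\sqrt{dm})$ — can be eyeballed numerically. Constants and the lower-order terms the rebuttal mentions are dropped here, so the crossover point is only indicative:

```python
import math

def meta_regret(s, m, d):
    # sparse task-averaged bound from the rebuttal, constants dropped
    return math.sqrt(s * m * math.log(d))

def single_task_regret(m, d):
    # standard per-task MAB bound, constants dropped
    return math.sqrt(d * m)

m, d = 1_000, 100
for s in (2, 5, 20, 25):
    better = meta_regret(s, m, d) < single_task_regret(m, d)
    print(s, round(meta_regret(s, m, d), 1), better)
```

With $d=100$ arms, the meta-learned bound wins whenever roughly $s \lesssim d/\log d \approx 22$, i.e. precisely when the tasks concentrate on a small set of optimal arms.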
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper discusses online meta-learning with bandit feedback to enhance performance across multiple tasks that share a natural similarity measure. The study focuses on the adversarial online-within-online partial-information setting and proposes meta-algorithms that combine outer learners to optimize the initialization and other hyperparameters of an inner learner for two cases: multi-armed bandits (MAB) and bandit linear optimization (BLO). For MAB, the meta-learners use the Tsallis-entropy generalization of Exp3, and the task-averaged regret improves when the entropy of the optima-in-hindsight is small. For BLO, the approach involves learning to initialize and tune online mirror descent (OMD) with self-concordant barrier regularizers, where the task-averaged regret is related to an action-space-dependent measure induced by these regularizers. The guarantees provided in the study are based on demonstrating that unregularized follow-the-leader combined with two levels of low-dimensional hyperparameter tuning can effectively learn a sequence of affine functions of non-Lipschitz and sometimes non-convex Bregman divergences, which bound the regret of OMD. Strengths: This paper designs a meta-algorithm combining FTL, EWOO, and MW to set the initialization, step-size, and regularizer. It then has two direct applications, MAB and BLO. The authors provide a rigorous average-regret analysis with Tsallis entropy, giving the first such results for online learning with Bregman divergences. Weaknesses: It would be better if the authors could provide some real-world applications that the designed algorithm can adapt to. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) For the MAB algorithms, there may be only one task served in each round, i.e., only one loss is observed. In this case, how should Algorithm 1 be run, and do we need to update the parameters of all tasks?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review; we hope to address your questions and concerns below. 1. [*It will be better if authors can provide some real-world applications that the designed algorithm can adapt to.*] - As noted in the introduction, single-task bandits are widely used in applications such as recommender systems and experimental design; multi-task versions of these applications arise naturally. For example, a recommendation engine may face a sequence of different users with multiple interactions each, or a scientist may run a sequence of similar multi-round experiments. 2. [*For the MAB algorithms, there may be only one served task in each round, i.e., observe the loss. In this case, how to run algorithm 1 and do we need to update the parameters of all tasks?*] - If we are interpreting your question correctly, it concerns the somewhat different setting of *multi-task* online learning (Dekel, Long, & Singer, COLT 2006), where on each round we see a loss associated with one of T tasks, rather than *meta* online learning, where we see T tasks one-after-another. The multi-task setting has generally required rather different algorithms. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I keep my original assessment.
Summary: This paper introduces a meta-learning algorithmic framework for adversarial bandits, designed to fine-tune the initialization and hyperparameters of the internal learner, thereby enhancing performance across multiple tasks. The efficacy of this approach is validated via a theoretical analysis on multi-armed bandits and linear bandits. The authors also propose a task-similarity measure with significant implications concerning entropy and proximity to the task boundary. Strengths: - The paper addresses meta-learning under adversarial bandit feedback, which is an interesting problem by itself, and the meta-learning approach poses a solid contribution to the bandit community, with a new notion of task-similarity based on entropy. - The concrete steps toward building the algorithms are clear, and the statements in the paper seem solid. Weaknesses: - The paper does not have any numerical validations of the proposed algorithm. This puts doubts on the practicality of the proposed framework and methods. Synthetic as well as real-world examples should be tested upon, and comparisons against meta learning stochastic bandits should be discussed in the numerical experiments as well. - The computational complexity is not adequately addressed within the main body of the paper. Specifically, the computational overhead associated with the Multiplicative Weights (MW) method could be prohibitively high with a large grid and action space. - The interpretation of task-similarity lacks a sufficient explanation of its underlying motivation. Even in application cases where the authors justify the task-similarity measure via average entropy or proximity to the decision boundary, it remains unclear why this measure was chosen and how it compares to other task-similarity measures in stochastic bandit scenarios (and/or those with contamination or parameter noise). As it stands, the presentation in the paper makes it seem more like an artifact of the regret proof.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Could the authors elaborate on the connections to meta-learning in adversarial Reinforcement Learning (RL)? While meta-learning for adversarial bandits appears to be a novel concept, meta-learning in adversarial RL, a generalization of bandits, has been explored in the literature, as evidenced by [1] for example. - Could the authors provide insights on how the algorithm's implementation could be optimized for efficiency? - Could the authors demonstrate the effectiveness of the model and the algorithm in a well-motivated real-world setting through numerical experiments? - Could the authors offer a more intuitive explanation of the task-similarity measure and how it compares to existing methods, beyond comments about being 'distributional assumption-free'? [1] Lin, Zichuan, Garrett Thomas, Guangwen Yang, and Tengyu Ma. "Model-based adversarial meta-reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 10161-10173. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have acknowledged the limitations of their work, including the absence of a gap-free task similarity criterion, the assumption of known task boundaries, and the lack of a 'best-of-both-worlds' guarantee. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review; we address your questions and concerns below. Weaknesses: 1. [*The paper does not have any numerical validations of the proposed algorithm. [...]*] - Our goal in this work was theoretical depth and breadth, with the application of our framework to four different bandit algorithms (Sections 3.1, 3.2, 4, and D.3) across three different geometries (simplex, sphere, and polytope). This puts experimental contributions out of scope for us due to constraints on space and the need for conciseness. As noted by Reviewer qSEU, they are “not mandatory and the paper's contribution is still valuable without them.” 2. [*The computational complexity is not adequately addressed within the main body of the paper. Specifically, the computational overhead associated with the Multiplicative Weights (MW) method could be prohibitively high with a large grid and action space.*] - While it is in the appendix due to space constraints, our thorough examination of both computational and space complexity in Section A.3 does discuss the overhead of MW; per-iteration its cost is sublinear in T and d, which we do not view as prohibitively high. 3. [*The interpretation of task-similarity lacks a sufficient explanation of its underlying motivation. [... I]t remains unclear why this measure was chosen and how it compares to other task-similarity measures in stochastic bandit scenarios [...] As it stands, the presentation in the paper makes it seem more like an artifact of the regret proof.*] - We view task-similarity measures that arise from distance measures or entropies associated directly with the algorithms being run on those tasks to be inherently natural, rather than simply proof artifacts. For MAB we provide an interpretation of the task-similarity measure below Corollary 3.1 and *directly* compare it to the one that Azizi et al. [10] used for the stochastic bandit setting at the end of Section 3.1. 
The fact that our results yield meaningful guarantees under their “small set of optimal arms” assumption, which we also highlight in the introduction, is strong evidence of the usefulness of our approach to measuring task similarity. For BLO we provide an interpretation of the task-similarity measure below Corollary 4.1. Questions: 1. [*Could the authors elaborate on the connections to meta-learning in adversarial Reinforcement Learning (RL)? While meta-learning for adversarial bandits appears to be a novel concept, meta-learning in adversarial RL, a generalization of bandits, has been explored in the literature, as evidenced by [1] for example.*] - In RL the losses are stochastic and depend on a state in an MDP, whereas in adversarial bandits the losses are chosen adversarially; thus RL does not generalize adversarial bandits. In the referenced work [1], it is the “meta” aspect that is adversarial, not the RL. Specifically, they study a batch setting across tasks and propose that the learner should choose the tasks using adversarial training in order to do well in the case of distribution shift, *but there is no actual adversary*; our work studies an online setting across tasks where an adversary chooses the tasks. 2. [*Could the authors provide insights on how the algorithm's implementation could be optimized for efficiency?*] - This is discussed in Section A.3; in particular, we suggest that replacing our use of EWOO by MW may increase efficiency at the cost of worse regret. Please also see our response to your second weakness above. 3. [*Could the authors demonstrate the effectiveness of the model and the algorithm in a well-motivated real-world setting through numerical experiments?*] - Please see our response to your first weakness above. 4.
[*Could the authors offer a more intuitive explanation of the task-similarity measure and how it compares to existing methods, beyond comments about being 'distributional assumption-free'?*] - Please see our response to your third weakness above. Note also that “distributional assumption-free” is an aspect of our guarantees, rather than of our task-similarity measure. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification and hence increase my score from 4 to 6.
Deterministic Strided and Transposed Convolutions for Point Clouds Operating Directly on the Points
Reject
Summary: This paper focuses on applying strided and transposed convolutions to point cloud data so that a deterministic network can operate directly on the points. To achieve this, a strided convolutional layer with auxiliary loss is proposed, which ensures a consistent selection of points across the whole learning process. Further, a lightweight autoencoder network is built upon the proposed convolutional operator. Experiments are conducted on KIMO3-6 for point cloud reconstruction and ModelNet40 for shape classification. Strengths: 1. The proposed idea and method of performing strided and transposed convolutions on point clouds are interesting. 2. The related work provides an exhaustive discussion of existing point cloud CNNs, which is meaningful. 3. Fig. 3 intuitively illustrates the point selection during the learning procedure. 4. Code implementations are provided in the supplementary material. Weaknesses: **Main weaknesses**: 1. Poor writing quality: A. The whole paper is presented with very long text without clear paragraph/subsection division, which significantly hurts the reading of the paper. B. Lines 228-246 state the network design for applying the proposed convolution operator, but no figure illustration is provided, making it hard to fully understand how to apply this on Point-M2AE. 2. This paper is highly related to SparseConvNet [1], which also uses strided convolution on point clouds. However, it is not included, discussed, or compared. This is very important to evaluate the novelty of the proposed method. 3. Use/comparison of Point-M2AE: A. It is not clear why the proposed method is applied on Point-M2AE. Is the method only suitable for auto-encoder-style networks? B. In Point-M2AE, in addition to SVM evaluation on ModelNet40, general/few-shot classification on ModelNet40 and ScanObjectNN, part segmentation on ShapeNetPart, and 3D object detection on ScanNetV2 are also evaluated.
To fully verify the effectiveness of the proposed method, more experiments should be conducted under some of the benchmark settings as in Point-M2AE. 4. The reported results are not promising compared with SOTA point CNNs, as shown in Table 2, although fewer #Params are introduced. As a result, this work does not clearly prove the potential of using the proposed 2D-like strided and transposed convolutions, instead of the existing customized 3D point cloud convolutions. 5. Line 298 states the #params to show the lightweight property of the proposed method. However, the FLOPs and latency are also important to evaluate the efficiency; thus, more comparisons of FLOPs and latency should be provided. 6. The model's robustness to permutation, translation, rotation, scaling, and noise is not tested through experiments, which is important for real-world applications. **Refs**: [1] 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks, CVPR 2018. [2] A closer look at local aggregation operators in point cloud analysis. ECCV, 2020. **Additional comment**: PosPool [2] is another representative point convolution method, which is also directly applied to a ResNet architecture. It should be included, discussed, and compared. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: The idea and the proposed method to perform strided and transposed convolutions on point clouds are interesting. However, a very related work, SparseConv, is not discussed or compared, which is very important for evaluating the novelty of the proposed method. Some discussions (better with figures or tables) are expected to be provided. Moreover, the presentation quality (e.g., poor organization, the lack of figure illustration of network designs) of the whole paper is not up to the standard of a top-tier conference, which significantly hurts the reading of the paper. It shows that the paper was finished in a rush.
Additionally, some experiments important for verifying the effectiveness of the proposed method are not provided. To conclude, the current version is somewhat below the bar for a main conference paper, and solving the mentioned weaknesses is non-trivial. Thus, accepting this paper may not be fair considering the deadline is the same for all papers. It is highly recommended to revise the paper carefully, **identify the major bottleneck of performance and try to improve it**, and submit it to another conference. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, First and foremost, thank you very much for taking the time to review our paper and providing us with helpful comments to improve it. We are looking forward to discussing the contents with you. We agree that more figures could help the understanding of the paper and will add them in the future. Nevertheless, we cannot directly see why you consider SparseConvNet to be highly relevant to our work. In our understanding, SparseConvNet uses a clever trick to keep the higher feature levels sparse and thus performs computations with higher efficiency. However, it also operates on a grid structure for point clouds and is therefore very different from our approach which works directly on the points. Could you tell us where we misunderstood or elaborate more on why you think it is crucial to include this reference? We think that it is very interesting work and will add it to the related work section, but do not see the need for an in-depth discussion. We applied our method to M2AE as it is the state of the art for unsupervised learning on point clouds. However, you are right that more experiments with different architectures would improve the paper. Could we convince you as a reviewer of the importance of our sampling method if we showed in further experiments that we can outperform FPS in certain settings, e.g. certain transformer models? Best regards and thank you again for your time Authors --- Rebuttal Comment 1.1: Comment: This rebuttal does not fully address my concerns. This rebuttal seems more like an argument letter, without any sufficient results or figures. I highly recommend authors read some rebuttals available at the open-review website, such as the rebuttal of the ICLR conference. 
More importantly, only one backbone, PointM2AE, is selected to apply the proposed method, and the experimental settings are not complete (in PointMAE, they did the general/few-shot classification on ModelNet40 and ScanObjectNN, part segmentation on ShapeNetPart, and 3D object detection on ScanNetV2, which are missing in this paper). Moreover, the experimental results are not impressive. At the experimental level, this method is for improving the performance of point cloud backbones under full supervision. It is hard to accept such a quality of experiments. As also indicated by other reviewers, the writing quality of this paper is not acceptable for a top-tier conference and requires non-trivial effort to revise. Therefore, I will certainly keep my original rating of 3.
Summary: This paper introduces a learning-based point sampling strategy to deterministically downsample point clouds, which can be used to build a U-shaped network for point cloud reconstruction and representation. To enforce a stable and meaningful sampling (or selection), an auxiliary selection loss is proposed. The auxiliary selection loss enables a network to select central points which are likely to be non-neighboring. With the sampling strategy, this paper finally proposes a deterministic strided and transposed convolution for point clouds. The proposed method is evaluated in point cloud reconstruction and representation learning (especially SVM classification on the representation) tasks. In the point cloud reconstruction task, the proposed method shows a lower chamfer distance than the previous methods. The ModelNet40 experiments show that the proposed method can be integrated with various network configurations. Strengths: 1. [Originality] The proposed sampling strategy and its application to the strided convolutions are interesting. Since many previous point cloud networks utilize farthest point sampling as described in the paper (L62-83), the proposed sampling strategy could be a new alternative and bring robustness to those networks. 2. [Clarity] This paper provides detailed explanations of the auxiliary selection loss with a theorem, its proof, and an example. In particular, the example (Table 1) explicitly shows how the attention map matrix (M) is constructed and how the selected points are non-neighboring. Weaknesses: 1. [Originality] A highly relevant reference, SampleNet [1], is missing in both related work and experiment sections. Since both SampleNet [1] and this paper propose a learning-based point sampling, I recommend the authors explain how the proposed method differs from SampleNet and provide evaluation results on the same experiments SampleNet did: supervised classification on ModelNet40. 2.
[Quality] The writing quality and layout of the paper should be improved. For example, the proof of Theorem 3.1 and its example (Table 1) can be moved to the Appendix, although they may help readers to understand what the auxiliary selection loss is. Instead of the proof and example, detailed experiment results (e.g., downstream task results) with the proposed method would be better to be added. 3. [Significance] The current setup of experiments is not enough to show the significance of the proposed sampling strategy. Since various networks with farthest point sampling can use the proposed sampling strategy as an alternative, the downstream task results, which those networks did, should be added. For example, Point Transformers [2, 3] with the proposed sampling strategy can be evaluated in 3D semantic segmentation task on S3DIS or shape classification on ModelNet40. I recommend the authors evaluate the proposed sampling strategy with SOTA networks [2, 3] on downstream tasks (e.g., shape classification, semantic segmentation, and registration). [1] Lang et al., “SampleNet: Differentiable Point Cloud Sampling”, CVPR, 2020.\ [2] Zhao et al., “Point Transformer”, ICCV, 2021\ [3] Wu et al., “Point Transformer V2: Grouped Vector Attention and Partition-based Pooling”, NeurIPS, 2022. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: As described in the weakness section, I have several questions about the contribution of the proposed method and experiments (please see the weakness section for details): 1-1. Compared to SampleNet [1], what are the strengths of the proposed method? \ 1-2. Can those strengths be quantitatively evaluated on the downstream task (e.g., shape classification) SampleNet did? \ 2. Does the proposed sampling strategy outperform farthest point sampling or voxel subsampling on downstream tasks (shape classification, semantic segmentation, and registration)? [1] Lang et al., “SampleNet: Differentiable Point Cloud Sampling”, CVPR, 2020. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors partially addressed their work's limitations in the experiment section but did not address the potential negative societal impact. However, I don't think that there is a particular negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, First and foremost, thank you very much for taking the time to review our paper and providing us with helpful comments to improve it. We are looking forward to discussing the contents with you. Thank you for pointing out SampleNet. We agree that it is highly relevant to our work and will add it to the related work section. Nevertheless, we think that it is different from our approach for two reasons: First, it does not enforce diversity, and second, it is trained with an iterative training procedure. This means that first, the actual network is trained on the task, then its weights are fixed and SampleNet is trained so that the selection improves the task performance. This procedure creates difficulties when multiple hierarchy levels are desired. You further mention that the writing quality and layout should be improved. We agree that the layout would benefit from moving the example to the appendix in favor of showing more experiments. However, we were not sure which parts of the paper were of poor writing quality. Could you point us to specific sections? Replacing farthest point sampling in other architectures is a helpful suggestion and we will do this in future experiments. Could we convince you as a reviewer of the importance of our sampling method if we showed in further experiments that we can outperform FPS in certain settings, e.g. certain transformer models? Best regards and thank you again for your time Authors --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer eM7t Comment: Thank you for the rebuttal. I have read the rebuttal and found that it did not address my initial concerns (quantitative comparison with SampleNet [Lang et al., 2020] and farthest point sampling). Therefore, I will keep my initial rating (reject).
Summary: This paper presents a learnable and deterministic point selection layer to uniformly downsample points and a point transposed convolution layer to upsample points. Strengths: 1. The auxiliary loss (Eqn. 1) proposed to supervise the point selection is interesting. Weaknesses: 1. Deficient theoretical soundness. The proposed downsampling layer attempts to learn a point importance score and selects the points with the highest scores. Given that the selection operation is non-differentiable, your importance prediction network is solely supervised by the auxiliary loss (which functions as a uniform sampling regularization). As a result, it appears to be optimized to output uniformly sampled points rather than points based on semantic importance. However, the authors seem to disagree and claim that their proposed subsampling layer is capable of learning how to select points based on semantic importance (L330). Please elucidate how this non-differentiable operation can learn an importance-based sampling. It is worth noting that previous work leverages Gumbel-Softmax to enable a soft learnable selection, which results in a differentiable point subsampling network (L105). 2. Implementation and trade-off details of the learnable subsampling? Is it implemented using a single linear layer as depicted in Figure 1? How efficient is it? How many additional parameters are required? How practical is it to scale this up to a large-scale point cloud? 3. Missing large-scale experiments. What is the performance of the classical PointNet++ or the latest PointNeXt with the proposed subsampling on a large-scale dataset like ScanNet? 4. Visualizations appear to be missing. It would be insightful to see how the selected points differ from FPS. 5. No improvement over FPS (Table 2). Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Figure 3 is not clear. I do not understand why the samples change with epoch.
Why not show how the point subsampling changes over the learning epochs using the same sample? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: In which specific applications is deterministic downsampling critical? Based on the results presented in Table 2, I observed no performance improvement, but rather a decline, when using the proposed downsampling method compared to FPS. For many applications, the determinism isn't a concern as the network is resilient to minor variances in subsampled points. This is particularly true in the case of large-scale point clouds, where the variance in subsampled points is typically small. Could you clarify the necessity and advantages of your deterministic downsampling method in this context? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
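The contrast drawn in weakness 1 — hard top-k selection passes no gradient to the learned scores, while a Gumbel-Softmax relaxation stays differentiable — can be sketched as follows (a generic NumPy illustration of the two operations, not code from the paper under review):

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_topk(scores, k):
    """Hard selection: argsort/indexing is piecewise constant in the scores,
    so it provides no gradient signal to a score-prediction network."""
    return np.argsort(scores)[-k:]

def gumbel_softmax(logits, tau=0.5):
    """Soft relaxation: perturb logits with Gumbel noise, then take a
    temperature-scaled softmax; tau -> 0 approaches a one-hot selection."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                               # numerically stable softmax
    return y / y.sum()

scores = np.array([0.1, 2.0, 0.3, 1.5])
idx = hard_topk(scores, k=2)   # indices of the two highest-scoring points
soft = gumbel_softmax(scores)  # soft weights usable in a differentiable pipeline
```

In deep-learning frameworks the soft weights let the loss backpropagate into the score network, which is the alternative design the reviewer references.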
Rebuttal 1: Rebuttal: Dear Reviewer, First and foremost, thank you very much for taking the time to review our paper and providing us with helpful comments to improve it. We are looking forward to discussing the contents with you. Regarding the first weakness mentioning the discrepancy between the selection being based only on the auxiliary loss and the observation of a semantic selection, we fully agree with you that the selection is not differentiable with respect to the task loss. We did not mean to say otherwise. However, during the analysis of the selection procedure we observed that the selected points tend to be those with a high activation for the task-dependent features. Thus, we hypothesized that the selection of non-neighboring points guided by the auxiliary loss is easier for the network if it takes semantic information into account. That is why we think that the selection is influenced by semantic information. Would you need more proof for this hypothesis? Our subsampling is implemented using two bottleneck ResNet blocks, and the number of parameters depends on the number of channels used. In the case of 64 channels in the previous block, there are 50707 additional parameters. The module does not scale well with the increasing size of the point cloud due to the nearest neighbor computations, but this is also true for FPS. Figure 3 showed different samples because they revealed different properties of the selection, but we agree that finding a sample that shows all properties would help the reader to better comprehend the figure. We will work on enhanced visualizations. The permutation invariance property is desirable as it increases the guaranteed robustness of the output, however, we agree with you that we should directly show this benefit in further experiments. Could we convince you as a reviewer of the importance of our sampling method if we showed in further experiments that we can outperform FPS in certain settings, e.g. 
certain transformer models? Best regards and thank you again for your time Authors
Summary: This paper aims to propose a new type of convolutional neural network for point cloud understanding tasks. Besides, the authors present a new loss function for the sampling process of feature extraction for point clouds. The authors also provide a theoretical analysis to show that their method is better for sampling points. The authors conduct experiments on the reconstruction and shape classification tasks. The experimental results on shape classification are not satisfactory. Strengths: - Theoretical analysis for the proposed algorithm. - Better performance on the reconstruction tasks. Weaknesses: - Poor writing. - Unclear motivation. - Unsatisfactory experimental results for point cloud understanding. - Inaccurate statements. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - In L3-4, the authors wrote that "point clouds are less structured than images". I guess that it should be "Point clouds contain more complicated structural information than images." - Why do we need to develop convolution operators for point cloud understanding? For example, with simple geometric projection, we can transform the original point cloud into image grids. It's more efficient and reasonable. P2P [a] with a tiny-scale ConvNeXt can outperform Point-M2AE-e-1 proposed in this paper. - Can we replace the FPS operation with the proposed method? From a practical view, existing FPS operators have been enhanced with the CUDA library, which brings higher computation efficiency. Can the authors provide additional information about the difference in the execution time between the proposed method and the FPS operator? - I cannot directly see the advantages of the proposed method compared with existing methods. Can the authors summarize the advantages in terms of training parameters, inference speed, and experimental performance? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Lack of limitation discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, First and foremost, thank you very much for taking the time to review our paper and providing us with helpful comments to improve it. We are looking forward to discussing the contents with you. Among the critical points you raised, you noted deficiencies in the writing of our paper and that we made inaccurate statements. Could you please elaborate on what parts were not well written? You mention that we wrote “point clouds are less structured than images” in the abstract. We meant it in the sense of organization of the points. An image can be thought of as a point cloud where the points have color information attached and are organized in a grid. A point cloud in general does not fulfil this property. Would you agree with this explanation? Are there further parts in the text that you find misleading? Moreover, thank you for the pointer to the paper “P2P: Tuning Pretrained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting”. The approach is very interesting. However, it relies on a pre-trained image transformer and first utilizes DGCNN for point cloud understanding. Thus, in our opinion, this does not show that there is no need for research on convolution operators for point cloud understanding, as with increased datasets more sophisticated point cloud architectures may again outperform the pre-trained vision transformer potentially utilizing our proposed FPS alternative. Also, the DGCNN model may be improved with our proposed method. You asked about the execution time of our proposed method. We would like to point out that we have not been able to optimize our code in the same way as FPS, as this is beyond the scope of our current capabilities. Nevertheless, during training we did not encounter severe differences with regards to computation time. Would you consider the paper acceptable only if we could obtain a speed benefit in addition to the permutation invariance? 
The overall advantage of our approach is the permutation invariance of the operation. In the experiments we could show that our relatively small model outperforms TearingNet on the complete reconstruction task. However, we agree that further experiments demonstrating this advantage on different tasks are desirable. Could we convince you as a reviewer of the importance of our sampling method if we showed in further experiments that we can outperform FPS in certain settings, e.g. certain transformer models? Best regards and thank you again for your time Authors --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a rebuttal response. After carefully reading the reviews from other reviewers and the responses from the authors, I believe that this paper is potentially meaningful for the area. However, based on this manuscript, I still think that it does not satisfy the standard for publication. Please refer to the points shared below: * "point clouds are less structured than images" I have understood what the authors mean. I think that the authors should also take RGB information into consideration. In addition, I recommend a relevant paper [Image as Set of Points, ICLR 2023] for the authors, which may be helpful for your research. * Convolution Operation for Point Cloud. I agree with the authors that P2P still requires convolution operation. But purely convolutional networks like ResNet-50 can also work with P2P for point cloud understanding. * Additional Experimental Results For acceptance, I think that more experimental evidence of the benefits of the proposed method is a necessary part of the presentation. This paper is not in good shape, therefore, I lean to reject the paper and I will keep my rating.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Global Update Tracking: A Decentralized Learning Algorithm for Heterogeneous Data
Accept (poster)
Summary: This work proposes a novel decentralized learning algorithm based on a gradient tracking mechanism, called Global Update Tracking (GUT), which aims to mitigate the impact of heterogeneous data distribution. The proposed GUT algorithm overcomes the bottleneck of communication overhead by allowing agents to store a copy of their neighbors’ model parameters and then tracking the model updates instead of the gradients. Numerous experiments have shown that the proposed Global Update Tracking with Quasi-Global momentum (QG-GUTm) outperforms the current state-of-the-art decentralized learning algorithm on a spectrum of heterogeneous data. Strengths: 1. This paper proposes a powerful global update tracking method and achieves good performance on multiple benchmarks. 2. The authors provide a detailed sensitivity study of the method's performance with respect to model architecture and hyper-parameters. 3. The paper is well written and easy to read and understand. Weaknesses: 1. In the experimental part, the quantitative experiments about communication parameters (cost) are not clear enough. Can the authors provide specific quantitative results for comparison so that readers can easily understand them? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper does not have any specific societal impact, other than the ones associated with other algorithms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. The following Table 1 provides the communication cost for the experiments on a 16-agent ring topology, measured in terms of the total amount of data transferred per agent during training (in GB). The communication costs for the remaining experiments are presented in **Table R3** of the rebuttal pdf. Table 1: Communication cost per agent for training various datasets on different model architectures over a 16-agent ring topology. | Dataset | Model | Method| Communication Cost (GB) | | ------------------ | ----------------- | ------------ | :--------: | |CIFAR-10 | ResNet-20 |Gradient Tracking| 83.66| |CIFAR-10 | ResNet-20 |QG-DSGDm| 41.83| |CIFAR-10 | ResNet-20 |QG-GUTm| 41.83| |CIFAR-100 | ResNet-20 |Gradient Tracking| 85.46| |CIFAR-100 | ResNet-20 |QG-DSGDm| 42.73| |CIFAR-100 | ResNet-20 |QG-GUTm| 42.73| |Fashion MNIST | LeNet-5 |Gradient Tracking| 11.42| |Fashion MNIST | LeNet-5 |QG-DSGDm| 5.71| |Fashion MNIST | LeNet-5 |QG-GUTm| 5.71| |ImageNette | MobileNet-V2 |Gradient Tracking| 68.86| |ImageNette | MobileNet-V2 |QG-DSGDm| 34.43| |ImageNette | MobileNet-V2 |QG-GUTm| 34.43| The communication cost for the proposed GUT or QG-GUTm algorithm is the same as for the DSGD or QG-DSGDm methods, i.e., the proposed methods have no additional communication overhead. We have answered all the questions raised by the reviewer and would be happy to answer any further questions.
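As a rough sanity check on the 2x relationship above (a hedged sketch, not the authors' code): if each agent transmits some number of model-sized float32 vectors per iteration, the per-agent communication cost scales linearly with that count, so a tracking method that sends the model plus a tracking variable costs exactly twice a method that sends one model-sized vector. The parameter and iteration counts below are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch (not from the paper): per-agent communication cost under the
# assumption that each agent transmits `vectors_per_iter` model-sized
# float32 (4-byte) vectors every iteration.

def comm_cost_gb(num_params, iterations, vectors_per_iter):
    """Total data transmitted per agent, in GB (1 GB = 1e9 bytes)."""
    return num_params * 4 * iterations * vectors_per_iter / 1e9

# Illustrative numbers (hypothetical):
resnet20_params = 270_000        # ResNet-20 has roughly 0.27M parameters
iterations = 39_000

dsgd_gb = comm_cost_gb(resnet20_params, iterations, vectors_per_iter=1)
gt_gb = comm_cost_gb(resnet20_params, iterations, vectors_per_iter=2)
assert gt_gb == 2 * dsgd_gb      # the tracking variable doubles the payload
```

This matches the pattern in the table above, where gradient tracking costs exactly twice QG-DSGDm/QG-GUTm for every dataset/model pair.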
Summary: The performance of decentralized learning is limited by differing data distributions across devices. To address this issue, this paper proposes a method that is less susceptible to variations in data distribution, named GUT. The proposed GUT tracks the global/average model updates. Experiments show GUT achieves good performance on several benchmarks with various models. The paper also analyzes the convergence of the proposed method. Strengths: 1) The issue of heterogeneous data on devices in decentralized learning is critical; the paper shows a good motivation to address this. 2) On the given benchmarks and models, the proposed method seems to perform well. 3) The analysis of the convergence of the method is detailed. Weaknesses: 1) There are too many descriptions of related work in the introduction, making it difficult for readers to find the gist. 2) The analysis of the method's strengths is not clear enough. Why is the proposed method better than previous methods? Why can tracking the model updates instead of the gradients save half the communication overhead? Can differences in the data distribution of all agents be mitigated by communicating with neighboring agents? 3) References to the employed benchmarks (CIFAR-10, CIFAR-100, Fashion MNIST, and Imagenette) and models (VGG-11, ResNet-20, LeNet-5, and MobileNet-V2) are missing. 4) The evaluated benchmarks are relatively small. More large-scale benchmarks should be supplemented, such as ImageNet. 5) Some larger and state-of-the-art models should be evaluated to verify the effectiveness of the proposed method, such as ResNet-50, ResNet-101, ViT. 6) The evaluations on Fashion MNIST, CIFAR-100, and Imagenette employ different models. The models should be unified. 7) Since one of the main contributions of the method is to reduce overhead, the article lacks a quantitative analysis of overhead. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In the rebuttal, the reviewer hopes to see responses to points 2) and 4)-7) in the weaknesses. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper provides a compelling discussion about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We answer each of the questions raised in the weakness section here. 1. We will update the literature review to only include decentralized learning works on heterogeneous data. 2. The difference in data distribution across agents results in a huge variation in the local gradients and hence in poor performance of decentralized learning. These differences cannot be completely mitigated by communicating with neighboring agents. However, we can reduce this variance in local gradients through mechanisms such as gradient tracking. In these techniques, the key idea is to track the global/averaged gradient through a tracking variable. The tracking variable has less variance across agents than the local gradient and is closer to the global gradient. But tracking mechanisms have a communication overhead of 2x as they communicate the tracking variable along with the model parameters to/from the neighbors at every iteration. Note that model parameters from the neighbors are required for the gossip averaging step, and the tracking variables of the neighbors for updating the current agent’s tracking variable (bias correction). The goal of this work is to design an algorithm similar to gradient tracking but without the communication overhead. To achieve this, every agent $j$ communicates its model update, i.e., $x_j^{t}-x_j^{t-1}$, which is the same as the tracking variable. Now, agent $i$ can recover the neighbor’s model parameters $x_j^t$ from the model updates by adding them to the local copy of $x_j^{t-1}$. The recovered neighbors’ parameters can then be used for the gossip averaging step. As for the tracking part, the model updates $x_i^{t}-x_i^{t-1}$ include two components -- (a) the gradient update $g_i^t$ and (b) the gossip update $\sum\limits_{j \in N(i)}w_{ij} * (x_j^t -x_i^t)$. 
Ideally, we would like $g_i^t$ to be the average gradient, i.e., $\frac{1}{n}\sum\limits_jg_j^t$, and $\sum\limits_{j \in N(i)}w_{ij} * (x_j^t -x_i^t)$ to be $\frac{1}{n}\sum\limits_{j=1}^n(x_j^t -x_i^t)$ -- as if the graph were fully connected. This implies that tracking the model updates not only inherently tracks the global gradient but also tracks the ideal gossip update. Hence the proposed GUT performs better than the previous methods at no additional communication overhead. Note that $n$ is the total number of agents and $\mathcal{N}(i)$ is the set of neighbors of agent $i$. We hope we have clarified the contributions of the proposed methods and would be happy to provide more details if needed. 3. The references for all the benchmark datasets and models are presented in Appendix C. We will add them to the main paper as well. 4. We provide the results on the ImageNet dataset trained on the ResNet-18 architecture over a 16-agent ring topology with alpha = 1 and 0.1 in Table 1 below. We use an initial learning rate of 0.1 and decay it by a factor of 10 at epochs 15 and 37. The stopping criterion is set to 50 epochs. Table 1: Average test accuracy of decentralized algorithms evaluated on ImageNet with ResNet-18 over a 16-agent ring. |Method|Test ACC ($\alpha=1$)| Test Acc ($\alpha=0.1$)| |-------------|-------------|-------------| |DSGD|$53.15$|$45.09$| |*GUT* (ours)|$53.57$|$46.33$| |QG-DSGDm|$60.85$|$57.17$| |*QG-GUTm* (ours)|$60.88$|$57.85$| 5. Our current computational resources cannot handle large ViT models. However, we do present experiments on MobileNet-V2, which is more suitable for edge applications. 6. Table 3 of the main paper uses the standard models that are usually employed for the respective datasets. We provide the results on ResNet-20 for all the datasets in Table 2 below. Table 2: Test accuracy of decentralized algorithms evaluated on ResNet-20 over a 16-agent ring. 
|Method|Fashion MNIST|Fashion MNIST| CIFAR-100| CIFAR-100| Imagenette|Imagenette| |-------------|-------------|-------------|-------------|-------------|-------------|-------------| | | $\alpha=0.1$ | $\alpha=0.01$ | $\alpha=0.1$ | $\alpha=0.01$ | $\alpha=0.1$ | $\alpha=0.01$| |DSGDm | $87.89 \pm 2.34$ | $79.41 \pm 3.29$| $47.93 \pm 1.69$ | $42.57 \pm 2.71$| $66.89 \pm 3.12$ | $47.87 \pm 4.03$| |QG-DSGDm | $92.21 \pm 0.01$ | $90.59 \pm 0.92$ | $ 53.19 \pm 1.68 $ | $ 44.17 \pm 3.64 $ | $73.93 \pm 2.01$ | $56.30 \pm 5.43$| | *QG-GUTm* (ours) | $\mathbf{92.55} \pm 0.16$| $\mathbf{91.70} \pm 0.36$ | $ \mathbf{53.40} \pm 1.23 $ | $ \mathbf{50.45} \pm 1.30 $ | $\mathbf{75.44} \pm 2.22$ | $ \mathbf{57.47} \pm 5.33$| 7. We provide the quantitative results on communication cost and memory requirements of training for all the experiments in **Table R3** in the attached rebuttal pdf. We also provide the compute and memory overheads in **Table R4** in the attached rebuttal pdf. In Table R4, memory overhead is reported as the fraction of additional memory required per agent during training with a batch size of 32 per agent, and the computational overhead is reported as the fraction of additional FLOPs required per sample per agent during training. $\text{Memory overhead} = \frac{\text{Additional memory due to GUT}} {\text{Total memory}}$ $\text{Compute overhead} = \frac{\text{Additional compute due to GUT}} {\text{Total compute}}$ We have answered all the questions raised by the reviewer and would be happy to answer any further questions. --- Rebuttal Comment 1.1: Title: Re: Official Review of Submission597 by Reviewer tQhr and Rebuttal Comment: 1. Thanks to the authors for their rebuttal; I have updated my rating based on their rebuttal and the other reviewers' comments. 2. I must say that this submission is out of my scope, and I encourage the AC to consider this issue when making their decision.
Summary: The paper proposes a new decentralized learning algorithm called Global Update Tracking (GUT) that can mitigate the impact of heterogeneous data in decentralized learning without introducing any communication overhead. The proposal is demonstrated on 4 computer vision datasets. Strengths: The paper proposes an important and useful application for training deep learning models in a decentralized way. There are many strengths of the proposal, including the following: 1. GUT is less susceptible to variations in data distribution across devices, as shown by different experiments and an ablation study. 2. The performance is better than DSGD and DSGDm. However, this is shown only for CIFAR-10. Weaknesses: There are several questions: 1. The paper claims that GUT does not introduce any communication overhead, but no results are shown to justify that. However, using Figure 2, the authors have claimed that their method converges faster than quasi-global gossip. The comparison/improvement is not clear from the figure. 2. The authors claim that agents in GUT store a copy of their neighbors' parameters. How often is this copy updated? What is the communication overhead of that? 3. A performance comparison with related methods is not shown on all 4 datasets. 4. The data partition is fixed, non-overlapping, and never shuffled across agents during training. How is this partition defined? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are several limitations that the authors should address: 1. The data partition across agents is not clearly explained. Necessary details for all datasets are required. For example, the number of samples per class per agent. 
Implementation details are also missing from the main paper. 2. A detailed comparison with the related methods is missing. For example, Table 3 compares performance on 3 datasets but no related method is shown. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We answer each of the questions raised in the weakness section here. 1. (a) GUT/QG-GUTm communicates model updates $(x^{t}-x^{t-1})$ whereas DSGD/QG-DSGDm communicates model parameters $x^{t}$. Both these vectors are of the same size, i.e., model size. We present the communication cost in terms of the data transferred per agent during training in **Table R3** in the attached rebuttal pdf. (b) Figure 2 plots the consensus error for a simple gossip averaging task with varying graph sizes (spectral gap). We compare gossip with GUT (dotted blue curve) with simple gossip (dotted red curve). The figure shows that gossip with GUT reaches a lower consensus error than simple gossip. We also plot the consensus error for the quasi-momentum version of both algorithms. Simple gossip with QGM is shown by the solid red curve and GUT with QGM is shown by the solid blue curve. Figure 2(c) illustrates that for graphs with a smaller spectral gap (which corresponds to more agents), the proposed QG-GUTm can converge faster than quasi-global gossip (gossip with QGM). 2. The neighbors’ copy is updated in every iteration (line 9 in Algorithm 1) and does not require additional communication. The main difference between DSGD and GUT is the type of information that the agents communicate, but both algorithms communicate vectors of the same size. In the DSGD algorithm, a given agent $i$ receives the model parameters $x_j^t$ at each iteration from its neighbors and uses them for gossip averaging. In GUT, a given agent $i$ instead receives the model updates $(x_j^{t}-x_j^{t-1})$ at each iteration from its neighbors and recovers $x_j^{t}$ by adding the received model update $(x_j^{t}-x_j^{t-1})$ to the stored copy of $x_j^{t-1}$. Now we use the recovered $x_j^t$ for gossip averaging and the model updates for bias computation or tracking. 3. For CIFAR-10, we compare with DSGD, DSGDm, and QG-DSGDm. 
However, for the remaining datasets in Table 3, we compare with QG-DSGDm as it is the existing state-of-the-art baseline for heterogeneous data with no communication overhead. For more baselines, we report DSGDm along with QG-DSGDm in **Table R2** in the attached rebuttal pdf. 4. At the beginning of the training, we divide the dataset across the agents such that the label distribution follows a Dirichlet distribution, and the non-iidness or skew is controlled by the factor $\alpha$. The dataset is sampled without replacement and hence is non-overlapping across the agents, i.e., each agent has a different part of the training dataset. We don't shuffle the dataset across the agents during training. A visualization of the CIFAR-10 dataset with varying degrees of alpha on a 16-agent ring topology can be found in Figures 1 and 8 of [1]. We will add similar bubble plots for all our experiments in the appendix. We answer each of the questions raised in the Limitations section here. 1. In the interest of space in the main paper, we added the implementation details of our experiments in Appendix C. We will include the class distribution across agents in the appendix and also add bubble plots for visualization. 2. We compared the results in Table 3 with QG-DSGDm, which gives the best accuracy among the existing works, and show that the proposed algorithm outperforms it. For more baselines, we report DSGDm along with QG-DSGDm in Table R2 of the attached rebuttal pdf, also presented below as Table 1. Table 1: Average test accuracy of different decentralized algorithms evaluated on various datasets, distributed with different degrees of heterogeneity over a 16-agent ring topology. 
|Method | Fashion MNIST |Fashion MNIST |CIFAR-100 | CIFAR-100| Imagenette |Imagenette| |----------|----------|----------|----------|----------|----------|----------| | | $\alpha=0.1$ | $\alpha=0.01$ | $\alpha=0.1$ | $\alpha=0.01$ | $\alpha=0.1$ | $\alpha=0.01$| | DSGDm | $86.59 \pm 0.92$ | $77.00 \pm 3.53$ | $47.93 \pm 1.69$ | $42.56 \pm 2.71$ |$66.02 \pm 4.59$ | $38.69 \pm 11.8$ | |QG-DSGDm | $ 89.94 \pm 0.44 $ | $ 83.43 \pm 0.94 $ | $ 53.19 \pm 1.68 $ | $ 44.17 \pm 3.64 $ | $ 63.60 \pm 4.50 $ | $ 39.49 \pm 4.57$| |*QG-GUTm (ours)*| $ \mathbf{90.11} \pm 0.02 $ | $ \mathbf{84.60} \pm 1.00 $ | $ \mathbf{53.40} \pm 1.23 $ | $ \mathbf{50.45} \pm 1.30 $ | $ \mathbf{66.52} \pm 3.68 $ | $ \mathbf{43.85} \pm 8.24 $ | We have answered all the questions raised by the reviewer and would be happy to answer any further questions. [1] Lin, T., Karimireddy, S. P., Stich, S. & Jaggi, M. (2021). Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data. Proceedings of the 38th International Conference on Machine Learning.
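The Dirichlet-based non-IID partition described in the rebuttal above can be sketched as follows (a minimal illustration, not the authors' implementation; the function and variable names are assumptions): for each class, a Dirichlet sample over agents decides what fraction of that class's samples each agent receives, so a smaller alpha gives a more skewed, non-overlapping split.

```python
import numpy as np

# Hedged sketch: Dirichlet-based non-IID label partition across agents,
# as described in the rebuttal. Smaller alpha -> more skew per class.
# All names here are illustrative, not from the authors' code.

def dirichlet_partition(labels, n_agents, alpha, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    agent_indices = [[] for _ in range(n_agents)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Fraction of class-c samples that each agent receives.
        props = rng.dirichlet(alpha * np.ones(n_agents))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for agent, part in enumerate(np.split(idx, cuts)):
            agent_indices[agent].extend(part.tolist())
    return agent_indices  # non-overlapping index sets, one per agent

# Toy check: 1000 samples, 10 classes, 16 agents, heavy skew.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, n_agents=16, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)          # full coverage
assert len(set().union(*map(set, parts))) == len(labels)  # no overlap
```

Sampling without replacement per class is what makes the agents' shards fixed and non-overlapping, as stated in the rebuttal.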
Summary: The authors propose an algorithm for decentralized learning, where training datasets with heterogeneous distributions are collected across different devices. They propose a global update tracking (GUT)-based approach where the IID assumption for the data is removed, and aim to generalize their method to non-IID data by tracking the consensus model while reducing the communication overhead via their GUT algorithm, at the expense of additional memory for storing the parameters of the neighboring models. They validate their method with experiments on various datasets. Strengths: - The proposed method is applicable to more realistic settings for decentralized learning, where non-IID heterogeneous distributions are more common in real-world data. - Their GUT-based approach adds minimal computational overhead for the agents, and they show the performance of their algorithm on multiple datasets in the experimental results. - They provide ablation studies for further analysis of the hyperparameter sensitivity, different levels of heterogeneity, and various implementations of momentum. - The authors provide the source code for their implementation in the supplementary materials. Weaknesses: - Since one of the main claims of this approach is its generalizability to non-IID heterogeneous data distributions for each agent, what aspect of the proposed formulation corrects the gap between the individual distributions? - Compared to previous decentralized learning approaches where the agent tracks gradients, momentum of gradients, or the global gradient obtained from the adjacent agents, the proposed GUT tracks model updates. What statistical characteristic/effect makes the model update ($x^{t+1}_i - x^{t}_i$) beneficial for tracking? A simple toy (e.g. Gaussian) example experiment with and without model update tracking would suffice. - Are there any visualizations of the non-IID characteristics of the datasets under varying degrees of $\alpha$? 
Also, are there experimental results on the additional memory consumption and computational overhead due to the additional update terms? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the questions in the weaknesses section. Since I am not an expert in this field, my positive and negative opinions are mixed. Additionally, repeating the full paper in the supplementary materials seems unnecessary. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors included a separate discussion and limitations section in their paper, with adequate explanations and descriptions of the weaknesses and possible future directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We answer the questions raised in the weakness section here. 1. In the proposed methods, the local model parameters are updated using the tracking variable $y_i$ instead of the local gradient $g_i$. The key idea is that the tracking variable $y_i$ corrects for the bias in the gradient computation and hence is closer to the true/average gradients compared to $g_i$. The update rule for $y_i$ is given in line 6 of Algorithm 1 in the main paper. In the given update rule, $\delta_i^t$ indicates the default/DSGD update vector (without GUT), and $y_i^t$ is the update vector with GUT. The difference between $\delta_i^t$ and $y_i^t$ is the scaled correction/tracking term. For simplicity, let's ignore the scaling of the correction term, which gives $y_i^t - \delta_i^t = \sum\limits_{j\in \mathcal{N}(i)}w_{ij}y_j^{t-1} - \delta_i^{t-1}$. We compute the difference between the default update $\delta_i^{t-1}$ and the averaged updates of the neighborhood, i.e., $\sum\limits_{j\in \mathcal{N}(i)}w_{ij}y_j^{t-1}$, from the previous iteration and add it as a bias correction to the current iteration's update. This bias term corrects for the gap between the individual distributions. 2. The goal of this work is to track the average/global gradient without incurring additional communication overhead. To achieve this, we communicate and track the model updates, i.e., $x_i^{t+1} - x_i^t$. The term $(x_i^{t+1} - x_i^t)$ can be split into two components - (a) the gradient update $(g_i^t)$ and (b) the correction added through the gossip update $(\sum\limits_{j\in \mathcal{N}(i)}w_{ij}*(x_j^t-x_i^t))$. So by tracking model updates, we are inherently tracking the global gradient, i.e., $\frac{1}{n} \sum\limits_j g_j^t$, and also the global gossip error, i.e., $\frac{1}{n} \sum\limits_j(x_j^t-x_i^t)$. Hence, GUT benefits from tracking both global gradients and the gossip error. The statistical effect of the tracking is shown in Figure
2 of the main paper. We set up a simple gossip averaging task -- each agent is initialized with a random vector $x_i$ and the goal is to compute the average value of $x$, i.e., $\frac{1}{n} \sum\limits_jx_j$, through gossiping with neighbors. Figure 2 plots the consensus error over time with and without model update tracking. We observe that gossip with model update tracking converges (reaches a lower error) faster than simple gossip. Note that $n$ is the total number of agents and $\mathcal{N}(i)$ is the set of neighbors of agent $i$. 3. (a) We use the standard Dirichlet distribution to generate the non-IID data. A visualization of the CIFAR-10 dataset with varying degrees of alpha on a 16-agent ring topology can be found in Figures 1 and 8 of [1]. We will add similar bubble plots for all our experiments in the appendix. (b) The memory overhead for GUT is equivalent to 2x the model size, and the computation overhead comes from the computation of the tracking variable $y_i$. Table 1 below reports the numbers for memory and computational overheads for the memory-efficient implementation of GUT presented as Algorithm 4 in the appendix. Table 1: Communication, memory, and compute overhead incurred per agent during training of various datasets and model architectures for the proposed *GUT* algorithm. Note that the overheads are independent of the graph topology and graph size. |Dataset| Model | Memory Overhead | Compute Overhead | Communication Overhead| |---------|---------|:---------:|:---------:|:---------:| |Fashion MNIST| LeNet-5|0.099|0.275|0.00| |CIFAR-10| ResNet-20|0.016|0.021|0.00| |CIFAR-10|VGG-11|0.138|0.149|0.00| |CIFAR-100|ResNet-20|0.016|0.022|0.00| |ImageNette| MobileNet-V2| 0.005|0.021|0.00| $\text{Memory overhead} = \frac{\text{Additional memory due to GUT}} {\text{Total memory}}$ Memory overhead is reported as the fraction of additional memory required per agent during training with a batch size of 32 per agent. 
The total memory includes the memory required to store the model parameters, activations, gradients, gossip buffer, tracking variable, and weighted neighbors’ parameters. We observe that for compact models such as ResNet and MobileNet, the memory overhead is less than 2%. However, for larger models such as VGG-11, the memory overhead shoots up to 14%. $\text{Compute overhead} = \frac{\text{Additional compute due to GUT}} {\text{Total compute}}$ The computational overhead is reported as the fraction of additional FLOPs required per sample per agent during training. The total compute includes the forward pass, backward pass, model updates, gossip averaging, and tracking variable computation FLOPs. We observe that for compact models such as ResNet and MobileNet, the compute overhead is around 2%. However, for larger models such as VGG-11, the compute overhead shoots up to 15%. We have answered all the questions raised by the reviewer and would be happy to answer any further questions. [1] Lin, T., Karimireddy, S. P., Stich, S. & Jaggi, M. (2021). Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data. Proceedings of the 38th International Conference on Machine Learning. --- Rebuttal 2: Title: After reading the rebuttal Comment: Thanks for the feedback on my comments. After reading the authors' response, I think my concerns were adequately addressed and thus I am inclined to keep my original rating of weak accept.
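For intuition on the Figure 2 consensus experiment discussed in the rebuttal above, here is a minimal sketch of the baseline gossip averaging task only (plain gossip on a ring; the GUT/QGM tracking variants are omitted for brevity). The topology, mixing weights, and vector dimensions are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Hedged sketch of the baseline "gossip averaging" task: each agent on a
# ring starts with a random vector and repeatedly averages with its two
# neighbours via a doubly stochastic mixing matrix. The consensus error
# ||x - mean(x0)|| shrinks geometrically at a rate set by the spectral gap.

n = 16                                    # agents on a ring
rng = np.random.default_rng(0)
x = rng.normal(size=(n, 4))               # one 4-dim vector per agent
target = x.mean(axis=0)                   # the consensus value to reach

# Doubly stochastic ring mixing matrix: 1/2 self-weight, 1/4 per neighbour.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def consensus_error(x):
    return np.linalg.norm(x - target)

err0 = consensus_error(x)
for _ in range(300):
    x = W @ x                             # one gossip round
assert consensus_error(x) < 1e-3 * err0   # near-consensus after 300 rounds
```

Because $W$ is doubly stochastic, each gossip round preserves the global mean while contracting the disagreement, which is why all agents converge to the same average; the tracking variants in the rebuttal aim to accelerate exactly this contraction on sparse graphs.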
Rebuttal 1: Rebuttal: We present the additional results and the quantitative results on the overheads in the attached pdf. We reiterate the contributions of the proposed methodology. Decentralized machine learning on heterogeneous data has poor performance due to huge variations in the local gradients across the agents. Methods such as gradient tracking reduce this variation by tracking the global/averaged gradient at the cost of $2\times$ communication. The goal of this work is to employ tracking mechanisms without any communication overhead. To achieve this, we propose to track and communicate model updates at the cost of an additional memory buffer of model size. Tracking model updates inherently tracks global gradients. We recover the model parameters of the neighbors from the received model updates by maintaining a local copy of the neighbors' model parameters. 1. How does *GUT* recover neighbors' model parameters while communicating only model updates? * Let's consider an $n$-agent ring topology with mixing matrix $W$. Say agent $i$ has neighbors $j,k$. In the DSGD algorithm, agent $i$ receives $w_{ij}x_j^t, w_{ik}x_k^t$ from its neighbors $j,k$ respectively and uses them for the gossip averaging step, i.e., $w_{ii}x_i^t+w_{ij}x_j^t+w_{ik}x_k^t$. However, in the proposed *GUT* method, agent $i$ receives $w_{ij}(x_j^t-x_j^{t-1}), w_{ik}(x_k^t-x_k^{t-1})$ from its neighbors $j,k$ respectively instead of model parameters. We recover the model parameters using the variable $s_i^{t-1}$, which keeps track of the averaged model parameters of the neighborhood, i.e., $s_i^{t}=s_i^{t-1}+w_{ii}(x_i^t-x_i^{t-1})+w_{ij}(x_j^t-x_j^{t-1})+w_{ik}(x_k^t-x_k^{t-1})$. Note that $s_i^0=x_i^0=x_j^0=x_k^0$ as all the models are initialized to the same values at the beginning of training, and $w_{ii}+w_{ij}+w_{ik}=1$ as $W$ is doubly stochastic. By unrolling the recursion, we get $s_i^t=w_{ii}x_i^t+w_{ij}x_j^t+w_{ik}x_k^t$. 2. How does the proposed tracking mechanism work? 
Why does it not require additional communication? * Ideally, we would want $g_i^t = \frac{1}{n}\sum\limits_{j=1}^n g_j^t$ (the global gradient), or the gossip averaging step to be $\frac{1}{n}\sum\limits_{j=1}^n x_j^t$, as if the graph were fully connected. However, decentralized graph structures are sparse and agents only have access to their neighbors' model parameters/gradients. The gradient tracking mechanism introduces a variable $y_i^t$ which tracks the global gradient by correcting the local gradient computation, i.e., $y_i^t = g_i^t + (\sum\limits_{j \in \mathcal{N}(i)}w_{ij}y_j^{t-1} - g_i^{t-1})$. The gradient tracking algorithm uses $y_i$ for the local SGD step rather than $g_i$. Since the gradient tracking algorithm requires the $y_j$'s for the tracking variable update and the $x_j$'s for the gossip averaging, it incurs $2 \times$ communication. In contrast, the proposed GUT algorithm takes advantage of the fact that we can recover the $x_j$'s from the $y_j$'s if we track model updates, i.e., $x_i^t-x_i^{t-1}$, instead of gradients, and hence has only $1 \times$ communication. In any decentralized algorithm, the local model is updated with two components at every iteration -- (a) the local gradient and (b) gossip averaging. For example, the DSGD update: $x_i^t = x_i^{t-1} - \eta g_i^t + \sum\limits_{j \in \mathcal{N}(i)}w_{ij}(x_j^{t-1}-x_i^{t-1})$. Therefore, tracking the model updates inherently tracks global gradients, similar to the gradient tracking algorithm. However, the model updates do have a gossip error term, i.e., $\sum\limits_{j \in \mathcal{N}(i)}w_{ij}(x_j^{t-1}-x_i^{t-1})$, as residual. We apply a reference correction so that the gossip part of the model update tracks the global gossip, i.e., $\frac{1}{n}\sum\limits_{j=1}^n (x_j^{t-1}-x_i^{t-1})$. Therefore, the proposed algorithm successfully tracks global gradients and the residual global gossip by tracking model updates without any communication overhead. We present the memory requirements and communication parameters in Table 1. 
Table 1: The parameters that are communicated and variables that need storage by various decentralized learning algorithms. |Method| Communicate| Storage| |----------|:-------------------:|:--------------:| |DSGD| $x_i$| $x_i, a_i, g_i$| |Gradient Tracking| $x_i, y_i$ | $x_i, a_i, g_i, y_i$| |*GUT* (ours)| $y_i$ | $x_i, a_i, g_i, y_i, s_i$| Where $x_i$ = model parameters at agent $i$, $a_i$ = activations at agent $i$, $g_i$ = gradients at agent $i$, $y_i$ = tracking variable at agent $i$ (which is model updates for GUT), $s_i$ = $\sum\limits_{j \in \mathcal{N}(i)}w_{ij}x_j$ i.e., copy of the averaged model parameters of the neighborhood Note that size($x_i$) = size($y_i$) = size($s_i$). $\mathcal{N}(i)$ = neighbors of agent $i$. We hope this clarifies the contributions and limitations of the proposed *GUT* method. Pdf: /pdf/d6df6cf70aee0d588ecde0708fb360efcf409f6a.pdf
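The recovery argument in point 1 of the global rebuttal above can be checked numerically with a small sketch (illustrative names and dimensions, not the authors' code): starting from a common initialisation, accumulating the weighted model updates into a buffer $s_i$ reproduces the weighted neighborhood average $w_{ii}x_i^t + w_{ij}x_j^t + w_{ik}x_k^t$ exactly, so no raw parameters ever need to be transmitted.

```python
import numpy as np

# Hedged sketch of the GUT recovery argument: if all agents share the same
# initialisation and the weights sum to 1 (doubly stochastic row), agent i
# can reconstruct the weighted average of its neighborhood's parameters
# purely from received model UPDATES accumulated into a local buffer s_i.

rng = np.random.default_rng(0)
d = 5                                     # parameter dimension (toy)
w = {"self": 0.5, "j": 0.25, "k": 0.25}   # one ring row: w_ii + w_ij + w_ik = 1

x0 = rng.normal(size=d)                   # common initialisation
x = {a: x0.copy() for a in ("self", "j", "k")}
s = x0.copy()                             # s_i^0 = x^0

for t in range(10):
    x_prev = {a: v.copy() for a, v in x.items()}
    for a in x:                           # each agent takes some local step
        x[a] = x[a] + rng.normal(size=d)
    # Agent i only ever sees weighted model updates, never x_j itself:
    s = s + sum(w[a] * (x[a] - x_prev[a]) for a in x)

recovered = s                             # built from updates alone
direct = sum(w[a] * x[a] for a in x)      # what DSGD would have received
assert np.allclose(recovered, direct)     # updates suffice to recover it
```

The equality follows by telescoping each agent's updates and using $\sum_a w_a = 1$, which is exactly the unrolled recursion $s_i^t = w_{ii}x_i^t + w_{ij}x_j^t + w_{ik}x_k^t$ given in the rebuttal.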
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
Accept (poster)
Summary: This paper considers the class of problems known as Smart Predict and Optimize (or Decision-Focused Learning), where the learning task consists in learning the parameters of an optimization model given some of their features. The difficulty comes from trying to include the optimization model in the learning pipeline, since it is hard to differentiate over the optimization model. The paper proposes to learn two models: one to learn the objective of the optimization model and one to learn the coefficients of the optimization model. These are trained using an iterative algorithm that applies them in sequence. The paper applies this approach to the traditional benchmarks used in SPO and shows that the approach has benefits overall. Strengths: Algorithm 1 is the main contribution of the paper. It is an elegant way to solve SPO problems in general. Weaknesses: 1. The formalization really gets in the way in this paper. I will try to list these issues here: 1.a The model M which approximates the solver should receive only one set of parameters. I do not understand why you would use c_\theta(y_i) and z_i in (3). It would be better to explain in detail the inputs of the model M. 1.b In (4), the notation M_w(Y,Z;\theta^*) does not make any sense. M_w does not receive \theta^* in Algorithm 1. As a result, the bilevel model (4) does not make sense. 1.c Why is the learning task expressed as $\min_{\theta,w} \| M_w(c_\theta(y)) - f(z) \|$? 2. Algorithm 1 is in fact an ADMM approach to the optimization, as shown above. 3. I do not understand 4.1. The goal of SPO is to find the parameters of the optimization, which is what c_\theta does? How is Model M useful on testing instances and model deployment? It does not compute any solution, just the objective. 4. In 4.2, you mention that you need Z to train M. This is contradictory with 4.1, where you are using it on unseen test instances. You do not have Z on unseen instances.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The benefits on the portfolio optimization are computational only. This area seems to always consider the same three problems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for providing valuable comments. While we partially agree with the summary provided by the reviewer, it is essential to note that LANCER is not only applicable to SPO problems but also to another crucial class of problems: solving mixed-integer nonlinear programming via learning surrogates. We unify learning surrogates for MINLP and SPO within the same learning framework, which is another important contribution that we would like to emphasize. In line with the reviewer's concerns, the formalization of our approach (section 4) and Algorithm 1 require more clarification, which we address below. Moreover, we will incorporate most of these responses/clarifications into the revised version. 1. **Weaknesses** > 1.a. "The model M which approximates the solver should receive only one set of parameters. I do not understand why you would use c_\theta(y_i) and z_i in (3). It would be better to explain in detail the inputs of the model M" It is important to note that our main objective is to approximate the composition function $f \circ \mathbf{g}$ (not just $f$, and NOT the solver $\mathbf{g}$ alone, but the two jointly); please check the main rebuttal for a more detailed explanation. Indeed, evaluating $f \circ \mathbf{g}$ depends on both $c_\theta(y_i)$ and $z_i$. Therefore, to construct an accurate surrogate model $M_w$ that can effectively approximate $f \circ \mathbf{g}$, it is essential for $M_w$ to take into consideration the influence of both inputs. This consideration ensures that the surrogate captures the intricacies of the composition function and produces reliable predictions to aid the optimization process in both the smart P+O and learning surrogates for MINLP settings. > 1.b. "In (4), the notation M_w(Y,Z;\theta^*) does not make any sense. M_w does not receive \theta^* in Algorithm 1. As a result, the bilevel model (4) does not make sense." We apologize if this caused confusion.
The complete and more explicit form of Eqn. 4 should look like this: $\min_{w} \sum_i \| M_w(c_{\theta^*}(y_i), z_i) - f(\mathbf{g}(c_{\theta^*}(y_i)), z_i) \|$ s.t. $\theta^* \in \arg\min_\theta \sum_i M_w(c_\theta(y_i), z_i)$. As mentioned above in 1.a., the surrogate model $M_w$ takes $c_\theta(y_i)$ as one of its inputs. In this context, we are not concerned with making $M_w$ accurate for all possible $c_\theta$; rather, we are interested in finding the configuration of $M_w$ that yields the closest approximation to $f \circ \mathbf{g}$ near $\theta^*$. By adopting the bilevel optimization framework, we can focus on learning an accurate $M_w$ for such $\theta$. In Algorithm 1, we achieve this by first evaluating $f \circ \mathbf{g}$ on the current "optimal" $c_{\theta^*}$, followed by retraining $M_w$ (i.e., the $w$-step). Correspondingly, the $\theta$-step ensures that $c_\theta$ optimizes the current surrogate loss $M_w$. We will make this more explicit in the paper and will add a detailed description. > 1.c. "why is the learning task expressed as min_{\theta,w} | M_w(c_theta(y)) - f(z) |?" Hopefully, our response above (1.b.) also clarifies this matter. Otherwise, please do not hesitate to follow up during the author-reviewer discussion phase. > 2. "Algorithm 1 is an ADMM approach to the optimization as shown above in fact." Although it has a similar flavor, Algorithm 1 is not an ADMM because it does not follow the typical ADMM structure and optimization approach. ADMM is used to solve problems with separable objective functions and constraints. It is based on the idea of splitting the original problem into subproblems that can be solved in parallel, and then iteratively updating the variables using the method of multipliers. The general form of ADMM involves introducing auxiliary variables, Lagrange multipliers, and penalty terms to convert a constrained optimization problem into a series of subproblems that can be solved independently and in parallel.
Clearly, the formulation in eq. (4) and Algorithm 1 do not exhibit this characteristic structure of ADMM. > 3. "I do not understand 4.1. The goal of SPO is to find the parameters of the optimization which is what c_\theta does? How is Model M useful on testing instances and model deployment? It does not compute any solution, just the objective." $M_w$ can be reused when Z = Y, i.e., there is no partial information in the testing instances, such as MINLP. In this case, $M_w$ can be used to evaluate the quality of the target mapping $c_\theta(y)$, and improve $c$ directly (through the $\theta$-step only) without calling the expensive composition $f \circ \mathbf{g}$. This is an optional step that one can perform once Algorithm 1 finishes (see Algorithm 2 in the supplementary materials and refer to lines 143-149 in the main paper). This leads to a significant runtime reduction. We conducted ablation studies in section 5.4 to validate this. > 4. "In 4.2, you mention that you need Z to train M. This is contradictory with 4.1 where you are using it in unseen test instances. You do not have Z on unseen instances." If Z only contains partial information (e.g., P+O) during testing, we may still leverage M for "similar problems", providing computational advantages. For example, after using Algorithm 1 for one dataset of the portfolio selection (PS) problem, we can use the pre-trained M when training on another PS dataset either as a warm start or by executing the $\theta$-step. However, we did not validate this in our experiments, which may encounter potential issues such as distribution shifts and domain differences. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I read the rebuttal. I believe the formalization is probably correct this time. I will check further. There are a lot of mistakes and unclear statements in this paper, as seen in the rebuttal. But I believe that the algorithm is interesting.
--- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review the rebuttal. Regarding the perceived mistakes and unclear statements, we'd like to clarify that these issues mainly stem from notational choices that inadvertently caused misunderstandings rather than fundamental issues. We are sorry for the imperfect presentation. Many of these concerns have been addressed in the rebuttal, and we will update our paper in the next revision. Let us know if you have more questions!
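To make the alternating $w$-step/$\theta$-step discussed in this thread concrete, here is a self-contained 1-D toy (all names, the quadratic surrogate, and the `z = y` relation are illustrative assumptions, not the paper's implementation): the decision loss $f \circ \mathbf{g}$ is a nondifferentiable black box, the surrogate $M_w$ is refit by least squares on a growing buffer ($w$-step), and the target-mapping parameter $\theta$ is then updated against the surrogate ($\theta$-step).

```python
import numpy as np

y = np.linspace(-1.0, 1.0, 21)   # observed features (symmetric grid, mean zero)
z = y.copy()                      # hidden true costs; z = y is a toy assumption

def decision_loss(c, z):
    # Black-box f(g(c), z): the "solver" g buys (d = 1) whenever the predicted
    # cost c is negative; we then pay the true cost z. Piecewise constant in c,
    # so it provides no useful gradients for theta.
    return z * (c < 0)

def feats(c, z):
    # Features of a quadratic surrogate M_w(c, z) = w . feats(c, z).
    return np.stack([np.ones_like(c), c, z, c * z, c ** 2], axis=1)

theta = -1.0                      # deliberately bad target mapping c_theta(y) = theta * y
buf_X, buf_t = [], []             # buffer of (surrogate features, true decision loss)
init_loss = decision_loss(theta * y, z).mean()

for _ in range(5):
    c = theta * y
    buf_X.append(feats(c, z))                     # one expensive query of f∘g per iteration
    buf_t.append(decision_loss(c, z))
    X, t = np.concatenate(buf_X), np.concatenate(buf_t)
    w, *_ = np.linalg.lstsq(X, t, rcond=None)     # w-step: refit the surrogate on the buffer
    if w[4] > 0:                                  # theta-step: minimize mean M_w(theta*y, z);
        theta = -w[3] / (2 * w[4])                # closed form here because mean(y) = 0

final_loss = decision_loss(theta * y, z).mean()
assert theta > 0 and final_loss < init_loss       # surrogate steered theta to a better mapping
```

In this toy the initial $\theta$ makes the solver buy exactly when it should not; fitting the surrogate to the observed losses and then minimizing it flips $\theta$ to the correct sign, which is the mechanism the alternating scheme relies on (here with closed-form steps in place of SGD).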
Summary: This paper proposes a novel framework for learning the predicted value of an optimization task under limited information. Specifically, the goal is to learn the function $\mathcal{M}_w(y, z) = f(g_\theta(y), z)$, where $y$ is the limited information, $z$ is the complete information, $f$ is the objective value of an optimization function to be maximized, and $g_\theta$ is a way to generate a solution to the optimization problem based on the limited information $y$. In essence, there are 2 sets of interdependent parameters to learn -- $\theta$ and $w$ -- and the paper proposes an alternating update scheme for solving the joint problem. They then apply this framework to 2 classes of problems---Smart Predict-then-Optimize (SPO) and Mixed Integer Non-Linear Programming (MINLP)---and show that this method outperforms similar approaches in each class. Strengths: * **Great exposition**: The paper is well written and easy to follow. * **Good experiments**: The paper uses domains from the literature and compares to relevant past work for each problem class. The experiments are well documented and seem reasonably expansive. * **Interesting connection**: I'm more familiar with the SPO literature, so the connection of learned surrogates to MINLP is quite interesting! Weaknesses: The major weakness of this paper, imo, is that it doesn't document the **training considerations** for $\mathcal{M}_w$. The paper mentions in the conclusions that 'one potential drawback is the complexity of tuning M, requiring model selection and training', but there are no experiments about how hard/important tuning these parameters is. Specifically, it would be good to know: 1. Shah et al. [36] highlight the importance of *convex* surrogates for learning loss functions (because they have to be optimized over) and show specifically on a version of the Portfolio Optimization domain that neural networks perform poorly (Table 1 in their paper).
However, in this paper $\mathcal{M}_w$ seems to be a neural network and performs well? Do you not observe this phenomenon of 'convexity being important for loss functions'? 1. There seems to be a recent follow-up to [36] --> [A] which seems to do 1 iteration of your alternating update (with a 2-stage warm start) and seems to do well on the Portfolio Optimization domain, even better than the MDFL method (which seems to do much better than both LODLs and LANCER in this paper). Then: * How robust are your results to hyperparameter choices? * How important is it to perform repeated updates? (Is this what you mean by "epoch" in Figure 5? And if so, how much difference is there between 1 epoch with 10x the samples vs. 10 epochs with 1x the samples?) * Relatedly, how do those results change based on how good the initial guess of $\theta$ is? Do you need fewer updates if you warm-start $\theta$ from a 2-stage solution? 1. Why do you measure cost in terms of the number of calls to the BB optimizer in Figure 5? There seems to be a trade-off between (a) using more calls to the BB optimizer but training a simple model (as in LODLs) vs. (b) using fewer calls to the BB optimizer but training a more complex model. Do the results look very different if you use wall-clock time? _References:_ [A] Shah S, Perrault A, Wilder B, Tambe M. Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize. arXiv preprint arXiv:2305.16830. 2023 May 26. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Apart from the questions in the weaknesses section above, I was wondering: 1. **Choice of domains**: I noticed that you have not used the domains from either SurCo or LODLs on which they report good performance. While this is not a weakness in itself, I can't help but wonder how well LANCER performs on those domains. Should LANCER always be the choice of learning method or is the answer more nuanced? 1.
**Alternating updates**: Is it important to always do full updates in both spaces? Have you considered, perhaps, doing smaller updates? For example, in a recent paper, [B] seems to do some sort of meta-gradient update based on how $\theta$ would change in response to $w$. 1. **Re-using $\mathcal{M}_w$**: The fact that landscape surrogates can be 're-used' seems to suggest that $\mathcal{M}_w$ is learning something that is common across problem instances. Have you tried to analyze what it is that $\mathcal{M}_w$ is learning? From the experiments, I'm convinced that there exist reasonable domains for which it's possible to get this method to work. However, it's not clear at the moment how easy this process is... I'm willing to raise my score to a 7 if the authors provide more detailed answers to the questions in the weaknesses section. _References:_ [B] Sivasubramanian, Durga, et al. "Adaptive Mixing of Auxiliary Losses in Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I appreciate the inherent conflict of interest, but I think the paper could do a better job of documenting the limitations of their approach. For example, if the authors could answer (1) from the questions section above, that would help to better contextualize the advantages and limitations of this approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weaknesses** > 1.a. "Do you not observe this phenomenon of 'convexity being important for loss functions'?" Please check the main rebuttal (points 2 and 3). > 1.b. Regarding the follow-up to LODLs. Thanks for the reference. We were unaware of this work as it appeared after the submission deadline to NeurIPS. As for the results on Portfolio Optimization, there are likely large differences in the data and evaluation. Specifically, in the follow-up paper [A], the 2-stage method performs catastrophically poorly compared to LODLs and the proposed approach, whereas in our experiments we found that 2-stage performed worse but not disastrously so. It is thus unclear whether the relative model performance would apply to both settings. It is also important to note that while our results are slightly worse than MDFL, LANCER clearly performs better in terms of runtime, as shown in Fig. 5 and acknowledged by Reviewer 7fCr. > 1.c. "How robust are your results to hyperparameter choices?" Please see our response to all reviewers above (point 1) for additional experiments we conducted to test the stability of our approach across various hyperparameters and neural network architectures. > 1.d. "How important is it to perform repeated updates? (Is this what you mean by "epoch" in Figure 5? And if so, how much difference is there between 1 epoch with 10x the samples vs. 10 epochs with 1x the samples?)" Yes, in Figure 5, we refer to an epoch as one step of alternating optimization. We find it quite important, and as shown in Figure 5, a larger $T$ (number of updates/iterations) typically leads to better performance. This is also related to our response to Question 2.b. below (making a small number of inner updates while keeping a larger $T$). As for the "1 epoch with 10x samples", this is prone to overfitting. In contrast, alternating optimization helps mitigate overfitting, as the model sees the data multiple times with different parameter configurations.
Additionally, if the replay buffer (Section 4.2) is enabled, then we can naturally (re-)use the target mapping output to enlarge the number of samples. Intuitively, once the number of samples reaches a certain threshold, performing more inner updates could be useful. We leave this type of "adaptive" learning scheme for future work. We empirically observed that performing repeated updates consistently improves the performance (around 1.5-2x in accuracy). > 1.e. "Relatedly, how do those results change based on how good the initial guess of $\theta$ is?" The quality of the initial guess (e.g., 2-stage) affects optimization results, with a good guess leading to faster convergence and better outcomes. For example, in the nonlinear shortest path problem, random initialization needs 1.5x-2x more iterations to get the same performance. > 1.f. "Why do you measure cost in terms of the number of calls to the BB optimizer in Figure 5?" In all our experiments, the number of black-box (BB) accesses directly correlates with the runtime, with the $w$-step significantly dominating the $\theta$-step (see Section 1 in the supplementary materials). This is especially true when solving high-dimensional combinatorial problems (e.g., Multidim knapsack, MINLP Portfolio Selection), where even highly optimized solvers (e.g., Gurobi) take time to solve a single instance. On this note, LODLs also take a long time during the sampling stage, where multiple calls to the optimizer occur. Additionally, a number of papers from the literature (e.g., [16]) report the number of BB calls as one of the main metrics. 2. **Questions** > 2.a. "Choice of domains: I noticed that you have not used the domains from either SurCo or LODLs on which they report good performance..." While we appreciate your observation, it's important to note that our focus was primarily on relatively high-dimensional problems, and we aimed to cover both domains of learning surrogates for MINLP and DFL.
Running all experiments from both the SurCo and LODLs papers proved challenging due to the extensive scope. However, it's worth mentioning that both papers reported good results on the benchmarks we tried. For example, SurCo shows better or similar performance in some settings of the nonlinear shortest path problem. > 2.b. "Alternating updates: Is it important to always do full updates in both spaces? Have you considered, perhaps, doing smaller updates? ..." In our approach, we perform a fixed (smaller) number of updates, i.e., we do not exactly solve the minimization in both spaces (lines 10 and 12 in Algorithm 1). For example, most experiments use 10-20 updates per global iteration. One reason is that we do not want the $\mathcal M_w$ model to overfit to the data; instead, we increase $T$. Although we discuss this in the experimental setup, this will be made clear in the description of the algorithm to avoid any ambiguity. > 2.c. "Re-using $\mathcal M$. ... Have you tried to analyze what it is that is learning?" Indeed, the reuse of landscape surrogates suggests that the learning process captures common information shared across problem instances. While we haven't explicitly analyzed what the learning process captures, investigating the nature of the learned information could be an intriguing direction for further research. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the clarifications. I'm not convinced that the landscape surrogate is doing anything significantly different from other surrogate losses at a high level (as long as $\mathbf{g^*}$ in LODLs and $\mathbf{g}$ in this paper are the same), but at the same time I think this paper addresses some key shortcomings of learning surrogate losses (sample efficiency and heuristic sampling). The experiments are quite thorough and seem to clearly show that the proposed improvements improve on past work. I think this paper is a very useful addition to the literature. I raise my score to 7.
Summary: This paper presents a unified framework for "predict-then-optimize" and surrogate cost learning for Mixed Integer Nonlinear Programming (MINLP). These problems are cast as learning an optimizer $g$ with $f$ as the objective. Current solutions either suffer from scalability issues or from the sparse gradient problem. To overcome these, the authors propose learning a smooth and tractable landscape surrogate to replace the compound function $f\circ g$. A neural network parameterizes the surrogate loss, and it is learned through alternating optimization. This is done by alternately optimizing the target model $c$ and the landscape surrogate $\mathcal{M}$ in a manner similar to Generative Adversarial Networks (GANs). Experiments covering both linear and non-linear objectives demonstrate the efficacy of the proposed method under both "predict-then-optimize" and surrogate learning settings. Strengths: The paper proposes the first unified framework for "predict-then-optimize" and surrogate learning. The method is straightforward, and empirical results appear promising in terms of both optimization performance and runtime. Weaknesses: Directly utilizing a neural network to parameterize the landscape may not be a good idea. The complexity of a neural network typically surpasses that of the original objective $f$. A neural network can have an extremely large number of local minima, and this approach could disrupt the convexity of the original optimization problem, leading to instability in the proposed method's learning procedure. In fact, in Shah et al., several parameterizations, including neural networks, were tested and found to often result in catastrophic outcomes. There's concern about this parameterization across various optimization objectives. While SurCo minimizes the original objective $f$, LANCER learns an additional surrogate loss. However, in experiment 5.2.2, LANCER significantly outperforms SurCo.
The authors should provide more analysis and clarification on why employing such a surrogate loss can surpass the original loss. The paper lacks important implementation details. In 4.1, the authors mentioned the possibility of executing the $\theta$-step during testing since $c_\theta$ is available for unseen test data. It is unclear if this step was used during testing in the experiments. The supplementary material also does not provide details about the number of training iterations for this $\theta$-step during testing. If this additional optimization step was used during testing, performance without this step should also be reported to pinpoint the source of the gains compared to SurCo. In fact, SurCo can also have this extra optimization step for the unseen test data. Furthermore, the paper does not provide any implementation details for the replay buffer trick discussed in 4.2, such as the number of points needed to be stored, which could significantly impact the training time. The unification of the "predict+optimization" problem and the surrogate cost learning problem under optimization with partial information feels forced, as half of the paper focuses on surrogate objective learning for MINLP, where full information is available. Section 4 could benefit from restructuring. The main introduction of the proposed method precedes 4.1 and 4.2 and lacks a subsection title, while 4.1 and 4.2 focus on implementation details. Minor: Line 285: MDFL was proposed for combinatorial decision-focused learning and it is not the first DFL paper. A more accurate citation would be Donti et al. (2017). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Have different parameterizations of $\mathcal{M}$ been tested, and do they significantly affect performance? In 4.1, the paper says we can execute the $\theta$-step at testing time since $c_\theta$ is available for unseen test data. Did you try this step in your experiments?
I did not find the number of training iterations of this $\theta$-step at testing time in the supplementary material. Was the replay buffer trick, as described in 4.2, used, and how many points needed to be stored? This could significantly impact training time, but no implementation details are given in the experimental section or the supplementary material. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to the Weaknesses section for a detailed discussion of the paper's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weaknesses** > 1.a. On utilizing a neural network to parameterize the landscape, and comparison with LODLs. Please check our response to all reviewers above (points 2 and 3). > 1.b. Why does LANCER outperform SurCo? **LANCER allows a nondifferentiable objective $f$**. SurCo relies on (approximately) differentiating through the solver, which can be challenging and slow in combinatorial problems due to sparsity, while LANCER circumvents this issue by constructing a surrogate landscape, $M$, that models the composition of the nonlinear objective $f$ *and* the solver $\mathbf{g}$. $M$ is differentiable and thus provides dense gradients and facilitates faster learning. **LANCER may call the solver $\mathbf{g}$ fewer times**. SurCo requires that the solver $\mathbf{g}$ be called in each iteration, which can be time-consuming, while LANCER can leverage the surrogate landscape $\mathcal M_w$ to bypass the solver $\mathbf{g}$, saving computational cost. The pros and cons of both approaches are thoroughly discussed in sections 3 and 4. > 1.c. Implementation details for section 4.1. We apologize for any confusion. The implementation of section 4.1 simply involves executing the $\theta$-step for unseen data once Algorithm 1 terminates. This is especially useful when Z = Y, i.e., there is no partial information in the testing instances, such as MINLP (see the LANCER-zero pseudocode in the suppl. mat.). In this case, $M_w$ can be used to evaluate the quality of the target mapping $c_\theta(\mathbf{y})$, and improve $c$ directly (through the $\theta$-step only) without calling the expensive composition $f \circ \mathbf{g}$. This is an optional step that one can perform once Algorithm 1 finishes. As for the experiments, we found that this can lead to a significant runtime reduction. We conducted ablation studies in section 5.4 to validate this. > 1.d. Implementation details of the replay buffer and the effect on runtime.
We primarily employed the replay buffer for the MINLP experiments, and it showcased noticeable improvements in performance. We set the number of points to be added to the buffer to $N$, so the total/max size of the replay buffer progressively reaches $TN$ by the termination of the algorithm, where $T$ is typically between 5 and 40. Of course, this could be controlled by setting the max buffer size if $N$ is large. This is easy to implement, as insert/retrieve operations on a buffer take constant time. Additionally, we will make our implementation open source to allow full reproducibility. It is crucial to note that the use of the replay buffer has only a marginal effect on the runtime and is barely noticeable. The primary purpose of the replay buffer is to store and reuse past experiences to break temporal correlations and stabilize training. This process is relatively lightweight compared to other computationally intensive parts of the algorithm, such as interacting with the black-box solvers. The runtime increase only appears at line 10 of Algorithm 1, which involves training neural networks that scale linearly with the dataset size. Fortunately, this process is straightforward to parallelize, thanks to the capabilities offered by modern frameworks. > 1.e. "The unification of the P+O and surrogate cost learning feels forced..." We respectfully disagree with this statement. Our approach equally addresses both problems, each of which introduces uncertainty to the optimization problem. Neither of them can be straightforwardly solved in their general formulations. In the case of P+O, the problem descriptors are unknown and must be inferred from the observed input $\mathbf{y}$. On the other hand, when learning surrogates for MINLP, the cost vector of the surrogate problem is unknown. We formulate both of these uncertainties as learning problems and propose LANCER as a unified algorithm to address them.
The unification is crucial to derive LANCER and successfully apply it to both of these challenging problems without substantially changing its framework. 2. **Questions** > 2.a. Have different parameterizations of $\mathcal M$ been tested, and do they significantly affect performance? Please see point 3 in the main response. > 2.b. Experiments with regard to section 4.1. The ablation study in section 5.4 is specifically dedicated to this purpose. Table 3 presents the results, clearly demonstrating the benefits of the "transferability/reusability" of $\mathcal M_w$. > 2.c. Was the replay buffer trick, as described in 4.2, used? Please see our response 1.d. in Weaknesses. --- Rebuttal Comment 1.1: Comment: Dear Reviewer kDSQ, I hope our responses have resolved your concerns. We have diligently worked to address the points you raised and believe these revisions strengthen the overall quality of the paper. Your feedback has been invaluable in refining our work. If you find the revisions align well with the paper's objectives and address your initial concerns, we are hopeful that an adjustment in the score could reflect these improvements. Please feel free to ask if you have more questions or if there's anything else we can provide to support your evaluation. Thank you! --- Rebuttal Comment 1.2: Title: Thanks for the rebuttal Comment: Thanks for the detailed clarification. I have also read the other reviews. I will adjust my score accordingly.
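The buffer bookkeeping described in response 1.d above can be sketched in a few lines ($N$, $T$, and the cap are illustrative values, not those from the experiments): each outer iteration appends $N$ new samples, and a `deque` with `maxlen` gives the constant-time insert/evict behavior mentioned in the rebuttal.

```python
from collections import deque

N, T, max_size = 100, 8, 500
buffer = deque(maxlen=max_size)   # capped replay buffer; evicting old samples is O(1)

for t in range(T):
    # Each outer iteration of the alternating optimization contributes N new
    # (target-mapping output, decision-loss) pairs; placeholder tuples stand in here.
    buffer.extend((t, i) for i in range(N))

# Unbounded growth would reach T * N samples; the cap bounds memory at max_size.
assert len(buffer) == min(T * N, max_size)
```

Setting `maxlen` keeps the oldest samples rotating out automatically, so the buffer's memory footprint stays bounded even when $TN$ grows large.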
Summary: This paper is concerned with an amortized optimization scheme for challenging variants of canonical decision problems such as MINLPs and nonlinear portfolio selection. The authors propose a method with two components: a target mapping $c_\theta$ that maps partially observed problem descriptions $\mathbf y$ to full descriptions $\mathbf z$ that are passed to a traditional solver $g$ (e.g. an MILP solver), and a landscape estimator $M_w$ (i.e. a value function) that estimates the objective value attained by $g(c_\theta(\mathbf y))$ w.r.t. some distribution of training instances. Together, $g(c_\theta(\mathbf y))$ can be seen as a policy mapping problem descriptions to optimal decisions. This is an empirical paper, with proof-of-concept results on several variants of canonical decision problems. Strengths: Amortized optimization is a common technique across machine learning. For example, LANCER bears striking similarity to actor-critic methods for RL, which amortize the evaluation of an actor (i.e. policy) $\pi_\theta$ into a critic (i.e. value estimate) $Q_w$; likewise, the policy amortizes the optimization problem $a^*(s) = \mathrm{argmax}_a Q_w(s, a)$. The main difference is that the objective values in RL (i.e. the policy returns) are not directly observed, and so $Q_w$ is trained by fitted Q-iteration (i.e. the Bellman backup operator) instead of a supervised loss as proposed in this paper. While the technique proposed in this paper is not new, it is the first time I have seen it combined with traditional optimization solvers like MILPs, which are very underutilized by the ML community. Hence the concept of the paper is very intuitive, and has as much potential for impact as RL papers, if not more; RL papers are routinely published in NeurIPS proceedings but as yet do not appear to have had significant impact on industrial optimization problems.
Weaknesses: Like many other solutions to bilevel optimization problems (including actor-critic algorithms), LANCER is not guaranteed to converge w.r.t. $\mathbf w$ or $\mathbf \theta$. Although it is not discussed in the paper, I suspect that LANCER is likely unstable if the hyperparameters are not chosen carefully. If this is not the case, please provide evidence in the rebuttal. The connections to other amortized optimization methods in ML generally are not a weakness in my opinion; however, I think the related work and discussion could be greatly improved by placing this work in context with other methods like actor-critic RL algorithms and amortized Bayesian inference algorithms, to name a few. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How sensitive is LANCER to hyperparameters in terms of performance and stability? - Do you think it may be necessary to regularize $c_\theta$ to better explore the space of possible target mappings (similar to entropy-regularized policies in RL, e.g. soft actor-critic)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - While LANCER appears more efficient in terms of black-box sample efficiency, it does seem a bit slower than the baselines (although it does find better solutions). Hence in latency-sensitive applications it may not be the best choice. - Like offline RL, the generalization of LANCER to unseen problems will depend heavily on the support of that particular problem configuration in the training set. For out-of-distribution problems LANCER's performance is likely greatly diminished. Determining LANCER's sensitivity to distribution shift is an interesting research question.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Weaknesses** > 1.a. Convergence guarantees and stability We acknowledge that, at present, we lack theoretical guarantees on convergence for LANCER. However, we observe empirically that the objective improves as the total number of alternating optimization iterations (T) increases, as shown in Figure 4. Also, please see our response to all reviewers above (point 1) for additional experiments we conducted to test the stability of our approach across various hyperparameters and neural network architectures. > 1.b. Connections to other amortized optimization methods We would like to extend our appreciation to the reviewer for highlighting the insightful parallels between the actor-critic framework and our LANCER methodology. Although we provide a review of several amortized methods in the context of smart P+O and MINLP (e.g. [13,16,36]), we agree that enhancing the related work and discussion by placing our work in the context of actor-critic RL algorithms and amortized Bayesian inference algorithms would be a valuable addition to the paper. We will certainly take your suggestion into consideration and make the necessary revisions to provide a more comprehensive understanding of the contributions and implications of our proposed approach in relation to these other methods. &nbsp; 2. **Questions** > 2.a. How sensitive is LANCER to hyperparameters in terms of performance and stability? See our response in the Weaknesses section (1.a.) above. &nbsp; > 2.b. "Do you think it may be necessary to regularize to better explore the space of possible target mappings (similar to entropy-regularized policies in RL e.g. soft actor-critic)?" In our experiments, we have chosen to utilize simple weight decay as a means of regularizing target mappings. Other regularization approaches could also apply, e.g. dropout, sparsity, and/or networks with fewer parameters.
Note that entropy-based regularizations are more straightforward to apply when working with probability distributions, which do not naturally arise in the context of our study, where problems in P+O & MINLP have a deterministic nature. &nbsp; 3. **Limitations** > 3.a. While LANCER appears more efficient in terms of black-box sample efficiency, it does seem a bit slower than the baselines (although it does find better solutions). Hence in latency-sensitive applications it may not be the best choice In all our experiments, the number of black-box (BB) accesses directly correlates with the runtime, with the w-step significantly dominating the $\theta$-step (see Section 1 in the supplementary materials). As indicated in Figure 5, LANCER consistently exhibits much faster runtime compared to the baseline methods. The only exception is shown in Figure 4, where we intentionally use more requests to the BB solver to explore a larger number of iterations for evaluation purposes. Moreover, the w-step in LANCER can be easily parallelized, offering the potential for significant improvements in runtime efficiency. We are confident that this parallelization capability can further enhance the overall performance of our approach. > 3.b Out-of-distribution performance... On the portfolio optimization instances, the test data is quite out of distribution from the training data, as the data are split temporally. In the financial setting, the data distribution is considered to change wildly over time. Our performance on this domain seems to suggest that the models are able to generalize to unseen data that are dissimilar to the training data. We leave more thorough studies for future work. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thanks for the thorough response! While there are areas for improvement that could likely increase the impact of the paper, I remain very supportive of acceptance.
I'm very happy to see that your method does not appear too sensitive to hyperparameters. I really believe the practical impact of this paper could be significant. After reading some of the responses to the other reviewers, I think a challenge you are facing is the way new methods tend to be held to a bit of a double standard compared to existing methods. Virtually all the criticisms you are facing also apply to every deep RL paper, and yet dozens of RL papers are happily accepted to major ML conferences every year. As you reflect on the relationship of your work to the field of ML as a whole, you may find this tutorial on amortized optimization [1] provides a helpful perspective. You cite other papers by the same author several times in your paper, so you may already be aware of it. In any case, I hope you find the exercise productive. My comment on the runtime of LANCER was primarily based on Figure 4, which seems to indicate LANCER has the longest runtime of the methods considered. How do you explain this apparent inconsistency? [1] Amos, Brandon. "Tutorial on amortized optimization." Foundations and Trends® in Machine Learning 16.5 (2023): 592-732. https://arxiv.org/abs/2202.00665 --- Reply to Comment 1.1.1: Title: LANCER runtime clarification Comment: Thank you for dedicating time to review the rebuttal and for contributing additional relevant research! > My comment on the runtime of LANCER was primarily based on Figure 4, which seems to indicate LANCER has the longest runtime of the methods considered. How do you explain this apparent inconsistency? The table below expands Figure 4, providing a more comprehensive comparison of LANCER vs. SurCo (the top-performing methods) via increased SurCo iterations. Despite LANCER's longer runtime, its objective consistently improves, unlike SurCo, which plateaus. Thus, extending LANCER's runtime is justifiable.
In this specific Figure 4 experiment, the observed runtime difference in LANCER is attributed to LANCER-zero relying on requisite sampling for surrogate loss training (Algorithm 2 in supplementary materials), leading to more black-box requests and increased runtime (which can be parallelized). This acknowledges LANCER's variable performance, including cases of extended runtime.

| LANCER iterations | LANCER objective | LANCER runtime | SurCo iterations | SurCo objective | SurCo runtime |
|---:|---:|---:|---:|---:|---:|
| 10 | -0.0429 | 81.12 | 10 | -0.0268 | 10.12 |
| 20 | -0.0611 | 124.90 | 20 | -0.0373 | 18.25 |
| 30 | -0.1000 | 220.05 | 100 | -0.0421 | 78.48 |
| 40 | -0.1233 | 517.74 | 500 | -0.0423 | 253.30 |
| 100 | -0.1609 | 2012.37 | 1000 | -0.0425 | 589.12 |
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for providing valuable comments and taking the time to review our paper. Here we address questions raised by several reviewers. > 1. [1xim,kDSQ,Nect] **Stability of LANCER to hyperparameters and NN architecture** This is an empirical question and depends on various factors, as is typical in model selection. Nevertheless, we have observed that LANCER generally exhibits robustness in many cases. As evidence, we performed additional experiments on the problem from section 5.1.2. In the two tables below we demonstrate that the final objective does not fluctuate significantly when we experiment with different combinations of hyperparameters and variations of the NN architecture. We leave more extensive ablation studies for future work. Also, see point 3 below on using quadratic losses.

| M_lrn_rate | M_max_itr | c_lrn_rate | c_max_itr | Objective |
|---:|---:|---:|---:|---:|
| 0.0005 | 10 | 0.001 | 10 | 0.4651 |
| 0.001 | 10 | 0.001 | 10 | 0.4649 |
| 0.01 | 10 | 0.001 | 10 | 0.4650 |
| 0.0005 | 5 | 0.001 | 10 | 0.4590 |
| 0.001 | 5 | 0.001 | 10 | 0.4597 |
| 0.01 | 5 | 0.001 | 10 | 0.4651 |
| 0.0005 | 20 | 0.001 | 10 | 0.4651 |
| 0.001 | 20 | 0.001 | 10 | 0.4612 |
| 0.01 | 20 | 0.001 | 10 | 0.4651 |
| 0.001 | 10 | 0.0005 | 10 | 0.4612 |
| 0.001 | 10 | 0.01 | 10 | 0.4650 |
| 0.001 | 10 | 0.001 | 20 | 0.4648 |

| M_num_of_hidden_layers | M_layer_size | Objective |
|---:|---:|---:|
| 1 | 50 | 0.4644 |
| 1 | 100 | 0.4646 |
| 1 | 200 | 0.4641 |
| 2 | 50 | 0.4646 |
| 2 | 100 | 0.4651 |
| 2 | 200 | 0.4651 |
| 3 | 50 | 0.4644 |
| 3 | 100 | 0.4650 |
| 3 | 200 | 0.4642 |

&nbsp; > 2. [kDSQ,Nect] **Concerns on the form of landscape surrogate $\mathcal M$. Comparison between LANCER and other surrogate losses in P+O** We acknowledge that there is indeed similarity between the Decision Loss (DL) in [36], the Decision Quality (DQ) in [A], and our landscape surrogate $\mathcal M_w$.
Using our notation, the DL/DQ consider a loss of the form $\mathrm{DL}(\mathbf{\hat z}, \mathbf z) := f(\mathbf g^*(\mathbf{\hat z}(\mathbf y)), \mathbf z)$, where $\mathbf z$ is the ground-truth specification of the problem instance, $\mathbf{\hat z}$ is its estimate, $\mathbf y$ are the features, and $\mathbf g^*$ is the ground-truth solver, and propose to use a surrogate loss $L_\phi$ to fit $\mathrm{DL}(\mathbf{\hat z}, \mathbf z)$. In contrast, our landscape surrogate $\mathcal M_w(\mathbf c, \mathbf z)$ is used to fit a loss of the form $f(\mathbf g(\mathbf c(\mathbf y)), \mathbf z)$. Note that LANCER focuses on the quality of the final objective directly, rather than aiming for a good estimate of $\mathbf z$ first. Therefore, there are a few key differences: + We never aim to predict a good estimate of $\mathbf z$, so our loss cannot be written in the form of $\mathrm{DL}(\mathbf{\hat z}, \mathbf z)$ and thus does not have the same structure as the DL/DQ loss. In fact, from the function arguments, it is clear that $\mathcal M_w(\mathbf c,\mathbf z)$ is not a symmetric function. This can be advantageous, since what we want is the solution to $f$, and getting a good estimate of $\mathbf z$ is extra work. + We use a target mapping $\mathbf c(\mathbf y)$ to predict the surrogate cost for a cheap surrogate solver $\mathbf g$ to produce a solution. Therefore, $\mathbf g$ may be very different from the original ground-truth solver, which can be very slow or impossible to run (as in the case of MINLP), and our landscape surrogate $\mathcal M_w(\mathbf {c,z})$ needs to cope with that. + LODLs are trained per-instance on points sampled around the true labels, which may not be representative of the region that the target model passes through during training. Furthermore, in such a tight region it makes sense to use a simple convex model rather than a more expressive neural network.
However, when trying to approximate the broad region that our target model traverses during training *across* a diverse training set (Algorithm 1, line 5), it is likely that a more expressive model is needed. Therefore, in our case fitting it with a NN is reasonable and has further benefits: - Expressiveness: NNs are capable of approximating complex functions, making them well-suited for approximating the value or Q-functions in actor-critic RL algorithms, where they are widely used. Similarly, in our setting, $f \circ \mathbf{g}$ can exhibit complex behavior, especially in high-dimensional problems. For instance, we observe such behavior in the P+O multidimensional knapsack and MINLP portfolio selection problems. - Scalability: we mainly focused on relatively high-dimensional optimization problems, and NNs can efficiently handle such large-scale tasks, enabling effective learning in these challenging environments. [A] Shah et al. Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize. arXiv:2305.16830. > 3. **Other alternative landscape losses we tested** Finally, we would like to emphasize that our method is not specifically tailored to neural nets and is more generic: any differentiable function $\mathcal M$ could be used. We conducted experiments with convex functions in several instances. For instance, in the nonlinear shortest path problem (section 5.1.2), the quadratic model yields an objective value of 0.412 (i.e., worse than a greedy approach). In contrast, the results obtained using NNs are close to the optimum, with an objective value of 0.464.
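The expressiveness point above can be seen in a toy experiment (illustrative only, not the paper's setup): a quadratic least-squares surrogate cannot track a multimodal landscape that even a crude random-feature stand-in for a small NN fits closely. The landscape `f`, the feature counts, and the scales below are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy nonconvex "landscape" f(c) that a quadratic surrogate cannot capture
c = rng.uniform(-3, 3, size=(500, 1))
f = np.sin(2 * c[:, 0]) + 0.1 * c[:, 0] ** 2

def ls_fit(Phi, y):                     # least-squares fit, returning predictions
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ w

# convex/quadratic surrogate: features [1, c, c^2]
quad_pred = ls_fit(np.hstack([np.ones_like(c), c, c**2]), f)
# random-feature "network": fixed random tanh hidden layer + linear readout
H = np.tanh(c @ rng.normal(size=(1, 64)) + rng.normal(size=64))
nn_pred = ls_fit(np.hstack([np.ones((500, 1)), H]), f)

mse = lambda p: float(np.mean((p - f) ** 2))
print(mse(quad_pred), mse(nn_pred))     # the expressive model fits far more closely
```

The gap mirrors the rebuttal's observation: when the landscape is multimodal, a convex surrogate systematically underfits, while even a modestly expressive nonlinear model tracks it.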
NeurIPS_2023_submissions_huggingface
2023
Diffusion Schrödinger Bridge Matching
Accept (poster)
Summary: This paper formulates a new probabilistic modeling framework called Diffusion Schrödinger Bridge Matching (DSBM) together with an iterative algorithm called iterative Markovian fitting (IMF). First, the paper contributes by pointing out a relationship between well-known generative models (score matching and flow matching) within the DSBM framework. IMF is also novel and may be better suited to training with neural networks than well-established algorithms such as IPF. To achieve this, the authors utilize a tractable bridge structure (the Brownian bridge), which preserves key properties, i.e., the marginal densities at the start and end of the process. The proposed algorithm often shows better generation results in certain tasks, such as image transfer. Strengths: 1. The paper is well written despite dealing with a complex subject. As far as I understand, most of the arguments seem to be theoretically sound. 2. SB algorithms have suffered from numerical errors and computational complexity. Since the IMF approach is akin to standard matching algorithms for neural networks, the provided empirical results show that it scales well. 3. The paper provides a novel perspective on the diffusion bridge problem with a novel algorithm. As a result, DSBM-IMF shows promising results in certain tasks. Weaknesses: 1. Significance. In Tables 2 and 3 and Figure 10, it is difficult for me to observe that DSBM-IMF excels over DSBM-IPF. Therefore, I would say the significance of the proposed algorithm is not clearly shown in the experiments. The paper needs more empirical evidence of the actual benefits of choosing this algorithm, such as performance boosts and scalability, in more challenging or large-scale settings. 2. Limited applications. I believe the actual implementation of the algorithm heavily relies on the well-known Brownian bridge.
Although I think this is actually a valid constraint for certain applications such as image generation, at the same time it means that the algorithm is not always applicable to general optimal transport problems. Therefore, I recommend putting more effort into analyzing more classes of $\mathbb{Q}$ in the experiments (if there are any), or giving a clear justification for the classes of $\mathbb{Q}$ used in the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is $f_t$ predefined in all tasks? 2. Is $\sigma_t$ constant and **not** scheduled? The authors of [1] mentioned time-symmetrically scheduling the diffusion term, as suggested by prior SB models. 3. Could you explain how the IMF scheme is fundamentally better than IPF algorithms? [1] Guan-Horng Liu et al. (2023), I2SB: Image-to-Image Schrödinger Bridge. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Although I think the overall paper is good, I also think the proposed experimental results are not quite sufficient; the paper needs more empirical evidence to show the excellence of the model. I hope to see a response from the authors on this point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and their positive evaluation. We appreciate their interest in our work. We address here the reviewer's concerns raised in the review. **“Difficult to observe DSBM-IMF excels over DSBM-IPF ...”**: We would first like to clarify that DSBM-IPF is also a novel algorithm proposed in our work. While DSBM-IPF is based on the classical IPF theory, it is different from previous diffusion-based algorithms based on IPF, such as DSB, which we compare to in the paper. Comparing DSBM-IPF with DSB, our proposed methodology achieves significantly better results; see Figure 5. On the other hand, comparing DSBM-IPF with DSBM-IMF, they exhibit a comparatively minor difference, as you mentioned. Algorithmically, the two DSBM methods only differ in the initialization of $\Pi^0$ in the first iteration. To summarize, DSBM-IMF and DSBM-IPF are both original contributions of the paper (differing mostly in their theoretical foundations while being very close in practice). **“Actual implementation of the algorithm heavily relies on the well-known Brownian bridge ...”**: We thank the reviewer for this comment. We refer to the "Choice of diffusion bridge" section in our response to all reviewers for a detailed response. **“More empirical evidence”**: We have demonstrated in our paper the usefulness of our proposed method in a number of high-dimensional tasks, such as CelebA ($128 \times 128$) and fluid flow downscaling ($512 \times 512$). We have additionally performed an experiment on the AFHQ dataset at $512 \times 512$ resolution using the unmodified DSBM-IMF algorithm, and have included it in the one-page response PDF. Additionally, we note that the scalability of our method is comparable to that of bridge matching, as our method can be viewed as a refinement of it.
**“Is $f_t$ predefined ...?”**: Yes, the reference process $\mathbb{Q}$ has a similar role as the “forward noising process” in standard diffusion models. Two standard choices of $\mathbb{Q}$ include the Brownian motion for which $f_t(x_t)=0$, and the Ornstein-Uhlenbeck (OU) process for which $f_t(x_t)=-\frac{1}{2}x_t$. **“Is $\sigma_t$ constant and not scheduled? ...”**: We have used a constant schedule for the diffusion coefficient $\sigma_t$ in our work. We have experimented with a symmetric schedule of integration step sizes similar to the related works you mention, but did not observe any improvement. We agree that the schedules for both the noising term and the integration steps may be important directions to study for future work. **“Could you explain how the IMF scheme is fundamentally better than IPF algorithms?”**: In short, the IMF scheme always preserves $\pi_0, \pi_T$ as the marginals at the initial and final times, whereas the IPF scheme only reaches the SB solution $\mathbb{P}^\star$ which satisfies both marginal constraints in the limit. For extra details, we refer to the “Relationship between IPF and IMF” section in our response to all reviewers.
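The tractability that these answers rely on can be checked numerically: for the reference process $\mathbb{Q}=\sigma\mathbf{B}_t$ discussed above, the Brownian bridge pinned at $x_0$ and $x_T$ is Gaussian at time $t$ with mean $(1-t/T)x_0+(t/T)x_T$ and variance $\sigma^2 t(T-t)/T$, which is what makes simulation-free sampling of $\mathbf{X}_t$ possible. A minimal numpy sketch (all numerical values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, t, sigma = 1.0, 0.3, 0.7
x0, xT = 0.0, 2.0                      # pinned endpoints of the bridge
n = 200_000

# sample the Brownian-bridge marginal at time t directly (no SDE simulation)
z = rng.normal(size=n)
xt = (1 - t / T) * x0 + (t / T) * xT + sigma * np.sqrt(t * (T - t) / T) * z

# the forward/backward regression targets used by bridge-matching-style losses
target_fwd = (xT - xt) / (T - t)
target_bwd = (x0 - xt) / t

# closed-form Gaussian statistics of the bridge marginal
mean_th = (1 - t / T) * x0 + (t / T) * xT
var_th = sigma**2 * t * (T - t) / T
print(xt.mean(), mean_th, xt.var(), var_th)
```

Matching these empirical and closed-form moments is the basic sanity check behind training on $\mathbf{X}_t$ samples without simulating the full reference SDE.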
Summary: Flow matching and alpha blending have attracted tremendous attention in matching problems. Although the straight trajectories based on $X_t = (1-\alpha)X_0+\alpha X_1$ yield **fast inference**, this does not necessarily mean they are efficient for score estimation during training. To make training theoretically more efficient, rectified flow leverages optimal-transport ideas to minimize the transport cost. However, a stochastic alternative is still lacking (the stochastic interpolant may not be optimal in terms of optimal transport). To tackle this issue, the authors propose a stochastic version of flow matching/alpha blending with entropic optimal transport. A novel scheme, IMF, that resembles the IPF iteration is proposed. Strengths: 1. Despite the advances of stochastic interpolants, the transport between marginals may not be optimal in general. An **optimal-transport**-based stochastic version of flow matching and alpha blending is **still missing** in the community. 2. In my understanding, the proposal of the IMF algorithm is the key/most important contribution of this paper. 3. Solid experiments and comparisons are presented, which verify the promising potential of DSBM. 4. Connections to flow matching, rectified flow, and the stochastic interpolant are studied, which makes it easier for readers to understand the benefits of this paper. Weaknesses: 1. A clearer distinction between IPF and IMF would facilitate understanding. IPF is based on the forward KL; IMF is based on the backward KL. This is fine mathematically, but more detailed information is lacking. For example, if we solely check Proposition 2, we can derive almost the same loss based on the forward KL (which leads to the IPF algorithm), except that the underlying measure is not based on $\Pi_{0, t}$. So **clarifying the real benefit/advantages of simulating $\Pi_{0, t}$ to train the loss in Proposition 2** would address our concerns. 2. Algorithm 1 is not clear enough.
The authors may consider providing more details on the learning of $\upsilon_{\phi^{\star}}$ in Alg 1. For example, in the training of alpha blending (lines 60-64 in https://github.com/tchambon/IADB/blob/main/iadb.py), given some samples of $x_1$ and $x_0$, very simple code is used to train the models. However, when it comes to DSBM, how is it implemented? Pseudocode would work if it clarifies our concerns. 3. My concern is that this algorithm may be quite expensive. Minor 1. Missing one related work: a discussion of the accumulation of IPF errors based on general optimal-transport cost functions, studied in [1], would be appreciated. [1] Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. ICML'23. 2. The writing of the equation between line 637 and line 638 is odd, because given $X_t=x_t$ and $x_T$, what is the randomness there and why do we need an expectation w.r.t. $\Pi_{T|t}$? 3. The derivation of the equation after line 113 is not clear to me. Why $\mathbb{E}_{\Pi_{0, T}}[\|X_0-X_T\|^2]$? Is there any reference for that? 4. In the first part of section 4 in the main paper, the need for the alternating schemes is still not well motivated. 5. What is IMF-b? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. DDMs can be seen as the first iteration of DSBM, which implies that we can use DDM model weights to initialize the model weights of DSBM. Does it mean that empirically we can also use weights from the stochastic interpolant / the stochastic version of flow matching to train the first step of DSBM? 2. In line 993 in the appendix, the authors mention that "The advantage of this DSM loss is that it does **not rely on any divergence computation**". I believe this is quite an important part of scalability, but why is it only emphasized in the joint training section instead of being stated in the main paper? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your thorough review. We will incorporate your suggestions regarding the clarity of Algorithm 1 and the related work. We now address your comments in more detail. **“Clearer clarification between IPF and IMF”**: As you mentioned, we have related IPF with the forward KL, and IMF with the backward KL; however, they are not simply interchangeable by switching the direction of the KL in Proposition 2. We refer to the “Relationship between IPF and IMF” section in our response to all reviewers for details. **“Algorithm 1 is not clear enough...”**: We were constrained by space in the main paper to describe the algorithm fully. However, we plan to revise the paper to include a more detailed and down-to-earth description of the algorithm. In what follows, we include a more complete description of the algorithm. We start by describing a function DatasetUpdate which updates the dataset. >Input: direction $d$ (forward or backward), drift $v$ >If $d$ is forward do >>Sample $\mathbf{X}_0\sim\pi_0$ >>Simulate $\mathrm{d}\mathbf{X}_t=v(t,\mathbf{X}_t)\mathrm{d}t+\sigma\mathrm{d}\mathbf{B}_t$ >>Output Dataloader($(\mathbf{X}_0,\mathbf{X}_T)$) >If $d$ is backward do >>Sample $\mathbf{Y}_0\sim\pi_T$ >>Simulate $\mathrm{d}\mathbf{Y}_t=v(t,\mathbf{Y}_t)\mathrm{d}t+\sigma\mathrm{d}\mathbf{B}_t$ >>Output Dataloader($(\mathbf{Y}_T,\mathbf{Y}_0)$) Equipped with this function, we are now ready to describe the full pseudocode of DSBM-IMF.
>Input: forward drift network $v_\theta$, backward drift network $v_\phi$ >Initialize $d$ = backward and PairedDataset with independent samples $\mathbf{X}_0\sim\pi_0$ and $\mathbf{X}_T\sim\pi_T$ >For $n\in\{0,\dots,N-1\}$ do >>While not converged do >>>Sample $t\sim\mathrm{Unif}([0,T])$ >>>Sample $(\mathbf{X}_0,\mathbf{X}_T)\sim$ PairedDataset and $\mathbf{Z}\sim\mathrm{N}(0,\mathrm{Id})$ >>>Compute $\mathbf{X}_t=(1-t/T)\mathbf{X}_0+(t/T)\mathbf{X}_T+\sigma\sqrt{t(T-t)/T}\mathbf{Z}$ >>>If $d$ is forward $v=v_\theta$; update $\theta$ using ADAM on the loss function $\|v_\theta(t,\mathbf{X}_t)-(\mathbf{X}_T-\mathbf{X}_t)/(T-t)\|^2$ >>>If $d$ is backward $v=v_\phi$; update $\phi$ using ADAM on the loss function $\|v_\phi(T-t,\mathbf{X}_t)-(\mathbf{X}_0-\mathbf{X}_t)/t\|^2$ >>Update PairedDataset using DatasetUpdate$(d,v)$ >>If $d$ is forward change it to backward; if $d$ is backward change it to forward In practice, we can use caching to reuse samples from PairedDataset. We hope that this pseudocode resolves the concerns of the reviewer and we are happy to provide more details if more clarifications are needed. **“My concern is that this algorithm may be quite expensive.”**: For the first stage of DSBM, there is no computational difference with bridge matching. Later stages are trained similarly to Rectified Flow. The main computational complexity is that the coupling $(\mathbf{X}_0,\mathbf{X}_T)$ is not independent but given by the previous iteration. This requires running a sampling algorithm and is shared between DSBM, DSB and Rectified Flow. In practice, the computational load of this step can be greatly reduced using caching strategies. **“Missing one related work...”**: We thank the reviewer for pointing us to [1]. In [1], the authors study the bias accumulation for IPF when the potentials are approximated. The results can be applied to provide error bounds on the marginals. 
However, in the IMF procedure, there do not exist potentials associated with the path measure in the sense of [1]. It would be extremely interesting to derive similar results for the IMF sequence. We leave this study for future work. **“Writing of the equation between line 637 and line 638 ...”**: Thank you for this question. The variables $x_0$ and $x_T$ in lines 637-642 should all be changed to $\mathbf{X}_0$ or $\mathbf{X}_T$. We will make this modification in the paper. **“Why $\mathbb{E}\_{\Pi_{0,T}}[\|X_0-X_T\|^2]$?”**: One reference for this result is [2], around equation (1.2). This is a standard result when $\mathbb{Q}$ is a Brownian process $\sigma \mathbf{B}_t$. **“The need for the alternating schemes...”, “What is IMF-b?”**: As detailed in lines 186-192, a naive version of IMF can be implemented by iteratively performing Markovian projections in the forward-time direction using equation (9). Due to time-symmetry (Proposition 8), IMF can also be derived in the reverse-time direction using equation (13). They are essentially the same algorithm and only differ in the direction of transport we learn. We name the latter "IMF-b", which stands for backward-only IMF. However, as shown in Figure 3, IMF-b suffers from increasing error as the number of iterations $n$ increases. This is in line with the analysis in lines 190-192. On the other hand, Figure 3 shows DSBM-IMF does not suffer from the same issue. This indicates that the forward-backward scheme is highly useful for avoiding bias accumulation and improving accuracy in the learned marginals. **“Can we use pretrained weights in the first step of DSBM?”**: That is correct. The first iteration of DSBM coincides with bridge matching, and so DSBM can also be interpreted as further refinement of bridge matching.
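For readers following the exchange about $\mathbb{E}_{\Pi_{0,T}}$ of the squared endpoint distance: the static Schrödinger problem (quadratic transport cost plus an entropy term, cf. Léonard's survey cited as [2]) reduces in the discrete case to entropic optimal transport, which Sinkhorn iterations solve. A minimal sketch with illustrative marginals and regularization strength (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 2.0                       # support size and entropic regularization

# two illustrative 1-D marginals standing in for pi_0 and pi_T
x0 = rng.normal(size=n)
xT = 2.0 + rng.normal(size=n)
a = np.ones(n) / n                     # weights of pi_0
b = np.ones(n) / n                     # weights of pi_T

C = (x0[:, None] - xT[None, :]) ** 2   # quadratic cost |X_0 - X_T|^2
K = np.exp(-C / eps)                   # Gibbs kernel

# Sinkhorn iterations for the entropic-OT (static Schrödinger) coupling
u = np.ones(n)
for _ in range(2000):
    v = b / (K.T @ u)
    u = a / (K @ v)
P = u[:, None] * K * v[None, :]        # coupling Pi_{0,T}

print(P.sum(axis=1))                   # ~ a: the coupling matches both marginals
```

The resulting coupling is the discrete analogue of the $\Pi_{0,T}$ appearing in the quoted expectation; both marginal constraints are satisfied simultaneously, which is the property the IMF discussion emphasizes over IPF's limiting behavior.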
**“The advantage of this DSM loss is that it does not rely on any divergence computation...”**: We did not mention this advantage in the main paper because some previous SB methods (such as DSB) also do not rely on divergence computation. However, we agree with the reviewer that any loss based on divergences (like the ISM loss) may suffer from scalability issues. We thank the reviewer for the suggestions to clarify our contribution and the pseudocode. We hope that our rebuttal resolves the reviewer's main concerns. [1] Chen, Deng, Fang, Li, Yang, Zhang, Rasul, Schneider, Nevmyvaka (2023) -- Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation [2] Léonard (2013) -- A survey of the Schrödinger problem and some of its connections with optimal transport --- Rebuttal Comment 1.1: Title: reply Comment: I appreciate the authors' detailed reply and thanks for the suggested reference. I am satisfied with the response. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and acknowledgement of the rebuttal.
Summary: The submission suggests a numerically effective approach for solving the Schrödinger Bridge (SB) problem and illustrates its potential for generative modeling. The approach generalizes, in some sense, some recent flow matching methods. Furthermore, it provides a more efficient algorithm (no full trajectory caching) with less error accumulation that better remembers the prior reference measure when solving SB problems, compared to previous works. Empirical results indicate, for instance, that it can yield better generative performance for MNIST-EMNIST transfer compared to previous work, or better reconstructions for downscaling geophysical fluid dynamics. Strengths: Originality: The proposed methodology and algorithm are novel as far as I am aware. The work provides additional contributions compared to cited concurrent work [Peluchetti (2023)] that shares some similar ideas. Related literature and approaches are discussed in detail. Quality: Claims are adequately supported by proofs, and numerical experiments seem to support this. Clarity: The paper is well written. The presentation is quite rigorous. Significance: In my opinion, this work is of interest to the community as the method (i) generalizes recent flow matching approaches with non-degenerate noise, (ii) improves experimentally on previous Diffusion Schrödinger Bridge approaches, and (iii) provides some theoretical guarantees for solving the SB problem (under idealized assumptions). Weaknesses: It was not clear to me how the computational cost of the suggested method compares to related works per iteration or diffusion step (e.g. in Figures 5 and 16). Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Figure 16, it appears that FID can be improved by using more than 100 steps. Does DSBM obtain better FID values compared to bridge matching for more than 100 steps?
It was not clear to me what to take away from some parts of the appendix, such as Appendix E (Discrete-Time Markovian Projection) or the Brownian/OU bridge representations in B.2. Are the results in B.2 helpful for saying anything about how to specify the SDE in the forward process $X$ for the suggested approach? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thorough evaluation of our work. We appreciate their acknowledgment of the paper’s merits. Here we would like to expand on the other comments raised in the review. **“Computational cost of the suggested method”**: At training time, our proposed method has a computational cost comparable to previous SB methods such as DSB [1], but is more costly than flow matching [2] and bridge matching [3,4,5,6] (however, the first iteration of DSBM corresponds to bridge matching and so has comparable cost). DSBM can be seen as a refinement of bridge matching, and so pre-trained bridge matching models can be used to initialize DSBM. The additional training cost comes from the trajectory caching procedure in subsequent iterations (see footnote 4), which requires sampling from the SDE model trained in the previous iteration. For instance, at iteration $n+1$ we need to sample from $\mathbb{M}^n$ and save a batch of joint samples $(\mathbf{X}_0, \mathbf{X}_T)$. This batch of samples can then be cached and looped over for a number of epochs to sample different $\mathbf{X}_t$ at different values of $t$. Under reasonable assumptions on the SDE simulation cost and the number of epochs, the training cost averaged per iteration is about 1.5-2 times the training cost of flow matching and bridge matching (which is the case in e.g. Figure 5). Also, as both the forward and backward models are trained, the total training cost is doubled. On the other hand, DSBM is about 30\% more efficient than DSB, as the trajectory caching procedure in DSBM requires fewer NFEs than DSB (see lines 944-954). At sampling time, the per-diffusion-step computational cost is the same across all methods. In Figure 16, all methods have the same sampling cost for each vertical slice. Therefore, DSBM can attain similar sampling performance as flow and bridge matching at significantly lower sampling cost. 
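The caching procedure described here can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming: `simulate_sde`, the toy drift, the batch size, and the epoch count are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sde(x0, drift, sigma=1.0, T=1.0, n_steps=100):
    """Euler-Maruyama simulation of dX_t = drift(X_t, t) dt + sigma dW_t."""
    dt = T / n_steps
    x = x0.copy()
    for i in range(n_steps):
        x = x + drift(x, i * dt) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Toy stand-in for the SDE model trained at the previous iteration.
drift = lambda x, t: -x

# 1) Simulate once and cache a batch of joint endpoint samples (X_0, X_T).
x0_batch = rng.standard_normal((256, 2))
xT_batch = simulate_sde(x0_batch, drift)

# 2) Reuse the cached pairs for several epochs: draw a fresh intermediate
#    X_t from the Brownian bridge each pass, with no further SDE simulation.
T, sigma = 1.0, 1.0
for epoch in range(3):
    t = rng.uniform(0.0, T, size=(256, 1))
    mean = x0_batch + (t / T) * (xT_batch - x0_batch)
    std = sigma * np.sqrt(t * (T - t) / T)
    x_t = mean + std * rng.standard_normal(x0_batch.shape)
    # ... the regression step of the network on (x_t, t) would go here ...
```

Only step 1 pays the SDE simulation cost; step 2 amortizes it over epochs, which is where the rough 1.5-2x overhead estimate above comes from.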
We also note that FID score is not the only metric for well-learned transport. Indeed, while the FID score quantifies information about the accuracy of the marginal distribution, we also need to check for the similarity between the samples $(\mathbf{X}_0, \mathbf{X}_T)$; see the Unpaired Fluid Flows Downscaling experiment for instance. **“Not clear what to take away from some parts of the appendix ...”**: In Appendix B, we showcase that the DSBM methodology can cover a large class of linear drifts. Our goal is to derive the OU bridge and extend the methodology to an equivalent of the VPSDE used in standard diffusion models. We think such a formulation could be beneficial for generative modeling (when $\pi_T=\mathcal{N}(0,I)$) but we haven't thoroughly tested it. In our preliminary experiments, we did not observe significant benefits from using the OU bridge. Regarding Appendix E, we have derived a discrete-time version of all of our results for readers who would prefer to approach the methods without requiring knowledge of SDEs. Another goal of Appendix E is to show that there is nothing in our method that is intrinsic to the continuous-time framework, and that our methodology can be recovered as the limiting case of a fully discrete numerical scheme. 
[1] De Bortoli, Thornton, Heng, Doucet (2021) -- Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling [2] Lipman, Chen, Ben-Hamu, Nickel, Le (2022) -- Flow Matching for Generative Modeling [3] Delbracio, Milanfar (2023) -- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [4] Liu, Vahdat, Huang, Theodorou, Nie, Anandkumar (2023) -- I2SB: Image-to-Image Schrödinger Bridge [5] Heitz, Belcour, Chambon (2023) -- Iterative $\alpha$-(de)Blending: Learning a Deterministic Mapping Between Arbitrary Densities [6] Albergo, Boffi, Vanden-Eijnden (2023) -- Stochastic Interpolants: A Unifying Framework for Flows and Diffusions --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their detailed response, which has fully addressed my concerns and questions. I have also read the other reviews and intend to keep my accept score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and acknowledgement of the rebuttal.
Summary: The paper introduces Iterative Markovian Fitting (IMF) as a new method to compute Schrödinger Bridges (SBs), which are dynamic versions of entropy-regularized optimal transport. IMF alternates between projecting on the space of Markov processes and the reciprocal class, and it preserves the initial and terminal distributions. The paper also proposes Diffusion Schrödinger Bridge Matching (DSBM) as an algorithm to approximate SB solutions derived from IMF. DSBM overcomes issues of previous techniques and solves a simple regression problem at each iteration. The performance of DSBM is demonstrated in various transport tasks. The paper provides theoretical results, notations, and definitions related to the topics discussed. Strengths: 1, The paper introduces Iterative Markovian Fitting (IMF) as a new procedure for computing Schrödinger Bridges (SBs). This approach provides a fresh perspective and contributes to the field of optimal transport and generative modeling. 2, The paper establishes various theoretical results for IMF, demonstrating the validity and effectiveness of the proposed method. These results enhance the understanding of SBs and their applications in machine learning. Weaknesses: 1, While the paper mentions that IMF and DSBM address scalability issues, it lacks a detailed analysis of their scalability in terms of computational resources and dataset sizes. Understanding the scalability limitations of the proposed methods is crucial for their applicability to real-world, large-scale problems. 2, The paper assumes locally Lipschitz continuity of drift and limited settings for the proposed methods. These assumptions may limit the applicability of IMF and DSBM to certain scenarios. A discussion of the limitations imposed by these assumptions would provide a more comprehensive view of the proposed methods. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1, How does the Iterative Markovian Fitting (IMF) method differ from the traditional Iterative Proportional Fitting (IPF) approach in computing Schrödinger Bridges (SBs)? 2, Can you provide more insights into the theoretical results established for IMF in the paper? 3, How does Diffusion Schrödinger Bridge Matching (DSBM) overcome the time-discretization and "forgetting" issues of previous diffusion-based techniques? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: As stated in "Weakness" Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our submission and for their constructive feedback. We address here the main points from the review. **“Analysis of scalability”**: Our proposed DSBM method leverages tools from the recent flow/bridge matching literature, which are highly scalable generative models. In particular, the first stage of DSBM is the same as bridge matching, and so DSBM can be viewed as a refinement of bridge matching in later stages. We have demonstrated our proposed method in a number of high-dimensional tasks, such as CelebA ($128 \times 128$) and fluid flows downscaling ($512 \times 512$). We have additionally performed an experiment on the AFHQ dataset at $512 \times 512$ resolution using DSBM-IMF, and have included it in the one-page response PDF. Our experimental setup is fully described in the appendix of our paper (see Appendix I). Regarding the computational resources, our MNIST experiments were run on two 2080Ti GPUs, while the CelebA and fluid flows downscaling experiments were run on a single A100 or RTX GPU. In the case of image experiments, we use the standard dataloaders provided by the torchvision package. For the fluid flow experiment, further details on the climate dataset and preprocessing can be found in [1]. We hope this clarifies the reviewer's concerns and are happy to provide further clarifications if needed. **“Limitations of locally Lipschitz continuity assumptions”**: We thank the reviewer for pointing out this limitation. First, we would like to highlight that in most applications the quantities of interest are locally Lipschitz (which is one of the weakest requirements on the regularity of the drift and diffusion matrix). Dropping this requirement might lead to technical difficulties (for instance, it is known that SDEs with locally Lipschitz coefficients admit a unique strong solution, which is not the case if the drift is only assumed to be continuous [2]). 
However, it is likely that the local Lipschitzness assumption could be dropped and replaced with finite entropy conditions. Currently, we make the Lipschitzness assumption to use known results regarding the well-posedness of the Doob $h$-transform [3]. However, there is another line of work that studies the Doob $h$-transform under finite entropy conditions [4]. These conditions might be more amenable to the study of Schrödinger Bridges. We plan to further study the theoretical properties of the IMF under this finite entropy setting in future work. **“How does IMF differ from IPF”**: The IPF and IMF schemes project on different classes of path measures (see Table 1). We refer to the “Relationship between IPF and IMF” section in our response to all reviewers for a more detailed comparison. **“More insights into the theoretical results established for IMF in the paper”**: In what follows, we give a high-level explanation of the main results of the paper. Our main contribution is to show 1) that the two alternating projections considered in the IMF are indeed projections under the Kullback--Leibler divergence (Propositions 2 and 4); and 2) that they satisfy a Pythagorean theorem (Lemma 6). The fact that these two projections satisfy a Pythagorean theorem is surprising and one of the important results of our work. Our results differ from classical information-theoretic results [5] in that our Pythagorean theorems are stated for the backward KL divergence and not the forward one. This is a key difference which greatly simplifies the analysis. As a result, the concurrent work [6] was able to prove the convergence of IMF to the Schrödinger Bridge. We would like to emphasize that since the submission of this paper we have found a proof of the convergence of IMF to the SB which is shorter while similar in spirit to that of [6]. We summarize this proof below and will include it in the revised version of the paper. 
Using the Pythagorean theorems, we can show that the KL divergence $\mathrm{KL}(\mathbb{P}^n|\mathbb{P}^\star)$ between the IMF iterates and the SB is finite. Using the coercivity of $\mathrm{KL}(\mathbb{P}^n|\mathbb{P}^\star)$ (this is where our analysis differs from that of the IPF), we have that the IMF sequence is relatively compact. Then, using the fact that the space of Markovian path measures and the reciprocal class are closed under weak convergence, we show that any limit point of IMF must be Markovian and in the reciprocal class. Since all iterates satisfy the marginal constraints at the endpoints, the unique limit point is the Schrödinger Bridge (Proposition 5). **“How does DSBM overcome the time-discretization and 'forgetting' issues”**: DSBM straightforwardly overcomes the time-discretization issue, as both the diffusion bridge $\mathbb{Q}_{|0,T}$ (e.g. equation (4)) and the training objectives (e.g. equations (9) and (13)) are derived in continuous time. As for the "forgetting" issue, previous diffusion SB methods only use the reference process $\mathbb{Q}$ in the first iteration of the algorithm, which leads to the "forgetting" issue. On the contrary, DSBM directly uses the diffusion bridge $\mathbb{Q}_{|0,T}$ in each iteration. This can be intuitively understood as incorporating $\mathbb{Q}$ as an inductive bias in the DSBM algorithm. See also Appendix F for a more detailed discussion of how DSBM overcomes the "forgetting" issue. We hope that our answers have resolved the reviewer's concerns. 
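For concreteness, when $\mathbb{Q}$ is a Brownian motion the pinned bridge $\mathbb{Q}_{|0,T}$ has a closed-form Gaussian conditional, so it can be sampled directly at every iteration. A minimal sketch with illustrative names (not the paper's code):

```python
import numpy as np

def sample_brownian_bridge(x0, xT, t, T=1.0, sigma=1.0, rng=None):
    """Sample X_t from the Brownian bridge pinned at X_0 = x0 and X_T = xT:
    X_t | (x0, xT) ~ N(x0 + (t/T)(xT - x0), sigma^2 t(T - t)/T I)."""
    rng = rng if rng is not None else np.random.default_rng()
    mean = x0 + (t / T) * (xT - x0)
    std = sigma * np.sqrt(t * (T - t) / T)
    return mean + std * rng.standard_normal(np.shape(x0))

rng = np.random.default_rng(0)
x0, xT = np.zeros(5), np.ones(5)
x_mid = sample_brownian_bridge(x0, xT, t=0.5, rng=rng)
```

By construction the sample is pinned to the endpoints (the variance vanishes at $t=0$ and $t=T$), which is why the reference measure enters every iteration as an inductive bias rather than only the first one.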
[1] Bischoff, Deck (2023) -- Unpaired Downscaling of Fluid Flows with Diffusion Bridges [2] Ikeda, Watanabe (1996) -- Itô’s Stochastic Calculus and Probability Theory [3] Palmowski, Rolski (2002) -- A technique for exponential change of measure for Markov processes [4] Léonard (2011) -- Stochastic derivatives and generalized h-transforms of Markov processes [5] Csiszár (1975) -- I-Divergence Geometry of Probability Distributions and Minimization Problems [6] Peluchetti (2023) -- Diffusion Bridge Mixture Transports, Schrödinger Bridge Problems and Generative Modeling
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their time and their very helpful feedback. We will take care to address all suggestions on improving clarity and minor typos in the next version of the paper. We would like to take the opportunity here to address a few common questions raised by several reviewers. **“Relationship between IPF and IMF”**: Several reviewers are interested in a more detailed discussion of the relationship between the classical IPF and the novel IMF schemes. In short, the IPF and IMF schemes use *different projections* on *different classes* of path measures (see Table 1 for a summary). In more detail, IPF uses alternating forward KL projections onto the path measure classes $\\{\mathbb{P}: \mathbb{P}_0=\pi_0\\}$ and $\\{\mathbb{P}:\mathbb{P}_T=\pi_T\\}$. IMF uses alternating backward KL projections onto the classes $\mathcal{M}$ (Markovian class) and $\mathcal{R}(\mathbb{Q})$ (reciprocal class). The true Schrödinger Bridge solution $\mathbb{P}^\star$ is the unique path measure in all 4 classes. A fundamental difference is that the IMF iterates always admit $\pi_0, \pi_T$ as the marginals at the initial and final times. In contrast, each IPF iterate only satisfies one of the marginal constraints (odd iterates satisfy $\tilde{\mathbb{P}}^{2n+1}_T=\pi_T$, even iterates satisfy $\tilde{\mathbb{P}}^{2n+2}_0=\pi_0$, see line 119), and they only reach the SB solution $\mathbb{P}^\star$, which satisfies both marginal constraints, in the limit as $n\to\infty$. Therefore, the IMF scheme is preferable when the marginal accuracy of the samples at the initial and final times is important, and we would like to obtain more accurate samples before the algorithms have converged. From a practical point of view, IMF also gives rise to more stable training procedures. 
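The contrast in marginal behaviour can be seen in a static, discrete analogue: IPF on discrete marginals reduces to Sinkhorn-style alternating scaling, where each half-step satisfies exactly one marginal constraint. The toy sketch below is our own analogy, with hypothetical marginals and kernel, not the paper's continuous-time scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
pi0 = np.array([0.2, 0.3, 0.5])   # analogue of the initial marginal pi_0
piT = np.array([0.4, 0.4, 0.2])   # analogue of the terminal marginal pi_T

# Positive reference kernel (entropic-regularization Gibbs kernel).
K = np.exp(-rng.random((3, 3)))
P = K / K.sum()

for _ in range(50):
    # Project onto {P : row marginals = pi0}: rows now exact, columns not.
    P = P * (pi0 / P.sum(axis=1))[:, None]
    # Project onto {P : column marginals = piT}: columns now exact, rows not.
    P = P * (piT / P.sum(axis=0))[None, :]

# The last half-step enforced the column constraint exactly; the row
# constraint holds only approximately, tightening as iterations increase.
col_err = np.abs(P.sum(axis=0) - piT).max()
row_err = np.abs(P.sum(axis=1) - pi0).max()
```

After the column update, `col_err` is zero to machine precision while `row_err` is merely small, mirroring the statement above that each IPF iterate satisfies only one marginal constraint and reaches both only in the limit.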
The reciprocal projection can be carried out empirically by simulating the modeled SDEs, while the Markovian projection leverages tools from the flow/bridge matching literature [1,2,3,4,5]. In contrast, DSB makes the additional approximation that the transition density is Gaussian for small stepsizes. This assumption is only true for small stepsizes and can make the loss of DSB quite unstable. In addition, DSB suffers from bias accumulation throughout the IPF iterations. The IMF scheme circumvents this issue since we interpolate between the endpoints using the Brownian bridge. **“Choice of diffusion bridge”**: We use a Brownian motion as our reference process $\mathbb{Q}$ in our experiments, as it corresponds to solving the standard Wasserstein-2 EOT problem (see line 113). It is also the choice used in previous bridge matching papers [1,2,3,4]. We have analyzed a larger class of linear diffusion bridges in Appendix B. Extending the bridge matching procedure to non-linear processes would be non-trivial and is still an open question, as this requires solving and sampling from an intractable diffusion bridge. However, even if the diffusion bridge is not available in closed form, one might be able to first learn the diffusion bridge using methodologies such as [6], and then apply DSBM using the learned diffusion bridge. This extension should also be motivated by specific applications, which we leave for future work. In our case, the choice of Brownian motion is motivated by the optimal transport point of view. **“FID with more than 100 steps”**: We have performed further evaluations for the CIFAR-10 generative modeling task in Appendix I.5 with NFEs higher than 100, as suggested by several reviewers. We have included the result in our submitted one-page response PDF. 
We observe that at higher NFEs, DSBM-IMF still achieves better FID scores than bridge matching (BM) and rectified flow (RF), but achieves slightly worse FID than conditional flow matching (CFM) and OT-CFM for NFE ≥ 300. Overall, DSBM-IMF is effective in improving sample quality when the NFE is very low, and, unlike RF, DSBM-IMF still achieves comparably low FID when the NFE is very high. **“Scalability of DSBM”**: The scalability of our method should be comparable to bridge matching, as our method can be viewed as a refinement of it. For the first stage of DSBM, there is no computational difference between our proposed scheme and [5]. Later stages correspond to refinements of the process using improved couplings $(\mathbf{X}_0, \mathbf{X}_T)$, which involve a sampling process similar to DSB and Rectified Flow. The computational load of this step can be greatly reduced using caching strategies. We have demonstrated in our paper the usefulness of our proposed method in a number of high-dimensional tasks, such as CelebA ($128 \times 128$) and fluid flows downscaling ($512 \times 512$). We have additionally performed an experiment on the AFHQ dataset at $512 \times 512$ resolution using DSBM-IMF, and have included it in the one-page response PDF. [1] Delbracio, Milanfar (2023) -- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [2] Liu, Vahdat, Huang, Theodorou, Nie, Anandkumar (2023) -- I2SB: Image-to-Image Schrödinger Bridge [3] Heitz, Belcour, Chambon (2023) -- Iterative $\alpha$-(de)Blending: Learning a Deterministic Mapping Between Arbitrary Densities [4] Albergo, Boffi, Vanden-Eijnden (2023) -- Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [5] Lipman, Chen, Ben-Hamu, Nickel, Le (2022) -- Flow Matching for Generative Modeling [6] Heng, De Bortoli, Doucet, Thornton (2021) -- Simulating Diffusion Bridges with Score Matching Pdf: /pdf/476cca54c4a4549ba4285998ec9a42852fb0aae6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes IMF, an iterative method for solving Schrodinger Bridge problems where the solution at each iteration preserves the correct marginal distributions at times 0 and T. This is in contrast to the existing method IPF which only satisfies this in the limit (and is very difficult to reach in practice). Strengths: - An interesting marginal-preserving algorithm for solving Schrodinger bridge problems. The popular method IPF can have convergence problems in practice, while this seems to fix some of that. Weaknesses: - Presentation. While I eventually understood the resulting IMF algorithm, I felt the presentation could be simplified considerably. The algorithm is actually quite simple and closely related to prior works on SB and OT. While I understand the need to prove convergence and showcase novelty, the heavily front-loaded mathematical notation made reading on the first pass very difficult. I imagine the current paper may be off-putting for many readers who are not extremely familiar with related works. - Novelty. The proposed algorithm seems very similar to Rectified Flow but just with an additional noise parameter. - Limitation. The proposed algorithm only works for the standard formulation of Schrodinger bridge with quadratic costs. However, in this setting, it isn't clear to me why the extra stochasticity is useful, as opposed to solving deterministic maps (optimal transport problems). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - "Most notably, we adopt the SDE approach which is crucial for the validity of Proposition 5" - Can the authors expand on what breaks when there is no stochasticity? From what I understand, RF should also preserve marginal distributions and converge to the optimal transport solution, which is discussed more in [1]. The solutions are also unique, similar to any Flow Matching-based approach. 
- All of the experiments in the main paper are currently only for transferring between two data distributions, but being able to find optimal transport maps is useful for generative modeling as well. I think it could really make the paper stronger to include experiments in generative modeling (Gaussian -> data) and show some empirical metrics from e.g. [2] such as sampling cost and consistency, since these are important problems in generative modeling at the moment. [Edit: I found the experiments in I.5. I see that sampling efficiency is on par with RF but Table 6 seems pretty good! Although it does look like if we extrapolate to NFE > 100, the other methods are descending quite a bit faster in FID...] - Can the authors expand on the differences between the two proposed methods, DSBM-IPF and DSBM-IMF? If I understood correctly, the main difference between DSBM-IPF and DSBM-IMF is only in the first iteration, where the pairs (x0, xT) are sampled from either an OU process Q or the independent joint distribution. And the real difference is that Q has a slight bias at time T because it may not have reached its stationary distribution. Practically, this difference seems very negligible, yet there seem to be differences in empirical results (e.g. Table 3 and Figure 5). Theoretically, since Q does not match the correct marginal at T, this is also problematic for the proof of convergence to SB right? [1] "Rectified Flow: A Marginal Preserving Approach to Optimal Transport" https://arxiv.org/abs/2209.14577 [2] "Multisample Flow Matching: Straightening Flows with Minibatch Couplings" https://arxiv.org/abs/2304.14772 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors do mention limitations such as marginal improvement, computational issues, and difficulties when sigma --> 0. I think these are sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough evaluation of our work. We appreciate their interest and their thoughtful questions. We would like to address the raised questions here, which we hope will clarify the role of stochasticity further. **“Algorithm presentation can be simplified.”**: We have tried to make the terminology clearer with the introduction of a Notation section. The introduction of some mathematical concepts like the Markovian and reciprocal projections seems important to us, as it allows us to describe within the same framework many different existing algorithms ([1,2,3,4,5] can be seen as special cases of the Markovian projection procedure, for instance). However, we understand the reviewer's concerns and will try to improve the clarity of the work in the revised version of the paper. If there are specific presentation issues the reviewer would like to point out, we would be happy to change them as well. **“Role of stochasticity”**: We thank the reviewer for this question. Indeed, it can be unclear at first why stochasticity is important. We would like to provide three motivations regarding the use of stochasticity: * Firstly, from a theoretical point of view, stochasticity is important for establishing the convergence of the IMF algorithm to the SB solution. The IMF methodology provably converges to the unique EOT/SB solution, see [8, Theorem 2] (concurrent work) for a proof. We have also found a short proof of this which we will include in the revised version of the paper. In contrast, such results appear to be more difficult to establish for Rectified Flow (RF). The RF algorithm can only provably solve the OT problem when the learned vector fields $v_t(x_t)$ are restricted to gradient fields, and counterexamples exist if this condition does not hold [7]. 
Even with the above restriction on the vector fields, the strongest convergence results for RF as of now appear to be Theorem 5.6 and Corollary 5.7 in [7], which rely on a surrogate loss and do not directly prove the convergence of the learned coupling to the OT map. * Empirically, we also encountered difficulties with the original RF formulation for transfer tasks, and we observe that adding stochasticity can improve performance significantly. Qualitatively speaking, in the case of image transfer, when $\sigma$ is close to zero (i.e. our approach is closer to RF) the learned process only minimally changes the input samples. As a result, the output samples might look unrealistic, especially if the two domains are very different (for example interpolating between the classes "horse'' and "plane'' in CIFAR-10, see the additional results in the one-page PDF). * Finally, we would like to make a comparison between non-regularized OT and the SB problem. In itself, the Schrödinger Bridge problem is an important theoretical problem, with many recent works tackling it. Unlike non-regularized OT, the SB formulation can be useful when the solution does not necessarily have a one-to-one mapping structure. For example, in inverse problems (such as super-resolution), there are multiple solutions corresponding to a given condition, so the solution mapping is one-to-many and the SB formulation is a more natural framework. We also would like to emphasize that another crucial difference with RF is that DSBM-IMF applies forward-backward training, whereas RF only trains the flow matching process iteratively in one direction. The latter will cause errors in the marginal distributions to accumulate, and marginal accuracy to deteriorate in each iteration, whereas DSBM-IMF can still improve marginal accuracy (as shown in e.g. Figure 5). Therefore, we believe our proposed method is a novel and relevant contribution, which also helps to unify the flow matching and SB research areas. 
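The point that DSBM with $\sigma$ close to zero behaves like RF can be checked directly on the bridge sampler: as $\sigma \to 0$ the Brownian bridge collapses onto the deterministic straight-line interpolation that RF trains on. A small illustrative snippet (our own toy check, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, xT, t, T = np.zeros(4), np.ones(4), 0.3, 1.0

def bridge_sample(sigma):
    """Brownian-bridge sample at time t, pinned at (x0, xT)."""
    mean = x0 + (t / T) * (xT - x0)
    std = sigma * np.sqrt(t * (T - t) / T)
    return mean + std * rng.standard_normal(x0.shape)

# sigma = 0: exactly the straight-line (Rectified Flow style) interpolant.
deterministic = bridge_sample(0.0)
# sigma > 0: pinned Brownian noise around that line.
stochastic = bridge_sample(1.0)
```

The noise scale is proportional to $\sigma$, so the amount of sample diversity around the linear interpolant, and hence how far the method departs from RF, is controlled by this single parameter.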
We do hope that our clarification of the main differences between our algorithm and RF [6], as well as our justification of the use of stochasticity, have resolved the concerns raised in the review. **“Differences between DSBM-IPF and DSBM-IMF”**: Your understanding is indeed correct. The DSBM-IPF and DSBM-IMF algorithms only differ in the first iteration, and when the OU process is used their differences should be comparatively small. We use a Brownian reference process in our work, motivated by optimal transport, in which case their difference is more significant. However, both algorithms converge theoretically to the true SB solution. In the case of DSBM-IPF, although the process $\mathbb{Q}$ does not match with $\pi_T$ at time $T$, by Proposition 9 DSBM-IPF can recover the true IPF sequence $(\tilde{\mathbb{P}}^n)$ (see lines 118-125), for which a different, classical proof of convergence of IPF exists. To summarize, both algorithms converge to the true SB (assuming that the score is perfectly learned); however, the theoretical frameworks are very different. We will highlight this in the revised version of the paper. Overall, we would like to thank the reviewer for their insightful questions which have helped us to clarify several points of the paper, in particular the importance of stochasticity. 
[1] Delbracio, Milanfar (2023) -- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [2] Liu, Vahdat, Huang, Theodorou, Nie, Anandkumar (2023) -- I2SB: Image-to-Image Schrödinger Bridge [3] Lipman, Chen, Ben-Hamu, Nickel, Le (2022) -- Flow Matching for Generative Modeling [4] Heitz, Belcour, Chambon (2023) -- Iterative $\alpha$-(de)Blending: Learning a Deterministic Mapping Between Arbitrary Densities [5] Albergo, Boffi, Vanden-Eijnden (2023) -- Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [6] Liu, Gong, Liu (2022) -- Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow [7] Liu (2022) -- Rectified Flow: A Marginal Preserving Approach to Optimal Transport [8] Peluchetti (2023) -- Diffusion Bridge Mixture Transports, Schrödinger Bridge Problems and Generative Modeling
Reinforcement Learning with Simple Sequence Priors
Accept (poster)
Summary: This work proposes to use an action-based complexity cost. This work argues that this action-based cost can be formulated either as compression over the actions experienced so far or using a transformer that predicts the next action given the past. This approach is then claimed to provide access to simplified / predictable action sequences. The work is evaluated against a number of dmcontrol tasks and is shown to do better than considered alternatives, both in terms of performance and in terms of learning simpler, more predictable policies. Overall, the work proposes an interesting set of evaluations and presents an interesting idea. However, I found the work fairly difficult to follow at times, whereby paper clarity can be improved. I am also unsure that the proposed solution is necessarily going to be useful, given the presence of theoretically stable alternatives, such as [4]. Finally, I think this work has potential but should be evaluated on more complex tasks - such as tasks from the field of robotics and / or tasks where exploration / initially bootstrapping policy learning is important. As a result, I cannot recommend this work for publication yet. Strengths: - A novel action-sequence-driven reward formulation - Interesting and informative experimentation - Good literature review coverage Weaknesses: - No complex experiments that can further strengthen the claims for policy search and robustness - Insufficient comparisons to prior works, e.g. such as [4] but also works like HER - Some unclear prose; specifically, the abstract, introduction and methods section can benefit from less grandiose wording - Additional justification of the choice of transformer (as opposed to something simpler) and also some argumentation as to why this cannot be pre-trained ahead of time and then fixed during training Technical Quality: 3 good Clarity: 3 good Questions for Authors: Nothing explicit. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations section is a bit lacking; it is unclear how this would work in the context of complex sequential / long-horizon tasks. Also, it is unclear how this approach would work on policies learnt from pixels. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and thoughtful feedback. We are glad that the reviewer found our work interesting, and we are encouraged by the fact that the reviewer thinks it has potential. The reviewer points out four main limitations of the submitted paper. We have addressed all four weaknesses, and doing so has made our paper a lot stronger. We address the weaknesses in detail below: >No complex experiments that can further strengthen the claims for policy search and robustness. [...] Also, it is unclear how this approach would work on policies learnt from pixels. The reviewer raised the point that we do not address how our algorithm might work in pixel-based environments. We have since added experiments showing that LZ-SAC can be adapted to pixel-based tasks too. We trained LZ-SAC and SAC to perform control from pixels in the DeepMind Control Suite 100k benchmark (DMC 100k), where the agents interact with the task through 100k environment steps. To adapt LZ-SAC to the visual control domain, we equipped it with a convolutional neural network encoder and performed a random shift augmentation on batches of images when we trained the actor and critic. We adapted SAC the same way to make the methods comparable. We see that, with our simple modification and compression bonus, LZ-SAC can beat recent approaches like CURL and SAC+AE, as well as our modified SAC implementation, on average across the six visual control tasks from DMC 100k (see Fig 2.B and Table 1 in the one-page pdf). >Finally, I think this work has potential but should be evaluated on more complex tasks - such as tasks from the field of robotics and / or tasks where exploration / initial bootstrapping of policy learning is important. To increase the breadth of tasks that we use for evaluating LZ-SAC, we included three robotic manipulation tasks from the metaworld benchmark. 
Across the three tasks we see that LZ-SAC learns to solve the manipulation problems faster and more consistently than SAC (Fig 2.A in the one-page pdf). Moreover, the action sequences LZ-SAC uses to solve these tasks do not exhibit periodic properties, but are still smooth and predictable (Fig 2.C in the one-page pdf). This showcases the possibility of using LZ-SAC not only in tasks with strong periodic patterns, but also in more generic continuous control settings. >Insufficient comparisons to prior works, e.g. such as [4] but also works like HER We implemented Robust Predictable Control (RPC) (e.g. ref [4]) and evaluated it on all the DeepMind Control Suite tasks from the original submission. We found that RPC could not solve the tasks with the same information budget as our other models, and so increased the budget to make RPC more performant. Still, with this fine-tuning, RPC was outperformed by both LZ-SAC and SPAC. Both of our methods are therefore strong alternatives to existing baselines. We note that prior works like HER are orthogonal to the methods proposed in this paper, and can therefore be combined with our approach. >Some unclear prose; specifically, the abstract, introduction and methods section can benefit from less grandiose wording We have cleaned up the prose at various places throughout the manuscript, including the abstract and introduction. See our changes below. Strikes indicate that the text is removed. **"~~Everything else being equal, simpler models should be preferred over more complex ones.~~ In reinforcement learning (RL), simplicity is typically quantified on an action-by-action basis [...]"** **"Simplicity is an ~~powerful~~ important inductive bias. 
In science, we strive to build parsimonious simple theories and algorithms that involve repetitions of the same basic steps."** **"~~Though we want our RL agent to maximize rewards, we encourage it to do so with policies that produce simple action sequences.~~ We train our agents to learn policies that maximize reward as well as the compressibility of the action sequences they produce."** >Additional justification of the choice of transformer (as opposed to something simpler) and also some argumentation as to why this cannot be pre-trained ahead of time and then fixed during training We would like to thank the reviewer for suggesting pretraining the transformer ahead of time. We trained transformer models to perform next-action prediction from action sequences produced by the converged LZ-SAC agent for all tasks. Using the pretrained transformers (with frozen weights) rather than a randomly initialized one sped up learning significantly and allowed the SPAC agent to learn more rewarding behaviors (see Fig 4 in the one-page pdf). This showcases an interesting possible connection between our sequence compression framework and behavioral cloning. Lastly, we added a justification of our use of the transformer rather than a simpler model. While it is perfectly possible to combine our method with a simpler sequence prior, transformers are the SOTA models for learning complex sequence data, and are better suited for modeling long-range dependencies. We have added the following justification to our paper: **"We use transformers rather than simpler architectures since they are better suited for learning complex sequence data with long-range dependencies, as we expect to see in an RL setting."** --- Rebuttal Comment 1.1: Title: Thanks for the detailed rebuttal, well done Comment: Dear authors, I would like to start by thanking you for the detailed and thorough rebuttal to my comments, I am glad you found them useful! 
I will be happy to increase my score provided I see the one-page pdf file you referenced, and that the updates reported here match the results there. This document is nowhere to be found; perhaps it is set to a different visibility or has not been submitted yet. Thank you once more, great job! --- Reply to Comment 1.1.1: Title: Thank you! The pdf should be visible now Comment: Dear Reviewer 7uiK, We are thankful that you liked our rebuttal and that you are willing to increase your score. The one-pager was uploaded to the global response and there is a link to download it at the very bottom of the global response. The one-pager is visible in our console, and we hope that it is visible to all reviewers as well. Thank you to Reviewer 6Awj for providing a link to the one-pager. We hope that the platform is displaying our response properly, but please let us know in case anything is unclear. Again, we appreciate that the reviewer thinks we did a good job improving the paper: this would not have been possible without the instructive feedback.
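As a side note for readers of this thread, the predictability bonus discussed throughout these exchanges can be sketched with a toy stand-in for the learned prior. This is a hypothetical illustration: the fixed Gaussian predictor, the names `prediction_bonus` and `augmented_reward`, and the coefficient `beta` are ours, not the paper's, which instead trains a transformer prior over past actions.

```python
import math

def prediction_bonus(prev_action, action, sigma=0.1):
    """Toy stand-in for a learned action prior: a fixed Gaussian that
    expects the next action to equal the previous one. Returns the
    log-density of `action` under that prediction, summed over dims."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (a - p) ** 2 / (2 * sigma ** 2)
        for a, p in zip(action, prev_action)
    )

def augmented_reward(r, prev_action, action, beta=0.01):
    """Reward shaped toward predictable action sequences:
    r_t + beta * log phi(a_t | past)."""
    return r + beta * prediction_bonus(prev_action, action)
```

Repeating the previous action yields a higher bonus than jumping to a distant one, which is the qualitative behavior a SPAC-style regularizer rewards.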
Summary: This paper proposes the idea of using simplicity as a prior when learning control policies using RL. Simplicity (or complexity) here is defined as the cost of predicting action $a_t$ given past actions $(a_{t-\tau:t-1})$ and the current state ($s_t$). Two different approaches are proposed to learn this complexity value. The first uses a learned neural network $\phi(a_t \mid a_{t-\tau:t-1})$ that learns to predict the current action given the past action trajectory. This is used as an additional reward in SAC. The second approach uses a lossless data compression algorithm (LZ4), with the motivation of computing complexity as the additional number of bits required to encode $a_t$ given that we have already encoded $a_{t-\tau:t-1}$. Strengths: Overall, the paper is well written and accessible. The idea of simplicity as a prior is also interesting. Weaknesses: I think the paper is very vague in terms of using concepts such as simplicity. In the Related Works section, the paper simply connects simplicity with maximum entropy RL and suggests that a uniform policy is simple (”stay close to simple uniform prior policy”). I don’t see this connection; in fact, a uniform random policy leads to random walks and may in fact be highly non-simple (see e.g. Brownian motion and discontinuous behavior). On the approach side, the first approach of predicting actions from the past is very strange since it results in trying to model non-stationary behavior and thus makes little sense. This is sort of alluded to in the paper as well, where the paper suggests that learning a policy with this reward can be challenging. From the results we can further see that this approach really doesn’t work. The idea of using LZ4 would try to reduce new actions, thus sort of minimizing surprise. However, interestingly, while this approach performs well it doesn’t perform uniformly well across all tasks, e.g. on 5/8 tasks (Figure 5) it performs quite poorly. I also find some baseline SAC numbers to be strange e.g. 
hopper-hop, quadruped-walk, walker-run have all been shown to be solved using SAC. However, the numbers reported in this paper are very low. I don’t think these baseline numbers are fully accurate; in case there were some other differences, maybe the authors can comment. Related Works: There are many other approaches in the literature which focus on simplicity by compressing action sequences into latent variables, or using surprise minimization as an additional part of the objective. None of these approaches are referred to or compared against in the text. I would recommend the authors cite them, and compare against them as well. Achiam et al. Surprise-based intrinsic motivation for deep reinforcement learning Berseth et al. SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Do the authors know why the baseline numbers (SAC) for these simple environments are very low? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: limitations are discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and thorough feedback, and we are glad that the reviewer found our idea of using sequence simplicity as a policy prior in RL interesting. In the following we address the issues raised by the reviewer: We have clarified what we mean by simplicity, as well as the strengths and weaknesses of SPAC. We have also fine-tuned SAC to be more performant on Hopper Hop and Quadruped Walk. Lastly, we conducted an experiment where we pretrain the SPAC transformer on converged policies, which improves performance. We thank the reviewer for raising these issues. We think our paper’s claims are better supported as a result of the reviewer’s feedback. >On the approach side the first approach to predict actions from past is very strange since it results in trying to model non-stationary behavior The reviewer is correct to point out the challenges of training the transformer towards non-stationary targets. However, the training is intended to encourage the agent to solve tasks with predictable and simple action sequences. We implemented another baseline using an information bottleneck, Robust Predictable Control (RPC), that uses a learnable prior over states to compress state-representations. In our experiments SPAC outperforms RPC on average over the 8 DeepMind control tasks (see Fig 1.A and Fig 1.B in the one-page pdf). This shows that, for a model with a strong information bottleneck, SPAC is quite performant. To address the issue of non-stationarity further, we pretrained the transformer model to predict the action sequences of converged policies, and trained the SPAC agent using this pretrained transformer, instead of a randomly initialized one, on all tasks from the original submission. This agent learns to solve the tasks more effectively, attaining scores comparable to SAC in challenging domains like walker and quadruped (see Fig. 4 in the one-page pdf). >I also find some baseline SAC numbers to be strange e.g. 
hopper-hop, quadruped-walk, walker-run have all been shown to be solved using SAC. We thank the reviewer for pointing this out. We repeated the experiments for Hopper Hop and Quadruped Walk with larger network sizes and lower information costs, and now obtain scores comparable to those reported in other published papers [1, 2]. We have updated the scores we report in the paper to reflect this; see Fig 1.A in the one-page pdf for the updated scores. The SAC agent does indeed solve the walker run task, and the score was always consistent with those reported elsewhere [1, 2]. Our LZ-SAC algorithm still outcompetes SAC after fine-tuning. >while this approach performs well it doesn’t perform uniformly well across all tasks e.g. on 5/8 tasks (Figure 5) it performs quite poorly. The experiment presented in Figure 5 is not the main result of our paper; rather, it shows that the models we evaluated solve the tasks using various amounts of bits of information. We find that SPAC achieves the highest return per bit of information it uses to solve the tasks. While LZ-SAC is better at maximizing reward, it uses more information to do so. SPAC, we therefore argue, is a strong algorithm for performing information-constrained control. >There are many other approaches in the literature which focus on simplicity by compressing action sequences into latent variables, or using surprise minimization as an additional part of the objective. We appreciate the reviewer’s pointers to related work. We updated the related work section to discuss similarities to our approach (see below). While Berseth et al. learn a policy that maximizes the predictability of the next state given the current state, we maximize the predictability/compressibility of the next action, given previous actions. Moreover, we evaluated our algorithms in the DeepMind Control Suite, which does not satisfy the criterion of being an unstable environment as described in Berseth et al. 
In these environments, the agent could easily maximize predictability by not moving, remaining in the same state throughout the episode. **"Related to compression is predictability: Berseth et al. [21] learn a density model over states, and then learn a policy that seeks out states that are predictable, leading to self-sustaining behaviors in unstable environments. On the opposite end there are methods that seek out unpredictable states [22, 23], or states that the agent cannot compress, to improve exploration."** >the paper simply connects simplicity with maximum entropy RL and suggests that a uniform policy is simple (”stay close to simple uniform prior policy”). I don’t see this connection; in fact, a uniform random policy leads to random walks and may in fact be highly non-simple The reviewer raises an interesting point that maximum entropy RL can produce random behavior that is not predictable. However, as is common in the RL literature [3, 4], we understand the simplicity of an input-output relation as the mutual information between the states and the actions. Since a uniform random policy does not use information about the state to select actions, it is considered simple. Through the lens of mutual information minimization, there is therefore a clear link between our surprise minimization and maximum entropy RL. We have rewritten parts of the exposition to make this connection clearer (see below). **"Though uniform priors can lead to discontinuous and unpredictable behaviors, maximum entropy methods are considered simple in that they try to minimize the use of information about the state to select actions [4, 15, 16]."** [1] Eberhardt et al. 2023 ICLR. Pink noise is all you need: Colored noise exploration in Deep Reinforcement Learning [2] Hansen et al. 2022 ICML. Temporal Difference Learning for Model Predictive Control [3] Eysenbach et al. 2021 NeurIPS. Robust Predictable Control [4] Bassily et al. 2018 Algorithmic Learning Theory. 
Learners that Use Little Information --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. > .. report in the paper to reflect this, see Fig 1.A Looking at the results it seems that only on Hopper-Hop and Quadruped-Walk is the method much better? On other environments the performance of SAC seems quite comparable. Further, LZ-SAC seems to suffer from high variance. > Through the lens of mutual information minimization Thank you, I appreciate the clarification. Overall, I have updated my score to 5. I still think the overall approach is not very principled. It's also not very clear to me why this approach will perform better in environments that do not have any periodicity (most dm_control tasks considered here have periodicity). Finally, works that explicitly take into account periodicity of an agent could also be useful to cite and compare against. Sharma et al. Phase-Parametric Policies for Reinforcement Learning in Cyclic Environments --- Reply to Comment 1.1.1: Title: Thank you for the detailed feedback Comment: Dear Reviewer GvQF, Thank you for actively taking part in the rebuttal process and for continuing to provide us with feedback. We are glad that you found our additions to the paper clarifying, and that you have increased your score. >It's also not very clear to me why this approach will perform better in environments that do not have any periodicity (most dm_control tasks considered here have periodicity). Several reviewers also raised this point. To further address this question, we evaluated LZ-SAC against SAC in three robotic manipulation tasks from metaworld, where we expect periodicities to be a lot less prevalent. Our results show that LZ-SAC learned to solve the tasks faster (see Fig 2.A in the one-page pdf). While these tasks do not contain periodicities, LZ-SAC preferred to solve them with smooth and predictable motions (see Fig 2.C). >Sharma et al. 
Phase-Parametric Policies for Reinforcement Learning in Cyclic Environments Thank you for the suggestion, we now reference this work when introducing the DeepMind Control Suite: **“We evaluated the agents described in Section 3 on eight continuous control tasks from the DeepMind Control Suite [34]. Many of these tasks promote behaviors with periodic elements, such as running and walking. While specialized architectures exist for such tasks [35], we expect compressibility to be a useful inductive bias for learning these behaviors.”** Lastly, we would like to state that we believe the framework we use in our paper is indeed appropriately principled. LZ-SAC and SPAC maximize the sum of discounted rewards minus a tractable bound on the mutual information between sequences of states and sequences of actions. Such information-constrained formulations of the learning objective show up in the control [1], cognitive science [2] and neuroscience [3] literature. We thank the reviewer again for engaging with our paper so thoroughly during the review process, and for the feedback and fruitful suggestions made. We think that our paper has improved a lot by addressing the points raised by the reviewer. [1] Eysenbach et al. 2021 NeurIPS. Robust Predictable Control. [2] Bhui et al. 2021. Resource-Rational Decision Making. Current Opinion in Behavioral Sciences. [3] Zador 2019 Nature Communications. A critique of pure learning and what artificial neural networks can learn from animal brains.
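To make the objective just described concrete in one formula (our reconstruction from the description above; the trade-off coefficient $\beta$, the context length $\tau$, and the exact form of the bound are our notation and may differ from the paper's):

$$J(\pi) \;=\; \mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t}\big(r(s_t,a_t) \;+\; \beta\,\log\varphi(a_t \mid a_{t-\tau:t-1})\big)\Big],$$

where $\log\varphi$ scores the action under the sequence prior (the transformer for SPAC, or the negative marginal code length under LZ4 for LZ-SAC). In this information-theoretic reading, a state-independent policy satisfies $I(S;A) = H(A) - H(A \mid S) = 0$, which is why even a uniform random policy counts as simple despite its erratic sample paths.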
Summary: This paper proposes two approaches to regularizing RL policies based on sequential action priors, rather than on single-action priors. One approach is SPAC, which trains an autoregressive open-loop policy prior and uses it as a reference regularizer. Another approach uses the LZ4 compression algorithm to estimate how compressible an action sequence is, and uses that as the regularizer. The paper tests both approaches along with two baselines, SAC and MIRACLE, across a few DM Control tasks, and shows that the learned policies end up being simpler (more compressible), as well as more performant. Strengths: The main strength of the paper lies in the novelty of using LZ4 for a prior and its results. A priori, it is not clear at all whether relying on a dictionary-based compression method that is essentially doing substring matching would translate well to an RL domain, and the results are novel and interesting to see. Including SPAC as well helps to round out the results, showing that a more generic approach to sequence priors is possible. The method and results are mostly quite clear. Weaknesses: The ablations for noise and open-loop policies are maybe not the most pertinent ablations to have in the main paper. Perhaps there should be an ablation of the size and architecture of the transformer prior for SPAC, as a potential way of controlling the complexity/simplicity of the policy prior. It would’ve shed more light on the tradeoff between simplicity vs. performance. Similarly, an ablation of how the actions are compressed with LZ4 (there really should be a short explanation in the main paper about this), such as changing the resolution for discretization, would’ve also been very enlightening, to again show the tradeoff between simplicity and performance. Finally, it would’ve been very useful to include a different kind of domain, perhaps gridworld-based domains (MinAtar/Atari), to see whether we can see similar trends. 
---- After Author Rebuttal ---- I appreciate the additional ablations for the transformer architecture and action discretization. Therefore I have increased my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Did all four agents use the same SAC that was augmented with multiple past actions? Or was the augmented SAC only used for LZ-SAC and SPAC? If this is not the case, then the same augmented SAC should be used for all four agents to be consistent. Robustness to noise ablation: Were the agents retrained with the Gaussian noise, or was the noise added to the already-trained agents? The plot shows all agents’ performance decaying relatively similarly, so it doesn’t seem like LZ-SAC or SPAC were particularly more robust to noise. Perhaps the details of this ablation are better put in the appendix so a different ablation can be in the main paper, such as over the transformer architecture or action discretization for LZ4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper already discusses many of the limitations/weaknesses brought up earlier. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the helpful feedback and the encouraging review. We particularly appreciate that the reviewer found our method and results novel and interesting - both our use of dictionary-based compression and generic compression with sequence models. We have focused on running the ablation experiments the reviewer suggested, as well as adding more evaluations of our method. >Perhaps there should be ablation for the size and architecture of the transformer prior for SPAC, as a potential way of controlling the complexity/simplicity of the policy prior. We ran an ablation experiment where we varied the number of heads, number of layers, and embedding dimensions of the transformer SPAC used to predict its own actions. We tested the ablated models on three DM Control domains (Cheetah, Walker and Acrobot). We find that smaller transformer architectures work well for policy regularization (see Fig 3.B in the one-page pdf), outperforming the bigger architectures. Otherwise we do not see a consistent benefit of larger architectures. Instead, it is possible that the extra number of parameters of the bigger architectures makes it more challenging to learn both a sequence prior and a policy concurrently. We have added the ablation experiments to the Appendix. >Similarly, an ablation for how the actions are compressed with LZ4 (there really should be a short explanation in the main paper about this) such as changing the resolution for discretization would’ve also been very enlightening We thank the reviewer for this interesting suggestion. We varied the resolution with which we discretized the action sequences used as input to the compression algorithm. Across three DeepMind Control tasks, we observe that both too low and too high resolutions remove the performance gain of LZ–SAC: When the resolution is 0, the compression bonus no longer conveys a signal about the simplicity of the policy. 
Conversely, if the resolution is too high, every action sequence is equally incompressible due to the continuous nature of the action space. We find that rounding to two decimal places gave the best performance on average across the tasks (Fig 3.A in the one-page pdf). We also added an extra paragraph explaining how LZ4 works in the methods section (see below), and refer to pseudo-code for LZ4 which we have in the appendix A2. **"The LZ4 algorithm compresses sequences by replacing repeating sub-sequences in the data with references to an earlier occurring copy of the sub-sequence. These copies are maintained in a sliding window. Repeating sub-sequences are encoded as *length-distance* pairs $(l, d)$, specifying that a set of $l$ symbols have a match $d$ symbols back in the uncompressed sequence. This allows the sequence to be encoded with fewer bits, should it contain such repeating sub-sequences. See Appendix A2 for pseudocode."** >Finally it would’ve been very useful to include a different kind of domain, perhaps gridworld-based domains (MinAtar/Atari) to see whether we can see similar trends. Following the suggestion of the reviewer, we have included more tasks and baselines in our evaluation of the LZ-SAC algorithm. We now include evaluations in pixel-based versions of the DeepMind Control Suite (using the sample efficient 100k benchmark), as well as from the Metaworld benchmark, containing robotic manipulation tasks. We have added results from both benchmarks to our paper. In the pixel-domain, we implement LZ-SAC and SAC like in the state-based tasks, but with a convolutional neural network encoder and applying a random shift augmentation to the images before training the actor and critic. 
Using this simple modification with our compression objective, we see an improvement not only over SAC, but also over recent approaches specialized for solving visual control tasks, like CURL and SAC+AE: Across six visual control tasks from the DeepMind Control Suite, LZ-SAC attains the highest average score (see Fig 2.B and Table 1 in the one-page pdf for model comparisons), showcasing the viability of using LZ-SAC in the pixel domain. Lastly we compared LZ-SAC and SAC in three metaworld tasks. Across these robotic manipulation tasks, we see that LZ-SAC learns a successful policy faster and more reliably than SAC (Fig 2.A in the one-page pdf). Furthermore, the LZ-SAC policy produced smoother, more predictable action sequences than the policy SAC learned (Fig 2.C in the one-page pdf). This suggests that using dictionary-based compressibility as a prior could be useful in more generic continuous control settings, not just in periodic motor control tasks. >Did all four agents use the same SAC that was augmented with multiple past actions? We did not find it necessary to augment the state with the past actions of the agent in the DMC experiments (see confirmatory experiments in appendix F). As such, all agents had the same input. We have added a sentence to clarify this in the paper (see below). **"We also did not find it necessary to augment the state representations of our methods with the past actions to solve the DeepMind Control Suite tasks (see Appendix F)."** >Were the agents retrained with the gaussian noise, or was the noise added to the already-trained agents [...] The plot shows all agents performance decaying relatively similarly, so it doesn’t seem like LZ-SAC or SPAC were particularly more robust to noise. Perhaps the details of this ablation is better put in the appendix so a different ablation can be in the main paper In the noise ablation, the noise was added to the already trained agents. 
We found the reviewer’s proposed discretization resolution ablation very interesting and we have decided to replace the noise ablation with the discretization ablation in the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the additional ablations and answer to my questions! I have increased my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer 6Awj, Thank you again for your instructive feedback and engaging in the rebuttal. We are happy to see the score increase as a result of the new ablations and clarifications - thank you!
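To make the discretize-then-compress bonus from this thread concrete, here is a minimal sketch. Assumptions to note: we use `zlib` (an LZ77-family codec from the Python standard library) as a stand-in for LZ4, serialize via `repr`, and normalize per step; the function name and these choices are ours, not the paper's exact definition.

```python
import zlib

def compression_bonus(actions, resolution=2):
    """Reward bonus proportional to how compressible the recent
    action sequence is. `actions` is a list of action vectors."""
    # Discretize the continuous actions, e.g. round to two decimal
    # places -- the resolution found to work best in the ablation.
    rounded = [round(a, resolution) for step in actions for a in step]
    raw = repr(rounded).encode("utf-8")
    # LZ77-family codecs shrink sequences with repeating sub-sequences.
    compressed = zlib.compress(raw)
    # Fewer bytes per step => simpler sequence => larger (less
    # negative) bonus.
    return -len(compressed) / max(len(actions), 1)
```

A periodic gait-like sequence receives a noticeably larger bonus than a sequence of ever-changing actions, mirroring the behavior described for LZ-SAC in this thread.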
Summary: In this paper, the authors propose a reinforcement learning method that produces simplified action sequences. Simplicity is defined as the predictability of the next action taken by the RL agent and is measured using the number of bits required to encode the action sequence. The authors propose two methods to introduce an information bottleneck for biasing/regularizing the RL agent toward simplicity. The first method, SPAC, uses a learned sequence model that predicts the next action, and the RL agent is incentivized to produce predictable actions. A second method, LZ-SAC, directly uses a compression algorithm to compress the action sequence and penalizes the number of bits required for storage. The authors show that this regularization can have several benefits and can lead to better overall performance as compared to other action regularization techniques. Strengths: The paper is well-written and presented. The motivation provided for the simplicity of action sequences is reasonable and the overall idea is interesting for the community. Several different types of experiments have been carried out, comparing performance, performance per bit of information and the ability to use open-loop control. If validated with more experiments, the proposed LZ-SAC method could be a good way to regularize RL policies. The future ability of RL agents to discard information about the state of the world and simply use prior actions instead is promising. Weaknesses: The major weakness of the work would be the lack of further baselines. Since the approach is very general, it needs to be validated by comparison with more SOTA off-policy RL baselines to ground the results. It would also be interesting to compare against other information bottleneck methods such as [4]. It would also be beneficial if the authors tried to categorize which kinds of environments benefit from compression. It could be that environments without periodicity are worse off with compression. 
The authors claim that the method never performs worse than SAC but this claim needs to be validated by testing on many more non-periodic tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the methodology for choosing the compression method LZ4 over others? - Why doesn't any agent do well in the hopper-hop task? In section 7, how much does the performance drop in open-loop control as compared to closed-loop control? It is not clear from the axis in Figure 7. Are the differences in the cumulative rewards between the 4 methods statistically significant? - Figure 10: Why does the LZ-SAC method work the best in this case with stochastic policies, while SPAC performs much worse? Why does stochasticity have this effect? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes they are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
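The bit-count penalty that the summary attributes to LZ-SAC can be illustrated with a short, hedged sketch. This is not the paper's implementation: it uses zlib from the Python standard library as a stand-in for LZ4 (the rebuttal below reports similar results for zlib, bzip, and LZ4), and the bin count, value range, and test sequences are arbitrary illustrative choices.

```python
import math
import random
import zlib

def compression_cost_bits(actions, n_bins=32, low=-1.0, high=1.0):
    """Discretize a 1-D continuous action sequence into n_bins levels,
    serialize it to bytes, and count the bits a lossless compressor
    needs to store it (a proxy for the sequence's description length)."""
    levels = bytes(
        min(n_bins - 1, max(0, int((a - low) / (high - low) * (n_bins - 1))))
        for a in actions
    )
    return 8 * len(zlib.compress(levels))

# A periodic (predictable) action sequence should cost far fewer bits
# than an unpredictable one of the same length.
random.seed(0)
periodic = [math.sin(2 * math.pi * i / 50) for i in range(1000)]
noisy = [random.uniform(-1.0, 1.0) for _ in range(1000)]
assert compression_cost_bits(periodic) < compression_cost_bits(noisy)
```

A quantity of this kind can serve as a per-bit storage penalty on the agent; here zlib merely demonstrates that predictable action sequences compress well, which is the property the regularizer exploits.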
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and feedback. We are happy that the reviewer found our idea of using action sequence compression interesting for the RL community. Moreover, it is encouraging that the reviewer acknowledges the possible impact our method could have for regularizing RL policies, if it is tested with more experiments. We conducted a series of new experiments in order to address three weaknesses that were outlined by the reviewer: First, we tested the models on tasks that are not periodic in nature (metaworld). Secondly, we tested LZ-SAC on pixel-based versions of tasks from the Deepmind Control Suite. Lastly, we implemented Robust Predictable Control (RPC) and compared it to our methods in all Deepmind Control tasks from the original submission. In total, we introduce tasks from two new benchmarks and three new baseline models (RPC, CURL, and SAC+AE). We believe this has made our contribution more valuable, and we thank the reviewer for the suggestions. We discuss the experiments in more detail below: >The major weakness of the work would be the lack of further baselines. [...] It would also be interesting to compare against other information bottleneck methods such as [4]. Since several reviewers suggested it, we implemented Robust Predictable Control (RPC) (from [4]) and tested it on all DeepMind Control suite tasks from the original submission. We had to increase the information budget of RPC to make it performant in our task, and even then it is outperformed both by LZ-SAC and SPAC on average (see Fig 1.A and 1.B in the one-page pdf). Our method is thus a viable alternative to existing baselines. >It would also be beneficial if the authors try to categorize which kinds of environments benefit from compression. It could be that environments without periodicity are worse off with compression. To make a stronger case for our method, we tested LZ-SAC on more tasks. 
First, we trained our algorithm on pixel-based versions of tasks from the Deepmind Control Suite. We tested LZ-SAC in the sample efficiency domain, where the agent experiences 100k environment steps. We modified LZ-SAC with a convolutional neural network encoder and performed a random shift augmentation to images before training the actor and critic. With this simple modification, LZ-SAC outperforms not only SAC with the same modification, but also SOTA off-policy models like CURL and SAC+AE on average (see Figure 2.B and Table 1 in the one-page pdf). Simple action sequence priors can therefore be useful for solving high-dimensional visual control tasks. Second, we trained LZ-SAC and SAC on three robotic manipulation tasks from the metaworld benchmark, where we expected periodic sequences to play far less of a role. LZ-SAC not only learns to solve the tasks faster than SAC, but more consistently too (Fig 2.A in the one-page pdf). Moreover, LZ-SAC does not solve the tasks with periodic action sequences, but rather with smooth and predictable ones (see Fig 2.C in the one-page pdf). >The authors claim that the method never performs worse than SAC but this claim needs to be validated by testing on many more non-periodic tasks. We do not wish to claim that LZ-SAC never performs worse than SAC (just that it was equal or better in our experiments), and we have added the following sentence to the results section: **"Though we do not expect LZ-SAC to always outperform SAC in more generic control settings, we see an improvement in many tasks with periodic elements, like walking and running."** >What is the methodology for choosing the compression method LZ4 over others? We used LZ4 due to its fast compression speed. This information is now added in the methods section in the paper (see below).
We justified this further by testing LZ-SAC with two other lossless compression algorithms (bzip and zlib) on the Cheetah-Run task, and found no significant difference in performance (see Fig 3.C in the one-page pdf). **"We chose the LZ4 algorithm due to its state-of-the-art compression speed (see Appendix G for experiments with other compression algorithms)"**. >Why doesn't any agent do well in the hopper-hop task? We thank the reviewer for pointing this out. We repeated the experiments for Hopper-Hop with larger network sizes and lower information costs, and see that the SAC scores are consistent with other papers [1, 2]. To make the comparison fair, we made the same changes to LZ-SAC and see that it remains superior to SAC. We have added the new scores to the paper (Fig 1.A in the one-page pdf). >In section 7, how much does the performance drop in open-loop control as compared to closed-loop control? [...] Are the differences in the cumulative rewards between the 4 methods statistically significant? The performance drops vary across tasks. In tasks like Cheetah-Run and Walker-Run, SPAC and LZ-SAC achieve roughly 10% of the closed-loop performance while observing only the first 5% of the states in the episode. Conducting t-tests, we find that SPAC scores (averaged over tasks) for open-loop control are significantly higher than the LZ-SAC (p=0.006), SAC (p=0.004), and MIRACLE (p=0.001) scores. We have added these statistics to the paper. >Figure 10: Why does the LZ-SAC method work the best in this case with stochastic policies, while SPAC performs much worse? Why does stochasticity have this effect? This decrease in the return per bit ratio does not reflect a decrease in returns for SPAC, but rather that our approximation of the mutual information between states and actions is lower for the stochastic LZ-SAC policy.
Since we do not have analytic expressions for the entropy of the action marginal or policy, we approximate this through sampling (more information in Appendix D). [1] Eberhardt et al. 2023 ICLR. Pink noise is all you need: Colored noise exploration in Deep Reinforcement Learning [2] Hansen et al. 2022 ICML. Temporal Difference Learning for Model Predictive Control --- Rebuttal Comment 1.1: Comment: I thank the reviewers for the detailed answers and for conducting further experiments. I believe the additions of new results and clarifications to the main paper make the work worthy of publication, therefore, I am happy to increase my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer teDV, We are grateful for the time you spent reviewing and engaging with our rebuttal. We are happy that the inclusion of new results and additional clarifications have made you think our paper is worthy of publication and that you have increased your score. We would like to thank you again for actively taking part in the reviewing process and providing such constructive feedback.
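The sampling-based approximation of policy entropy mentioned in the last answer can be sketched for the simplest possible case: a one-dimensional Gaussian policy, where the Monte Carlo estimate can be checked against the closed-form differential entropy. This is an illustrative stand-in under simplified assumptions, not the procedure from the paper's Appendix D.

```python
import math
import random
from statistics import NormalDist

# Differential entropy H = -E[log p(a)], estimated by averaging
# -log p(a) over actions sampled from the policy itself.
random.seed(0)
sigma = 0.5
policy = NormalDist(mu=0.0, sigma=sigma)

n = 200_000
samples = [random.gauss(0.0, sigma) for _ in range(n)]
h_mc = -sum(math.log(policy.pdf(a)) for a in samples) / n

# Closed form for a Gaussian: 0.5 * log(2 * pi * e * sigma^2).
h_exact = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
assert abs(h_mc - h_exact) < 0.01
```

The same averaging trick applies when no closed form exists, which is the situation the rebuttal describes for the action marginal.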
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for the time and effort put into providing thoughtful feedback on our paper. * Reviewer teDV found the paper “well-written and presented” and our idea “interesting for the community.” * Reviewer 6AwJ praised our paper for its “novelty of using LZ4 for a prior and its results.” * Reviewer GvQF said that the “paper is well written and accessible” and that the “idea of simplicity as a prior is also interesting.” * Reviewer 7uiK mentioned that our “work proposes an interesting set of evaluations and presents an interesting idea.” However, at the same time, reviewers also made important suggestions, especially regarding additional baselines and benchmarks. We believe that we were able to incorporate all of these suggestions and believe that doing so has improved our paper significantly. To summarize, we have made the following additions: * We evaluated our methods on nine additional benchmarks (requested by reviewers teDV, 6AwJ, 7uiK): - Six pixel-based tasks from the DeepMind Control Suite, highlighting that our approach scales and remains competitive in high-dimensional, visual environments. - Three robotic manipulation tasks from the Meta-World benchmark, demonstrating applicability to domains that rely less on periodicity. * We implemented three additional baselines – Robust Predictable Control (RPC), Contrastive Unsupervised Reinforcement Learning (CURL), and Soft Actor Critic + Auto-Encoder (SAC+AE) – and found that LZ-SAC and SPAC remain superior to all of these baselines (reviewers teDV, 7uiK). * We improved the performance of the SAC baseline by using larger network sizes and lower information costs (reviewers teDV, GvQF). Even with these changes, LZ-SAC remains superior to SAC. 
* We ran additional ablations for both SPAC and LZ-SAC (reviewers 6AwJ, GvQF): - an ablation experiment where we varied the number of heads, number of layers, and embedding dimensions of the transformer SPAC used to predict its actions. - an ablation experiment where we varied the resolution with which we discretized the action sequences used as input to the compression algorithm. - a pretraining experiment where we pretrained the transformer model to predict the action sequences of converged policies and trained the SPAC agent using this pretrained transformer, which improved performance even further. * We incorporated references requested by the reviewers (reviewer GvQF). Each of these changes is outlined in detail in our responses to the individual reviews below. We again want to thank the reviewers for their time and for actively taking part in the review process. Pdf: /pdf/cf17adbc72aa13df812fe73ec07202efef9b0447.pdf
NeurIPS_2023_submissions_huggingface
2023
Multimodal Deep Learning Model Unveils Behavioral Dynamics of V1 Activity in Freely Moving Mice
Accept (poster)
Summary: The authors propose a neural network for predicting the activity of V1 neurons in freely moving mice. The data consists of roughly hour-long electrophysiological recordings of a neuronal population while the mouse is freely exploring the space. Simultaneously with the neural activity, the experimental setup allows the authors to record the visual scenes from the mouse's perspective as well as a number of behavioral variables. The network consists of two modules: one computes visual features, the other encodes behavioral variables. The outputs of both modules are concatenated and fed into a recurrent unit (GRU) accounting for the temporal dynamics, the output of which is decoded into the predicted neural activity of each neuron in the recorded population. The models achieve state-of-the-art performance both among the visual models and among those including behavioral variables. The authors analyze the model by computing MEIs as well as performing saliency analysis to study the effect of individual behavioral variables. Strengths: I think it is a very good paper. + It is clearly written + It discusses an important problem (integration of behavioral variables in predicting mouse V1 activity) + The proposed model achieves a noticeable improvement over the previous state-of-the-art Weaknesses: I don't see any significant methodological weaknesses. I think the main limitation is the small size of the dataset (a few dozen neurons per mouse). For example, the diversity of the MEIs in Fig. 4 both within and across the animals is very interesting, but a fairly small dataset size doesn't allow us to draw any quantitative conclusions about these MEIs beyond visual inspection. It would be really nice to have a calcium dataset of thousands of neurons in a freely moving mouse (similar to the Sensorium one) to address this issue, but it is of course not a critique of this paper.
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - I am surprised to see that your vision-only network quite significantly outperforms the baselines despite having a standard CNN-FC architecture. Why do you think it is the case? And specifically for mouse 2, why do you think your model is able to perform so much better than the baselines? - Do you think the training data could leak into the test set in the 70-30 splits of continuous segments as you described in the paper? I guess it is not unlikely that a freely moving mouse could be exploring the same part of the arena in some of the train and test segments? - How did you set the values of the behavioral variables for the visual MEIs computation? It would be very interesting to see how the MEIs change as a function of behavioral variables as the animal explores the space around it. - Some of the previous work (e.g. [11]) explicitly assumed the existence of localized receptive fields by factorising the readout into sparse spatial masks and a feature vector. But looking at the MEIs you generated, many neurons don't have receptive fields, so such an architecture doesn't seem reasonable. Do you have any ideas why the models with an explicit receptive field assumption perform well nevertheless? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are adequately addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
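The two-module pipeline the summary describes (visual features and behavioral features, concatenated and passed through a GRU, then decoded into per-neuron activity) can be sketched with a minimal hand-rolled GRU cell. All dimensions here are hypothetical, and random vectors stand in for the CNN and behavior encoders; nothing is taken from the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    # Minimal GRU cell: update gate z, reset gate r, candidate state.
    def __init__(self, d_in, d_h):
        s = 0.1
        self.Wz = rng.normal(0, s, (d_h, d_in + d_h)); self.bz = np.zeros(d_h)
        self.Wr = rng.normal(0, s, (d_h, d_in + d_h)); self.br = np.zeros(d_h)
        self.Wh = rng.normal(0, s, (d_h, d_in + d_h)); self.bh = np.zeros(d_h)

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh + self.bz)
        r = sigmoid(self.Wr @ xh + self.br)
        cand = np.tanh(self.Wh @ np.concatenate([x, r * h]) + self.bh)
        return (1 - z) * h + z * cand

# Hypothetical dimensions: visual features, behavioral features, hidden
# state, number of recorded neurons, and sequence length.
d_vis, d_beh, d_h, n_neurons, T = 16, 4, 32, 10, 5
gru = GRUCell(d_vis + d_beh, d_h)
W_out = rng.normal(0, 0.1, (n_neurons, d_h))

h = np.zeros(d_h)
for t in range(T):
    vis_feat = rng.normal(size=d_vis)  # stand-in for CNN features of the frame
    beh_feat = rng.normal(size=d_beh)  # stand-in for encoded behavioral variables
    h = gru.step(np.concatenate([vis_feat, beh_feat]), h)
    rate = np.exp(W_out @ h)           # nonnegative predicted firing rates

assert rate.shape == (n_neurons,)
```

The exponential readout is one common way to keep predicted rates nonnegative; the point of the sketch is only the fusion of the two modality streams ahead of the recurrent unit.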
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments. > It would be really nice to have a calcium dataset of thousands of neurons in a freely moving mouse (similar to the Sensorium one) We agree with the reviewer that creating a large and standardized neural recording dataset from freely moving mice would be the dream. We outlined this in our future work section. Unfortunately, to date, calcium imaging in a freely moving animal is not feasible. (Calcium imaging was possible in the Sensorium competition because the animals were head-fixed.) A yield of ~100 units is considered state of the art for freely moving animals. Furthermore, it is not clear whether the fast neuronal dynamics associated with locomotion could be captured with calcium imaging, which is relatively slow. > I am surprised to see that your vision-only network quite significantly outperforms the baselines despite having a standard CNN-FC architecture. We were also surprised to see that vanilla CNNs were better than the other architectures we experimented with, including autoencoders, variational autoencoders, filter bank models, and pre-trained deep neural networks (ResNet and EfficientNet). In our opinion, the reason might be that the neurons in the mouse visual cortex were selective for other behavioral inputs and the vanilla CNN architecture imposed the fewest assumptions about how the visual input contributed to the neural activity. This is especially true for Mouse 2: we learned that the cortical probes of Mouse 2 were more superficial compared to the other two mice, so the recorded neurons may have both different anatomical inputs and different visual responses. > Do you think the training data could leak into the test set in the 70-30 splits of continuous segments as you described in the paper? I guess it is not unlikely that a freely moving mouse could be exploring the same part of the arena in some of the train and test segments?
While it is possible that the mouse could have been exploring the same part of the arena at different segments of the recording, it was free to move its head, eyes, and body as it saw fit. Thus “data leak” is unlikely as two duplicate data points could only be produced by the animal exactly duplicating the time courses of its eye, head, and body movement in the exact same location of the arena. > How did you set the values of the behavioral variables for the visual MEIs computation? It would be very interesting to see how the MEIs change as a function of behavioral variables as the animal explores the space around it. The behavioral variables were initialized to a vector of all ones, and updated in the loop with the visual MEI. We agree that setting a fixed behavioral variable vector and inspecting the changes in visual MEIs would be an interesting analysis, which we would be happy to provide in the camera-ready version. > Some of the previous work (e.g. [11]) explicitly assumed the existence of localized receptive fields by factorising the readout into sparse spatial masks and a feature vector. [...] Do you have any ideas why the models with an explicit receptive field assumption perform well nevertheless? We agree with the reviewer that enforcing a localized receptive field may hinder good model fits. In contrast to previous work, our analysis used gradient ascent to reveal the maximally exciting visual input. We speculate that previous models may have still worked well because the assumption of a localized receptive field was imposed on the final feature map output (e.g., [11]), which is drastically different (and at a coarser resolution) than the raw visual input. Therefore, the receptive field in those models was not exactly as localized as claimed, which may explain why these models still performed well. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response and for addressing my questions! I don't have any further questions at this point.
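The gradient-ascent MEI synthesis referred to in the rebuttal above can be illustrated on a toy differentiable "response function" whose optimum is known in advance; actual MEI synthesis instead backpropagates through the trained network. Everything here (the quadratic response, step size, iteration count) is a hypothetical stand-in.

```python
import numpy as np

# Toy "neuron response" whose maximally exciting input is x_star.
rng = np.random.default_rng(1)
x_star = rng.normal(size=(8, 8))

def response(x):
    # Peaks (at 0) exactly when x == x_star.
    return -np.sum((x - x_star) ** 2)

def grad_response(x):
    # Analytic gradient of the toy response; a trained model would
    # supply this via automatic differentiation instead.
    return -2.0 * (x - x_star)

# Gradient ascent from a random image, as in MEI synthesis.
x = rng.normal(size=(8, 8))
for _ in range(500):
    x = x + 0.05 * grad_response(x)

assert np.allclose(x, x_star, atol=1e-3)
```

In the paper's setting the behavioral input would be optimized in the same loop, which is what the response above describes for the joint visual/behavioral MEIs.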
Summary: The authors use convolutional neural networks to fit data from neuronal recordings in primary visual cortex (V1) of the mouse while the animal is freely exploring the environment. The model incorporates visual signals but also other behavioral variables (“multimodal” aspect). The proposed model provides better fits to neuronal data than other benchmark models. The neuronal and behavioral data are particularly interesting and can provide interesting insights into neuronal computations when interpreted with mechanistic models. Strengths: The ability to fit neural data in a dynamic fashion is interesting and distinguishes this work from previous work in the field. Numerically, the proposed model and incorporation of behavioral variables provide better fits to the data than the alternatives tested. The differences across different neurons (section starting on line 212) is quite interesting, especially if it can be connected in the future to the ongoing efforts to characterize different cell types in mice. The neuronal and behavioral data are particularly interesting and probably the highlight of the paper. These data can provide interesting insights into neuronal computations when interpreted with mechanistic models and would be very useful for future investigators that are interested in building theories and models of brain function. Weaknesses: One could imagine incorporating a lot of different world variables to fit neuronal responses. Here the authors choose a very reasonable set of such variables, including pupil size, head direction, moving speed. What exactly this means in terms of the function of V1 is not clear. This kind of neural data fitting has become extremely common in the field. However, it remains unclear what kind of conclusions one can draw from this type of data fitting. Take the first sentence of the results. Table 1 does show that the 3-layer CNN is slightly better than other models, as the subtitle indicates. Then what? 
Does this mean that mouse V1 functions like a 3-layer CNN? Clearly not. There are lots of problems with 3-layer CNNs, including adversarial attacks, lack of robustness to noise, etc. These issues are not tested here. Thus, it is difficult to state what we learn about V1 function from the fact that the 3-layer CNN is slightly better than, say, ResNet-18. It is certainly better at fitting data from this experiment; what this means in the big picture of V1 computations is unclear. The next section indicates that behavioral variables improve the data fitting, which I agree with. However, it remains very unclear that V1 neurons would have inputs that directly reflect the behavioral variables incorporated into the model. Perhaps V1 neurons could get indirect inputs that relate to pupil size, head direction, running speed, etc. From a neuroscience perspective, the critical question is to understand the anatomical inputs and the mechanisms underlying these dependences. The notion of mixed selectivity alluded to in the discussion has been extensively used in neuroscience and remains rather useless. Neurons show “mixed selectivity” only insofar as the investigators use variables that are not directly related to the neuronal responses. Without mechanistic models, there are lots of representations that might seem like “mixed selectivity”. As a trivial example, consider a neuron that responds preferentially to a 45-degree orientation. But the investigators try to fit the responses in terms of horizontal and vertical and then claim “mixed selectivity!”. There is obviously no such thing; the neuron represents 45 degrees, period. The claims in the paper follow the same logic. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The differences across different neurons (section starting on line 212) are quite interesting, especially if they can be connected in the future to the ongoing efforts to characterize different cell types in mice.
Expanding on this section would be useful in terms of neuroscience. What is distinct about each type of neuron, how do their properties relate to their firing rates, variability, receptive field sizes and properties, etc. This is where the paper can go from data fitting into first steps into a mechanistic understanding of V1 properties. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no discussion about limitations, of which there are plenty. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and constructive comments. We agree that data fitting is not the final answer to understanding the functioning of mouse visual cortex, and we do not presume to be able to provide such an answer. However, we believe that our modeling efforts may provide valuable insights about visual cortex function, potentially leading to mechanistic insights in the future. For instance, previous models of mouse V1 have seen limited success in incorporating behavioral variables in their prediction (e.g., Sensorium+ competition). Rather than concluding that behavioral input may play a secondary role in visual cortex function, our study suggests that the move to freely-moving datasets is paramount to revealing the role of behavioral input to predictions of V1 activity. Moreover, identifying the behavioral variables that modulate processing (and particularly their impact on coding) may guide future experimental efforts to identify their neural basis. > However, it remains very unclear that V1 neurons would have inputs that directly reflect the behavioral variables incorporated into the model. Perhaps V1 neurons could get indirect inputs that relate to pupil size, head direction, running speed, etc. From a neuroscience perspective, the critical question is to understand the anatomical inputs, the mechanisms underlying these dependences. We agree with the reviewer in general. However, our results are indeed aligned with recent neuroscientific work that highlights how locomotion, arousal, and motor signals enter V1 (Fu et al., 2014, doi:10.1016/j.cell.2014.01.050; Leinweber et al., 2017, doi:10.1016/j.neuron.2017.08.036). Froudarakis et al. 2019 (doi:10.1146/annurev-vision-091517-034407) and Parker et al. 2020 (doi:10.1016/j.tins.2020.05.005) provide a thorough review of non-visual inputs to visual cortex. 
Given that these anatomical inputs exist, the question remains how V1 neurons may integrate these behavioral variables with visual information. To this end, our study provides a computational account of how V1 neurons may combine these multimodal inputs during free exploration. > The notion of mixed selectivity alluded to in the discussion has been extensively used in neuroscience and remains rather useless. We agree with the reviewer that “mixed selectivity” should be clearly defined in order to be meaningful. In the context of this study, mixed selectivity refers to a neuron encoding two different categories of information: e.g., visual and motor signals, or stimulus and reward signals. Combining horizontal and vertical edges would not count as such; this would be a purely visual response. We will make sure to clearly define this term in the camera-ready version. > The differences across different neurons (section starting on line 212) is quite interesting, especially if it can be connected in the future to the ongoing efforts to characterize different cell types in mice. Expanding on this section would be useful in terms of neuroscience. We agree with the reviewer that an important next step is to connect our results to characterizing different cell types in mice from a mechanistic and anatomical point of view. While at present we can speculate about these differences based on our model fits, to thoroughly answer this question, we would require a different set of tools that are beyond the scope of this work. --- Rebuttal Comment 1.1: Title: Reasonable responses, still unconvinced Comment: The authors provide very reasonable responses. Indeed, there are plenty of non-visual inputs to V1 in mice. Then what? Do those inputs help the mouse see better? Are there particular behaviors that are contingent on those connections? Of note, the current work does not present a mechanistic model of what those connections do. 
All we can say is that adding some behavioral variables can lead to better fitting of V1 activity. For example, it could be that mice pay more attention when they are running and that the neurons have higher activity and that leads to better correlation coefficients, but this should not be confounded with a mechanistic understanding of the function of V1 neurons. Most neurons have many inputs, sometimes as many as 10,000 inputs. Perhaps what the authors mean by mixed selectivity is that there are local and non-local anatomical connections and that there are many neurons that have different non-local anatomical connections. That said, there is nothing technically wrong with this study, as far as I can tell. While we can disagree on whether this study advances neuroscience understanding in any way, the work is well done, clear and well written.
Summary: The authors introduce a multimodal recurrent neural network that integrates information beyond vision (behavioral and temporal dynamics processed by a separate head) to explain V1 activity in mice. This model (vision + behavioral and temporal dynamics) is state-of-the-art in predicting V1 activity during free exploration. Further, this model is analyzed using maximally activating stimuli and saliency maps to obtain insights into the function of mouse V1. Strengths: * The paper is well written and the integration of information beyond vision (or any one modality) for neural predictivity is novel, as far as I know. * The results are strong and convincing, with proper ablation studies Weaknesses: * The vision-only model seems to have fewer parameters than the ones with a GRU on top. This makes it hard to say if the improvements come from the extra parameters (unlikely). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * I am surprised the ResNet models do not do as well as the smaller CNN. The skip connections in the ResNet should have resulted in the model being at least as good as the shallower model. Have you looked into why? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments and suggestions. > Vision only model seems to have fewer parameters compared to the ones with GRU on top. This makes it hard to say if the improvements come from the extra parameters (unlikely). It is true that our vision-only models had fewer parameters than our vision-and-behavior models - this was unavoidable as adding behavioral and temporal dynamics increased the input space and thus necessitated more parameters. However, we agree with the reviewer that the performance improvements were not primarily due to the added number of parameters, for several reasons. First, we found that adding more parameters to the vision-only model did not improve performance. For this, we investigated several vision-only alternatives, which are described in the appendix, such as adding additional layers (Appendix A.1), more channels (Appendix A.1), and a decoder architecture (Appendix A.2 and Table 4). Second, we experimented with both GRU and LSTM architectures, and found that GRU (despite the lower number of extra parameters) drastically outperformed LSTM. > I am surprised ResNet models do not do as well the smaller CNN. The skip connections in the resnet should have resulted in the model being at least as good as the shallower model. Have you looked into why? The reviewer raises a great point. During our early experiments with different ResNet architectures, we noticed that these models were drastically overfitting our dataset. It is possible that this problem could be alleviated with more data, but as is, our dataset is already large by experimental standards in the field. Another possibility is that mouse visual cortex has a shallower architecture than macaque visual cortex (as pointed out in the main text), thus making deeper ResNet architectures less suitable for the task. --- Rebuttal Comment 1.1: Title: Reply to authors Comment: Thank you for the response. 
A difference of this work with [1] and others is that in their case, the models are initially optimized for a task (usually image categorization) and then the features from the model are used to predict neural data, whereas in this work, the authors directly optimize the entire network to predict the neural data. I am curious to hear why, and whether the authors expect to see any difference if they were to do it with a performance-optimized network. In any case, I think the work is interesting & I am more convinced than before, thus I am improving my initial score. [1] Daniel L. K. Yamins, Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, June 2014. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their encouraging comments. Our ResNet-18 model and EfficientNet-B0 model were finetuned from weights pre-trained for the image classification task, but in the end, they did not result in superior performance compared to vanilla CNNs (Table 1). We believe that this is due to the unique properties of the mouse visual system which we reviewed in lines 51-63. A key difference between our work and [1] is the type of neural data. The models in [1] were used to fit neural data from macaques, but our work tries to predict neural activity from mice. In contrast to primates, mice are known to have low-resolution vision and a shallower, more parallel, and more interconnected visual cortex, which is not a close match to the usually deeper, more feed-forward, performance-optimized networks.
In addition, visual processing in mice is more related to movement than to a passive analysis of the visual scene, and a large proportion of examples from image-categorization datasets may not match what a mouse sees in its surroundings. This is another reason why models pre-trained with an image-classification objective do not perform well in this case.
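The vision-plus-behavior fusion discussed in this thread can be illustrated with a single GRU update over concatenated CNN features and behavioral variables. The sketch below is our own minimal illustration; all dimensions, the random parameter initialization, and the softplus readout are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U, b):
    """One standard GRU update; x is the fused input, h the previous state."""
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])        # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])        # reset gate
    n = np.tanh(x @ W["n"] + (r * h) @ U["n"] + b["n"])  # candidate state
    return (1.0 - z) * n + z * h

d_vis, d_beh, d_hid = 16, 8, 12  # illustrative sizes, not the paper's
params = {
    "W": {k: rng.normal(0, 0.1, (d_vis + d_beh, d_hid)) for k in "zrn"},
    "U": {k: rng.normal(0, 0.1, (d_hid, d_hid)) for k in "zrn"},
    "b": {k: np.zeros(d_hid) for k in "zrn"},
}

cnn_features = rng.normal(size=d_vis)  # stand-in for CNN output at one frame
behavior = rng.normal(size=d_beh)      # speed, pupil size, head/eye position, ...
h = gru_step(np.concatenate([cnn_features, behavior]), np.zeros(d_hid), **params)
# hypothetical softplus readout, giving a non-negative predicted firing rate
rate = float(np.log1p(np.exp(h @ rng.normal(size=d_hid))))
```

A per-neuron readout of this kind would sit on top of the encoder; the point of the sketch is only that behavioral variables enter the recurrent state alongside the visual features, which is the fusion the rebuttal describes.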
Summary: The authors in this work propose a novel multimodal approach to designing encoder models of mouse visual processing. The authors identify the limitations of unnaturalistic (head-fixed) recordings, limited behavioral inputs, and the lack of temporal dynamics in prior recordings and models of the mouse visual system, and address these with data they have access to, recorded from 3 freely-moving mice, which fixes the above-mentioned issues. The multimodal architecture, a CNN-GRU encoder, simultaneously encodes time-locked visual input and an elaborate mouse behavioral state (running speed, eye and head position, pupil size, and the first derivatives and pairwise products of these quantities). A neural response prediction readout is attached to the output of the encoder. The authors show that a 3-layer CNN is the best predictor of mouse visual responses and that adding behavioral input consistently outperforms a purely vision-based encoder. Further, gradient-ascent visualization and saliency-based analyses add interpretability to the above encoder. Strengths: + The proposed work is quite original and sound in the joint computational modeling of visual and behavioral responses in freely-moving mice. The encoder architecture proposed by the authors is quite interpretable and empirically performs well in predicting the neural activity data. The originality of this work, in my opinion, comes from the modeling that the authors propose; the dataset used here seems to not be original to this work but comes from prior work referenced by the authors in the Methods section. + It is clear (although not surprising) from the experimental results that joint encoding of vision and behavior outperforms predictions from a purely vision-based model. The methods and results sections have been written quite well, with adequate detail on preprocessing techniques, training of the encoder, and metrics used to evaluate the quality of visual encoding.
+ The additional visualization and saliency-based analysis performed by the authors adds interpretability to the proposed encoder model, as it shows well-defined receptive fields (though mostly for model neurons with high correlation to Mouse 1 neurons) and selectivity to various behavioral attributes. Weaknesses: - Unfortunately, it seems that the dataset has been published already and is merely accessed by this submission; the modeling efforts built on top seem like a relatively small contribution in my opinion, especially since the architecture design and its hyperparameters aren't justified with a systematic search. Did the authors attempt alternative architectural choices, normalization techniques other than BatchNorm, different training approaches (unsupervised / self-supervised), or other recurrent units that are not included in this submission? Please provide the required justification for the presented architectural choices. - The related work section seems quite limited, and I believe it could be enhanced to include more information about other relevant computational models of mouse visual activity. - Other than Figure 1 and Figure 4, the figures seem to have poor spatial resolution and need to be improved in quality for better readability. - Very little information is included about the neural activity dataset; I believe it would be good to add more details about the kind of visual stimuli used and about the recording setup in the Methods section or in the supplementary information. - I could not find mention of whether the authors are open to sharing their models and code; this would be appreciated if the authors want to maximize the reproducibility of their work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please find my suggestions in the weaknesses section of my review. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations have been adequately addressed in this submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments and are pleased that they agree about the originality of the work. We do believe that this work constitutes a significant modeling contribution, as most previous modeling attempts have focused on head-fixed datasets with static stimuli, which may be appropriate for foveating animals, but arguably provide limited insight into visual processing in real-world environments. The extension to freely-moving datasets is not trivial, and the present work demonstrates state-of-the-art performance on a dataset collected with state-of-the-art equipment. > Please provide the required justification for the presented architectural choices. Did the authors attempt alternative architectural choices, normalization techniques other than BatchNorm, different training approaches (unsupervised / self-supervised) or other recurrent units that are not included in this submission? We indeed experimented with several different network architectures (inspired by the results of the Sensorium competition) prior to arriving at the proposed network. This included autoencoders (whose details are described in the appendix), variational autoencoders, pre-trained deep neural networks (ResNet and EfficientNet), filter bank models, and LSTM/RNN networks. The hyperparameter search for the CNN architecture is described in detail in the appendix. We did not attempt normalization techniques other than BatchNorm, because we found BatchNorm to be most effective in our previous experience with multimodal modeling and BatchNorm was standard in the field [11], [12]. We did not try unsupervised or self-supervised learning because our task (predicting the firing rate of each neuron based on visual and behavioral inputs) was supervised in nature and it was not clear how to formalize it so that other training techniques could be applied. We agree this could be a promising avenue for future research. 
> Related work section seems to be quite limited and I believe it could be enhanced to further include more information about other relevant computational models of mouse visual activity We agree that this section should be expanded to include other relevant computational models of the mouse visual activity. We will make sure of that in the camera-ready version. > Other than Figure 1 and Figure 4, the other figures seem to have poor spatial resolution and need to be improved in quality for better readability. Thank you for pointing out the image resolution issue. We will ensure the figures are of high spatial resolution in the camera-ready version. > Very little information is included about the neural activity dataset We agree that more information about the neural activity dataset could facilitate the flow of the paper, but we did not go into too much detail because space was limited and because it was reported in detail in the original publication which described the experimental results and the dataset that we accessed [24]. We will make sure to include a more detailed description of the dataset in the appendix in the camera-ready version. > I could not find mention of whether the authors are open to sharing their models and code The reviewer raises an important point, and we are happy to share all pre-trained models and code (line 151).
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments. We are pleased that reviewers agreed this is a clearly written paper (ZQ1m, xdTa) describing novel work (ZQ1m, NhDi) with strong results (NhDi, riS4, xdTa) that may help build theories and models of brain function (riS4). Reviewers raised insightful questions about the details of the dataset (ZQ1m, xdTa), the choice of our model architecture and comparison to alternative architectures (ZQ1m, NhDi, xdTa), and the implications for our mechanistic understanding of V1 processing (riS4). Below we address each of the reviewer’s questions and concerns point by point. In short, the dataset we accessed in this work is the state-of-the-art setup for collecting neural activity data from freely moving mice. We have indeed performed an architecture search to arrive at our novel multimodal network architecture (see Appendix). While our selection of behavioral variables was largely guided by known computational properties of mouse V1, our multimodal model may help unveil the computational principles by which non-visual inputs are integrated in mouse visual cortex. Lastly, we would like to emphasize that we are happy to provide all our pre-trained models and code upon acceptance.
NeurIPS_2023_submissions_huggingface
2023
On the Robustness of Mechanism Design under Total Variation Distance
Accept (poster)
Summary: The paper studies the following question: in what way can approximation guarantees of (approximately) IC mechanisms be preserved when the prior distribution is perturbed by a small amount in the total variation distance? The main technical lemmas state that the guarantees of DSIC and BIC mechanisms are in fact preserved in ways that can be useful. Based on these lemmas, the authors reproduce a number of previously known results, and also prove some new ones, in various subareas of algorithmic mechanism design. Strengths: The general question the paper aims to answer is very natural and important. The main technical lemmas turn out to be quite powerful, although they are fairly simple and intuitive (which I think is good). I like the way the various implications are derived in the paper, i.e., the right technical observations (in the case of this paper, the main technical lemmas in Section 3) can make things much easier. Weaknesses: One might complain that the main technical lemmas are not all that surprising, but I think the way the authors make use of these lemmas outweighs such criticism. Another minor thing is the applications are relatively loosely organized, and I'd be more excited if there is a key application that irrefutably shows the power of the framework proposed in the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: (also including detailed comments here) Line 263, "two conditional P_{X, Y}, Q_{X, Y}": do you mean "joint"? Lemma 3: could mention this is essentially a Markov-like bound (right?) Lemma 5: some intuition might be helpful Line 316: "Bei et. al." => "Bei et al." (consider using \citet or \citeauthor?) Corollary 1: why do you need the mechanism to be posted pricing? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and questions. Below we answer your questions. - "Line 263, "two conditional P_{X, Y}, Q_{X, Y}": do you mean "joint"?" - "Line 316: "Bei et. al." => "Bei et al." (consider using \citet or \citeauthor?)" We will fix both typos. - Lemma 3: could mention this is essentially a Markov-like bound (right?) We will mention above the lemma that it is a Markov-like result. To clarify, the lemma requires two steps to prove (as shown in the Supplementary Materials): 1) An argument to bound expected TV distance between conditional distributions using the TV distance between joint distributions, which is the main calculation, and 2) Markov’s inequality (as you correctly point out). - "Lemma 5: some intuition might be helpful" Intuitively, one can focus on the subset of “good” types such that reporting truthfully on M is within $\epsilon$ of the optimal report; a type will be “good” with probability $1-q$, which implies a bound on the TV distance between the original distribution and the distribution “restricted” to “good” types. Our construction alters the behavior of $M$ on the “bad” types, and uses the aforementioned TV bound and Lemma 2 to bound the additional loss in the BIC constraint, as well as the revenue loss. We’d be happy to include this intuition, as well as further expand/clarify if the reviewer believes it would improve the quality of the paper. - "Corollary 1: why do you need the mechanism to be posted pricing?" The corollary holds for all DSIC and ex-post IR mechanisms. We refer to posted prices here because they are the main family of policies studied in the literature on prophet inequalities. Specifically, the optimal policy and, to the best of our knowledge, every simple and approximately optimal policy in the literature are posted prices. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I have no further questions.
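The two-step argument for Lemma 3 sketched in the rebuttal above (bound the expected TV distance between conditionals, then apply Markov's inequality) can be illustrated on a toy discrete example. The sketch below is our own illustration, not code from the paper; it only demonstrates the Markov step on the conditional TV distances:

```python
import random

random.seed(1)
xs, ys = range(3), range(3)

def rand_joint():
    """A random joint distribution on the 3x3 grid."""
    w = {(x, y): random.random() for x in xs for y in ys}
    s = sum(w.values())
    return {k: v / s for k, v in w.items()}

P, Q = rand_joint(), rand_joint()

def marg_x(J):
    return {x: sum(J[(x, y)] for y in ys) for x in xs}

def cond_tv(J1, J2, x):
    """TV distance between the conditionals of Y given X = x."""
    m1, m2 = marg_x(J1)[x], marg_x(J2)[x]
    return 0.5 * sum(abs(J1[(x, y)] / m1 - J2[(x, y)] / m2) for y in ys)

Px = marg_x(P)
exp_cond_tv = sum(Px[x] * cond_tv(P, Q, x) for x in xs)

# Markov's inequality: the P_X-probability that the conditional TV
# exceeds t is at most (expected conditional TV) / t.
t = 0.2
prob_exceed = sum(Px[x] for x in xs if cond_tv(P, Q, x) >= t)
assert prob_exceed <= exp_cond_tv / t + 1e-12
```

The first step of the lemma (bounding the expected conditional TV by the TV between the joints) is the substantive calculation in the Supplementary Materials and is not reproduced here.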
Summary: This paper studies robust mechanism design. In this problem, there is a set of items and a set of agents whose valuation functions are drawn from unknown distributions. These distributions are only approximately known, i.e., they are close to a known distribution under the total variation distance. The agents' valuation functions are private information, and the goal is to design a truthful mechanism that maximizes some objective function in expectation. Two main objectives are considered in this paper: social welfare maximization and revenue maximization. The main contribution of this paper is: they prove that dominant strategy incentive-compatible mechanisms are robust, namely, an alpha-approximate mechanism in the classical setting can be converted into a mechanism in the robust setting without losing any factor in the approximation ratio. A similar result holds for Bayesian incentive-compatible mechanisms, but a small factor has to be lost in the approximation ratio. Finally, the authors also list a batch of applications of the proposed framework. Strengths: 1. I appreciated that the submission is carefully written and structured, so it reads well given the technicality of the material. In particular, the flow of the paper is well designed, and the presentation of the algorithmic ideas is clear. 2. The studied problem is interesting and well-motivated. 3. The proposed framework works for a large number of applications, although the fundamental technical results are simple. Weaknesses: From my perspective, there are no major weaknesses, but there seems to be no big surprise in the techniques used. However, I am not in a good position to judge the technical novelty of this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I don't have any specific questions. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is a theoretical paper, there is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review.
Summary: This paper studies the robust design of mechanisms for a designer with a general bounded objective, when the a priori distribution of the agent types (possibly correlated) is not the actual distribution. More precisely, the main idea is that the optimal incentive-compatible mechanism designed for the a priori distribution approximates well the optimal mechanism for the true distribution as a function of the TV distance between the two distributions, and also guarantees approximate incentive compatibility. In this way, it generalizes results from the existing literature which were focused on specific objectives such as welfare or revenue, and on product distributions. This work is decomposed into two main parts: first, results relating how the various metrics (objective and incentive compatibility) degrade in the TV distance are presented for both DSIC and BIC mechanisms; then, these approximation results are used for applications such as approximations in the prophet inequality setting when the distributions may be correlated, or for approximation results on simple mechanisms. Strengths: - This paper studies the important setting of mechanisms robust to small perturbations of the agents' type distribution. It generalizes some previous results and presents a variety of tools that can be useful for a mechanism designer. Multiple applications are given; moreover, the various applications, beyond their own interest, also serve as examples of how to apply these tools. - The paper is clearly written, and the existing literature is well presented. The link between previous results and how they are being generalized is transparent. - I have gone through the proofs in the main paper, as well as some in the appendix, and found no issues. Weaknesses: - Compared to other works, such as `Posted Pricing and Prophet Inequalities with Inaccurate Priors' (Dutting et al 2019), this paper only studies the TV distance.
- The approximation results are not related to any upper bound, which makes it difficult to evaluate the tightness of these results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - l638 : Does 'single agent' described in this context mean that $n=1$? In this case what would the product distribution $\mathcal{D}^p$ signify? - The proof of Lemma $2$ uses a coupling argument to bound the difference between objectives under different distributions in terms of TV distance. Can similar coupling arguments be used to derive similar robustness results, but this time for Wasserstein distance? More generally, does it look possible to extend those results to more general distances (or f-divergences like the Kullback-Leibler) or are these results stemming from the specific properties of the TV distance? - Is there an example when some of the proposed approximation bounds are tight, for instance in Theorem $1$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have correctly addressed some of the limitations of their work, such as discussing when some assumptions may be less general than previous works (common support of distributions necessary for Theorem 2, and weaker BIC guarantees). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and questions. Below we provide answers to your questions. - "l638 : Does 'single agent' described in this context mean that n=1? In this case what would the product distribution signify?" Indeed this is the case of a single agent. The product distribution here is referring to the items. That is, the values for the items are drawn from independent distributions. We will clarify this. - "The proof of Lemma 2 uses.. the TV distance?" For the KL divergence (as well as many other f-divergences, such as $\chi^2$-divergence), we can readily use our results since a bound on KL implies a bound on TV (e.g., using Pinsker's inequality). Note that f-divergences also exhibit supremum characterizations like TV distance (using convex conjugation ideas), but these characterizations do not have the simple structure that TV exhibits. So, it's difficult to obtain crisp robustness results from these characterizations for general f-divergences. The easier approach is to bound TV using different f-divergences and use our results for TV distance as mentioned above. For Prokhorov distance, consider the following situation, noting that a bound on Prokhorov implies a bound on Wasserstein distance. Distribution $D$ is a point mass at $H$ (i.e. $H$ happens with probability 1). Distribution $D'$ is a point mass at $H-\delta$. The two distributions have a Prokhorov distance of $\delta$ but TV distance of 1. Now consider the single-item mechanism $M$ that posts a price of $H$. If the bidder that participates in $M$ draws her value for the item from $D$ she will always buy the item; thus the mechanism has expected revenue $H$. If the bidder draws her value from $D'$, $M$ has expected revenue equal to 0. 
This example demonstrates that our results cannot be immediately transferred to other Prokhorov or Wasserstein-type distances (despite the existence of coupling and/or duality characterizations), and “robustifying” a mechanism, as Brustle et al. do, seems necessary. - "Is there an example when some of the proposed approximation bounds are tight, for instance in Theorem 1?" Yes. See the “global” response. --- Rebuttal 2: Comment: We thank the authors for their response. If possible, I would be happy to see if this tight example mentioned in the global response can be generalized beyond revenue or welfare, as the approximation bounds presented in this paper hold for more general objectives. Similarly, even if it is difficult to get tight examples for Theorem 2, I think it would be nice to have some partial negative results to show that these bounds are still not too bad. Otherwise, all my questions have been correctly addressed, and I believe this paper should be accepted as the contributions are novel and the writing and motivations are clear. --- Rebuttal Comment 2.1: Comment: Thank you for your reply. The main issue with generalizing to arbitrary objectives is that a worst-case arbitrary objective can do something uninteresting, e.g. take the value c no matter what, where obviously our result is not tight. It is known that $E_P[f(X)] - E_Q[f(X)] \leq TV(P,Q)$ for all functions f bounded by 1/2, and equality holds for the function $f^*(x) = 1/2$ if $P(x) \geq Q(x)$ and $f^*(x) = -1/2$ otherwise. We can use this to show that equality will hold for Lemma 2 when the objective function and the mechanism, combined, look like this function, i.e., $f^*(x) = \mathcal{O}(x,M(x))$, with appropriate re-scaling when V is not 1. This would give a non-trivial sufficient condition for functions beyond welfare and revenue. We will also include partial bounds for Theorem 2.
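The equality condition described in the comment above (the maximizing function $f^*(x) = 1/2$ where $P(x) \geq Q(x)$ and $-1/2$ otherwise attains the supremum characterization of TV distance) is easy to check for discrete distributions. A small sketch of our own, with arbitrarily chosen distributions:

```python
def tv(P, Q):
    """Total variation distance between two discrete distributions (dicts)."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.2, 1: 0.3, 2: 0.5}

# f*(x) = 1/2 where P(x) >= Q(x), and -1/2 otherwise
f_star = {x: 0.5 if P.get(x, 0.0) >= Q.get(x, 0.0) else -0.5
          for x in set(P) | set(Q)}

gap = (sum(p * f_star[x] for x, p in P.items())
       - sum(q * f_star[x] for x, q in Q.items()))

assert abs(gap - tv(P, Q)) < 1e-12  # E_P[f*] - E_Q[f*] equals TV(P, Q)
```

For these distributions both quantities equal 0.3; any bounded objective-mechanism pair that "looks like" f* (after re-scaling by V) makes the Lemma 2 bound tight, which is the sufficient condition the authors describe.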
Summary: The paper studies a robust auction design problem for multiple items. In the model, the authors assume that they can access a (predicted) valuation distribution over all agents, and that the total variation distance between the predicted distribution and the actual distribution is bounded. The goal is to design a truthful mechanism such that, for any given objective, the expected performance is robust with respect to the total variation bound. Both DSIC and BIC mechanisms are considered in the paper. For the DSIC setting, the paper shows that when the total variation distance is at most $\delta$ and the length of the range of the objective function is at most $V$, if there exists an $\alpha$-approximate mechanism (for the special case that $\delta=0$), using the mechanism directly returns an expected objective of at least $\alpha OPT - (1+\alpha) V \delta$. For the BIC setting, the authors prove a similar theorem. They show that the difference between the objective obtained by a mechanism on two close valuation distributions is at most $V\delta$. Several applications are mentioned in the paper, and the authors illustrate how their ideas can be applied to these concrete applications. Strengths: The paper extends previous results on robust auction design and shows that for any objective function, as long as its range is bounded, we can easily obtain a mechanism robust with respect to the total variation distance. This result is interesting, and the basic idea might be useful in many other mechanism design settings. Weaknesses: One shortcoming of the proposed result is that the mechanism still does not have a performance guarantee when the total variation distance is large. Maybe the authors could borrow some ideas from learning-augmented algorithms and find an efficient way to combine the predicted mechanism and the traditional worst-case mechanism, such that the mechanism is still competitive even if the total variation distance is large.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there any negative result for the model? For example, could you give a concrete hard instance such that the difference is at least $V\delta$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and question. Indeed, when the total variation is large, our results don’t have any bite, which, of course, does not imply that some sort of robustness is impossible to show. The approach of using learning-augmented algorithms that the reviewer suggests is a great direction for future work. Regarding your question, “Is there any negative result for the model? For example, could you give a concrete hard instance such that the difference is at least $V \delta$?”, see the “global” response. --- Rebuttal Comment 1.1: Comment: I have gone through the hard instance stated in the global rebuttal. My question has been addressed.
Rebuttal 1: Rebuttal: We thank all reviewers for the thorough reviews and helpful comments. We will incorporate the valuable suggestions from all reviewers in the final version of this paper. A common question that reviewers acoM, XhLk, and yCAy ask is whether our various bounds are tight. One can construct trivial tight examples for Lemma 2 and Theorem 1, at least for the case of revenue and welfare. For instance, for the case of a single agent, letting $\mathcal{T} = [0,V]$, consider the case that the distribution P is a point mass at V, and distribution Q takes the value V with probability $1-\delta$, and zero otherwise. The TV distance between P and Q is exactly $\delta$. Now, consider the simple mechanism M that posts a price of V. Its revenue/welfare under P is V, and its revenue under Q is $(1-\delta)V$. One can construct such simple examples to show tightness for Lemma 2 and Theorem 1 in general. For BIC (e.g. Thm 2) it seems slightly trickier to construct tight examples (where one loses both in the BIC constraint and the revenue). We’d be happy to include such lower-bound examples in the final version of the paper.
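The tight example above is simple enough to verify numerically. A quick sketch of our own (with V = 1 and δ = 0.1 chosen arbitrarily), representing each distribution as a dict from value to probability:

```python
V, delta = 1.0, 0.1
P = {V: 1.0}                      # point mass at V
Q = {V: 1.0 - delta, 0.0: delta}  # V w.p. 1 - delta, zero otherwise

def tv(P, Q):
    """Total variation distance between two discrete distributions."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(v, 0.0) - Q.get(v, 0.0)) for v in support)

def posted_price_revenue(dist, price):
    # the bidder buys iff her value is at least the posted price
    return sum(p * price for v, p in dist.items() if v >= price)

assert abs(tv(P, Q) - delta) < 1e-12     # TV(P, Q) is exactly delta
rev_gap = posted_price_revenue(P, V) - posted_price_revenue(Q, V)
assert abs(rev_gap - V * delta) < 1e-12  # revenue loss is exactly V * delta
```

The posted price of V earns V under P and (1 - δ)V under Q, so the loss matches the Vδ term in Lemma 2 and Theorem 1, as claimed.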
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers the design and performance guarantees of various mechanisms under prior distributions, and aims to provide a general account of what happens to these mechanisms and their guarantees when these (joint) prior distributions are perturbed. They use the definition of TV distance in terms of the largest difference in probabilities over all events and combine it with the assumption that the mechanisms in question have bounded values and payments in order to argue that expected values, rewards, and incentives are only incrementally affected by small perturbations in TV distance. This observation allows for the 'robustification' of a number of prior results. The authors present some more technically involved claims for settings involving product prior distributions and Markov random fields. Strengths: This work considers a natural question, and aims to provide a systematic answer. The applications and results are a combination of recovering prior robustness results and extending prior non-robust mechanisms and results to nearby joint prior distributions. Weaknesses: Upon some reflection it makes sense that the robustness guarantees the authors consider should contribute additive terms which depend on the maximum payment or reward in a mechanism. But for the bounds presented there are frequently other factors in the additive terms. This paper would benefit from a discussion of lower bounds, and more generally from arguments that the forms of these guarantees make sense. Many of the claims are not stated along with all of the relevant conditions and caveats, for instance restrictions on boundedness and on the supports of distributions. The proofs of some central claims are intuitive and straightforward, and at the same time many claims are lacking outlines of their proofs.
The authors herald their usage of a "fundamental duality property of total variation distance," but it is unclear in which proofs this duality is being used, or the sense in which it is being employed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Where is the crux of your invocation of duality, and how are you using the connection between these characterizations of TV distance? Which of your robustness claims readily admit lower (upper?) bounds? Possible corrections: L195-196: Should Omega additionally be a measurable subset in order to be a standard Borel space? L213: is the supremum necessary? L244: "inequality is because" L310: epsilon used before it is introduced L376: "distributions" L384: is the Omega here intended to mean that there is some constant for which the claim holds? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Some of the theorem statements seem to be missing quantifiers and conditions, as well as informal overviews of their methods of proof. It would be helpful to have more discussion of which dependences are necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and questions. We will update our theorems and lemmas, whenever applicable, to state the relevant conditions/restrictions necessary. Below we answer your questions. - Where is the crux of your invocation of duality, and how are you using the connection between these characterizations of TV distance? The key duality property of TV distance refers to the fact that it has two equivalent characterizations: 1) the maximum over bounded functions of the difference of expectations, and 2) the minimum over couplings of the probability that the two associated random variables are different. This property is presented in Lemma 1, and it is distilled in Lemma 2 in the context of mechanism design. Specifically, the result in Lemma 2 is really a form of (weak) duality that tells us that the difference between expected objective functions is bounded by TV distance. Its proof illustrates how we use the minimum coupling characterization of TV distance from Lemma 1. This means that we are invoking duality of TV distance anywhere we are using Lemma 2. Lemma 2 is crucially used in both Theorem 1 (DSIC robustness) and in Theorem 2 (BIC robustness). Moreover, in the applications, we use it in Corollary 1 (Prophet inequality) and marginal robustness (Lemma 10, towards proving Theorem 4), in Lemma 7 in order to prove Lemma 5 ($(\epsilon,q)$-BIC to $(\epsilon + nqH)$-BIC reduction), and in the proof of Theorem 3 (application for BIC mechanisms). Note also that although the duality property of TV distance is related to notions like linear programming duality, we do not use any such form of duality to prove Lemma 2. Rather, Lemma 2 itself is proving a form of duality. We hope this clarifies our use of “duality”, and we are also happy to clarify this in the paper. - Which of your robustness claims readily admit lower (upper?) bounds? See the “global” response for an answer to this question. 
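The two characterizations of TV distance invoked above can be checked numerically on a small discrete example. The following sketch is purely illustrative (the distributions `p` and `q` are arbitrary and not from the paper); it verifies that the half-L1 formula, the maximizing bounded test function, and the minimal coupling all yield the same value:

```python
# Two discrete distributions on the same finite support (illustrative values)
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# TV distance as half the L1 distance between the probability vectors
tv = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Characterization 1 (dual): maximize E_p[f] - E_q[f] over functions f
# bounded by 1/2; the maximizer puts +1/2 where p > q and -1/2 elsewhere
f = [0.5 if pi > qi else -0.5 for pi, qi in zip(p, q)]
dual_value = sum(fi * pi for fi, pi in zip(f, p)) - sum(fi * qi for fi, qi in zip(f, q))

# Characterization 2 (coupling): minimize Pr[X != Y] over couplings of
# (p, q); the optimal coupling keeps mass min(p_i, q_i) on the diagonal
coupling_value = 1.0 - sum(min(pi, qi) for pi, qi in zip(p, q))
```

For an objective bounded in $[0, H]$ (e.g., payments), rescaling the test function by $H$ turns this identity into the kind of bound the rebuttal attributes to Lemma 2: the difference in expected objectives under two priors is at most $H$ times their TV distance.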
- L195-196: Should Omega additionally be a measurable subset in order to be a standard Borel space? Yes, $\Omega$ must be measurable. This will be clarified in the paper. - L213: is the supremum necessary? The supremum used in the dual characterization of TV distance could be replaced by a maximum, i.e., the maximizing function exists under our assumptions. This will be changed in the paper. (The supremum used to define the bound of 1/2 on the functions is necessary.) - L244: "inequality is because". L310: epsilon used before it is introduced. L376: "distributions". Thank you for catching these; we fixed them. - L384: is the Omega here intended to mean that there is some constant for which the claim holds? Correct. --- Rebuttal Comment 1.1: Comment: Dear authors, Your message has been noted. The decision on your paper will be based on my discussion with the reviewers. We will reach out to you should we require further clarifications. Regards,
Performance Scaling via Optimal Transport: Enabling Data Selection from Partially Revealed Sources
Accept (poster)
Summary: In this paper, the authors address the problem of data selection, acknowledging that complete data availability is often not possible, and data can only be obtained from specific providers. They recognize that different data sources may have varying impacts on the performance of the model, emphasizing the importance of optimizing the data selection strategy to enhance model training. To achieve this goal, the authors leverage optimal transport, validation data, the target model, and scaling laws to allocate the selection budget effectively across different data sources. The experimental results presented in the paper validate the effectiveness of the proposed method.
Overall, this paper offers a solid contribution to the field of data selection. Despite the presence of certain strong assumptions, the insights provided are valuable for enhancing the data selection process. Strengths: This paper is well-written, presenting a clear and logical flow of ideas, along with a comprehensive description of the method employed. The setting of the study is intriguing, considering the current landscape where data providers offer limited options for data access. Exploring the optimal combination within a constrained budget is a valuable contribution to both academia and industry research. Weaknesses: 1. While the chosen setting is interesting, it may be somewhat impractical since scenarios where different data providers offer data for the same task with identical distribution are uncommon. The authors implicitly assume that data from all providers share the same distribution, which might weaken the persuasiveness of the results. It would be beneficial if the authors could present a real-world scenario to support their assumptions. 2. The assumption that the selection model has access to validation data with the same distribution as the testing data might not be practical in many cases. Often, the real testing data and its distribution are unknown, making this assumption less realistic. 3. Some of the figures presented in the paper are difficult to comprehend. For instance, Figure 5 lacks clarity in terms of color interpretation, as well as the meaning of the x and y axes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The assumptions are a bit strong and may limit the method's usefulness in practical settings.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
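As a side note on the mechanics the summary describes, the OT-based scoring of sources against validation data can be sketched in one dimension, where the Wasserstein-1 distance between equal-size empirical samples reduces to the mean absolute difference of their sorted values. Everything below (source names, distributions, the `a1 * OT + a0` form) is hypothetical and only illustrates the idea, not the paper's actual estimator:

```python
import random

def wasserstein_1d(x, y):
    """W1 between two equal-size 1-D empirical samples: the mean absolute
    difference of their sorted values (i.e., the quantile coupling)."""
    assert len(x) == len(y)
    return sum(abs(a - b) for a, b in zip(sorted(x), sorted(y))) / len(x)

rng = random.Random(0)
validation = [rng.gauss(0.0, 1.0) for _ in range(1000)]
sources = {
    "A": [rng.gauss(0.0, 1.0) for _ in range(1000)],  # matches validation
    "B": [rng.gauss(2.0, 1.0) for _ in range(1000)],  # shifted distribution
}

# A smaller distance suggests data better aligned with the target task,
# which a linear predictor of the form loss ~ a1 * OT + a0 would favor
distances = {name: wasserstein_1d(s, validation) for name, s in sources.items()}
```

Under this sketch, source "A" scores a much smaller distance to the validation sample than the shifted source "B", so a budget allocator driven by such a predictor would prefer it.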
Rebuttal 1: Rebuttal: > *"impractical since scenarios where different data providers offer data for the same task with identical distribution are uncommon... implicitly assume that data from all providers share the same distribution..."* (*Weaknesses: 1*) **Re:** We appreciate the valuable feedback on the presentation of this manuscript. **We do not make assumptions about data being identically distributed.** **We rely on the fact that the distributions are different so that the discrepancy measures (such as Optimal Transport) can be leveraged to provide information from data in the construction of performance predictors.** This is also under the technical challenge of scaling laws, where the scaling ratios vary for data of different distributions. We then tackle it with the organic integration of the OT-based performance predictor to construct a self-adaptive pipeline, automatically fitting scaling parameters for the data distribution based on the prediction tools. In this work, we refined our scope to data of the same type (uni-modal): for example, images are of the same format (i.e., resolution or the number of color channels), though we note this framework is capable of extending to multi-modal data (e.g., by aligning the distribution on the joint embedding space of image and text). Our work provides the groundwork for future papers that might explore modality-misaligned data sources. We apologize for the potential lack of clarity and the confusion it may have caused. **We will thoroughly proofread our manuscript to improve readability and prevent potential misperceptions.** *** > *"The assumption that the selection model has access to validation data with the same distribution as the testing data might not be practical in many cases.... the real testing data and its distribution are unknown, making this assumption less realistic."* (*Weaknesses: 2*) **Re:** Thanks for the interesting comment, and we would be happy to discuss it further.
In general practice, data acquisition pipelines typically randomly partition the validation data into non-overlapping parts and use them for testing and validation purposes, respectively. In such cases, validation data and test data are i.i.d. **We also followed this scheme in this work.** **To implement the tools proposed in this method, the practitioner would need to first curate a validation dataset that represents the target tasks they expect the collected data to perform well on. This dataset does not need to be large, but should be balanced so that it can represent all the target objectives well.** This is generally perceived as plausible for industrial practitioners. For example, in state-of-the-art automatic speech recognition (ASR) tasks, a number of datasets are available consisting of many audio sources (e.g., accents, recording quality, environmental conditions), and **the practitioners are able to curate a validation dataset representing their target user groups and scenarios** (e.g., North American market, home environment) on which they hope their applications will perform best. Due to the large scale of samples, training the model each time is resource-intensive. Finding an ideal combination of data for training on the target tasks can be prohibitively expensive if conducted with manual searches. **The tools proposed in this work would provide a complete landscape of target model performance for any data composition on any scale (data quantity), helping inform practitioners in deciding what type of data to acquire and by how much so that the objectives are best met.** ***[We are adding this as a set of NEW experiments with additional results provided in the summarization rebuttal to all reviewers.]*** Besides, in related research on data valuation *(ref [3-8] in the manuscript)* or coreset selection *(ref [2][20][21] in the manuscript)* (cited and discussed in this work), the availability of such validation datasets is also assumed.
**The assumptions made in this work do not exceed those of existing research and align well with practical needs in industrial applications.** *** > *"Some of the figures presented in the paper are difficult to comprehend. For instance, Figure 5 lacks clarity in terms of color interpretation, as well as the meaning of the x and y axes."* (*Weaknesses: 3*) **Re**: Figure 5 is a qualitative figure that is provided to facilitate the reader’s interpretation of how this framework works. We apologize if it ends up having the opposite effect and adds to the confusion. ***[We provide a full response to this question in a separate comment to all reviewers. (R4)]*** We will add more explanations to avoid possible misunderstandings. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. They've mostly addressed my concerns. However, when considering practical importance, relying solely on the fact that other studies make similar assumptions isn't enough to prove their point. Therefore, I'm maintaining my current score. --- Rebuttal 2: Title: Rebuttal period ending: we await your feedback! Comment: Dear Reviewer WiXX, As the rebuttal/author discussion period is closing, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it. It would be very much appreciated if you could once again help review our responses and additional results and let us know if these address or partially address your concerns and if our explanations are heading in the right direction. Please also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your valuable feedback! Kind Regards, Authors of Submission2041
Summary: Developing ML systems typically requires collecting data from multiple sources. A natural question is how much data to collect from each source. This paper proposes a two-stage estimator that (1) estimates the relationship between data set proportions vs validation loss using Optimal Transport and then (2) optimizes the proportion of each source to emphasize in a dataset given a fixed budget. They validate aspects of the framework on several settings to demonstrate improved performance from good data selection decisions. Strengths: The topic is timely and of growing importance. The optimal transport perspective on data selection is novel and interesting. Weaknesses: 1. The paper makes some confusing and potentially incorrect statements regarding the related literature, particularly with power laws. For example, - Abstract: “these scaling functions are black-box …”. Neural scaling laws are highly interpretable and give clear explanations of how data set size relates to model performance. Moreover, I do not see how the proposed work is less black-box than a neural scaling law. - The general learning curve assumed in this work is a logarithmic model of “performance $= -\alpha \log N + C$” where $\alpha, C$ are parameters to learn. While logarithmic learning curves have been used to model performance in the past, the current literature typically considers power laws. Moreover, the cited work in this part of the paper, [1], specifically discusses power laws and how they may be more effective than log learning curves (e.g., Fig 23 in [1]). While this paper is free to use a logarithmic model for ease of implementation or other design choices, the surrounding discussion is unclear about the motivation of the log learning curve as well as the implications of using log curves vs power laws in this setup. 2.
The problem of understanding how much data to collect given a fixed budget or given a performance threshold from multiple sources has been studied in several recent prior works, for example [2, 3]. It would be important to differentiate this work from this prior literature (e.g., the OT framework). 3. The baselines considered in the numerical experiments are mostly simplified versions of the proposed method. Pertaining to the above 2 points, it would be important to validate the design decisions against more relevant related methods, for example, the data selection methods in [2, 3], or at the very least, a meaningful comparison against power law-based performance estimation strategies. 4. While the OT-based estimator in eq. (1) is interesting, it is not well-motivated in this paper. I appreciated the interpretation of $a_1$ and hope to read an interpretation of $a_0$ as well. Furthermore, while bounds on validation loss w.r.t. OT have been studied, how tight are these bounds? This is particularly important because OT is used as an estimator rather than a bound, suggesting either (i) that OT can very tightly bound validation loss or (ii) that the constant term $a_0$ is carrying a lot of the water in this bound. This is particularly interesting because $a_0$ is assumed to be independent of data set size. 5. If I understand correctly, in the numerical experiments, the initial pilot training data set is fixed to 55% of the full data set size. This generally makes the learning curve estimation problem significantly easier as we are well in the power law stage of the learning curve at this point. Moreover, this experiment setup weakens the motivation of having access to only a small pilot data subset. It would be more interesting to consider meaningfully smaller data fractions. [1] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models.
arXiv preprint arXiv:2001.08361, 2020. [2] Tejero, Javier Gamazo, Martin S. Zinkernagel, Sebastian Wolf, Raphael Sznitman, and Pablo Márquez-Neila. "Full or Weak annotations? An adaptive strategy for budget-constrained annotation campaigns." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11381-11391. 2023. [3] Mahmood, Rafid, James Lucas, Jose M. Alvarez, Sanja Fidler, and Marc Law. "Optimizing data collection for machine learning." Advances in Neural Information Processing Systems 35 (2022): 29915-29928. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the above weaknesses. Most importantly: 1. How does the log learning curve estimation framework in the proposed setup compare vs power law-based estimation strategies? Can we use a power law estimator rather than a log one? How would that affect the empirical performance? 2. Can you give a better interpretation of the OT-based estimator? For example, it would be nice to read out an OT-based bound on validation loss and then show how the proposed estimator’s components capture the different elements of the bound. 3. How does the methodology compare against the related prior literature on estimating and optimizing data set sizes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are reasonably discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
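The contrast the reviewer draws between the two curve families can be made concrete: a power law $L = aN^{-\gamma}$ is linear in $(\log N, \log L)$, while the logarithmic model $L = -\alpha \log N + C$ is linear in $(\log N, L)$. A minimal sketch on synthetic data (the constants `a` and `gamma` are illustrative, not from the paper):

```python
import math

def linear_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic power-law learning curve: L = a * N^(-gamma)
a, gamma = 2.0, 0.5
N = [100, 200, 400, 800, 1600, 3200]
L = [a * n ** -gamma for n in N]
logN = [math.log(n) for n in N]

# Power-law fit: regress log L on log N; slope = -gamma, intercept = log a
slope, intercept = linear_fit(logN, [math.log(v) for v in L])
gamma_hat, a_hat = -slope, math.exp(intercept)

# Logarithmic-model fit: regress L itself on log N (L = -alpha * log N + C)
neg_alpha, C = linear_fit(logN, L)
```

On exactly power-law data the log-log regression recovers $\gamma$ and $a$, whereas the logarithmic model leaves systematic curvature in its residuals; fitting in log space is thus equivalent to assuming a power law rather than a log curve.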
Rebuttal 1: Rebuttal: > *"This paper proposes a two-stage estimator that (1) estimates the relationship between data set proportions vs validation loss using Optimal Transport and then (2) optimizes the proportion..."* (*Summary*) **Re:** The authors would like to thank the reviewer for providing detailed and in-depth feedback and comments on the manuscript. **We would like to first clarify a potential misperception and the confusion it may have caused.** This work proposes a two-stage scheme for **performance prediction (Stage I) and projection onto large scales (Stage II)**, referred to as **"performance scaling with data composition"** and **"performance scaling with data quantity"** in the manuscript, **respectively. The term "performance scaling" may have been somewhat overloaded, causing unnecessary ambiguity in different contexts.** ***[Due to its critical importance and the space limit here, we are providing the full response in a separate comment below (C1).]*** *** > *"...some confusing and potentially incorrect statements regarding the related literature, particularly with power laws. For example, Abstract: “these scaling functions are black-box …” "* (*Weaknesses: 1.1*) **Re:** The authors acknowledge the responsibility for the choice of phrases that may have caused unnecessary ambiguities under different contexts and apologize for the confusion it may have caused. **As discussed above, we may have overloaded the term "performance scaling", using it with different meanings in contexts outside of scaling laws (w.r.t.
data quantity/data scales).** **We agree with the reviewer that these scaling laws are intuitive and interpretable, and "black-box functions" are references to the use of non-informative surrogates in representations of the relationship between data composition from different sources and the resulting model performance ("performance scaling with data composition").** ***[Due to its critical importance and the space limit here, we are providing the full response in a separate comment below (C2).]*** *** > *"... the current literature typically considers power laws... [1]... specifically discusses power laws and how they may be more effective than log learning curves..."* (*Weaknesses: 1.2*) **Re:** Thanks for the excellent comment. We are appreciative of your insights and pointing out our negligence in elaboration of equations. **In short, we are using power laws rather than log functions. It looks like log-linear relationships because we brought everything to log space for better numerical properties**–we took log operations on both sides of the equation and the power law in the original space became log-linear in the log space. ***[Due to its critical importance and the space limit here, we are providing the full response in a separate comment below (C3).]*** *** > *"...it would be important to validate the design decisions against more relevant related methods, for example, the data selection methods in [2, 3], or … against power law-based performance estimation strategies."* (*Weaknesses: 2, 3*) **Re:** We appreciate the reviewer for providing additional references and we would be glad to discuss them. At the time of submission, we were aware of [3] and cited a related paper [4] while [2] was published after our submission. 
**These papers are indeed interesting and provide inspiration for the line of research on data selection/data acquisition, though, we would like to point out that their scope and target problems are different from ours and the conceptual and technical contributions are non-overlapping.** ***[Due to its critical importance and the space limit here, we are providing the full response in a separate comment below (C4).]*** *** > *"... while bounds on validation loss w.r.t. OT have been studied, how tight are these bounds? ... suggesting either (i) that OT can very tightly bound validation loss or (ii) that the constant term is carrying a lot of the water in this bound."* (*Weaknesses: 4*) **Re:** This is also an excellent question. We would appreciate the chance to thoroughly explain about it. ***[Due to its critical importance and the space limit here, we are providing the full response in a separate comment below (C5).]*** *** > *“...the initial pilot training data set is fixed to 55% of the full data set size… makes the learning curve estimation problem significantly easier …weakens the motivation of having access to only a small pilot data subset… consider meaningfully smaller data fractions.”* (*Weaknesses: 5*) **Re: We apologize for the misunderstanding. 55% is for the part of work on predicting model performance on data composited from multiple data sources with arbitrary combinations (i.e., the proportion of data from each source), which is irrelevant to data quantity.** When fitting this relationship, we limit the proportion of data from each data source to no more than 55% such that we can test the **extrapolation performance** of the fitted predictors–to predict the performance of data compositions where the proportion of data from a data source exceeds 55%. 
Since the actual performance of these data compositions is not covered in fitting the predictors, we will be able to examine whether these fitted relationships suffer from overfitting that leads to excessive deviations in extrapolation tasks. **This is the first part of the work, where we construct predictors on the pilot data with limited samples. We set the size of the pilot data to 10%-20% of the larger data scales (data quantity).** **In the later part of the work, we extend the predictions to project them onto larger data scales (data quantity), where the target data scales are 5- to 10-fold the size of the pilot data.** E.g., for CIFAR-10, we assume access to 1k samples and project the performance to larger scales of 2k-10k. For ImageNet100, given 10k samples, we project performance to up to 50k samples. For the BDD100K dataset, we project from 1K samples to up to 5K samples. --- Rebuttal Comment 1.1: Title: C1: “performance scaling” and “scaling laws” Comment: > *"This paper proposes a two-stage estimator that (1) estimates the relationship between data set proportions vs validation loss using Optimal Transport and then (2) optimizes the proportion..."* (*Summary*) **Re:** The authors would like to thank the reviewer for providing detailed and in-depth feedback and comments on the manuscript.
**We would like to first clarify a potential misperception and the confusion it may have caused.** This work proposes a two-stage scheme for **performance prediction (Stage I) and projection onto large scales (Stage II)**, referred to as **"performance scaling with data composition"** and **"performance scaling with data quantity"** in the manuscript, **respectively.** **The first stage is to obtain a functional relationship between data composition from different sources and the resulting model performance, which is achieved by leveraging Optimal Transport data distances.** Then the function can be used to construct a predictor for model performance on arbitrary data compositions. **This stage is conducted on the small pilot dataset, and is referred to as "performance scaling with data composition" in the manuscript.** The conceptual contribution is to incorporate data distances (via Optimal Transport) into the representation of the functional relationship. Compared to previous works that rely on fitting non-informative surrogates, the construction of the OT-informed predictors is orders of magnitude faster (efficient and scalable), and the resulting prediction tools fundamentally prevent large deviations from overfitting high-order nonlinear surrogates (robust and reliable). **Then, the second stage is to project the prediction of model performance to target data scales (data quantity) that are much larger than that of the pilot data.** The technical contribution is to organically incorporate the performance predictor with the scaling laws in a parameter-free projection. **This stage is referred to as "performance scaling with data quantity" in the manuscript.** In previous works, a prominent challenge in scaling laws is that the model performance for different data would scale at different rates; thus, constructing a universal predictor for data from arbitrary combinations of sources using fixed scaling laws could lead to unsatisfactory results.
The novel integrated performance projection proposed in this work allows for predicting model performance on any data composition and data quantity in a self-adaptive manner, automatically fitting scaling parameters for the data based on the prediction tools. So far, the main technical challenges for the conceptual problem have been solved. The proposed pipeline with the two-stage scheme would provide a complete landscape of target model performance for any data composition on any scale (data quantity), helping inform practitioners in decision making. **Additionally, we showcase how this can help find the precise optimal operating point, which is formulated as an optimization problem** based on the predicted performance and solved via efficient gradient-based methods. This framework is highly extendable and allows the practitioner to make tradeoffs on what type of data to acquire and by how much so that the objectives are best met. In empirical studies, improved performance is also demonstrated, outperforming comparable baselines in many important aspects. The authors acknowledge the responsibility for the presentation of this work and appreciate having the chance to revise the manuscript accordingly for better readability. **In particular, the term "performance scaling" may have been somewhat overloaded, causing unnecessary ambiguity in different contexts.** We are considering replacing the phrase “performance scaling with data composition” with “functional relationship between model performance and data composition”, and reserving the term “scaling” exclusively for the scaling laws w.r.t. data quantity (data scales). **The authors would appreciate it if the reviewer could recommend phrases that have better clarity in this context.** --- Rebuttal Comment 1.2: Title: C2: black-box scaling relationships Comment: > *"...some confusing and potentially incorrect statements regarding the related literature, particularly with power laws.
For example, Abstract: “these scaling functions are black-box …” "* (*Weaknesses: 1.1*) **Re:** The authors acknowledge the responsibility for the choice of phrases that may have caused unnecessary ambiguities under different contexts and apologize for the confusion it may have caused. **As discussed above, we may have overloaded the term "performance scaling", using it with different meanings in contexts outside of scaling laws (w.r.t. data quantity/data scales).** In the manuscript, we refer to the functional relationship between data composition from different sources and the resulting model performance as "performance scaling with data composition" and refer to the projection of the predicted model performance onto larger data scales (data quantity) as "performance scaling with data quantity", where **the latter has been more commonly associated with the term "performance scaling".** **We agree with the reviewer that these scaling laws are intuitive and interpretable, and "black-box functions" are references to the use of non-informative surrogates in representations of the relationship between data composition from different sources and the resulting model performance ("performance scaling with data composition").** Non-informative surrogates, e.g., rational functions [a], predict the performance solely based on the size of data or its composition ratios (how much from each data source) **while neglecting the information in the content of data.** High-order nonlinear functions are essentially black boxes as the **implication of their parameters becomes impossible to interpret.** For example, [a] predicts the model performance $L$ as $L_{\lambda}(p)=\frac{1}{\lambda_{11} p_1+\lambda_{12} p_2+\cdots+\lambda_{1k} p_k}+\frac{1}{\lambda_{21} p_1+\lambda_{22} p_2+\cdots+\lambda_{2k} p_k}+\cdots+\frac{1}{\lambda_{k1} p_1+\lambda_{k2} p_2+\cdots+\lambda_{kk} p_k}$, where $k$ is the number of data sources, $p=\{p_1, p_2, \dots, p_k\}$ are the proportions of data from each data source, and $\lambda_{ij}$ are the $k^2$ parameters of the surrogate to be fitted. **With these inherently nonlinear rational functions added together, there is no way to associate the parameters $\lambda$ with the effects of data from each source or to interpret how each data source contributes to the predicted model performance.** Again, the authors apologize for the ambiguity and the confusion it may have caused. We thank the reviewer for pointing these out and for the effort in helping improve the presentation of this work. > *[a] Tatsunori Hashimoto. Model performance scaling with multiple data sources. ICML, 2021.* --- Rebuttal Comment 1.3: Title: C3: log-linear vs. power law scaling laws Comment: > *"... the current literature typically considers power laws... Moreover, the cited work in this part of the paper, [1], specifically discusses power laws and how they may be more effective than log learning curves..."* (*Weaknesses: 1.2*) **Re:** Thanks for the excellent comment. We are appreciative of your insights and of your pointing out our negligence in elaborating the equations. **In short, we are using power laws rather than log functions. It looks like log-linear relationships because we brought everything to log space for better numerical properties**: we took the log of both sides of the equation, and the power law in the original space became log-linear in the log space. In existing studies on scaling laws (e.g., [1]), the variable of interest is the validation loss, which is often the cross-entropy loss as in language tasks. In our paper, we center on directly predicting the target model "performance". For practitioners, it is typically the accuracy for classification tasks or MSE (mean squared error) for regression tasks.
**Following the line of research on data selection/data acquisition, we use the residual error as our prediction variable**, which is (100%-accuracy) for classification tasks and MSE for regression tasks. The reduction of residual error is often exponential (e.g., accuracy 90%->99% with residual error 10%->1%), **and working in log space often gives better numerical properties.** Then, when implementing the scaling laws, for the **power law relationship** in the original space given as > $L = a N^{-\gamma}$ **taking the logarithm of both sides, we have** > $\log L = \log (a N^{-\gamma}) = \log a - \gamma \log N$ **where $\log a$ and $-\gamma$ are the constants $C$ and $-\alpha$ in our expressions, and $L$ and $\log L$ are the residual error in the original space and log space, respectively.** **We admit that we were unaware of the line of research that fits scaling laws directly with log-linear relationships, and thus did not pay adequate attention to clearly stating the expressions and their derivations to distinguish them from alternative forms of scaling laws.** We appreciate the reviewer for providing us with additional information on current investigations on scaling laws (Figure 23 is especially helpful to us for a better understanding of their difference). The authors apologize for the confusion and will improve the presentation. --- Rebuttal Comment 1.4: Title: C4: Comparison to existing data selection methods Comment: > *"...validate the design decisions against more relevant related methods… data selection methods in [2, 3], or … against power law-based performance estimation strategies."* (*Weaknesses: 2, 3*) **Re:** We appreciate the reviewer for providing additional references and we would be glad to discuss them. At the time of submission, we were aware of [3] and cited a related paper [4], while [2] was published after our submission.
**These papers are indeed interesting and provide inspiration for the line of research on data selection/data acquisition; however, we would like to point out that their scope and target problems are different from ours, and the conceptual and technical contributions are non-overlapping.** **Both of these works are built for iterative data acquisition**, where the data collector aims to acquire data in an adaptive manner and gradually improve the strategy until the target amount of data is collected or other end goals are reached (e.g., time). **These frameworks aim to collect the best set of data at the end, not to directly predict the target model performance on all data compositions from the beginning.** This is a different field of research. There are tasks where such settings fit the need (such as the image annotation task [2] is built for), but they are distinct from the data market problem our work targets. **We focus on one-shot decision processes–given a small amount of pilot data (e.g., samples), we aim to directly provide the complete landscape of target model performance for any data composition on any scale (data quantity) such that the practitioner can be informed to decide the data acquisition strategy.** This is also true for the industrial instances that this work intends to apply to. **For example, training large models typically cannot accommodate changing training data during the process**. For instance, in state-of-the-art automatic speech recognition (ASR) tasks, a number of datasets are available consisting of many audio sources (e.g., accents, recording quality, environmental conditions); practitioners can curate a validation dataset representing their target user groups and scenarios (e.g., North American market, home environment) on which they hope their applications will perform best. Due to the large scale of samples, training the model each time is resource-intensive.
Finding an ideal combination of data for training on the target tasks can be prohibitively expensive if conducted with manual searches. The tools proposed in this work would provide a complete landscape of target model performance for any data composition on any scale (data quantity), helping inform practitioners in deciding what type of data to acquire and by how much so that the objectives are best met. ***[We are adding this as a set of NEW experiments with additional results provided in the summarization rebuttal to all reviewers (R2).]*** **Iterative data acquisition frameworks are not designed to accurately predict target model performance from the beginning.** Both [2] and [3] use surrogate models to represent the relationship between a utility function (e.g., model performance) and the amount and proportion of data acquired from each source. In these works, [2] uses a Gaussian Process (GP) and [3] uses Kernel Density Estimation (KDE), **which are both non-informative surrogates**–i.e., they predict the performance solely based on the size of data or its composition ratios (how much from each data source) while neglecting the content of the data. **KDE is a nonparametric method and its predictions are simply interpolations smoothed by the kernel function, while GP’s predictions are based on the estimated mean performance of each data source and the covariance between each pair of data sources, resembling quadratic predictors (the quadratic baselines in our work).** Both of these methods need to be fitted on model performance from a substantial number of training runs on different data to function properly, and in both [2] and [3], **these models rely on adaptive improvements of their accuracy during the iterative data acquisition process.
On a small amount of data (such as the pilot dataset considered in our work), their initial estimates are inaccurate and not intended for making final selections.** **Data selection via performance prediction based on scaling laws considers *only* the data size and not its composition** (which data sources it is from and/or by how much). As far as we are aware, [5] (cited in our work) provides a benchmark result on this strategy ("datasize" baseline). **Given that its performance is subpar compared to other baselines which consider both data composition and data size, we omit it in this work.** > *[4] Mahmood, Rafid, et al. How much more data do I need? Estimating requirements for downstream tasks. CVPR, 2022.* > *[5] Tatsunori Hashimoto. Model performance scaling with multiple data sources. ICML, 2021.* --- Rebuttal Comment 1.5: Title: C5: OT bounds and performance prediction Comment: > *"... while bounds on validation loss w.r.t. OT have been studied, how tight are these bounds? ... suggesting either (i) that OT can very tightly bound validation loss or (ii) that the constant term is carrying a lot of the water in this bound."* (*Weaknesses: 4*) **Re:** This is also an excellent question. We would appreciate the chance to thoroughly explain it. Starting from the classic result on **Kantorovich-Rubinstein Duality** (KR-duality [a]) $W(p, q) = \inf\_{\pi\in\Pi(p,q)} E\_{(x,y)\sim\pi} [\|x - y\|\_{2}] = \sup\_{\|h\|\_L\leq 1} \left[ E\_{x\sim p} [h(x)] - E\_{y\sim q} [h(y)] \right]$ which gives that the 1-Wasserstein distance (the Wasserstein distance is the distance defined by Optimal Transport) between two distributions p and q upper bounds the gap between the expected empirical performance of a 1-Lipschitz model h on samples x from p and y from q; for a k-Lipschitz model, the gap is upper bounded by $k \cdot W(p, q)$. **The bound is tight and is attained when the Lipschitz constant k is minimal everywhere for the model on the data manifold of interest.
For modern machine learning problems with neural network models, training error can be reduced to near-zero, and the Wasserstein distance would provide a direct indicator of expected validation performance.** **Yet, in practice, the precise value of the Lipschitz constant is rarely known a priori.** Besides, in practical problems, we do not have access to the actual representations of the underlying distributions p and q; instead, we use empirical distributions of their samples as approximations to the underlying distributions. This introduces **sample noise** into the calculation of the Wasserstein distance. Also, OT problems are solved numerically using the efficient Sinkhorn algorithm, which carries some small approximation error from entropy regularization and adds an **entropy bias** to the computed Wasserstein distance. **Despite the bound being tight in principle, the empirical Wasserstein distance calculated from finite samples and entropy-penalized OT includes noise added to the true Wasserstein distance between the underlying distributions. This noise is typically invariant for the same problem and depends only on the sample size.** Thus, we use an affine transformation to represent the relationship between the empirical Wasserstein distance and the model performance of interest, which is a natural and simple choice. The affine transformation takes 2 parameters that represent the slope and intercept, respectively. **We refer to this approach as "center-scaling"**, where we denote the slope as the "scaling ratio" and the intercept as the "centering constant", corresponding to fitting the terms in the above-described relationship.
**Intuitively, the "scaling ratio" serves as an empirical estimate of the Lipschitz constant on the data manifold, and the "centering constant" fits the invariant noise in the empirical Wasserstein distance and aligns the predictor with the model performance.** These values are important in connecting the empirical performance of the model to the Wasserstein distance between training and validation data, but cannot be obtained analytically. **This work develops a new approach that demonstrates the feasibility of estimating these quantities empirically, successfully constructs predictors, and develops applications based on the estimated relationships.** This work sheds light on a new path that should provide inspiration for future work on data selection, data valuation, performance prediction, etc. We will compile this discussion into a section in the Appendix for the benefit of readers. Additionally, the technical pipeline of this work is built on the novel result of class-wise hierarchical OT distance first proposed in [b], which treats distances between labels as the OT distance between their features, where the analysis resembles the classic Kantorovich-Rubinstein Duality. For detailed derivations, please refer to [c], where comprehensive elaborations are provided in its Appendix. > *[a] David A. Edwards. On the Kantorovich–Rubinstein theorem. Expositiones Mathematicae, 29(4):387–398, 2011.* > *[b] Alvarez-Melis, David, and Nicolo Fusi. "Geometric dataset distances via optimal transport." Advances in Neural Information Processing Systems 33 (2020): 21428-21439.* > *[c] Just, Hoang Anh, et al. "LAVA: Data Valuation without Pre-Specified Learning Algorithms." The Eleventh International Conference on Learning Representations. 2023.*
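The "center-scaling" fit described in the rebuttal above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the distances and errors below are made-up numbers, and the fit is ordinary least-squares regression of measured error onto the empirical OT distance.

```python
import numpy as np

# Hedged sketch of the "center-scaling" step: fit an affine map
#   error ≈ slope * W + intercept
# on a handful of (empirical OT distance, measured validation error) pairs.
# All numbers are illustrative, not taken from the paper.
W_hat = np.array([0.8, 1.1, 1.5, 2.0, 2.6])       # empirical Wasserstein distances
err   = np.array([0.10, 0.16, 0.24, 0.34, 0.46])  # measured residual errors

# Least-squares fit of [slope, intercept]
A = np.stack([W_hat, np.ones_like(W_hat)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, err, rcond=None)

def predict_error(w):
    # "scaling ratio" * distance + "centering constant"
    return slope * w + intercept
```

The slope plays the role of the "scaling ratio" (an empirical stand-in for the Lipschitz constant) and the intercept absorbs the sample-size-dependent noise and entropy bias described above.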
Summary: This paper considers the problem of predicting model performance (and subsequent data selection) under a partially revealed setting. The two challenges are estimating the right proportions as well as extrapolating to dataset scales beyond the observed scales. The paper proposes a two-stage approach called projektor: in the first stage, it predicts model performance as a simple function (either an affine transformation or a quadratic) of the optimal transport distance between the target distribution D_val and the input distribution. In the second stage, the performance is extrapolated using ideas and functional forms from neural scaling laws. The paper also proposes a gradient-based method for data selection using the model performance estimator. Strengths: - Novel technical solution to a well motivated problem. The proposed approach is well-motivated, interesting, and well presented. - Well written overall: motivates the problem well (section 1), contextualizes well within related work (section 2), clear setup (section 3). Weaknesses: - The experiments are a bit disappointing in terms of their practicality. As far as I can tell, the data sources are just different subsets of distributions of fixed datasets (potentially unlabeled or mislabeled). Given the recent developments in the field (training language/diffusion models on large unfiltered data sources), I would have liked to see more experiments on more realistic/noisy data sources. Even for ImageNet, a better data source would be raw images from different online sources (e.g., Flickr). Hence, it's a bit hard to judge the practical utility of the proposed approach (beyond the settings considered in the paper, which I am not sure are settings in practice where people need much better data selection strategies). - Section 5 (evaluation) is a bit hard to follow in terms of setup: what exactly are the datasets used, etc.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In stage two, is the intuition that by fitting the two different scales N0 and N1 in stage 1, you eliminate the need to explicitly estimate the scaling parameters (alpha and C) in the scaling laws? - How are OT distances computed for images? Is it done in some feature space? which one? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
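The reviewer's first question above has a simple arithmetic core: measuring performance at two scales N0 and N1 pins down a power law $L = C N^{-\alpha}$ exactly, so $\alpha$ and $C$ never need to be estimated separately. A minimal sketch of this idea, with illustrative numbers that are not from the paper:

```python
import math

# Sketch (an assumed form, not the authors' code): two measurements of
# residual error L at scales N0 and N1 determine the power law
#   L = C * N^(-alpha)
# exactly, so extrapolation needs no separate parameter fitting.
def extrapolate(N0, L0, N1, L1, N_target):
    alpha = math.log(L0 / L1) / math.log(N1 / N0)
    C = L0 * N0 ** alpha
    return C * N_target ** (-alpha)

# Illustrative numbers: error 4.0 at N=1e3 and 2.0 at N=4e3 imply
# alpha = 0.5, so the prediction at N=16e3 is 1.0.
L_pred = extrapolate(1e3, 4.0, 4e3, 2.0, 16e3)  # -> 1.0
```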
Rebuttal 1: Rebuttal: > *"...experiments are a bit disappointing in terms of their practicality ...would have liked to see more experiments on more realistic/noisy data sources. "* (*Weaknesses: 1*) **Re:** We appreciate the reviewer's crisp understanding of the conceptual narrative of this work, and would like to take this chance to discuss a bit more the considerations in designing the empirical studies for this work. **In short, precisely as the reviewer pointed out, the data selection pipeline developed in this work could potentially apply to a variety of downstream tasks in the real world.** ***[We provide a clear and complete presentation of experiments conducted in this work in the summarization rebuttal to all reviewers (R1).]*** Besides, we are in active collaboration with industrial researchers on a variety of problems. **In collaboration with industrial partners, we succeeded in deploying the proposed data selection pipeline in state-of-the-art automatic speech recognition (ASR) use cases.** ***[We are adding this as a set of NEW experiments with additional results provided in the summarization rebuttal to all reviewers (R2).]*** In ASR tasks, a number of datasets are available consisting of many audio sources (e.g., accents, recording quality, environmental conditions); practitioners can curate a validation dataset representing their target user groups and scenarios (e.g., North American market, home environment) on which they hope their applications will perform best. Due to the large scale of samples, training the model each time is resource-intensive.
**Finding an ideal combination of data for training on the target tasks can be prohibitively expensive if conducted with manual searches.** The tools proposed in this work provide a complete landscape of target model performance for any data composition on any scale (data quantity), helping inform practitioners in deciding what type of data to acquire and by how much so that the objectives are best met. **Together with the experiments presented in the manuscript, we demonstrate a diversity of tasks with real-world instances, practical considerations, and realistic settings, showcasing the versatile capabilities of the proposed framework as well as the significance of the potential impact on both industrial applications and academic research.** *** > *"Section 5 (evaluation) is a bit hard to follow in terms of setup: what exactly are the datasets used, etc."* (*Weaknesses: 2*) **Re:** We apologize for the lack of clarity in the experimental details and will address them properly here. ***[We provide a full response to this question in the summarization rebuttal to all reviewers (R1).]*** *** > *"In stage two, is the intuition that by fitting the two different scales N0 and N1 in stage 1, you eliminate the need to explicitly estimate the scaling parameters (alpha and C) in the scaling laws?"* (*Questions: 1*) **Re: Exactly!** Function fitting is replaced with a direct mapping. Simple, natural, and quite effective in solving the practical challenge. *** > *"How are OT distances computed for images? Is it done in some feature space? which one?"* (*Questions: 2*) **Re:** We followed the standard treatment for calculating distributional discrepancy measures, including the OT distances used in this work.
For vision tasks other than MNIST (on which we directly compute OT distances in pixel space), **we trained a smaller model (ResNet-18) from scratch on the validation data (owned by the data buyer) and then used the output of its penultimate layer (before the final output layer) as the feature space (on which the OT distance is then computed).** The train-from-scratch procedure can also be replaced by fine-tuning off-the-shelf pre-trained feature embedders (such as models pre-trained on ImageNet). Such procedures are more common for language model tasks and less important for vision tasks. For tabular data, generally, no feature embedding is needed and we only normalize the features of different dimensions to the same scale. If the data dimension is very high (e.g., hundreds), dimension reduction (e.g., PCA) or feature selection (e.g., RFE, recursive feature elimination) can sometimes be applied. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. --- Reply to Comment 1.1.1: Title: Rebuttal period ending–we anticipate your feedback! Comment: Dear Reviewer WNMz, As the rebuttal/author discussion period is closing, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it. We are compiling the discussions during the review and revising the manuscript to improve its presentation. To better improve its clarity, it would be very much appreciated if you could let us know whether our responses and additional results address or partially address your concerns about the practicality of its applications and the experimental details, and whether our explanations are heading in the right direction. Please also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your valuable feedback! Kind Regards, Authors of Submission2041
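As a toy complement to the feature-space OT computation described in the reply above: once samples are embedded (e.g., by a penultimate-layer feature extractor), the OT distance is computed between the two empirical point clouds. In one dimension with equal sample sizes, the 1-Wasserstein distance has a closed form, sketched below. This is an illustrative simplification, not the entropy-regularized Sinkhorn computation used in the work.

```python
import numpy as np

# Closed form in 1-D: the empirical 1-Wasserstein distance between two
# equal-size samples is the mean absolute difference of the sorted samples.
# (Assumption: scalar features, equal sample sizes, for illustration only.)
def wasserstein_1d(x, y):
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed"
    return float(np.mean(np.abs(x - y)))

d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # each point shifts by 1 -> d = 1.0
```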
Summary: The paper considers the data acquisition setting where the benefit to model performance from acquiring new streams of training data may be assessed by inspecting limited segments of a candidate corpus, such that one may wish to evaluate the benefit to model performance to support selection from a set of candidate corpora. Such forms of data acquisition evaluation for estimating model performance impact have much prior work; the first claimed novelty in this paper is associated with utilizing forms of optimal transport metrics to improve selecting ratios of data to be served from multiple candidate corpora. This estimate is benchmarked to suggest a resulting improvement to estimates of model performance in comparison to prior work, and then by extending the use of the derived estimates of an ideal ratio to another component of evaluation associated with data scaling laws there is another improvement to estimates of model performance. Strengths: The abstract's claims of significant improvements to the computational costs of the application with use of this approach were suggestive of a material contribution, but I had difficulty evaluating that further from the rest of the writeup. It is intuitive that for real-world application in industry the prioritization of data streams is impactful to the bottom line, such that scalable solutions to more easily allocate training compute would be impactful. Weaknesses: The contribution of the paper associated with incorporation of an optimal transport metric on its own strikes me as much more of an iterative contribution rather than a significant one. I recognize this is common to the field, but I had a hard time getting comfort on the statistical significance of the benchmark findings - even a few more details on experimental setup could have possibly helped. I agree that the figures appear to demonstrate improvement but I don’t think the paper convinced me that it was anything more than suggestive.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Figure 5, do you have any theories as to why, with increasing sample scales, the method appears to increasingly underestimate model performance? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
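As background for the scaling-law component this review refers to: the rebuttal threads describe fitting the power law $L = aN^{-\gamma}$ as a linear regression in log space, $\log L = \log a - \gamma \log N$. A hedged sketch with synthetic, noiseless numbers (not from the paper):

```python
import numpy as np

# Fit L = a * N^(-gamma) by linear regression in log space.
# Synthetic data generated from a known power law for illustration.
N = np.array([1e3, 2e3, 4e3, 8e3, 16e3])   # data scales
L = 5.0 * N ** (-0.5)                      # residual errors, a=5.0, gamma=0.5

# In log space the relationship is linear: slope = -gamma, intercept = log a
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
gamma_hat, a_hat = -slope, np.exp(intercept)  # recovers gamma = 0.5, a = 5.0
```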
Rebuttal 1: Rebuttal: > *“...claims of significant improvements to computational costs of the application with use of this approach was suggestive of a material contribution but I had difficulty evaluating…”* (*Strengths*) ***TL;DR: The scalability improvements of the proposed framework span orders of magnitude.** The methods proposed in this work require fitting as few as 2 parameters, orders of magnitude fewer than existing methods. Figure 3 shows **empirically that the proposed method outperforms the strongest baselines with less than 1/5 of their computational overhead**.* **Re**: Thanks for pointing out a potential source of confusion in the presentation of this work. **Previous approaches rely on fitting parametric functions as surrogates** (e.g., rational functions [1], or Datamodels [2]), which treat the model and data pipelines as a black box. The accuracy of such methods fundamentally relies on the expressive power of the surrogate functions. **The surrogates are non-informative**–i.e., they predict the performance solely based on the size of data or its composition ratios (how much from each data source) while neglecting the content of the data. **Improving their accuracy primarily relies on adding more parameters, which require more repetitions of model training to fit and to mitigate the elevated risk of overfitting.** Under practical limitations on computing resources, accuracy often needs to be traded off against robustness/reliability (using higher-order nonlinear surrogates may cause large deviations due to overfitting) and is generally unsatisfactory. **Our methods leverage data distance (measured by the Optimal Transport distance) to incorporate additional information into the prediction of model performance.
Our simplest model, OTPP/CS, uses only 2 parameters that can be well fitted with a handful of training runs, and it outperforms large parametric models whose parameter counts are quadratic in the number of data sources, demonstrating a substantial advantage over previous approaches. The sharp contrast highlights the remarkable scalability improvements from the proposed approach.** Empirical validations are depicted in Figure 3. OTPP/CS converges to a low prediction error (i.e., high accuracy) after 10 training runs and outperforms non-informative parametric surrogates with up to a quadratic number of parameters even after 50 training runs. The scalability improvements span orders of magnitude. The authors take responsibility for the presentation of this work and appreciate having the chance to revise the manuscript accordingly for better readability. > *[1] Tatsunori Hashimoto. Model performance scaling with multiple data sources. In International Conference on Machine Learning, pages 4107–4116. PMLR, 2021.* > *[2] Ilyas, Andrew, et al. "Datamodels: Predicting predictions from training data." Proceedings of the 39th International Conference on Machine Learning, 2022.* *** > *"...contributions of the paper associated with incorporation of an optimal transport metric ...more of an iterative contribution rather than a significant one."* (*Weaknesses*) **Re**: We thank the reviewer for the effort to help assess the contribution of this work and for the responsibility held to the community. We would like to take this opportunity to share our understanding of the importance of this work and how it is positioned in the field. ***[We provide a full response to this question in a separate comment to all reviewers.
(R3)]*** *** > *"...significance of the benchmark findings - even a few more details on experimental setup could have possibly helped."* (*Weaknesses*) **Re**: As part of the effort to improve the presentation of this paper, ***[we provide a full response to this question in the summarization rebuttal to all reviewers (R1)]***. We apologize for the lack of clarity in the experimental details. We are revising our manuscript in light of your valuable feedback. *** > *“In figure 5 do you have any theories as to why with increasing sample scales the method appears to increasingly underestimate model performance?”* (*Questions*) **Re**: Figure 5 is a qualitative figure provided to facilitate the reader’s interpretation of how this framework works. We apologize if it instead added to the confusion. ***[We provide a full response to this question in a separate comment to all reviewers (R4).]*** We will add more explanations to avoid possible misunderstandings. --- Rebuttal Comment 1.1: Comment: Acknowledged review of your rebuttal.
Rebuttal 1: Rebuttal: ### Summary: **All reviewers recognize the importance of the problem and the conceptual novelty of the proposed framework and original methods.** Reviewers WNMz and WiXX confirm the **solid development** of this work and its **multi-faceted technical contributions**. Reviewers WNMz and WiXX appreciate the **overall presentation** of this work and acknowledge it as well-written, well-motivated, and well-contextualized with a clear and logical flow of ideas and comprehensive descriptions. Reviewers Paf5 and WiXX acknowledge the **significance of potential impacts** on industrial applications. **Reviewers share questions about the experiment settings and their practicality.** Reviewer d24K's summary did not cover the part of this work on predicting model performance for data composed from different combinations of multiple data sources. **Both point to the need for improving the current presentation of the manuscript and including additional discussions.** *** ### In our response, we would like to - **R1.** Clearly present the experiments conducted in this work, **showcasing their diversity and practicality** and clarifying their settings and considerations. - **R2.** **Provide additional results of a NEW set of experiments on data selection for automatic speech recognition (ASR)**, a highly practical application in collaboration with industrial practitioners. - **R3.** Further discuss the **multi-faceted contributions** of this work and their **potential impacts** on both industrial and academic research. - **R4.** Clarify **Figure 5** with additional explanations of its settings and visualization scheme. *** ### R1. Experiments conducted in this work **a. We validate the proposed pipeline in stylized tasks with standard datasets.
On MNIST (simple patterns), IMDB (text data), CIFAR-10 and ImageNet100 (vision tasks)**, we divide the samples into different data sources representing specific data categories, where each source contains only certain classes that are not necessarily exclusive. Then, based on a small amount of pilot data that is considered accessible to the practitioners (i.e., <20% of the target scale), we construct the "projektor" performance prediction tools that visualize the entire performance landscape for any data composition at any data size. We examine the accuracy of these predictions against the actual model performance and compare the error with baselines. Also, we perform data selection based on the predicted performance. We then train the model on the selected data and demonstrate advantageous performance for data selected by "projektor" compared to data selected by baseline methods. **b. We then examine the practical performance of the proposed methods in real-world instances and extended scenarios.** Beyond training from scratch, we apply our framework in **data selection tasks for fine-tuning**, which is of high relevance for pre-trained large models that are attracting growing attention. In particular, we implemented “projektor” to select fine-tuning data from the **autonomous driving dataset BDD100K**, the largest open driving video dataset for multi-object tracking (MOT) and segmentation (MOTS) challenges where we use the image frames, for a **Faster-RCNN model pre-trained on COCO**, a large-scale object detection, segmentation, and captioning dataset. For autonomous driving tasks, trained models are often sensitive to diverse weather conditions which limit the visibility of the road. **Thus, we divide the BDD100K data sources into three different challenging visibility categories: daytime, night, or dawn/dusk.** Similar to previous tasks, we construct our predictors with as few as 1k samples and predict model performance for up to 5k samples. 
Visualized in Figure 1(b), **selecting data based on our predictions achieves significantly higher model performance than random selections, and the predictions are highly accurate**–not deviating from the actual accuracy by more than 0.4%, indicating projektor’s practicality in realistic instances and its extended capability for fine-tuning tasks. Additionally, to model the **effects of data sources with varying data quality**, we included experiments with **added label noise** in a portion of the sources (10%-20%). We projected performance from 1K samples onto larger data scales of 2K-10K and observed (Fig. 4) that the proposed methods **achieve the lowest errors** (MAE scores)–the predicted performance aligns well with the actual performance even in the case of noisy labels, showcasing that the proposed data-distance-based approach **captures the effects of varying data quality well and accommodates such tasks with high practical relevance.** *** ### R2. Additional Results: automatic speech recognition (ASR) For ASR tasks, there are often a number of datasets available, consisting of many audio sources (e.g., accents, recording quality, environmental conditions). **Due to the large scale of samples, training the model each time is considerably resource-intensive.** Finding an ideal combination of data for training on the target tasks can be prohibitively expensive if conducted with manual searches. We consider the case of **fine-tuning a pre-trained Emformer RNN-T model on recordings from LibriSpeech and TEDLIUM-3**, datasets with contrasting speech recording styles–**the former contains clear and stable read speech (scenario 1) and the latter contains spontaneous speech in non-studio environments (scenario 2)**.
Our target was to find the best composition of these data sources to improve the fine-tuned performance on both scenarios **with access to limited data (1% of total recordings, ~1hr).** ***(continuing in the comment below)*** Pdf: /pdf/437dfe0edf1539b2cd4fb472a027d1bc4c726360.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
TabMT: Generating tabular data with masked transformers
Accept (poster)
Summary: This paper introduces TabMT, a new Masked Transformer architecture designed for generating synthetic tabular data. While Transformers are predominantly used in natural language processing (NLP), TabMT demonstrates their effectiveness in dealing with heterogeneous data fields, such as images and tables, and efficiently manages missing data. The authors propose to sample the masking probability from a uniform distribution and predict masked values in random order during generation. By employing advanced masking techniques, TabMT can generate synthetic data with high performance across a wide range of dataset sizes. Moreover, the model proves valuable in privacy-sensitive applications, as it is capable of producing high-quality data while adhering to privacy restrictions. Strengths: The motivation is easy to understand and the problem is important but less explored than common domains. It is not straightforward to apply existing techniques for images and text to tabular data. The experimental designs are comprehensive, including several sections assessing data quality, privacy and sample novelty, missing data, and scaling, which are solid. Weaknesses: The motivations for the designs are not very clearly written. Please consider reorganizing the structure, adding highlighted paragraphs, and moving lines 129-138 to the beginning of the method or merging them with the methodology introduction. The introduction of temperature scaling is not well motivated: is it for privacy purposes, and why does it work? The introduction of private tabular data generation is not detailed. The conclusion claims that "the model is able to function under arbitrary privacy budgets," but I did not see how the budgets are computed, or their definitions, motivations, etc. The same goes for the definition of data "novelty", and the question of why many designs in TabMT help these aspects is not well explained.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The work can benefit from discussing and comparing the work of causal inference using transformers, which is a particular kind of missing data generation problem where privacy protection is also pressing, e.g. [1], [2]. [1] Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation [2] Differentially Private Synthetic Control Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See the weaknesses part Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
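The masking scheme the summary describes (masking probability drawn from a uniform distribution, masked fields later predicted in random order) can be sketched as follows. This is a minimal numpy illustration under our own assumptions, with made-up shapes, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(n_rows, n_fields, rng):
    """Draw one masking rate p ~ Uniform(0, 1) per row, then mask each
    field of that row independently with probability p (True = masked)."""
    p = rng.uniform(0.0, 1.0, size=(n_rows, 1))
    return rng.uniform(size=(n_rows, n_fields)) < p

def generation_order(n_fields, rng):
    """At generation time, fields are unmasked one at a time in a
    uniformly random order."""
    return rng.permutation(n_fields)

mask = sample_mask(4, 6, rng)        # training-time mask for a 4x6 batch
order = generation_order(6, rng)     # field order for one generated row
```

Sampling the rate per row exposes the model to every masking level between "almost fully observed" and "almost fully hidden", which is what makes the same network usable for both imputation and full-row generation.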
Rebuttal 1: Rebuttal: Hi Reviewer EYXT, Thank you for the time you took to read our paper and write your review. **W:** The motivations for designs are not very clearly written. Please consider reorganizing the structure, adding highlighted paragraphs, and moving lines 129-138 to the beginning of the method or merging them with the methodology introduction texts. **R:** We can reorganize these things and add bolding to make things clearer. Lines 129-138 would likely work better at the top of this section. **W:** The introduction of temperature scaling is not well-motivated, is it for privacy purposes and why will it work? **R:** Temperature scaling is often used when generating data from transformers to increase diversity or improve generation quality (see the references below). We can add a few more sentences explaining our learned temperature. Our learned temperature allows the network to sharpen the output logits across the embeddings. Allowing sharpening through temperature gives the model a more effective way of doing so without relying on the unordered embeddings or magnitude changes. This is important for our ordered embeddings since they are interpolations, rather than each embedding being independent of the others. We have confirmed the learned temperature acts in this way, as all learned temperatures have averages well below 1 (sharpening), and nearly all end up below their initialized value. **References:** Temperature use during generation to improve generation quality or diversity: https://arxiv.org/pdf/1904.09751.pdf https://transformer.huggingface.co/doc/gpt2-large (text generation demo with temperature) Temperature during learning: https://arxiv.org/pdf/2002.05709.pdf (hyperparameter which helps sharpen the logits) https://arxiv.org/pdf/2103.00020.pdf (learned temperature instead of a hyperparameter) **W:** The introduction of privacy tabular data generation is not detailed.
The conclusion claims that "the model is able to function under arbitrary privacy budgets," but I did not see how the budgets are computed, or their definitions, motivations, etc. The same goes for the definition of data "novelty", and the question of why many designs in TabMT help those aspects is not well explained. **R:** Lines 189-203 help address this within the paper. There we define the Distance to Closest Record metric (DCR), which has also been used in prior work, and which allows us to explain things further. Perhaps instead of saying “arbitrary privacy budgets”, we should say “produce data with arbitrary privacy scores”. By budget we mean a privacy score (DCR) threshold. The model is able to operate under arbitrary privacy budgets since we are able to walk the privacy v. quality Pareto curves of our model. **Most previous works do not have the ability to change the privacy scores of their generated data after training.** Figure 4 is a good illustration of this concept. Imagine we would like the DCR of our synthetic data to be above some number (the threshold is our budget); we can change the temperatures of our model to achieve this DCR, no matter what the DCR is, up to the point of generating random data. Other models will have a fixed DCR after training, and if that falls below what you desire, then that model becomes unusable for your application. Our model is tunable in this respect in a way most others aren’t and can function across all reasonable thresholds, because the temperature can be tuned to achieve them. We also show that at high quality scores our model achieves higher privacy than models of similar quality. This demonstrates temperature works well to control this tradeoff in our model. DCR is also a good measure of novelty in addition to privacy, because as samples get further away in terms of distance from all points seen during training, we can expect them to generally be more novel. Here’s another quote from the paper related to this.
*By ensuring our model is both private and high quality, we verify that our model has learned the intrinsic structure of the data, and not simply memorized it.* This is what we mean by novelty. If our synthetic dataset is far in terms of distance from the training set, the model could not have simply memorized it. Our model has SoTA privacy and quality scores, meaning it produces novel data which is also high quality. Previous Work in this field using DCR for these purposes: RealTabFormer: https://arxiv.org/abs/2302.02041 TabDDPM: https://arxiv.org/pdf/2209.15421v1.pdf CTabGAN: https://arxiv.org/pdf/2102.08369.pdf Nearest neighbors are commonly examined from the training set, even in other domains, to see if generative models are producing truly novel data and are not memorizing or overfitting (e.g. BigGAN: https://arxiv.org/pdf/1809.11096.pdf) **Q:** The work can benefit from discussing and comparing the work of causal inference using transformers, which is a particular kind of missing data generation problem where privacy protection is also pressing, e.g. [1], [2]. **A:** We can add a discussion of this work within the related work section to talk about the challenges these techniques solve and the different approach they take for it. Our model achieves state of the art generation quality and privacy while having the ability to tradeoff between these arbitrarily, natively learn with missing data present, and condition on arbitrary subsets of the distribution during generation. We are not aware of any prior art which is able to do these things within a singular model, certainly not while also achieving state of the art generation quality. We achieve these results while using an innovative masked generator architecture, and thoroughly validate our architecture up to tens of millions of rows. Thank you very much for the time you spent reviewing and reading our paper. 
We hope these comments can help clarify some of the details of our paper, and why we believe it to be a worthwhile and novel contribution to Tabular Data Synthesis.
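The DCR metric discussed in this rebuttal can be illustrated with a small numpy sketch. This is our own toy version (made-up data, Euclidean distance), not the authors' implementation:

```python
import numpy as np

def dcr(synthetic, train):
    """Distance to Closest Record: for each synthetic row, the Euclidean
    distance to its nearest neighbour in the training set. Low values
    suggest copying of training rows; higher values suggest novelty."""
    diffs = synthetic[:, None, :] - train[None, :, :]   # (n_synth, n_train, d)
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)

train = np.array([[0.0, 0.0], [1.0, 1.0]])
copies = train.copy()                   # a "memorizing" generator's output
novel = np.array([[0.5, 0.5]])          # a sample away from every training row
```

A generator that memorizes the training set scores a DCR of exactly zero on copied rows, which is why thresholding DCR works as a privacy budget in the sense the rebuttal describes.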
Summary: The paper proposes a masked transformer model that can be used to generate tabular data. The authors propose modifications to the masking strategy in the transformer in order to make it more effective for generating data. Empirically, the proposed generator improves performance on various datasets compared to other baselines. Moreover, the paper presents cases of privacy-focused applications, presence of missing values, and large datasets to further depict the usability of the proposed method in real-world situations. Strengths: - The paper is well-written and very clear in making its points. The authors define the problem with the masked transformer for generating tabular data, and present an effective method to overcome the difficulty. The empirical results are well presented and seem promising. Weaknesses: As noted in the paper, the major weakness would be the resources it takes to train the model. For general users, a pre-trained model with learning across multiple tables might be beneficial. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible for the model to generate new categories (or numbers) not present in the train set? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The presented work might not be accessible to regular users, given the amount of resources required to train the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer aH7d, thank you for your rating and the time you took to carefully read and review our paper. **W:** ”As noted in the paper, the major weakness would be the resource it takes to train the model. For general users, a pre-trained model with learning across multiple tables might be beneficial.” **R:** You’re right, a pretrained model which can learn across datasets is definitely something we are interested in and plan to explore in the future. We allude to this in our future work section. **L:** ”The presented work might not be accessible to regular users, given the amount of resource required to train the model.” **R:** The compute used by our model is more than some of the other models we compared against, but we do still believe it should be accessible for most users to train for most use cases. Training on a single GPU takes around 0.5-2.5 hours on most datasets. A strong model for our scaling dataset with tens of millions of rows can be trained within a GPU day. These models as a whole use much less compute than ones used in NLP or Vision, but we still felt it important to mention the compute usage as a limitation, since it is often ignored. **Q:** ”Is it possible for the model to generate new categories (or numbers) not present in the train set?” **A:** Generating new categories is very difficult to do, at the time of writing, we are not aware of any model which can do this well in general. However for ordered or continuous variables our quantization means most values generated are not present in the training set. Additionally, altering the quantizer to be a GMM instead of the default K-Means, or interpolating between the support allows us to generate arbitrary values. Our default model does not do this, since we found the simpler finite support method still gave us strong results overall. 
The ability of our model to condition on any subset of fields during generation means it can also be seamlessly combined with other methods of generating data, for example, to create a joint diffusion-masked model. Prior work is unable to do this. Again, thank you very much for your time and review. We are glad you liked the paper, and we hope it can help researchers and practitioners to examine new research directions and solve more problems.
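The K-Means quantization mentioned in the answer above (mapping a continuous column to a finite token codebook, with the option to map tokens back to values) can be illustrated with a toy 1-D sketch. This is our own minimal version under stated assumptions, not the paper's quantizer:

```python
import numpy as np

def kmeans_1d(values, k, iters=25, seed=0):
    """Tiny 1-D k-means: builds a codebook of k centers for one
    continuous column, so the transformer can treat it as k tokens."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = values[assign == j].mean()
    return np.sort(centers)

def quantize(values, centers):
    """Map each value to the index of its nearest center (tokenize)."""
    return np.abs(values[:, None] - centers[None, :]).argmin(axis=1)

def dequantize(tokens, centers):
    """Map token indices back to representative values."""
    return centers[tokens]

column = np.array([0.0, 0.1, 1.0, 1.1, 5.0, 5.1])
centers = kmeans_1d(column, k=3)
tokens = quantize(column, centers)
```

Because the codebook is finite, generated values are limited to the centers; swapping in a GMM or interpolating between centers, as the rebuttal suggests, would lift that restriction.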
Summary: This paper proposes a new generative model of table-type data based on the transformer, which can address the unique challenges posed by heterogeneous data fields and natively handle missing data. Strengths: 1. TabMT is a simple but effective Masked Transformer design for generating tabular data. 2. The paper highlights the applicability of the model in privacy-focused applications, illustrating TabMT's ability to arbitrarily trade off privacy and quality through temperature scaling. 3. Experiments show the effectiveness of the proposed method. Weaknesses: 1. The TabMT structure has only been modified in terms of the input layer, without improving the multi-layer transformer structure. In other words, this paper only made adaptive improvements to the data format and is not on par with NeurIPS. 2. The application method of the model structure proposed by TabMT is not clear. Categorical and numerical fields use independent inputs, which makes it necessary to determine the type of data input into the model. How to accurately and automatically identify this type is challenging, which also limits the complexity of the table data the model can generate, making it difficult to cope with complex scenarios. The effectiveness of the model should be evaluated under inaccurate category recognition in the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer gJJU, Thank you for taking the time to read and review our paper. **W:** The TabMT structure has only been modified in terms of the input layer, without improving the multi-layer transformer structure. In other words, this paper only made adaptive improvements to the data format and is not on par with NeurIPS. **R:** We have contributed more than changing the input layer within our paper. Our paper’s goal is not to alter the transformer structure as a whole. We construct a Tabular Data Generator which achieves **state of the art quality** using a model design **fundamentally different from existing models**. Specifically, we do so using Masked Transformers, which are very understudied for generation purposes, and we show they suit the problem of tabular data generation very well. We contribute the following:

- We construct a Masked Transformer architecture and training task for the tabular domain.
- We improve upon the traditional masking task itself, and outline a procedure to generate data from this new model. (Masked Transformers are typically poor data generators.)
- We introduce ordered embeddings to help our model deal with numerical values better.
- We thoroughly justify our architecture: intuitively, mathematically, and empirically.
- We achieve state of the art generation quality on over a dozen benchmarks.
- We achieve state of the art privacy at these generation qualities, while being able to trade off privacy and quality. Most other models cannot trade off between these and are fixed.
- We show how our architecture changes allow it to deal with missing data natively, something existing models can’t do.
- We explain how our model can be used to condition on arbitrary subsets of data, in a way that other generators cannot.
- We scale our model to tens of millions of rows and show it still performs well, a much larger scale than previous works.
- To our knowledge, this is the most thoroughly evaluated and highest performing transformer for generating tabular data. It is also the highest performing model across all existing generative tabular model families including GANs, VAEs, Diffusion Models, and Autoregressive Transformers while using a completely different and novel generation scheme. These points are all outlined in the paper. While we have contributed much more than just changing the input layer, if this change alone achieved state of the art, while having the unique privacy tradeoff, conditioning, and missing data capabilities our model has, we believe this would be notable and something to pay attention to. **We are not aware of another model which even has two of these capabilities**, certainly not while also achieving SoTA generation quality. **W:** The application method of the model structure proposed by TabMT is not clear. Categorical and Numerical use independent inputs, which makes it necessary to determine the type of data input into the model. How to accurately and automatically identify this type is challenging, which also limits the complexity of generating table data for the model, making it difficult to cope with complex scenarios. The effectiveness of the model should be evaluated for inaccurate category recognition in the experiment. **R:** We do not aim to automatically learn or identify which columns are categorical and which columns are numerical. We assume that this is known ahead of time, and that the model is allowed to treat categorical and numerical variables differently within the model. In the majority of cases, the data type (float, integer, string) is sufficient for this purpose, but it should be known metadata by the practitioner. As far as we are aware, this assumption is used by essentially all tabular generators. 
**References:** \ TVAE and CTGAN: https://arxiv.org/pdf/1907.00503.pdf TabDDPM: https://arxiv.org/pdf/2209.15421v1.pdf CTabGAN+: https://arxiv.org/pdf/2204.00401.pdf Again, Thank you very much for reading and reviewing our paper. We hope our responses can help clarify some things and explain why we believe our work is an impactful and novel contribution to the area of Tabular Data Synthesis.
Summary: This paper explores the effectiveness of masked transformers as generative models for synthetic tabular data generation. The proposed TabMT architecture effectively handles challenges related to heterogeneous data fields and missing data. The model shows promising experimental performance and demonstrates good performance even under privacy constraints. Strengths: - The paper proposes a promising masked transformer approach TabMT for generating tabular data. This is an elegant application of masked transformers. - The paper provides comprehensive experimental evaluation and compares to a diverse set of baselines on 15 datasets - The paper explores privacy-preserving capabilities of TabMT and shows promising results Weaknesses: - While the paper provides adequate experimental evaluation, it would be helpful to include recent state-of-the-art generative methods in evaluation such as STaSy [1] - In the privacy experiments, only TabDDPM was used as the baseline, it would be interesting to see more comparisons with other baselines [1] Kim, J., Lee, C. and Park, N., 2022. Stasy: Score-based tabular data synthesis. arXiv preprint arXiv:2210.04018. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Could you please clarify the training procedure? When training the masked transformer, does every feature have its own classification head for predicting the mask? 2. Since you explore hyperparameter tuning for the downstream Catboost model, how was hyperparameter tuning performed for each of the generative models? 3. In the privacy experiments, only TabDDPM was used as the baseline, would it be possible to add more comparisons with other baselines? 4. It would be great to add recent state-of-the-art generative models into evaluation, such as STaSy [1] [1] Kim, J., Lee, C. and Park, N., 2022. Stasy: Score-based tabular data synthesis. arXiv preprint arXiv:2210.04018. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer sX8C, thank you for your careful consideration and review of our paper. **Q:** Could you please clarify the training procedure? When training the masked transformer, does every feature have its own classification head for predicting the mask? **A:** When predicting we use a separate linear layer for each feature, but there are no extra transformer blocks or any other separate layers. It is equivalent to using a single linear layer for all features and masking out impossible values before prediction. The parameters are shared with the embedding layers. **Q:** Since you explore hyperparameter tuning for the downstream Catboost model, how was hyperparameter tuning performed for each of the generative models? **A:** Each dataset has a fixed set of catboost hyperparameters which are used for all evaluations and all methods on that dataset. The hyperparameters are found by tuning them on the real dataset to maximize the validation score. We use the same tuned values as other works to ensure a fair comparison. They were found using 100 tuning trials with 5 hyperparameters of Catboost. **Q:** In the privacy experiments, only TabDDPM was used as the baseline, would it be possible to add more comparisons with other baselines? **A:** We chose to evaluate against TabDDPM because its quality scores were closest to ours, creating a stronger comparison. Privacy and quality tradeoff with each other, so comparing privacies at vastly different qualities is less useful. 
However, here are some privacy scores from CTABGAN+:

| Method | AD | CA | CAR | CH | DI | KI |
|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| **TabMT (ours)** | **1.01(0.811)** | **0.117(0.832)** | **0.041(0.737)** | **0.281(0.758)** | **0.243(0.740)** | **0.335(0.868)** |
| CtabGAN+ | 0.119(0.772) | 0.056(0.525) | 0.012(0.733) | 0.212(0.702) | 0.196(0.734) | 0.226(0.444) |

We obtain both higher privacy and quality on all datasets here. We can add the full results for this model to the paper. **Q:** It would be great to add recent state-of-the-art generative models into evaluation, such as STaSy **A:** We are happy to mention StaSy and score-based modeling in our related work section. We have looked over StaSy and it is certainly a strong paper. Unfortunately, none of the datasets the paper tests against overlap with the ones we test with. Implementing StaSy and training it across our fifteen datasets would be quite time-consuming. Additionally, training our model on StaSy’s datasets would also take a fair bit of effort and would be disconnected from the other evaluations we perform. Due to the considerable effort involved, we likely won’t be able to include it in the evaluations at this time. Again, thank you very much for your response to our paper and the time you took to review it. --- Rebuttal Comment 1.1: Title: Response to the author rebuttal Comment: I thank the authors for their response and provided clarifications. Regarding hyperparameter tuning, my question was about the hyperparameter tuning of the generative models rather than catboost. How was hyperparameter selection performed for both TabDDPM and the baseline generative models it was compared to? Regarding implementing StaSy, in fact its official GitHub provides a user-friendly implementation: https://github.com/JayoungKim408/STaSy making experimentation on at least a few datasets feasible within the rebuttal period.
As StaSy is a recent and strong generative tabular model, it is important to include it in comparison with TabDDPM. --- Reply to Comment 1.1.1: Title: Response to Review Comment Comment: Hi Reviewer sX8C, Thank you for your response. Hyperparameter tuning details and the search space for TabMT are available in the supplementary material. We use 50 trials, just as prior work has. For the baselines, we used the same search spaces as previous work. This ensures a fair comparison. The cited baseline works have these search spaces available. Our reported metrics for these techniques match the metrics reported in prior work. The cited TabDDPM baseline work has a good summary of these search parameters. If you like, we can include the TabDDPM summary in the Appendix of our paper. The deadline for the rebuttals was mere hours after your response, and therefore including results for STaSY before the deadline is infeasible. An earlier reply would have made this a feasible task. It takes days to tune baselines properly on each dataset to ensure correctness and to guarantee an accurate comparison. We compare against 4 other state of the art techniques across 15 datasets, each representing the best we could find at the time across major generative modeling families (Diffusion, GAN, Autoregressive, VAE). This comparison alone is more than necessary for a publication and a contribution to the scientific literature. Our work is of a new masked generative model, different from these existing ones, which outperforms all tested baselines using this novel method. We also show SoTA performance and more evaluation in our scaling experiments. It's not possible to compare against every paper on arXiv, especially without any dataset overlap. The STaSY paper was not accepted at a conference until late February, at which point we were well past looking for additional baselines to compare against. The StaSY paper appears to be a strong tabular data generator.
But we cannot faithfully (or ethically) represent the STaSY paper in our paper until we have fully evaluated the quality, repeatability, and accuracy of the STaSY method. Again, Thank you for your time.
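The weight-tied prediction head described in the first answer of this thread (a single shared linear layer over the embedding table, with impossible values masked out before prediction) could look roughly like this. A numpy sketch with made-up shapes; `valid_tokens` is our hypothetical name for the vocabulary entries belonging to the column being predicted:

```python
import numpy as np

def column_logits(hidden, embedding, valid_tokens):
    """Score every token against the shared embedding table (weight
    tying), then mask out tokens that cannot occur in this column.
    This is equivalent to a separate linear head per feature."""
    logits = hidden @ embedding.T                      # (batch, vocab)
    masked = np.full_like(logits, -np.inf)
    masked[:, valid_tokens] = logits[:, valid_tokens]  # keep only this column's tokens
    return masked

rng = np.random.default_rng(0)
embedding = rng.normal(size=(10, 4))   # 10 tokens across all columns, d_model = 4
hidden = rng.normal(size=(2, 4))       # transformer outputs for 2 masked cells
valid_tokens = [3, 4, 5]               # tokens that belong to this column
preds = column_logits(hidden, embedding, valid_tokens).argmax(axis=1)
```

Because invalid tokens score negative infinity, the argmax (or any softmax sample) can only land on a token that is legal for the column, while all parameters stay shared with the embedding layer.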
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
PHOTOSWAP: Personalized Subject Swapping in Images
Accept (poster)
Summary: This paper presents a combination of DDIM inversion and Dreambooth to customize the appearance of existing images. The authors also experimented with some attention layer hacking. Strengths: 1. Paper is easy to read. 2. Framework is reasonable. Dreambooth + DDIM inversion + attention hacking will really lead to such results. 3. Easy to reproduce. Weaknesses: 1. My main concern is the novelty: it seems that everything in the combo of “Dreambooth + DDIM inversion + attention hacking” has been proposed and/or extensively discussed in previous works. However, we are also aware that many recent papers in this direction (like MasaCtrl) are non-peer-reviewed works, so I am not leaning negative because of the novelty. Nevertheless, the combo of these components still looks a bit ad hoc to me. 2. The experiments need some improvements: reference-guided diffusion is sensitive to inputs. We should present random non-cherry-picked samples to study the method's performance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Why is the comparison mainly conducted against prompt-to-prompt? Shouldn’t we mainly consider frameworks like Imagic/MasaCtrl to edit real images? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the suggestions; we address the concerns as follows: > About the paper novelty. a. Personalized subject swapping is an emerging vision task with abundant user applications in practice. The task we undertook is inherently challenging due to the lack of training pairs. In the literature, there are few works that can address the problem in a unified way. In contrast, as the results in the paper show, the proposed method is robust and can be generalized to different domains including human faces, animals, daily objects, vector art, and paintings. b. We propose a unified framework to address the problem. The attention map manipulation process is non-trivial. In Section 4, all the steps in our method are based on observations from attention map analysis. Without Photoswap, simply applying existing attention map manipulation methods will not work well, as they cannot preserve the non-swapped region well. We use DreamBooth and DDIM inversion for implementation simplicity. Other concept learning methods (LoRA, SuTI) and inversion methods will also be compatible with our Photoswap framework. > Why not compare to MasaCtrl? MasaCtrl is a non-peer-reviewed work, and it was released on arXiv just a month before the NeurIPS submission. But we still cited and discussed it when we noticed the paper before submission. Our paper targets a fundamentally different task, personalized subject swapping, while their method is for subject gesture changing. Nonetheless, we have presented a comparison between Photoswap and MasaCtrl during this rebuttal stage, which is shown in Table 1 in the attached PDF and will be incorporated into our next version. We reproduced the results from MasaCtrl and acknowledge its commendable performance in altering subject gestures. However, it has notably inferior outcomes in the personalized subject swapping task. Through extensive human evaluation, Photoswap outperforms MasaCtrl across all metrics.
Upon closer inspection, we observed that MasaCtrl primarily swaps the q vector in self-attention, having determined a strong correlation between the q vector and the subject shape. In our targeted task, completely swapping the source subject to the target subject necessitates a more comprehensive approach: both the attention map—derived from the interaction of q and k—and the attention output—resulting from q, k, and v—are crucial.

| Metrics | Photoswap | MasaCtrl+DreamBooth | Tie |
| :---- | :-----: | :-----: | ----: |
| Subject Identity Preservation | **79.1%** | 10.3% | 10.6% |
| Background Preservation | **72.8%** | 10.2% | 17.0% |
| Overall Swapping Quality | **83.3%** | 10.3% | 6.4% |

> A combo of methods Personalized subject swapping is an interesting and challenging task, and with our logical framework, Photoswap, we successfully addressed this complex challenge. It's important to emphasize that Photoswap is a versatile framework that can integrate various methodological elements. For instance, one can seamlessly incorporate LoRA weights into Photoswap or introduce a ControlNet layer for more refined swapping. The task we embarked upon is fundamentally challenging, but we secured significant outcomes. It's essential to emphasize that a direct combination of DreamBooth with the earlier proposed attention map insertion is not optimal, as clearly demonstrated in the paper's P2P comparison and the rebuttal's P2P comparisons. Unlike prior approaches, we performed attention swapping on a broader set of attention variables. We also delved deeper into the efficacy of these variables, offering valuable insights for future investigations. > Improvements for experiments. As exhibited in the paper, Photoswap is a subject swapping model for general images, including real images and synthetic images; the targets range from human faces to common household items, from daily life images and movie pictures to artistic works.
During the human evaluation, we conducted 2,000 comparisons on both real and synthetic images. All swapping results used for evaluation are non-cherry-picked. We will also provide code for reproducing the performance.

---

Rebuttal Comment 1.1: Title: Thanks for your insights. Do you have any other questions?

Comment: First and foremost, we'd like to express our sincere gratitude for your constructive and encouraging feedback on our paper. We provided further clarification on Photoswap, and the comparison with MasaCtrl is attached in the rebuttal. Having made efforts to address your insights, we'd like to ensure that our explanations of Photoswap and the additional experiments are both comprehensive and satisfying. If there are any lingering questions, please don't hesitate to bring them to our attention. Your continued guidance will only help in enhancing the quality and clarity of our work. Thank you once more for your invaluable feedback and support.

---

Rebuttal 2: Comment: Thanks for the author response. This reviewer will keep the original rating after considering the additional materials.

---

Rebuttal Comment 2.1: Comment: Thanks so much for your time and efforts in reviewing our paper!
Summary: This paper introduces a new method for inserting a subject into a target image. The approach consists of 1) using DreamBooth to extract the appearance information of the subject, and 2) copying the attention from the target image (obtained by regenerating the target image using DDIM inversion) to control the layout of the generated image. Empirical results show that the proposed method is preferred by raters ~50% of the time compared with the P2P baseline.

Strengths: The proposed method is straightforward, and the empirical results look promising.

Weaknesses:
* Technical contribution not clear. As mentioned in the paper, the two main components used in the paper, i.e., DreamBooth and attention copying, have been proposed in prior works. It is unclear what the main technical contribution of this work is. In fact, it is not even clear, based on the limited description, what the main difference between the proposed method and the baseline is.
* Unclear presentation. The approach section is hard to follow. In particular, Sec. 4.1 and 4.2 are distracting, and it is unclear what information these two sections try to convey. The experiment section does not provide sufficient information for reproducing and understanding the implications of the experiments, e.g., what the data are, how the baselines were implemented, and what the data and instructions for the user study are.
* Limited experiments. While the authors claim that one of the main contributions of this work is extensive experiments, the actual evaluation is very limited. For a reasonable research paper, I would expect more solid and extensive evaluation regarding different aspects of the proposed approach, including but not limited to the success rate of the proposed approach, the effect of target image content and subject image, the effect of meta parameters, etc. From the experiments in the paper, it is even unclear how applicable the proposed method is in practice.
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please refer to the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: This paper does not discuss the limitation. While there's a section for ethical issues, the section doesn't provide useful discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for your insightful suggestions. Here are our responses to your concerns.

> What is the technical contribution? What is the main difference between the proposed method and the baseline?

To quote Reviewer z7Bi, "The task of personalized subject swapping in images is fancy and interesting. This is the first work that could handle such challenging swapping task." Such a task requires seamlessly integrating a new subject into an existing image at the exact position of the source subject. While previous methods are usually restricted to global editing, we achieved strong results with our framework, Photoswap. Note that Photoswap is a framework that can incorporate further methods and components; for example, one could easily add LoRA weights into Photoswap or add a ControlNet layer to achieve fine-grained swapping.

The primary distinction between our approach and the baseline centers on the attention swapping process. In P2P, the authors predominantly use the cross-attention map for image editing, having identified a strong correlation between this map and the resulting image. In contrast, our technique employs attention swapping on the cross-attention map, the self-attention map, and the self-attention output, allowing for more detailed adjustments. As showcased in Figure 7 of our paper and in the attached PDF, Photoswap excels in both subject swapping and background preservation. The human evaluation results in Table 1 show that Photoswap outperforms the baseline by a large margin.

> Unclear presentation in Section 4

In Section 4, we present the Photoswap framework. Photoswap first learns a concept image as a token via a concept learning method, as described in Section 4.1. The training-free attention swapping process then keeps the background information unchanged during target image generation, and the learned subject is injected into the image through the textual prompt, as described in Section 4.2.
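The scheduled swap of the three attention variables can be sketched as follows. This is our own illustration with hypothetical step thresholds (`lam_*`) and placeholder values, not the authors' implementation or the paper's actual hyperparameters:

```python
# Sketch of scheduled attention swapping: during the first lam_* denoising
# steps, the target generation reuses variables cached from the source
# generation; afterwards it falls back to its own. Thresholds are illustrative.

def swapped_attention(step, source, target, lam_cross=25, lam_self=10, lam_out=20):
    """Choose cross-attn map, self-attn map, and self-attn output per step.

    `source`/`target` are dicts with keys 'cross_map', 'self_map', 'self_out'.
    """
    return {
        "cross_map": source["cross_map"] if step < lam_cross else target["cross_map"],
        "self_map": source["self_map"] if step < lam_self else target["self_map"],
        "self_out": source["self_out"] if step < lam_out else target["self_out"],
    }

# Placeholder tensors; in a real pipeline these come from the U-Net layers.
src = {"cross_map": "M_src", "self_map": "A_src", "self_out": "phi_src"}
tgt = {"cross_map": "M_tgt", "self_map": "A_tgt", "self_out": "phi_tgt"}

early = swapped_attention(step=5, source=src, target=tgt)   # all from source
late = swapped_attention(step=40, source=src, target=tgt)   # all from target
```

With separate thresholds per variable, background layout (carried by the source maps) can be locked in early while later steps let the new subject's appearance emerge.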
Please refer to Algorithm 1 in the paper for the detailed procedure. We will also release the code for performance reproduction.

> What are the data? How were the baselines implemented? What are the data and instructions for the user study?

In terms of concept sources, for non-human subjects we utilized images from the DreamBooth benchmark, as our concept learning method is based on DreamBooth; for humans, we collected celebrity images from the Internet. All the source images are also from the Internet. For the baseline, we use DreamBooth to turn a new concept into a textual token and then employ P2P as the attention-based process for subject swapping. The data used for the user study consisted of swapping results on randomly sampled synthetic images and real images from the Internet. The user instructions and our user study interface are also attached in the PDF. We will add these details to the next version. Please let us know if any more details are missing.

> Limited experiments

In the paper, we have tested Photoswap widely, from synthetic to real images and from human to non-human subjects. We conducted experiments on complicated situations such as multiple subjects and occluded subjects. Since this is a new task, we built a baseline and performed both human evaluation and qualitative comparison. We also conducted an ablation study on the effect of different hyperparameters on human faces and common subjects. To validate the generalization of our framework, we also tested it with other concept learning methods, and we ran experiments probing potential ethical concerns. Lastly, we discussed the failure cases. During this rebuttal period, we further conducted a large-scale human evaluation against more baselines.

> Success rate. Effect of target image content and subject image. Effect of meta parameters. How applicable the proposed method is in practice.
In our study, we assessed Photoswap's capability for subject swapping within images, a challenge that had not been effectively addressed before. Whether transitioning from real to synthetic images, everyday subjects to human faces, or artistic creations to film stills, we have illustrated Photoswap's proficiency in seamlessly substituting a subject in one image with a personalized subject from another. Both in the original paper and throughout this rebuttal, we've noted a commendably high success rate (exceeding 50%) in terms of users identifying the context from the source image and recognizing the identity from the reference image. The effects of the three hyperparameters, which control the swapping steps, are detailed in the appendix. We will make our source code available for further examination of Photoswap's applicability. We would be happy to address any more specific suggestions or concerns the reviewer raises during the discussion phase.

---

Rebuttal Comment 1.1: Title: Thanks for your valuable feedback. Do you have other questions?

Comment: We deeply appreciate the time and effort you've invested in reviewing our manuscript. We have done our best to address each of your concerns in the preceding responses. To ensure our revisions and explanations meet your expectations, could you please let us know if there are any additional questions or if any areas remain unresolved in your view? We strive for clarity and thoroughness and want to ensure that we've attended to all your points adequately. Thank you once again for your invaluable insights and feedback.

---

Rebuttal 2: Comment: Thanks to the authors for the response. While the rebuttal partially addresses my concerns, most of the responses lack concrete information and are not sufficient to address them.

1. If the contribution and performance gain come from a better attention swapping mechanism, I would expect a more thorough description and a solid ablation study to verify the claim.
The current paper does not provide sufficient information on the improvements made by this work and their contribution.
2. The concerns regarding the presentation are not resolved. Note that releasing the code does not help unless the code is also reviewed for completeness and clarity.
3. While I understand it is hard to "reproduce" the data collection process, "collecting from the internet" and "randomly sampled" are still too vague. Information regarding the data source, e.g., how the images were crawled, the dataset size, any selection process, etc., should be provided.
4. It would be more informative if the authors could state how the success rate is measured and what the criteria for success are. Also, a few qualitative examples in the paper are not sufficient to be considered a meaningful ablation study. To make it a meaningful study, the authors need to show how consistent the results are.

In summary, similar to the original paper, the claims in the rebuttal are reasonable but are not well supported by concrete evidence.

---

Rebuttal Comment 2.1: Title: Response (1/2)

Comment: Thanks for your feedback on our rebuttal. Here we would like to further resolve your concerns.

> If the contribution and performance gain come from a better attention swapping mechanism, I would expect a more thorough description and a solid ablation study to verify the claim. The current paper does not provide sufficient information on the improvements made by this work and their contribution.

Photoswap pioneers a novel challenge and consistently delivers superior performance across different image domains. To ensure an objective comparison and demonstrate the effectiveness of our attention swapping process, we established baselines based on P2P, PnP, and MasaCtrl, which also utilize attention-based mechanisms for image editing.
As shown in the rebuttal PDF and text, Photoswap outperforms all other methods by a large margin, which demonstrates the superiority of our training-free attention swapping process. The results are also attached below.

| Metrics | Ours | P2P+DreamBooth | Tie |
|-------------------------------|-------|----------------|-------|
| Subject Identity Preservation | 0.434 | 0.300 | 0.266 |
| Background Preservation | 0.393 | 0.302 | 0.305 |
| Overall Swapping Quality | 0.373 | 0.271 | 0.356 |

Table 1. Human evaluation comparison with P2P+DreamBooth.

| Metrics | Ours | PnP+DreamBooth | Tie |
|-------------------------------|-------|----------------|-------|
| Subject Identity Preservation | 0.527 | 0.221 | 0.252 |
| Background Preservation | 0.491 | 0.207 | 0.302 |
| Overall Swapping Quality | 0.551 | 0.224 | 0.225 |

Table 2. Human evaluation comparison with PnP+DreamBooth.

| Metrics | Ours | MasaCtrl+DreamBooth | Tie |
|-------------------------------|-------|---------------------|-------|
| Subject Identity Preservation | 0.791 | 0.103 | 0.106 |
| Background Preservation | 0.728 | 0.102 | 0.170 |
| Overall Swapping Quality | 0.833 | 0.103 | 0.064 |

Table 3. Human evaluation comparison with MasaCtrl+DreamBooth.

In contrast to the image classification domain, where a ground truth typically exists, the image subject swapping task we address is novel and lacks an established ground truth or evaluation benchmark. Current image editing works, including P2P [1], PnP [2], and MasaCtrl [3], mainly rely on human evaluation to demonstrate efficacy. As highlighted by Reviewer TtiF, "Considering the lack of unified metrics for this task, human evaluation is adequate". Comprehensive human evaluations and qualitative analyses indicate that our model significantly surpasses other methods in performance. Although human evaluation already provides the most direct assessment of model performance, we also employed the DINO and CLIP-I metrics to measure image similarity, following DreamBooth [4].
First, we assess subject identity preservation by measuring the similarity between the generated image and the concept image. Then, background preservation is evaluated via the similarity between the generated image and its source counterpart. From the data presented in the tables, it is clear that Photoswap consistently outperforms competing methods on both metrics. This aligns with the findings from our human evaluation, in which Photoswap surpassed other methods on all metrics, including both Subject Identity Preservation and Background Preservation.

| | Ours | P2P+DreamBooth | PnP+DreamBooth | MasaCtrl+DreamBooth |
|--------|------|----------------|----------------|---------------------|
| DINO | 0.55 | 0.44 | 0.42 | 0.31 |
| CLIP-I | 0.80 | 0.72 | 0.68 | 0.53 |

Table 4. Automatic evaluation on subject identity.

| | Ours | P2P+DreamBooth | PnP+DreamBooth | MasaCtrl+DreamBooth |
|--------|------|----------------|----------------|---------------------|
| DINO | 0.78 | 0.72 | 0.73 | 0.70 |
| CLIP-I | 0.89 | 0.79 | 0.76 | 0.69 |

Table 5. Automatic evaluation on background information preservation.

We also further tested the effectiveness of all the swapping variables used in this paper. Figure 8 and Section 5.3 elucidate the impact of the self-attention map M. Further insights on the significance of the overall attention map variable and the attention output variable in the subject swapping process are presented in Figure 5 and Appendix Section C, which also delves into the influence of each swapping phase.

---

Reply to Comment 2.1.1: Title: Response (2/2)

Comment: > The concerns regarding the presentation are not resolved. Note that releasing the code does not help unless the code is also reviewed for completeness and clarity.

Could you provide further insight into the concerns surrounding the presentation? Within Section 4, we delineate our primary methodology.
To delve deeper, Section 4.1 elucidates our approach to incorporating a novel concept into the model, which subsequently facilitates its use in text prompts. Section 4.2 then expounds upon the training-free attention swapping procedure, and the complete process is laid out in Algorithm 1. Should any ambiguities remain, we are more than willing to offer additional clarifications. We deeply appreciate your feedback and suggestions.

> While I understand it is hard to "reproduce" the data collection process, "collecting from internet", "randomly sampled" are still too vague. Information regarding the data source, e.g. how they are crawled, the size, any selection process, etc. should be provided.

We acknowledge that our initial description of the source image collection process was not sufficiently clear, so we provide an in-depth explanation here. For real images, we sourced all our images from internet searches, using the search prompt 'a photo of <target>'. Here, the <target> variable could be a specific celebrity (e.g., 'Elon Musk') or a descriptive scene (e.g., 'a cute yellow cat running in the forest'). The celebrity names were identified through a Google search with the prompt "top celebrities 2023." For scene descriptions, we curated a list of 100 distinct search prompts to source images from the internet. In total, we aggregated 1,000 images using these prompts. All prompts, along with the collected images, will be made available in our next revision.

For synthetic images, we generated 1,000 images using text prompts with the text-to-image diffusion model version 2.1. These prompts spanned a range, including some centered on humans (e.g., "A photo of a woman looking left and smiling, Van Gogh style") and some focusing on non-human subjects (e.g., "An old car in the middle of the road, flanked by trees during autumn"). All prompts used for synthetic image generation will also be released.
For the human evaluation exhibited in this rebuttal and in the paper, we used Python's "random" package to sample 200 images each from the real and synthetic datasets. Each image was evaluated by five distinct individuals on Amazon Mechanical Turk. In total, this resulted in 6,000 ratings, as we compared our model against P2P, PnP, and MasaCtrl. Our findings unequivocally indicate that our model surpasses all other methods in performance. For source image processing, all we do is resize the image to a standard 512x512 pixels; no postprocessing is needed for the generated images either.

> It would be more informative if the authors can provide how the success rate is measured and what's the criteria for success. Also, a few qualitative examples in the paper are not sufficient to be considered a meaningful ablation study. To make it a meaningful study, the authors need to show how consistent the results are.

To the best of our knowledge, there is no existing automatic evaluation metric for success rate in image editing. Due to the lack of ground truth and standard evaluation metrics, current image editing works such as P2P [1], PnP [2], and MasaCtrl [3] mostly use human evaluation and qualitative comparison to demonstrate performance. In response to your feedback, we expanded our analysis to incorporate human evaluations focused on the success rate. To quantify the success rate, we established three criteria defining a successful image swap:

* The generated image featuring the replaced subject should exhibit high quality.
* The background details not pertaining to the targeted subject should remain consistent with the original source image.
* The swapped result should be recognizable as the subject from the concept image while mirroring the pose, gesture, and facial expression of the original subject.
Presented below are our findings, based on a sample of 200 real images and 200 synthetic images:

| | Ours | P2P+DreamBooth | PnP+DreamBooth | MasaCtrl+DreamBooth |
|-------------------|------|----------------|----------------|---------------------|
| Human subject | 0.57 | 0.37 | 0.34 | 0.12 |
| Non-human subject | 0.72 | 0.51 | 0.43 | 0.14 |

Table 6. Human evaluation results on Success Rate.

As the table shows, our method outperforms all other methods in terms of the success rate judged by human evaluators. Thanks for the insightful suggestions on the success rate; we will include this analysis in the next revision.
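Vote-fraction numbers of the kind reported in the tables above can be tallied as in the following sketch. The ratings here are hypothetical placeholders, not the study's raw annotations:

```python
# Tally pairwise human ratings into vote fractions. Each image pair receives
# several votes: "ours", "baseline", or "tie"; the reported percentages are
# vote fractions over all ratings.
from collections import Counter

def vote_fractions(votes):
    counts = Counter(votes)
    total = len(votes)
    return {k: counts[k] / total for k in ("ours", "baseline", "tie")}

# Hypothetical ratings: two image pairs, five votes each.
ratings = ["ours", "ours", "tie", "baseline", "ours",
           "ours", "tie", "ours", "ours", "baseline"]
frac = vote_fractions(ratings)  # {"ours": 0.6, "baseline": 0.2, "tie": 0.2}
```

Reporting fractions over all individual votes (rather than a per-image majority) is one common choice; a per-image majority tally would be an equally simple variant.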
Summary: The authors present a solution to the problem of personalized subject swapping, where the goal is to replace the subject in an image with another user-defined subject. The authors leverage pre-trained diffusion models to make local edits to an input image based on a collection of images of the subject to be inserted. This is done through concept inversion on images of a target subject and scheduled attention map swapping during the diffusion process, switching between the use of attention maps from the original diffusion process and the newer process conditioned on the concept-modified text prompt.

Strengths:
- Clarity: The work is well presented and the method is logically sound. The motivation behind the method is also well justified; for example, the attention map visualizations are helpful in motivating the scheduled attention swapping method.
- Significance: The problem of personalized subject swapping is practically interesting and hasn't been explored in depth in prior works. The practical effectiveness of large image diffusion models has been limited by their controllability. This work takes a reasonable step towards making such models accessible and useful to users who don't have domain-specific knowledge in image editing.
- Quality of results: Experimental results are impressive, achieving subject swapping with high fidelity.

Weaknesses:
- Missing baseline: Recent work such as Plug-and-Play diffusion has also explored the effectiveness of feature and attention map injection during the diffusion process for image editing. Using DreamBooth + Plug-and-Play [1] is a more reasonable comparison than using P2P. It is also an important comparison, as it is not clear to me whether the authors' proposed attention map swapping method is really more effective than the method proposed in Plug-and-Play.
- Evaluation: Considering the lack of unified metrics for this task, human evaluation is adequate.
However, it would be helpful to see other metrics such as subject fidelity [2] reported. It is unclear whether this method leads to lower subject preservation than prior work like DreamBooth.
- Results: The results shown all involve subject swaps that are very similar in nature (e.g., a car is swapped for a different-looking car). Subjects being swapped may not necessarily be similar in nature (see the questions section for a more detailed comment).
- Contribution: Both concept inversion and editing through attention map insertion have been explored in prior work. This work seems to combine both effectively, but the contribution of the method itself is relatively small.

[1] Tumanyan, N., Geyer, M., Bagon, S., & Dekel, T. (2022). Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation. arXiv:2211.12572.
[2] Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., & Aberman, K. (2022). DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. arXiv:2208.12242.

Technical Quality: 3 good Clarity: 3 good Questions for Authors:
- How does this method perform when there is a large domain shift in the edited subject? For example, if I wanted to replace a car with a tree, does the method still perform well?
- Does the structure of the subject in the reference image fix the structure of the inserted subject?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors adequately address the limitations and potential ethical risks in using image diffusion models trained on internet-scale data, especially discussions on the potential biases present in these models.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for the appreciation of our motivation, writing, technical contribution, and results. We address the concerns as follows:

> Comparison with PnP+DreamBooth.

We have attached a comprehensive human evaluation both here and in Table 1 of the PDF; Photoswap outperforms PnP by a large margin across all metrics. PnP was designed for changing image style, where preserving background details is not the focus: PnP uses the text prompt to inject the style information while copying intermediate variables in the self-attention layer to keep the layout unchanged. In our task, however, the background is supposed to stay the same while the source subject is swapped for the target subject. To keep the background unchanged, the only difference between the source prompt and the target prompt in Photoswap is the subject name, and our training-free mechanism is likewise designed to keep the background details while swapping the subject.

| Metrics | Photoswap | PnP+DreamBooth | Tie |
| :---- | :-----: | :-----: | ----: |
| Subject Identity Preservation | **52.7%** | 22.1% | 25.2% |
| Background Preservation | **49.1%** | 20.7% | 30.2% |
| Overall Swapping Quality | **55.1%** | 22.4% | 22.5% |

> Evaluation metric could be improved:

We share the opinion that human evaluation is important since this is a new task. We have conducted a larger-scale human evaluation, and the results are attached in the PDF. We also believe subject preservation is important; one metric used in the human evaluation is subject identity recognition, on which Photoswap outperforms all baselines by a large margin. We also plan to add more metrics for evaluation given more time for the revision.

> Could it swap between dissimilar subjects?

Yes. Photoswap does not need an explicit mask of the source subject; the attention swapping in the U-Net layers can operate between subjects of any two shapes.
> Contributions

As you and Reviewer z7Bi mentioned, the task we undertook is inherently challenging, yet we have managed to achieve noteworthy results. We also want to highlight that our simple, training-free design achieves these results without a complicated architecture, and our method can easily be combined with other components such as LoRA or ControlNet. As illustrated by the P2P comparison in the paper and the P2P, MasaCtrl, and PnP comparisons in this rebuttal, our model achieves much better performance than all baselines across all metrics. We further analyzed the effectiveness of the attention map and the attention output, which provides insight for training-free image editing work.

> How does the method perform when the swap has a large domain gap?

Fundamentally, our method does not require similarity between the source and target domains, since we directly swap the attention variables. In Figure 8 of the paper, we show it works when swapping between a portrait and a painting.

> Does the structure of the subject in the reference image fix the structure of the inserted subject?

No, the structure of the subject is not rigidly defined. Once the identity of the subject is learned, it adapts to the gesture of the source subject. As exemplified in the teaser image of our paper, the human faces and subjects in the resulting images align with the source image instead of merely duplicating the reference image.

---

Rebuttal Comment 1.1: Title: Thanks for your suggestions. Do you have any other questions?

Comment: We're truly grateful for your encouraging words on our work. We've taken care to address your comments and provide additional clarifications where needed. The PnP baseline comparison is also attached in the rebuttal. To ensure that our responses have fully addressed any concerns, we kindly ask if you have any further questions or if there are specific areas where you believe more clarification might be beneficial.
We value your expertise in image generation and editing and aim to ensure that our paper is as clear and comprehensive as possible. Once again, thank you for your supportive and constructive insights.

---

Rebuttal 2: Title: Raised score to accept

Comment: My initial concerns were mostly about the evaluation method, the degree of contribution, and the performance of the method in edge cases. I agree with the authors that, given the nature of the problem, human evaluation is sufficient. I am still on the fence about the novelty of the method, but given the impressive results I am willing to raise my initial score.

---

Rebuttal Comment 2.1: Title: Thank you for acknowledging our work

Comment: We're glad that our human evaluation on this challenging task provided sufficient support for Photoswap. We have taken note of your additional feedback and will incorporate the suggested changes to further improve the paper; we hope our work will make a valuable contribution to image editing and generation. Again, we sincerely appreciate your positive feedback and acknowledgment of the results presented in our paper.

---

Reply to Comment 2.1.1: Title: Additional experimental results.

Comment: We ran more experiments on subject fidelity, as you previously mentioned regarding DreamBooth [1]. Following DreamBooth, we evaluated DINO and CLIP-I. The results show that our model consistently outperforms all other methods, aligning with the findings from the human evaluations.

| | Ours | P2P+DreamBooth | PnP+DreamBooth | MasaCtrl+DreamBooth |
|--------|------|----------------|----------------|---------------------|
| DINO | 0.55 | 0.44 | 0.42 | 0.31 |
| CLIP-I | 0.80 | 0.72 | 0.68 | 0.53 |

Table 1. Automatic evaluation on subject identity.

| | Ours | P2P+DreamBooth | PnP+DreamBooth | MasaCtrl+DreamBooth |
|--------|------|----------------|----------------|---------------------|
| DINO | 0.78 | 0.72 | 0.73 | 0.70 |
| CLIP-I | 0.89 | 0.79 | 0.76 | 0.69 |

Table 2.
Automatic evaluation on background information preservation. [1] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
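For context, DINO and CLIP-I scores of this kind are typically cosine similarities between image embeddings. The sketch below uses dummy vectors in place of real DINO or CLIP encoder outputs:

```python
# Cosine similarity between two embedding vectors, the usual basis for
# DINO / CLIP-I style image-similarity scores. The vectors here are dummies;
# in practice they would come from a DINO ViT or CLIP image encoder.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb_generated = [0.2, 0.4, 0.4]   # dummy embedding of the generated image
emb_reference = [0.2, 0.4, 0.4]   # dummy embedding of the concept image
score = cosine_similarity(emb_generated, emb_reference)  # identical -> 1.0
```

Subject-identity scores compare the generated image against the concept images, while background scores compare it against the source image; the underlying similarity computation is the same.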
Summary: In this paper, they propose Photoswap, which can seamlessly swap personalized subjects into source images. The swap process is training-free and only leverages the manipulation of self-attention and cross-attention. The swapped object can maintain the pose of the source image without hurting the coherence of the image. The results are promising on both synthetic and real images.

Strengths:
- The task of personalized subject swapping in images is fancy and interesting. This is the first work that can handle such a challenging swapping task.
- The paper excels in providing thorough illustrations to explain the design approach, making it easier for readers to understand the intricacies of the proposed method. Furthermore, the clear determination of parameters is commendable, as it enhances the reproducibility and applicability of the research.
- The results presented in the paper are impressive. Even when dealing with challenging scenarios such as multi-subject swaps and occluded-subject swaps, the proposed method performs remarkably well.

Weaknesses:
- In the user study, the comparisons are conducted on 99 examples. The number of test images and total votes (3 votes per example) is small, which may introduce bias into the results. I recommend addressing this concern by increasing the number of test images. Additionally, it is unclear from the paper whether the user study was conducted solely on synthetic images. It would be valuable to explore the performance of the proposed method on real images as well, as this would provide further insights into its practical applicability and generalization.
- In Fig. 7, the presented results are not convincingly better than those of the P2P+DreamBooth method. To strengthen the claims, I suggest providing additional comparisons, especially on real images.
- When discussing the swapping of the self-attention layer, it is unclear which self-attention layer in the U-Net architecture you are referring to. Please provide clarification on which specific self-attention layer is being swapped, as this information is essential for replicating your results and understanding the significance of this modification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: refer to Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have stated the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our task and appreciating the results. Beyond the great suggestions, we address all the concerns as follows: > larger scale human evaluation comparison with P2P on both synthetic and real images Thanks so much for the suggestion. During this rebuttal, we conducted additional human evaluation on 400 image pairs containing 200 real images and 200 synthetic images, each with 5 votes. As you suggested, we share the P2P+DreamBooth baseline comparison below, where Photoswap consistently outperforms P2P on all three aspects.

| Metrics | Photoswap | P2P+DreamBooth | Tie |
| :---- | :-----: | :-----: | ----: |
| Subject Identity Preservation | **43.4%** | 30.0% | 27.6% |
| Background Preservation | **39.3%** | 30.2% | 30.5% |
| Overall Swapping Quality | **37.3%** | 27.1% | 35.6% |

Table 1. Human evaluation on 400 image pairs (200 real and 200 synthetic images). Each image pair received 5 ratings from Amazon Mechanical Turk. > Qualitative comparison with P2P on real images. As suggested, we attached the qualitative comparison with P2P+DreamBooth in Figure 2 of the attached pdf. From the example, we can clearly see that Photoswap performs better on both subject swapping and background preservation, especially in preserving the pose of the source subject. We will include more comparisons on real images in the revision. > Please provide clarification on which specific self-attention layer is being swapped There are 16 U-Net layers in the Stable Diffusion backbone. We tested the effect of different layers according to their position (upper, middle, or lower) in the U-Net and their latent size. Through hundreds of experiments on layer combinations, we found that while the latent size of a layer plays a minor role, the position of the swapped layer matters. More specifically, we find that the essential part is to perform the swapping operation in all the decoder layers of the U-Net. 
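For readers unfamiliar with attention-map swapping, a minimal numpy sketch of the general idea follows. This is a toy illustration, not the released Photoswap implementation: shapes are arbitrary, and the exact quantities Photoswap swaps (and at which layers and denoising steps) are specified in the paper. Here we show one common variant, in which the source image's self-attention map is reused while attending over the target branch's value features.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v):
    # Standard scaled dot-product self-attention over one set of tokens.
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))       # (tokens, tokens) attention map
    return attn @ v, attn

def swapped_self_attention(q_src, k_src, v_tgt):
    # Training-free swap: reuse the SOURCE image's self-attention map
    # (which encodes spatial layout/pose) while attending over the TARGET
    # branch's value features (which carry the new subject's appearance).
    d = q_src.shape[-1]
    attn_src = softmax(q_src @ k_src.T / np.sqrt(d))
    return attn_src @ v_tgt

rng = np.random.default_rng(0)
tokens, d = 16, 8                               # toy latent resolution / channels
q_src, k_src = rng.normal(size=(2, tokens, d))  # features from the source branch
v_tgt = rng.normal(size=(tokens, d))            # values from the target branch
out = swapped_self_attention(q_src, k_src, v_tgt)
```

In an actual diffusion U-Net this operation would be applied inside the self-attention modules of the decoder layers during sampling, with the two branches denoised in parallel.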
We will also release code to verify all findings and ensure that all the results are reproducible. --- Rebuttal Comment 1.1: Title: Thanks for your kind words. Do you have other suggestions? Comment: Thank you for your acknowledgment and valuable feedback on Photoswap. We have addressed each of your concerns in the rebuttal above. We would like to ensure that our responses have provided clarity and satisfactorily addressed your queries. Should you have any further questions, or if there are areas where you feel our response might benefit from additional clarification, please do not hesitate to let us know. It is of utmost importance to us to ensure that all your concerns are fully addressed. We truly appreciate your support and guidance in this process.
Rebuttal 1: Rebuttal: Attached pdf Pdf: /pdf/9977078dc30c7032f9315db6a39e080c27706bba.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Lower Bounds on Adaptive Sensing for Matrix Recovery
Accept (poster)
Summary: Sparse recovery for vectors has been studied for a long time. It has been shown that allowing adaptive queries gives extra power to reduce the number of queries. We also have upper and lower bounds in this setting. More recently, sparse recovery for low-rank matrices has also been extensively studied. With non-adaptive queries, $\Omega(n^2)$ queries are necessary. With adaptive queries, based on the power method, we get an algorithm with $O(nr)$ linear measurements in each round over $O(\log n)$ adaptive rounds. It is an interesting question to see if the power method is optimal in this setting. This paper gives an affirmative answer to this question by showing a measurements-vs-rounds trade-off for recovering low-rank matrices using linear measurements. More specifically, let $A$ be an $n$-by-$n$ matrix with rank $r$. In each round, the algorithm can make $k$ queries. Each query is an $n$-by-$n$ matrix $S$, and the answer is $\langle S, A\rangle$ with some Gaussian noise. The goal is to reconstruct a matrix $\hat{A}$ such that $\|\hat{A}-A\|_F\leq c\|A\|_F$. The main result of this paper is that any adaptive algorithm which uses $n^{2-\beta}$ linear measurements in each round must run for $\Omega(\log n/ \log \log n)$ rounds to compute a good reconstruction with high probability. Their techniques also apply to obtain measurements-vs-rounds trade-offs for other numerical linear algebra problems, including low-rank approximation in several different norms, singular vector approximation, etc. Technically, the construction of the hard instance is $A=\frac{\alpha}{\sqrt{n}}\sum_{i=1}^r u_iv_i^\top$, where $u_i,v_i$ are independent Gaussian random vectors. 
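The hard instance and noisy measurement model described in this summary can be sketched in a few lines of numpy. All sizes, the scaling $\alpha$, and the noise level below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k, alpha = 64, 3, 10, 1.0                # illustrative sizes

# Hard instance: A = (alpha / sqrt(n)) * sum_i u_i v_i^T with u_i, v_i
# independent Gaussian vectors.
U = rng.normal(size=(n, r))
V = rng.normal(size=(n, r))
A = (alpha / np.sqrt(n)) * U @ V.T

def measure(A, queries, noise_std=1.0, rng=rng):
    # One adaptive round: for each query matrix S_j, respond with the
    # linear measurement <S_j, A> plus Gaussian noise.
    clean = np.array([np.sum(S * A) for S in queries])
    return clean + noise_std * rng.normal(size=len(queries))

queries = [rng.normal(size=(n, n)) / n for _ in range(k)]  # k query matrices
responses = measure(A, queries)
```

The lower bound argument is, roughly, that in this regime the response vector of the first round is statistically close to pure noise $N(0, I_k)$, so each round can only extract limited information about $A$.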
In the proof of their main result, they first reduce to showing the lower bound for a deterministic algorithm with perfect linear measurements of the random matrix $\frac{\alpha}{\sqrt{n}}\sum_{i=1}^r u_iv_i^\top+G$, where $G$ is a Gaussian matrix, outputting a reconstruction of $\frac{\alpha}{\sqrt{n}}\sum_{i=1}^r u_iv_i^\top$. The key observation is that the distribution of the responses in the first round is close to $N(0, I_k)$, and therefore the algorithm cannot gain much “information” about the target matrix. The proof relies on a random tensor concentration result and Bayes risk lower bounds. Strengths: This paper fills a gap in the field of sparse recovery and makes significant progress in understanding the limitations of adaptive queries in solving numerical linear algebra problems. Their main result is the lower bound for the sparse recovery of low-rank matrices, nearly matching the upper bound via the power method. Even though they use some techniques from prior works, the proofs are still non-trivial. This paper is well-motivated, and the ideas behind their techniques and proofs are clearly presented. Most of the claims are sound to me. Weaknesses: For the applications, it is difficult to judge the significance since there is not enough comparison between the results in this paper and prior works. It seems that some applications are in regimes incomparable to the previous literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Does the lower bound require that all the iterations use $k$ queries uniformly? What if the algorithm can adaptively decide the number of queries in each iteration? 2. It would be better to show some known upper bounds for the problems in Table 1. 3. Line 52: $M\in \mathbb{R}^{n\times n} \rightarrow \mathbb{R}^t$. $t$ should be $k$. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Uniform number of measurements: We note that our lower bound says that even when one is allowed $n^{2 - \beta}$ linear measurements in *each* round, the algorithm must use $\Omega(\log n/ \log \log n)$ rounds to be able to approximate the matrix. So, allowing the number of measurements to be adaptively chosen in each round does not seem to modify the results. - Known upper bounds: In the appendix, we show that under certain conditions, the 2-approximate spectral norm approximation problem can be solved in $O(\log n^2/k)$ rounds using $k$ linear measurements in each round via the subspace iteration algorithm. We will include a discussion in the paper on the upper bounds for other problems and add references to them. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I keep my score.
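The subspace iteration (power method) upper bound discussed in this thread can be illustrated with a small numpy sketch. Dimensions, the round count, and the noise level are illustrative, and the linear measurements (the entries of $AQ$ and $A^\top Q$, each round costing $O(nr)$ of them) are simulated directly rather than through a noisy oracle:

```python
import numpy as np

def subspace_iteration(A, r, rounds, rng):
    # Each round observes A @ Q and A.T @ Q (O(nr) linear measurements),
    # and the sketch Q is adaptively re-orthonormalized between rounds.
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.normal(size=(n, r)))
    for _ in range(rounds):
        Q, _ = np.linalg.qr(A @ Q)        # push toward the top left subspace
        Q, _ = np.linalg.qr(A.T @ Q)      # push toward the top right subspace
    return Q

rng = np.random.default_rng(0)
n, r = 40, 2
# Rank-r signal plus a small perturbation.
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n)) + 0.01 * rng.normal(size=(n, n))

Q = subspace_iteration(A, r, rounds=10, rng=rng)
A_r = (A @ Q) @ Q.T                       # rank-r approximation from the subspace
err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
```

With a well-separated spectrum, a few adaptive rounds already drive the relative error down to the noise floor; the paper's lower bound says that with high noise, substantially fewer than $\log n / \log\log n$ such rounds cannot succeed with subquadratic measurements.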
Summary: This paper focuses on investigating the lower bound of the adaptive low-rank matrix sensing problem. Specifically, the authors demonstrate that when the noise level significantly exceeds the signal, any adaptive algorithm using $o(\log(n)/\log\log n)$ rounds must utilize at least $\Omega(n^2)$ linear measurements in total. This finding highlights an intriguing trade-off between the number of measurements and the number of rounds in various matrix sensing problems within numerical linear algebra. The paper presents a clear message, and the theoretical results are robust and captivating. However, one aspect worth considering is the general interest in the noise level examined in this study. Strengths: - The theoretical results are very sound - The presentation is very clean and easy to follow - The adaptive setting of matrix recovery is of general interest Weaknesses: This paper examines the case where the noise level is assumed to be $O(1)$, while the signal of each entry is considered to be $O(1/\sqrt{n})$. It is worth noting that in a more typical scenario where both signals and noises are of magnitude $O(1)$, a single round with $\Omega(nr)$ measurements, as presented in [5], would suffice. This raises questions about the significance of studying this extreme case and the role that $\sigma_l$ plays in the lower-bound trade-off. To provide more justification for studying this extreme case, it would be beneficial for the authors to elaborate on the motivations and implications behind their choice of noise and signal levels. By offering insights into why this specific scenario is relevant and shedding light on the insights gained from this extreme case analysis, the authors can strengthen the value and message of their paper. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The authors make an assumption that $M^{(i)}$ represents an orthonormal basis and argue that this assumption does not affect the generality of their approach due to the possibility of a change of basis. However, it is important to consider whether this change of basis has any impact on the homogeneous noise level, i.e., $Ag$ for a Gaussian vector $g$ is no longer a homogeneous Gaussian vector if $A$ is not unitary. It would be valuable for the authors to provide further elaboration on this point. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: I do not foresee any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Noise level: You are correct that the lower bounds we study are in the high-noise regime. A main reason for this is that adaptivity does not reduce the number of linear measurements by a large factor in the low-noise regime, since non-adaptive sensing algorithms already recover the low-rank matrix using approximately $O(nr)$ linear measurements, up to some small multiplicative factors. So, it does seem that adaptive algorithms are mainly helpful in the high-noise regime, which is where the study of the measurements-vs-rounds trade-off becomes interesting. While not exactly related to matrix recovery, [1] studies the performance of data-aware projection algorithms when the matrix is corrupted with large noise. - Orthonormality of the measurements: We will elaborate on this. You are correct that the assumption about orthonormal measurements is not without loss of generality. For example, the standard assumptions in matrix recovery allow the same linear measurement to be performed multiple times, obtaining results with independent noise in each of the measurements, whereas our lower bounds, which assume that the measurements are orthonormal, do not account for this scenario. We will emphasize this in the next version. We do note that this loss of generality is not an issue for the lower bounds that we present for other problems such as spectral norm low rank approximation, etc. [1] Abdullah, Amirali, et al. "Spectral approaches to nearest neighbor search." 2014 IEEE 55th Annual Symposium on Foundations of Computer Science. IEEE, 2014.
Summary: In this paper, the authors discuss the power of adaptive algorithms in low-rank approximation. This is a setting where one observes general linear measurements of a matrix and wants to produce its best rank-$r$ approximation. It is known that non-adaptive algorithms need order $n^2$ measurements and that with $\log n$ rounds a spectral algorithm works with nearly linear measurements. The authors discuss whether something can be done with $o(\log n)$ rounds. ## Contribution The authors prove that for an algorithm that works with $o(\log n/\log \log n)$ rounds, we need order $n^{2-o(1)}$ measurements. Strengths: The authors point to the interesting importance of having access to approximately $\log n$ rounds, as they prove that having access to $o(\log n/\log \log n)$ rounds is like having one round in terms of measurement complexity. I find the contribution clean, correct, and interesting. Weaknesses: It is a little disappointing that the authors cannot rule out $o(\log n)$ rounds, and instead only rule out $o(\log n/\log \log n)$ rounds. Could they comment more on that weakness? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: It seems that for the lower bound the authors only assume that the target matrix is a rank-$r$ Gaussian spike plus Gaussian noise. Is that true? If so, please highlight it, as this is a rather simple "bad" example on which to build a lower bound, which makes the result even more appealing. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - $o(\log n)$ vs $o(\log n / \log\log n)$: Our Bayes risk analysis, which union bounds over the information growth in each of the rounds, is what leads to the $\Omega(\log n/\log \log n)$ rounds lower bound instead of the more desirable $\Omega(\log n)$ rounds lower bound. However, we do note that, as stated in Eq (5), if we want algorithms that succeed with probability $\ge 1 - 1/\text{poly}(n)$, we can obtain an $\Omega(\log n)$ adaptive round lower bound on any algorithm that uses $n^{2 - \beta}$ linear measurements in each round. - Hard instance being rank-$r$ Gaussian + Gaussian: Yes, the distribution of matrices for which we prove the lower bound is a rank-$r$ Gaussian (sum of $r$ outer products of independent Gaussian vectors) + another Gaussian. We will highlight this. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and I maintain my score.
Summary: This paper studies the problem of low-rank matrix reconstruction from linear measurements, which is a matrix generalization of the well-known sparse reconstruction setting. They provide new lower bounds for this problem under different error metrics, such as the Frobenius norm, essentially showing that non-trivial reconstruction error is impossible unless $\Omega(\log n)$ adaptive rounds of measurements are made. The hard instance is a rank-1 matrix planted into i.i.d. Gaussian noise. Strengths: - The problem solved is fundamental and to the best of my knowledge the contributions are new and generally applicable. - The technical steps are clearly explained and well written, and look sound. Weaknesses: - The presentation in Sections 1 and 1.3 could be improved. While I like that the explanation goes into technical detail, it could be significantly improved by having an outline of the 2-3 most significant contributions and a more modular presentation. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there any potential connection between this work and the hardness results used in [1] for matrix completion based on planted clique (or the references within)? Are there any implications for matrix completion? [1] Yudong Chen, Incoherence-Optimal Matrix Completion, IEEE Transactions on Information Theory, 2015 Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Thanks for the comments and question. Since the value of an entry of a matrix can be computed using a linear measurement, it is indeed true that lower bounds on linear measurements must also carry over to the matrix completion setting. As our lower bounds are in the “noisy” linear measurements model, they are not quite applicable to your reference, which studies the recovery of exact matrices from randomly observed entries. When there is no noise, an $n \times n$ rank-$r$ matrix can be recovered using only $O(nr)$ linear measurements over two adaptive rounds as follows: In the first round, compute $AS$ where $S$ is an $n \times O(r)$ matrix with independent Gaussian entries. Note that $AS$ can be computed using $O(nr)$ linear measurements. We can show that with large probability $\text{rank}(AS) = \text{rank}(A)$ and hence the column space of $AS$ is the same as the column space of $A$. Then, if $U$ is an orthonormal basis for the column space of $A$, we can compute $U^T A$ using another $O(nr)$ linear measurements. So in the noiseless setting, we can recover the rank-$r$ matrix using only $O(nr)$ linear measurements over 2 rounds, as opposed to the lower bounds in our setting, which say that one needs ~$O(n^2)$ measurements to recover the matrix in $o(\log n/ \log \log n)$ rounds. - In the relevant setting of matrix recovery with noise (e.g., [1]), the previous works seem to study only very low noise regimes. So the tightness of the results cannot be inferred from our lower bounds. [1] Candes, Emmanuel J., and Yaniv Plan. "Matrix completion with noise." Proceedings of the IEEE 98.6 (2010): 925-936. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response.
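The two-round noiseless recovery procedure described in this rebuttal can be sketched in a few lines of numpy; the dimensions and the oversampling factor $2r$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 4

# Rank-r target matrix (unknown to the algorithm; only linear measurements
# of it are observed -- here we simulate them exactly, i.e., noiselessly).
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))

# Round 1: compute AS for an n x O(r) Gaussian sketch S -- O(nr) measurements.
S = rng.normal(size=(n, 2 * r))
AS = A @ S                              # each entry is one linear measurement
U, _ = np.linalg.qr(AS)                 # orthonormal basis with col(A) in its span

# Round 2 (adaptive: U depends on round 1): U^T A -- another O(nr) measurements.
UtA = U.T @ A
A_hat = U @ UtA                         # exact recovery in the noiseless setting
```

Since $\mathrm{col}(AS) = \mathrm{col}(A)$ with probability 1, the projection $UU^\top A$ returns $A$ exactly; the paper's point is that this two-round trick breaks down once the measurements carry large noise.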
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Using persistent homology to understand dimensionality reduction in resting-state fMRI
Reject
Summary: The paper is an empirical study of different dimensionality reduction (DR) methods for brain activity data. The authors compare various approaches to brain representation using MRI data from the Human Connectome Project. In the manuscript, "brain representations" are called "dimensionality reduction" (DR) since they present brain MRI data in a compact way. By DR, the authors mean ways of presenting brain activity: selecting brain segmentations (parcellations), measuring activity inside these zones, calculating cross-correlations, etc. The definition is different from the one used in the ML/AI community, where by DR we mean algorithms like t-SNE, UMAP, etc. The authors use topological data analysis (persistent homology, topological bootstrap, prevalence score) to evaluate the quality of DR. Substantial computational resources (80,000 CPU hours over the course of a month) were spent on this evaluation. Strengths: Neuroimaging is an active field of research. The paper is technically correct in my opinion. Some recent tools of TDA (like prevalence, topological bootstrap) are applied. Weaknesses: First of all, I'm not an expert in neuroscience, so I can evaluate only the dimensionality reduction/topology part. 1. In the manuscript, "brain representations" are called "dimensionality reduction" (DR) since they present brain MRI data in a compact way. By DR, the authors mean ways of presenting brain activity: selecting brain segmentations (parcellations), measuring activity inside these zones, calculating cross-correlations, etc. The definition is different from the one used in the ML/AI community, where by DR we mean algorithms like t-SNE, UMAP, etc. 2. No contribution for ML/AI/DL. A typical conclusion from the study: "As expected, we also saw that feature number was a more important driver of persistence structure than the underlying rank of the decomposition". The conclusion gives insights about types of brain features, not DR algorithms from ML. 3. 
I didn't understand some parts of the paper with neuroscience jargon (cortical parcels, grayordinates, subject space). Captions in Fig. 2 are not explained; what does "Schaefer600 pNMs Psim-ztrans" mean, etc.? 4. Some references are missing and some relevant methods are not evaluated. Overall, the paper seems to fit the traditional neuroscience community more than the NeurIPS community. But I can be mistaken. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Why not evaluate representations after DR via Representation Topology Divergence [1]? 2. Why don't you use standard tools for DR, like t-SNE, UMAP? 3. Recently, some tools for topologically-aware dimensionality reduction were proposed [2, 3]. Can they be applied to your problem? 4. The topological bootstrap and prevalence are rather novel tools in topological data analysis. Given the significant computational budget (bootstrapping and calculating persistence diagrams 1000 times), is there a real benefit in using them? [1] Barannikov, S., Trofimov, I., Balabin, N., & Burnaev, E. (2021). Representation topology divergence: A method for comparing neural network representations. arXiv preprint arXiv:2201.00058. [2] Moor, M., Horn, M., Rieck, B., & Borgwardt, K. (2020, November). Topological autoencoders. In International conference on machine learning (pp. 7045-7054). PMLR. [3] Trofimov, I., Cherniavskii, D., Tulchinskii, E., Balabin, N., Burnaev, E., & Barannikov, S. (2023). Learning topology-preserving data representations. ICLR' 2023. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses:** 1. [...] The definition is different from the one used in ML/AI community, where by DR we mean algorithms like t-SNE, UMAP, etc. * We apologize for our confusing terminology. As the reviewer correctly notes, we focus on domain-specific feature sets that typically involve a brain parcellation and feature choice (see the ‘Terminology’ section in the general response). Although these specific feature sets differ from generalizable DR approaches, we believe that our flexible mathematical framework has broader applications for DR comparisons (see the ‘Relevance to NeurIPS’ section in the general response). 2. [...] The conclusion gives insights about types of brain features, not DR algorithms from ML. * We believe that the flexible mathematical framework developed here can be used across domains and for domain-general DR comparisons (see the ‘Relevance to NeurIPS’ section in the general response). 3. Captions in Fig. 2 are not explained; what does "Schaefer600 pNMs Psim-ztrans" mean, etc.? * We apologize for this oversight. In this example, "Schaefer" refers to the parcellation choice (the Schaefer parcellation), "600" refers to the rank of that parcellation, pNMs is shorthand for "partial network matrices," and "Psim-ztrans" denotes that the Fisher-transformed Pearson divergence was used as the dissimilarity measure for this embedding. 4. Some references are missing and some relevant methods are not evaluated. * We apologize for this oversight. Overall, the paper seems to fit the traditional neuroscience community more than the NeurIPS community. But I can be mistaken. * We believe our general mathematical framework, which encompasses comparisons of embeddings/networks/DRs/etc., is of interest to researchers in the NeurIPS community (see the ‘Relevance to NeurIPS’ section in the general response). **Questions:** 1. Why not evaluate representations after DR via Representation Topology Divergence [x]? 
* We were not aware of this method and agree that it is very interesting and shares many goals with our analysis. We are excited to validate against it in future work. However, we believe that our method is applicable to a larger class of representation comparisons since it does not require a shared vertex set and may provide a more granular accounting of the degree of difference between representations. 2. Why don't you use standard tools for DR, like t-SNE, UMAP? * One of our primary goals for this work was to address the challenge of analytical flexibility within the neuroimaging domain (see the ‘Novelty and contributions’ section in our general response), and we therefore chose domain-specific feature sets. Nevertheless, we believe our flexible mathematical framework has domain-general applications (see the ‘Relevance to NeurIPS’ section in the general response). 3. Recently, some tools for topologically-aware dimensionality reduction were proposed [x, x]. Can they be applied to your problem? * Our goal was to evaluate the impact of analytical flexibility that is typical of large sectors of the neuroimaging community; as such, these fall outside of our scope. However, these methods do fall within the purview of our comparison framework and look extremely interesting; it is possible that we will be able to include them in future work, and we appreciate the reviewer bringing them to our attention. 4. The topological bootstrap and prevalence are rather novel tools in topological data analysis. Given the significant computational budget (bootstrapping and calculating persistence diagrams 1000 times), is there a real benefit in using them? * We apologize for not motivating this decision more clearly in the manuscript. Our persistence data tends to lie primarily near the birth-death line (see **Figure 1** in the general response figure page); however, some work has found important structure in this topological “noise” [e.g., 1]. 
However, most techniques make statistical claims on persistence intervals, rather than homology generators themselves [2-4]. Our data do not exhibit high-persistence topological features, but do show nontrivial topological structure. The inclusion of the topological bootstrap offers insight to data that exhibits topological structure other than a small number of very persistent features, extending the reach and sensitivity of our comparison framework. [1] P. Bubenik, M. Hull, D. Patel, and B. Whittle, “Persistent homology detects curvature,” Inverse Problems, vol. 36, no. 2, p. 025008, Jan. 2020, doi: 10.1088/1361-6420/ab4ac0. [2] B. T. Fasy, F. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, and A. Singh, “Confidence sets for persistence diagrams,” Annals of Statistics, vol. 42, no. 6, pp. 2301–2339, Mar. 2013, doi: 10.1214/14-AOS1252. [3] R. J. Adler, S. Agami, and P. Pranav, “Modeling and replicating statistical topology, and evidence for CMB non-homogeneity,” Proc. Natl. Acad. Sci. U.S.A., vol. 114, no. 45, pp. 11878–11883, Nov. 2017, doi: 10.1073/pnas.1706885114. [4] E. Onaran, O. Bobrowski, and R. J. Adler, “Functional Central Limit Theorems for Local Statistics of Spatial Birth-Death Processes in the Thermodynamic Regime.” arXiv, Feb. 23, 2022. doi: 10.48550/arXiv.2202.02766. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thanks for your support during the review process. The authors have provided additional clarifications to your questions—please acknowledge the rebuttal briefly and ask any additional questions you might have. Thanks,\ Your AC --- Rebuttal Comment 1.2: Title: Response Comment: I appreciate authors for addressing my questions. In my opinion, the manuscript has potential and novelty (applications of prevalence scores, study of DR methods for neuroimaging data). But the text itself should be improved by taking into account comments from my review and other reviews. I don't see an updated version of the manuscript here. 
So, I prefer to leave my evaluation unchanged.
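As a concrete reading of the "Psim-ztrans" (Fisher-transformed Pearson) dissimilarity discussed in the rebuttal above, here is a hypothetical numpy sketch; the exact definition used in the paper may differ, and the conversion from similarity to dissimilarity below is one plausible choice:

```python
import numpy as np

def fisher_z(r, eps=1e-7):
    # Fisher z-transform of Pearson correlations; clip to avoid +/- infinity.
    return np.arctanh(np.clip(r, -1 + eps, 1 - eps))

def ztrans_dissimilarity(X):
    # X: (subjects, features). Pearson-correlate every pair of subjects,
    # Fisher-transform the similarities, and convert them to distances so
    # that higher correlation maps to smaller dissimilarity.
    R = np.corrcoef(X)                  # (subjects, subjects) similarity matrix
    Z = fisher_z(R)
    np.fill_diagonal(Z, Z.max())        # self-similarity is maximal
    return Z.max() - Z

rng = np.random.default_rng(0)
D = ztrans_dissimilarity(rng.normal(size=(12, 300)))
```

A matrix like `D` is the kind of input a Vietoris-Rips persistence computation would take when comparing subject embeddings across pipelines.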
Summary: This paper investigates shared geometric structure across different (very broadly speaking) dimension reduction algorithms for functional brain connectivity. The authors examine different connectivity representations through persistent homology via topological statistics and bootstrapping. Strengths: - The paper is written in clear language. - The paper proposes novel metrics to evaluate graph structures. Weaknesses: - This paper is quite ambiguously written and not self-contained, although the language is clear. - I believe the presentation can be far improved by explaining the background better, restructuring paragraphs, and better describing the mathematical notation. - For example, topological sampling is not well explained, and readers would have to rely on other papers. Also, sections do not flow smoothly, as mathematical notations are either not consistent or variables in equations do not connect. - There are no baseline experiments, and the result from the analysis cannot be properly validated. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - The authors mention that the primary goal is to compare the structural changes in a single neuroimaging dataset under a variety of brain representations, which is not so clear. Where are the "changes" coming from? - Where is $D \sim 10^8$ in line 40 coming from? This is explained far later in line 123 and is very data-specific. - $d$ in Section 1 is a dimension, then it becomes a dissimilarity in Section 2.2, and then it becomes persistence diagrams in Section 2.3.3. - There is no baseline or comparison with other methods. At least there should be some naive approach that demonstrates the benefits of the proposed analysis. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: There is a section describing the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses:** This paper is quite ambiguously written and not self-contained although the language is clear. * We apologize for our lack of clarity. Please see the ‘Terminology’ section in the general response for further information. I believe the presentation can be far improved by explaining background better, restructuring paragraphs and better description of mathematical notations. * Thank you for this feedback, which we will make sure to implement in our future work. For example, topological sampling is not well explained and readers would have to rely on other papers. Also, sections do not flow smoothly as mathematical notations are either not consistent or variables in equations do not connect. * We apologize for the inconsistencies in our mathematical notation, and we regret that we cannot give a detailed treatment of the topological bootstrap in the space provided. There are no baseline experiments and the results of the analysis cannot be properly validated. * We have performed some baseline experiments using an alternative domain-specific method (‘canonical correlation analysis’) -- see **Figure 2** of the figure page in the general response. Unfortunately, we were unable to include this work due to space restrictions, but it will be published in our future work. **Questions:** The authors mention that the primary goal is to compare the structural changes in a single neuroimaging dataset under a variety of brain representations, which is not so clear. Where are the "changes" coming from? * We apologize for our lack of clarity. In discussing ‘changes’, we meant to refer to differences in subject embeddings due to analytical flexibility. Where is D~10^8 in line 40 coming from? This is explained far later in line 123 and very data specific. * This is the dimensionality of the original acquired neuroimaging data. 
Although we agree that this number is data-specific, we note that the general problem of high-dimensional data analysis is encountered in many different domains. d in section 1 is dimension, then it becomes dissimilarity in section 2.2, and then it becomes persistence diagrams in section 2.3.3. * We apologize for the inconsistencies in our mathematical notation. There is no baseline or comparison with other methods. At least there should be some naive approach that demonstrates the benefits of the proposed analysis. * We have performed some baseline experiments using an alternative domain-specific method (‘canonical correlation analysis’) -- see **Figure 2** of the figure page in the general response. Unfortunately, we were unable to include this work due to space restrictions, but it will be published in our future work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their comments and clarifications. However, I agree with SXyw on several weaknesses of this paper, e.g., lack of clarity, motivation and self-containedness, as well as the lack of baseline experiments, which should have been in the original manuscript. Venues like NeurIPS are not journals where reviewers can keep their eyes on a manuscript as it is revised up to publication; therefore, I have to maintain my current score.
Summary: The authors study shared geometric structure across different dimensionality reduction (DR) algorithms applied to neuroimaging data (fMRI data from the Human Connectome Project). In particular, they compare different DR algorithms (which they call "brain representations") by applying them to the same data sample, comparing the resulting Vietoris-Rips complexes in each low-rank data embedding using a modified topological bootstrap, and clustering on the resulting estimated topologies. Strengths: The authors do introduce a framework for the comparison of DR methods that works with any data or dissimilarity measure amenable to Vietoris-Rips filtration, and apply their method to real neuroscience data. Weaknesses: The authors seem to have put in a good amount of work. However, the paper currently lacks motivation or clear significance. Instead, the paper reads as if a number of different parcellations and DR methods were all applied to a public dataset, after which a number of somewhat justified, somewhat arbitrary sequential analysis decisions were made to arrive at a clustering to determine similarity. Good background is given on each step, but it is not clear where the novelty lies here, or why these steps are the right ones. It is unclear exactly what deeply useful conclusions can be drawn from this analysis. The writing and definitions could be much clearer throughout. Some terms are used before they are defined well, some terms with precise meaning seem misused (e.g., "induced topology" seems misused: "induced on...data"), and some terms are highly redundant and misleading (e.g., "brain representations"). The github link is not really anonymized (there is another project at the same github link clearly from the "Personomics Lab"). 
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: "Brain representations" is confusing, sounds like something it isn't, and results in redundant confusing phrases like "We frame brain representation as a manifold learning problem" when you've already said you would use the terms "brain representation", "dimensionality reduction" and "manifold learning" synonymously. Why not just use "DR method" as you do in lines 62-112? After wading through this relatively confusing exposition, I am disappointed to be left with little idea as to the impact of the findings. Evidently methods cluster according to feature number and type...why does this warrant publication in NeurIPS? What is the major new insight into DR methods enabled by this approach? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors have a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The writing and definitions could be much clearer throughout. Some terms are used before they are defined well, some terms with precise meaning seem misused (e.g., "induced topology" seems misused: "induced on...data"), some terms are highly redundant and misleading (e.g., "brain representations"). * We apologize for the lack of clarity in our definitions and any imprecise uses of terminology in our manuscript. For the specific example given: we intended this phrase in the sense of a "metric-induced topology," since we are only able to realize projection topologies via some choice of dissimilarity measure. Since not all of our measures of dissimilarity are actually metrics, we settled on "induced topology" as a (hopefully evocative) alternative to "metric-induced topology" in the manuscript. However, the reviewer is obviously correct to point out that this is unlikely to coincide with the subspace topology on our embeddings (considered as embedded submanifolds of the "original" subject space S), and we are happy to shift our terminology to avoid this misimpression in the future. The github link is not really anonymized (there is another project at the same github link clearly from the "Personomics Lab"). * We apologize for this oversight. **Questions:** "Brain representations" is confusing, sounds like something it isn't, and results in redundant, confusing phrases like "We frame brain representation as a manifold learning problem" when you've already said you would use the terms "brain representation", "dimensionality reduction" and "manifold learning" synonymously. Why not just use "DR method" as you do in lines 62-112? * We apologize for the lack of clarity in our definitions. Please see the section ‘terminology’ in the general response for more information. [...] why does this warrant publication in NeurIPS? What is the major new insight into DR methods enabled by this approach? 
* We believe that our method addresses important domain-specific challenges of analytical flexibility (see the ‘Novelty and contribution’ section in the general response) and provides a flexible mathematical framework with future extensions toward broader domain-general comparisons of dimensionality reduction approaches (see the ‘Relevance to NeurIPS’ section in the general response). --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thanks for your support during the review process. The authors have provided additional clarifications to your questions—please acknowledge the rebuttal briefly and ask any additional questions you might have. Thanks,\ Your AC
Summary: The paper proposes an approach for comparing different standard dimensionality reduction techniques (DRTs) for fMRI, using topological data analysis tools. In this case, these DRTs compute lower-dimensional representations of the Human Connectome Project dataset. Then, for this specific dataset, the authors associate to each DRT (and to each feature type of each DRT) divergence matrices computed using correlations of these representations. Finally, these matrices, seen here as distance matrices, are compared to each other by 1. Computing persistence diagrams associated to the Vietoris-Rips complexes of these matrices, 2. Reweighting these diagrams by *prevalence scores*, 3. Computing Wasserstein distances between these diagrams, 4. Clustering the DRTs using this distance matrix. Strengths: Looking at the fundamental differences between different dimension reduction techniques is a very interesting topic: on one hand, this can help increase the performance of these methods by cleverly taking all of them into account; on the other hand, it gives insight into what these algorithms are retrieving from the original data. This also motivates ignoring the individual statistical performances of these algorithms. Furthermore, looking at the topology, and the *prevalence* of topological structures, of the output of such methods, for such a geometric dataset (brain representations), seems to be a very interesting and promising idea. Weaknesses: Some techniques used to tackle this problem are not natural or are not motivated enough, either intuitively or theoretically; this applies especially to the prevalence-weighted Wasserstein distance. In particular, I have the following comments: - The bijections between persistence diagrams usually add the points of the diagonal, with infinite mass. In particular, - how is the prevalence defined on the diagonal? 
- If the diagonal has no mass here, there are also a few problems - A bijection may not exist (the cardinalities may differ) - This distance is not symmetric, as points of $d_2$ that are not matched to points of $d_1$ are not taken into account. - Multiplying points in the diagram by a real number corresponds to a **homothety with center $(0,0)$**, which raises a few questions - To my knowledge, 0 plays no particular role in a diagram, so what is the motivation for using it? - For the same prevalence $\alpha$, the rescaling depends on the scale at which the topological structure appears. The same goes for the angular direction in which the points are moved. Is this behavior intended, and why? I think that taking the prevalence into account is an interesting idea, but I'm not convinced that this is the best way to do it. Moreover, there are small typos: - Section 2.1 - Feature-type details could be improved, i.e., mathematical definitions. - Section 3.2.1 - Mention what $k$ is - The Vietoris-Rips filtration is not a graph, but a clique complex (it has simplices of arbitrary dimension) - This section could be expanded, for non-TDA practitioners. - Section 2.3.2 - Clarity could be improved here, - line 196 "multiple data element to [represent] the same homology [class]" - line 205: intervals are never defined Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Looking at the covariance-like matrices of the representations (feature types) for each method, instead of looking directly at the topology they produce, seems strange to me. - How should one interpret the topology of these covariance-matrix-like spaces? - How is this topological information relevant to inspecting the structure of the representations? 2. The different feature types live on very different mathematical spaces. - How can you assert that your model is identifying brain representation structures rather than these mathematical spaces? 
These spaces may have significantly different scale densities, so this is not clear. 3. The paper intentionally does not take pre-representation data into account. - Is the data fundamentally too large to be handled by TDA techniques? - How can one identify whether the features extracted by the different representations are driven by noise or by useful structure in the dataset? 4. The constants used to compute the prevalence are fixed at $R=1000$ bootstraps per resampling, at $90\%$, with homology of degree $1$. - How much do these choices matter? I.e., do the results change with other parameters? - What is the motivation for using these values? Overall, I think the proposed approach is currently too fuzzy and lacks proper motivation, which makes me lean towards rejection. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
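[Editor's note] The prevalence-weighted Wasserstein distance the reviewer questions can be made concrete with a small sketch. This is an illustrative construction only, not the paper's exact definition: diagrams are arrays of (birth, death) points, each point carries a prevalence weight, unmatched points pay a cost to their diagonal projection, and each matched pair's cost is scaled by the pair's mean prevalence. The function name and the mean-prevalence weighting scheme are assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def prevalence_weighted_wasserstein(dgm1, w1, dgm2, w2, p=2):
    """Toy prevalence-weighted p-Wasserstein distance between two
    persistence diagrams (arrays of (birth, death) rows) with per-point
    prevalence weights w1, w2 in [0, 1]. Transport costs are scaled by
    prevalence; unmatched points pay a (weighted) cost to the diagonal."""
    dgm1, dgm2 = np.asarray(dgm1, float), np.asarray(dgm2, float)
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    n, m = len(dgm1), len(dgm2)
    # Distance from each point to its diagonal projection ((b+d)/2, (b+d)/2).
    diag1 = np.abs(dgm1[:, 1] - dgm1[:, 0]) / np.sqrt(2)
    diag2 = np.abs(dgm2[:, 1] - dgm2[:, 0]) / np.sqrt(2)
    # Augmented square cost matrix: real points plus diagonal "slots",
    # so a bijection always exists and the distance is symmetric.
    C = np.zeros((n + m, n + m))
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(dgm1[i] - dgm2[j])
            # Matched cost scaled by the mean prevalence of the pair.
            C[i, j] = (0.5 * (w1[i] + w2[j]) * d) ** p
    for i in range(n):            # dgm1 point matched to the diagonal
        C[i, m:] = (w1[i] * diag1[i]) ** p
    for j in range(m):            # dgm2 point matched to the diagonal
        C[n:, j] = (w2[j] * diag2[j]) ** p
    # Diagonal-to-diagonal matches are free; C[n:, m:] stays 0.
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum() ** (1.0 / p)
```

The diagonal "slots" in the augmented matrix are the standard device that resolves the cardinality and symmetry issues the reviewer raises; the prevalence scaling is the part specific to the paper's proposal, rendered here under the stated assumption.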
Rebuttal 1: Rebuttal: **Weaknesses** Some techniques used to tackle this problem are not natural or are not motivated enough [...], especially the prevalence-weighted Wasserstein distance. In particular, I have the following comments: * The bijections between persistence diagrams usually add the points of the diagonal, with infinite mass. In particular, - how is the prevalence defined on the diagonal? - Prevalence is defined only for nontrivial homology generators. It follows that prevalence is not defined on the diagonal, because no nontrivial generators of the persistence module are represented on the diagonal of the diagram. - If the diagonal has no mass here, there are also a few problems - A bijection may not exist (the cardinalities may differ) - This distance is not symmetric, as points of $d_2$ that are not matched to points of $d_1$ are not taken into account. - These are both common practical issues in the use of the Wasserstein distance on persistence diagrams that are typically solved by the addition of a separate $L^p$ penalty for added/removed points. We apologize for not more carefully distinguishing the definition of the Wasserstein distance on persistence diagrams from its definition on probability measures. * Multiplying points in the diagram by a real number corresponds to a homothety with center $(0,0)$, which raises a few questions [...] * We are not multiplying points in the diagram by the prevalence value; rather, we are using the prevalence score to scale the contribution of a given generator to the p-Wasserstein cost. We apologize for the lack of clarity on this point in the manuscript. I think that taking the prevalence into account is an interesting idea, but I'm not convinced that this is the best way to do it. * We appreciate and understand your concern that the prevalence-weighted Wasserstein distance is untested and does not receive detailed theoretical treatment in this work. 
However, it does have precedent among other members of a family of weighted and generalized Wasserstein distances [1,2], though this family may generally fail to share some important properties of the unmodified Wasserstein distance [3]. Nonetheless, we feel it has a clear intuition: less-repeatable homology generators make a correspondingly smaller contribution to the Wasserstein transport cost between diagrams. We offer it as a first pass at combining both prevalence (statistical) and persistence (topological) information in a single measure and hope to characterize both its properties and alternatives in future work. In addition, we eagerly anticipate any refinements other groups may propose for the combination of prevalence and persistence information. [1] T. de Wet, “Goodness-of-fit tests for location and scale families based on a weighted L2-Wasserstein distance measure,” Test, vol. 11, no. 1, pp. 89–107, Jun. 2002, doi: 10.1007/BF02595731. [2] B. Piccoli and F. Rossi, “Generalized Wasserstein distance and its application to transport equations with source,” Arch Rational Mech Anal, vol. 211, no. 1, pp. 335–358, Jan. 2014, doi: 10.1007/s00205-013-0669-x. [3] L. Lombardini and F. Rossi, “Obstructions to extension of Wasserstein distances for variable masses.” arXiv, Dec. 09, 2021. Accessed: Aug. 09, 2023. [Online]. Available: http://arxiv.org/abs/2112.04763 **Questions** 1. [...] - Covariance matrices have a well-characterized Riemannian structure (the symmetric positive-definite cone), which is why we use its geodesic distance as a choice of “dissimilarity measure” when comparing "covariance matrix-like" features (network matrices, spatial network matrices, and partial network matrices). - The reviewer is correct to point out the relevance of the feature-space structure. 
It is quite possible that the "feature-produced" topological structure is the dominant (over parcellation & network definition) analytic choice influencing inter-subject variability -- indeed, this is precisely what we believe our results suggest. However, this is still useful information: for those working in the domain of application, it is important to know if and when feature choices overshadow DR choices. 2. Our paper does not make this assertion. In fact, one of our central interpretations of our results is that projections into feature space typically impose stronger topological constraints than do parcellation/network definition choices. However, we do claim that we find differences in representational structures. By comparing within feature type and across dimension reduction type, we can be confident that the topological differences we observe are due to the choice of brain representation. 3. [...] * Yes. A full pre-representation dataset is on the order of ~10TB, which is larger than we are able to process with TDA and the resources we can access. * This is where we find utility in the topological bootstrap; sampling stability is our best proxy distinction between signal and noise. However, it is not possible to be certain, because our data does not have a ground truth. See the “validation” section in our general response for further discussion. 4. Re: bootstrap params and homology degree - Due to computational resource constraints, we were not able to fully characterize our results’ sensitivity to these parameters. However, intermediate testing work suggested that our results are similar (but with lower power) for R=100-250 bootstraps and resampling rate at 80%. - We chose R = 1000 bootstraps to maximize the accuracy of our prevalence estimates. - We chose a 90% resampling rate to keep the data structure relatively consistent while allowing a very large number of unique resamples. 
- We restrict to H1 because (1) time and memory complexity for higher homology computations scales quickly and (2) minimal homology representatives are crucial to the topological bootstrap and can currently only be efficiently computed for H1. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Thank you for your explanations. I understand the point of the paper better now. However, I still think the lack of clarity, motivation and self-containedness, as well as lack of baseline experiments, are too salient to accept the work in its current form.
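[Editor's note] The bootstrap parameters discussed above (resampling rate, number of resamples R, matching tolerance) can be illustrated with a toy prevalence estimate. The sketch below substitutes H0 (single-linkage merge heights, i.e., Euclidean minimum spanning tree edge lengths) for the paper's H1 computation, and matches features across resamples by death-time proximity rather than by minimal homology representatives; all function names, the tolerance-based matching, and the parameter defaults are assumptions of this sketch, not the authors' procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_deaths(X):
    """H0 persistence death times of a point cloud: the edge lengths of
    its Euclidean minimum spanning tree (single-linkage merge heights)."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    return np.sort(mst[mst > 0])

def bootstrap_prevalence(X, rate=0.9, R=200, tol=0.1, seed=0):
    """For each H0 feature of the full data, estimate the fraction of
    resamples (subsampling `rate` of the points without replacement) in
    which a feature with a similar death time reappears. Crude stand-in
    for matching minimal homology representatives across resamples."""
    rng = np.random.default_rng(seed)
    ref = h0_deaths(X)                    # full-data reference features
    hits = np.zeros(len(ref))
    k = int(rate * len(X))
    for _ in range(R):
        idx = rng.choice(len(X), size=k, replace=False)
        deaths = h0_deaths(X[idx])
        for i, d in enumerate(ref):
            if np.any(np.abs(deaths - d) < tol):
                hits[i] += 1
    return ref, hits / R                  # death times and prevalences
```

On a toy two-cluster cloud, the single long-lived feature (the merge of the two clusters) recurs in essentially every resample and so receives prevalence near 1, while sampling-noise features are less stable, which is the intuition behind weighting generators by prevalence.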
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable input to our work. Please find our general response summary below, in addition to the item-wise responses we provide to individual reviewers. We also include figures showing typical persistence diagrams and a naive approach. **Novelty and contribution** Our primary goal in this work is to characterize the impact of analytical flexibility in neuroimaging analysis pipelines. Analytical flexibility is a major challenge in psychology and neuroimaging that encompasses the wide range of acceptable analysis steps and parameter decisions available to researchers. Analytical flexibility leads to tens of thousands of different versions of valid analysis pipelines with extensive variability in results. Importantly, analytical flexibility has a siloing effect on the field because it restricts cross-pollination of findings and ideas to occur only within analytically aligned subsets of the scientific community. Analytical flexibility is especially challenging to characterize and address in resting state neuroimaging because feature sets differ in both size (depending on parcellation/network definition) and structure (depending on the feature choice). The major novel contribution of this work is to develop a mathematical framework that enables robust and generalizable statistical comparisons between resting state neuroimaging feature sets that differ in size and nature. Importantly, the findings reveal that differences between neuroimaging feature sets are primarily driven by the feature choice rather than by parcellation or network definition. These results suggest that future work into clinical and behavioral neuroimaging correlates should focus more on feature type comparisons and less on parcellation choices. 
Separately, we also believe this work makes an important novel contribution in its use of the topological bootstrap to show “that cycles with low persistence may carry meaningful structure” in high-dimensional real data with complex topology. **Terminology** Our reviewers have collectively pointed out some ambiguities in our terminology around “dimension reduction,” “brain representations,” and “manifold learning”. We agree with these comments and apologize for our lack of clarity. The goal of this study was to compare different feature sets derived from resting state neuroimaging data. Reviewer 5 correctly notes that the extraction of these feature sets typically involves a spatial summary of the brain into parcels or voxel-weighted networks, each of which is characterized by timeseries. From these timeseries, a variety of features can be calculated including amplitudes (overall strength), functional connectivity (between-parcel temporal correlations), etc. We agree that the terminology of ‘parcellation’ and ‘feature set’ is preferable over dimension reduction (which is too domain-general), brain representation (which encompasses both parcellation and feature choice), and manifold learning (which we use primarily to refer to parcellation). We did not investigate more general dimensionality reduction approaches such as t-SNE and UMAP because these techniques are relatively uncommon in the neuroimaging domain. Nevertheless, we believe that our flexible analysis framework extends to broader applications beyond the domain-specific comparisons performed here. **Relevance to NeurIPS** Though our motivation is firmly rooted in the neuroimaging domain (as explained above), we also believe our investigation offers meaningful contributions more broadly. 
First, to our knowledge, most of the literature evaluating dimension reduction considers the regime where (a) the ambient dimension is smaller than the number of data samples and (b) the target dimension is low (often two or three dimensions). By contrast, we work in an ill-conditioned (samples << original dimension) regime with high original and target dimension, which is common in many DR use cases. As such, our domain-specific analyses offer a valuable testbed for very high-dimensional data. Secondly, we believe that our method offers an extremely broad and flexible framework for future comparisons of dimension reduction beyond the field of neuroimaging. In particular, because we can conduct this analysis on any datasets compatible with Vietoris-Rips filtration, we are not limited to, e.g., comparing graphs with identical vertex sets, and can consider broad classes of data and methods. **Motivation for validation approach** Understandably, several reviewers commented on our decision not to validate our analysis on synthetic manifolds or simulated data, and, relatedly, our method’s ability to distinguish useful structure from noise. We apologize for not making our reasoning clearer in the manuscript. While we agree that manipulable validation is very important, we do not feel that simulations or synthetic manifolds are viable options in our problem context. Because brain activity simulation remains an open problem and synthetic manifolds tend to have different structure, sampling, and dimensionality characteristics than our data, we validated via statistical stability instead of synthetic data. Though we do not currently have plans for synthetic/simulation testing of this method, we are planning to characterize the topological behavior of several relevant null models in future work; because random geometric complexes remain under active theoretical investigation ([1-3]), the empirical characterization of null models in our data regime is of interest. [1] D. 
Yogeshwaran, E. Subag, and R. J. Adler, “Random geometric complexes in the thermodynamic regime.” arXiv, Sep. 09, 2015. doi: 10.48550/arXiv.1403.1164 [2] O. Bobrowski, M. Kahle, and P. Skraba, “Maximally Persistent Cycles in Random Geometric Complexes.” arXiv, May 15, 2016. doi: 10.48550/arXiv.1509.04347 [3] O. Bobrowski and M. Kahle, “Topology of random geometric complexes: a survey.” arXiv, Jul. 23, 2017. doi: 10.48550/arXiv.1409.4734 Pdf: /pdf/54ca689ffad792e7c13df87694bf8b3c1c8a6f8a.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: A very interesting and rigorous exercise of comparison of embeddings for the purpose of evaluating manifold learning is presented. The chosen framework is rooted in trendy topological concepts such as persistent homology for the purpose of analyzing the data topology, mixed with geometry-based measures of similarity across representations, and coupled with a stochastic (topological) bootstrap to study variation over co-embeddings. Contributions are explicitly mentioned in lns 113-117 and certainly delivered. Best of my lot for this year. Strengths: + The idea is exceptional IMHO. I’ve known of (and used myself) some other frameworks trying to establish and understand similarities or dissimilarities of projections, but they were all much more naïve. This one offers a clearly more sophisticated approach that yields a much richer picture without substantially sacrificing interpretability of results, with the “only” price to pay being a very large computational cost. + Extremely well explained despite the very complex concepts involved. + The observation in lns 290-1 that cycles with low persistence may carry meaningful structure is something that I have thought myself occasionally but couldn’t put my finger on it, nor articulate it, nor know how to reveal it. This is a very nice confirmation of that intuition. Weaknesses: Not many that I can see, to be honest… + The study is only conducted on experimental data from the Human Connectome Project and is never validated on known synthetic manifolds with and without added noise. This means that some of the observations in the last part of the draft are (most likely correct but) difficult to verify. For instance, the implication that the proposed framework distinguishes more feature types and numbers than representation types. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: + Lns 150-2: Is the separate treatment of the vectors and matrix intentional? 
Would a tensorial treatment lead to a more homogeneous framework? + Lns 298-9: In order to “shift” the focus towards the representation, perhaps one can substitute Chatterjee’s correlation for Pearson’s correlations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: + Lns 92-106 literature review is perhaps missing some of the most primitive approaches; e.g. distance distortion plots, (graph/manifold) isomorphisms, … this is possibly intentional as the persistence element there is only implicit rather than explicit as in the case of the works reviewed. But if it wasn’t, well, it is perhaps convenient to at least mention some early efforts. + Perhaps the number of compared embeddings (brain representations) is not as large as one would like for this type of exercise, but the authors clearly state why they stay at low numbers (computational cost) and promise larger comparisons in the future. Looking forward to those! Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Questions: Lns 150-2: Is the separate treatment of the vectors and matrix intentional? Would a tensorial treatment lead to a more homogeneous framework? - The treatment is intentional, since we chose dissimilarity metrics based on common methods of comparison in the application (neuroimaging) literature; these tend to use separate methods of comparison between the two, so we chose consistency with the literature over a homogeneous framework. In particular, tensorial treatments are typically reserved for matrix comparisons in the neuroimaging literature. However, a more homogeneous tensorial framework might yield more direct comparisons between methods; either choice of trade-off may yield helpful insights, and we appreciate the reviewer's thoughtful question. Lns 298-9: In order to “shift” the focus towards the representation, perhaps one can substitute Chatterjee’s correlation for Pearson’s correlations. - We thank the reviewer for this suggestion and will consider it for future iterations of this work. Limitations: Lns 92-106 literature review is perhaps missing some of the most primitive approaches… - We agree and apologize for this oversight. --- Rebuttal Comment 1.1: Comment: I reckon mine was the easy answer as I truly like this work a lot. I can see some of the issues raised by other colleagues, and while, fair enough, clarity may not have been so obvious to others (although it was to me) and self-containment could perhaps be improved, these are solvable issues after all. They do not seem to me to be inherent flaws of the method itself, nor do they invalidate the results, so I truly wish the best of luck to this submission after this rebuttal.
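[Editor's note] The Chatterjee correlation suggested by the reviewer has a simple closed form that is easy to sketch. A minimal implementation of the xi_n statistic (assuming no ties in y): sort the pairs by x, take the ranks of y in that order, and sum the absolute differences of consecutive ranks. The function name is chosen for this sketch.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi_n (no-ties case). Unlike
    Pearson's r, xi tends to 1 when y is a (noiseless) function of x,
    monotone or not, and to 0 under independence."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    order = np.argsort(x)                      # sort pairs by x
    r = np.argsort(np.argsort(y[order])) + 1   # ranks of y in that order
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n ** 2 - 1)
```

For a strictly monotone y the statistic equals 1 - 3/(n + 1), approaching 1 as n grows; this insensitivity to the functional form is what makes it a candidate replacement for Pearson's correlation when the goal is to detect any dependence structure.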
null
null
null
null
null
null
Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback
Accept (spotlight)
Summary: This paper proposes to use a diffusion model with expert feedback to improve the quality of synthetic medical images. The key motivation is the embedding of RLHF for medical image generation. To achieve this, there are three steps: (1) pre-train a generation model and collect expert feedback based on several manual rules; (2) train a model to predict the feedback scores; (3) incorporate the scores to fine-tune generative models. Extensive experiments demonstrate the effectiveness of the proposed model. Overall, this is an interesting and sound paper. Strengths: 1. RLHF-based generation model for medical knowledge incorporation. Interesting and useful. 2. Reasonable framework. The proposed model is based on conditional diffusion models and the whole process is sound. 3. Extensive evaluation. Besides the commonly used quality-related metrics and downstream performance, the new knowledge discovery and visual results are also impressive. Weaknesses: 1. Reproducibility. It would be better to make the code public and improve reproducibility. 2. Manual rules. The expert feedback is highly related to the manually designed rules. It would be better to see the performance with different amounts of rules or with more rules. Also, please elaborate more on how these rules are defined. In other words, it is not clear why the designed rules can capture complex image quality and small image differences. 3. Downstream performance. There is only one downstream task to demonstrate effectiveness. More tasks should be considered. Meanwhile, why not show the performance of downstream models trained on both synthetic and real images? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See the above weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Besides the comments on the paper's weaknesses, please also consider: 1. Even though it is a kind of application paper, please add some discussions to state the technical contributions. 2. The authors claimed such a synthetic way could be useful for rare diseases. Please provide the evidence. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
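The three-step loop summarized in the review above (collect expert feedback, train a feedback predictor, fine-tune with the predicted scores) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names are hypothetical, scalar "scores" stand in for generated images, and the reward predictor is a trivial threshold stand-in for a trained network.

```python
# Illustrative sketch of the three-step feedback pipeline; all names are
# hypothetical and scalar "scores" stand in for generated images.

def collect_feedback(synthetic_images, expert):
    """Step 1: an expert labels each synthetic sample as plausible (1) or not (0)."""
    return [(img, expert(img)) for img in synthetic_images]

def train_reward_model(labelled):
    """Step 2: fit a predictor of the expert's feedback.
    Here: a midpoint threshold, standing in for a trained classifier."""
    pos = [x for x, y in labelled if y == 1]
    neg = [x for x, y in labelled if y == 0]
    cutoff = (min(pos) + max(neg)) / 2
    return lambda x: 1.0 if x >= cutoff else 0.0

def reward_weighted_loss(nll_per_sample, rewards):
    """Step 3: weight each sample's likelihood loss by its predicted reward,
    steering fine-tuning toward clinically plausible generations."""
    return sum(r * l for r, l in zip(rewards, nll_per_sample)) / len(nll_per_sample)
```

In this toy version, samples the reward model rejects contribute nothing to the fine-tuning loss, which is the essence of the reward-weighted objective the reviewers discuss.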
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please refer to the combined response above for a general overview of the improvements and changes that have been incorporated in the revised manuscript. **Weaknesses** 1- We have prepared a GitHub repository with all the code required to reproduce our experiments. The code will be released upon acceptance and the end of the anonymity period. 2- As you mentioned, we have indeed demonstrated the relationship between the amount of expert feedback and model performance in **Figure 4 a) and b)**. Our results show that as we incorporate more feedback, the quality of synthetic images improves. Regarding the use of more rules or alternative sets of rules, we agree that this could potentially enhance the model performance. However, due to time constraints, we were unable to explore this aspect in depth. Nevertheless, we have ensured that the rules used in our study are comprehensive and based on established clinical knowledge found in relevant textbooks. These rules were carefully chosen to capture the complex image quality and subtle differences between cell types, which are crucial for generating clinically plausible synthetic medical images. In our revised manuscript, we will provide a more detailed explanation of the process of defining these rules and their relevance to the evaluation of complex image quality and small image differences. We will also discuss the potential benefits of exploring additional or alternative sets of rules in future studies, as well as the limitations imposed by time constraints. 3- We appreciate your feedback on the downstream performance evaluation and the suggestion to consider more tasks and explore the performance of downstream models trained using both synthetic and real images. (1) We believe that we have demonstrated the effectiveness of our approach through three downstream tasks.
First, we showed that the synthetic data quality is improved by both clinician evaluation (**Main Table 3**) and computational-based qualitative measurements (**Main Table 4**). Second, we demonstrated that human feedback could improve the synthetic data's ability for cell type classification in an amount-dependent manner (**Main Table 5, Figure 4 a,b**). Lastly, we showed that specific human feedback could drive the conditional generation of new concepts (**Main Figure 4 c**). However, your suggestion inspired us to perform another downstream task, where we conducted a Turing test-style experiment: asking a pathologist to distinguish synthetic data from real data and evaluating whether the fine-tuned model can confuse the pathologist more. **Rebuttal Table 5** shows that the pathologist-in-the-loop framework could indeed improve the model's performance in generating high-quality clinical image data that confuses the pathology expert more, as reflected by the decrease in accuracy. (2) We appreciate your suggestion to explore the performance of downstream models trained using both synthetic and real images. We conducted the experiment (**Rebuttal Table 4**). As can be seen from the table, leveraging both real and synthetic data together benefits morphology-based cell classification. We will incorporate these findings into the refined manuscript. **Limitations** Thanks for your suggestions. In the revised manuscript, we will add a more elaborate discussion on the technical contribution. Regarding the potential utilization in diagnosing rare diseases: we would like to highlight two key aspects of our pathologist-in-the-loop framework that support its potential application in this context: 1. Our framework facilitates the generation of high-quality synthetic images for rare cell types, which may have a frequency of less than 1% in healthy individuals.
While common image generation pipelines might struggle to generate high-quality images due to the low frequency of these cell types, our pathologist-in-the-loop framework helps improve image quality. This can be particularly valuable in diagnosing rare diseases where high-quality images of rare cell types are essential. 2. Our method, as described in **Subsection 2.4**, allows for the rapid incorporation of new concepts into trained medical image generative models. The concept introduced in this study for differentiating subtypes based on nuclear morphology can potentially be extended to identify pathological variations observed in cancer diagnoses, such as changes in nuclear size, shape, texture, and nuclear: cytoplasmic ratio. Additionally, this approach could be applied to cytoplasmic features. For rare diseases, generating synthetic data using our framework may provide the best option for creating sufficient training data to develop and evaluate AI models effectively. --- Rebuttal Comment 1.1: Comment: Overall, I am satisfied with the author's response and hope these comments could improve the final version of this paper.
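The rare-disease argument above hinges on topping up under-represented cell types with synthetic samples. A minimal sketch of that idea, assuming a hypothetical `generate(cls)` that draws one synthetic sample for a given class (illustrative only, not the paper's code):

```python
from collections import Counter

def balance_with_synthetic(real_labels, generate, target_per_class):
    """Top up each under-represented class with synthetic samples until every
    class reaches target_per_class; common classes are left untouched."""
    counts = Counter(real_labels)
    synthetic = []
    for cls, n in counts.items():
        # generate only the shortfall for this class
        synthetic.extend(generate(cls) for _ in range(max(0, target_per_class - n)))
    return synthetic
```

For a class with 1% prevalence, most of the generated samples would come from the rare class, which is exactly the regime where the quality gains from expert feedback matter most.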
Summary: This paper develops a framework that generates synthetic medical images aligned with the clinical knowledge of doctors through training a reward model based on pathologists' image plausibility annotations. The intuition for their very simple but powerful approach that depends on incorporating clinical expert knowledge into the diffusion models by using the reward model to inform a finetuning objective arises from their observation: "designing domain-specific objective functions for finetuning foundational models that ensure a generative model adheres to clinical knowledge is challenging". Their presentation is very clear, and empirical evaluations & ablations are convincing. I recommend this paper for acceptance beyond my concerns around how well this paper fits into NeurIPS venue. Strengths: - Very clear presentation - Simple but powerful idea that brings in an idea from a different domain (LLMs) to medical imaging - Convincing empirical results and ablations Weaknesses: My main worry about this paper is that NeurIPS might not be the best venue, given the algorithmic novelty is limited here. However, I do think this is an interesting application of integrating domain-specific knowledge to models through RLHF-style training. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How do you think your findings might change as the diffusion models get stronger in creating more plausible images (given the rate of implausible images was higher than 80% for some cell types)? - How do you think you might generalize your method to other modalities such as interpreting MRI images where clinical plausibility criteria might be more limited? (might be harder to define the criteria beyond aliasing artifacts in certain anatomies?) Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Authors have not explicitly described limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please refer to the combined response above for a general overview of the improvements and changes that have been incorporated in the revised manuscript. **Weaknesses** This paper serves as an application submission to NeurIPS, aligning with the conference's well-established track for such submissions. While our paper is not motivated by a core technical problem, we believe that the application of ideas from the RLHF to clinical problems is by itself a novel idea that might have significant impact in our application domain of interest. We believe that presenting our work at NeurIPS would provide valuable insights to the broader AI and machine learning community, demonstrating the potential impact of integrating domain-specific knowledge in real-world applications. **Questions** 1- As diffusion models become stronger and generate more clinically plausible images, we anticipate that the rate of implausible images would decrease. However, the importance of incorporating human feedback in the loop would still remain significant for several reasons: (1) Ensuring clinical validity: Even with stronger diffusion models, it is crucial to align the generated images with clinical knowledge. Expert feedback would continue to play a vital role in refining the model's performance and ensuring clinical validity for various cell types and rare diseases. (2) Learning new clinical concepts: Human feedback not only helps improve the quality of synthetic images but also teaches the model new clinical concepts not annotated in the original training data. This aspect would remain valuable regardless of the initial strength of the diffusion model. (3) Adapting to evolving medical knowledge: Medical knowledge and best practices continue to evolve over time. 
Incorporating human feedback allows the model to stay up-to-date with the latest clinical understanding and maintain its relevance and utility in the ever-changing healthcare landscape. In conclusion, while we acknowledge that stronger diffusion models may generate more plausible images, the integration of human feedback would still be crucial for ensuring clinical validity, learning new concepts, and adapting to evolving medical knowledge. 2- We acknowledge the potential difficulties in defining criteria for modalities like MRI; however, we believe our method can still be adapted and generalized with some modifications: (1) Collaborating with domain experts: For modalities like MRI, it is essential to work closely with radiologists or other relevant domain experts who possess the necessary knowledge to identify and define the clinical plausibility criteria. Their expertise would help to set guidelines and provide feedback on the generated images. (2) Developing modality-specific reward models: To account for the unique characteristics and challenges associated with different imaging modalities, we can develop modality-specific reward models. These models would be trained to predict expert feedback on the generated images for the particular modality, ensuring that the human-in-the-loop framework remains effective in refining the generated images' clinical plausibility. (3) Leveraging auxiliary data: In cases where clinical plausibility criteria are harder to define, we can leverage auxiliary data, such as clinical reports, annotations, or patient metadata, to guide the model training and evaluation. This additional information can help the model learn more nuanced and context-specific criteria for generating clinically plausible images. 
(4) Incorporating active learning techniques: In situations where defining clinical plausibility criteria is challenging, active learning techniques can be employed to selectively query expert feedback on the most uncertain or ambiguous cases. This approach would ensure that the model gains the most valuable information from experts, even when the criteria are not easily defined. In conclusion, we believe that our method can be generalized to other imaging modalities, such as MRI, by closely collaborating with domain experts, developing modality-specific reward models, leveraging auxiliary data, and incorporating active learning techniques. These adaptations would help ensure the effectiveness of our human-in-the-loop framework in generating clinically plausible images across various modalities. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanations, my assessment remains unchanged.
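Point (4) above, selectively querying expert feedback on the most uncertain generations, can be sketched as follows (names are illustrative, not the paper's code). With reward scores in [0, 1], samples scored nearest 0.5 are the ones the reward model is least sure about:

```python
def select_for_expert_review(images, reward_scores, k):
    """Return the k images whose predicted plausibility is closest to 0.5,
    i.e. where the reward model is least certain, so that scarce expert
    annotation time is spent where it is most informative."""
    ranked = sorted(zip(images, reward_scores), key=lambda p: abs(p[1] - 0.5))
    return [img for img, _ in ranked[:k]]
```

Confidently scored images (near 0 or 1) are skipped, and only the ambiguous cases reach the pathologist.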
Summary: The authors propose to include a pathologist in the loop for medical data synthesis. The pathologist can be replaced by training a reward model, which is used to fine-tune the generation model. Strengths: - I admire that the authors broke down the barriers between disciplines, and I believe their proposed pathologist-in-the-loop synthetic data generation framework is a promising strategy for improving the generation of biomedical samples. It is the main reason why I give a positive score. As "pathologist-in-the-loop" is the key contribution, I personally encourage that the raw scores from doctors and the corresponding images should be published in pairs after acceptance -- then we can check the quality of the manual annotations. - The paper is well-organized and the experiments are comprehensive. Weaknesses: - The authors only give the overall performance. I wonder if there is a direct verification for the reward model. - It would be better if this idea could be evaluated on other datasets. Of course, I know it is hard. - The technical innovation seems to be limited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: A direct evaluation of the reward model is helpful. I am curious about the relationship of the reward model quality and the synthesis performances. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors claimed that they have discussed the limitations, but it is not clear to me. Maybe I missed something.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please refer to the combined response above for a general overview of the improvements and changes that have been incorporated in the revised manuscript. **Weaknesses** 1- Thanks for suggesting a direct verification of the reward model. We provide the reward function performance on a held-out validation set, which is also annotated by clinical experts (**Rebuttal Table 2**; please refer to the pdf attached with the combined response to check the results). The reward function performance improves with more feedback, which correlates with the downstream classification performance demonstrated in **Main Figure 4**. In the revised manuscript, we will include detailed performance metrics for each cell type in the appendix, rather than showing only the overall performance. This should provide a more comprehensive understanding of the model's effectiveness. 2- We concur that assessing our methodology on additional datasets is crucial to validate its applicability across various contexts. To address this concern, we have attempted to gather another independent dataset from an external institute. Regrettably, due to time constraints, we were only able to annotate 20 images per class for the training set, a significantly smaller number compared to the 128 per class in the dataset presented in the paper. Despite this limitation, we proceeded to train a diffusion model as the baseline and employed the same workflow to fine-tune it. The reward function remained consistent; however, it was applied to synthetic images generated from the new and independent dataset. The results are shown in **Rebuttal Table 3** (please refer to the pdf attached with the combined response to check the results). Owing to the limited data, the baseline performance was substantially inferior to that of the primary dataset.
Nonetheless, the pathologist-in-the-loop approach still managed to significantly enhance the model's performance. This indicates that our method is effective even when applied to smaller and distinct datasets. 3- We believe that the novelty and significance of our work lie in its application and the unique combination of existing techniques to address the challenges of generating clinically plausible synthetic medical images. To the best of our knowledge, our study is the first to leverage human feedback to model clinician preferences in the context of medical image generation. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Thank you for your reply; the responses and the revised paper resolve my concerns. However, I still hope the authors can refine the work mentioned in the second point and present the final results in the final version. Thank you!
Summary: This paper introduces a pathologist-in-the-loop framework for generating clinically plausible synthetic medical images. The training process is similar to generative adversarial nets, with two major modifications. First, the discriminator was trained by human input instead of real/fake labels. Second, the generator was replaced by Diffusion Models. Evaluation of synthetic bone marrow patches by expert hematopathologists, leveraging thousands of feedback points, demonstrates the significant quality enhancement achieved through human input. Strengths: 1. I found the paper to be well-written and informative, and I thoroughly enjoyed reading it. 2. The generation of synthetic data holds significant importance. 3. The extensive visualization of the synthetic images, along with the notable improvement resulting from the pathologist's involvement, is highly convincing. Weaknesses: 1. Missing an important baseline method that incorporates feedback from a real/fake binary classifier. This classifier can distinguish between plausible (real) and implausible (fake) images without relying on pathologists' feedback or cell-type labels (automated feedback). It is imperative to investigate whether utilizing this information can enhance the ability of Diffusion Models to generate realistic images. 2. The clinical plausibility criteria presented in Table 1 pose challenges for implementation in various other tasks. Firstly, developing the checklist items necessitates domain expertise. Secondly, disagreements among pathologists may arise regarding these criteria. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Despite incorporating 100% pathologist feedback, there appears to be a discernible disparity between synthetic and real data. Could you elaborate on the underlying reasons for this gap and propose potential solutions to mitigate it? 2. What if pathologists are unable to differentiate between real and synthetic images? 
For example, in some cases [1], the experts may not be able to distinguish the real/synthetic images. 3. What is the rate of implausible images after fine-tuning the diffusion model? 4. Consider integrating both real and synthetic data for training purposes. To what extent can this integration enhance the overall performance? This is particularly important as the authors have not yet achieved comparable performance of AI models trained solely on synthetic data when compared to those trained on real data. **Reference** [1] Hu, Qixin, Yixiong Chen, Junfei Xiao, Shuwen Sun, Jieneng Chen, Alan L. Yuille, and Zongwei Zhou. "Label-free liver tumor segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7422-7432. 2023. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No potential negative societal impact was detected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please refer to the combined response above for a general overview of the improvements and changes that have been incorporated in the revised manuscript. **Weaknesses** 1- We appreciate the reviewer's suggestion to include a baseline method that incorporates feedback from a real/fake binary classifier. This classifier would distinguish between plausible (real) and implausible (fake) images without relying on pathologists' feedback or cell-type labels, offering automated feedback. Following your suggestion, we conducted experiments where we trained a binary classifier, referred to as a naive classifier, on real and artificial images without relying on pathologists' feedback or cell-type labels. However, this naive classifier did not improve the diffusion model's performance in the downstream cell-typing task, as shown in **Rebuttal Table 1** (Please refer to the pdf attached with the combined response to check the results). In summary, while the real/fake binary classifier offers an interesting approach, our experiments show that it does not lead to improvements in the downstream cell-typing task, which highlights the importance of pathologists' feedback to generate synthetic medical images that hold clinical validity. 2- We acknowledge that developing these criteria requires domain expertise and that disagreements among pathologists may arise. We would like to address these concerns as follows: (1) Domain expertise: It is indeed true that annotating the synthetic medical image quality requires expert knowledge, which is why we have engaged clinician experts to help with the annotation process. We believe that the expert-informed criteria provide a robust foundation for evaluating the clinical plausibility of generated images. 
Furthermore, once this annotation process is completed, the criteria can be potentially applied to other tasks within the domain, such as cell counting and abnormal cell identification. This makes our approach scalable and applicable to a wide range of medical imaging tasks. (2) Disagreements among pathologists: While we acknowledge that disagreements among experts may occur, the criteria we have used are based on standards outlined in medical textbooks. We have made every effort to ensure that our criteria are as unbiased and objective as possible. **Questions** 1- There are several factors that may contribute to the observed disparity between synthetic and real data: (1) Limitations of the generative model: While our generative model is capable of synthesizing visually realistic images, it may not fully capture the complex and nuanced features present in real medical images. This is an inherent challenge in generative modeling, especially when dealing with high-dimensional and intricate data such as medical images. (2) Incomplete representation of clinical knowledge: Although we have incorporated pathologist feedback, there might be aspects of clinical knowledge that are not entirely captured by the reward function. Medical expertise is vast, and it can be challenging to encapsulate all relevant information in a single model. 2- Please note our feedback collection process requires pathologists to identify images that are not clinically valid, and not differentiate between real/synthetic images. In fact, pathologists are only shown synthetic images for feedback. If the pathologist cannot distinguish between real and synthetic images, this means that a synthetic image is clinically valid. 3- As shown in **Main Table 3**, we found that incorporating pathologist feedback led to a significant boost in the quality of synthetic images across all cell types. 
After fine-tuning the diffusion model with expert feedback, the average rate of clinical plausibility increased from 0.21 to 0.75. This improvement demonstrates the effectiveness of our approach in generating more clinically plausible synthetic medical images. 4- Integrating real and synthetic data for training purposes can potentially enhance overall performance; however, it is essential to ensure that the synthetic data is of high quality and clinically plausible. Our current approach aims to improve the quality of synthetic medical images by incorporating pathologist feedback, which can make synthetic data more suitable for integration with real data during the training process. It is important to note that the synthetic data generated in our approach is based on the limited real data available. As a result, integrating synthetic and real data together may not fully address all the issues arising from the limited real data, such as underrepresented classes or rare cases. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed responses to my concerns and questions. I think it is a great work incorporating domain knowledge from human experts into generative models, which is naturally expected to be more beneficial than simply letting the model learn to synthesize from raw data. I hope the authors can include some of the discussion from the review phase in the final version.
Rebuttal 1: Rebuttal: We thank all reviewers for the comprehensive and constructive feedback on our submission. The valuable input received has significantly contributed to improving our work. We have prepared an extensive, point-by-point response for each reviewer, outlining our plans to address their concerns and suggestions for additional analyses to enhance the manuscript. **All new figures and tables can be found in the attached one-page PDF file.** We believe that our response will address the reviewers' concerns and allow us to promptly resolve any remaining minor issues. **Summary of improvements based on reviewers' comments** 1. External validation: We agree that evaluating our methodology on additional datasets is essential for validating its applicability across various contexts. To address this, we collected an independent dataset from a separate hospital, and our pathologist-in-the-loop approach still demonstrated significant improvements in the model's performance, indicating its effectiveness across multiple datasets (**Rebuttal Table 3**). 2. More comprehensive comparison and evaluation: We introduced an additional baseline control using feedback from a real/fake binary classifier to distinguish between plausible and implausible images without relying on pathologist feedback or cell-type labels (**Rebuttal Table 1**). Furthermore, we included a comparison where we trained the downstream classifier with both synthetic and real images (**Rebuttal Table 4**). Lastly, we added a Turing test-style experiment, asking a pathologist to differentiate synthetic data from real data (**Rebuttal Table 5**). These additional experiments collectively enhance the quality and rigor of our work. 3. Validation of the reward model: We presented the reward function performance on a held-out validation set annotated by clinical experts (**Rebuttal Table 2**).
The results show that the reward function's performance improves with more feedback, correlating with the downstream classification performance demonstrated in **Main Figure 4**. This offers a more comprehensive understanding of the model's effectiveness. 4. A more comprehensive comparison illustrating clinician feedback: To better understand the distinctions between plausible and implausible images, we included a visualization of 128 images (64 plausible and 64 implausible) in **Rebuttal Figure 1**. Additionally, we added an appendix containing 512 images, with 32 images for each cell type, to provide a deeper understanding of the different cell types and variations in image quality. These additional visualizations and comparisons effectively address the reviewers' concerns and enhance the clarity of our paper. Once again, we would like to thank all reviewers for their comments and we are looking forward to the discussion period. Pdf: /pdf/1ce257a0d2c77aedc87093b35ea339096ddb360c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a pathologist-in-the-loop framework for generating clinically plausible synthetic medical images using diffusion models. The process involves pretraining a conditional diffusion model on real medical images, then using synthetic images labeled by expert pathologists to train a "clinical plausibility reward model" to predict pathologist feedback on new images. Finally, the diffusion model is fine-tuned using a reward-weighted likelihood loss that incorporates the reward model, to align synthetic outputs with clinical knowledge. The method is evaluated on a bone marrow cell image dataset. Results show the proposed method incorporating pathologist feedback significantly improves the clinical plausibility, fidelity, diversity, and downstream utility of synthetic images. Strengths: 1) Clinically plausible medical image synthesis is an interesting and important problem. This paper provides a promising human-in-the-loop framework to incorporate clinical knowledge. 2) The method is evaluated extensively both qualitatively and quantitatively. The results demonstrate clear improvements from incorporating expert feedback. 3) The paper is well-written and easy to follow. The method is described in sufficient detail. Weaknesses: 1) The criteria used by the pathologist for judging clinical plausibility could be described more precisely. Are there any quantitative metrics for each criterion? 2) Only binary feedback is collected from the pathologist. More fine-grained ratings could provide a richer supervision signal. 3) Evaluations could be conducted on multiple datasets and with higher resolution images. 4) The comparison to "automatic feedback" using one classifier is somewhat weak. Better baselines or more comparisons could be evaluated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) The proposed method is only evaluated on a single dataset.
Experiments on more datasets may be necessary to validate the generalizability of the method. Experimenting on higher resolution image synthesis could also be important for potential clinical usage. 2) It might be helpful to include more visual or qualitative comparisons between synthetic images marked as "implausible" or "plausible". 3) Ye et al. also proposed a relevant idea using automatic feedback as the reward in [1]. It might be worth discussing this paper. [1] Ye et al. Synthetic Sample Selection via Reinforcement Learning. MICCAI 2020. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors could consider discussing the limitations and potential negative societal impact of this work: Limitations: 1) The method has only been evaluated on one medical imaging modality/dataset (bone marrow). Applicability to other datasets or modalities with different visual and clinical characteristics is unclear. 2) The framework requires extensive pathologist time for labeling synthetic images. Scalability to large diverse datasets may be problematic. 3) Binary plausibility labels may not capture nuanced aspects of clinical knowledge. More granular ratings could be beneficial. 4) The synthetic images have low resolution (64x64). Quality and utility for diagnosis may degrade for higher resolution synthesis. Example negative societal impacts: 1) If not carefully validated, inaccuracies in synthetic medical images could mislead or harm practitioners relying on them for diagnosis/treatment. 2) Widespread availability of synthetic medical data could lead to privacy risks if reconstructed from patient data without consent. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
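The reward-weighted likelihood fine-tuning described in the review summary above can be sketched as scaling each synthetic image's training loss by the plausibility reward model's prediction. This is a minimal numpy sketch with hypothetical names, not the authors' actual diffusion fine-tuning code:

```python
import numpy as np

def reward_weighted_loss(per_image_nll, plausibility_rewards):
    """Reward-weighted likelihood objective (illustrative sketch only):
    each image's negative log-likelihood is scaled by the clinical
    plausibility reward predicted for it, so fine-tuning emphasises
    images the reward model scores as plausible."""
    nll = np.asarray(per_image_nll, dtype=float)
    r = np.asarray(plausibility_rewards, dtype=float)
    return float(np.mean(r * nll))
```

With binary pathologist-style rewards, images scored 0 (implausible) drop out of the objective entirely, while images scored 1 contribute their full likelihood term.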
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Please refer to the combined response above for a general overview of the improvements and changes that have been incorporated in the manuscript to address the reviewers' concerns. **Weaknesses** *1- Criteria for evaluating clinical plausibility* In the final manuscript, we will further elaborate on the qualitative criteria for evaluating clinical plausibility. However, **we would like to stress that quantitative metrics are not possible as cells can be implausible for a wide range of unpredictable reasons.** This is precisely the motivation behind our work; if there were a finite number of ways in which images can be invalid and if there were quantitative metrics for evaluating those failures, there would be no need for human feedback. *2- Limitation to binary feedback* We appreciate the reviewer's suggestion of collecting more fine-grained ratings from the pathologist to provide a richer supervision signal. Indeed, incorporating a more detailed feedback system could potentially enhance the learning process and improve the model's performance. However, there are some practical considerations that led us to opt for binary feedback. Binary feedback simplifies the annotation process for the pathologist, reducing the cognitive load and minimizing potential inconsistencies in the ratings. In a clinical setting, it is crucial to balance the workload of the pathologist while ensuring that the feedback provided is both accurate and meaningful. Furthermore, our results suggest that binary feedback is an effective supervision signal for generating clinically plausible synthetic medical images. That being said, we acknowledge the potential benefits of more fine-grained ratings and will consider exploring this approach in future work. *3- Image resolution* We would like to clarify the context in which these images are generated and used, as it may help address the concern about the resolution. 
The resolution of the single cell patch may appear low, but these are cropped from whole slide images scanned at 400x magnification, which is the industry standard for clinical whole slide scanners. It is important to note that clinicians routinely use this resolution, or even lower resolution, for cell counting and making diagnoses. Thus, the resolution of the synthetic images in our study is consistent with the resolution used in real-world clinical settings and should not impact the utility of our approach for diagnosis. Our methodology is designed to be compatible with the standard industry practices and resolution requirements in the clinical context. *4- The automatic feedback baseline* We thank this reviewer for suggesting more baseline experiments. Accordingly, we add one more baseline control that incorporates feedback from a real/fake binary classifier. This classifier would distinguish between plausible (real) and implausible (fake) images without relying on pathologists' feedback or cell-type labels, offering automated feedback. Please see our reply to **Reviewer q99y** for the results. **Answers to Questions** 1- We leveraged another independent dataset and showed that the framework is generalizable to the external dataset. Please see our response for **Reviewer n6aW** for the results. 2- We appreciate the reviewer's suggestion to include more comparisons between synthetic images marked as "implausible" and "plausible." As per your suggestion, we have included a visualization of 128 images, wherein 64 images are considered realistic, and the remaining 64 are deemed implausible. This visualization will provide a clear comparison between plausible and implausible synthetic images (**Rebuttal Figure 1**) and please refer to the pdf attached with the combined response to check the results. Additionally, we have added an appendix to the paper, which contains 512 images, with 32 images for each cell type. 
We hope that these additional visualizations and comparisons will effectively address the reviewer's concerns and enhance the clarity of our paper. 3- We appreciate the opportunity to discuss Ye et al., MICCAI 2020. While Ye et al. propose a reinforcement learning (RL) based method for selecting high-quality synthetic samples to improve the performance of medical image recognition systems, our work focuses on improving the generative model itself by incorporating pathologist feedback. We believe that our approach differs from Ye et al.'s work in three significant ways: (1) generative model improvement, (2) task complexity, and (3) expert validation. We will include a comparison in the revised manuscript. **Limitations:** 1- Evaluation on a single medical imaging modality/dataset: In the revised manuscript, we will discuss this limitation and emphasize the need for future studies to evaluate our approach on various medical imaging modalities and datasets to ensure its generalizability. 2- Pathologist time requirements: We will address this limitation in our paper and explore possible ways to alleviate this issue, such as involving multiple experts, leveraging active learning techniques to minimize the number of labels required, or incorporating semi-supervised or unsupervised learning methods. 3- Binary plausibility labels: In the revised manuscript, we will discuss this limitation and consider potential solutions, such as adopting multi-level or continuous rating scales for clinical plausibility, which may provide a more detailed representation of expert knowledge. Regarding the potential negative societal impact, we understand that the misuse or biased generation of synthetic medical images could have adverse consequences in medical research and decision-making. 
In the revised manuscript, we will discuss the importance of ensuring the ethical use of synthetic medical images, addressing potential biases in the generative models, and considering privacy concerns when generating and sharing synthetic data. --- Rebuttal Comment 1.1: Comment: I appreciate authors' detailed rebuttal and additional results/analysis. I enjoyed reading the paper and rebuttal. Since authors have resolved most of my concerns in the rebuttal, I tend to keep my original rating as Accept.
null
null
null
null
null
null
Bayesian nonparametric (non-)renewal processes for analyzing neural spike train variability
Accept (poster)
Summary: In this paper, the authors introduce the Bayesian nonparametric non-renewal (NPNR) process to model variability in neural spike trains with covariate dependence. The method generalizes modulated renewal processes using sparse variational Gaussian processes. Tested on synthetic data, as well as mouse head direction cell data and rat hippocampal place cell data, NPNR shows its superiority in terms of capturing interspike interval statistics and predictive power. Strengths: + Bayesian nonparametric non-renewal (NPNR) process to model variability in neural spike trains; + Validation of the approach on synthetic data and mouse head direction cell data and rat hippocampal place cell data; Weaknesses: - Some aspects that may be hard to analyze theoretically could at least be investigated experimentally through simulations. For example, how is the inference affected by the dimension of x_t, the number of unmeasured or unknown neurons that can act as perturbations, the number of samples, and the number of units/channels? I.e., how much data (in space and time) is needed for efficient inference? - Within the experimental investigation, it is unclear what ELL values can be considered good predictions. For example, if NPNR is used for k-step-ahead forecasts, how many steps can it achieve with at least 50% accuracy? - Although this is a minor issue, the reader has some difficulty seeing all the plots in Figure 2, in particular the ISI probability plots. I assume they fit a gamma or exponential distribution well. - A major issue is to check the literature and, in an unbiased way, provide an accurate description, even if the problems considered by prior work have a different or more advanced setup. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors mention that the variational inference framework is scalable. 
In the synthetic and real-world experiments, the number of neurons/units is relatively small (less than 50). I wonder if there's any limitation of the method in scaling up to larger ensemble systems with many more neurons? 2. How does the proposed model perform in terms of predictive power when applied to synthetic data? 3. The experiments on mouse head direction cell data and rat hippocampal place cell data show the predictive performance of NPNR and baseline models via the expected log-likelihood. For the two datasets, the range of ELL for the models varies a lot and is hard to interpret. I wonder what value of ELL can be considered a good prediction? For example, if NPNR is used for k-step-ahead forecasts, how many steps can it achieve with at least 50% accuracy? 4. The manuscript states that “Extending point process models with input-dependent variability has not been widely explored…” Multivariate auto-regressive frameworks and models based on multiple covariates have been considered in "A Granger causality measure for point process models of ensemble neural spiking activity." PLoS computational biology 7, no. 3 (2011): e1001110. "Data-driven perception of neuron point process with unknown unknowns." In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, pp. 259-269. 2019. "Variance as a signature of neural computations during decision making." Neuron 69, no. 4 (2011): 818-831. In general, the prior work needs to be checked and discussed more exhaustively; as of now, it is biased and based solely on one group, while there are similar and related works from other groups. 5. Within the context of multiple neuronal recordings, there is always the issue of interference and the problem that we cannot measure exactly N neurons with certainty. 
The activity of N neurons may be influenced by another P neurons, so the question is how we can subtract the effect of such perturbations in order to accurately model the N neurons and their covariates, etc. This again has been tackled in the neuroscience literature, and the authors should check this related problem of understanding neural computations with unknown influences. 6. In the experiments, 1-D, 2-D and 3-D x_t are considered for the NPNR modeling. I wonder if and how the inference can be affected by the dimension of x_t, the number of samples, and the number of units/channels? I.e., how much data (in space and time) is needed for efficient inference? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable in my opinion; this is a mathematical modeling paper with applications in neuroscience. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and helpful feedback. We provide brief comments on the mentioned weaknesses in the relevant questions. *1. The authors mention that the variational inference framework is scalable...* The datasets in this work each contain around 30 selected neurons that are fitted simultaneously and have 1 to 2 million time steps in the training set, giving around 30-60 million data points in total (see appendix C.3). Most recent probabilistic models applied to neural data take in 100s of neurons but with binned spike counts, reducing the temporal length to $O(10^5)$ typically (“Scalable Bayesian GPFA with automatic relevance determination and discrete noise models”, Jensen et al. 2021; “Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains”, Dowling et al. 2023). Hence overall, in terms of the number of data points, this qualifies as a scalable method by recent standards. For the camera-ready version, we will add these numbers to Section 4.2 to explicitly demonstrate the scale of our datasets. *2. How does the proposed model perform in terms of predictive power when applied to synthetic data?* When extending the synthetic experiment to 5 different held-out synthetic datasets:

| model | test ELL (nats / s) |
|:--------:|:--------:|
| Poisson | $40.55 \pm 2.09$ |
| Gamma renewal | $47.07 \pm 2.10$ |
| conditional Poisson | $40.82 \pm 2.22$ |
| NPNR (ours) | $44.93 \pm 2.30$ |

Note the Gamma renewal process is within model class for 3 of the synthetic neurons. We will include these numerical results in the appendix of the camera-ready version. *3. The experiments on mouse head direction cell data and rat hippocampal place cell data...* The two datasets are very different in terms of spiking statistics when comparing figures 3 and 4, hence the difference in ELL values across the two datasets is expected. Such numbers can nevertheless be compared in a relative sense within datasets. 
The test ELLs are also useful for fair comparison of predictive powers for qualitatively different baseline models (Ref. 2, “Construction and analysis of non-Poisson stimulus-response models of neural spiking activity”, Barbieri et al. 2001; Ref. 60, “Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models”, Pillow 2009; Ref. 17, “Non-parametric generalized linear model”, Dowling et al. 2020). Our model requires autoregressive sampling of posterior spike trains as seen in figures 3C and 4C, with each draw having a different CIF. The k-step ahead forecast you proposed is therefore not clear to us; prediction accuracy of the held-out spikes after k steps will vary across different posterior spike train samples. We will point out the difference in ELL values between the two datasets more explicitly in the camera-ready paper with a brief discussion similar to the above. *4. The manuscript states that...* We would like to thank the Reviewer for pointing out these works, some of which were not known to us. - **Kim et al. 2011**: a parametric CIF with a fixed interval history dependence, with a focus on Granger causality structure inference - **Yang et al. 2019**: a parametric GLM framework with unobserved or latent inputs, while considering the dependence of the spike-history on the external/unobserved covariates - **Churchland et al. 2011**: great example of the relevance of studying neural variability for neural coding For the camera-ready paper, we will add these papers to the literature review in the relevant introduction paragraphs with appropriate discussion. 
Our contribution remains unique in the scalable Bayesian nonparametric modeling of spike-history dependence with emphasis on covariate-dependent modulation of instantaneous spiking variability, the autoregressive dependence in terms of interspike intervals rather than spike-train windows, and finally our perspective on the connection to conditional ISI distributions when conceptualizing spiking variability extracted with our model. We will qualify the sentence in question in our Introduction accordingly. *5. Within the context of multiple neuronal recordings...* We agree strongly with the Reviewer on the importance of investigating these issues. The effect of ignoring subsets of neurons generally leads to apparent noise correlations in the neurons we record, which we can capture by augmenting our model with latent variables shared across the population. Overall, capturing neural correlations is orthogonal to our main contribution of modeling the single neuron responses in a more flexible manner, and hence beyond the scope of this paper. Another related tangential topic is the downstream effect of spike sorting contamination (e.g. Ref. 86, “Assessing goodness-of-fit in marked point process models of neural population coding via time and rate rescaling”, Yousefi, A. et al. 2020). *6. In the experiments, 1-D, 2-D and 3-D x_t are considered for the NPNR modeling...* In this study, our synthetic data has 2D inputs $x_t$ and a million time points, and we were able to accurately recover the ground truth (figure 2). Real data had either 1D or 3D $x_t$ and ~2 million time points. Based on the synthetic experiment, we can be quite confident that this was in a sufficient data regime. Our method builds on sparse variational Gaussian processes, and hence inherits the convergence properties and data scaling from such models (Ref. 
63, “Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods”, Rad and Paninski 2010; “Rates of Convergence for Sparse Variational Gaussian Process Regression”, Burt et al. 2019; and the references in appendix B.1 on technical papers introducing sparse variational GPs). We will add and expand the relevant references in appendix B.1 to address these questions for the camera-ready paper, as well as add this discussion topic to Section 3.2 with the relevant references.
Summary: This paper proposes the Bayesian nonparametric non-renewal process (NPNR) for inferring both neural spiking intensity and variability. The tuning curve is based on a sparse variational Gaussian process (GP) prior, considering both spatial and temporal factors. They compare NPNR with other competitors on a synthetic dataset, showing the capability of NPNR in inferring the rate map and renewal density. On the two real-world neural datasets, they show that NPNR outperforms many competitors in terms of event prediction and interspike interval (ISI) recovery, using different statistics combined with visualizations. Strengths: * Clear logical flow and presentation. Key maths are derived elegantly, with lots of necessary details in the Appendix. * Both synthetic and real-world experiments are good and solid, and also supported by the code. * The literature review by the authors is exhaustive, so the comparison between different kinds of models is clear and detailed. * The idea is new and intuitive; it preserves interpretability while not being too simple. Weaknesses: * I'm very excited when seeing the model part. But when I get to Section 3.2, I feel a bit of pity that we still need to do time discretization (convert the continuous timestamps TPP data to spike counts in time bins). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * I think Eq. 15 should be $=$ rather than $\propto$. * I'm wondering if this model can report a predictive log-likelihood using the mean as the estimation for the intensity in each time bin. In such a case, we can compare this model with other models (especially the simple GLM) to show that the proposed NPNR outperforms the GLM? I'm expecting that this model will be slow, but if the firing rate (tuning curve) recovery is better than the GLM's, the predictive log-likelihood (which is actually a golden criterion in neural latent variable models) should be better. 
* Can this model get information on the causal relationships between neurons? Is this model only dependent on time $t$ and external input $\boldsymbol x$, but does not consider influences between neurons? From my understanding, this is not a latent variable model doing information extraction from coupled neurons (like dimensionality reduction), but getting the firing rate for each neuron, and the firing rate is mainly affected by the neuron itself and the external input $\boldsymbol x$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and helpful feedback on the work. **Q&A** *1. I think Eq. 15 should be = rather than ∝.* This would indeed be true if the denominator (normalization constant) in equation 20 (appendix A) was 1. For a Poisson process with $t_i = 0$ (i.e. the most recent spike happened at time 0), this is true and the equality holds. In general, this does not hold and we need to compute the normalization constant which leads to valid ISI densities as plotted in the paper. *2. I'm wondering if this model can report a predictive log-likelihood using the mean as the estimation for the intensity in each time bin. In such a case, we can compare this model with other models (especially the simple GLM) to show that the proposed NPNR outperforms GLM? I'm expecting that this model will be slow but if the firing rates (tuning curve) recovery is better than GLM, the predictive log-likelihood (which is actually a golden criterion in neural latent variable models) should be better.* In fact, the expected log likelihood (ELL) on the test sets is exactly the predictive log-likelihood you mention (see Section 4.2), defined as $ \text{ELL} = \sum_n \mathbb{E}_{q(\mathbf{f}_n)} [ \log p(\mathbf{y}_n | \mathbf{f}_n) ] $ Instead of the predictive log-likelihood, we use the term test ELL as we are working in a variational framework rather than with a maximum likelihood approach. This metric also incorporates the variational posterior uncertainty in the intensity estimation in each bin, similar to metrics used in the Gaussian process literature (“Scalable Exact Inference in Multi-Output Gaussian Processes”, Bruinsma et al. 2020; “Dual Parameterization of Sparse Variational Gaussian Processes”, Adam et al. 2021). The simple GLM (and more advanced variants of it) are actually already presented in figures 2, 3 and 4. These appear under the name “conditional Poisson” or “cond. P” for short (see Section 4.2 first paragraph last sentence). 
We apologize for the slightly unorthodox nomenclature, but this is necessary as we are comparing many variants and extensions of the GLM/SRM family (see Section 2.1.2). The Poisson GLM model has been extended with different kinds of spike-history filters, hence we also add the type of filter to the name. For example, “RC cond. P” is the radial cosine conditional Poisson in figures 3A and 4A, the classical GLM as referred to in the literature (Ref. 80, “Capturing the dynamical repertoire of single neurons with generalized linear models”, Weber and Pillow 2017). Since we run all baseline models using Gaussian process mappings in the same variational framework (see appendix B.2, B.3 and B.4), we also compute the test ELL in a similar way to our method (conventional GLMs use a maximum likelihood linear feature model for the input-output mapping). Since this metric is computed over the whole population, by adding latent input dimensions we would arrive at the predictive likelihood that you mentioned for latent variable models (although this is not done in this study due to the focus of this work). *3. Can this model get information on the causal relationships between neurons? Is this model only dependent on time $t$ and external input $x$, but does not consider influences between neurons? From my understanding, this is not a latent variable model doing information extraction from coupled neurons (like dimensionality reduction), but getting the firing rate for each neuron, and the firing rate is mainly affected by the neuron itself and the external input $x$.* Generally, causal relationships are very hard or impossible to estimate from statistics alone (see Ref. 42, “Inferring structured connectivity from spike trains under negative-binomial generalized linear models”, Linderman et al. 2015). However, we could capture correlations between neurons through latent variables that would co-modulate the CIFs of individual neurons in the population. 
Alternatively, we could include the recent spiking history of other neurons in the input covariates in addition to the self-history of a neuron, similar to the GLM framework (“Spatio-temporal correlations and visual signaling in a complete neuronal population”, Pillow et al. 2008; “Non-parametric generalized linear model”, Dowling et al. 2020). In the NPNR framework the latter would naively lead to a significant increase in the input dimensions, and one interesting further work direction is to encode population spike-history in an efficient way. Indeed, the firing properties of each neuron in this study are only affected by its own history and the external covariates/inputs. For demonstrating our contribution, which is a flexible spike train noise model, this suffices and indeed improves over SOTA methods applied in the same manner. **Weaknesses and limitations** *I'm very excited when seeing the model part. But when I get to Section 3.2, I feel a bit pity that we still need to do time discretization (convert the continuous timestamps TPP data to spike counts in time bins).* We agree with the reviewer and indeed drafts of this work were considering continuous time techniques that are faithful to the continuous-time formulation. However, exact methods for continuous time processes often rely on parametric assumptions that we try to alleviate (“Variational Inference for Gaussian Process Modulated Poisson Processes”, Lloyd et al. 2015; Ref. 18, “Temporal alignment and latent Gaussian process factor inference in population spike trains”, Duncker and Sahani 2018), or use MCMC, generalized thinning and other techniques (Ref. 74, “Gaussian process modulated renewal processes”, Teh and Rao 2011) that are not as straightforward to scale up to millions of time steps as in our work using stochastic variational inference. 
Do note that the discretization is done at 1 ms, which for neural data that has been sorted properly (no significant spike contamination) should not contain more than one spike (a binary time series). --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed replies. I'm still satisfied with this paper and I think the answers have solved most of my concerns.
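The 1 ms discretization described in the rebuttal above converts continuous spike timestamps into a binary time series. A minimal sketch of that preprocessing step (function name and signature are illustrative, not the authors' code):

```python
import numpy as np

def binarize_spikes(spike_times_s, duration_s, dt=0.001):
    """Convert continuous spike timestamps (in seconds) into a binary
    time series at bin width dt (1 ms by default). With properly
    sorted data, no bin should contain more than one spike; we clip
    counts to 1 to keep the series binary, as described above."""
    n_bins = int(np.ceil(duration_s / dt))
    counts, _ = np.histogram(spike_times_s, bins=n_bins,
                             range=(0.0, n_bins * dt))
    return np.minimum(counts, 1)
```

At 1 ms resolution, a 2-million-step training set corresponds to roughly half an hour of recording per neuron, which matches the data scales quoted earlier in the rebuttal.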
Summary: The authors proposed a scalable Bayesian approach which generalizes modulated renewal processes using sparse variational Gaussian processes. They applied the proposed method to simulated data and two real neural datasets and showed that the proposed method is effective on these datasets and outperforms other baseline methods. Strengths: The paper is well written. The authors have done extensive experiments to show the effectiveness of the proposed method. Weaknesses: The proposed method doesn't incorporate any latent variables in the model. (see more in the questions section below.) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Method section: all the formulas are described for 1 neuron. It might be clearer to write the likelihood and loss function in terms of multiple neurons. - I wonder if the authors have done any comparisons to the parametric methods? e.g. Gao Y*, Archer E*, Paninski L, Cunningham JP (2016) Linear dynamical neural population models through nonlinear embeddings. NIPS 2016. - The proposed method doesn't incorporate any latent variables. It might be worth adding latents to discover useful representations from the data and fit the data variability better. I wonder if the model would fit the data worse if one covariate were missing from the inputs? (e.g. only including location and direction in the covariates, not theta phase, in the hippocampus data.) I feel that adding latents might help with this as well. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations and future work of their proposed method. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and helpful feedback. *1. Method Section: all the formulas are described for 1 neuron. It might be clearer to write likelihood and loss function in terms of multiple neurons.* The current work inherently models each neuron separately while fitting the neural population simultaneously, hence the likelihood for all neurons is simply the sum of the log likelihoods in equation 18 $\mathcal{L} = \sum_n \left( \mathbb{E}_{q(\mathbf{f}_n \mid \mathbf{u}_n)} \left[ \sum_{t=1}^T \left( - y_{nt} f_{nt} + \Delta t \, e^{f_{nt}} \right) \right] + D_{\text{KL}}\left( q(\mathbf{u}_n) \,\|\, p(\mathbf{u}_n) \right) \right)$ where in $f_{nt}$ etc. we add the neuron index to the quantities. We will modify this in the camera-ready version to make this more explicit. As you suggest below, introducing latent variables would provide the ability to capture neural (noise) correlations. See our response to question 3 below for a brief discussion on this topic. *2. I wonder if the authors have done any comparisons to the parametric methods? e.g. Gao Y\*, Archer E\*, Paninski L, Cunningham JP (2016) Linear dynamical neural population models through nonlinear embeddings. NIPS 2016.* (Gao et al. 2016) is a model for spike counts with parametric noise models (Poisson and generalized count distributions). Our work is in the realm of spike trains, where we have essentially binary time series after time discretization (see Section 3.2). We discussed related works in both realms; see the introductory paragraphs starting at line 40 to line 61, i.e. the last paragraphs before the “contribution” paragraph. In this case of working with point processes/spike trains, spike count models are not appropriate, as the only “spike count distribution” in a time bin is a Bernoulli distribution. Note that the main contribution of this paper is the flexible statistical modeling of the spiking process of a neuron, or in machine learning terms, flexible modeling of the noise distribution, per neuron. Works like (Gao et al. 
2016) are latent variable models that model neural correlations despite having the same separable structure across neurons in the noise model. In terms of parametric models, we conducted an extensive comparison against current parametric models such as GLMs/SRMs, referred to as conditional processes (Ref 80, “Capturing the dynamical repertoire of single neurons with generalized linear models”, Weber and Pillow 2017; Ref. 17, “Non-parametric generalized linear model”, Dowling et al. 2020), renewal processes and other variants described in Section 2 (details in appendix B.2-3 and C). Results are shown for various models in figures 3 and 4, with the abbreviations described in the last sentence of the first paragraph in Section 4.2 (which include the simple GLMs as “RC cond. P” or radial cosine conditional Poisson processes). In the camera-ready version, we will include (Gao et al. 2016) in the introductory paragraphs from line 40 to line 61 and more explicitly mention the lack of neural correlations in Section 3.1 (similar to the discussion in section C.1, “Vectorization over neurons”). *3. The proposed method doesn't incorporate any latent variables. It might be worth adding the latents to discover useful representations from the data and fit the data variability better. I wonder if the model would fit data worse if miss one covariate in the inputs? (e.g. only includes location and direction in covariate not theta phase in the hippocampus data.) I feel that adding latents might help with this as well.* Indeed, we agree that adding latent variables, and thus making the model suitable for analyzing neural noise correlations, is a major avenue for further research (see Section 5.1). Our current contribution is a model that is able to capture single neuron statistics with a sufficiently flexible noise model, which shows up as explaining all the data variance as measured by the usual distribution comparison methods for point processes (see the KS plots in figures 2C, 3B and 4B). 
However, such measures do not take into account neural correlations, and extensions to the KS framework have been proposed, e.g. in Ref. 28 (“Applying the multivariate time-rescaling theorem to neural population models”, Haslinger and Pipa 2011). Importantly, our current implementation is suitable for latent variable augmentations as we fit neurons in parallel as a coherent neural population (see appendix C.1 and the JAX implementation provided). Adding more covariates will generally improve the test ELL/predictive performance from the following perspective: our model can capture arbitrary spike train noise models (in theory), and the signal that is modeled is the modulation of the flexible noise distribution by those covariates. Adding more covariates allows the model to discover more fluctuating signals, since we have more freedom if the covariates are uncorrelated. When the signal can account for more of the data variance, the inferred noise model can be sharper, i.e. more precise, and hence this will mathematically lead to higher log likelihoods. The extreme case here would be a noise model leading to conditional ISI densities that are delta distributions at the average ISI given some covariates (such models are approximated by Gamma renewal processes with shape parameters well above 1). In the Reviewer’s example, including theta allows one to capture oscillations in the firing output of neurons with the theta rhythm (one can see the wiggly curve in figure 4C; note the contrast to the smoother rates in figure 3C, where no fast-oscillating covariates such as theta phase are included). 
If we fit the same model leaving out the theta phase covariate, we obtain similar goodness-of-fit measures (as the flexible noise model/ISI densities adapt to account for the reduction in signal) but lower test ELLs:

| model | test ELL (nats / s) |
|:--------:|:--------:|
| $x$ position + head direction | $13.92 \pm 3.90$ |
| $x$ position + head direction + theta phase | $14.39 \pm 4.11$ |

--- Rebuttal Comment 1.1: Comment: Thank the authors for their detailed response! They have addressed my concerns, so I increase the rating to 6.
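As a concrete illustration of the population likelihood summation discussed in point 1 of the rebuttal above, here is a minimal NumPy sketch (an illustration only, not the authors' JAX implementation; the array shapes, the use of a point estimate of $f$ in place of the expectation over $q(f_n \mid u_n)$, and the omission of the KL term are simplifying assumptions):

```python
import numpy as np

def population_nll(f, y, dt):
    """Discretized point-process negative log likelihood, summed over neurons.

    f  : (N, T) array of latent log intensities f_{nt} (a point estimate
         standing in for samples from q(f_n | u_n); the KL term is omitted)
    y  : (N, T) binary spike indicator array y_{nt}
    dt : time bin width Delta t
    """
    spike_term = -(y * f).sum(axis=1)        # -sum_t y_nt f_nt, per neuron
    rate_term = dt * np.exp(f).sum(axis=1)   # dt * sum_t exp(f_nt), per neuron
    return (spike_term + rate_term).sum()    # sum the objective over neurons n
```

Fitting $N$ neurons jointly simply sums $N$ independent per-neuron objectives, which is what makes latent-variable extensions with shared structure across neurons a natural next step.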
Summary: The variability of neural data is widely observed in many neuroscience experiments. Using statistical models to capture the variability structure plays an essential role in understanding neural computations. Generally, the variability of neural data is a result of non-stationary activities and dependencies on behavioral covariates. To tackle these challenges, the authors propose a scalable Bayesian approach generalizing modulated renewal processes (NPNR) to analyze neural data variability. They develop a nonparametric generalization of modulated renewal processes beyond renewal order, which lets their method flexibly model irreducible “intrinsic” neural stochasticity and input-dependent variability. Furthermore, the authors apply stochastic variational inference and can fit the model to long neural recordings given cubic time complexity in the number of inducing points. The performance of NPNR is evaluated on both synthetic and real neural datasets. Strengths: * The proposed method can model two types of neural variability: (1) capturing spiking statistics from non-stationary data; (2) capturing modulation by behavioral covariates. * To achieve the desired non-stationarity, the proposed method uses time warping on $\tau$, which avoids the use of non-stationary kernels and maintains the ability to draw samples by pathwise conditioning. * The proposed inference method provides an elegant approach to determine the spike-history dependence in ISI statistics. * The authors' exposition of their motivation, contribution, and conclusions from the experiments is comprehensive and clear. * The proposed method would provide an important set of contributions to the field of neural coding. Weaknesses: The proposed method is clearly written and well supported by experiments. I have nothing further to add here. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The proposed method captures ISI statistics using a spatio-temporal GP prior over the CIF. 
Could the Neural Temporal Point Process (NTPP) perform similarly to your method in capturing ISI statistics? The CIF in NTPP is usually modeled by neural networks, and could this be more powerful to represent ISI distributions and capture ISI statistics? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and helpful feedback. *1. The proposed method captures ISI statistics using a spatio-temporal GP prior over CIF. Could the Neural Temporal Point Process (NTPP) perform similarly to your method in capturing ISI statistics? The CIF in NTPP is usually modeled by neural networks, and could this be more powerful to represent ISI distributions and capture ISI statistics?* Indeed, we are also familiar with the NTPP works and have considered them as candidates. However, the Gaussian process approach offers a few unique properties that are difficult to achieve with neural networks: - Automatic relevance determination of the lagged ISI dimensions by learning temporal kernel lengthscales (see figures 2B, 3D, 4D and 6A) - Principled regularization using the evidence lower bound and smoothness constraints on the CIF dependence on covariates $\mathbf{x}_t$ and past ISIs $\mathbf{\Delta}_t$ (e.g. figure 12, comparing against different kernels that impose different levels of differentiability of the CIF, i.e. choosing different order Matern kernels). In general, any method that involves flexible modeling of the CIF directly is closely related to the main idea/contribution of this work. Our work introduces nonparametric autoregressive dependence on past ISIs, which adds $K$ more input dimensions compared to baselines that handle past dependencies using parametric spike history filters or renewal assumptions. For this reason, kernel methods such as Gaussian processes provide an elegant framework for handling this without overfitting as much as neural networks. A detailed comparison study of neural networks replacing the Gaussian process for modeling the CIF is out of scope for the current work and is more related to the general argument of Gaussian processes versus deep neural networks, which is an entire subfield on its own. 
Small examples of neural networks overfitting in cases when Gaussian processes do not are for instance given in Ref. 43 (“A universal probabilistic spike count model reveals ongoing modulation of neural variability”, Liu and Lengyel 2021). Additionally, there are many influential works investigating the relationship between these two classes (“Approximate Inference Turns Deep Networks into Gaussian Processes”, Khan et al. 2019; “Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes”, Yang 2019). An interesting case mentioned in our work is Ref. 54 (“Fully Neural Network based Model for General Temporal Point Processes”, Omi et al. 2019), where the authors chose to model the cumulative hazard function (integral of the CIF) for temporal point processes. This requires a monotonic function approximator, which is most easily done with constrained neural networks and difficult to achieve with Gaussian process priors. In the camera-ready paper, we will expand on the comparison to neural network approaches similar to the discussion above, and add relevant references on these topics. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my question. It's good work discussing neural data variability.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a Bayesian nonparametric approach using modulated renewal processes to model neural spike trains, capable of modeling the covariability. The method includes nonparametric priors on the conditional interspike interval distribution and automatic relevance determination for lagging interspike intervals based on renewal order. The method is evaluated on one synthetic dataset and two real datasets on animal navigation. It demonstrates better performance than the current SOTA baselines in its capability of capturing interspike interval statistics. Strengths: 1. Motivation: the paper is well-motivated; modeling spike train statistics is an important question, and interspike interval statistics are an important property of neural dynamics, potentially leading to identification of cell types, functionality, etc. Designing a model that captures this property well is important. 2. Method: it uses a Bayesian nonparametric approach that could fit complex data structures and patterns well, and it could infer the spike-history dependence using a data-driven approach. 3. Results: the method demonstrates better accuracy than the current SOTA methods in multiple tasks and datasets. Weaknesses: 1. Method: the model requires hyperparameter tuning of critical components, including $\tau_w$ and $K$, which might be hard to optimize given their variability across neurons and datasets. 2. Evaluation: the scalability of the method is a major concern, as the datasets evaluated in this paper only have 9 neurons, ~30 units in each dataset. It's important to show how well the model performs on larger neural datasets. 3. Complexity and computational cost: add evaluations based on the cost and speed of the proposed model and other baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Give a detailed introduction about the parameters that would be optimized under eqn 18, and list other hyper-parameters for reproducibility. 2. 
Evaluate the method on larger scale neural datasets. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. There is no potential negative societal impact of their work. 2. The limitations about scalability of the proposed approach should be carefully addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and helpful feedback. **Q&A** *1. Give a detailed introduction about the parameters that would be optimized under eqn 18. and list other hyper-parameters for reproducibility.* The NPNR model parameters consist of: - Gaussian process hyperparameters (kernel hyperparameters $\theta_{GP}$, variational posterior mean $\mu$ and covariance $\Sigma$, inducing point locations $Z$, Gaussian process mean function parameters $\tau_m$ and $b_m$). All these parameters are learned with gradient descent - Time warping timescale $\tau_w$. This parameter is fixed in our experiments to the (empirical) mean ISI, but we also compared the cases where we jointly learn this parameter with the rest (see appendix figure 12) - Maximum lagging ISI order $K$ (integer). This hyperparameter is chosen in advance to a number we expect to be sufficiently large (in general, it is unlikely to have many neurons with significant history dependence beyond the past 2 ISIs based on studies of single neuron modeling). Indeed, figure 3D and 4D show this applies to most neurons in both datasets For the camera-ready version, we will provide a short description at the end of the inference Section summarizing all the parameters and hyperparameters, stating which are learned and which are not. *2. Evaluate the method on larger scale neural datasets.* The datasets in this work each contain around 30 selected neurons that are fitted simultaneously and have 1 to 2 million time steps in the training set (1 ms resolution), giving around 30-60 million data points in total (see appendix C.3). Most recent probabilistic models applied to neural data take in 100s of neurons but often with binned spike counts, reducing the temporal length to O(10^5) to O(10^6) in typical studies (e.g. “Scalable Bayesian GPFA with automatic relevance determination and discrete noise models”, Jensen et al. 
2021; “Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains”, Dowling et al. 2023). Hence overall, in terms of the number of data points this qualifies as a scalable method by recent standards. We did not fully utilize the 11 GB of GPU RAM on which we ran the experiments. With the same temporal batch size and other settings, we generally observe OOM issues around ~80 neurons, which can be increased even further by reducing the temporal batch size. We encourage the reviewer to try out the JAX implementation that is provided, which also provides fully reproducible runs of the experiments presented in this study. For the camera-ready version, we will add these numbers (50-60 million data points and total time points from appendix C.3) to Section 4.2 to explicitly demonstrate the scale of our datasets, which can be misleading when just looking at the number of neurons (~30 neurons). **Weaknesses and limitations** *1. Method: the model requires hyperparameter tuning of critical components includes tau_w, K, which might be hard to optimize, given its variability across neurons and datasets.* The time warping process is chosen based on inductive biases from the neuroscience literature (see Section 3.1). We fix $\tau_w$ instead of learning it as this leads to automatic relevance determination in the temporal kernel timescales (see figure 6A). In appendix figure 12, we show that the loss of performance due to fixing this parameter is minimal. Choosing $K$ is in a sense similar to choosing the number of layers in a deep neural network, or the dimensionality of latent spaces in latent variable models like VAEs. A rigorous approach would involve a grid search, but in the spirit of Bayesian models one can use a single high-capacity model and perform automatic relevance determination (“Scalable Bayesian GPFA with automatic relevance determination and discrete noise models”, Jensen et al. 
2021; “Bayesian Gaussian Process Latent Variable Model”, Titsias and Lawrence 2010) to get rid of redundant degrees of freedom, as shown in figures 2B, 3D and 4D. In the camera-ready paper, we will elaborate on the issue of hyperparameter selection and tuning as discussed above in Section 3.2, and include relevant references to similar procedures in the literature. We will explicitly mention and discuss all our hyperparameter tuning results (appendix figure 12 and automatic relevance determination figures 2B, 3D and 4D) in the light of hyperparameter tuning. *2. Evaluation: the scalability of the method is a major concerns, as the datasets evaluated in this paper only has 9 neurons, ~30 units in each datasets. It's important to show how well the model performs in larger neural datasets.* See response to question 2 in Q&A. *3. Complexity and computational cost: add evaluations based on the cost and speed of the proposed model and other baselines.* Stochastic sparse variational Gaussian process inference (Hensman et al. 2013) has computational complexity $O(N T M^2 + N M^3)$, where $M$ is the number of inducing points, $N$ the number of output dimensions (neurons) and $T$ is the temporal batch size. As we can shorten the batch size $T$, the predominant factor is $O(M^3)$, as we mention in Section 3.2 in the main text. Our method has the same computational scaling as stochastic sparse variational Gaussian processes (“Gaussian processes for big data”, Hensman et al. 2013), which can be applied to very large datasets due to mini-batching. Baseline models from the literature are adapted to use the same underlying SVGP machinery to model the input-output mapping or spike-history dependence, and hence inherit similar complexity scaling up to multiplicative factors. A detailed discussion of model implementation and inference is presented in appendix B.2-5, where we present a large amount of detail not suitable for the main text. 
For the camera-ready paper, we will extend the discussion on computational complexity in Section 3.2 and appendix B.2-5, with the full $O(N T M^2 + N M^3)$ expression. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing more details on their methods, especially related to scalability and hyperparameter tuning. I improved my score accordingly.
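To make the scaling argument in the rebuttal above concrete, here is a small back-of-the-envelope sketch (a hypothetical helper, not from the paper or its codebase) of the $O(NTM^2 + NM^3)$ operation count, showing how shortening the temporal batch size $T$ makes the $O(M^3)$ inducing-point term dominant:

```python
def svgp_cost(n_neurons, batch_len, n_inducing):
    """Leading-order operation counts for stochastic sparse variational GP
    inference (Hensman et al. 2013): O(N T M^2 + N M^3), constants dropped."""
    data_term = n_neurons * batch_len * n_inducing**2   # N * T * M^2
    inducing_term = n_neurons * n_inducing**3           # N * M^3
    return data_term, inducing_term

# With a short mini-batch (T << M), the M^3 term dominates the total cost:
data, inducing = svgp_cost(n_neurons=30, batch_len=16, n_inducing=64)
```

Since mini-batching lets $T$ be chosen freely, the per-step cost is effectively governed by the number of inducing points, which is the cubic-complexity claim made in Section 3.2 of the paper under discussion.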
Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm
Accept (poster)
Summary: This paper establishes the algorithmic-stability analysis of the decentralized stochastic gradient descent-ascent (D-SGDA) algorithm which is commonly used for min-max problems in distributed settings. This in turn leads to generalization bounds of D-SGDA for various assumptions on convexity-concavity of the objective. In particular, the paper focuses on two notions of generalization for min-max problems, namely weak and strong primal-dual generalization error. The bounds are expressed in terms of sample-size, training horizon and certain information about the mixing matrix such as the spectral gap. The flavor of results is closely related to the recent analyses in the literature that obtain algorithmic-stability bounds for decentralized gradient descent, however this work extends these to min-max scenarios. The bounds (as expressed in the theorems) take complicated forms in terms of the problem parameters, however the authors provide simplified interpretations of them in the remarks. In particular, the derived bounds strongly depend on the learning rate and spectral gap of the mixing matrix in order to be meaningful. In particular, for strongly convex-strongly concave objectives, the generalization bounds are of order $O(\frac{\eta}{1-\lambda} + \frac{1}{n})$ and for the convex-concave case the bounds are $O(\frac{\eta T}{n} + \frac{\eta^2 T}{1-\lambda})$ for learning-rate $\eta$, sample size $n$, training time $T$ and the spectral gap $1-\lambda.$ Strengths: The studied problem is new as it is the first work to obtain generalization bounds for D-SGDA. Although the generalization error of decentralized gradient descent and the gradient-descent algorithms were studied separately in existing works, this paper gives a unifying analysis. The method is largely based on the work of Hardt et al and relevant works on the gradient descent-ascent literature. 
The derived bounds are in rather simple forms and might be informative on the role of sample-size or some other parameters such as the strong-convexity parameter on the generalization. The paper is generally clear and easy to read although some aspects of the presentation can be improved as I will discuss in the next section. Several experiments on different topologies presented in the paper seem to back the general claim on the role of spectral-gap and connectivity on generalization performance. I will discuss this more in the next section. Weaknesses: Weaknesses and some questions: 1 - One of my main concerns is that the generalization bounds derived in this paper are essentially vacuous for a large set of commonly used learning-rates or training time choices. Even for strongly convex-strongly concave setup, the generalization bounds are meaningful only when considering time-decaying learning rates. The authors do not compare their results to the state-of-the-art results for the generalization of ordinary gradient descent-ascent with fixed learning rate, therefore it is impossible to discuss the optimality of the obtained rates. This is especially important due to the role of step-size on the optimization and convergence of D-SGDA. The derived bounds also appear to be very sub-optimal for convex-concave objectives: the bounds are increasing in $T$ (because of the $\eta^2 T/(1-\lambda)$ term), therefore common early stopping choices such as $T\approx \sqrt{n}$ lead to a vacuous generalization bound unless the learning rate is decaying. 2 - The results in Table 1 are not very informative: there is no comparison with Decentralized SGD (D-SGD) and SGDA methods and the choice of step-size for each row is also not specified. It will be helpful if these comparisons are added. 
3 - In line 47-49 the authors assert that *"Note that even for the decentralized minimization problem, the generalization and stability of decentralized SGD are adversely affected by an extra non-vanishing term [31], and the stability usually suffers from a constant term λ^2"*. While the current results on the stability of D-SGD indeed suffer from this additional term, these are just upper bounds, therefore the conclusions on the role of topology/mixing matrix are not accurate until the optimality of these bounds is proved. In the conclusions section the authors state *"Our theoretical results show that a decentralized structure does not destroy the stability and generalization of D-SGDA"*. This is wrong as explained above and even in contradiction with the generalization rates obtained throughout the paper, as the generalization error rate is severely affected by decentralization. 4- A precise discussion on the optimization guarantees of DSGDA with the given learning rate selections seems lacking in the paper. This will be beneficial for understanding final test error rates. 5 - Some parts of the paper can be rephrased: - The descriptions in the "Related Works" section are wrong in some places: The work of Bousquet and Elisseeff established the connection between generalization and stability. Hardt et al. just showed that the output of SGD satisfies the stability notion. I think the lines 87-89 need to be edited. - The sentence in line 80 "Huang [15]...." is also not clear and I think it needs to be rephrased. - The authors refer to (Sun et al. 2021) and (Zhu et al. 2022) for previous works on generalization of decentralized methods. Some missing and relevant works that study generalization of DGD are the following: *"Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent"* by Richards et al. 2020. *"On Generalization of Decentralized Learning with Separable Data"* by Taheri et al. 2023. 
*"High-dimensional inference over networks: Linear convergence and statistical guarantees."* by Sun et al 2022. - In the abstract (lines 11-13), the purpose of the sentence is not clear, since the bounds essentially are in terms of sample-size, learning rate and iterations. - The sentence in line 35 is not clear. Can the authors please rephrase this sentence? - Line 150: the sentence starting with ".And therefore" needs to be rephrased. Also see Line 235 and the sentence ".So...". Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the last section for questions and suggestions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations of the work are clear and I see no potential negative societal impact related to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Z6QM Thank you for your careful reading and constructive suggestions! We have answered your questions in the following comment. ***Q1: Vacuous results for generalization bounds.*** 1. Our generalization results are not vacuous, even for fixed learning rates. In the SC-SC case, the stability and corresponding generalization gap bound is $\mathcal{O}(\frac{\eta}{1-\lambda}+\frac{1}{n})$ for fixed learning rates as we have discussed in Remark 5, where we can choose the learning rates as $\eta\sim\frac{1}{T}$ to get a non-vacuous bound. And in the C-C case, the generalization bound is $\mathcal{O}(\frac{\eta T}{n}+\frac{\eta^2 T}{1-\lambda})$ for fixed learning rates as shown in Remark 7, where we can also choose $\eta\sim\frac{1}{T}$ to obtain a non-vacuous bound. 2. Compared with vanilla SGDA, their generalization bound under SC-SC is $\widetilde{\mathcal{O}}(\frac{1}{\sqrt{T}}+\frac{1}{N})$ (see Theorem 2.(e) in [18]). Our results can almost match it except for the influence of $C _{\lambda}$. Under the C-C condition with fixed learning rates, their generalization bound is $\mathcal{O}(\frac{\sqrt{T}}{n}+\frac{1}{\sqrt{n}})$ (see Theorem 2.(b) in [18]); our result can approach it when $\eta\sim\frac{1}{T^{\frac{3}{4}}}$ and $n\sim T^{\frac{3}{4}}$ except for the influence of $\frac{1}{1-\lambda}$. As for the influence of $\lambda$, i.e., the topology effect, we have made a thorough investigation in Remark 5.(3), with proof in Lemma 3 and Remark 10 in Appendix C for details. ***Q2: The results in Table 1 are not very informative: there is no comparison between Decentralized SGD (D-SGD) and SGDA methods and the choice of step size for each row is also not specified. It will be helpful if these comparisons are added.*** Thank you for your kind suggestions! 
In fact, our work focuses on the minimax problem, and we have revised the Table to add comparisons with vanilla SGDA in generalization gap and population risk with respect to specified learning rates; please refer to the table presented in the PDF in the common response. However, compared with D-SGD, which solves the minimization problem, our results can in a certain sense be tighter because they do not contain a constant term involving $\lambda$. The $\lambda$-related terms in all settings, SC-SC ($\frac{C_{\lambda}}{T^{\frac{L}{L+\mu}}}$), C-C ($\frac{1}{(1-\lambda)T}$), NC-NC ($(C_{\lambda}T^L)^{\frac{1}{c+L}}(\frac{m}{n})^{1-\frac{1}{c+L}}$), can all vanish as the number of iterations $T$ and the sample size $n$ increase. ***Q3: Questions about the statement.*** Thank you for your constructive comments. We should revise our statement. Actually, we drew this conclusion because our $\lambda$-dependent terms can vanish, as we have illustrated in the answer to ***Q2***. In [31], by contrast, the term involving $\lambda$ cannot be eliminated no matter how the parameters are chosen. ***Q4: More discussions on the optimization guarantees.*** Thank you for your kind suggestions. We have put the discussions about the optimization error in the appendix due to the space limit. 
Specifically, the optimization error for SC-SC is $\frac{C_{x}^2+C_{y}^2}{2\eta^{min}T}+\eta^{max}G^2+\frac{4(C_{x}+C_{y})GL\eta^{\max}}{1-\lambda}+\frac{2(C_{x}+C_{y})G}{\sqrt{T}}$ for fixed learning rates, and $\frac{2G(C_{x}+C_{y})}{\sqrt{T}}+T_{x}+T_{y}+T_{max}$ for decaying learning rates $\eta_{x,t}=\frac{1}{\mu_{x}(t+1)^{c_{x}}}$ and $\eta_{y,t}=\frac{1}{\mu_{y}(t+1)^{c_{y}}}$, where $T _ {\alpha}=\left\\{\begin{array}{cc} \frac{G^2}{2\mu_{\alpha}}\frac{1+\ln{T}}{T} & c_{\alpha}=1\\\\ \frac{G^2}{2\mu_{\alpha}(1-c_{\alpha})T^{c_{\alpha}}} & 0< c_{\alpha}<1 \end{array} \right.$ and $T _ {max}=\left\\{\begin{array}{cc} \frac{4GLC_{\lambda}(C_{x}+C_{y})\ln{T}}{\mu T} & k_{min}=1\\\\ \frac{4GLC_{\lambda}(C_{x}+C_{y})}{\mu(1-k_{min})T^{k_{min}}} & 0< k_{min}<1 \end{array} \right.$. And the optimization error for C-C is $\frac{C_{x}^2+C_{y}^2}{2\eta^{min}T}+\eta^{max}G^2+\frac{4(C_{x}+C_{y})GL\eta^{\max}}{1-\lambda}+\frac{2(C_{x}+C_{y})G}{\sqrt{T}}$ for fixed learning rates. The details are presented in Theorem 7 in Appendix E.2 and Corollary 1 in Appendix F.2, respectively. Furthermore, considering that the population risk is the sum of the generalization gap and the optimization error, we have to re-choose the learning rates for lower population risk. For the SC-SC case, we should choose $\eta\sim\frac{1}{\sqrt{T}}$ to also guarantee the optimization convergence, as we discussed in Remark 6. For the C-C case, we have to re-choose $\eta\sim\frac{1}{T^{\frac{2}{3}}}$ to also guarantee the optimization convergence, as we discussed in Remark 8. ***Suggestions on the rephrased parts.*** Thank you for your careful reading and constructive suggestions. We will revise our paper and polish our writing. - Line 87-89: Bousquet and Elisseeff [4] established the connection between generalization and stability. Elisseeff [8] extended the concept to randomized algorithms. Hardt [13] followed this theory and proved the stability of SGD. 
- Line 80: Huang [15] and Luo and Ye [24] propose accelerated algorithms based on variance reduction. - We will discuss these related works in proper positions in our paper. - Line 11-13: Yes indeed, what we mean is that our results also analyze the impact of different topologies on the generalization bound beyond the trivial factors. - Line 35: However, it is important to consider the generalization performance of the stochastic algorithm, which is quantitatively evaluated by the value difference between Eq.(1) and Eq.(2). - Line 150: Hence, the difference between the empirical and population gaps can reflect the generalization performance. - Line 235: The major difference with vanilla SGDA lies in $C_{\lambda}$ and the number of nodes $m$. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My score remains the same.
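The SC-SC discussion in the rebuttal above, where choosing $\eta \sim 1/T$ makes the $\mathcal{O}(\frac{\eta}{1-\lambda} + \frac{1}{n})$ generalization gap bound non-vacuous, can be sanity-checked numerically. A minimal sketch, assuming a placeholder constant factor `const` that is not from the paper:

```python
def scsc_gen_bound(eta, lam, n, const=1.0):
    """O(eta / (1 - lambda) + 1 / n) generalization gap bound for the
    SC-SC case of D-SGDA, up to an unspecified constant factor."""
    return const * eta / (1.0 - lam) + const / n

# With eta ~ 1/T, the lambda-dependent term shrinks as T grows,
# leaving the 1/n sample-size term as the floor of the bound:
bounds = [scsc_gen_bound(eta=1.0 / T, lam=0.9, n=1000) for T in (10, 100, 1000)]
```

This mirrors the rebuttal's point: the topology-dependent term $\eta/(1-\lambda)$ vanishes under a decaying learning-rate choice, unlike the non-vanishing $\lambda$-dependent term the reviewer cites from [31].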
Summary: This paper provides an analysis of the stability and generalization of decentralized GDA algorithms in the SC-SC, C-C, and NC-NC settings. The main results indicate that GDA in a decentralized setting has error bounds similar to the centralized setting. Numerical experiments on AUC and GAN problems are reported. Strengths: 1. The analysis looks solid to me and the results build a bridge between GDA and DGDA for the generalization bound. 2. The numerical experiments are strong and convincing. Weaknesses: 1. The figures are too small; one needs to zoom in to read the content. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What is the main technical difficulty for DGDA compared with the centralized case? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! We have answered your questions in the following comment. ***Q1: Size of the figures.*** Thank you for your valuable suggestions. We will enlarge Figure 1 and Figure 2 for better readability. ***Q2: Technical difficulty of D-SGDA.*** Thank you for your insightful question. We answer this question in the common response; please refer to the part ***Technical challenges*** there.
Summary: The paper investigates the primal-dual generalization gap bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm. The authors start with a general decentralized minimax stochastic optimization problem and its empirical counterpart computed on the training dataset consisting of local samples. They model the network as an undirected graph with a symmetric doubly stochastic matrix W. They assume Lipschitz continuity and Lipschitz smoothness of the local functions for the algorithm. They define weak and strong population risks, weak and strong generalization gaps (between the training dataset and the original unknown distribution), and algorithmic stability. The main purpose of the paper is the connection between an e-argument stable algorithm and its generalization gaps, both weak and strong. The authors show that the weak and strong generalization gaps (the latter when strong convexity-strong concavity is satisfied) are bounded by the argument stability error of an e-argument stable algorithm. The authors further show the boundedness of the argument and weak stability errors for the strongly convex-strongly concave, convex-concave, and nonconvex-nonconcave settings. They analyze the generalization gaps for various parameters, including network topology, learning rates, and the number of nodes, to show that the results are bounded as proposed. Strengths: The paper aims to investigate the connection between argument and weak stability and strong and weak generalization gaps for a training dataset for the D-SGDA algorithm. The authors claim to propose this connection not only for the D-SGDA algorithm, but for decentralized minimax algorithms in general. Their generalization and stability bounds for decentralized SGDA extend the stability and generalization results for vanilla SGDA, so it can be said that the study is original. The overall quality of the paper is good; the problem is defined clearly and the approach is presented clearly.
The results are significant at an acceptable level, since the study shows that the decentralized structure does not harm the stability and generalization of SGDA algorithms, and of decentralized minimax algorithms in general, as the bounds derived by the authors demonstrate. Weaknesses: In general, the paper is well-structured, but I believe a few things could be more elaborate. For example: 1. The mixing matrix and its purpose, how to select λ, and what W corresponds to for a network topology could be explained better. 2. The proof of Theorem 1 in the Appendix could show some intermediate steps where $S^{(l)}$ is included. 3. Typos: Theorem 2, part b. wording, Ref. 6, sizes of parentheses in equations Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As the authors state, the analysis requires Lipschitz continuity and smoothness, and the stability and generalization analysis of D-SGDA for heterogeneous data distributions is missing. How would the analysis of stability and generalization vary for a directed network? In this case, W wouldn't necessarily be symmetric; how would that affect the analysis? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer ipno Thank you for your careful reading and constructive suggestions! We have answered your questions in the following comment. ***Q1: More explanations on the mixing matrix $W$ and the crucial constant $\lambda$.*** Thank you for your kind suggestions. We will provide more details about the background and preliminaries of the network and its associated mixing matrix in Section 3.2. Each communication network $G=(V,E)$ is naturally associated with an adjacency matrix, which is usually called a mixing matrix in the context of decentralized learning. Each element in the mixing matrix indicates whether two nodes communicate, and a non-zero value reflects the probability of one node choosing to communicate with that neighbor. As for the crucial constant $\lambda$, it is defined for a given mixing matrix as its second largest eigenvalue. So there exists a correspondence between a specific network topology, its mixing matrix, and the constant $\lambda$. We further illustrate this relationship with an example in the PDF attached to the common response. ***Q2: More intermediate steps in the proof of Theorem 1.*** Actually $S^{(l)}$ is only a symbol for a neighboring dataset in the decentralized setting (see Definition 4). In the proof of Theorem 1, we specify the detailed construction of $S^{(l)}$, as illustrated in Lines 527-530: $l=(l_1,l_2,...,l_m)$, with each $l_k$ indicating that the $k$-th local dataset differs in its $l_k$-th local sample. As for the formula beginning at Line 536, the first equation holds because there are $n^m$ permutations of $S^{(l)}$ and because of the symmetric distribution between $\xi_{i,l_i}$ and $\xi'_{i,l_i}$. The subsequent steps do not involve the derivation of $S^{(l)}$. ***Typos: Theorem 2, part b. wording, Ref. 6, sizes of parentheses in equations.*** Thank you for your careful reading. We will correct these typos and polish the writing in our revision.
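To make the correspondence between a topology, its mixing matrix, and $\lambda$ concrete, here is a small NumPy sketch; the ring topology and the uniform $1/3$ weights are our own illustrative choices, not values from the paper.

```python
import numpy as np

def ring_mixing_matrix(m):
    """Symmetric, doubly stochastic mixing matrix for a ring of m nodes:
    each node averages equally with itself and its two neighbours."""
    W = np.zeros((m, m))
    for i in range(m):
        for j in (i - 1, i, i + 1):
            W[i, j % m] = 1.0 / 3.0
    return W

def spectral_gap_constant(W):
    """The crucial constant: second largest eigenvalue magnitude of W."""
    return np.sort(np.abs(np.linalg.eigvals(W)))[-2]

W = ring_mixing_matrix(8)
lam = spectral_gap_constant(W)   # strictly between 0 and 1 for a ring

# Fully connected graph: uniform averaging, lambda = 0, so the
# 1/(1 - lambda) factors in the bounds reduce to 1.
W_fc = np.ones((8, 8)) / 8
```

For the fully connected case `spectral_gap_constant(W_fc)` is numerically zero, matching the remark about Table 2 that the $\lambda$-dependent terms drop out of the bound.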
***Limitations: directed network.*** Thank you for your insightful comment! Our theoretical result does rely on the property of the undirected graph, where the mixing matrix needs to be symmetric. We have also made experimental attempts on the *exponential* network, and the experiments show similar results, which offers a clue for further theoretical research on directed networks. Research on directed graphs is an active avenue nowadays, with notable works on the *push-sum* distributed algorithm, and we will attempt to address this issue in future work.
Summary: The paper studied the generalization analysis of the decentralized stochastic gradient descent ascent algorithm through the lens of argument stability for solving minimax problems. Strong/weak primal-dual population risks are established for the convex-concave, strongly convex-strongly concave, and nonconvex-nonconcave cases. Strengths: 1. The paper provided a comprehensive stability and generalization analysis of D-SGDA for solving the minimax problem. Their results imply that the decentralized structure does not destroy the stability and generalization of SGDA. 2. The impact of different topologies of the decentralized structure on the generalization bound is observed, which is very interesting. 3. Experiments validate the theoretical results. Weaknesses: 1. The results in Theorem 1 might be improved. Specifically, [1] established the connection between on-average argument stability and the weak primal-dual generalization gap for Markov chain SGDA (Here, SGDA is a special case of Markov chain SGDA) only with the Lipschitz assumption. Also, [2] provided this connection for Lipschitz losses. However, Theorem 1 requires the loss to satisfy both Lipschitz continuous and smooth conditions. For strong primal-dual generalization gap, [2] established the connection under the assumption $f$ is $\mu_y$ strongly-concave, while Theorem 1 assumes that $f$ is $\mu_x$ strongly-convex $\mu_y$ strongly-concave. In addition, $\mu$ is not defined in the theorem. 2. The results for both weak primal-dual and strong primal-dual population risks depend on $T$, $\eta$ and $n$. The authors might give some discussion on the choices of $\eta$ and $T$, and establish the explicit population rates as provided in [1] and [2], and compare with their results. [1] Wang, P., Lei, Y., Ying, Y., and Zhou, D. X. (2022). Stability and generalization for Markov chain stochastic gradient methods. Advances in Neural Information Processing Systems, 35, 37735-37748.
[2] Lei, Y., Yang, Z., Yang, T., and Ying, Y. (2021). Stability and generalization of stochastic gradient methods for minimax problems. In International Conference on Machine Learning, pages 6175–6186. 3. The experiment setup is unclear; I would suggest the authors add some details in the appendix. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Can the stability and generalization results be generalized to the directed graph? 2. As mentioned in Weakness 2, could the authors provide some discussions or corollaries for discussing the choices of stepsize $\eta$ and iteration number $T$ to establish the explicit population rates? 3. A small question: in Table 2, for the fully connected graph, why $\frac{1}{1-\lambda}=0$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer FcPE Thank you for your positive feedback, constructive suggestions, and insightful questions! We have answered your questions in the comment below. ***Q1: The results in Theorem 1 might be improved. Specifically, [1] established the connection between on-average argument stability and the weak primal-dual generalization gap for Markov chain SGDA (Here, SGDA is a special case of Markov chain SGDA) only with the Lipschitz assumption. Also, [2] provided this connection for Lipschitz losses. However, Theorem 1 requires the loss to satisfy both Lipschitz continuous and smooth conditions. For strong primal-dual generalization gap, [2] established the connection under the assumption $f$ is $\mu_y$ strongly-concave, while Theorem 1 assumes that $f$ is $\mu_x$ strongly-convex $\mu_y$ strongly-concave. In addition, $\mu$ is not defined in the theorem.*** Thank you for your kind suggestions; we will add a detailed discussion of these two papers in our revision. 1. When establishing the connection between argument stability and the weak primal-dual generalization gap, we were careless when checking the required conditions, which strictly need only Lipschitz continuity. We will revise Theorem 1 accordingly. 2. When establishing the connection between argument stability and the strong primal-dual generalization gap, we checked our proof again and conclude that it does require $\mu_x$-SC-$\mu_y$-SC, i.e., strong convexity cannot be omitted. The deduction for the term $\inf_{x'\in\mathcal{X}}F(x',y)$ relies on strong convexity to derive $F(x^*(y),y)$ and the Lipschitz property of $x^*(y)$ with respect to $y$. This deduction is symmetric to that for the term $\sup_{y'\in\mathcal{Y}}F(x,y')$, which requires strong concavity as well. We also looked into [2] and found that part (c) of its Theorem 1 requires strong convexity, too.
Besides, we apologize for omitting the definition $\mu\triangleq \min \lbrace\mu_x,\mu_y\rbrace$, which we will add in our revision. ***Q2: More discussion on the results of population risks and the choice of $\eta$, $T$, and $n$.*** Thank you for your kind suggestions. For the strong primal-dual population risk with fixed learning rates, when we choose $\eta\sim\frac{1}{\sqrt{T}}$, our population rate is $\mathcal{O}(\frac{1}{n}+\frac{1}{(1-\lambda)\sqrt{T}})$. For decaying learning rates $\eta^{min} _ t=\frac{1}{\mu(t+1)}$ and $\eta^{max} _ t=\frac{1}{\mu(t+1)^c}$, the population rate is $\widetilde{\mathcal{O}}(\frac{T^{1-c}}{n}+\frac{C_ {\lambda}}{T^{\min\lbrace\frac{1}{2},\frac{L}{L+\mu}\rbrace}})$. Compared with vanilla SGDA [2], which reaches $\mathcal{O}(\frac{\ln N}{\mu N})$ for decaying learning rates $\eta_t=\frac{1}{\mu(t+t_0)}$ and $T\sim N$, our results match when $n\sim T^{\min\lbrace\frac{1}{2},\frac{L}{L+\mu}\rbrace}$. For the weak primal-dual population risk, when $\eta\sim\frac{1}{T^{\frac{2}{3}}}$, the population rate is $\mathcal{O}(\frac{T^{\frac{1}{3}}}{n}+\frac{1}{(1-\lambda)T^{\frac{1}{3}}})$. Compared with vanilla SGDA [2], whose result is $\mathcal{O}(\frac{1}{\sqrt{n}})$ when $T\sim n$ and $\eta\sim\frac{1}{\sqrt{T}}$, our results match when $n^{\frac{1}{2}}\sim T^{\frac{1}{3}}$. There is no need to compare with the result in [1] of $\mathcal{O}(\frac{\log n}{\sqrt{n}\log(\frac{1}{\lambda(P)})})$ when $T\sim n$ and $\eta\sim\frac{1}{\sqrt{T\log T}}$, since their results are under Markovian assumptions. We have included these discussions in Remark 6 and Remark 8, respectively, and further proof details can be found in Appendix E.3 and F.3, respectively. We will revise our paper to include more detailed comparisons with [1] and [2].
***Q3: More details about the experiment setup.*** We simplified the description of the experiment setup in the paper due to the space limit, but we have provided the details in Appendix I. We will make the description clearer later. ***Q4: Can our results be generalized to the directed graph?*** We have to admit that our current methods require the mixing matrix to be symmetric, i.e., the network is limited to undirected graphs. However, we have tried some experiments on the exponential topology, which is a directed graph. The results do not contradict those for undirected graphs, which suggests potential for future theoretical work on directed graphs. We are also considering the decentralized distributed method of push-sum, which may help us analyze directed networks. ***Q5: Why $\frac{1}{1-\lambda}=0$ for the fully connected graph?*** We are sorry for the vague expressions in Table 2. The first row, $\lambda$, gives the value of the crucial constant, while the last two rows, $C_{\lambda}$ and $\frac{1}{1-\lambda}$, give the $\lambda$-dependent quantities appearing in our bound results. For a fully connected graph, $\lambda=0$, so $\lambda$ does not appear in the bound result. Therefore we mark it as $0$, meaning the corresponding term disappears. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the comments and I would like to keep my score.
Rebuttal 1: Rebuttal: # Common Response ***Technical challenges*** The major difference from vanilla SGDA lies in the communication among different local agents. Vanilla SGDA can be seen as a special case with only one agent, which trains on its own dataset and updates its parameters by itself. In the decentralized setting, after each local agent updates using a local gradient sampled from its local dataset, the agents have to communicate with each other to complete the iteration. So the key challenge is how we deal with the communication process. By means of the mixing matrix $W$ and its associated crucial constant $\lambda$, we can quantitatively characterize the communication between agents and read off from the bound results the impact of different communication schemes (or the associated network topology) on the generalization performance. In the decentralized setting, we need to measure the impact of consistency on generalization, because gradients are computed locally. Compared with the centralized approach, the decentralized setting lacks the aggregation step involving all workers, which can lead to a certain degree of model inconsistency. ***Technical novelty*** We are the first work to analyze the stability and generalization of D-SGDA for decentralized minimax problems, which is nontrivial considering the complex structure of minimax problems combined with the communication between agents. Firstly, we propose a decentralized framework for analyzing stability and generalization based on the idea of permutation, which is not discussed in existing decentralized work. Under the new definitions of the decentralized neighboring dataset and the corresponding decentralized algorithmic stability (see Definitions 4 and 5, where we allow each local dataset to hold at most one different sample), we can see more clearly how the number of agents and local samples influences the generalization performance.
Furthermore, our proposed definitions are consistent: when $m=1$, the decentralized definitions naturally degenerate into the vanilla ones. When $m>1$, they characterize the isolated influence of the number of agents on the generalization performance. Secondly, we have derived a tighter generalization bound compared with existing decentralized results. There are only two works concentrating on the generalization of decentralized algorithms: [43] requires a rather strong assumption that the weight difference obeys a Gaussian distribution, and the results in [31] contain an extra constant term involving $\lambda$, which means the upper bound suffers from a nonvanishing influence of the communication network. In our work, the term involving $\lambda$ in the upper bound, in all cases of SC-SC ($\frac{C_{\lambda}}{T^{\frac{L}{L+\mu}}}$), C-C ($\frac{1}{(1-\lambda)T}$), and NC-NC ($(C_{\lambda}T^L)^{\frac{1}{c+L}}(\frac{m}{n})^{1-\frac{1}{c+L}}$), vanishes as the number of iterations $T$ or the sample size $n$ increases. To summarize our ***contribution***, we are the first work to investigate the stability and generalization of D-SGDA for minimax problems. To address the above challenges, we develop the permutation method and eventually derive a tight bound that reveals the effect of topology on the generalization performance. Pdf: /pdf/6381542766f812bfccd03598424b8421bf9cfe6b.pdf
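The update-then-communicate pattern described under the technical challenges can be sketched in a few lines. This is our own toy illustration, not the paper's code: a ring topology and made-up quadratic local losses $f_i(x,y)=\frac{1}{2}(x-a_i)^2-\frac{1}{2}(y-b_i)^2$, whose saddle point is at the averages $(\bar a, \bar b)$.

```python
import numpy as np

m, eta, T = 5, 0.01, 4000
a = np.arange(m, dtype=float)           # local targets for the min variable
b = np.arange(m, dtype=float) + 1.0     # local targets for the max variable

# Ring mixing matrix: symmetric and doubly stochastic, as the analysis assumes.
W = np.zeros((m, m))
for i in range(m):
    for j in (i - 1, i, i + 1):
        W[i, j % m] = 1.0 / 3.0

X = np.zeros(m)                         # X[i] is agent i's copy of x
Y = np.zeros(m)                         # Y[i] is agent i's copy of y
for _ in range(T):
    gx = X - a                          # local gradient of f_i in x
    gy = -(Y - b)                       # local gradient of f_i in y
    X = W @ (X - eta * gx)              # descent step, then mixing with W
    Y = W @ (Y + eta * gy)              # ascent step, then mixing with W
```

Each agent only ever touches its own gradient; the `W @` products are the communication round, and the iterates reach approximate consensus near the saddle point of the averaged objective.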
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper analyzes the generalization of the Decentralized Stochastic Gradient Descent Ascent (D-SGDA) algorithm for minimax problems using the algorithm stability framework. Specifically, the paper analyzes the weak and strong primal-dual generalization gap under both convex-concave (C-C) and nonconvex-nonconcave (NC-NC) settings. The key result is that the decentralized structure with an undirected graph does not harm the generalization upper bound of D-SGDA. Empirical experiments are performed to further validate the theoretical findings. Strengths: 1. The paper is well written and easy to follow. 2. The paper studies an important problem: the generalization of the D-SGDA algorithm for minimax problems in the decentralized setting. Weaknesses: 1. Technical novelty is limited. The algorithm stability of the decentralized SGD algorithm has been analyzed in prior work [31], and the algorithm stability of the SGDA algorithm for minimax problems has also been analyzed in prior work [18]. The techniques used in this paper mainly combine those of the two papers [18, 31]. 2. The tightness of the generalization upper bound obtained in this paper is not discussed. 3. Some results are not clearly described. See **Questions**. ### Minor: 1. Unify the notations for projection operator. e.g. Algorithm 1 uses $P_{\cal X}$, while line 564-567 in the appendix uses $Proj_{\cal X}$. 2. Line 109, "differential" -> "differentiable". Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 1, why does only the NC-NC case have a bound depending on $m$, the number of agents, while other results do not depend on $m$? 2. Please also include results on the optimization error and population risk in Table 1 or in the appendix. 3. One theoretical result shows that the decentralized structure does not harm stability and generalization of D-SGDA.
This is different from the conclusion of [31], which shows that decentralized SGD is adversely affected by an extra non-vanishing term compared to SGD. What leads to the difference in the conclusions? Is it because of specific assumptions on the structure of the undirected graph, or an improved / tighter analysis? This needs to be discussed carefully since it is the main contribution of this work. 4. In Theorem 6, what are the step size choices to achieve the corresponding bound? This needs to be stated as in other theorems. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations in Section 6, conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 5Agt Thank you for your careful reading and constructive suggestions! We have answered your questions in the comment below. ***Q1: Limited technical novelty.*** We answer this question in the common response. Please refer to the part ***Technical novelty*** there. ***Q2: The tightness of the upper bound.*** Thank you for your constructive suggestions! We did not discuss the tightness of the generalization bound in our paper because we are the first work to analyze the stability and generalization of decentralized minimax problems, so we cannot make direct comparisons. We admit that we do not analyze the lower bound of our generalization gap, which would be valuable for the generalization analysis; we are thinking about this problem and will pursue it as future work, inspired by [ref]. However, in a certain sense we have a tighter bound than the existing stability and generalization analysis of D-SGD: our upper bound does not contain a *constant* term involving $\lambda$. Specifically, as illustrated in the ***Technical novelty*** part of the common response, the results in [31] contain an extra constant term involving $\lambda$, which means their upper bound suffers from a nonvanishing influence of the communication network. In our work, the term involving $\lambda$ in the upper bound, in all cases of SC-SC ($\frac{C_{\lambda}}{T^{\frac{L}{L+\mu}}}$), C-C ($\frac{1}{(1-\lambda)T}$), and NC-NC ($(C_{\lambda}T^L)^{\frac{1}{c+L}}(\frac{m}{n})^{1-\frac{1}{c+L}}$), vanishes as the number of iterations $T$ or the sample size $n$ increases. We also present comparisons with vanilla SGDA in the table in the PDF attached to the common response. [ref] *Stability of SGD: Tightness Analysis and Improved Bounds* ***Typos*** > Minors: > 1. Unify the notations for projection operator. e.g. Algorithm 1 uses $P_{\cal X}$, while line 564-567 in the appendix uses $Proj_{\cal X}$. > 2.
Line 109, "differential" -> "differentiable". Thank you for your careful reading; we will revise the paper as suggested. ***Q3: Why does only the NC-NC case have a bound depending on $m$?*** Thank you for your insightful question! In the NC-NC case, we use a different deduction method due to the lack of convexity. When the "different" sample is encountered for each local loss function, the cross terms of inner products between different local loss functions cannot be eliminated by the inequality properties of convexity-concavity. Thus these cross terms can only be bounded by the Lipschitz property, and the number of cross terms remains as a factor in the upper bound, which is related to the number of agents $m$. ***Q4: More results on the optimization error and population risk in Table 1 or in the appendix.*** Thanks for your kind suggestions. We present our main results as a summary in Table 1, including the generalization gap and population risk under the SC-SC and C-C cases. We omit the optimization error there since we are mainly concerned with the generalization performance of D-SGDA, and the population risk is the sum of the generalization gap and the optimization error. For more details, we present the optimization error (empirical risk) in Theorem 7 in Appendix E.2 and Corollary 1 in Appendix F.2 for SC-SC and C-C, respectively. As for the NC-NC case, we only obtain results on the generalization gap, without the optimization error or the population risk. This is a great challenge for minimax stochastic optimization under NC-NC conditions, where a Nash-equilibrium saddle point may not exist and finding one is NP-hard. Besides, we present some comparisons with vanilla SGDA in the table in the PDF attached to the common response, which you can refer to.
***Q5: More discussion on the result that the decentralized structure does not harm the stability and generalization of D-SGDA.*** Yes indeed, we draw the conclusion that the decentralized structure does not harm the stability and generalization of D-SGDA from the observation that our algorithmic stability bound does not contain a non-vanishing term involving $\lambda$ (see Theorem 2, Remark 5 and Theorem 4, Remark 7). In contrast, Theorem 1 of [31] indicates that the second term in their algorithmic stability bound cannot vanish with decaying learning rates, and their Theorem 2 shows that the second term does not disappear as the iteration count $T$ increases. Actually, this difference does not come from extra assumptions. We obtain a tighter bound thanks to several preliminary steps in Lines 563-567 in the Appendix, while [31] directly bounds the norm of the differences, which results in a looser bound. Finally, thank you for your kind suggestion to stress this contribution; we will revise our paper to highlight it. ***Q6: More statement on the choice of the step size in Theorem 6.*** Thank you for your kind suggestions. We missed the discussion of the choice of step size in Theorem 6. Here we provide the following discussion, which will be added to the remark below Theorem 6 in the revision. For fixed learning rates, the weak stability can be bounded as $\epsilon^w_{sta}(A)\leq2\sqrt{2}G^2(\frac{{\eta^{max}} T}{n}+\frac{2L{\eta^{max}}^2T}{1-\lambda})$, which reaches $\mathcal{O}(\frac{1}{n}+\frac{1}{(1-\lambda)T})$ when $\eta\sim\frac{1}{T}$. For decaying learning rates $\eta_t^{min}=\frac{1}{t+1}$ and $\eta_t^{max}=\frac{1}{(t+1)^c}, c\leq1$, the stability and generalization gap is bounded by $\small \mathcal{O}((C_{\lambda})^{\frac{1}{c+L}}T^{\frac{L}{c+L}}(\frac{m}{n})^{1-\frac{1}{c+L}})$. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the detailed rebuttal.
It addresses my concerns and I have updated my score. --- Reply to Comment 1.1.1: Title: Thanks for your support Comment: Dear Reviewer 5Agt We sincerely thank you for raising your score. Your support means a lot to us. We really appreciate it! If you possess further insights or advice, your input is warmly welcomed. Best, Authors
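The fixed-learning-rate weak-stability bound stated for Theorem 6 in Q6 above can be probed numerically; the constants $G$, $L$, $n$, and $\lambda$ below are hypothetical placeholders chosen only to show the scaling.

```python
import numpy as np

# Hypothetical constants, chosen only to illustrate how the bound scales.
G, L, n, lam = 1.0, 1.0, 1000, 0.5

def weak_stability_bound(T, eta):
    """eps^w <= 2*sqrt(2)*G^2*(eta*T/n + 2*L*eta^2*T/(1-lam))."""
    return 2.0 * np.sqrt(2.0) * G**2 * (eta * T / n
                                        + 2.0 * L * eta**2 * T / (1.0 - lam))

# With eta ~ 1/T the first term stays at O(1/n) while the second decays
# like 1/((1-lam)*T), so the bound approaches the 2*sqrt(2)*G^2/n floor.
bounds = [weak_stability_bound(T, 1.0 / T) for T in (10**3, 10**4, 10**5)]
```

This matches the claimed $\mathcal{O}(\frac{1}{n}+\frac{1}{(1-\lambda)T})$ rate: growing $T$ shrinks the network-dependent term while the $1/n$ term persists.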
Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization
Accept (poster)
Summary: In this work, the authors proved the global convergence of the gradient descent algorithm with a small initialization for the rank-1 matrix completion problem. Strengths: The results in this work are novel and should be interesting to audiences in the optimization and machine learning fields. This work shows that the incoherence regularizer may be unnecessary for the matrix completion problem. Weaknesses: A more detailed comparison with Chen, J., Liu, D., & Li, X. (2020). Nonconvex Rectangular Matrix Completion via Gradient Descent Without ℓ₂,∞ Regularization. IEEE Transactions on Information Theory, 66(9), 5806-5841. should be included. In addition, the importance of analyzing the rank-1 case should be discussed in more detail. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Line 55: I would suggest the authors to be more specific on what parameters the convergence time has a logarithmic dependence on. (2) Line 78: it would be better to be consistent in using "GD" as the abbreviation of "gradient descent". (3) Line 86: it would be better to explicitly mention that u^* is a vector. (4) Line 102: I think the squared norm of x^0 is expected to be \beta_0^2. But I am not sure if the norm of x^0 is expected to be \beta_0. (5) Line 128: I think it may be better to include the global convergence result in a formal theorem (instead of a remark). (6) Line 143: besides the relation between T^* and \beta_0, I wonder if there is a reason why \beta_0 is lower bounded. It seems that the initialization size is not necessarily lower bounded in [17, 23]. It may be better to explain the reason why a lower bound is necessary. (7) Line 154: the remark on the estimation error could also be formalized as a theorem. (8) In Line 88, the authors mentioned that the incoherence can be at most poly(log n). But this condition is not included in Theorem 3.1. I wonder if this condition is necessary for the results.
(9) Line 170: please be more specific on the meaning of "incoherent up to a logarithmic factor". (10) In my opinion, the discussion of proof ideas in Sections 4-5 is a little too long. It would be ideal if the length could be reduced by ~2 pages. With that said, I am okay with the current structure. (11) Line 319: "an optimal number of samples" is confusing. Please consider using a different word. (12) Another interesting open problem would be whether the results can be extended to the over-parameterized case, where a small initialization is also required. (13) Besides the aforementioned problem, it would also be interesting to consider the asymmetric matrix completion problem; see the follow-up work: Soltanolkotabi, M., Stöger, D., & Xie, C. (2023). Implicit balancing and regularization: Generalization and convergence guarantees for overparameterized asymmetric matrix sensing. arXiv preprint arXiv:2303.14244. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See my comments in the previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**A more detailed comparison with Chen, J., Liu, D., & Li, X. (2020) ... should be included.** The paper that the reviewer mentioned is an extension of the local convergence result [a] to the asymmetric case, and it reduces the required sample complexity by some log factors. If we apply the techniques developed in that paper, we may also be able to reduce the sample complexity, but we are not certain about this at the moment because, for the rank-1 case, many parts of the analysis in [a] are already simplified. --- >**In addition, the importance of analyzing the rank-1 case should be discussed in more detail.** The question of whether the combination of GD with random initialization can effectively solve a specific problem holds significant importance in the field of optimization. Phase retrieval and matrix sensing are both low-rank recovery problems, like matrix completion. For those problems, the global convergence of GD from a randomly initialized point has been proved. However, despite its similarity to matrix sensing, no analogous result had been established for matrix completion. The problem remained open even after the local convergence result [a] was published in 2017. Although the rank-one case does not have much impact in itself, it provides a good starting point for anyone who attempts the full problem. Our novel analysis of Phase I provides a good understanding of the dynamics of GD starting from a small random initializer. We expect that our key lemmas such as Lemmas 5.2 and 5.5 will continue to hold for the general rank-r case in a similar way. For Phase II, we face difficulty in analyzing the singular values of the trajectory, which is discussed in Section 8 with some simulation results. --- >**(1) Line 55: I would suggest the authors to be more specific on what parameters the convergence time has a logarithmic dependence on. 
(2) Line 78: it would be better to be consistent in using "GD" as the abbreviation of "gradient descent". (3) Line 86: it would be better to explicitly mention that $u^{*}$ is a vector.** We will change the manuscript accordingly. We appreciate your careful review. --- >**(4) Line 102: I think the squared norm of $x^0$ is expected to be $\beta\_0^2$. But I am not sure if the norm of $x^0$ is expected to be $\beta\_0$.** Yes, the squared norm is expected to be $\beta_0^2$; the expected norm is not precisely $\beta_0$, but it is very close to $\beta_0$. We will correct this. --- >**(5) Line 128: I think it may be better to include the global convergence result in a formal theorem (instead of a remark).** We tried to emphasize our own contribution with the main theorem. We will state the global convergence result in a formal theorem in the supplementary due to lack of space. --- >**(6) Line 143: besides the relation between $T^{*}$ and $\beta_0$, I wonder if there is a reason why $\beta\_0$ is lower bounded. It seems that the initialization size is not necessarily lower bounded in [17, 23]. It may be better to explain the reason why a lower bound is necessary.** An exceptionally small initialization size such as $e^{-n}$ could be detrimental to the convergence of GD because it extends the convergence time from $\Theta(\log n)$ to $\Theta(n)$. Because we are deriving probabilistic bounds for all iterations, establishing an upper bound on the iteration count is imperative. Similar bounds can also be found in previous work [a]. While imposing a limit on the number of iterations might seem counterintuitive when proving local convergence, Theorem 2 of reference [a] does precisely that by constraining the maximum iteration count to $O(n^5)$. Nonetheless, we can further reduce the lower bound $n^{-10}$ to $n^{-c}$ for any $c > 10$ by tuning some constant factors during the proof. 
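The $\Theta(\log n)$ vs. $\Theta(n)$ convergence-time claim above can be illustrated with a one-dimensional caricature of the fully observed dynamics. The scalar recursion below is our own simplification (it tracks only the component of the iterate along $u^\star$), not the paper's exact analysis; the step size, eigenvalue, and escape threshold are illustrative assumptions:

```python
# Scalar caricature of Phase I: the component c_t of x^(t) along u* grows
# roughly as c_{t+1} = c_t * (1 + eta * (lam - c_t**2)), so escaping the
# small-norm region takes ~ log(1/beta0) / log(1 + eta*lam) iterations.

def steps_to_escape(beta0, eta=0.1, lam=1.0, thresh=0.5):
    """Iterations for the scalar surrogate to grow from beta0 to thresh."""
    c, t = beta0, 0
    while c < thresh:
        c = c * (1.0 + eta * (lam - c * c))
        t += 1
    return t

t_small = steps_to_escape(1e-2)    # moderate initialization
t_tiny  = steps_to_escape(1e-30)   # exponentially small initialization
# t scales linearly in log(1/beta0), so t_tiny is an order of magnitude larger
```

In this toy model an initialization of size $e^{-n}$ would make the escape time grow linearly in $n$, matching the rebuttal's point that the lower bound on $\beta_0$ only rules out pathologically small initializations.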
--- >**(7) Line 154: the remark on the estimation error could also be formalized as a theorem.** The potential adjustments to the proof when the initialization size is fixed are briefly discussed in Section F of the supplementary. We will include a formal theorem on this result in the supplementary in the final version. --- >**(8) in Line 88, the authors mentioned that the incoherence is at most poly($\log n$). But this condition is not included in Theorem 3.1. I wonder if this condition is necessary for the results.** Yes, it is also assumed in Theorem 3.1. We will include the condition in Theorem 3.1. --- >**(9) Line 170: please be more specific on the meaning of "incoherent up to a logarithmic factor".** We will define the incoherence of a vector in Section 2. --- >**(10) In my opinion, the discussion of proof ideas in Sections 4-5 is a little too long. It would be ideal if the length can be reduced by ~2 pages. With that said, I am okay with the current structure.** We agree with this, but it would be hard to rewrite the sections in this submission. --- >**(11) Line 319: "an optimal number of samples" is confusing. Please consider using a different word.** We will state the optimal number of samples explicitly as $n \mathrm{poly}(\log n)$. --- >**(12) Another interesting open problem will be whether the results can be extended to the over-parameterized case, where a small initialization is also required. (13) Besides the aforementioned problem, it would also be interesting to consider the asymmetric matrix completion problem; see the follow-up work: Soltanolkotabi, M., Stöger, D., & Xie, C. (2023). ...** We are aware of both directions, and we will mention them in Section 8 if space allows. We have attempted both directions, but it was not trivial to extend the current result to either case. --- [a] Cong Ma et al. 
"Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response! I will increase my rating, but I think the structure of the paper should be improved. The global convergence results are more important and should be included in the main manuscript, potentially freeing up space by moving the proof ideas in Sections 4-5 to the appendix.
Summary: The paper studies global convergence of GD (with a fixed step size) for the rank-1 matrix completion problem (with symmetric i.i.d. Bernoulli(p) observations and Gaussian noise on the entries) started from "small" random initialization. The authors prove that such vanilla GD, without any of the explicit regularization commonly used in the literature, converges to the ground truth matrix in polynomial time with near-optimal sample complexity. The result is interesting and the program is well-motivated. The analysis is mostly based on dynamical systems rather than optimization techniques, and a similar approach could be applied to related problems. The proof idea seems to be novel (although some of the ideas seem to be motivated by recent literature on GD with small initialization & step size for matrix factorization and matrix sensing problems). First, the authors show that GD for the fully observed case with small initialization converges to the ground truth matrix (Corollary 1 in the appendix). This is possible due to an explicit formula for the GD iterates (a linear combination of the initialization and the ground truth with dynamic coefficients). The analysis is not very difficult, but also not so trivial. Having established the desired convergence result for the fully observed GD dynamics, the main idea is to couple that with the partially observed GD dynamics by starting at the same initialization and using an interpolated dynamics, eq. (13). The main novelty is in showing that these two trajectories remain close, and much closer than the norm of the iterates from the fully observed case. This allows one to show that the partially observed iterates converge to a local region near the ground truth after a polynomial number of iterations, and then one can use the existing local convergence result [14]. 
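The coupling described in this summary is easy to reproduce numerically. Below is a minimal NumPy sketch (not the paper's code; dimensions, step size, noise level, and initialization scale are illustrative assumptions) that runs vanilla GD on the partially observed loss alongside its fully observed twin from the same small random initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eta, sigma = 300, 0.5, 0.1, 1e-3   # illustrative values, not the paper's

# rank-1 symmetric ground truth M* = x* x*^T with lambda* = 1
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
x_star = u
M_star = np.outer(x_star, x_star)

# symmetric Bernoulli(p) mask and symmetrized Gaussian noise on the entries
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask, 1).T
noise = rng.standard_normal((n, n))
M_obs = M_star + sigma * (noise + noise.T) / 2

# small random initialization of size ~ beta0, shared by both trajectories
beta0 = n ** -0.3
x = beta0 * rng.standard_normal(n) / np.sqrt(n)   # partially observed iterate
x_tilde = x.copy()                                # fully observed iterate

for _ in range(500):
    x = x - (eta / p) * (mask * (np.outer(x, x) - M_obs)) @ x
    x_tilde = x_tilde - eta * (np.outer(x_tilde, x_tilde) - M_star) @ x_tilde

# distance to the ground truth, up to the global sign ambiguity
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

In runs like this, both iterates typically converge to the same signed copy of $x^\star$ (the sign is fixed by the shared initialization), consistent with the coupling argument sketched above.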
Strengths: The main text is exceptionally well-written (with minor comments/suggestions below) in that it gives the structure of the proof of the main result in a very clear manner (which was a pleasure to read for the most part). The argument in the main text is well-complemented with various simulation results. The general rank-r case was also discussed at the end, and the main difficulty of having the r singular values possibly growing at different rates is well pointed out. It seems that the appendix gives a rigorous justification of most of the claims in the main text (although some references and pointers are missing). I have only read the appendix through Section C and cannot assess whether the remainder (the coupling analysis, which is the most substantial part) is correct, but the sketch in the main text is convincing. Weaknesses: Minor comments:
- L98: "To recover the matrix," --> "To recover the matrix $M^{\star}$,"
- L110: "controlling the $\ell_{\infty}$-norm in [14]" --> the $\ell_{\infty}$-norm of what?
- The notation $\lesssim$ is not defined.
- L166: ".. linear combination of $x^{(0)}$ and $u^{\star}$, .." --> ".. linear combination of $x^{(0)}$ and $u^{\star}$ (see eq. (C.1) in the appendix)"
- L214: "becomes" --> "is"
- L233 and L234: "between" --> "of"
- L259: ".. parallel to $u^{\star}$." --> ".. parallel to $u^{\star}$ (see Lem. A.5 in the appendix)."
- L286: ".. in both $\ell_{2}$ and $\ell_{\infty}$ norms, .." --> ".. in both $\ell_{2}$ and $\ell_{\infty}$ norms (see Cor. 1 in the appendix), .."
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors:
- eq. (8): Is it intentional not to cancel out the $n^{1/4}$ factor in the upper bound?
- L216: Shouldn't $\tilde{x}^{(1)}$ here be $x^{(0)}$?
- Lemma 5.5: $u^{(l)}$ is not defined. Is it the same as $u^{\star}$ but 0 at the $\ell$th coordinate?
Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**Minor Comments: ...** We will change the manuscript accordingly. Thanks for your careful review. --- >**eq. (8): Is it intentional to not to cancel out $n^{1/4}$ factor in the upper bound?** Yes, it was intentional not to cancel out $n^{1/4}$. We tried to emphasize that when optimal samples are provided, i.e. $np = poly(\log n)$, the initialization size should be less than $n^{-1/4}$. --- >**L216: Shouldn't $\tilde{\mathbf{x}}^{(1)}$ here be $\mathbf{x}^{(0)}$?** We are comparing the norm of $\mathbf{x}^{(1)} - \tilde{\mathbf{x}}^{(1)}$ to that of $\tilde{\mathbf{x}}^{(1)}$, so $\tilde{\mathbf{x}}^{(1)}$ is the right one. --- >**Lemma 5.5: $\mathbf{u}^{(l)}$ is not defined. Same as $\mathbf{u}^\star$ but 0 at the $l$th coordinate?** $\mathbf{u}^{(l)}$ is defined at the end of Lemma 5.5; it is the first eigenvector of $\mathbf{M}^{(l)}$.
Summary: The authors study a random initialization scheme for gradient descent applied to the problem of completing rank-one matrices, assuming a known ground truth. Their results concern global convergence properties of the algorithm with respect to a particular random model of partially observed matrices. Specifically, starting from an $n\times n$ symmetric positive semidefinite, fully-revealed ground truth matrix $\bf{M}^\star = \lambda^\star \bf{u}^\star (\bf{u}^\star)^T= \bf{x}^\star (\bf{x}^\star)^T$, entries above or on the diagonal are perturbed by noise drawn i.i.d. from a zero-mean Gaussian, and revealed independently with probability $p.$ The main result, Theorem 3.1, requires an incoherence assumption: namely, for $\lVert \mathbf{u}^\star \rVert_\infty = \sqrt{\frac{\mu}{n}},$ the quantity $\mu$ is polynomially bounded in $\log n.$ With this assumption, the main result claims that, if the Gaussian noise is sufficiently small and an initial iterate $\mathbf{x}_0$ is provided whose magnitude is neither too large nor too small according to quantities depending on $n, \mu , \lambda^\star ,$ and $p,$ gradient descent will converge to the given ground truth with probability tending to $1$ as $n\to \infty .$ In addition to giving proofs, the authors explain the qualitative behavior of convergence as determined by several phases, which appear in both the analysis and a simulation study. Strengths: It seems that the combination of the leave-one-out sequences developed in [14] and the random initialization used in areas like phase retrieval is a novel aspect of this work, although as mentioned in Sec. 6 there is difficulty in extending this combination to the case of arbitrary rank. Theorems and definitions are, for the most part, stated unambiguously, and I was unable to find any errors. The topic is clearly a good fit for NeurIPS. 
Weaknesses: One issue I have with this paper is that rank-1 symmetric matrix completion (as well as rank-1 general matrix completion) is simply a much easier problem than general matrix completion. Indeed, I would like to point out the reference "Uniqueness of Low-Rank Matrix Completion by Rigidity Theory", by Singer and Cucuringu, which is not cited in this work. Section 5 of that paper shows that the existence of an exact completion is guaranteed (with "probability one") based purely on combinatorial conditions on the graph of revealed entries. By contrast, the authors make high-probability statements about matrices whose entries are drawn and hidden according to specific distributional assumptions as the sample size goes to infinity, which seem to be much stronger. In general, the bibliography could be more extensive. Additionally, equation (8) shows that there are _lower_ bounds in addition to upper bounds on the magnitude of the initialization that is needed. Thus, merely a "small" initialization is not enough to ensure convergence, contrary to the title. Finally, there is a nontrivial incoherence assumption. This may be standard in compressed sensing, but it is still a nontrivial assumption. In summary, the assumptions are overly restrictive, and, unlike the arbitrary-rank matrix completion problem, rank-1 matrix completion is a fairly simple problem. So, I think the paper's claimed results are not very interesting overall. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: line 3: Do you mean "simplest yet _most_ efficient?" line 121: It would be helpful to clarify that asymptotic notation like $o(1)$ refers to the regime $n\to \infty ,$ as opposed to other parameters tending towards infinity or zero. Theorem 3.1: It seems that a corollary of this theorem would be that $\pm \bf{x}^\star$ are the only ground-truth solutions to the matrix completion problem. Is that something that is already assumed in the proof of your result? 
I don't see anywhere an explanation of why the cases with multiple ground truth solutions would be asymptotically rare. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**One issue I have with this paper is that rank-1 symmetric matrix completion (as well as rank-1 general matrix completion) is simply a much easier problem than general matrix completion.** The question of whether the combination of GD with random initialization can effectively solve a specific problem holds significant importance in the field of optimization. Phase retrieval and matrix sensing are both low-rank recovery problems, like matrix completion. For those problems, the global convergence of GD from a randomly initialized point has been proved. However, despite its similarity to matrix sensing, no analogous result had been established for matrix completion. The problem remained open even after the local convergence result [a] was published in 2017. Although the rank-one case does not have much impact in itself, it provides a good starting point for anyone who attempts the full problem. Our novel analysis of Phase I provides a good understanding of the dynamics of GD starting from a small random initializer. We expect that our key lemmas such as Lemmas 5.2 and 5.5 will continue to hold for the general rank-r case in a similar way. For Phase II, we face difficulty in analyzing the singular values of the trajectory, which is discussed in Section 8 with some simulation results. --- >**Indeed, I would like to point out the reference "Uniqueness of Low-Rank Matrix Completion by Rigidity theory", by Singer and Cucuringu, which is not cited in this work. Section 5 of this paper shows that the existence of an exact completion is guaranteed (with "probability one") based purely on combinatorial conditions of the graph of revealed entries. 
By contrast, the authors make high-probability statements about matrices whose entries are drawn and hidden according to specific distributional assumptions as the sample size goes to infinity, which seem to be much stronger. In general, the bibliography could be more extensive.** For more than a decade, the probabilistic model investigated in this paper has stood as the standard model for researchers studying matrix completion, following the pioneering work of [b]. There have been hundreds of papers studying matrix completion under this probabilistic model; therefore, it is difficult to assert that the model is overly restrictive. However, we will try to include the paper that the reviewer suggested in the final manuscript, as it offers a distinct perspective on matrix completion. --- >**Additionally, equation (8) shows that there are lower bounds in addition to upper bounds on the magnitude of the initialization that is needed. Thus, merely a "small" initialization is not enough to ensure convergence, contrary to the title.** An exceptionally small initialization size such as $e^{-n}$ could be detrimental to the convergence of GD because it extends the convergence time from $\Theta(\log n)$ to $\Theta(n)$. Because we are deriving probabilistic bounds for all iterations, establishing an upper bound on the iteration count is imperative. Similar bounds can also be found in previous work [a]. While imposing a limit on the number of iterations might seem counterintuitive when proving local convergence, Theorem 2 of reference [a] does precisely that by constraining the maximum iteration count to $O(n^5)$. Nonetheless, we can further reduce the lower bound $n^{-10}$ to $n^{-c}$ for any $c > 10$ by tuning some constant factors during the proof. We do not think this is a restrictive assumption. --- >**Finally, there is a nontrivial coherence assumption. 
This may be standard in compressed sensing, but it is still a nontrivial assumption.** The incoherence condition is another standard assumption that researchers adopt when studying matrix completion. The seminal work [b] provides a good explanation of why such an assumption is required. If the information of the matrix is concentrated on only a few entries, we will not be able to recover the matrix unless we observe those entries. The incoherence assumption ensures that information is distributed nearly evenly across all entries of the matrix. --- >**line 3: Do you mean "simplest yet most efficient?"** We will change the sentence to “a simple yet efficient”. --- >**line 121: It would be helpful to clarify that asymptotic notation like $o(1)$ refers to the regime $n \to \infty$ as opposed to other parameters tending towards infinity or zero.** We will state explicitly that all asymptotic relations in this paper are with respect to $n$. --- >**Theorem 3.1: It seems that a corollary of this theorem would be that $\pm \mathbf{x}^\star$ are the only ground-truth solutions to the matrix completion problem. Is that something that is already assumed in the proof of your result? I don't see anywhere an explanation of why the cases with multiple ground truth solutions would be asymptotically rare.** We are not assuming that $\pm \mathbf{x}^\star$ are the only global minima. What Theorem 3.1 implies is that GD converges to $\pm \mathbf{x}^\star$ with high probability (even if some other global minima exist). --- [a] Cong Ma et al. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." [b] E. J. Candes and B. Recht, “Exact Matrix Completion via Convex Optimization” --- Rebuttal Comment 1.1: Comment: ''What Theorem 3.1 implies is that GD converges to $\pm \mathbf{x}^\ast $ with high probability (even if some other global minima exist).'' Sorry, but I'm still confused. 
Suppose $\pm \mathbf{y}^\ast $ is another global minimum---does your theorem not also imply convergence to it w.h.p.? --- Reply to Comment 1.1.1: Comment: We are not assuming anywhere in the proof the existence (or non-existence) of a point $\mathbf{y}^\star$ such that $f(\mathbf{y}^\star) = 0$ and $\mathbf{y}^\star \neq \pm \mathbf{x}^\star$. Theorem 3.1 asserts that GD will converge to $\pm \mathbf{x}^\star$ even if such a $\mathbf{y}^\star$ exists. Below we explain why this seemingly counterintuitive phenomenon actually happens. Let us define $\mathcal{S}$ as the set of incoherent points, which is explicitly written as $$\mathcal{S} = \left\\{ \mathbf{x} : \Vert \mathbf{x} \Vert_\infty \lesssim \sqrt{\frac{\mathrm{poly} (\log n)}{n}} \Vert \mathbf{x} \Vert_2 \right\\}.$$ Note that $\pm \mathbf{x}^\star \in \mathcal{S}$ by the incoherence assumption. It was proved in [a] that with high probability, there is no global minimum other than $\pm \mathbf{x}^\star$ in the set $\mathcal{S}$. However, we proved through Theorem 3.2 that the trajectory of GD remains in the incoherent region $\mathcal{S}$: the trajectory of the fully observed case, $\tilde{\mathbf{x}}^{(t)}$, can easily be shown to be incoherent for all $t$, and both the $\ell_2$ and $\ell_\infty$-norms of $\mathbf{x}^{(t)}$ are close to those of $\tilde{\mathbf{x}}^{(t)}$ by Theorem 3.2. Hence, even if such a $\mathbf{y}^\star$ exists, GD converges only to $\pm \mathbf{x}^\star$, because the trajectory is only allowed to move inside $\mathcal{S}$, while $\mathbf{y}^\star$ must reside outside of $\mathcal{S}$. In summary, any global minimum other than $\pm \mathbf{x}^\star$ is **NOT** incoherent as proved in [a], but the whole trajectory of GD is incoherent by Theorem 3.2, so it cannot converge to such a global minimum. 
A final note we want to make is that [a] eliminated all global minima other than $\pm \mathbf{x}^\star$ by a regularizer that penalizes non-incoherent points, whereas our result proves that GD converges to $\pm \mathbf{x}^\star$ without any regularizer, due to the *implicit regularization* of GD (the trajectory is kept incoherent automatically). --- [a] R. Ge, J. D. Lee, and T. Ma, “Matrix Completion has No Spurious Local Minimum”
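The implicit-regularization claim in this exchange is easy to probe empirically. The following NumPy sketch (parameters are our own illustrative choices, not the paper's experiments) tracks the incoherence ratio $\sqrt{n}\,\Vert \mathbf{x} \Vert_\infty / \Vert \mathbf{x} \Vert_2$ along a vanilla GD trajectory; if the trajectory stays in the incoherent set $\mathcal{S}$, this ratio should remain at the $\sqrt{\log n}$ scale throughout, far below the $\sqrt{n}$ scale of a spiky vector:

```python
import numpy as np

def incoherence(x):
    """sqrt(n) * ||x||_inf / ||x||_2: about sqrt(2 log n) for well-spread vectors."""
    return np.sqrt(x.size) * np.abs(x).max() / np.linalg.norm(x)

rng = np.random.default_rng(1)
n, p, eta = 300, 0.5, 0.1

# incoherent rank-1 ground truth (lambda* = 1) and symmetric Bernoulli(p) mask
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
M_star = np.outer(u, u)
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask, 1).T

# vanilla GD from a small random initialization, recording the incoherence ratio
x = (n ** -0.3) * rng.standard_normal(n) / np.sqrt(n)
ratios = []
for _ in range(300):
    x = x - (eta / p) * (mask * (np.outer(x, x) - M_star)) @ x
    ratios.append(incoherence(x))

peak = max(ratios)   # for n = 300, sqrt(n) is about 17; the peak stays well below
```

In runs like this the ratio typically never spikes, which is the qualitative behavior Theorem 3.2 formalizes: the iterates stay incoherent without any explicit penalty.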
Summary: This paper studies the global convergence of vanilla GD (with a fixed step-size) for the rank-1 matrix completion problem. It is shown that with small random initialization and after a logarithmic number of steps, GD enters a region around the global minimizers in which linear convergence happens. The paper provides sufficient conditions on the initialization scale to ensure this phenomenon happens and shows a tradeoff between the initialization scale and the number of available samples. Illustrative simulations are provided. Strengths: The paper studies a topic of interest for the NeurIPS community, which it presents in a clear manner and for which it provides results of both practical and theoretical importance. The derivations in the main paper are cleanly carried out, and the reasoning behind them is well-presented. Weaknesses: - The sample complexity seems to be quite large (for example, compared to that in [1]). In this light, can the authors elaborate further on this aspect (my question stands despite the further commentary in section 6)? [1] C. Ma, K. Wang, Y. Chi, and Y. Chen, “Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution,” Foundations of Computational Mathematics, vol. 20, no. 3, pp. 451–632, 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The nature of the presented proofs lies very much in the details. While the authors did a good job providing a higher-level view of the proof strategy, and the writing is clear, the text is still difficult to follow at times. A possible improvement would be to include some pictorial description of the phases -- similar to the one in section 4 -- for the analysis carried out in sections 5 and 6. This can significantly aid understanding, in my opinion. - Figure 2 a: the green line is labelled as $\\| x^{(t)} - x^{\star} \\|$, but in the figure commentary it is written $\\|x^{(t)} \pm x^{\star}\\|$. 
Which one is the correct one? - Figure 2 c: Perhaps instead the label "Distance" can be replaced with $\\|x^{(t)} \pm x^{\star}\\|$ for clarity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**The sample complexity seems to be quite large (for example, compared to that in [1]). In this light, can the authors elaborate further on this aspect (my question is despite the further commentary in section 6)?** The global convergence of gradient descent for phase retrieval was proved in [a], and it also required more log factors ($\log^{13} n$) compared to the local convergence result [b] ($\log n$). An improved analysis compared to ours could reduce the required sample complexity to that in [b], but we want to point out that this is difficult, as it was for phase retrieval. The global geometry of the loss function is not as benign as the local geometry around the global minimum. Additionally, it is customary to partition the gradient descent trajectory into distinct phases during the analysis of global convergence. However, establishing precise bounds at phase transitions is usually challenging, and we have to rely on extra sample complexity. --- >**The nature of the presented proofs lies very much in the detail. While the authors did a good job providing a higher-level view of the proof strategy, and the writing is clear, the text is still difficult to follow at times. A possible improvement would be to include some pictorial description of the phases -- similar to the one in section 4, for the analysis carried out in sections 5, 6. This can significantly aid understanding, in my opinion.** Due to the space limit, we could not insert a figure for Sections 5 and 6 in the main text. We will at least include the figure in the supplementary in the final version. --- >**Figure 2 a: the green line is labelled as $\Vert \mathbf{x}^{(t)} - \mathbf{x}^\star \Vert$, but in the figure commentary it is written $\Vert \mathbf{x}^{(t)} \pm \mathbf{x}^\star \Vert$. 
Which one is the correct one?** $\Vert \mathbf{x}^{(t)} \pm \mathbf{x}^\star \Vert$ is the correct one. We appreciate your careful review. --- >**Figure 2 c: Perhaps instead the label "Distance" can be replaced with $\Vert \mathbf{x}^{(t)} \pm \mathbf{x}^\star \Vert$ for clarity.** We will change the label accordingly. --- [a] Yuxin Chen et al. "Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval." [b] Cong Ma et al. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I thank the authors for their responses, which I read along with the other reviews and their respective responses. I maintain my score -- I think this paper makes a solid contribution, supported by well-carried-out proofs and a clear presentation, though it has the downside of being restricted to rank-one matrices.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper shows some convergence properties of gradient descent with small random initialization for rank-1 noisy matrix completion. Strengths: This paper uses a new approach (gradient descent with small random initialization) to solve the nonconvex formulation of rank-1 noisy matrix completion. It is shown that the GD trajectory will arrive at a local neighborhood (in both $\ell_2$ and $\ell_\infty$ norms) of the ground truth within a certain number of iterations. Weaknesses: 1. My major concern with this paper is a lack of theoretical novelty. After looking at the results and quickly going through the proof, I believe the proof idea of this paper is similar to that in [1], which focuses on a general low-rank matrix sensing problem (we know that the low-rank matrix sensing problem with a certain RIP condition has the same population-level loss function as the noisy matrix completion problem). For example, the analysis idea that the GD dynamics are close to a simplified linear evolution system in the initial phase, thanks to the small initialization, already appeared in [1]. Compared with [1], which works for the general low-rank setting, this paper only works for the rank-1 case, which is more restricted. On the other hand, matrix completion problems are known to be more difficult than matrix sensing problems in the sense that they require incoherence and $\ell_{2,\infty}$ error analysis to show that the empirical loss concentrates around its population counterpart. This difficulty was not encountered in [1]. This paper uses the leave-one-out analysis that has been widely used in the low-rank estimation literature, e.g. [2,3], to track the $\ell_{\infty}$ error of the GD trajectory. The analysis does not seem to be challenging if one is familiar with the above-mentioned literature. If I underestimate the technical novelty of the paper, I hope the authors could clarify and highlight their technical novelty. 2. I am not satisfied with the convergence guarantees. 
The estimation error provided by Theorem 3.1 only shows that the output of the proposed algorithm is consistent, namely that the error is $o(1)$. However, state-of-the-art results for matrix completion already show that GD with spectral initialization (which I believe is equivalent to small random initialization in some sense, since the initial phase of the latter algorithm is similar to some form of power method) achieves minimax-optimal estimation error [2]. Could the authors please explain why their analysis leads to a looser error bound? 3. In addition, the noise condition in Theorem 3.1 is $\sqrt{np}$ times more stringent than that in [2]. Could the authors please explain why their analysis requires a stronger noise condition? [1] Li, Yuanzhi, Tengyu Ma, and Hongyang Zhang. "Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations." Conference On Learning Theory. PMLR, 2018. [2] Ma, Cong, et al. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." Foundations of Computational Mathematics 20 (2020): 451-632. [3] Chen, Yuxin, et al. "Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval." Mathematical Programming 176 (2019): 5-37. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no questions at this moment. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper does not have potential negative social impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**1. My major concern of this paper is a lack of theoretical novelty. ... If I underestimate the technical novelty of the paper, I hope the authors could clarify and please highlight their technical novelty.** The results presented in this paper cannot be obtained by simply combining the previous works that the reviewer mentioned. The technical novelty of our work lies in the delicate analysis of both the $\ell\_2$ and $\ell\_\infty$ norms in Phase I, which was not studied before. To show that the norms decrease exponentially at a rate less than 1, [2] mainly used the strong convexity of the Hessian matrix around $\mathbf{x}^\star$. In Phase I, we do not have such convexity, and the Hessian matrix implies that both $ \Vert \mathbf{x}^{(t)} - \tilde{\mathbf{x}}^{(t)} \Vert\_2$ and $ \Vert \mathbf{x}^{(t)} - \mathbf{x}^{(t,l)} \Vert\_2$ may grow at a rate $(1 + \eta \lambda^\star)$ in Phase I. However, in simulations we observed that both quantities do not increase exponentially, and we actually prove through Lemmas 5.1 and 5.4 that the quantities increase only polynomially with respect to $t$. Two techniques were developed to obtain these results. First, we introduced another sequence $\hat{\mathbf{x}}^{(t)}$ that evolves with a simpler recursive equation than $\mathbf{x}^{(t)}$, and proved that $\Vert \hat{\mathbf{x}}^{(t)} - \tilde{\mathbf{x}}^{(t)} \Vert\_2 $ increases polynomially, as stated in Lemma 5.2. We proved the lemma by expanding both sequences as matrix polynomials, an approach that was not used in any of the previous works. 
The quantity $\Vert \mathbf{x}^{(t)} - \hat{\mathbf{x}}^{(t)} \Vert\_2 $ is allowed to grow exponentially at the rate $(1 + \eta \lambda^\star)$ as stated in Lemma 5.3, but since its initial size is proportional to $\beta\_0^3$, it is negligible compared to $\Vert \hat{\mathbf{x}}^{(t)} - \tilde{\mathbf{x}}^{(t)} \Vert\_2 $ in Phase I, thanks to the small initialization. Second, we proved that $(\mathbf{x}^{(t)} - \mathbf{x}^{(t,l)})$ is almost orthogonal to $\mathbf{u}^\star$ in Phase I (see Lemma 5.5). $ \Vert \mathbf{x}^{(t)} - \mathbf{x}^{(t,l)} \Vert\_2$ can grow exponentially at the rate $(1 + \eta \lambda^\star )$ only when it is parallel to $\mathbf{u}^\star$. By showing that the two are almost orthogonal in Phase I, the norm grows only linearly with respect to $t$. With this result, we finally succeed in showing that the trajectory is close to the fully observed case in the $\ell\_\infty$-norm sense, and the norm is kept to $\frac{1}{\sqrt{np}} \frac{\beta_0}{\sqrt{n}}$, ignoring log factors, as stated in (12) of Lemma 5.1. To conclude, although the same leave-one-out approach was used, induction hypotheses different from those of [2] were used in our work to reflect the different geometry of Phase I. Note also that thanks to our delicate analysis of the $\ell\_2$-norm in Phase I, an initialization size of up to $n^{-1/4}$ is allowed in our result, while the upper bound on the initialization size reads $n^{-3/4}$ in [4], which generalizes the result of [1]. --- >**2. I am not satisfied with the convergence guarantees. The estimation error provided by Theorem 3.1 only shows that the output of the proposed algorithm is consistent, namely is o(1). 
However state-of-the-art results for matrix completion already shows that GD with spectral initialization (which I believe is equivalent to small random initialization in some sense, since the initial phase of the latter algorithm is similar to some form of power method) achieves minimax-optimal estimation error [2]. Could the authors please explain why their analysis leads to looser error bound?** >**3. In addition, the noise condition in Theorem 3.1 is $\sqrt{np}$ times more stringent than that in [2]. Could the authors please explain why their analysis requires stronger noise conditions?** Both questions are answered in Section 3 (Estimation Error) of the main text and Section F of the supplementary material. To summarize, our current approach employs extra samples to enhance the upper bound on the initialization size, which is evident from equation (8) (it reads $n^{-1/4} \sqrt[4]{np}$). Nonetheless, by keeping the initialization size at $n^{-1/4}$ regardless of the sample complexity, we are able to obtain the minimax-optimal estimation error and the noise condition of [2]. We discussed this tradeoff in Section 3 of the main text, while the adjustments required to carry out the proof with a fixed initialization size are briefly outlined in Section F of the supplementary material. Hence, with extra samples, one has the option either to improve the estimation accuracy or to increase the initialization size and thereby reduce the number of iterations. --- [1] Li, Yuanzhi, Tengyu Ma, and Hongyang Zhang. "Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations." Conference On Learning Theory. PMLR, 2018. [2] Ma, Cong, et al. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." Foundations of Computational Mathematics 20 (2020): 451-632. [3] Chen, Yuxin, et al. 
"Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval." Mathematical Programming 176 (2019): 5-37. [4] Dominik Stöger and Mahdi Soltanolkotabi “Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction”
Summary: This work considers the convergence analysis of the gradient descent method for the rank-one matrix completion problem. In particular, this work assumes small random initialization, which is a relatively relaxed condition compared to existing work. Under this assumption, logarithmic convergence of the gradient descent method is proved. The impact of regularization on the gradient descent method is also analyzed. Strengths: The motivation of this work is clear and valid, and the work is well-organized. Meanwhile, this work is technically sound: the proof and related analyses are provided, and some simulations are provided to further support the major results. Weaknesses: - I have concerns about the novelty and contribution of this work. The rank-one matrix completion problem is the simplest matrix completion problem. There are also constraints on the noise assumption for the revealed entries in this paper. For such cases, there are many efficient methods available, like alternating minimization or the projected gradient descent method. For the gradient descent method, there are also existing works that have proved its convergence. Though this work provides relaxed conditions, such a contribution may be of limited interest to the optimization and machine learning community, let alone for real problems in industry. - The presentation of the proofs of the major results can be further improved. For instance, the proof of Lemma 5.1 is separated into two parts at different places; it may be better to combine them after finishing the other proofs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you compare the major results on the rank-one matrix completion problem with the following works? https://arxiv.org/pdf/2008.04988.pdf https://proceedings.neurips.cc/paper/2020/file/f86890095c957e9b949d11d15f0d0cd5-Paper.pdf Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the feedback. Please see our response to the reviewer’s question. >**I have concern for the novelty or the contribution of this work. Rank-one matrix completion problem is the simplest problem for matrix completion problems. Also there have constraints for the noise assumption of the revealed entries in this paper. For such cases, there are many efficient methods to deal with, like alternating minimization or projected gradient descent method. For gradient descent method, there also have existing works which have proved its convergence. Though this work provided with relaxed conditions, such contribution may be limited for the optimization and machine learning community, let alone the real problems in industry.** The question of whether GD combined with random initialization can effectively solve a specific problem holds significant importance in the field of optimization. Phase retrieval and matrix sensing are low-rank recovery problems, like matrix completion. For those problems, the global convergence of GD from a randomly initialized point has been proved. However, despite its similarity to matrix sensing, no analogous result had been established for matrix completion. The problem remained open especially after the local convergence result [a] published in 2017. Although the rank-one case does not have much impact in itself, it provides a good starting point for anyone tackling the full problem. Our novel analysis of Phase I provides a good understanding of the dynamics of GD started from a small random initializer. We expect that our key lemmas, such as Lemmas 5.2 and 5.5, will continue to hold for the general rank-r case in a similar way. For Phase II, we have difficulty analyzing the singular values of the trajectory, and this is discussed in Section 8 with some simulation results. 
For the noise model, we followed the standard of many previous works on low-rank recovery, and we do not think it is a strong or restrictive assumption. It is common to model noise with the Gaussian distribution in various engineering fields, although it may not be appropriate for some specific problems. --- >**The presentation for the proof of the major results can be further improved. For instance, the proof of Lemma 5.1 is separated into two parts at different places, it may be better to combine them together after finishing other proofs.** The lines from 227 to 243 are all related to the proof of (11), and the lines from 246 to 263 are all related to the proof of (12). They are not separated. --- >**Can you compare the major results about the rank-one matrix completion problem with the following works? (Rui Liu and Alex Olshevsky, 2020) and (Qianqian Ma and Alex Olshevsky, 2020)** A special error model was studied in [b]: for some fraction of rows and columns, the observed matrix is allowed to be corrupted entirely, by an arbitrary amount. Based on an algorithm similar to alternating minimization, the corrupted rows and columns are found recursively, and those rows and columns are not used in the estimation of the singular vectors. The main theorem gives the required sample complexity as a function of the fraction of corrupted rows and columns, and it implies that at most a fraction of $1 / \log \log(n)$ of the rows can be corrupted. Because [b] uses a different error model from ours, a direct comparison is difficult. However, our model allows Gaussian noise to be added to every entry of the observed matrix, rather than to some fraction of the entries. Moreover, the variance of the Gaussian noise can be as large as $\sqrt{\log n}$ times the maximum entry of the ground truth matrix. Our main theorem shows that by increasing the initialization size, the convergence time is reduced if more than the optimal number of samples ($n \mathrm{poly} (\log n)$) is provided. 
In [b], however, only the necessary and sufficient condition on the sample complexity was analyzed; how extra samples affect the performance or convergence speed of the algorithm was not addressed. [c] studies the alternating minimization algorithm for the rank-1 matrix completion problem, and the convergence of the algorithm was analyzed. However, both [b] and [c] assume that every entry of the ground truth matrix is positive. We suspect that such an assumption is required to show that the singular vectors (x, y) at each round are incoherent. Our result does not require such an assumption. Lastly, we would like to mention that the main purpose of this work is not to insist that GD with small random initialization is the best algorithm for solving the rank-1 matrix completion problem. Our primary interest is in explaining the convergence of GD for nonconvex problems despite the existence of local minima or saddle points, a question which has also gained much attention in the machine learning and optimization communities in recent years. --- [a] Cong Ma et al. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution." [b] Q. Ma and A. Olshevsky, “Adversarial Crowdsourcing Through Robust Rank-One Matrix Completion” [c] R. Liu and A. Olshevsky, "Asymptotic Convergence Rate of Alternating Minimization for Rank One Matrix Completion"
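As a point of comparison, the alternating-minimization style of algorithm discussed in this reply (as in [b] and [c]) can be sketched for noiseless asymmetric rank-1 completion. This is a hypothetical illustration with made-up sizes, not the algorithm of [b] or [c]:

```python
import numpy as np

# Hypothetical sketch of alternating minimization for noiseless rank-1
# completion of an asymmetric matrix M = x* y*^T: alternately solve the
# closed-form least-squares problems for y (given x) and for x (given y)
# over the observed entries. Sizes and the sampling rate are illustrative.

rng = np.random.default_rng(1)
n, p = 100, 0.6
x_star = rng.standard_normal(n)
y_star = rng.standard_normal(n)
M = np.outer(x_star, y_star)
Omega = rng.random((n, n)) < p          # Bernoulli(p) observation mask

x = rng.standard_normal(n)              # random (not spectral) initialization
for _ in range(100):
    # per-column least squares:  y_j = <x_Om, M_Om> / ||x_Om||^2
    y = np.array([M[Omega[:, j], j] @ x[Omega[:, j]]
                  / max((x[Omega[:, j]] ** 2).sum(), 1e-12) for j in range(n)])
    # and symmetrically per row for x
    x = np.array([M[i, Omega[i]] @ y[Omega[i]]
                  / max((y[Omega[i]] ** 2).sum(), 1e-12) for i in range(n)])

# the factors are identified only up to scale, so compare outer products
rel_err = np.linalg.norm(np.outer(x, y) - M) / np.linalg.norm(M)
```

Each inner update is an exact least-squares solve, so the masked objective decreases monotonically; in this noiseless toy instance the outer product recovers M to high accuracy.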
High-Fidelity Audio Compression with Improved RVQGAN
Accept (spotlight)
Summary: In the present paper, the authors introduce RVQGAN, a neural audio codec that uses a convolutional encoder/decoder along with Residual Vector Quantization as a bottleneck, with a multi-scale mel reconstruction loss and several adversarial losses. They show state-of-the-art performance from 3 to 8 kbps, compared with the EnCodec model [8]. The key novelties are: - in each VQ layer, the authors perform the retrieval of the nearest codebook entry in a lower-dimensional space, and use cosine similarity instead of L2 distance to boost the utilization of the codebooks. - the authors drop the exponential moving average rule for learning the codebooks. - the authors notice that the original technique from SoundStream [45] for selecting a varying number of quantizers can hurt full-bandwidth performance, and thus select all the quantizers in RVQ 50% of the time. - refinement of the losses and adversaries from previous work (in particular using different weights for different frequency bands). - balancing of the dataset to sample fullband audio more often. The authors provide extensive ablation studies with objective metrics, and one subjective comparison with EnCodec at various bitrates. Strengths: - great execution and illustration of the various issues tackled here and the proposed solutions. - quality of the final model clearly surpasses the existing state of the art. - detailed ablation study with objective metrics. - single model for fullband audio over multiple audio domains. Weaknesses: - incremental improvement over previous work: the overall method comes from [45], and the adversarial losses are a combination of those from [45] and [8]. Minor changes to the objective loss compared with [15, 45]. The authors nevertheless claim novelty: l.59, "we make impactful design changes [...]: multi scale stft discriminator, multi scale losses". - some details are unclear to me; in particular, the authors mention they do not use the EMA rule from [9]. 
How are the codebooks updated then? The authors also mention a low-dimensional projection, but do not mention when and how it is computed and updated. See questions. - no ablation with subjective evaluations: it could have been interesting to clearly identify where most of the subjective gain comes from, e.g., is it from the quantizer, the adversarial losses, or the dataset balancing? - it seems the authors compare a 24 kHz baseline model with a 44.1 kHz one, keeping the ground truth at 44.1 kHz, which can have a high impact on subjective and objective metrics independently of the design choices made by the authors. In particular, the ViSQOL for EnCodec in Table 3 is much lower than reported in [8]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As mentioned before, I would need more clarification on the exact algorithm used for VQ. Is a PCA computed? If so, how often is it updated (as the encoder output distribution might change)? How are the codebooks updated? In Section 3.4, the architecture for the multi-scale discriminator is missing. Is it the same one as [8] or [45]? Paragraph starting at line 193: this insight has been noted and motivated before in [45], [5] and [40]; it doesn't seem like the authors bring any new material evidence here? It would be interesting to see the breakdown of Figure 3 by audio category. A last question would be over the learnt snake activation parameters. Do the authors have any insight into the distribution of the learnt $\alpha$? Does this vary with the layer? I'm trying to get a better sense of how exactly the model is utilizing this feature. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: authors properly address societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
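The residual-vector-quantization bottleneck and the modified quantizer dropout described in the review above (all quantizers are used 50% of the time during training) can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the paper's implementation; the codebooks, sizes, and names are made up:

```python
import numpy as np

# Hypothetical sketch of residual vector quantization (RVQ) with the modified
# quantizer dropout the review describes: during training, all quantizers are
# used 50% of the time, otherwise a random prefix of them.

rng = np.random.default_rng(0)
n_codebooks, codebook_size, dim = 4, 16, 8
codebooks = rng.standard_normal((n_codebooks, codebook_size, dim))

def rvq(z, training=False):
    """Quantize a vector z of shape (dim,); return (z_q, codes)."""
    if training and rng.random() < 0.5:
        n_q = int(rng.integers(1, n_codebooks + 1))  # dropout: random prefix
    else:
        n_q = n_codebooks                            # full bandwidth
    residual = z.copy()
    z_q = np.zeros_like(z)
    codes = []
    for i in range(n_q):
        # each codebook quantizes the residual left by the previous ones
        k = int(np.argmin(np.linalg.norm(codebooks[i] - residual, axis=1)))
        codes.append(k)
        z_q += codebooks[i][k]
        residual -= codebooks[i][k]
    return z_q, codes

z = rng.standard_normal(dim)
z_q, codes = rvq(z)                                  # inference: all codebooks
```

Because dropout selects a *prefix* of quantizers, a model trained this way can decode from any leading subset of the codes, which is what makes the same codec usable at several bitrates.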
Rebuttal 1: Rebuttal: Thank you very much for your review and constructive criticism of our model. We appreciate the effort in getting to understand the details of our model, and we’ve made our best efforts to answer each of your concerns and questions. > *incremental improvement over previous work: overall method is coming from [45], adversarial losses are a combination of the one from [45] and [8]. Minor changes to the objective loss compared with [15, 45]. The authors however claim novelty: l.59, "we make impactful design changes [...]: multi scale stft discriminator, multi scale losses"* > We discussed novelty concerns at length in the global rebuttal. We sincerely request the reviewer to review that for additional context. > *some details are unclear to me, in particular, the authors mention they do not use the EMA rule from [9]. How are the codebooks updated then? The authors also mention a low dimension projection, but do not mention when and how it is computed and updated. See questions* > The residual vector quantization algorithm is an improved version of the one in SoundStream [45], where the codebook learning is modified, inspired by the techniques proposed in Improved VQGAN [43], to encourage uniform codebook usage, thereby preventing a “collapse” where many codebook entries are unused. 
Specifically, while the standard quantization operation is implemented as follows:

$$ z_q(x) = e_k, \quad \text{where} \quad k = \text{argmin}_j \Vert z_e(x) - e_j \Vert_2 $$

$$ \text{where $e$ is the codebook, $z_e$ is the encoder, and $z_q$ is the quantizer,} $$

the modified quantization operation used in our work applies the following equation:

$$ z_q(x) = W_\text{out}\, e_k, \quad \text{where} \quad k = \text{argmin}_j \Vert \ell_2(W_\text{in} z_e(x)) - \ell_2(e_j) \Vert_2 $$

$$ \text{where $W_\text{in} \in \mathbb{R}^{M \times D}$ and $W_\text{out} \in \mathbb{R}^{D \times M}$ are projection matrices,} $$

$$ \text{$D$ is the output dimension of the encoder, $M$ is the codebook dimension, and $M \ll D$.} $$

As illustrated in the paper as well as in [43], performing the codebook lookup in a low-dimensional space leads to improved codebook utilization. In our work, D = 1024 and M = 8. Also, we noticed that directly updating the codebook using the loss functions from the original VQ-VAE paper instead of EMA is sufficient, simpler to implement, and leads to slight performance improvements. Specifically, the loss function is defined as follows:

$$ z_\text{proj}(x) = W_\text{in}\, z_e(x) $$

$$ \mathcal{L}_\text{VQ} = \Vert \text{sg}[\ell_2(z_\text{proj}(x))] - \ell_2(e_k) \Vert_2^2 + \beta\, \Vert \ell_2(z_\text{proj}(x)) - \text{sg}[\ell_2(e_k)] \Vert_2^2 $$

$$ \text{where sg is the stop-gradient operator.} $$

> *seems like the authors do a comparison of a 24kHz baseline model with a 44.1kHz, keeping the ground truth as 44.1kHz, which can have a high impact on subjective and objective metrics independently of the design choices made by the authors. In particular the visqol for Encodec in Table 3 is much lower than reported in [8]* > Thanks for the important feedback. We revisited comparisons against relevant work and discussed them at length in the global rebuttal. Please review that for additional clarifications. > *In Section 3.4, the architecture for the multi scale discriminator is missing. 
Is it the same one as [8] or [45]?* > The architecture of the discriminators is as follows. We use the same multi-period discriminator architecture as HifiGAN, and the multi-scale STFT discriminator architecture from UnivNet, except with the complex STFT as input rather than magnitude spectrograms. Additionally, we do multi-band processing by splitting the spectrogram into sub-bands and using separate discriminator weights for each sub-band, as motivated in our paper (Section 3.4 and Section 4.5). Also, the code attached to the submission has the exact details of the implementation. > *Paragraph starting 193: this insight has been noted and motivated before in [45], [5] and [40], it doesn't seem like the authors bring any new material evidence here?* > While this may seem obvious, earlier works such as [45], [5] and [40] haven't explicitly stated this intuition or shared audio samples with reconstructions from different quantizers. We found this may be interesting to some readers, especially with a practical demo. Moreover, this intuition is important to understand, since prior works such as EnCodec use quantizer dropout only in groups, rather than at each level, and this could impact downstream performance when learning language models on top of the tokens. For instance, we have internally observed that training language models on top of a codec trained without any quantizer dropout leads to poor audio quality. > *It would be interesting to see the breakdown of Figure 3 by the category of audio.* > We have attached a pdf to the global rebuttal with the requested breakdown. > *A last question would be over learnt snake activation parameters. Do the authors have any insight over the distribution of the learnt $\alpha$? Does this vary with the layer? I'm trying to get a better sense of how exactly the model is utilizing this feature.* > Thanks for the insightful question. 
We were just as curious about the distribution of the snake activation parameters, and expected it to correlate with the frequencies being learnt or introduced across the generator. However, we did not find any clear patterns in the parameters within each layer (across different dimensions) or across layers. --- Rebuttal Comment 1.1: Title: replying to the authors Comment: I would like to thank the authors for their detailed response. I would kindly ask them to repost the explanation of the updated RVQ rules, as it seems the formatting is broken and hence hard to parse. --- Reply to Comment 1.1.1: Comment: Thank you for bringing this to our notice. We have posted an additional comment with the corrected LaTeX code. --- Rebuttal Comment 1.2: Title: Latex correction for correct equation rendering Comment: We present the equations above again with the correct LaTeX code. --- The modified quantization operation used in our work: $$ z_q(x) = W_\text{out}\, e_k, \quad \text{where}\quad k = \text{argmin}_j \Vert\ell\_2(W\_\text{in} z_e(x)) - \ell\_2(e_j) \Vert_2 $$ $$ \text{$W_\text{in} \in \mathbb{R}^{M \times D}$ and $W_\text{out} \in \mathbb{R}^{D \times M}$ are projection matrices} $$ $$ \text{$D$ is the output dimension of the encoder} $$ $$ \text{$M$ is the codebook dimension, $M \ll D$} $$ --- The vector quantizer loss function: $$ z\_\text{proj}(x) = W\_\text{in}\, z_e(x) $$ $$ \mathcal{L}_\text{VQ} = \Vert\text{sg}[\ell\_2(z\_\text{proj}(x))] - \ell\_2(e_k) \Vert_2^2 + \beta \Vert\ell\_2(z\_\text{proj}(x)) - \text{sg}[\ell\_2(e_k)] \Vert_2^2 $$ $$ \text{where sg is the stop-gradient operator} $$
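In code, the factorized, L2-normalized codebook lookup described in this thread amounts to roughly the following. This is a minimal NumPy sketch with illustrative shapes; the projections and codebook here are random placeholders (not learned weights), and a column-vector convention is assumed so that $W_\text{in}$ maps $\mathbb{R}^D \to \mathbb{R}^M$. Comparing L2-normalized vectors is equivalent to maximizing cosine similarity:

```python
import numpy as np

# Hypothetical sketch of factorized, L2-normalized codebook lookup:
# project the encoder output to a low-dimensional space, find the nearest
# unit-normalized code, and project back. Weights are random placeholders.

rng = np.random.default_rng(0)
D, M, K = 1024, 8, 1024                 # encoder dim, codebook dim, # codes

W_in = rng.standard_normal((M, D)) / np.sqrt(D)   # input projection
W_out = rng.standard_normal((D, M)) / np.sqrt(M)  # output projection
codebook = rng.standard_normal((K, M))

def l2_normalize(v, axis=-1, eps=1e-12):
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

def quantize(z_e):
    """z_e: (D,) encoder output -> (z_q, k)."""
    z_proj = W_in @ z_e                               # (M,) low-dim code
    dists = np.linalg.norm(l2_normalize(codebook) - l2_normalize(z_proj),
                           axis=1)                    # distances on the sphere
    k = int(np.argmin(dists))
    z_q = W_out @ codebook[k]                         # project back to (D,)
    return z_q, k

z_e = rng.standard_normal(D)
z_q, k = quantize(z_e)
```

Since $\Vert a - b \Vert_2^2 = 2 - 2\langle a, b\rangle$ for unit vectors, the `argmin` over normalized distances picks exactly the code with the highest cosine similarity to the projected encoder output.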
Summary: This paper introduces a novel high-fidelity neural audio compression algorithm that achieves impressive compression ratios while maintaining audio quality. The authors combine advancements in high-fidelity audio generation with improved vector quantization techniques from the image domain, along with enhanced adversarial and reconstruction losses. Their approach achieves a remarkable 90x compression of 44.1 kHz audio into tokens at just 8 kbps bandwidth. One of the notable strengths of this work is its universal applicability, as it can compress various audio domains (speech, environment, music) using a single model. The authors conduct a thorough comparison with competing audio compression algorithms and demonstrate the superior performance of their method. Furthermore, they provide detailed ablations for each design choice, allowing readers to gain insights into the effectiveness of different components. Additionally, the paper offers open-source code and trained model weights, which contribute to the reproducibility of the results. Strengths: - **Impressive compression performance**: The proposed algorithm achieves a 90x compression ratio for 44.1 kHz audio at just 8 kbps bandwidth, demonstrating its effectiveness in reducing data size while preserving audio quality. - **Novel Method**: The proposed remedies for "codebook collapse" and the improved "quantizer dropout" effectively address known issues in lossy audio compression. - **Universal applicability**: The single model's ability to compress various audio domains makes it highly versatile and applicable to generative modeling of different audio types. - **Comprehensive evaluation**: The authors compare their method against existing audio compression algorithms, demonstrating its superiority in terms of performance. 
- **Thorough ablations**: The paper provides detailed insights into the impact of design choices, allowing readers to understand the effectiveness of different components and their contributions to the overall results. - **Reproducibility**: The availability of open-source code and trained model weights enhances the reproducibility of the research, enabling other researchers to build upon and validate the findings. Weaknesses: - The novelty of the proposed model structure is a combination of existing models: - factorized codes and L2-normalized codes are from Improved VQGAN image model; - Snake activation function from BigVGAN - This paper presents a strong audio compression technique. However, since the proposed novel points are specifically tailored for a narrow domain, their impact may be limited to the machine learning community and other domains like computer vision/NLP Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Have you attempted to apply a similar architecture to the vocoder in TTS? - Which components do you believe can be applied and generalized to other domains or tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and feedback. Please find below answers and clarifications to your questions. > *The novelty of the proposed model structure is a combination of existing models:* > > - *factorized codes and L2-normalized codes are from Improved VQGAN image model;* > - *Snake activation function from BigVGAN* We have addressed the novelty concern in the global rebuttal. We sincerely request you to review it for additional context. > *Have you attempted to apply a similar architecture to the vocoder in TTS?* > > *Which components do you believe can be applied and generalized to other domains or tasks?* > While this specific work is focused on an improved technique for training neural audio codecs, much of the new recipe is widely applicable to other audio generation tasks such as speech enhancement, source separation, neural vocoding, etc. Internally, we have trained it for the tasks of neural vocoding (trained by removing the encoder and quantization step) and speech enhancement (trained by removing the quantization step, while retaining the adversarial losses) and found it to perform as well as or better than state-of-the-art models for the respective tasks. While some of our recipe is borrowed from the image domain, it would also be valuable to apply some audio generation techniques to other domains like image/video. For example: periodic activations, residual vector quantization, and spectral reconstruction losses haven't been deeply explored in the image domain, and there is interesting scope for future work in this direction.
Summary: This paper introduces a RVQGAN-based neural audio codec method, demonstrating superior audio reconstruction quality, a high compression rate, and generalization across diverse audio domains. The authors substantiate the significant performance superiority of their model over alternatives through extensive and thorough qualitative and quantitative experiments. They present and validate their technique to fully utilize residual vector quantization, alongside model, discriminator, and loss design choices for enhanced performance. Strengths: * The paper addresses some of the key challenges in the neural audio codec domain. * The authors conducted strong and extensive experiments, providing comprehensive results. * The reference list appears to be thorough and comprehensive. * The authors support their findings by sharing the developed model, which is beneficial for the research community. Weaknesses: * The authors derived the proposed methods from existing studies and experimentally validate them in the neural audio codec domain. This approach seems to compromise the scientific novelty of the research. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Could the model be applied to downstream applications such as training text-to-speech (TTS) models? Previous works like EnCodec and SoundStream utilized causal architectures to make them suitable for in-context learning or prompting in TTS tasks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed both the limitations of their research and its possible societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and feedback. Please find below our comments and clarifications. > *The authors derived the proposed methods from existing studies and experimentally validated them in the neural audio codec domain. This approach seems to compromise the scientific novelty of the research.* > We addressed the novelty concern in the global rebuttal. We kindly ask you to review it for additional context. > *Could the model be applied to downstream applications such as training text-to-speech (TTS) models? Previous works like EnCodec and SoundStream utilized causal architectures to make them suitable for in-context learning or prompting in TTS tasks.* > We find that causal codec architectures aren’t tied to downstream applications such as TTS or music modeling. Causal architectures were traditionally required to support streaming applications, which was the primary motivation for earlier work on codecs. We trained a generative music model using our codec, which is capable of creating high quality variations of music with different styles of prompting (publication details are withheld for anonymity). Additionally, we internally trained AudioLM-style text-to-speech models on top of the learned tokens and found them capable of generating very high quality speech with minimal artifacts. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I would like to thank the authors for addressing the raised concerns. I wish to further discuss the last query I presented, and if there are any misunderstandings on my part, I would be grateful for any clarifications. When preparing audio data for training an LM-like TTS model, it's typical not to differentiate between the audio for the prompt and the subsequent audio for training. Instead, the entire audio is encoded all at once. In such a setup, a non-causal encoder might cause the prompt code to incorporate trailing audio information. 
For instance, assuming the length of an audio prompt is 3 seconds and the receptive field of the encoder spans 2 seconds, the prompt code derived from a 5-second ground truth audio won't be confined to just the 0 to 3-second audio data. It will also cover the subsequent 3 to 5 seconds, as they fall within the receptive field of the final code. During the inference of an LM-like TTS model trained in this manner, potential issues can emerge: a) If the entire 0 to 5 second ground truth audio is encoded and the first 3/5 of it is selected as the prompt code, it results in a cheating problem. This is because the last code of the prompt will represent not just the audio information from 0 to 3 seconds but also from the subsequent 3 to 5 seconds. b) Conversely, if only the first 3 seconds of the ground truth audio are clipped first and then encoded to be used as the prompt code, the encoded code might be interpreted as though the audio from 3 to 5 seconds is silent, due to the effect of zero-padding in the encoder. On the contrary, encoders with a causal architecture do not present these issues, making them seemingly more suited for providing audio prompts to LM-like TTS models. I am curious to hear the authors' perspective on this matter. --- Reply to Comment 1.1.1: Title: Discussion on causality of audio codec for training LM-like models Comment: Thanks for bringing up this interesting discussion. In the context of our proposed Improved RVQGAN model, we can easily compute the receptive field of the generator architecture, and we find that one frame in the encoded latent space "sees" 7978 samples of audio at 44.1 kHz, which corresponds to 180 ms (90 ms on each side). Our architecture only involves strided convolutions without any self-attention, which limits the receptive field of the encoder (arguably, in a good way). Since the receptive field in the non-causal direction is only 90 ms, we believe that it doesn't cause a significant "bleed" of information. 
Moreover, although the receptive fields strongly overlap between subsequent tokens (due to strided convolutions), we believe there's very little incentive for the model to store overlapping information in the discrete latents, since the task of heavy compression (~90x) necessitates that the model judiciously use the information bottleneck to store relevant, unique information to reconstruct audio at high fidelity. We empirically find that the latents learn very "local" (patch-wise) information, in agreement with the findings in SoundStream / AudioLM and the interpretation of them as "acoustic tokens". For example, we found that any point-wise artifact in the input audio (like a short, sudden click) is exactly reproduced in the output audio when passed through the codec. As noted earlier, we have also trained LM-style music and speech models on top of the learned acoustic tokens, and they do not exhibit any issues during inference (with prompting), suggesting that bidirectional codecs (with limited receptive field) don't have systemic problems limiting their usage in this respect. Note: when prompting we would only encode the prompt audio of 3 seconds, and not the entire audio of 5 seconds followed by clipping. We also did not observe any zero-padding artifacts. We find that the choice of codec causality and the subsequent generative modeling approach generally don't interact strongly. Another supporting data point comes from image-domain models such as Parti and Phenaki, which also train causal LMs on top of non-causal image tokenizers. However, we feel that this interaction can be studied more thoroughly in future work, and our open-source code can help in this respect.
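The receptive-field arithmetic quoted above can be reproduced mechanically for any stack of strided 1-D convolutions. A small sketch; the layer configuration below is hypothetical, since the exact kernel sizes and strides of the model aren't listed in this thread:

```python
def receptive_field(layers):
    """Receptive field (in input samples) of stacked 1-D convolutions.

    `layers` is a list of (kernel_size, stride) pairs from input to
    output. Each layer widens the receptive field by (kernel - 1)
    times the cumulative stride ("jump") of all preceding layers.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Hypothetical encoder: a front conv followed by four downsampling blocks.
layers = [(7, 1), (4, 2), (8, 4), (16, 8), (16, 8)]
samples = receptive_field(layers)
print(samples, "samples =", 1000 * samples / 44100, "ms at 44.1 kHz")
```

Halving the receptive field gives the look-ahead in the non-causal direction, which is the quantity the discussion above is concerned with.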
Summary: The authors propose a neural audio codec model that demonstrates superior performance compared to previous works, and present experimental results. Strengths: - The authors appropriately explain the problem they aim to address. - Their method is adequately described. - The authors provide a specific implementation, ensuring reproducibility. - The claims made are reasonable, and the experiments and results support them. - The authors' various ablation studies can be helpful for future work. Weaknesses: - For a neural audio codec to be utilized like traditional audio codecs, it should not exhibit systematic failure patterns. Data-driven neural audio codecs have not been proven to be sufficiently stable from this perspective. Although the authors divided the original dataset into a training set and evaluation set, it is necessary to validate whether the proposed audio codec works well on more diverse and completely different audio data. Additionally, finding failure cases of previous works and comparing them can serve as strong evidence supporting the superiority of the authors' proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Based on the MUSHRA score curves, it appears that higher bitrates yield better scores, and the highest quality that this method can achieve remains unconfirmed. Is there a specific reason for this? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have been well described. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and feedback. Please find our answers below to clarify some details about our model. > …*it is necessary to validate whether the proposed audio codec works well on more diverse and completely different audio data. Additionally, finding failure cases of previous works and comparing them can serve as strong evidence supporting the superiority of the authors' proposed method.* > To the best of our knowledge, our model doesn’t have clear failure cases on any audio domain. However, we noticed that some specific sounds like cymbal crashes in music or glockenspiel-type sounds aren’t modeled accurately. We leave it to future work to identify why there are certain limitations on the specific sounds that can be modeled. Otherwise, we found that our proposed model clearly fixes failure cases in previous works (such as EnCodec). Specifically, EnCodec poorly models background noise (such as room tone) and reverb in speech data, but our proposed model almost perfectly reconstructs such cases. This is illustrated on our samples page. > *Based on the MUSHRA score curves, it appears that higher bitrates yield better scores, and the highest quality that this method can achieve remains unconfirmed. Is there a specific reason for this?* > We did not test our model at significantly higher bitrates since our goal was to achieve the highest rate of compression possible. However, we additionally trained our model at 16 kbps max bitrate with 44.1 kHz audio and found our metrics to significantly improve over the 8 kbps model (Table 2 in the attached pdf).
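The bitrates discussed in this thread follow from the latent frame rate and the codebook configuration: bitrate = (sample_rate / stride) x n_codebooks x bits_per_codebook. A quick sketch; the 44.1 kHz configuration below is hypothetical, while the 24 kHz one matches the EnCodec-style setup mentioned in the global rebuttal:

```python
def codec_bitrate_bps(sample_rate_hz, stride, n_codebooks, bits_per_codebook):
    """Bitrate of an RVQ codec: latent frames per second times bits per frame."""
    frames_per_second = sample_rate_hz / stride
    return frames_per_second * n_codebooks * bits_per_codebook

# Hypothetical: 44.1 kHz audio, stride 512, 9 codebooks of 10 bits each
# lands near the "8 kbps" regime (about 7.75 kbps).
print(codec_bitrate_bps(44_100, 512, 9, 10))

# EnCodec-style: 24 kHz, stride 320, 32 codebooks of 10 bits -> 24 kbps.
print(codec_bitrate_bps(24_000, 320, 32, 10))  # 24000.0
```

Dropping codebooks at inference time lowers the bitrate linearly, which is what makes quantizer dropout and variable-bitrate operation possible in RVQ codecs.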
Rebuttal 1: Rebuttal: The authors of this paper would like to sincerely thank all the reviewers for their time and feedback. Specifically, we thank the reviewers for positively acknowledging: 1) the key challenges in the neural audio codec domain successfully addressed in this work 2) the strong results significantly improving over state of the art 3) comprehensive evaluation and ablation studies that provide insight into all design choices 4) reproducibility of the research through open-sourced code and model weights We also appreciate and welcome the constructive criticism provided for our work and hope to resolve some common concerns noted by the reviewers. ### 1. Addressing novelty concerns The authors would like to clarify and reiterate that we don’t claim novelty in terms of inventing new techniques in this paper. However, there is a strong novel contribution in understanding fundamental limitations in existing codec algorithms and mitigating them with the techniques at our disposal. Each change was strongly motivated by limitations in existing models that have not been addressed to date. Specifically: 1. We found a fundamental limitation in existing audio codecs (such as SoundStream, EnCodec, etc.) that under-utilize their bandwidth due to poor codebook learning (or codebook collapse). This sub-optimal bandwidth usage prevented these models from achieving better quality and higher compression rates, both of which are desirable. 2. We studied the quantizer dropout technique in greater detail and found that it could hurt full-bandwidth reconstruction of the model. Addressing this leads to better perceptual reconstruction quality, while still maintaining the benefits of quantizer dropout. 3. While techniques such as periodic (Snake) activations, multi-scale spectral loss, and the multi-scale STFT discriminator already exist in prior work, there is little consensus in the audio community on the impact of these changes or the motivation behind them. 
For instance, the Snake activation introduced in BigVGAN was not adopted by EnCodec even though it existed earlier. Our work uncovers the theoretical limitations that motivate each of these design changes and also provides thorough ablations for each change. As an additional example, we note in our work that using a multi-scale mel reconstruction loss with a lowest hop size of 8 (and filter length 32) leads to much better modeling of quick transients that exist in audio domains such as music. We find that scientifically understanding each limitation, and addressing each of them with well-studied and motivated techniques, leads to a significant improvement over the state of the art, even leading to a 2x improvement in key metrics such as mel distance (see Table 1 in the attached PDF). We request the reviewers to consider the scientific novelty in our work in identifying key limitations of existing models, as well as the disciplined usage of known techniques to mitigate them. ### 2. Revisiting comparisons against relevant work The reviewers noted that some of our comparisons against baseline models (like EnCodec) are at different sampling rates, which could affect objective and subjective metrics. While this is a valid concern, this was a difficult choice we originally made, since downsampling our proposed model's output would make for an unfair comparison. While EnCodec runs natively at 24 kHz, by downsampling the output of the proposed model from 44.1 kHz to 24 kHz we discard all the capacity and bitrate that was allocated to these higher frequencies. **Listening tests at 24 kHz**: To remove the bandwidth concerns, we downsampled all examples to 24 kHz and redid the subjective listening tests. While this puts our proposed work at a further disadvantage, since it was trained to compress higher-bandwidth audio with higher compression factors, we find that our model still significantly outperforms baseline methods (Figure 1 in the attached pdf). 
**Apples-to-apples comparison with EnCodec**: Moreover, we re-trained our proposed model with the same exact configuration as EnCodec (24 kHz sampling rate, 24 kbps bitrate, 320 stride, 32 codebooks of 10 bits each) to make a thorough apples-to-apples comparison. We have attached quantitative evaluations for this comparison (Table 1 in the attached PDF). In summary, our proposed model significantly improved over EnCodec across all metrics, achieving a mel distance (lower is better) of 0.49 compared to 1.05 for EnCodec. We believe these updated results should further strengthen the significance of our proposed model. We will add these updated results to the table in the paper to make the comparisons straightforward. Pdf: /pdf/fcac4ae243005c460829c74515865dd572364b04.pdf
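Residual vector quantization, the mechanism at the heart of the models compared above, is straightforward to sketch: each stage quantizes the residual left by the previous stage with its own codebook, so the reconstruction is refined stage by stage. A minimal NumPy version with random, fixed codebooks (purely illustrative: no codebook learning, factorization, or L2 normalization as in the paper); each codebook includes the zero vector so a stage can pass the residual through unchanged, making the error non-increasing:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization of row vectors x with shape (n, dim).

    Each codebook (a (size, dim) array) quantizes the residual left by
    the previous stage; the chosen entries across stages sum to the
    reconstruction, so later codebooks refine earlier ones.
    """
    residual = x.astype(float).copy()
    quantized = np.zeros_like(residual)
    codes = []
    for cb in codebooks:
        # nearest codebook entry for each residual vector
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        quantized += cb[idx]
        residual -= cb[idx]
    return codes, quantized

# Illustrative random codebooks with decreasing scale, each containing zero.
rng = np.random.default_rng(0)
codebooks = [np.vstack([np.zeros(8), (0.5 ** i) * rng.standard_normal((15, 8))])
             for i in range(4)]
x = rng.standard_normal((4, 8))
codes, xq = rvq_encode(x, codebooks)
```

Dropping the trailing entries of `codes` corresponds to lowering the bitrate, which is exactly the knob quantizer dropout trains the decoder to tolerate.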
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Transformers learn through gradual rank increase
Accept (poster)
Summary: This paper presents theoretical justifications and empirical evidence that transformers demonstrate incremental learning dynamics in the low-initialization regime. The authors consider a very restricted diagonal attention model along with a range of restrictive assumptions to theoretically characterize a number of features of learning dynamics in single-layer transformers (a single attention layer). The assumptions include nondegeneracy of dynamics (4.2), existence of stationary points that are strict local minima (4.3), and robustness of gradient flow to perturbations (4.4). These assumptions, in the diagonal model, allow the authors to prove that learning evolves through discrete, incremental stages of gradual rank increase. These assumptions and theoretical predictions are experimentally verified using a toy learning scenario matching the assumptions, and results are given on a full multi-layer, multi-head vision transformer which demonstrate similar learning dynamics despite being significantly far from the restrictive assumptions of the theory. Strengths: 1. **Convincing Theory**. The theoretical framework and proof of incremental learning dynamics is fairly easy to follow and, although based on a very restricted model and stringent assumptions, I think provides a number of useful tools for understanding and studying the learning dynamics of more complex models. 2. **Fairly Convincing Empirical Confirmation**. The empirical validation of incremental dynamics in the single attention layer model (Figure 3) is interesting -- in both the diagonal and full model the empirically observed behavior follows the predicted incremental "activation" of directions. However, I find it odd that the evolution of singular values is so markedly different between the value-output and key-query matrices (Figures 3a-d and 3b-e). The value-output eigenvalues are significantly more sparse and their activations more "instantaneous". 
This difference is also roughly observable in the ViT results in Figure 2 (although reversed!). Weaknesses: 1. **Organization**. As with many theoretical papers, some of the most interesting observations are relegated to the Supplementary Material -- which I appreciate is inevitable. Reorganizing the preliminaries and theoretical development of the main paper I admit is delicate, but I think that the stable rank analysis in Figure 4 and illustration of low-rank bias in Figure 5 add little to the empirical results section of the main paper. Instead, I think this space could have been better used providing experimental evidence of the assumptions 4.2, 4.3, and 4.4 (is there an assumption 4.1 that I am missing?). Some of the plots in Figures 9 and 10 of the Supplementary Material, or even the *very* clear illustration of stepped behavior and dependence on $\alpha$ from Figure 6, would be excellent illustrations to include in the main paper. I find the plots in rescaled training time much easier to interpret. 2. **ViT Training Regime**. All of the empirical results on ViT were generated using Adam, which I fear might introduce its own dynamics into the learning process due to its modulation of gradients using the diagonal empirical Fisher matrix. It would be interesting to see results generated using vanilla SGD to verify that the observed learning dynamics are due to the low-initialization regime. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Why is the rank at initialization bounded (at 128?) in the plots in Figure 5? 2. Is the top/bottom organization of plots indicated in the captions of Figures 2 and 3 correct? If so, can you explain the difference in behavior between the key-query and value-output eigenvalues? 3. Are the same learning dynamics observed in ViT when training with SGD as opposed to Adam? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors provide a discussion of the limitations of the results presented in the paper, acknowledging that the restrictive assumptions and requirements of the theory are quite far from actual practice. They also connect their results to recent work on low-rank model adaptation, which could potentially exploit the theoretical and empirically observed incremental learning dynamics of transformers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and for your helpful suggestions on presentation and paper organization. We are happy that you found our theory and experiments convincing. We answer your questions below. * Q1: “Why is the rank at initialization bounded (at 128?) in the plots in Figure 5?” * The rank is bounded because the head dimension is 128 and the embedding dimension is 512, meaning that the matrices $W_Q, W_K, W_V, W_O$ are in $\mathbb{R}^{512 \times 128}$. * Q2: “Is the top/bottom organization of plots indicated in the captions of Figures 2 and 3 correct? If so, can you explain the difference in behavior between the key-query and value-output eigenvalues?” * Yes, the top/bottom organization is correct, but it is not consistent between Figure 2 and Figure 3, which may have caused confusion. We will make it consistent in the revision. Both plots demonstrate that incremental learning dynamics occur for both key-query and value-output matrices. * You make an interesting point that there seem to be some qualitative differences in the evolution of the keys-queries and values-outputs matrices. See also Figures 12 and 14, where the stable rank of $\Delta W_VW_O^T$ is higher than that of $\Delta W_Q W_K^T$ for CIFAR-10 and CIFAR-100. However, this trend does not hold for ImageNet (Figure 16). Our current theory unfortunately does not give any clue as to why $\Delta W_VW_O^T$ might have higher stable rank than $\Delta W_QW_K^T$, or vice versa. * Q3: “Are the same learning dynamics observed in ViT when training with SGD as opposed to Adam?” * Yes. Thanks for the suggestion. We have added experiments on SGD confirming this. See attached document. * Thank you also for your excellent suggestions on organization. We agree that moving Figure 6 and some panels from Figures 9 and 10 to the main text will help readability of Section 4, both for illustrating the theorem statement and illustrating and justifying the assumptions. 
We will implement this in the revision. --- Rebuttal Comment 1.1: Comment: Many thanks for your responses to my main questions, curiosities and concerns. The responses address all of my concerns, and my opinion of this work after reading the rebuttal and the other reviews remains positive. If I interpret the results from Figures 2 and 3 in the Rebuttal PDF correctly, the results using SGD not only exhibit the same rank-increasing trend but are also quite a bit more stable compared to using Adam. I encourage the authors to include these observations, as well as the results on training GPT-2 provided in response to Reviewer ZvU1, in any final version of this work.
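Both the 128-rank bound from Q1 and the stable-rank quantity discussed in this review are easy to check numerically. A short NumPy sketch, with random matrices standing in for the trained weights:

```python
import numpy as np

def stable_rank(W):
    """Stable rank ||W||_F^2 / ||W||_2^2, a smooth surrogate that never
    exceeds the exact rank."""
    s = np.linalg.svd(W, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 128))   # e.g. a weight in R^{512 x 128}
# Rank (and hence stable rank) can never exceed min(512, 128) = 128.
assert np.linalg.matrix_rank(W) <= 128

# A rank-one update Delta W = u v^T has stable rank ~1 regardless of
# the ambient dimensions, which is why stable rank is a useful proxy
# for tracking low-rank structure in weight deltas.
u, v = rng.standard_normal((512, 1)), rng.standard_normal((128, 1))
print(stable_rank(u @ v.T))  # ~1.0
```

Unlike the exact rank, the stable rank is robust to the many tiny but nonzero singular values that any trained (or randomly initialized) matrix carries, which is presumably why the paper's figures track it.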
Summary: This paper conducts solid analysis and experiments demonstrating that its theory and proofs offer valuable insights into the incremental learning dynamics of transformers and how they can be better understood. Strengths: 1. The theory and proofs provided in the paper offer valuable insights into the incremental learning dynamics in transformers and how they can be better understood. 2. The experiments conducted in the paper also provide evidence to support the proposed approach. Weaknesses: Besides LoRA, are there any other methods and techniques related to the proof and conclusions of this paper? A further discussion may help. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to the point listed in the Weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The proofs and conclusions of this study are dependent on the assumptions of the diagonal weights and small initialization. Such limitations have been discussed in an entire paragraph of Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our paper. We were happy to read that you thought our contribution provided valuable insights and that our experiments gave good evidence. We answer your question below: * Q1: Besides LoRA, are there any other methods and techniques related to the proof and conclusions of this paper? A further discussion may help. * Training dynamics of transformers are a very active area of research, and so there are indeed other methods and techniques related to the ideas in this paper that have appeared recently. * A very relevant paper called “InRank: Incremental Low-Rank Learning” [Zhao et al. ’23] appeared on arXiv in June, after the NeurIPS deadline. Similarly to us, the authors observe gradual rank increase dynamics during transformer training. This inspires [Zhao et al. ’23] to run LoRA to train a transformer starting at random initialization, and they obtain significant runtime/memory improvements over regular training, with little loss in performance. The theory in [Zhao et al. ’23] is not directly comparable to ours; in fact, it is quite complementary. Their theory studies linear networks with orthogonal weights (which are equivalent to linear diagonal networks because the modes evolve separately) in standard initialization scale regimes. In contrast, in our theory we study nonlinear networks with diagonal weights in small initialization scale regimes. * A recent ICML paper called “On the stepwise nature of self-supervised learning” analyzes the gradient flow dynamics of a simple model and an SSL objective. They reach a similar conclusion: progress is made in discrete steps, with the rank of the learned embeddings increasing gradually. The analysis conducted in that paper differs from ours in that they use a linear model as a theoretical testbed, whereas we consider a nonlinear model. * Limitations. We agree that our theory has these limitations. 
But, for completeness, we recall that our experiments on ViTs and GPT-2 are a contribution, and these do not depend on diagonal weights and small initialization. * References * Zhao, Jiawei, et al. "InRank: Incremental Low-Rank Learning." arXiv preprint arXiv:2306.11250 (2023). * Simon, James B., et al. "On the stepwise nature of self-supervised learning." arXiv preprint arXiv:2303.15438 (2023).
Summary: In this paper, the authors study the learning dynamics of transformers and argue that the difference between the weights and their initial values increases in rank as training progresses. Under small-initialization, smoothness, non-degeneracy, convergence, and robustness assumptions, the authors prove the incremental dynamics in diagonal-weighted transformers. Empirical results on toy models and ViTs show similar rank-progression dynamics in the learned perturbations. Strengths: The authors make an interesting observation that the learned perturbations in transformers are of low rank and exhibit rank-increasing dynamics. Theoretical analysis is given under a simplified setting and the empirical results seem to support the observation. I do think this is a topic worth exploring, as the findings could support the recently popular low-rank fine-tuning studies and potentially help identify more efficient low-rank training methods through better understanding of the learning dynamics. Weaknesses: * The assumptions are very strong, such that it is not clear if the theoretical results have real implications in practical cases. Not only does the theory hold only for attention-only, diagonal-weighted transformers, it also needs strong assumptions on convergence and robustness. * The presentation of the paper could be improved. Sections 4.1.1 and 4.1.2 are somewhat confusing and do not help in understanding the reasoning behind the rank-increasing dynamics. In addition, the authors use many vague expressions like "very small", "non-negligible", "good approximation", without providing quantitative reasoning like error/magnitude bounds. * The dynamics in (3) seem to hold only for gradient descent training, which is hardly used in practice. It's not clear if the analysis holds for optimizers like SGD/Adam. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: * Why are the non-toy experiments conducted only on ViT? 
Is there any reason to choose only vision transformers and not regular NLP transformers? * How small is a "small initialization" in practice? * The dynamics in (3) seem to hold only for gradient descent training; how do they apply to other optimizers like SGD/Adam? * The learning-dynamics theory in this paper is proved in the continuous setting (learning rate $\rightarrow 0$, learning steps $\rightarrow\infty$). Real discretized GD training is just a forward finite-difference approximation of the dynamics (3). Under what conditions does the result hold in the latter case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors discussed the limitations that the theory requires diagonal weights and small initialization. However, it is not discussed whether the dynamics in (3) cover optimizers used in practical cases. I don't think this paper would have potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We answer all your questions and address all your comments on weaknesses below. We hope that our responses will be sufficient to clear up any concerns and confusion. We would be happy to answer any more questions if you have them. * Q: On NLP transformers * Thanks for the suggestion. We have **added experiments on NLP transformers** (GPT-2 trained on Wikitext) which show the exact same behavior; this is to be expected since the architecture is quite similar; see attached document. * Q: “How small is a ‘small initialization’ in practice?” * We observe gradual rank increase dynamics at the initialization scales used in standard practice (see our experiments on ViTs and on GPT-2). Furthermore, Figures 2 and 4 show that the gradual rank increase dynamics are even more pronounced if we take a smaller initialization scale, which is consistent with our theory. * For an indication that we might observe these dynamics at practical initialization scales, see Figures 6, 7, and 8. These figures show that in our toy model there is already some stage-wise learning behavior at initialization scale 0.1, which is quite a practical scale. We will emphasize this in the revision by moving Figure 6 to the main text. * Q: “Dynamic (3) seems to hold only for gradient descent training, how does it apply to other optimizers like SGD/Adam?” * For our theory, we analyze gradient flow training, which can be obtained as a limit of SGD or GD training with learning rate → 0 (see e.g., [Bach ’20]). Gradient flow is generally simpler to analyze than SGD, and it is a popular testbed for studying learning dynamics (see e.g., [Saxe et al. ‘13], [Arora et al. ’18], [Razin and Cohen ’20], to name just a few examples). Analysis of Adam or constant-step-size SGD is certainly an interesting question for future work. 
However, it is beyond the theoretical scope of this paper as (1) it would significantly increase the complexity of the analysis, which is already involved; and (2) our analysis of gradient flow is already a significant novel contribution in view of the existing literature. * Our experiments show that gradual rank increase dynamics hold with Adam training. For the revision we have **added experiments on SGD-trained transformers** which show the same behavior; see attached document. * Q: “The learning dynamics theories... the later case?” * There are several works exploring how to transfer guarantees from gradient flow to GD (see e.g., [De Sa et al. '22]). In our case, the loss function is smooth, so by Gronwall’s inequality our main theorem holds automatically for GD with sufficiently small step size. We will add a remark in the main text. * Weakness: “The assumptions are very strong....” * On diagonal weights: Diagonal linear networks have recently been used actively as a toy model from which useful insights can be drawn and for which rigorous results can potentially be derived. This line of work has been active for several years, primarily in NeurIPS, ICML and COLT (see the various references in the paper), and remains an active area with several open problems on some of the simplest models. Our result in this line of work reaches a new level by obtaining a formal result for a transformer-inspired model that maintains the softmax; this is an important component as diagonal networks with a single non-linearity are still far from being well understood theoretically. * On initialization scale: see our response to your questions above. * On assumptions 4.2, 4.3, 4.4: These assumptions indeed hold for our toy model (an attention mechanism with diagonal weights). Please see Appendix C for experimental verification of these assumptions in our toy model. 
* Weakness: “The presentation of the paper could be improved.” * Thank you for your feedback on the presentation of Sections 4.1.1 and 4.1.2. We were faced with a typical problem for theoretical papers: how to balance rigor and reader-friendliness in our proof sketch. There is a rigorous proof in Appendix A, so we decided to be less explicit on the error/magnitude bounds in the main text. We see now that this can be confusing for some readers. We will improve the presentation of Section 4 in the revision: * As suggested by Reviewer yLKb, we will move Figure 6 to the main text. This illustrates Theorem 4.5 more clearly than Figure 3. * As suggested by Reviewer yLKb, we will move some panels from Figures 9 and 10 to the main text, since these illustrate Assumptions 4.2, 4.3, and 4.4 and our experimental verification. * We will avoid phrases like “very small” and “negligible”, which referred to order $o_{\alpha}(1)$ in Lines 150, 170, 188, 191, 196. * We will improve the proof sketches in Sections 4.1.1 and 4.1.2. This means adding more substance to the proof sketch in Section 4.1.1, and also shortening the proof sketch in Section 4.1.2 by removing technicalities. Concretely, in Section 4.1.1, we will explain how we use the conservation law to understand when the approximation to the dynamics is valid, since this is an important element. * References * [Bach ’20] "Effortless optimization through gradient flows." Machine Learning Research Blog. https://francisbach.com/gradient-flows (2020). * [Saxe et al. ’13] "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013). * [Arora et al. ’18] "On the optimization of deep networks: Implicit acceleration by overparameterization." International Conference on Machine Learning. PMLR, 2018. * [Razin and Cohen ’20] "Implicit regularization in deep learning may not be explainable by norms." Advances in Neural Information Processing Systems 33 (2020): 21174-21187. 
* [De Sa et al. ’22] "From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent." Advances in Neural Information Processing Systems 35 (2022): 30963-30976. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. That does clear up some of my confusion. I have increased my score accordingly. However, I still think the paper has a lot of room for improvement in its presentation. Overall I am still inclined to reject but won't be upset if it's accepted with major refactoring. --- Reply to Comment 1.1.1: Comment: Thank you for adjusting your score. In the revision, we will implement Reviewer yLKb's suggestions for improving the presentation. We will also implement the presentation changes (mostly in Section 4) that we promised in our response to you above. Apart from these, are there any other places where you would suggest changes to the presentation?
Summary: The article "Transformers learn through gradual rank increase" considers the training dynamics of neural network models with an attention mechanism. The authors relate the training dynamics to a particular type of gradient flow. They show, under 3 important assumptions: i. diagonal weight matrices ii. initialization is small iii. only one coordinate is in the "active" regime at a time, that the dynamics occur in discrete stages: (1) during most of each stage, the loss plateaus because the weights remain close to a saddle point; (2) at the end, the saddle point is quickly escaped and the rank of the weights increases by at most one. The developed theory is illustrated with a series of experiments. Strengths: 1. Nice and clean results that show us the dynamics of neural networks with an attention mechanism 2. Well connected with the previous research on training dynamics 3. Paper is well written Weaknesses: Assumptions are quite restrictive. It would be nice to relax the assumptions, especially assumption #3. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the paper more about attention in general than specifically about transformers? 2. What will happen if some weights activate simultaneously? How would the theory be affected? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review of our paper and your questions about the theory. We are glad that you found the results interesting and found our paper to be well-written. Please find below our response to your questions: * Q1: “Is the paper more about attention in general than specifically about transformers?” * Our paper has two novel contributions: an experimental contribution and a theoretical contribution. * The experimental contribution is that we observe that the difference between trained and initial weights grows in rank. The experiments are conducted on ViT and GPT-2 transformers (which have attention layers inside). So the experiments are about transformers in practice. * Our theory on the training dynamics applies generally to nonlinear networks with diagonal parameterization. Transformers with diagonal weights are the important special case on which we focus. Our theory applies to these transformers if the attention layers are the only trainable parameters (see Example 3.2 in our paper for details). So the theory applies to transformers of any depth, where only the attention layers are being trained. * Q2: “What will happen if some weights activate simultaneously? How would the theory be affected?” * If more than one weight can activate simultaneously, then it becomes more burdensome to write down the dynamics in Algorithm 1. Nevertheless, we believe that we might be able to prove the multiple-weight-activation dynamics are valid if we also modify Assumption 4.4 appropriately. Thankfully, multiple weights do not seem to activate at exactly the same time in practice (see our experiments in Appendix C), so this non-degeneracy assumption seems to be a valid simplifying assumption. * Weakness: “Assumptions are quite restrictive. 
It would be nice to relax the assumptions, especially assumption #3.” * We wholeheartedly agree that it would be nice to relax the assumptions 4.2, 4.3, and 4.4, but we also recall that they are validated on our toy model in Appendix C. Assumption 4.2 is mainly the assumption that two weights do not activate simultaneously, so see our answer above for how we believe this could be relaxed. We do not know how to relax Assumptions 4.3 and 4.4, but strong assumptions along these lines seem inevitable because of the generality of the theorem. In the revision, following the suggestion of reviewer yLKb, we will move some figures from Appendix C to the main text because these help illustrate the assumptions and our experimental verifications of them on the toy attention model. --- Rebuttal Comment 1.1: Title: Rebuttal answer Comment: Thanks for answering my questions. I tend to keep my score as my thoughts on the paper didn't change after reading other reviews and your answers.
Rebuttal 1: Rebuttal: We thank the reviewers for their generally positive evaluations and for their helpful feedback that has helped us improve the paper in the revision. As suggested by the reviewers, we have **added new experiments**: * on NLP transformers (GPT2 trained on Wikitext) * and SGD-trained transformers (ViTs on CIFAR-10/100). See attached file. These experiments further confirm the observations of the paper. We have responded to all reviewers’ questions individually, and we are happy to respond to any other questions they may have. Pdf: /pdf/540de413450e8193364ac14afb67716b1f92d66c.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Diffusion Self-Guidance for Controllable Image Generation
Accept (poster)
Summary: The paper introduces a new method for detailed and versatile image editing and controlled image synthesis with large-scale text-to-image diffusion models. In particular, the authors propose self-guidance: Self-guidance uses the internal representations of the denoiser neural network of diffusion models for guidance. The work finds that the attention and feature maps of the denoiser network encode properties like objects' size, shape, position and appearance in an interpretable way. That is, some heuristics on the internal attention/feature maps can be defined that represent these properties well, and then we can do guidance leveraging these internal representations (note that this works only for objects and words that were mentioned in the text prompts driving generation). Hence, given an image (either generated by the diffusion model itself or a real one, reconstructed by the diffusion model), the novel self-guidance technique enables editing operations such as freely modifying objects' size, shape, position and appearance, without introducing noticeable artifacts. Importantly, this does not require any additional training or use of auxiliary models, and no additional paired data for training, which makes this a very powerful technique. The authors convincingly validate their proposed editing method mostly qualitatively by showing lots of interesting editing examples. Strengths: Generally, the main strength of the paper is the introduction of the novel self-guidance technique, which it then uses for detailed and advanced image editing and controllable image synthesis. To the best of my knowledge, no previous image editing methods with text-to-image diffusion models reach this level of fine control with respect to objects' shape, size, position, appearance, layout, and composition, while introducing almost no artifacts. **Clarity:** The paper is very well written and easy to follow and understand. It is very clear. 
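To make the guidance mechanism summarized above concrete, here is a deliberately tiny, hedged stand-in (our own construction, not the paper's denoiser-based implementation): sampling follows the model's score plus the gradient of a user-defined energy on a sample property. Below, an exact 2D Gaussian score and Langevin dynamics replace the diffusion model, and the first coordinate stands in for an internal property such as an object centroid.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Score of N(0, I): grad log p(x) = -x (stand-in for the learned denoiser score)
    return -x

def guidance_grad(x, target=2.0, weight=4.0):
    # Negative gradient of the energy g(x) = 0.5 * weight * (x0 - target)^2,
    # which pulls the "property" x0 toward the target value.
    g = np.zeros_like(x)
    g[:, 0] = -weight * (x[:, 0] - target)
    return g

x = rng.normal(size=(500, 2))            # a batch of 500 sampling chains
step = 0.01
for _ in range(3000):
    drift = score(x) + guidance_grad(x)  # model score + guidance term
    x = x + step * drift + np.sqrt(2 * step) * rng.normal(size=x.shape)

# Guided stationary mean of x0 is weight*target/(1+weight) = 8/5 = 1.6,
# i.e., the samples are pulled toward the target without leaving the prior entirely.
print(round(x[:, 0].mean(), 2))
```

The composability noted in the summary follows directly from this form: guidance gradients for several energies simply add in the drift term.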
**Originality:** The main idea of self-guidance leveraging the model's internal representations is novel and original. That said, it is closely related to, and probably inspired by, existing techniques such as Prompt-to-Prompt editing (Hertz et al., "Prompt-to-Prompt Image Editing with Cross Attention Control") and paint-with-words (Balaji et al., "eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers"). **Significance:** Controllable image generation and image editing with large-scale text-to-image diffusion models is an important direction with many relevant applications. The paper proposes a very powerful and versatile method with few requirements, which makes this a significant contribution. **Quality:** The overall quality of the paper is high. It is clear, well written, all claims are supported, limitations and societal impacts are appropriately discussed, background is provided, and related work is thoroughly discussed. Weaknesses: While the paper has few major issues, there are some concerns: - Editing and control is limited with respect to objects (and words more generally) that appear in the text prompt. This also means that given real images require an appropriate caption to be editable. Note that this limitation is acknowledged by the authors. - The method comes with a ton of hyperparameters: The guidance weights for the different edits need to be chosen manually, and it seems like some trial and error can be necessary. Also, more detailed ablations on this would be helpful, like showing different guidance weights for the same image/seed/prompt/edit, etc. - When performing a simple edit like moving a single object, we actually require a lot more guidance terms for keeping all the other objects in place and preserving their appearance. This means that even edits that may appear simple are actually quite complicated and require guidance weights for all the different terms. 
- The way the different edits and controls are defined is largely heuristic (how size, shape, appearance, etc. is calculated from the attention/feature maps). It seems to work well, but in theory it would be great if there was a more principled approach to this. - The way the cross-attention maps are leveraged in self-guidance is related to the "paint-with-words" technique introduced in eDiff-I (Balaji et al. "eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers"). I believe this should be acknowledged and cited. - The approach is validated only in pixel-space text-to-image diffusion models. Can the same method also work in latent diffusion models? The most popular, and publicly available, large-scale text-to-image model is Stable Diffusion. It would be great to see the method also applied there, if possible. In conclusion, there are concerns, weaknesses and routes for further improving the paper. However, the strengths outweigh these weaknesses. Consequently, I am leaning towards suggesting acceptance of the paper. I am looking forward to the rebuttal by the authors. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I also have a variety of questions that did not significantly impact the rating of the paper: - I believe the paper intrinsically relies on a typical U-Net architecture for the denoiser network that gives rise to the useful attention and feature maps. There are various recent works that use pure vision transformer denoiser networks or advanced architectures like recurrent interface networks. Can the approach be extended to these architectures, too? I would be interested in the authors' thoughts on this. - In a similar spirit, do the authors believe that similar techniques can also be used in diffusion models that model video, 3D, audio, graph/molecule data, etc.? 
- While centroid, size and shape leverage all cross attention maps, the appearance term uses only the activations of the penultimate layer in the decoder as well as the final cross-attention operation. Why is that? Can the authors provide more intuitions for the qualitative differences in how the attention/activations are used in the different operations? - The paper entirely leverages DDPM-style sampling. Does the method also work with deterministic probability flow/DDIM sampling? Moreover, when using given real images, could there be an alternative way of editing when probability flow ODE/DDIM is used to deterministically encode a given image and then re-generate again while introducing the desired edits? I would be curious about the authors' perspective on this. Very Minor nitpick: - Please properly introduce the data $x$ as well as the noise schedule $\alpha_t$, $\sigma_t$ around equation 1. - I would suggest to properly introduce and define the indices in Line 155 and below, this is, what exactly is $i$, etc. to avoid any potential confusion. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations and potential negative societal impact have been appropriately discussed and are well addressed. There are no concerns in that regard. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for the in-depth reading of our paper and for the helpful comments and suggestions. **Limitation to entities mentioned in prompt:** Yes, this is indeed a primary limitation of self-guidance as presented. However, other forms of self-guidance could help in situations where a text description is not available. For example, to move objects, one can select a point on the canvas to extract the feature vector at that location and guide that feature to appear at another location instead; one could also extract a soft segmentation map by finding feature vectors (which have been shown in concurrent work to be meaningful, e.g. [1,2]) that are epsilon-close to the “clicked” feature, rather than by going through the cross-attention map, and using those as self-guidance targets (e.g. to resize objects or change their appearance). We leave fuller exploration of these ideas to future work, though preliminary results indicate the direction is promising. **Sensitivity to hyperparameters:** We agree that more discussion on sensitivity to hyper-parameters is valuable. As also discussed in responses to reviewers EBx5 and jH2E, we are releasing an open-source implementation of the model so that the community can explore these aspects, but we also find that self-guidance is not highly sensitive to choices of weights, etc. and that once good settings for hyperparameters are found, they can remain more-or-less fixed across images. We also plan on adding a figure demonstrating the effect of varying these axes to the final manuscript, and included a figure in the global response to the reviews visualizing this. **Self-guidance terms:** We agree that further exploration of self-guidance and a more theoretically grounded approach is an exciting future direction. Though the terms we propose are indeed “heuristic”, they proved effective for the desired edits on the models we explored. 
The mechanistic role of attention interactions in large models remains an open research question and we see self-guidance as another tool that can expose unexpected behaviors in these models, hopefully leading to a better understanding of their internals. For example, in Figure 9 (c,d), we find hints that – somewhat counterintuitively – spatial attention patterns themselves may somehow be used to encode desired appearance information of objects. **Pixel-space vs latent-space models:** We do find that self-guidance generalizes to Stable Diffusion and are releasing an open-source implementation of such, please see global comment for more detail. **Self-guidance across architectures and modalities:** The core idea behind self-guidance is that powerful generative models learn interesting internal representations, and we can use these representations to guide sampling. This does not rely on attention, but does rely on a method for identifying useful features from internals and a way to guide sampling. We are hopeful that similar techniques can be useful for other modalities. **Appearance term:** This is a fair point. Empirically, we found that guiding other features in the UNet did not control appearance (at least as perceived by humans) as much as the features toward the very end of the model, nearest to the output head. Attention maps at that stage are also higher-resolution with sharper edges. This motivated the formulation of a single appearance term rather than a bundle throughout the depth of the network. It is possible that we could effectively control other attributes by only guiding one attention layer as well, and this may allow a modest speedup at inference time. **Other sampling methods:** We found self-guidance to work somewhat worse with DDIM. We hypothesize this is due to the higher stochasticity of DDPM being less sensitive to going off-manifold or perhaps due to the inaccuracy intrinsic to the reverse diffusion sampler [3]. 
That being said, in the Stable Diffusion implementation, we are able to use other schedulers such as the Euler discrete sampler [4] with success. Recent work has shown the ability to reconstruct images by inverting multiple layers of DDPM noise [5], and this may indeed be a promising avenue to further improve quality of real image editing. **Minor nitpicks:** Thanks for the attentive reading of the manuscript and we will incorporate these changes into its final edition, including citing eDiff-I and defining indices and variables in equations. * [1] Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence, Luo et al., 2023 * [2] Emergent Correspondence from Image Diffusion, Tang et al., 2023 * [3] Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC, ICML 2023 * [4] Elucidating the Design Space of Diffusion-Based Generative Models, Karras et al., NeurIPS 2022 * [5] An Edit Friendly DDPM Noise Space: Inversion and Manipulations, Huberman-Spiegelglas et al., 2023 --- Rebuttal Comment 1.1: Title: Thank you for rebuttal Comment: I would like to thank the authors for their rebuttal and for replying in detail to all my questions. I do not have any further questions. It's great to see that the method also seems to work in Stable Diffusion. The additional results in the pdf are also helpful and make me a bit less concerned about hyperparameters (although that's still a weakness). Overall, I believe this paper should be accepted. I raised my score by one point.
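The "click a point, use epsilon-close features" idea floated in the rebuttal thread above could be sketched roughly as follows. The function name, the cosine-similarity metric, and the soft-threshold form are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def soft_mask_from_click(feats, row, col, eps=0.3):
    """Soft mask of locations whose feature vector is epsilon-close
    (in cosine distance) to the feature at the clicked (row, col) location."""
    f = feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + 1e-8)
    sim = f @ f[row, col]                       # cosine similarity to the clicked feature
    # Map similarity in [1-eps, 1] linearly onto [0, 1]; everything below is 0.
    return np.clip((sim - (1.0 - eps)) / eps, 0.0, 1.0)

# Toy HxWxC feature map: left half and right half have distinct feature directions.
feats = np.zeros((4, 4, 3))
feats[:, :2, 0] = 1.0                           # "object" features
feats[:, 2:, 1] = 1.0                           # "background" features
mask = soft_mask_from_click(feats, 0, 0)        # click inside the object
print(np.round(mask, 1))
```

A mask obtained this way could then serve as the region selector in the self-guidance energies (e.g., to resize or restyle the clicked object) without requiring the object to be named in the prompt.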
Summary: Large-scale text-to-image generative methods have demonstrated impressive performance given a detailed text prompt. However, many aspects of an image are hard or impossible to specify with text. The authors proposed self-guidance to guide the generation of diffusion models. Experimental results demonstrated that the size, location, and appearance of objects can be controlled from normalized attention matrices and feature activations. Strengths: 1. The motivation and background are explained clearly. Arguments and claims are supported by references or experimental results. 2. The proposed approach is simple and effective. The presented empirical results on multiple tasks demonstrated the effectiveness of the approach. The authors claimed that all results are non-cherry-picked, which is quite impressive. 3. Adding geometric/compositional control to image generative models is an interesting and important research topic. The authors proposed a simple and smart method to control existing generative models with diffusion guidance. 4. Multiple tasks were considered, including adjusting individual properties, compositional generation, and real image editing. The proposed approach shows potential for a range of real-world applications. Weaknesses: 1. From the abstract, the authors aimed to add more control over image generation beyond text prompts. However, the proposed approach is not a good solution in this regard – it is strongly based on the tokens in the text prompt. There are two limitations: * Can we control fine-grained objects/parts not naturally present in the text prompt? For example, control the dog's mouth when generating "a photo of a dog wearing a baseball cap"? All current results are on the object level with no finer control. * The proposed approach is limited to objects/parts easily describable by text. This is a disadvantage compared to the keypoint inputs of DragGAN. 
For instance, annotating a keypoint would be more straightforward than inputting “left-front leg of the dog”, for both the user and the model. 2. Based on the framework, designing more controls seems difficult, e.g., the 3D viewpoint of objects. Compared to parallel works, ControlNet [2] finetuned the LDM model with 3D representations and DragGAN [1] used finer keypoint inputs to accomplish this. 3. The proposed method is built on existing large-scale image diffusion models. It heavily relies on the good representations learned by these trained models. The aforementioned limitations are mostly due to the latent representations having limited feature disentanglement or 3D awareness. The proposed approach doesn’t seem to have any easy fixes to address these limitations. References: 1. X. Pan et al. Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. 2. L. Zhang et al. Adding Conditional Control to Text-to-Image Diffusion Models. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. To see how entangled the representations are, can the authors generate a video or a dense sampling of intermediate images to interpolate the “control space”? For instance, changing the layout of the objects in Figure 5 over 16 frames, or changing the styles in Figure 4 over 16 frames? In this way we could see: (i) if a continuous control corresponds to smooth and continuous changes in the image space, and (ii) how the other objects are influenced by the controlled changes made to the target object. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. The input space of the proposed control is still limited by text: see weakness 1. 2. 
The proposed approach is hampered by entangled representations: see weaknesses 2 and 3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and feedback on our work. We respectfully disagree that self-guidance is not a “good solution” to “add more control over image generation beyond text prompts”. The edits that we perform, such as moving or resizing objects, are very challenging (if not impossible) to effect through text, especially if consistency with a source image is desired. Indeed, no previous method has shown this functionality. We acknowledge in the manuscript that self-guidance as we present it requires entities to be mentioned in the text prompt, but please also see the discussion under “Limitation to entities mentioned in prompt” in the response to reviewer qibF. Self-guidance enables a wide variety of image manipulations that – though they rely on text-image attention – *are not themselves feasible just through prompting with text*. Concurrent work such as DragGAN has already been implemented in diffusion models, in fact building on self-guidance [1], highlighting the strength and versatility of our approach. 3D viewpoint control is an interesting future direction, but we restrict our attention to 2D for this work. The fact that self-guidance relies on the strong representations learned by text-to-image models is a feature, not a bug – we show that many complex image manipulations require *no additional fine-tuning or supervision or auxiliary models* (which ControlNet does). We wish to highlight that many of the manipulations enabled by self-guidance have not previously been shown at all in the literature for diffusion models. [1] DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models, Mou et al., 2023 --- Rebuttal 2: Title: Final Rating Comment: The authors proposed a simple but effective approach for controllable image generation/editing. However, I believe there are two major limitations of this work: 1. 
Inconsistent editing (EBx5) or feature entanglement (hErC): a simple observation can be made from the qualitative examples that as we apply control to one object, other objects/background may change. This issue can be more severe as multiple controls are involved. 2. Control limited to existing entities in the prompt (qibF) and cannot achieve fine-grained control (e.g., parts of the object) (hErC). Regarding the first issue, the authors claimed that a superior approach can be adopted by sharing DDPM noise between the two images (as described in Supp. Mat. L27-29), and the second limitation can be solved by annotating keypoints on the image and then applying control. It should be noted that both approaches are not verified or carefully tested. Sharing DDPM noise may have been tested on a few examples, but a thorough test is necessary if this is claimed to address the consistency limitation. Using keypoints as a control is another highly risky idea, as the proposed approach may not naturally work or may require a lot of effort to adapt. I believe the two issues significantly limit how widely and effectively the proposed approach can be applied. A thorough analysis of these limitations is necessary in the main text of the paper, as understanding the strengths and weaknesses is crucial for a sound paper. Overall I think this work is limited in a few aspects, with direct or indirect comparison with parallel methods. All initial reviews lean towards acceptance, but I believe the current scores are a bit overrated given these concerns above. --- Rebuttal Comment 2.1: Title: Response to reviewer Comment: Thank you for the continued engagement to improve our work. We openly discuss the editing limitations you mention (Sec 5, Fig 9), and agree that further work is needed to improve our method in the editing context. In many controllable generation settings (e.g. 
layout, centroid, and size conditioning, zero-shot DreamBooth), these limitations do not apply, and regardless, we are not aware of any prior work that has achieved the same breadth of capabilities without supervision (e.g. moving objects, resizing them, copying appearance from a real image into a generated one). If you are aware of any other existing methods with the same functionality enabled by self-guidance that do not require training, we are happy to include them in our paper. We attempted to provide comparisons wherever possible to other work such as DreamBooth (though it requires fine-tuning) and PromptToPrompt. We agree that the methods we presented for sharing noise to reduce changes in not-controlled parts of the image and our proposal to use keypoints for control are not thoroughly tested, but they are also not core components of our method (all figures in the Supp. Mat., Fig. 4, Fig. 5, do not use shared noise). The limitation of only being able to control what is described with text can be addressed by expanding the text prompt to describe more of the image. This form of control isn’t much more challenging than annotating objects with keypoints (as DragonDiffusion does, citing our work). As we mentioned in the response to EBx5, “we find that as long as the specified constraints do not contradict each other, the effectiveness of self-guidance does not decay as more terms are added.” Could the reviewer please point to evidence that “the issue can be more severe as multiple controls are involved” beyond this? Thanks very much again for your thoughtful response and reading the paper in detail.
Summary: This paper introduces diffusion self-guidance, an inference-time technique for controllable image generation using pre-trained text-to-image diffusion models. The key finding is that the internal representations of the denoiser network carry meaningful information about the scene, and one can build custom energy functions around these representations to align image generation with user-defined scene properties, including object location, size, shape and appearance. Further, an appealing property of self-guidance is that the energy functions can be flexibly composed to support the simultaneous manipulation of multiple image attributes. Extensive experiments are conducted to demonstrate the effectiveness of self-guidance across a broad spectrum of image manipulation tasks. Strengths: - The proposed method enables controllable image generation / manipulation using existing text-to-image diffusion model checkpoints without costly fine-tuning. As a training-free method, it achieves controllability by steering the generation towards matching the desired internal representations at inference time. This guidance-based design is conceptually simple, easy to implement, computationally efficient and highly flexible. - The method represents object properties using aggregated statistics of denoiser representations. It thus reveals the internal workings of the generative process. To this end, the impact of the paper goes beyond controllable image generation / manipulation. - Self-guidance enables several edits that are not possible with concurrent methods. These include changes in object location and size and modification of scene layout. At the same time, self-guidance allows the composition of multiple edits in a single generation pass. This is another unique property that is rarely seen in previous and concurrent works. - Finally, self-guidance offers compelling editing capabilities and supports a wide range of image manipulation needs. 
Among them are the mixing of objects and layout from multiple source images, transfer of attributes defined by non-object words, and the editing of real images. Weaknesses: I do not have major concerns about the proposed method. Some minor concerns are as follows. - The main text presents self-guidance in its generic form and provides the ingredients for defining a specific energy function. However, all concrete examples are left to the supplement. I understand that space is limited and the authors want to make room for the results (which are indeed compelling). However, it is awkward to alternate between the main text and the supplement in order to decipher how those edits are actually achieved. One possibility is to organize all instantiations of self-guidance in a big table and put them in the main text for the ease of understanding. - I did not find ablation experiments on hyper-parameters such as the number of inference steps, guidance strength and the range of steps to apply guidance. In practice, the editing quality is often quite sensitive to these hyper-parameters. While their choices are briefly discussed in the supplement, some visual examples will help visualize their effects. - The experiments are conducted using Imagen, whose design is substantially different from the publicly available Stable Diffusion (cascaded vs. latent diffusion). It is a well-known fact that an editing method may not work equally well on different diffusion models. Given that most players in the space of controllable image generation rely on Stable Diffusion, I would encourage the authors to showcase a few editing examples using Stable Diffusion in order to calibrate the performance of self-guidance. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please find my comments in the section above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This paper includes a discussion on the limitations and potential societal impact of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read and review our paper! **Paper layout:** Thanks for the feedback on this. We entirely agree and have restructured the text so that the energy functions are in the main paper rather than the supplementary material. **Sensitivity to hyper-parameters:** We always apply self-guidance on all inference steps, and find that our method works on a wide range of steps – we experimented with 128, 256, 512, and 1024. Coarsely, we find that applying self-guidance only on early steps allows for control of layout but not appearance, and only on later steps allows for control of some appearance but not layout. That being said, we agree with the reviewer that visual demonstrations of these effects are valuable to provide a better understanding of the method’s strengths and weaknesses, and we have included this ablation figure in the global comment (and will also incorporate it into the final manuscript). Thank you for the suggestion. Please also see our response to reviewer EBx5 for more discussion. **Open-source Stable Diffusion implementation:** We agree that showcasing the method’s ability to generalize beyond one specific text-to-image diffusion model is valuable and are releasing an implementation on Stable Diffusion XL. Please see the PDF in the global comment for examples of self-guidance applied to Stable Diffusion XL. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The clarification on hyper-parameter selection is helpful. The results on Stable Diffusion also look promising, and I am looking forward to the open-source implementation.
Summary: This paper proposes a method for image editing using pretrained text-to-image diffusion models. The method guides the sampling process with energy functions that are added similarly to classifier guidance. These energy functions are computed as the difference between some object's property and the target state we want to change it to. A property is either an object's location, size, shape, or appearance. These properties are represented by the cross-attention maps between the object's word token and the image, which has previously been shown to produce meaningful masks. Multiple energy functions can be composed to achieve composite edits. Experiments qualitatively demonstrate the appeal of the method. Strengths: The method is practical since it does not require retraining while still achieving impressive results (as demonstrated qualitatively with some samples). The paper proposes a simple method with a clear and concise presentation. The addressed problem (image editing using pretrained diffusion models) is of interest to both academics and practitioners. Weaknesses: Quantitative evaluations of the method (for example with user studies that rank/compare different editing results) would make this work more insightful. Currently, it is unclear whether the shown examples are representative of what to expect when using this method and where the limitations lie. A potential limitation that could be exposed through quantitative studies is the number of energy functions that can be composed at once. Editing real images is possible as shown in Figure 7, but it isn't clear how well it works when compared to generated images. Furthermore, from some examples (e.g., Figure 8 a-b) it looks like the non-edited parts of the image also change and do not remain consistent with the original image, as is usually desirable in image editing settings. There are missing details that could help in reproducing and better understanding the results. See the questions part. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: What pretrained diffusion model is used in the experiments? Does this method perform differently for different diffusion models (Stable Diffusion vs. eDiff-I vs. Imagen vs. etc.)? I'm asking this because different diffusion models incorporate text conditioning differently (both architecturally and during training). How sensitive is the method w.r.t. hyperparameters like the attention layer, time step, and guidance strength? Is it necessary to tune the hyperparameters for every image individually? Maybe even for every energy function individually? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Many limitations regarding the method and potential negative societal impacts are discussed. Some that might be reasonable to also mention are listed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
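The property-matching guidance summarized in this review (an energy on a token's cross-attention map, differentiated to steer sampling) can be sketched minimally as follows; the centroid property, the function names, and the toy "latent" are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch

def token_centroid(attn):
    # attn: (H, W) cross-attention map for one text token, non-negative.
    # Returns the attention-weighted centroid in (row, col) coordinates.
    H, W = attn.shape
    w = attn / attn.sum()
    ys = torch.arange(H, dtype=attn.dtype).view(H, 1)
    xs = torch.arange(W, dtype=attn.dtype).view(1, W)
    return torch.stack([(w * ys).sum(), (w * xs).sum()])

def centroid_energy(attn, target):
    # Energy = distance between the token's attention centroid and the
    # user-specified target position; lower energy means the object sits
    # where the user asked.
    return torch.linalg.norm(token_centroid(attn) - target)

# Toy example: gradient of the energy w.r.t. a fake "latent" that
# produces the attention map (standing in for the diffusion state).
latent = torch.zeros(8, 8, requires_grad=True)
attn = torch.softmax(latent.flatten(), dim=0).view(8, 8)
energy = centroid_energy(attn, torch.tensor([2.0, 2.0]))
energy.backward()  # latent.grad points toward moving the centroid
```

In the actual method this gradient would be added to the sampling update, analogously to classifier guidance, and multiple such energies can be summed to compose edits.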
Rebuttal 1: Rebuttal: Thanks for your thorough review and thoughtful comments. **Open-source Stable Diffusion implementation:** To facilitate an even better understanding of the limitations of self-guidance, as well as address questions about whether shown examples are representative of the method’s abilities and whether our method generalizes beyond Imagen, we are releasing an implementation of self-guidance on Stable Diffusion XL. We have also attached examples of controlled generation using self-guidance on SD-XL in Fig. 2 of the PDF in the global comment. **Limitations of self-guidance:** We agree that strong numerical evaluation of controllable image generation approaches remains an open problem. We qualitatively show that our approach in quality is comparable to previous work where there is overlap in functionality (e.g. Figs. 3, 6) though it requires no fine-tuning. Additionally, our method enables many manipulations that are not possible with previous work – and we include a representative demonstration of the main shortcomings of our method in Fig. 9. We also show that many energy functions can be composed (Fig. 4) with terms from multiple source images on appearance and layout – we find that as long as the specified constraints do not contradict each other, the effectiveness of self-guidance does not decay as more terms are added. We also show additional non-cherry-picked examples of image manipulation in the Supplementary Material, highlighting the diversity of outputs the method is capable of producing. As for editing real images, we do find a slight decrease in faithfulness to the original, which seems to stem from the limitations of the simple appearance term in our reconstruction methodology (an average-pooled per-token vector). We thank the reviewer also for pointing out slight inconsistencies in the background of the edit in Figure 8 (a,b). 
In this case, a higher-quality result can be obtained by sharing DDPM noise between the two images (as described in Supp. Mat. L27-29), a technique we did not apply to results in that figure. **Sensitivity to hyper-parameters:** We use Imagen in all experiments shown in the paper, but our method's only assumption is that a cross-attention interaction between text and image tokens exists in the architecture (which is the case for all current SOTA text-to-image systems). We guide all attention layers and timesteps across all edits, without per-edit tuning. Each energy function has its own “default value” weight that works well for it, and this value is not dependent on the prompt or image at hand. When composing many energy functions, one may wish to slightly increase or decrease the strength of one term when compared to others, but we find that the range of per-term weights across images is small and that any value within the range works reliably. We find hyperparameter ranges by running an ad-hoc binary search once for each self-guidance term (e.g. centroid, size, …): too-large values cause visible artifacts in image quality while small values fail to cause the edit to take place. We include details on these hyperparameters in Supp. Mat. (e.g. line 53, 58) as well as a visual demonstration of their effects in the one-page PDF attached to the global response to all reviewers.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and detailed reading of our paper. Reviewers found our approach to controllable image generation “conceptually simple, easy to implement, computationally efficient and highly flexible” with “clear and concise presentation” in an “interesting and important research topic” with “many relevant applications”. Reviewers appreciated that, by guiding properties of the attention and features of large generative models, our method “does not require retraining while still achieving impressive results” and provides a level of control that “no previous image editing methods with text-to-image diffusion models reach”. A shared concern was the sensitivity of our method to hyperparameters, namely the guidance scale that effectively weights each self-guidance term differently. We include a figure (Fig. 1) in the PDF attached to this response to demonstrate the robustness of self-guidance to different choices of scale hyperparameters. Note that self-guidance performs reasonably for a wide range of values, but extremely small or large scales induce artifacts or fail to execute the edit. Some reviewers were curious whether our approach generalizes to latent-space text-image diffusion models such as Stable Diffusion. We plan on releasing an open-source implementation of self-guidance on Stable Diffusion XL. We are still working on this implementation, but have included preliminary results in Fig. 2 of the attached PDF, validating that our method indeed generalizes to other diffusion architectures. We are also sharing an anonymized Colab notebook containing the current state of the code used to generate this figure, and are working hard to implement the remaining pieces. Pdf: /pdf/9913ea6b606c6e3dbe03181af5ad3824ee62cbe2.pdf
NeurIPS_2023_submissions_huggingface
2023
Convolutional Visual Prompt for Robust Visual Perception
Accept (poster)
Summary: The paper addresses test-time adaptation by employing a learnable convolution operation named CVP on out-of-distribution (OOD) images. The motivation is to take advantage of the inductive bias imparted by convolution operations, with the added advantage of learning fewer parameters compared to existing approaches. CVP in conjunction with other SOTA approaches improves the performance on benchmark datasets. Strengths: 1. Using small convolution operations at the start for OOD test-time adaptation to take advantage of inherent inductive bias seems to be new. 2. The combination of self-supervision and convolution makes it both parameter- as well as label-efficient. 3. Main and ablation experiments are large-scale. Weaknesses: 1. Eqn. (1): When $y_{i,j}^s$ is 0, $i$ and $j$ are from different samples (negative pair). So, in the loss calculation negative pairs are not considered. Then how is it a contrastive formulation? 2. Line 111: It is said that $\mathcal{X}_s$ is the source domain data for training. If source domain data is used then it is not a test-time adaptation framework and comparing it with such works would not be fair. 3. Line 150: One of the important assumptions made in this paper is that the distribution shift in images is often visually structured. Why is this assumption right? What tells us that the distribution shift is 'visually structured'? 4. Line 180+: When describing the proposed model, it is stated that the logits before the fully connected layer of the backbone model are extracted for training the SSL model. This confuses me. Does it mean the conv operation is applied on the logits and not on the image? I think a figure describing the proposed approach would have been useful here. I don’t see such a figure and thus I have to presume things. Also, it is said that an MLP gets trained in addition. Is it used during inference?
Also, is this taken into account in the learnable parameter count, especially when compared with other related approaches like TENT or other traditional prompting methods? In this regard, Figure 6 in the supplementary is incomplete. What is the unit on the x-axis? Millions, billions, K? 5. Line 184: When extending the proposed approach to CLIP, why is only the vision encoder used for prompting? Why not the text encoder only? Was any ablation done? 6. Section 4.1: FT and PFT Baseline: Why is this restoring of the initial weights required? Is it standard in the literature? Any reference? 7. Line 206: What is a sharpness kernel? 8. Section 4.2, CVP Complements Other Test-Time Adaptation Methods: More details are required on what is meant by 'combining' CVP with these methods. Also, the results of this experiment (Table 4) are not convincing. Improvement is very marginal compared to TENT, in all three ImageNet variants. Compared to standalone CVP, CVP+x provides a huge error-rate drop - this means the major driving force behind the performance is x and not CVP. 9. Ablation study – Low-rank structure: More details are needed on how the low-rank prompts are created. Are they directly applied on the input? According to the original LoRA paper [24], low-rank adaptation is shown to be effective in transformer architectures. How is it adapted to convolutional architectures? As two low-rank matrices are involved (I'm supposing so, as details are missing), this evidently involves more learnable parameters and thus is susceptible to overfitting. So, the comparison to conv kernels is not fair. The corresponding figure in the Appendix (Fig. 9) does not have any descriptive caption. It’s better not to give such figures if everything about the figure has to be inferred by the reader. 10. Line 266-271 and Table 7: What is the 'standard' SSL task? Is it employing Eqn. (1)? Its difference with the 'contrastive' variation needs to be detailed.
While rotation or MAE were shown to perform worse compared to the standard SSL task (in Table 7), there is no detail of these two other SSL approaches. How many variations of rotation prediction were used? How much percentage of the images were masked in MAE? Is there any study by varying the mask percentage? How long are these other SSL tasks trained? Generally Contrastive and MAE tasks require a long training for better learning. Was this good practice followed? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Questions are all detailed with the associated weaknesses raised above (in ‘weaknesses’ section). Here, in addition, some minor typos are mentioned. - Line 156: Between ‘shifts’ and ‘In’, a stop is missing. - Line 174: ‘… method, We …’ -> ‘method, we’ - Line 175: Appendix number is missing - Line 178: ‘contains’ -> ‘containing’ - Line 200: ‘More baseline detailed are shown’ -> ‘More details about the baselines are provided’ - Line 204-205: ‘random initialize the kernel size k’ -> ‘randomly initialize the kernel of size k’ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not explicitly described. It may be useful to discuss the scope of the claims regarding the convolution operation in CVP handling structural bias in OOD. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Clarification for contrastive formulation** The loss indeed considers the negative pairs, which appear in the denominator. If $x_{i}$ and $x_{j}$ are a positive pair, $y_{i,j}$ is 1; we then maximize the similarity of the positive pair in the numerator and minimize the denominator, which is the similarity between all negative pairs $x_{i}$ and $x_{k}$, where $k \neq j$. When $y_{i,j}$ is zero, the whole term is 0 and does not affect the loss. **Clarification for test-time adaptation framework** As defined in [A], test-time adaptation runs optimization at test time on target-domain data, which does not prohibit a model pre-trained on source data. Test-time adaptation methods, such as TENT, TTT, and MEMO, all train their models using the source data. [A] Gao, Yunhe, et al. "Visual prompt tuning for test-time domain adaptation." arXiv preprint arXiv:2210.04831 (2022). **What tells us that the distribution shift is 'visually structured'?** Our visual world is known to be structured, and its changes are not random. For example, the shift introduced by blur is highly structured: you can tell there is blur in the image. This assumption works well according to our empirical results. We agree that our method may not handle arbitrary shifts that are not structured, but this is out of the scope of our paper and we leave it for future work. **Clarification Line 180+** - The conv. operation is applied to the image and only during the adaptation phase. We draw the flow of our proposed method in Fig. 1 of the PDF in the general response. - Yes, the MLP is used during inference, yet its weights are fixed and not counted as learnable parameters at inference. When compared with other approaches, Fig. 6 shows only the number of learnable parameters at inference time. - We have redrawn Fig. 6 and show it in PDF Fig. 4. The x-axis is in units of K.
**Line 184: When extending the proposed approach to CLIP** - We focus on the convolutional prompt, which is only applicable to the vision encoder. Combining text prompts with CVP will be future work. **Clarification: Section 4.1 FT and PFT Baseline** Yes. Standard test-time adaptation methods, such as MEMO, initialize or reset weights after a one-time adaptation. Because samples that arrive in batches at inference can be uncorrelated with each other, using the updated weights from prior batches would hurt performance. **Line 206: What is a sharpness kernel?** - Sharpness is a standard $n \times n$ matrix that enhances edges with a high-pass filter. For example, [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]. **Compared to other TTA baselines** Our method is orthogonal to BN [53] and TENT [60] and can be applied on top of them, improving those methods even more. Since our method updates only the input prompt without modifying the model weights, it works with frozen models and avoids the high cost of fine-tuning. **Details about low-rank prompts in ablation studies** 1. Pseudo-algorithm for LVP. We will add this to the supplementary. - SVD on input $X$ → get initial $U \Sigma V^T$ - Set rank number $r$ (e.g., 3) - For every adaptation iteration $t$: - Get reconstructed matrix $M_t = U_t \Sigma_t V^T_t$ - Apply low-rank SVD on $M_t$ with rank number $r$ → get low-rank $U'_t$, $\Sigma'_t$, $V'^T_t$ - Get new reconstructed matrix $M'_t = U'_t \Sigma'_t V'^T_t$ - Get low-rank $X$ → $X' = X + M'_t$ - Calculate contrastive loss $L_s$ on $X'$ by running inference with the SSL model - Get gradients for $U'_t$, $\Sigma'_t$, $V'^T_t$ and update the three matrices via SGD 2. Are they directly applied to the input? - Yes, the low-rank matrices are directly applied to the input $X$ 3. Clarification: How is it adapted to convolutional architectures? - Since our low-rank prompt is applied to the input, not the intermediate layers of the model, it can be applied to convolutional neural networks following prior literature [B].
- [B] Yang, Yuzhe, et al. "Me-net: Towards effective adversarial robustness with matrix estimation." arXiv preprint arXiv:1905.11971 (2019). 4. Compare low-rank prompts with CVP - We agree that low-rank prompts involve more learnable parameters due to the decomposition of matrices. However, compared with VP, CVP and low-rank prompts require only 1% and 9% of the parameters, respectively, making both methods lightweight. Therefore, we want to highlight that both of these lightweight prompts yield significant performance improvements. 5. We are sorry for the missing details. We have added a detailed explanation and full caption for Fig. 9 in our PDF Fig. 2. **Line 266-271 and Table 7, Questions about SSL tasks** 1. Sorry for the confusion. 'Standard' means the backbone model without adding SSL. In Table 7, the standard model is the ResNet50 pre-trained on ImageNet. We compare the adaptation performance of the three SSL tasks with this standard model. 2. Sorry for the typo. In Table 7, the reported results are the average robust accuracy, not the average error rate. As the subparagraph in the Ablation study mentioned, our CVP improves robust accuracy at every severity level on ImageNet-C when optimized with all three SSL tasks. We will add the details of the SSL tasks in our later revision. 3. We use four angles for our rotation prediction task: 0, 90, 180, and 270 degrees. 4. We mask 75% of the image patches in MAE, following the default setting in [C]. We will add more studies varying the mask percentage. - [C] He, et al. "Masked autoencoders are scalable vision learners." CVPR'22. 5. As our MLP model is small, the SSL tasks take only 1–2 hours to train. We train the SSL model only once, before adaptation. Once the SSL model has been trained, it is fixed, and we do not need to tune it at test time. The SSL module introduces little inference overhead in exchange for a large gain in robustness.
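The LVP pseudo-algorithm given in the rebuttal above (truncated SVD of the prompt matrix, added back to the input) can be sketched roughly as follows; the toy shapes are assumptions, and the SGD step on the SSL loss is omitted since the loss itself is left abstract in the rebuttal.

```python
import numpy as np

def low_rank_approx(M, r):
    # Truncated SVD: keep only the top-r singular components of M.
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]

def apply_low_rank_prompt(X, M, r):
    # Project the learned prompt M onto rank r, then add it to the input,
    # mirroring the "X' = X + M'_t" step of the pseudo-algorithm.
    return X + low_rank_approx(M, r)

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))          # toy input
M = rng.standard_normal((32, 32)) * 0.01   # learnable prompt (updated by SGD on the SSL loss)
X_prompted = apply_low_rank_prompt(X, M, r=3)
```

Each adaptation iteration would re-project the updated `M` to rank `r` before applying it, keeping the prompt's effective parameter count low.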
**Thanks for mentioning the minor typos; we will fix them in our later revision.** --- Rebuttal Comment 1.1: Comment: Dear reviewer QKSZ, We appreciate your reviews and comments. We hope our responses address your concerns. In our rebuttal, we have added all the missing details and drawn the flow of our proposed method (in the general response PDF file). Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
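The contrastive formulation clarified at the top of this rebuttal (positive pairs in the numerator, negative pairs in the denominator, with $y_{i,j}=0$ terms contributing only through the denominator) resembles a supervised InfoNCE-style loss. The following is a generic sketch under that assumption, not the paper's actual Eqn. (1); the temperature and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z, labels, tau=0.1):
    # z: (N, D) embeddings; labels[i] == labels[j] marks a positive pair.
    # For each anchor, positives enter the numerator while all other
    # samples form the denominator (a supervised InfoNCE-style loss).
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau                          # (N, N) similarity matrix
    N = sim.shape[0]
    self_mask = torch.eye(N, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.view(-1, 1) == labels.view(1, -1)) & ~self_mask
    # Negative pairs (y_ij = 0) appear only via the logsumexp denominator.
    pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    return (-(pos_log_prob.sum(1) / pos.sum(1).clamp(min=1))).mean()

torch.manual_seed(0)
z = torch.randn(4, 8)                   # two positive pairs: (0,1) and (2,3)
labels = torch.tensor([0, 0, 1, 1])
loss = pairwise_contrastive_loss(z, labels)  # non-negative scalar
```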
Summary: This paper proposes a novel label-free approach CVP for test-time adaptation on out-of-distribution data. The main idea is to use a convolutional kernel as the visual prompt. It captures the structure of the data distribution shift, and reduces the trainable parameters. Experiments show that CVP improves model robustness and complements existing weight-adaptation methods. Strengths: 1. Prompt tuning has become an important technique to adapt large visual models. This paper provides a simple solution and may be useful in various applications. 2. The paper is well motivated. Using convolutional structure to inject the inductive bias is simple yet effective. It reduces the required parameters, and shows significantly better performance than low-rank prompts. 3. The method does not require labeled data. 4. The method can be combined with other weight adaptation methods. It can also be generalized to multiple self-supervised objectives. The authors provide corresponding experiments with encouraging results. 5. The paper is well structured and nicely written. Empirical results are quite extensive. The attention visualization provides a clear insight. Weaknesses: 1. The corruption types considered in this paper seem to be restricted to low-level transformations. This explains why the convolutional kernel is suitable. I wonder if CVP would still work well for high-level distribution shift, such as style changes? Typo: Reference missing in Ln 175. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Ln 206, "starting from a sharpness kernel is effective". Could the authors elaborate on what a 'sharpness kernel' is and why it is effective? 2. The convolutional kernel applies to the whole image. Sometimes, the corruption may happen in some local regions, e.g., motion blur on foreground objects only. How do the authors comment on such situations? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have not discussed limitations. But I feel that the method is simple, and experimental analyze are extensive enough. It is helpful to discuss the effectiveness under other OOD types like open-class data, high-level style changes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for endorsing and recognizing our work as realistic, interesting, and well-studied.** **Would CVP still work well for high-level distribution shifts, such as style changes?** - This is a very good point. We do have experiments on style-change benchmarks such as ImageNet-Rendition, -Sketch, and -Adversarial. Our results show that CVP can improve robust accuracy on such high-level distribution shifts. **In L206, "starting from a sharpness kernel is effective". Could the authors elaborate on what a 'sharpness kernel' is and why it is effective?** - When adapting, the kernel must be initialized with fixed or random initialization. We use a sharpness kernel as the initial point for the fixed-initialization setting; it is an $n \times n$ matrix. It starts from specific values, which control the edge-enhancement effect by extracting the frequency information of inputs with a high-pass filter. When the kernel size is 3, we set up the sharpness kernel as [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]. - Similar to the convolutional kernel prompt, sharpening transforms an image using a 2D convolutional filter. It amplifies the high-frequency components of the image while filtering out the low-frequency components. **Can the convolutional kernel be applied to a partial region?** - This is an interesting point. We currently apply it only to the whole image. We will study this in future work. --- Rebuttal Comment 1.1: Comment: Dear reviewer XDjm, We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you. --- Rebuttal Comment 1.2: Comment: Dear Authors, Thanks for the response. I have carefully read that and the other reviews. The response generally solves my previous questions on high-level shift and the sharpness kernel.
The proposed method works on ImageNet-R/S/A according to the experiments, though I agree with Reviewer uZKT that the motivation of using convolution should be made clearer, especially in the case of high-level shift. E.g., does the 'de-corruption operator' work for style shift? Are 'those shifts often locally structured'? --- Reply to Comment 1.2.1: Comment: We thank reviewer XDjm for the responses. We want to provide more clarification and discussion. Are 'those shifts often locally structured'? / Does the 'de-corruption operator' work for style-shift data? As some literature [1, 2] suggests, part of the style shift is a change in local information, such as textures, and part of it is a change in global information, such as sketches where object shapes are preserved yet texture cues are missing. Our prior experimental results show that CVP improves on several style-shift OOD benchmarks, such as ImageNet-Rendition, Sketch, and Adversarial. For ImageNet-Rendition, the data contains several styles, such as art and cartoons, and is more related to local (texture) changes. For ImageNet-Sketch, the data contains sketches that preserve only object shapes and is more related to global changes. Therefore, a de-corruption operator like CVP can work on both local and global changes of style shifts. [1] Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." arXiv preprint arXiv:1811.12231 (2018). [2] Wang, Haohan, et al. "Learning robust global representations by penalizing local predictive power." Advances in Neural Information Processing Systems 32 (2019).
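For concreteness, the sharpness kernel discussed earlier in this thread (the fixed 3×3 high-pass initialization from the rebuttal) can be applied with a naive 2D convolution like the following sketch; this is an illustration of the kernel's effect, not the authors' code.

```python
import numpy as np

# The 3x3 sharpness kernel from the rebuttal; its weights sum to 1,
# so flat regions pass through unchanged while edges are enhanced.
SHARPEN = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=float)

def conv2d(img, kernel):
    # Naive "same"-size 2D convolution with zero padding (the kernel is
    # symmetric, so correlation and convolution coincide here).
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

img = np.full((5, 5), 2.0)   # a constant (edge-free) toy image
out = conv2d(img, SHARPEN)   # interior pixels are preserved exactly
```

Initializing the CVP kernel at this point, rather than randomly, starts adaptation from an operator that already extracts high-frequency structure.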
Summary: The paper proposes a new method for test-time adaptation (TTA) to structured distribution shifts. The proposed method learns convolutional visual prompts to prevent the model from overfitting to SSL objectives due to high-dimensional prompts. The results show that the proposed approach consistently yields performance improvements for TTA to OOD variants of CIFAR-10 and ImageNet. Strengths: - The idea of using convolutional prompts to avoid SSL overfitting is novel and well-motivated (Table 5 shows how SSL results in overfitting and Figure 3 shows the need for some form of inductive bias). - The results indicate consistent gains when CVP is added to baselines and other prior TTA works. - The method tunes very few parameters. The parameter efficiency allows it to be adopted in edge computing devices working with low memory and compute. Weaknesses: 1. The paper compares their method with only two types of visual prompts (patch- and padding-based [40]). The paper does report results with CLIP ViT-B (Table 3), but does not compare to other prompts used for adapting ViTs [26] and text-only prompts for CLIP-pretrained ViTs [A]. 2. The paper mentions on L63: “The only one that does not update the model (for Test time adaptation) is proposed by [40]”. However, the following uncited works also do not update the model [A, B]. 3. The paper does not show results with unimodal ViT architectures (e.g. pretrained models from TIMM). Similar gains over the baselines (and [26]) in the context of ViTs can boost the applicability of the approach. 4. The paper sets different hyperparameters for different datasets (range of $\lambda$, kernel sizes), without providing guidelines on how these are to be determined for a new dataset (or corruption type). 5. Missing details on evaluation and implementation (covered under Questions). Minor writing: $\lambda$ is used in 3.3 but only introduced much later in the text.
[A] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. Shu et al. [B] Test-Time Training with Masked Autoencoders. Yossi et al. [C] Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model. Xing et al. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I am happy to update my rating if the concerns around comparisons to prior work and hyperparameter selection are adequately addressed. But I request the authors to provide the missing details. 1. In Table 8, multiple choices are provided for kernel size and update iters, how are the chosen hyperparameters validated? 2. What are the hyperparameters used in Table 4/5/7, where the kernel size is not specified? 3. How is CVP applied to CLIP-ViTs? Is the image first convolved and then tokenized? 4. Which CLIP-ViT/32 model was used for the results in Table 3? [A] does report numbers on CLIP-ViT-base/16. 5. What architecture is used for the results in Table 7? 6. For the MAE baseline in Table 7: is the contrastive objective of CVP directly replaced by the MAE objective? But this introduces additional decoder parameters as well, right? It would be good to see a comparison of the number of parameters that are optimized in different SSL methods. 7. L9: “1% when compared to standard visual prompts”. How does this compare to the shallow prompts used in [26]? 8. Tables 9 and 12 show that CVP does not outperform baselines for fog/frost/snow corruptions. Also, Figure 3 shows CVP does not outperform low-rank prompts for snow corruption. Any intuition on why CVP does not perform well when weather conditions are changed? 9. In Table 15, are the reported losses on samples that were used to tune prompts? Reporting the loss on unseen samples may give a better idea of the extent of overfitting. 10. Since [A] is a prior TTA work for CLIP, would there be any benefits of dual text-visual prompt tuning (like [C]) with: CVP (your method) + [A] (NeurIPS’22)? 
[A] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. Shu et al. (NeurIPS'22) [B] Test-Time Training with Masked Autoencoders. Yossi et al. (NeurIPS'22) [C] Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model. Xing et al. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 4 excellent Limitations: The paper may need to add a limitation (and future work) section if the above concerns (applicability to uni-modal ViTs and hyperparameter tuning) can not be addressed in the current submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We appreciate the reviewer’s comments and suggestions. Thanks to the reviewer for recognizing the novelty of our work. We have answered and addressed the questions.** **Comparison with other prompts** - Thank you for your suggestion of comparing with other prompts. We have followed your suggestion and included an additional comparison with the shallow prompt [26] in the table below. - We note that [A] uses text prompts tailored to vision-language model adaptation. Our method is more general and can be applied to any visual perception model. - We run the experiment on CIFAR10-C and report robust accuracy. The shallow prompt does not improve under our setting because it requires more learnable parameters in the prompt, which leads to overfitting during adaptation.

| | s1 | s2 | s3 | s4 | s5 |
|----------------|-------|-------|-------|-------|-------|
| CLIP-ViT-b/32 | 58.58 | 48.45 | 40.12 | 33.38 | 27.51 |
| Shallow prompt [26] | 56.87 | 47.22 | 38.72 | 30.49 | 24.10 |
| CVP | 59.11 | 49.09 | 40.76 | 33.51 | 27.80 |

**Other test-time adaptation methods** Thanks to the reviewer for pointing out those works [A] [B]. We will definitely cite them in our revision. [A] Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. Shu et al. [B] Test-Time Training with Masked Autoencoders. Yossi et al. **ViT Results** - Thanks to the reviewer for suggesting this. We ran the experiment for CVP on the ViT-Base model, with results showing a similar gain in performance. - The following table shows our results for the ViT-Base model over 15 types of corruption under severity level 1 on CIFAR-10-C. The test accuracy of ViT-Base on CIFAR10 is 96.42\%. Compared to the accuracy (\%) before adaptation (standard) and using VP for adaptation, CVP achieves up to 2% better performance. More details of the ViT results can be found in Table 1 of the PDF in the general responses. 
| ViT-Base/32 | standard | VP | CVP |
|-------------------|:--------:|:--------:|:-------:|
| Averaged Acc. | 66.45 | 66.48 | **67.77** |

**The choice of hyper-parameter setting on kernel size, $\lambda$ range, and update iterations** - The $\lambda$ parameter controls the magnitude of the convolved output when combined with the residual input. We set the $\lambda$ range to [0.5, 3] and run test-time optimization to automatically find the optimal value, which does not require a validation set. - We use 3x3 kernels for CIFAR-10 and 5x5 for ImageNet. In general, a small kernel size is used to avoid overfitting; we increase the kernel size for larger images, such as ImageNet. - For the update iterations, as Table 17 shows, a larger number of iterations gives better performance; however, the training cost becomes higher. **Missing details on evaluation and implementation** - Thanks for mentioning it. We will add all of these details in our later revision. 1. How are the chosen hyperparameters validated in Table 8? - See the section above, "The choice of hyper-parameter setting." 2. What hyperparameters are used in Table 4/5/7, where the kernel size is not specified? - For Table 4, we set the kernel size for CIFAR-10-C as 3x3 and for ImageNet-C, R, S, and A as 5x5. The adaptation iterations are all set to 5. - For Table 5, same as the setting above: CIFAR-10-C uses 3x3 and the others use 5x5. The adaptation iterations are all set to 5. - For Table 7, we evaluate the performance of three SSL tasks on ImageNet-C. The standard model is ResNet50. For the CVP kernel, we use the same setting as above, 5x5, which is the optimal choice. 3. How is CVP applied to CLIP-ViTs? Is the image first convolved and then tokenized? - Sorry for the missing details. Yes, our CVP is applied before tokenization. CVP is applied to CLIP-ViTs by adding a convolutional kernel to the input sample and then iteratively obtaining an adapted sample as the new input. 
In our later revision, we will add more detail on how CVP is applied to CLIP. 4. Which CLIP-ViT/32 model was used for the results in Table 3? - Thanks for the careful check. The CLIP model we use for evaluation is CLIP-ViT-base/32. In our later revision, we will clarify our model CLIP-ViT/32 as CLIP-ViT-base/32. 5. What architecture is used for the results in Table 7? - The architecture we use for Table 7 is ResNet50 pre-trained on the original ImageNet. 6. Clarification for the MAE baseline - Yes, the contrastive objective of CVP is directly replaced by the MAE objective; we replace the contrastive loss with the reconstruction loss. Since the decoder for MAE is fixed, we do not optimize additional parameters at test-time adaptation. 7. L9: "1\% when compared to standard visual prompts". How does this compare to the shallow prompts used in [26]? - Thank you for your question. Compared to shallow prompts, the overhead is also less than 1\%. 8. Why does CVP not outperform the baseline on weather conditions? - While our CVP doesn't outperform the VP baseline on weather corruption, it outperforms the standard baseline results. - In Figure 2, we provide deeper insight into the contrastive loss distribution and empirically find that weather corruption (for example, snow) leads to a huge shift from the source domain even at low severity. Therefore, we conjecture that the weather corruption types over-damage the intrinsic structure and need more learnable parameters to restore it. 9. In Table 15, are the reported losses on samples used to tune prompts? - Yes, we report the losses for samples already tuned with the prompt. - We also ran the loss analysis on unseen test samples and show that CVP can avoid overfitting (shown in Fig. 3 of the PDF in the general responses). 10. Dual text-visual prompt tuning - Thank you for these suggestions. We feel that combining text prompts and CVP is an interesting idea. We will add it as future work. 
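The residual prompt described in the responses above (the input combined with a $\lambda$-scaled convolved output, re-applied iteratively) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single-channel toy input, the kernel initialization, and the value of $\lambda$ are assumptions, and the SSL loss used to update the kernel at test time is omitted.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2D convolution (implemented as cross-correlation,
    as is standard in deep learning) for a single-channel image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def apply_cvp(x, kernel, lam):
    """Adapted sample: residual input plus a lambda-scaled convolved output.
    This residual form is one reading of 'combined with the residual input'."""
    return x + lam * conv2d_same(x, kernel)

# Toy usage: a 3x3 kernel (the size used for CIFAR-10) on an 8x8 image.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3)) * 0.01  # small init, few parameters
adapted = apply_cvp(x, kernel, lam=0.5)
print(adapted.shape)
```

In the full method the kernel (and $\lambda$ in [0.5, 3]) would be updated for a few iterations per test batch by minimizing the self-supervised loss; only the handful of kernel weights are learnable, which is the parameter-efficiency argument made in the rebuttal.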
--- Rebuttal Comment 1.1: Comment: Dear reviewer HEpT, We appreciate your reviews and comments. We hope our responses address your concerns. In our rebuttal, we have added all the missing details and added experiments, including ViT results, shallow-prompt baselines on CLIP, and the loss analysis for unseen samples (in our PDF file). Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
Summary: The paper proposes a variant of Visual Prompt Tuning (VPT) where the prompt is applied to a penultimate layer of the encoder network and is the result of a convolution with a 3x3 or 5x5 kernel (essentially an added residual block, or residual adapter if you will). The added adapter is trained at test time by applying a contrastive loss on a test batch (so, in fact, it seems to be a variant of transductive learning, requiring a batch of test images to form a negative set for the contrastive loss; it is therefore not truly online and does not seem able to operate on a single test sample). The authors test their approach on a collection of OOD benchmarks, some synthetic and some real - the latter are in fact UDA / UDG benchmarks; however, UDG baselines are not compared. Small improvements are observed over naive baselines, though prior methods like [53] and [60] seem to deliver much better performance (Table 4), and the authors only show the benefit of their approach via ensembling / combining with the other methods. Strengths: - test-time adaptation, VPT, and handling domain shifts are important topics - some improvement is observed over simple baselines Weaknesses: - novelty: adding a convolutional residual adapter + contrastive loss for transductive learning (test-time adaptation on a batch) is hardly novel - comparisons to prior works: * table 4 shows [53] and [60] attain significantly better results than the proposed approach * relevant baselines / benchmarks seem to be missing (e.g. https://arxiv.org/pdf/2210.04831.pdf reports better results on ImageNet-C; the DomainNet benchmark could be added) * I would expect comparison to / on top of UDG baselines that solve the same problem * combining methods as in table 4 is fine, but other methods could also be combined; would it yield even higher results? 
- The supervised variant of the approach explored in the ablation needs to be compared to few-shot and in particular transductive few-shot methods; I would expect much higher results there. - What about online test-time tuning (from one sample)? It seems not possible due to the use of the contrastive loss. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Prompt tuning is a weaker variant of the broader family of PEFT methods; usually applied at the input level (the penultimate layer in this case), it does not have enough representational power to model more complex deviations, as opposed to the more popular LoRA or prefix tuning methods. Would the authors' approach be applicable to the more powerful PEFT methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: limitations not discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
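The reviewer's point that the contrastive objective requires a batch of test images (to supply negatives) can be illustrated with a generic InfoNCE loss. This is a hypothetical sketch, not the paper's exact objective: the embedding dimensions, temperature, and two-view setup are made-up placeholders.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over a batch: row i of z1 pairs with row i of z2 (positive),
    while the other rows of z2 act as negatives - which is why the loss
    needs a batch of samples and degenerates for a batch of one."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))                 # batch of 8 embeddings
z2 = z1 + 0.01 * rng.standard_normal((8, 16))     # a second, augmented view
loss = info_nce(z1, z2)
print(loss > 0)  # True: with negatives present the loss is positive
```

With a batch size of one, the (1, 1) logit matrix gives a softmax probability of 1 for the positive and the loss is identically zero, so nothing can be learned from a single online sample; this is the degenerate case behind the reviewer's "not truly online" concern.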
Rebuttal 1: Rebuttal: **Novelty of our work** - Our paper provides deep insights into what makes a good visual prompt design for test-time adaptation, which is an important problem. The key novelty of our work is this simple and effective convolutional visual prompt that addresses the overfitting challenge at test time. Prior work like the convolutional residual adapter adapts the model architecture with many parameters; it is designed for training-time optimization but can lead to overfitting in test-time adaptation. Our method only adds a small convolutional prompt on the inputs during inference time, which is both lightweight and avoids test-time overfitting. - Moreover, the other Reviewers XDjm and HEpT find our method to be novel and well-motivated. Reviewer uZKT also mentions this research direction would interest the visual prompt community. **Performance compared with other methods (BN [53] and TENT [60])** - Our method is orthogonal to BN [53] and TENT [60] and can be applied on top of them, improving those methods even more. Since our method updates only the input prompt without modifying the model weights, it works with frozen models and avoids the high cost of fine-tuning (especially on edge devices like smartphones). Our work provides a new direction for the robustness field via visual prompting at test time, which we think is worth publishing. **Comparison with relevant baselines such as [A]** - We thank the reviewer for suggesting this relevant paper, which we will cite. - We highlight the differences between our work and this paper as follows. 1.) Our CVP is created during the adaptation phase, and we don't need any source-domain samples to tune the prompt first. 2.) In contrast to [A], which tunes a collection of visual prompts and the classification head, we adapt every batch of samples using CVP, which is just a single convolutional kernel. Our method is simple, lightweight, and easy to use. - [A] Gao, Yunhe, et al. 
"Visual prompt tuning for test-time domain adaptation." arXiv preprint arXiv:2210.04831 (2022). **Comparison to / on top of UDG baselines that solve the same problem** - Standard UDG methods require training on multiple source domains and testing generalization on a different domain, which is not the ImageNet benchmark setup we evaluate on. - Our UDG scenario has only a single training domain, like ImageNet, and tests the model on multiple OOD domains. We thus only compare with UDG methods that conduct single-domain training, such as BN, TENT, and MEMO. Experimenting with UDG over multiple training domains is out of the scope of our paper. - We would be happy to include more baselines if the reviewer can point out other single-training-domain UDG work that we are missing. **Combining other methods** - Thank you for your question. We agree that since our method applies a prompt to the input, it can be easily combined with other methods that change the model weights. We believe our method can further improve the robustness of future work that adapts model weights. **Comparing the supervised variant of the approach with transductive few-shot** - We want to clarify that the results from the supervised variant are only meant to establish the best case and upper-bound the performance of our method. - Our goal is to focus on test-time adaptation without annotations, where we indeed compare with the transductive few-shot method (for example, MEMO) under the self-supervised setting (see Table 4). The supervised variant of the approach represents a baseline, which is not our main contribution. **Online test-time tuning with one sample** - Our convolutional visual prompt is general and can be applied to other SSL tasks that use one sample. In Table 7, we show our method also works with rotation prediction and MAE, which only need one sample, and achieves up to 3.14\% improvement in robustness. 
**Can CVP complement other parameter-efficient tuning (PEFT) methods?** - Yes, they are complementary. PEFT methods often update the model weights, while our CVP updates the input. Thus our method is orthogonal to PEFT and can be applied directly on top of PEFT to further improve model adaptation performance. We will add this to our future work. --- Rebuttal Comment 1.1: Comment: Dear reviewer zs9s, We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you. --- Rebuttal Comment 1.2: Title: post rebuttal Comment: Thank you for your response. Reading the rebuttal, I still have concerns: 1. Regarding the comparison with Gao et al. 2022 - stating the difference does not change the fact that it is a higher-performance baseline. If prompt pre-training is seen as an advantage, I urge the authors to show their method could be used to improve on top of the Gao et al. result by using prompt pre-training etc., or by adding CVP to the Gao et al. method and showing further improvement. 2. Regarding CVP only improving over other methods when combined with them - my concern was that other methods can also be combined, and standard VPT can also be added to other methods; all of those are additional baselines the authors need to show gains over to establish the importance of the CVP design. Also, in most cases, the combination gains of CVP with other methods seem rather small (Table 4), and I wonder if they are significant enough. 3. For UDG baselines - many UDG methods could be employed with single-domain pre-training. In fact, some are evaluated on source-target pairs as a standard benchmark - I think a comparison to some of those could be shown, but perhaps concern 2 above is more important. 4. 
for single sample vs transductive - if a single-sample method can perform better than a (transductive) contrastive approach, it should have been featured in all other experiments besides Table 7; if not, transductive baselines should have been used. From my experience, transductive inference (offline prediction for several test samples at once, allowing them to learn from each other without any label knowledge) always adds a significant (possibly even 5% or more) improvement to the results, and I think it is covering the improvement range observed in the current paper. In light of the above, I prefer to keep my original rating. --- Reply to Comment 1.2.1: Comment: Thanks for your reply. 1. Regarding the comparison with Gao et al. 2022 - We thank the reviewer for pointing out this work. However, we also note that this work is still unpublished and under review. Therefore, we will consider comparing with it in future work. 2. Regarding CVP only improving over other methods when combined with them - We are currently running the experiment of combining standard VPT with other TTA methods. We will report the numbers as soon as possible.
Rebuttal 1: Rebuttal: - We thank the reviewers for the constructive feedback and insightful questions. We are delighted that most reviewers like the well-motivated novel method of CVP and think it would interest the visual prompt community, the extensive experiments conducted across various large-scale OOD recognition tasks, and the clear writing. We address two common questions here. **Q1. Motivation of the convolutional visual prompt (CVP)** - Our paper systematically studies how to be robust under major natural distribution shifts, such as style, blurring, or lighting changes. Those shifts are often locally structured, which motivates us to use a convolutional kernel to handle locally structured data. Besides, traditional visual prompts (VP) often introduce too many learnable parameters, which causes overfitting. Therefore, we propose a new structured prompt, the CVP. Our extensive experimental results show that CVP outperforms traditional VP while being more efficient and lightweight. **Q2. Performance comparison with other test-time adaptation baselines** - Traditional TTA methods usually assume the model weights can be adjusted during test time. We want to emphasize that our method is orthogonal to those traditional TTA methods, such as TENT, MEMO, and BN: we consider a more challenging scenario and assume the model weights are frozen during inference time. We thus demonstrate how our method can be combined with traditional TTA. Combining them shows that our method complements theirs and improves results on several OOD benchmarks. We address the remaining questions in individual responses and include additional experiments and figures to support our responses. We will add all the details to our later revision. The figures and tables can be found in the PDF file. Thank you again for your efficient handling of our submission. Pdf: /pdf/62b3aa71d60e888b0fea9e28183efd44db66fc47.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: - The paper introduces Convolutional Visual Prompts (CVP), a novel methodology designed to increase model robustness when faced with Out-of-Distribution (OOD) data during test time. - The primary innovation of CVP is the use of convolutional structures as inductive biases for adapting to visual OOD instances during test time. - The experimental results show that these convolutional structures are efficient and effective at dealing with OOD corruptions. Strengths: - *Originality*: The paper proposes a novel method, the use of convolutional structures as inductive biases for visual prompt tuning on OOD data. This work would interest the visual prompt community. - *Quality*: Extensive experiments have been conducted across a variety of OOD recognition tasks, providing empirical support for the effectiveness of CVP. - *Clarity*: The paper is well-written and comprehensible, offering clear explanations of the problem, the proposed solution, and the obtained results. Weaknesses: The major concern of this work is the motivation behind the choice of convolutional structures: the paper does not provide a clear motivation or justification for choosing convolutional structures as the basis for the Convolutional Visual Prompts (CVP). While the paper mentions the effectiveness of convolutional operations for handling structured data with local motifs, a more explicit link between this principle and the task of OOD adaptation would be beneficial. For instance, the authors could hypothesize that the learned convolutional kernel might act as a de-corruption operator for specific corruptions, offering a possible reason for their choice (the point is raised by visual evidence like Figure 11, which seems to indicate the reason why CVP works). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What motivated the choice of convolutional structures over other possible structures for the CVP? 
- Could you discuss the wider usage of CVP when applied to settings other than test-time adaptation? - Could you also discuss the extendability of the CVP model, like the possibility/effectiveness of adding more kernels? I am not asking you to develop a new variant during the rebuttal, but indicating how followers can use the method to the best extent would enhance the significance of the work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper seems not to discuss its limitations, but as the authors answer the weaknesses and questions above, they might come up with more discussion of the work's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Motivation for using a convolutional visual prompt for test-time adaptation** Thank you for your question. We agree with the reviewer that our CVP can be viewed as a de-corruption operator for natural images. Our paper studies how to be robust under major natural distribution shifts, such as style, smoothness, or lighting changes. Those shifts are often locally structured, which motivates us to use convolution, which handles locally structured data. Empirically, our convolution directly undoes the corruption, as shown in Figure 11. Moreover, our empirical results show that our proposed convolutional visual prompts outperform standard visual prompts and low-rank visual prompts. We will make the input de-corruption point clearer in our revision. It will be interesting to explore how well CVP adapts to more global changes, such as shape change, which we leave for future work. **Discussion of the wider usage of CVP when applied to settings other than test-time adaptation** We thank the reviewer for pointing this out. We will add it to the discussion in the later revision. We list a few additional usages of CVP below: 1. Beyond unsupervised test-time adaptation, we can also optimize our convolutional visual prompt in a supervised manner when a few labeled samples are available in the target domain. Since our convolutional visual prompt has fewer parameters, applying it in this setting can reduce the training cost. 2. Continual learning can benefit from our lightweight CVP. Since the environment often changes incrementally, applying the structured CVP adaptation can potentially help continual adaptation without forgetting. 3. Models deployed on edge devices can also benefit from CVP. As such devices have limited memory and must handle multiple types of corruption, the parameter efficiency of CVP allows us to maintain individual prompts for each scenario without using too much memory. 
**Discussion of the extendability of the CVP model** Thank you for your question. Yes, our method can be extended by stacking deeper convolutional prompts. In addition, future work can study adding CVP to the latent representations and whether a hypernetwork can be used to generate the parameters of the CVP for meta-learning. In terms of applications, CVP can be used to adapt not only classification models but also generative models such as diffusion models. --- Rebuttal Comment 1.1: Comment: Dear reviewer uZKT, We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal. We hope to address all the potential issues during the discussion period. Thank you.
Affinity-Aware Graph Networks
Accept (poster)
Summary: This work proposes to incorporate affinity measures as features into message-passing networks (MPNNs) in order to enhance expressivity without notably enlarging the computational cost. The authors introduce three examples of random-walk-based affinity measures, i.e., effective resistance, hitting times, and commute times, and provide efficient computation and low-rank approximation for them. On multiple graph benchmarks, ranging from graph-level tasks to transductive node classification on large-scale graphs, the proposed affinity measures can remarkably improve the performance of MPNNs and reach state-of-the-art performance among models without privileged information. Strengths: 1. To the best of my knowledge, this work is the first to propose that incorporating affinity-based measures into MPNNs can improve the expressive power of MPNNs, going beyond the 1-WL algorithm. 2. This work also provides several techniques to improve the efficiency of the introduced affinity-based measures. 3. The authors show that MPNNs + affinity measures are strictly more powerful than the 1-WL algorithm via the example of regular graphs. 4. The empirical experiments demonstrate the effectiveness of the proposed methods on graph-level tasks and transductive node-level tasks. The experiment on large-scale graphs also shows the efficiency of the proposed technique. Weaknesses: 1. The paper does not have more thorough comparisons with other positional-encoding-enhanced graph neural networks, theoretically and empirically. For details, please see the question section. 2. The baselines in the empirical comparisons seem to have different numbers of parameters compared to the proposed method. It would be better to have a more detailed explanation of the choices of hyperparameters. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: (Loukas, 2020; Dwivedi et al., 2021) mention that encoding can enhance the ability of MPNNs to uniquely distinguish nodes and hence improve the expressive power of MPNNs. I am curious whether the affinity measures enhance MPNNs in a similar way, or whether they provide extra valuable information that non-affinity-based encodings cannot. In other words, if there exists a non-affinity-based encoding able to distinguish isomorphic graphs as well as the proposed affinity measures, such as subgraph counting (Bouritsas et al., 2022), will incorporating the proposed affinity measures into MPNNs enjoy extra advantages over such an encoding? Do graph-level tasks and node-level tasks lead to different conclusions in this regard? If possible, it would be perfect if you could provide more comparisons, theoretically/empirically, against several PE-enhanced MPNNs (Dwivedi et al., 2021; Zhang et al., 2021; Wang et al., 2022; Lim et al., 2022; Li et al., 2020; Bouritsas et al., 2022) and potentially the PEs for Graph Transformers (Zhang et al., 2023; Ma et al., 2023). - Loukas, A. (2020). How hard is it to distinguish graphs with graph neural networks? Adv. Neural Inf. Process. Syst. - Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., & Bresson, X. (2021). Graph Neural Networks with Learnable Structural and Positional Representations. Proc. Int. Conf. Learn. Representations. - Bouritsas, G., Frasca, F., Zafeiriou, S. P., & Bronstein, M. (2022). Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. - Zhang, Z., Cui, P., Pei, J., Wang, X., & Zhu, W. (2021). Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs. IEEE Transactions on Knowledge and Data Engineering. - Wang, H., Yin, H., Zhang, M., & Li, P. (2022). Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks. Proc. Int. Conf. Learn. 
Representations. - Lim, D., Robinson, J. D., Zhao, L., Smidt, T., Sra, S., Maron, H., & Jegelka, S. (2022). Sign and Basis Invariant Networks for Spectral Graph Representation Learning. ICLR 2022 Workshop on Geometrical and Topological Representation Learning. - Li, P., Wang, Y., Wang, H., & Leskovec, J. (2020). Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning. Adv. Neural Inf. Process. Syst. - Zhang, B., Luo, S., Wang, L., & He, D. (2023). Rethinking the Expressive Power of GNNs via Graph Biconnectivity. Proc. Int. Conf. Learn. Representations. - Ma, L., Lin, C., Lim, D., Romero-Soriano, A., K. Dokania, Coates, M., H.S. Torr, P., & Lim, S.-N. (2023). Graph Inductive Biases in Transformers without Message Passing. Proc. Int. Conf. Mach. Learn. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitations are provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. The paper does not have more thorough comparisons with other positional encoding enhanced graph neural networks... > If possible, it would be perfect if you could provide more comparisons... We note that we have already provided comparisons to the GSN model of Bouritsas et al. for ogbg-molhiv. Regarding the DE-GNN model of Li et al., we note that it is difficult to compare our method with theirs, as their paper does not handle graph prediction tasks. In fact, the DE-GNN paper is concerned with learning structural representations of small sets of nodes and notes: _“Our approaches may also help other tasks based on structural representation learning, such as graph-level classification/regression [14, 16, 23, 25, 30] and subgraph counting [59], which we leave for future study.”_ The fact that most experiments in our paper concern graph prediction makes a proper comparison with DE-GNN elusive. However, we have nevertheless attempted to compare our work to the key ideas of Li et al. and have included comparisons to distance-based features in Appendix D. Below, we provide some additional comparisons to PE-enhanced MPNNs: Dwivedi et al., 2021: On ogbg-molpcba, we note that the authors’ PNA-LSPE model (on 4 layers) obtains a test AP of 28.4 ± 0.2, while their GatedGCN-LSPE model obtains a test AP of 26.7 ± 0.2. We note that our best 4-layer, 8-layer, and 16-layer models are competitive with the former (within the margin of error) while surpassing the latter. > 2. The baselines in empirical comparisons seem to have different numbers of parameters compared to the proposed method. It will be better to have a more detailed explanation on the choices of hyperparameters. When we add scalar features (e.g., ER or HT), there are only 2-3 extra features added, so this does not appreciably increase the number of parameters. 
Even when we add the vector resistive embeddings in addition to the scalar features, we find that the number of parameters increases by less than 10% (e.g., for the PNA dataset, the comparable MPNN models with and without affinity measures have around 9.7M and 8.9M parameters, respectively, representing a 9% increase). Regarding the choice of hyperparameters, we find that using all affinity measures together (both ER and HT scalar features as well as vector resistive embeddings) generally works the best; however, on larger datasets for which GPU memory is a bottleneck, we stick to using just the scalar features. > Q. (Loukas, 2020; Dwivedi et al., 2021) mention that encoding can enhance the ability of MPNNs to uniquely distinguish nodes... Appendix D shows some experiments comparing affinity measures to other positional/structural encodings such as shortest path distance encodings and centrality encoding (which has been used in graph transformer architectures such as Graphormer (Ying et al. 2021)), providing evidence that affinity-based encodings can provide extra information that other encodings cannot. We further note that the experiments in Appendix D are on a collection of graph-level and node-level tasks, providing indication that affinity-based encodings can be beneficial in both settings. We nevertheless note that the question of whether affinity-based encodings can provide advantages via approaches other than feature augmentation is a scintillating one. While we have primarily incorporated affinity measures as features to an MPNN, one need not be limited by this route. Positional encodings (that allow nodes to be located/distinguished within a graph) have proved to be particularly important for non-message passing architectures such as Graph Transformers, which crucially rely on positional and structural encodings to make up for the loss of inductive bias arising from the separation of the computation graph from the input graph. 
A fascinating direction for future work would be to explore the use of affinity-based positional encodings in such architectures. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I will raise my score to 7. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you! We appreciate your taking the time to read through our rebuttal and adjust your score upwards.
Summary: The paper proposes a strategy to strengthen the node and edge features in order to enhance the expressiveness of a message passing neural network. In particular, the paper introduces a set of effective resistance (ER) features, including a node-level resistive embedding, which further yields two edge-level affinity measures: hitting time and commuting time/effective resistance. Additionally, the paper employs a constructive Johnson-Lindenstrauss transform to randomly project resistive embeddings to a lower-dimensional space, which is utilized to expedite the computation of hitting time and commuting time. Finally, the paper presents empirical evaluations of the proposed ER features. Strengths: 1. The paper is well-written: the motivation is clearly presented, and most sections are straightforward to follow. 2. The fast approximation for computing hitting time and commuting time is novel. The dimensionality of the resistive embedding is reduced without the requirement of specifying a set of anchor nodes. 3. The experiment on large graphs demonstrates that the computation cost of adding these features to an MPNN is manageable. Weaknesses: 1. The paper would benefit from providing stronger evidence to support certain claims made throughout the text. Please refer to the "Questions" section for details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would like to seek clarification from the authors regarding the connection between "1-WL" and "standard GNN" in the context of node classification, as demonstrated in the paper to support the claim made by Theorem 3.6. The provided example gives me the impression that a "standard GNN" represents each node using a spanning tree rooted at the node of interest (referred to as a computation tree) and distinguishes nodes based on the isomorphism of these spanning trees. 
To facilitate readers' understanding of this concept, I kindly request the authors to provide a specific example of the computation tree that a standard GNN employs to represent a node. It would also be helpful if the authors could provide some justification or reference regarding the connection between the node representation learned by standard GNNs and the utilization of spanning trees. 2. By utilizing the initial node feature as the one-hot encoding of the node's index, denoted as $\mathbf{h}_v^{(0)} = \mathbf{e}_v$, and employing the forward function $\mathbf{H}^{(t+1)} = \mathbf{A}\mathbf{H}^{(t)}$, we can observe that after 4 layers, the node embeddings generated by the standard GNN successfully differentiate the 3 color categories. Specifically, if we define a score $s_v = \langle \mathbf{h}_v^{(4)}, \mathbf{e}_v \rangle$, the value for nodes {1,2} is 15, for nodes {3,4,7,8} it is 17, and for nodes {5,6} it is 19. This observation contradicts the claim that "a standard GNN that is limited by the 1-WL test cannot distinguish any pair of nodes in a regular graph." Consequently, I have some follow-up questions based on this discrepancy. (1) What is the specific constraint on the input features to a standard GNN? (2) Will the benefit from resistive embedding become less evident as we increase the depth of an MPNN? 3. In Appendix D.2, a comparison is conducted between resistive embedding and shortest-path-distance embedding in terms of their ability to differentiate nodes within a cycle and the corresponding nodes in the path obtained by removing one edge from the cycle. The example effectively illustrates that node embeddings based on shortest-path-distance are insufficient in capturing the divergence that can be successfully captured by resistive embeddings. However, this raises a valid concern regarding real-life networks where edges may be lost due to noise. 
In such cases, will the strong ability of resistive embeddings to distinguish between graphs that are close in terms of edit distance potentially make them susceptible to overfitting the noise? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. I would like to seek clarification from the authors regarding the connection between... Indeed, the message passing mechanism of a GNN gives rise to a computation tree with depth given by the number of message passing steps. As an example, for the graph in Figure 1, if we use two rounds of message passing (i.e., 2 MPNN layers), the resulting node representation for node 1 would arise from the following depth-2 computation tree rooted at 1: ``` 1 ├── 2 │ ├── 1 │ ├── 3 │ └── 4 ├── 7 │ ├── 1 │ ├── 6 │ └── 8 └── 8 ├── 1 ├── 5 └── 7 ``` We would be happy to provide any further examples or clarification if needed during the discussion phase and can incorporate them in the camera-ready version. > 2. By utilizing the initial node feature as the one-hot encoding of the node's index... The standard setting for comparing GNNs to the 1-WL test is the one in which all nodes start with the same representation/color before the series of updates. Therefore, in the case of a regular graph, if one starts with the same representation for every node, updates (whether aggregation in a GNN or color/hashing updates in 1-WL) will leave every node with the same representation after every step. You note correctly that starting with one-hot encodings of the node index as initial node features will allow nodes to be distinguished by a GNN; however, in this case the appropriate comparison is the 1-WL procedure which _also starts with one-hot encodings as initial ‘colors’_, which allows all nodes to be trivially distinguished from each other. The fact that node indices can improve GNN expressivity has been well documented: even random node features (essentially a more scalable way of providing node indices, as one typically does not want the node feature dimensionality to scale with the number of nodes n, as one-hot encodings of nodes would do) improve GNN expressivity (see [1], [2]). Specifically, we answer your questions as follows: 1. 
In the “standard” setting, one starts with the same representation for every node (i.e., no use of distinguishing node features). However, one can also start with more general node representations/colorings, but in this case the 1-WL test being compared to must also use the same initial node colorings. 2. Increasing the depth of an MPNN too much can pose various general issues, such as vanishing gradient or oversquashing (see [3], [4], [5]). We expect these to still be the case when using resistive embeddings, so the benefit for large depths could be muted, as is the case more broadly for MPNNs. [1] Ryoma Sato. “A Survey on The Expressive Power of Graph Neural Networks.” [2] Ryoma Sato, Makoto Yamada, Hisashi Kashima. “Random Features Strengthen Graph Neural Networks.” [3] Uri Alon, Eran Yahav. “On the Bottleneck of Graph Neural Networks and its Practical Implications.” [4] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, Michael M. Bronstein. “Understanding over-squashing and bottlenecks on graphs via curvature.” [5] Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Lio', Michael Bronstein. “On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology.” > 3. In Appendix D.2, a comparison is conducted between resistive embedding and shortest-path-distance embedding... In a worst-case scenario, removing an edge can change the effective resistance between a given pair of nodes appreciably. However, this is not a phenomenon limited to effective resistance—even in the case of shortest path distance, adding/removing a single edge can change the distance between the endpoints by an arbitrarily large factor. Nevertheless, we observe that in **real-world networks**, effective resistance tends to be a fairly _robust_ measure. 
Note that removing an existing edge $e$ can never decrease any effective resistances, and for a connected graph, one can show that it can only increase the effective resistance between any pair of nodes by at most a factor of $1/(1-R_e)$. (Note that we necessarily have $R_e \leq 1$ for any existing edge of the graph, and as long as $e$ is not a bridge edge whose removal disconnects the graph, the inequality is strict.) Applying this fact to the underlying graph of the ogbn-arxiv dataset, we observe that 95% of the edges in the graph have $R_e < 0.5$, which implies that the removal of any such edge will increase the effective resistance between any pair of nodes by a factor of $< 2$ (this property does not, however, hold for shortest path distance). Similarly, it can be shown that adding an edge $(u,v)$ can only decrease the effective resistances by a factor of $1/(1+R_{uv})$. Thus, adding an edge between nearby nodes will also not affect the effective resistances much. --- Rebuttal Comment 1.1: Title: Thank you for rebuttal Comment: Thank you for your feedback. I appreciate your consideration in including the discussions we've had in the final revisions of the paper. The paper presents an intriguing and engaging topic, and based on its quality, I highly recommend its acceptance. I also want to acknowledge that I have upgraded my score to 7. Best of luck!
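The robustness bound discussed in this rebuttal (removing a non-bridge edge $e$ never decreases any effective resistance and inflates each one by at most a factor of $1/(1-R_e)$) can be checked numerically. The following sketch uses a hypothetical toy graph, not one from the paper:

```python
import itertools
import numpy as np

def effective_resistances(adj):
    """All-pairs effective resistances via the Laplacian pseudoinverse:
    R_uv = Lp[u,u] + Lp[v,v] - 2*Lp[u,v]."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Toy graph: a 4-cycle with one chord (0-2); every edge is a non-bridge.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    adj[u, v] = adj[v, u] = 1.0

R = effective_resistances(adj)
Re = R[0, 1]                   # resistance of the edge we remove (5/8 here)

adj2 = adj.copy()
adj2[0, 1] = adj2[1, 0] = 0.0  # remove the non-bridge edge (0, 1)
R2 = effective_resistances(adj2)

for u, v in itertools.combinations(range(4), 2):
    assert R2[u, v] >= R[u, v] - 1e-9             # removal never decreases R
    assert R2[u, v] <= R[u, v] / (1 - Re) + 1e-9  # inflation at most 1/(1 - R_e)
print(round(Re, 3))  # 0.625
```

The first assertion is Rayleigh monotonicity; the second is the $1/(1-R_e)$ bound from the rebuttal, which is tight here for the endpoints of the removed edge.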
Summary: This paper proposes to use affinity measures as additional features that can be incorporated in common standard MPNNs, and theoretically and empirically shows improvements in expressiveness and performance on some datasets without much loss of scalability. The main contributions are listed below. 1. Generality. The proposed approach can be adopted in standard MPNNs. 2. Expressiveness. Incorporating the proposed features can make standard MPNNs more powerful than the 1-WL test. 3. Scalability. The affinity measures used in this paper are based on random walks. By approximation, the features derived from the proposed measures can be incorporated with high scalability. 4. Performance. MPNNs equipped with the proposed additional features can improve performance or even rank first on some datasets. Strengths: 1. Inspired by effective resistances, this paper proposes HT, ER, and resistive embedding features that all help common MPNNs in both performance and expressiveness. As a feature-based approach, it can adapt to many standard MPNNs and does not need to specify any anchor or target nodes to capture both distance and connectivity information using general-purpose node and edge features. 2. Through approximation and fast computation of resistive embeddings and HT, the proposed features can scale up to large graphs in time and space complexity. 3. The experimental results are impressive. With relatively low computation cost, the proposed method achieves competitive performance on small datasets and higher performance on homogeneous/heterogeneous citation networks. Further, the best performance achieved on PCQM4M-v1 is impressive. 4. Both theoretical proofs and empirical evaluations are provided. Weaknesses: 1. The motivation needs to be clarified more clearly. What are the main drawbacks of the existing approaches, and which of them does the proposed one overcome? 
It’s said that the proposed approach works “without the need for specifying any anchor or target nodes”, and the scalability problem is mentioned in the experiments section. However, the advantages should be clarified further and organized better, since there are other feature-based works and other categories of works aiming at a similar goal. Some practical evaluation or comparison may help if possible. 2. To demonstrate the superiority and necessity of the proposed affinity feature-based approach, other similar approaches need to be compared. Similar to the proposed affinity-based features, approaches incorporating other additional features (e.g., random features or other specially designed features), and other methods that can improve GNNs’ expressiveness (e.g., the other two directions mentioned in the Expressivity paragraph in Sec. 2), should also be compared in scalability and performance. 3. Although, by approximation, the computation of random embeddings can be done in near-linear time and can scale to large datasets, it would be better to provide the actual runtime of the proposed and baseline methods. 4. Theorem 3.6 claims that MPNNs that make use of any one of the proposed features are strictly more powerful than the 1-WL test. However, the experiments supporting it are not enough. Experiments on ogbg-molhiv show that it outperforms substructure counts, but the reasons are not clear. Experiments on some (artificial) datasets (e.g., [1][2]) which are designed for testing expressiveness would be better. 5. Better-organized writing should be considered. The notations scattered across Sec. 3.1-3.3 bring some difficulty in reading. The different baselines, settings, and experimental results in each subsection of Sec. 5 are scattered, either in the tables or in the text, which may weaken the empirical demonstration effect of the experiments and cause some confusion. 
Some nouns or notations are introduced but not explained clearly, such as ER scalar features, the node or edge features used in experiments, e_v in Eq. (1), and random rotations. Introducing them in a centralized and unified manner may help. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It’s said in Sec. 5.4 that the ER embedding dimension becomes impractically high and the authors resort to using ER scalar features instead. What are the theoretical and empirical (experimental) differences between the two kinds of features? Can ER scalar features on small graphs achieve the same improvement as ER embeddings? If not, why? 2. Incorporating hitting time features, resistive embeddings, and effective resistance are all proven to be more powerful than 1-WL in Sec. 3.4. Why does using ER/HT in the experiments yield different results? Is there any suggestion about selecting which feature to use? 3. The performance of the proposed method does not surpass DGN in Table 2 and does not show advantages with more layers in Table 3. More explanation is expected, although the proposed method shows advantages in other aspects. 4. How the proposed features help in heterogeneous networks may be interesting. Possible explanations are expected. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. As said in W3, the actual runtime on large graphs may be much higher than that of (scalable) baseline methods. 2. As said in Q2, the experiments use different features (ER embedding/HT …), so suggestions about selecting features are needed. 3. When demonstrating the expressiveness of the proposed approach, experiments on datasets tailored for evaluating expressive power are expected. 
For instance, one can synthesize a dataset containing instances indistinguishable by 1-WL (e.g., [1][2]) to see if the proposed approach can distinguish them. [1] Sato, R., Yamada, M., & Kashima, H. (2021). Random Features Strengthen Graph Neural Networks. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 333-341. [2] Abboud, R., Ceylan, I. I., Grohe, M., et al. (2020). The Surprising Power of Graph Neural Networks with Random Node Initialization. arXiv preprint arXiv:2010.01179. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. The motivation needs... Features in GNNs are typically subject to a tradeoff between (a.) efficiency and (b.) high expressivity. For instance, random features or node index features address (a.) at the cost of (b.), while other features such as substructure counts help with (b.) but fall short in (a.), e.g., computation can be superquadratic in the number of nodes. Our work is _motivated by this tradeoff and addresses the challenge of bridging (a) and (b)_. We highlight that our approach is also general in nature, something often not true for structural features where one needs to identify the (domain-specific) structures of interest. > 2. To demonstrate... We note that the submission includes some comparisons to approaches you mention. For the PNA dataset, the results in Table 1 include MPNN baselines with (a) random features and (b) DGN features. Furthermore, for the molecular datasets (ogbg-molhiv, ogbg-molpcba), we have included comparisons to both the junction tree-based HIMP baseline (mentioned as [13] in Section 2) as well as the substructure-based GSN baseline ([6] in Section 2). We have additionally included random walk-based baselines such as GRWNN (see Table 2). > 3. Although by... Computation times for the ogbn-arxiv dataset are provided on page 8. We further highlight that the computation of effective resistances proceeds via computation of the embeddings, followed by a quick computation of L2-squared distances between pairs of embeddings (see Lemma 3.2). Without the use of approximate random embeddings, the computation would require an expensive matrix inversion that would be infeasible for the size of the dataset (169,343 nodes). > 4. Theorem 3.6 claims... Thank you for the suggestion. As mentioned in the global response to all reviewers, we have now included results on the synthetic datasets, TRIANGLE(N) and LSC(N) in [1]. 
Once again, the results (test ROC-AUC score, higher is better) are as follows (see global response for experimental details):

_TRIANGLE(N)_:
- GIN: 0.5
- GIN + rand feats: 0.924
- **GIN + res embeddings: 0.911**
- GCN: 0.5
- GCN + rand feats: 0.857
- **GCN + res embeddings: 0.879**

_LSC(N)_:
- GIN: 0.5
- GIN + rand feats: 0.847
- **GIN + res embeddings: 0.935**
- GCN: 0.5
- GCN + random features: 0.794
- **GCN + res embeddings: 0.873**

> 5. More well-organized writing... We will work on improving the organization and writing in the final version. > Q1. There are two potential sources of scalability bottlenecks: **computational complexity** and **GPU memory**. The impracticality referred to in Sec. 5.4 concerns the latter, not the former. Indeed, even on ogbn-arxiv, we are able to compute resistive embeddings without any problem and, in fact, use them to obtain ER scalar values (via Lemma 3.2) to **avoid computationally infeasible matrix inversions**. However, the high dimensionality of the embeddings makes them infeasible to use in GPU memory. We note that resistive embeddings are still richer than ER/HT scalar values (the latter can be computed from the former) and have observed empirical improvements on the datasets where we try them. In addition to Table 1 for the PNA dataset, which shows lift from using embeddings along with scalar features, we note that for ogbg-molhiv, using ER scalar features yields a Test % ROC-AUC of 77.75 ± 0.426, while using hitting time scalar features yields 76.56 ± 0.915. However, as reported in Table 2, additionally _using embeddings produces a much higher Test % ROC-AUC of 79.13 ± 0.358_. > Q2. While it is indeed true that any of the features (ER, HT, resistive embeddings) results in more power than 1-WL, it is not _a priori_ clear that including these features would necessarily give better empirical test accuracy. We nevertheless show that affinity measures do improve empirical performance to varying degrees. 
It should also be noted that we generally use the scalar features as _edge features_, while embeddings are incorporated as both _node features_ and _edge features_. This can result in differing test performance, as MPNNs use node and edge features differently. Regarding selection of features, we have generally found that _using both embeddings and scalar features (ER and HT) results in the best performance_. However, for very large graphs (thousands of nodes), we stick to using just scalar features due to GPU memory bottlenecks. > Q3. Approaches to improve expressivity and performance of GNNs fall into three categories, viz., feature augmentation, message passing modulation, and graph modification — our technique falls under the first approach, while DGN falls under the second. While our approach has been to augment affinity measures as features, one is not by any means limited to this approach — an exciting future direction would be to use affinity measures to modulate the message passing, which would provide a fairer comparison to DGN. In order to give some indication of an apples-to-apples comparison to DGN, in Table 1 we have provided a “DGN (features)” baseline, which (instead of modulating the message passing mechanism of the MPNN) incorporates the relevant features via feature augmentation. > Q4. Heterogeneous networks may be viewed as graphs with multiple node and/or edge types. One can apply our techniques based on affinity measures in a few different ways. One way is to consider all types of nodes/edges together and compute affinity measures on the combined graph in the usual way, then incorporating them in the desired message-passing architecture (this is, for instance, our approach for experimentation on ogbg-molhiv, which has multiple edge types arising from different bond types). Another way is to consider the various node or edge types separately and compute affinity measures for separate graphs in isolation. 
Furthermore, with either approach, one can optionally aggregate different edge types separately (e.g., using an architecture like R-GCN). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which partially addressed my concerns.
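The fast ER computation the rebuttal refers to (Lemma 3.2: effective resistances as squared L2 distances between resistive embeddings, compressed with a random Johnson-Lindenstrauss projection) can be sketched as follows. This is a generic sketch of the standard construction, not the paper's implementation; in particular, the dense pseudoinverse below is used only for clarity, whereas a near-linear-time pipeline would use a fast Laplacian solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def resistive_embeddings(adj, k=None):
    """Node embeddings whose pairwise squared L2 distances equal (k=None)
    or approximate (k set) the effective resistances R_uv."""
    n = adj.shape[0]
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if adj[u, v]]
    B = np.zeros((len(edges), n))          # edge-node incidence matrix
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    Lp = np.linalg.pinv(B.T @ B)           # Laplacian pseudoinverse (dense, for clarity)
    X = (B @ Lp).T                         # exact embedding: one row per node
    if k is None:
        return X
    Q = rng.standard_normal((k, len(edges))) / np.sqrt(k)
    return (Q @ X.T).T                     # JL-compressed to k dims per node

# Triangle graph: effective resistance between any two nodes is 2/3.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
Z = resistive_embeddings(adj)              # exact
print(round(float(np.sum((Z[0] - Z[1]) ** 2)), 6))  # 0.666667

Zk = resistive_embeddings(adj, k=2000)     # JL approximation
print(abs(float(np.sum((Zk[0] - Zk[1]) ** 2)) - 2 / 3) < 0.1)
```

The identity behind the exact case is $R_{uv} = \|B L^+ (e_u - e_v)\|^2$; the JL projection preserves these distances up to small relative error while shrinking the embedding dimension.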
Summary: In this paper, the authors present an MPNN using affinity measures as node and edge features. As the affinity measures, the authors propose effective resistance, hitting times, and resistive embeddings. The authors demonstrate the effectiveness of using the affinity measures through experiments on various datasets. Experimental results on large graph datasets demonstrate the good scalability of the scheme. Strengths: 1. I initially considered a comparison with shortest path distances (SPDs) necessary; however, the related description in Section D (in the supplementary material) is convincing. 2. The proposed scheme has low computational complexity and good scalability. 3. The paper is well-written and easy to understand. Weaknesses: It appears that the setting for the MPNN models used in the experiments is not fair. According to Table 6 in the supplementary material, the number of layers is set to 3 for all models. Do the authors consider this setting to be fair? Furthermore, the hidden size of each layer is set differently for each method. What is the basis for setting the hidden size for each method? For example, the hidden size for GAT is set to 64 while that for HT + ER (rand rot) is set to 512. Minor comment: - fom -> from (Line 112) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. Do the authors think that the experiments with the current hyper-parameter setting are fair? Q2. Could the authors explain on what basis they set the hyper-parameters for each method? Q3. Is it possible to improve performance by adding the proposed features to DGN? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not describe the potential negative societal impact of their work, though they do present future directions for developing it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Do the authors think that the experiments with the current hyper-parameter setting are fair? We note that the hyperparameters for each model/baseline were chosen via a hyperparameter sweep. This includes, for instance, the hidden size. For each model in the table, we provide the best setting of hyperparameters, i.e., the setting that results in the best score. Therefore, we believe that the experiments do indeed provide a fair comparison. We thank the reviewer for raising this important question, and we will make sure to clarify these experimental details regarding the hyperparameter sweep. > Q2. Could the authors explain on what basis they set the hyper-parameters for each method? We swept over hyperparameters as follows:

- Hidden size: {64, 128, 256, 512}
- Learning rate: {1e-4, 1e-3, 5e-3}
- Number of layers (for MLP): {2, 3}
- Message passing steps: {1, 2, 3}

All combinations of the above hyperparameters were tried out (with repetition over multiple seeds) for every model/baseline, and the best setting of hyperparameters was reported for each model. We will, once again, add further clarification to the paper on the details of the hyperparameter sweep. > Q3. Is it possible to improve performance by adding the proposed features to DGN? Indeed, it is possible to add affinity features to DGN. We have not tested this out here, as DGN requires significantly altering the model architecture by modulating the message passing mechanism. 
However, in order to give some indication of how affinity measures can work in tandem with the features considered in DGN, below we report a new set of experimental results on the PNA dataset for an MPNN model that uses affinity features along with _DGN features_ (note that use of _DGN features_ without affinity features was already reported in Table 1 of the submission):

- **DGN features + affinity measures: -2.938**
- DGN features only (already reported in Table 1): -2.743

As in the paper, the above results on the PNA dataset denote the average score (log(MSE)) over the relevant set of six tasks, and the results are averaged over 10 random seeds. The above numbers show that adding affinity measures results in an improvement over _DGN features_ alone. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgement Comment: Thank you for the response. These comments substantially address my concerns.
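The exhaustive grid sweep described in the rebuttal thread above (every combination tried for every model, averaged over seeds, best setting reported) can be sketched as below; `train_and_eval` is a hypothetical placeholder for actual model training, with a made-up objective used purely for illustration:

```python
import itertools

def train_and_eval(hidden, lr, mlp_layers, mp_steps, seed):
    """Hypothetical placeholder: train a model with these settings and
    return its validation score (made-up objective peaking at hidden=256)."""
    return -abs(hidden - 256) * lr + 0.0 * (mlp_layers + mp_steps + seed)

grid = {
    "hidden": [64, 128, 256, 512],
    "lr": [1e-4, 1e-3, 5e-3],
    "mlp_layers": [2, 3],
    "mp_steps": [1, 2, 3],
}

# Try every combination, average the score over seeds, keep the best config.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda cfg: sum(train_and_eval(**cfg, seed=s) for s in range(3)) / 3,
)
print(best["hidden"])  # 256
```

In practice the per-model best configuration found this way is what gets reported, which is why different baselines can end up with different hidden sizes.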
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to look through our submission and provide insightful comments and questions. In light of the comments raised by reviewers, we have provided further results and addressed several concerns. We respond to each reviewer individually. There is one **experimental update** that we would like to highlight to all the reviewers: In response to questions about expressivity, we are now providing new experimental results with comparisons on the synthetic datasets _TRIANGLE(N)_ and _LSC(N)_ (from [1]) to show stronger evidence that affinity measures exhibit improved expressivity. For both datasets, we follow the setup in [1] and use GIN and GCN models, which we augment with resistive embedding node features and compare to both (a.) vanilla models and (b.) models augmented with random features (the main contribution of [1]). The results (test ROC-AUC score, higher is better) are as follows:

_TRIANGLE(N)_:
- GIN: 0.5
- GIN + random features: 0.924
- **GIN + resistive embeddings: 0.911**
- GCN: 0.5
- GCN + random features: 0.857
- **GCN + resistive embeddings: 0.879**

_LSC(N)_:
- GIN: 0.5
- GIN + random features: 0.847
- **GIN + resistive embeddings: 0.935**
- GCN: 0.5
- GCN + random features: 0.794
- **GCN + resistive embeddings: 0.873**

We find that using resistive embeddings results in the highest ROC-AUC scores in all cases except for GIN on TRIANGLE, in which case random features and resistive embeddings are comparable. This provides strong evidence that the given affinity measures indeed provide enhanced expressivity. [1] Sato, R., Yamada, M., & Kashima, H. (2021). Random Features Strengthen Graph Neural Networks. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 333-341.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation
Accept (poster)
Summary: The paper approaches the problem of designing wave function ansatz for the ground state estimation of the Heisenberg model. This problem introduces several challenges including the rich parameterization of the wave function, sampling of the states from the parameterized wave function, and incorporating symmetries into the model. There are two main approaches to this problem: autoregressive neural networks and tensor networks. The former allows for fast sampling and incorporating symmetries. The latter incorporates problem-specific inductive bias. The authors propose an autoregressive model which incorporates tensor networks, thus, hopefully, inheriting the best from both approaches. In short, at every step of auto-regression the proposed model yields tensors instead of scalars. This allows for a richer class of wave function representations while enjoying the properties of auto-regressive models such as exact sampling. The authors study the proposed model both theoretically and empirically. The theoretical contribution of the paper is the proof of exact sampling and invariance properties. For the empirical study, the authors demonstrate the performance of the proposed ansatz for quantum state learning (fidelity maximization) and for the Heisenberg model (variational Monte Carlo). Strengths: I would say that the paper corresponds to all NeurIPS standards: - The paper approaches an important problem with clear applications. - The proposed method is novel and sound. - The experimental study is extensive. - It is well-written. Weaknesses: My only concern regarding the empirical study is the comparison in the number of parameters. Indeed, the main goal of the paper is to propose a richer family of models, where we would expect better approximation properties for the same number of parameters as in concurrent approaches. I believe that this can be better illustrated with experiments by reporting the number of parameters along with the energy values.
Right now, the authors fix the shapes for the tensor network and then add a neural network "on top" of it. Therefore, it is not very clear how to compare the models. Maybe adding a plot (e.g., number of parameters vs energy) would improve the readability of the experimental section. Other comments: - line 127. Typo (articles). - line 144. Typo (the last index of $\alpha$). - line 153. Typo (many MPS represent) - line 160. Typo (up to qubit $j$). Also, the notation with the summation over $\alpha$ is not very clear here. - line 271. Typo. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no questions for the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
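The exact-sampling property of autoregressive models that this review refers to can be sketched in a few lines. This is a toy illustration with a hand-written conditional (`toy` and all names here are hypothetical, not the paper's PixelCNN-style ARNN):

```python
import random

def sample_autoregressive(conditional, n, seed=0):
    """Draw one exact sample x_1..x_n from the chain-rule factorization
    p(x) = prod_j p(x_j | x_<j); conditional(prefix) returns
    p(x_j = 1 | prefix). No Markov chain or burn-in is needed."""
    rng = random.Random(seed)
    x = []
    for _ in range(n):
        x.append(1 if rng.random() < conditional(x) else 0)
    return x

# Toy conditional: each bit tends to repeat the previous one.
def toy(prefix):
    return 0.5 if not prefix else (0.9 if prefix[-1] == 1 else 0.1)

bits = sample_autoregressive(toy, 8)
print(bits)  # a list of 8 bits, each an exact draw from p(x)
```

In the paper's setting the conditional is produced by a neural network (and the sample is a spin configuration), but the sequential, chain-rule structure of the sampler is the same.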
Rebuttal 1: Rebuttal: > **Summary:** > The paper approaches the problem of designing wave function ansatz for the ground state estimation of the Heisenberg model. This problem introduces several challenges including the rich parameterization of the wave function, sampling of the states from the parameterized wave function, and incorporating symmetries into the model. There are two main approaches to this problem: autoregressive neural networks and tensor networks. The former allows for fast sampling and incorporating symmetries. The latter incorporates problem-specific inductive bias. > The authors propose an autoregressive model which incorporates tensor networks thus, hopefully, inheriting the best from both approaches. Shortly, the proposed model at every step of auto-regression yields tensors instead of scalars. This allows for a richer class of wave function representations while enjoying the properties of auto-regressive models such as exact sampling. > The authors study the proposed model both theoretically and empirically. The theoretical contribution of the paper is the proof of exact sampling and invariance properties. For the empirical study, the authors demonstrate the performance of the proposed ansatz for quantum state learning (fidelity maximization) and for the Heisenberg model (variational Monte Carlo). > **Strengths:** > I would say that the paper corresponds to all NeurIPS standards: > The paper approaches an important problem with clear applications. > The proposed method is novel and sound. > The experimental study is extensive. > It is well-written. **The authors thank the reviewer for the appreciation of this work.** > **Weaknesses:** > My only concern regarding the empirical study is the comparison in the number of parameters. Indeed, the main goal of the paper is to propose a richer family of models, where we would expect better approximation properties for the same number of parameters as in concurrent approaches. 
I believe that this can be better illustrated with experiments by reporting the number of parameters along with the energy values. Right now, the authors fix the shapes for the tensor network and then add a neural network "on top" of it. Therefore, it is not very clear how to compare the models. Maybe adding a plot (e.g., number of parameters vs energy) would improve the readability of the experimental section. **Thanks for the comment. While it is hard to set the number of parameters of all the ansätze exactly the same, to address the question we have conducted additional experiments with more parameters in the ARNN, and list the energies, numbers of parameters, and runtimes for the experiments conducted in this work in the tables and figures in the global response. We note that our ANTN actually has a comparable number of parameters to the ARNN and far fewer parameters than bond-dimension-1024 DMRG, while at the same time achieving better performance than all of them. The results further confirm that our ANTN achieves efficient representation and optimization by integrating the tensor network and ARNN.** > **Other comments:** >line 127. Typo (articles). >line 144. Typo (the last index of $\alpha$). >line 153. Typo (many MPS represent) >line 160. Typo (up to qubit $j$). Also, the notation with the summation over $\alpha$ is not very clear here. >line 271. Typo. **The authors thank the reviewer for pointing out the typos; we have fixed them in the updated version of the paper.** >**Questions:** >I have no questions for the authors. >**Limitations:** >The limitations are adequately addressed. --- Rebuttal Comment 1.1: Comment: Thank you for the response! I went over the rebuttal and I would like to keep my score. --- Reply to Comment 1.1.1: Comment: Thanks again for the valuable feedback!
We have performed additional experiments comparing ANTN with ARNN to gain more understanding of their expressivity and optimization, based on the suggestions during the discussion. We have tested a series of ARNNs with different numbers of layers (up to 20) and hidden dimensions (up to 72) such that the largest ARNN tested has more parameters than ANTN. We found that none of the ARNN energies is as good as ANTN's (which is based on only 7 layers and 48 hidden dimensions), even though the ARNNs have more parameters and take longer to train on the same GPU (see new data below). Furthermore, we observe that increasing the number of parameters generally helps the ARNN improve its energy, but this improvement hits diminishing returns, potentially due to optimization difficulties as the number of parameters grows. In addition, we further tested the (approximate) sign rule and found that it helps the ARNN obtain better energies, though still worse than ANTN without the sign rule (see Table 1 in the original paper and the new data below). This indicates that the ANTN has the advantage of inheriting the flexible sign structure from the MPS, avoiding the manual bias of adding the sign rule. The new results further support our work, both from an expressivity perspective and from the fact that ANTN has a better physics inductive bias than ARNN for optimization. We have also added the new benchmark to the updated manuscript. We hope these additional benchmarks will further resolve the concerns, and we would be very appreciative if the reviewer could consider increasing the evaluation.
Summary: The paper proposes the generic Autoregressive Neural TensorNet (ANTN) for quantum many-body simulation. In order to achieve high expressivity, sign-structure preservation, physics inductive bias, accurate sampling, and symmetry preservation, ANTN combines tensor networks and autoregressive neural networks. The authors demonstrate the improved performance of ANTN on quantum state learning and on learning the ground state energy. Strengths: 1. The proposed architecture absorbs the advantages of two state-of-the-art methods, i.e., tensor networks and autoregressive neural networks. The idea is straightforward but novel and interesting. 2. This paper illustrates well its differences from, and advantages over, tensor-network-based and neural-network-quantum-state-based methods for quantum many-body simulation. 3. It provides a comprehensive theoretical analysis, and the experimental results show that the proposed method can still maintain good expressive ability with fewer bond dimensions. Weaknesses: 1. The authors solely compare and report the bond dimensions taken up by the ANTN and MPS/DMRG approaches in Fig 1, Tab 1 and 2. Given that the first half of the ANTN is a sophisticated autoregressive neural network, the reviewer is concerned that the authors may be sidestepping the question of whether the number of parameters of the autoregressive neural network could surpass that implied by the bond dimension of the following tensor network. If so, the NN parameters would cancel out the efficiency brought on by the reduction in bond dimension. 2. Although the combination of an autoregressive neural network and a tensor network to build a new neural network quantum state method that absorbs the advantages of both is a delightful idea, the reviewer is worried that such a method will also inherit the disadvantages of both. For example, tensor networks are likely to find it very tricky to characterize a highly entangled quantum state [1].
At the same time, traditional tensor networks may also perform poorly in learning the ground state energy of a long-range Hamiltonian [2,3]. These problems are also an important reason why some researchers abandon tensor networks and embrace neural-network-based quantum states. The reviewer would be pleased if the authors could clarify these points, and it would be preferable if they could provide some explanations and comparisons in the experimental or theoretical part. [1] Deng D L, Li X, Sarma S D. Quantum entanglement in neural network states[J]. Physical Review X, 2017, 7(2): 021021. [2] Glasser I, Pancotti N, August M, et al. Neural-network quantum states, string-bond states, and chiral topological states[J]. Physical Review X, 2018, 8(1): 011006. [3] Xiao T, Huang J, Li H, et al. Intelligent certification for quantum simulators via machine learning[J]. npj Quantum Information, 2022, 8(1): 138. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author does describe the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Summary:** >The paper proposes generic Autoregressive Neural TensorNet (ANTN), for quantum many-body simulation. In order to achieve high expressivity, sign structure preserving, physics inductive bias, accurate sampling, and symmetry preservation, ANTN combines tensor networks and autoregressive neural networks. The author demonstrates the improved performance of ANTN on quantum state learning and learning ground state energy. >**Strengths:** >The proposed architecture absorbs the advantages of two state-of-the-art methods i.e., tensor networks and autoregressive neural networks. The idea is straightforward but novel and interesting. >This paper well illustrates its differences and advantages over tensor network based and neural network quantum state based methods for quantum many body simulation. >It provides a comprehensive theoretical analysis and the experimental results show that the proposed method can still maintain good expressive ability with fewer bond dimensions. **The authors thank the reviewer for the appreciation of this work.** >**Weaknesses:** >The author solely compares and reports the bond dimensions taken up by the ANTN and MPS/DMRG approaches in Fig 1, Tab 1 and 2. Given that the first half of the ANTN is a sophisticated autoregressive neural network, reviewer concerns about that the author may evade a fact that whether the order of magnitude of the parameters of the autoregressive neural network could surpass the bond dimension of the following tensor network. As a consequence, the existence of the NN parameters will cancel out the efficiency brought on by the reduction in bond dimension. **The authors thank the reviewer for bringing up the question regarding the number of parameters for the MPS, ARNN and ANTN. 
The authors would like to clarify that for the experiments that we conducted, the MPS with bond dimension $1024$ actually has the most parameters ($10^8$), given that the number of parameters for an MPS scales as $N\chi^2$ with $N$ the system size and $\chi$ the bond dimension. The ARNN has fewer parameters ($5.0\times 10^5$) compared to the ANTN ($10^6$), because we fixed the shape of the underlying ARNN when conducting the experiment. In the global response, we list out in detail the numbers of parameters and runtimes for the experiments. We find that our ANTN outperforms both ARNN and MPS even though ANTN has a comparable number of parameters to the ARNN and far fewer parameters than the MPS.** >Although the combination of autoregressive neural network and tensor network to build a new neural network quantum state method that absorbs the advantages of both is a delightful idea, reviewers are worried that such method will also inherit the disadvantages of both. For example, tensor networks are likely to be very tricky to characterize a highly entangled quantum state [1]. At the same time, traditional tensor networks may also perform poorly in learning the ground state energy of a long-range Hamiltonian [2,3]. These problems are also an important reason why some researches abandon the tensor network and embrace the neural network-based quantum states. The reviewer is pleased if the authors could clarify these points, and it would be preferable if they could provide some explanations and comparisons in the experimental or theoretical part. **Thanks for the question. The authors understand the reviewer's concern about combining ARNN and MPS. However, as shown in Theorem 5.2, the ANTN has generalized expressivity over both ARNN and MPS. In fact, both ARNN and MPS are special cases of ANTN. Therefore, ANTN does not inherit the disadvantages of either ARNN or MPS.
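The $N\chi^2$ scaling quoted in this rebuttal is easy to check numerically. A minimal sketch (the 100-site system size and unit physical-dimension factor are our illustrative assumptions chosen to match the quoted $10^8$ figure, not numbers taken from the paper):

```python
def mps_param_count(n_sites, bond_dim, phys_dim=1):
    """Leading-order parameter count of an MPS: one tensor of shape
    (phys_dim, bond_dim, bond_dim) per site, i.e. ~ N * d * chi^2.
    Boundary tensors are smaller, so this is an upper-bound estimate."""
    return n_sites * phys_dim * bond_dim ** 2

# 100 sites with bond dimension 1024 gives ~1.0e8 parameters,
# consistent with the 10^8 figure quoted for the chi = 1024 MPS.
print(f"{mps_param_count(100, 1024):.1e}")  # 1.0e+08
```

The quadratic dependence on $\chi$ is why the $\chi=1024$ DMRG baseline dwarfs the $\sim 10^6$-parameter neural ansätze in memory.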
Moreover, we updated Theorem 5.2 in the global response, where we explicitly proved that neither ARNN nor MPS can efficiently represent ANTN. In addition, ANTN inherits the flexible sign structure from the tensor network, which plays an important role in optimization. As we show in the new experiments in the one-page response, our ANTN achieves the best performance in those cases compared to RBM and ARNN (both with and without the sign rule) as well as MPS. Our new theoretical and experimental results strongly support that ANTN takes advantage of both MPS and ARNN. Regarding the specific example that the reviewer pointed out: ANTN is able to capture the entanglement very well. As we demonstrated in Figure 2(a), the ANTN achieves the same fidelity as the ARNN for random Bell states (maximally entangled). In an ANTN, the MPS and ARNN divide the work of representing a quantum state, with the MPS specializing in representing the quasi-local sign structure (Figure 2(b)) and the ARNN focusing on representing the entanglement. Therefore, the ANTN actually inherits the advantages of both ARNN and MPS, which is exactly the reason that ANTN surpasses both MPS and ARNN in our experiments. We hope this explanation resolves the reviewer's concern.** >**Questions:** >Please see the weaknesses. >**Limitations:** >The author does describe the limitations. --- Rebuttal Comment 1.1: Comment: Thanks again for the valuable feedback! We have performed additional experiments comparing ANTN with ARNN to gain more understanding of their expressivity and optimization, based on the suggestions during the discussion. We have tested a series of ARNNs with different numbers of layers (up to 20) and hidden dimensions (up to 72) such that the largest ARNN tested has more parameters than ANTN.
We found that none of the ARNN energies is as good as ANTN's (which is based on only 7 layers and 48 hidden dimensions), even though the ARNNs have more parameters and take longer to train on the same GPU (see new data below). Furthermore, we observe that increasing the number of parameters generally helps the ARNN improve its energy, but this improvement hits diminishing returns, potentially due to optimization difficulties as the number of parameters grows. In addition, we further tested the (approximate) sign rule and found that it helps the ARNN obtain better energies, though still worse than ANTN without the sign rule (see Table 1 in the original paper and the new data below). This indicates that the ANTN has the advantage of inheriting the flexible sign structure from the MPS, avoiding the manual bias of adding the sign rule. The new results further support our work, both from an expressivity perspective and from the fact that ANTN has a better physics inductive bias than ARNN for optimization. We have also added the new benchmark to the updated manuscript. We hope these additional benchmarks will further resolve the concerns, and we would be very appreciative if the reviewer could consider increasing the evaluation.
--- Reply to Comment 1.1.1: Comment:

| Algorithm | Energy per site | Number of layers | Hidden dimension | Bond dimension | Number of parameters (TN) | Number of parameters (NN) | Number of parameters (Total) | Runtime |
|--------------------|-----------------|------------------|------------------|----------------|---------------------------|---------------------------|------------------------------|--------------------------------|
| PixelCNN | -1.74098(29) | 7 | 48 | - | 0 | $5.0\times 10^5$ | $5.0\times 10^5$ | 19 hrs (A100) or 43 hrs (V100) |
| PixelCNN | -1.86922(16) | 8 | 48 | - | 0 | $5.7\times 10^5$ | $5.7\times 10^5$ | 59 hrs (V100) |
| PixelCNN | -1.90021(15) | 9 | 48 | - | 0 | $6.3\times 10^5$ | $6.3\times 10^5$ | 21 hrs (A100) |
| PixelCNN | -1.90440(14) | 10 | 48 | - | 0 | $7.0\times 10^5$ | $7.0\times 10^5$ | 23 hrs (A100) |
| PixelCNN | -1.92826(11) | 11 | 48 | - | 0 | $7.7\times 10^5$ | $7.7\times 10^5$ | 26 hrs (A100) |
| PixelCNN | -1.92986(10) | 12 | 48 | - | 0 | $8.4\times 10^5$ | $8.4\times 10^5$ | 30 hrs (A100) |
| PixelCNN | -1.92869(11) | 15 | 48 | - | 0 | $1.0\times 10^6$ | $1.0\times 10^6$ | 51 hrs (A100) |
| PixelCNN | -1.91537(13) | 20 | 48 | - | 0 | $1.4\times 10^6$ | $1.4\times 10^6$ | 66 hrs (A100) |
| PixelCNN | -1.88536(19) | 8 | 72 | - | 0 | $1.3\times 10^6$ | $1.3\times 10^6$ | 35 hrs (A100) |
| PixelCNN | -1.89266(15) | 9 | 72 | - | 0 | $1.4\times 10^6$ | $1.4\times 10^6$ | 40 hrs (A100) |
| PixelCNN | -1.90718(15) | 10 | 72 | - | 0 | $1.6\times 10^6$ | $1.6\times 10^6$ | 47 hrs (A100) |
| PixelCNN | -1.92965(13) | 11 | 72 | - | 0 | $1.7\times 10^6$ | $1.7\times 10^6$ | 52 hrs (A100) |
| PixelCNN | -1.91186(14) | 12 | 72 | - | 0 | $1.9\times 10^6$ | $1.9\times 10^6$ | 57 hrs (A100) |
| PixelCNN Sign Rule | -1.92915(12) | 11 | 48 | - | 0 | $7.7\times 10^5$ | $7.7\times 10^5$ | 31 hrs (A100) |
| PixelCNN Sign Rule | -1.93112(11) | 12 | 48 | - | 0 | $8.4\times 10^5$ | $8.4\times 10^5$ | 34 hrs (A100) |
| PixelCNN Sign Rule | -1.93233(9) | 15 | 48 | - | 0 | $1.0\times 10^6$ | $1.0\times 10^6$ | 47 hrs (A100) |
| PixelCNN Sign Rule | -1.93058(10) | 20 | 48 | - | 0 | $1.4\times 10^6$ | $1.4\times 10^6$ | 64 hrs (A100) |
| PixelCNN Sign Rule | -1.93211(10) | 11 | 72 | - | 0 | $1.7\times 10^6$ | $1.7\times 10^6$ | 54 hrs (A100) |
| PixelCNN Sign Rule | -1.92812(12) | 12 | 72 | - | 0 | $1.9\times 10^6$ | $1.9\times 10^6$ | 59 hrs (A100) |
Summary: In this work, the authors introduce ANTN (Autoregressive Neural Tensor Net) as a novel architecture that combines features from both ARNN and Tensor Networks for simulating challenging quantum many-body systems. The main objective of this research is to utilize an ARNN-type network to generate a conditional wavefunction tensor rather than a direct conditional wavefunction, as commonly done in standard ARNNs. The latter can be interpreted as a probability density. Figure 1 provides a clear visualization of the idea. By integrating ARNN and TN within this framework, the authors aim to leverage the advantages of both methods, thereby achieving enhanced expressivity and a stronger adherence to physics inductive bias. To demonstrate the superiority of their approach, the authors present several theorems highlighting its advantages. While detailed proofs are available in the appendix, the main text often provides concise explanations, which significantly support the flow of the manuscript and complement the discussed results. The method's performance is initially evaluated by benchmarking its ability to learn 2-qubit Bell random states and real-valued shallow random circuits. Finally, the authors compare their ANTN approach to state-of-the-art methods in solving the ground state of a 2D $J_1-J_2$ Heisenberg model with open boundary conditions. This particular problem is known to be challenging due to its phase diagram exhibiting at least three distinct phases as $J_1$ and $J_2$ are varied. The results presented in Table 1 and Table 2 demonstrate that the ANTN method consistently outperforms existing methods across different system sizes and parameter choices for $J_2$. Strengths: - The paper is scientifically sound and well-written, with a comprehensive and meticulous literature review that covers all relevant works in the field. - The paper's structure is highly effective, resulting in a smooth and enjoyable reading experience.
- Section 3 stands out as it provides a thorough overview of the essential components required to comprehend the more theoretical discussion starting from Section 4. This inclusion enhances the accessibility of the paper, making it approachable even for readers who are not thoroughly familiar with the field, such as myself. - The concept of combining the inherent flexibility of ARNN with the strengths of TN within a single framework is undeniably compelling. Weaknesses: - The manuscript frequently mentions the concept of "expressivity," but it is never explicitly defined or explained. It would greatly benefit the readers to dedicate a paragraph to clarify its meaning and highlight its significance, thereby implicitly assessing the advantages of the proposed approach. Specifically, Theorem 5.2 is not entirely clear to me. It would be helpful to have a clear definition of expressivity to shed light on this point. - The paper repeatedly refers to the "sign structure" problem, but this issue is never explicitly defined or explained. While referencing relevant literature is helpful, the paper should strive to be self-contained when discussing this critical aspect. Given its importance in evaluating the efficacy of the proposed method in the experimental section, a paragraph discussing this concept would be valuable for readers less familiar with it. - Although ANTN combines properties from both ARNN and TN, being a parametrized neural network, it is expected that ANTN also requires training. However, the main text does not discuss this in great detail. A comprehensive discussion on training ANTN in the main text is essential to provide a complete understanding of the methodology. - The manuscript lacks experiments that benchmark the runtime and memory complexity. It would be beneficial to compare ANTN with state-of-the-art TN methods in terms of these metrics. Such comparisons would enhance the overall evaluation of ANTN and its performance. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How is the ANTN model trained? While intuitions are provided in Sections 3.2 and 3.3 (with additional details in the appendix), a more specific discussion would enhance understanding of the proposed framework. Additionally, it would be beneficial to explore the limitations of training ANTN. Is the training process significantly more challenging compared to plain ARNN? Can you provide an estimate of the computational time required to train a network for a specific downstream task? - The authors frequently mention that ARNNs lack the physics prior of the system under study. However, it raises the question of the extent to which this holds true. As correctly stated by the authors, inductive physics biases, such as symmetries, can actually be incorporated into pure ARNN networks to incorporate the desired physics prior. Could the authors provide some comments on this? - I wonder whether it would be interesting to replicate the analysis in Table 1 and Table 2 by fixing $J_2$ and modifying $J_1$ instead. Would substantial differences in the results be expected? This question arises because, as I understand it, $J_1$ modulates the interaction between nearest neighbors, potentially leading to collective behavior that may be more challenging to capture. - How do runtime and memory requirements scale in the experiments presented in Section 6.2? The insightful complexity analysis discussed in Section 5 suggests that it would be valuable to empirically visualize these factors through numerical experiments. - Is there a specific reason why DMRG results are not shown in Table 1, while RBM results are missing in Table 2? - In Figure 3, it appears that the DMRG method outperforms ANTN for medium-sized systems and increasing values of $J_2$. Since the bond dimension is fixed for both methods, do you have any intuition regarding how this plot would change as the bond dimension varies?
Intuitively, one might expect that having a larger bond dimension for DMRG would improve its performance, particularly for larger system sizes. It would be intriguing to perform a detailed scaling analysis as a function of the bond dimension, acknowledging the potential increase in complexity and the enhanced capability of TN methods to capture more features, potentially leading to improved results. - In the broader impact section, the authors suggest that the ANTN approach may be applicable to other standard machine learning tasks. However, in such tasks, like image generation, for instance, there might not be a strong physics prior to leverage through the TN representation. It would be interesting to hear the authors' perspectives on the benefits of applying their framework to standard machine learning benchmarks and how it could be advantageous in those scenarios. - Can this idea of combining TN with a generative type of neural network be extended to other generative models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: In addition to the concerns mentioned in the weaknesses and questions sections, which implicitly highlight some limitations of the current manuscript, I believe that the primary limitation of this work may lie in its scalability and the training difficulties of ANTN compared to MPS. Although I am not an MPS expert, I am aware that sampling (and training) for an ARNN can be slow and computationally expensive, particularly when dealing with larger systems.
While runtime may also pose a challenge for MPS, as increasing the bond dimension can become a computational bottleneck, I am uncertain whether this would be a minor issue compared to training, and sampling from, an ANTN when considering extremely large systems. A few minor comments: - In the introduction, it would be good to already have the relevant references being cited when a field/topic is firstly mentioned and introduced. - The Acronyms MERA and PEPS are never introduced in their extensive form. Would be good to do so when they are mentioned for the first time in line 72 of page 2. - Line 271 page 7: previous -> previously - Line 287 page 7: comes -> come - Line 690 page 19: staring -> starting Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for the appreciation of our work. We apologize that due to the character limit, we have to simplify the response. We will provide additional explanations in the follow-up discussion.** >**Weaknesses:** >The manuscript frequently mentions the concept of "expressivity," ..... **Thanks for the comment. Indeed, there could be different definitions of "expressivity" based on different perspectives. Two typical important notions of expressivity are related to entanglement and the sign structure, both of which can be incorporated into ANTN. In particular, the degree of entanglement is the bottleneck for MPS, since it is designed to represent low-entanglement quantum wave functions. The sign structure (or more generally the phase structure) stems from the fact that a quantum wave function can have complex-number output, which requires more design of the neural network architecture. In this manuscript, we choose to define "expressivity" in a general way as a neural network's ability to represent (generic) quantum wave functions (potentially with a limited number of parameters). We also updated the proof of Theorem 5.2 in the global response. We have included the above discussion on expressivity in the updated paper.** >The paper repeatedly refers to the "sign structure" problem..... **Thanks for the suggestion. The sign structure (or more generally the phase structure) comes from the fact that quantum wave functions are L2-normalized complex-valued functions. In other words, a quantum wave function can be viewed as a probability distribution with complex-valued phases ($\psi(x)=\sqrt{p(x)}\exp(i\theta(x))$). For a real-valued Hamiltonian, such phases reduce to signs of $+1$ and $-1$ ($\theta(x)=0$ or $\pi$), i.e., the sign structure. To capture such a structure, the neural network requires additional design to go beyond a probability representation.
When the sign structure is known, the quantum wave function can be reduced to a probability distribution and solved more easily. However, such a structure is analytically known only for a limited number of models, and in the worst case it can be a near-random binary distribution of $+1$ and $-1$ that is hard to represent. In this work, ANTN can learn this sign structure more efficiently due to the inductive bias from the MPS. We have included more discussion on the "sign structure" problem in the updated paper.** >Although ANTN combines properties...... **The ANTN is trained using standard gradient-based optimizers (Adam, to be specific) with the gradient derived in Appendix A.2. We have added this in the updated paper to be clearer to the readers.** >The manuscript lacks experiments...... **Thanks for the comment. Yes, it would be very beneficial to provide benchmarks regarding runtime and memory complexity. A theoretical analysis is provided in Section 5.1, and additional experimental runtime results are included in the global response. In practice, all of the experiments except DMRG in this work can be efficiently run on an NVIDIA V100 GPU with 32 GB memory, using at least 2000 samples in parallel. The DMRG algorithm with a bond dimension of 1024 requires a substantial amount of memory (with $10^8$ parameters), and its optimization is hard to parallelize efficiently. Therefore, it is run on CPU (which is standard for the DMRG algorithm). We have included these discussions in the updated paper.** >**Questions:** >How is the ANTN model trained... **In Sections 3.2 and 3.3, we provide the training loss functions. For quantum state learning, the gradient is simply the gradient of the loss function. For the variational Monte Carlo algorithm, the gradient is similar to the policy gradient in reinforcement learning, which we derive in Appendix A.2. The Adam optimizer was then used to train the neural network, with details given in Appendix C.
We hope this answers the reviewer's question.** >The authors frequently mention that ARNNs lack... **As discussed previously, there are different perspectives on the expressivity and inductive bias of a quantum wave function. While symmetry could be incorporated into ARNN, there are other important inductive biases, such as low entanglement and sign structure, which may not be easy to capture with ARNN. Indeed, it is known that MPS is better at capturing low entanglement (such as the ground states of low-dimensional systems) and sign structure. Hence, by bridging MPS and ARNN together into ANTN, ANTN is equipped with more inductive bias: not only multiple symmetries from ARNN, but also the sign structure and various degrees of entanglement (both low entanglement from MPS and high entanglement from ARNN).** >I wonder whether ... **Thanks for the comment. Since finding the ground state corresponds to finding the smallest eigenvalue of the Hamiltonian matrix, multiplying the matrix by a constant does not change the problem (it only rescales the eigenvalue). Hence, the only physically relevant quantity is the ratio $J_2/J_1$.** >How do runtime...... **Already answered above** >Is there a specific reason why DMRG .. >In Figure 3, it appears that the DMRG.. >In the broader impact section.. **We will answer these questions in the follow-up discussion due to the character limit.** >**Limitations:** >In addition to... **The authors understand the concerns that the reviewer raises. However, ARNN (and ANTN) are not slower than MPS, especially with the ability of exact sampling.
In fact, because ARNN and ANTN can be trained more efficiently on GPU compared to MPS (with the DMRG algorithm), due to the lower memory requirement and easier parallelization, ARNN and ANTN can be faster in practice, with runtime details included in the global response.** >**A few minor comments:** **Thanks for the suggestions and corrections; we have updated the manuscript accordingly.** --- Rebuttal Comment 1.1: Title: Additional Responses Comment: **We apologize for not being able to include all the responses above due to the character limit. Below, we answer the remaining questions.** >Is there a specific reason why DMRG results are not shown in Table 1, while RBM results are missing in Table 2? **Thanks for the question. Table 1 focuses on demonstrating the effect of the Marshall sign rule on different neural network architectures. Because the DMRG algorithm is by design not affected by the sign rule, presenting those results is not necessary. As RBM is shown to be worse than PixelCNN in Table 1, it becomes unnecessary to further include its results on larger systems, since they may dilute the main message of the table. We note that we did check the RBM results for $10\times10$ systems and confirmed that the energies are not as good as either PixelCNN or ANTN.** >In Figure 3, it appears that the DMRG method outperforms ANTN for medium-sized systems and increasing values of $J_2$. Since the bond dimension is fixed for both methods, do you have any intuition regarding how this plot would change as the bond dimension varies? Intuitively, one might expect that having a larger bond dimension for DMRG would improve its performance, particularly for larger system sizes. It would be intriguing to perform a detailed scaling analysis as a function of the bond dimension, acknowledging the potential increase in complexity and the enhanced capability of TN methods to capture more features, potentially leading to improved results.
**For large $J_2$, the system goes into a striped phase as shown in (Nomura & Imada, 2021), which can decouple certain rows (columns) and therefore lower the entanglement, a regime where the DMRG algorithm can be more advantageous. However, it is known that for a 2D ground state of an $L\times L$ system satisfying the area law, the entanglement grows with $L$, requiring the MPS bond dimension to grow exponentially with $L$. The increase in entanglement for large systems would still make the DMRG algorithm more difficult in general.** >In the broader impact section, the authors suggest that the ANTN approach may be applicable to other standard machine learning tasks. However, in such tasks, like image generation, for instance, there might not be a strong physics prior to leverage using the TN representation. It would be interesting to hear the authors' perspectives on the benefits of applying their framework to standard machine learning benchmarks and how it could be advantageous in those scenarios. >Can this idea of combining TN with a generative type of neural network be extended to other generative models? **Thanks for the questions. TN algorithms have been investigated before in tasks such as image generation, and ANTN should be able to further enhance the performance due to its more powerful representation. It will be an interesting direction to study how ANTN performs in such tasks compared to standard NNs. It would also be meaningful to investigate how to combine TN with other types of neural networks, such as flow or diffusion models.**
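A toy numerical illustration of the $\psi(x)=\sqrt{p(x)}\exp(i\theta(x))$ decomposition discussed in this rebuttal (our sketch, not the authors' code): a purely probabilistic model recovers only $\sqrt{p(x)}$, while the sign structure carries information that such a model cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued wave function on 3 qubits (8 basis states),
# L2-normalized, with both positive and negative amplitudes.
amps = rng.normal(size=8)
psi = amps / np.linalg.norm(amps)

# Decompose psi(x) = sqrt(p(x)) * sign(x). The probability part
# alone (all a purely probabilistic model captures) drops the signs.
p = psi ** 2                   # Born-rule probabilities, sum to 1
signs = np.sign(psi)           # the "sign structure": +1 / -1
reconstructed = np.sqrt(p) * signs

assert np.allclose(reconstructed, psi)    # signs restore psi exactly
assert not np.allclose(np.sqrt(p), psi)   # without them, psi is lost
```

For a real-valued Hamiltonian this is exactly the $\theta(x)\in\{0,\pi\}$ case described above; for a complex-valued wave function, `signs` would become a general phase factor.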
Summary: This paper proposes the Autoregressive Neural TensorNet (ANTN): a novel blend of Matrix Product States (MPS) and autoregressive neural networks (ARNN). Theoretically, this paper shows that ANTN parameterizes normalized wavefunctions, allows for exact sampling, generalizes the expressivity of tensor networks and autoregressive neural networks, and inherits a variety of symmetries from autoregressive neural networks. Experimentally, this paper shows that ANTN is more efficient than MPS and has a better inductive bias than ARNN in quantum state learning, and shows that ANTN outperforms MPS and ARNN in approximating the ground state of the 2D J1-J2 Heisenberg model with system sizes up to 12x12. Strengths: The proposed architecture is novel, and importantly simple and intuitive, therefore conveying a potential of enjoying “the best of both worlds” – an MPS above a very expressive AR neural network that creates a strong input representation. The core observation that one can construct such an architecture while retaining the ability to perform exact, efficient sampling is strong. The experimental results are non-trivial. The clean experiments in 6.1 tell an interesting story of efficiency with respect to MPS and bias with respect to ARNN. In section 6.2, it is an impressive achievement to consistently surpass ARNN on the one hand and MPS on the other on the challenging J1J2 task. Moreover, the positive trend with system size is promising. Overall, the experimental results in this paper bring an appetite to try this new architecture on larger systems and on more problems. Weaknesses: Overall, I find this paper to be not well structured. My main disappointment with the writing is the lack of details on the experimental part of the paper. It is by far more convincing than the theoretical part, but unfortunately does not receive enough space and details. For example, it is unclear to me what ANTN architecture was used for the experiments.
Section 4 mentions the use of both PixelCNN and Transformer as the ARNNs underlying the ANTN. However, both tables 1 and 2 do not mention which variant was used, do not show results for both variants, and mention only the PixelCNN baseline. A long look at the experimental details in the paper body and in the appendix did not really help. It makes it very hard to assess the strength of the results when it is not clear which ARNN was used. Moreover, I am missing external performance references in order to assess the strength of the results. The field of many-body quantum simulation has matured to the point where I expect new architectures to compare to external SOTA results; otherwise, it is not easy to judge results that are given relative to baselines implemented by the authors, especially in the haziness / opacity of the current detailing of the experimental setup. Comparison to existing results on J1J2 will help convince me that the reported gains are meaningful. Besides that, many more details on the experimental setup on which the results were attained are required for this potentially important experimental outcome to be verifiable. Theoretically, Theorem 1 is important; Section 5.2 is disappointing and does not motivate using ANTN from an expressivity standpoint; and section 5.3 is neither tied to the specific experiments conducted later on nor used in the remainder of the paper (so, though possibly interesting, it does not really contribute to the practical case advocating the use of ANTN). Elaborating, Theorem 5.2 does not provide an expressivity result that allows for reasoning regarding the benefits of ANTN. Specifically, it isn’t clear **by how much** ANTN is stronger than ARNN or MPS. The simple theoretical arguments provided do not make it clear, for example, whether ANTN is stronger than adding one more layer to the ARNN. Therefore, from an expressivity standpoint, the authors did not make a convincing case for the benefits of ANTN.
Theorem 5.3 simply mentions the strength of ARNN as shown in other works, but does not contribute to the case for why ANTN is better than the existing ARNN from an expressivity standpoint. An experiment of ARNN with one more layer may very well show that the gains can also be pretty easily achieved with an existing architecture, which would be disappointing. (As a remark, it isn’t clear how many layers are used in the experiments on ARNN; the appendix gives some details, but it isn’t clear whether Transformer or PixelCNN was used.) Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The multiple references to the “inductive bias of the physics structure of MPS” are pretty vague. When you discuss initializing the MPS with the optimized DMRG results of the same bond dimension, that kind of makes sense, but then you could also initialize the ARNN with an optimized physics-related representation, so I don’t see an advantage of MPS in this “optimized initialization” regard. As for randomly initialized MPS vs ARNN, I don’t see that a priori the former is better suited than the latter. In particular, all of the relevant physical symmetries of ANTN listed in section 5.3 seem to be inherited from ARNN and not from MPS. On the other hand, the experiments of section 6.1 mention “wavefunctions with local or quasi-local sign structures” and a “shallow random circuit to mimic the short-range interaction and generate states with sign structures”. NeurIPS is ultimately a non-physics conference, and such statements are hard to decipher for me as well as for the broader readership -- what exactly is the task you ran? How exactly is it related to interesting physical systems? Especially given the important role of this point in the paper’s main argument regarding the benefits of MPS, I find it to be insufficiently explained. Can you please elaborate and be more specific on what MPS inductive biases contribute to the physics suitability of ANTN? Further issues: What is h_dim?
I don't see it defined. Line 63: state-of-the-arts Line 153: missing punctuation after “unique” Line 311: especially pass -> especially past Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We appreciate the reviewer's acknowledgment that our work is theoretically novel and experimentally nontrivial. The authors apologize that due to the character limit, we are unable to respond to all the questions and have to omit some content. The authors will provide additional explanations in the follow-up discussions.** **As the reviewer points out, our proposed architecture ANTN gives rise to a strong input representation by integrating MPS with expressive ARNN and maintains efficient sampling. Meanwhile, we have demonstrated clear advantages over MPS and ARNN with various experiments.** >**Weaknesses:** >Overall, I find this paper to be not well structured. My main disappointment with the writing is the lack of details on the experimental part of the paper. It is by far more convincing than the theoretical part, but unfortunately does not receive ..... **The authors apologize for the oversight and thank the reviewer for the nice suggestions on improving our writing. We will add relevant information explaining the different architectures used in different experiments in the captions as suggested. In terms of the architecture choice, all the experiments in Section 6.1 use the Transformer and all the experiments in Section 6.2 use the PixelCNN. The reason for this choice is that the quantum state learning setup in Section 6.1 has a 1D structure, which makes the Transformer a natural choice for the underlying ARNN, whereas in Section 6.2, the $J_1$-$J_2$ model has a 2D structure, making PixelCNN a better choice. This also makes for a fairer comparison with the pure PixelCNN baseline.** >Moreover, I am missing external performance references ...... **We thank the reviewer for the suggestion and agree that comparisons to external SOTA results are necessary. In fact, we compared our results with three previous SOTA results in the original paper: RBM from NetKet, a recent MPBS result, and the standard DMRG result with $\chi=1024$.
In Table 1, we compared our results with the RBM results for the $8\times8$ system using the existing NetKet package, which has been one of the SOTA packages for many models in quantum many-body systems. In addition, we also compared with one of the recent SOTA results using MPBS at $J_2=0.5$ (written in paper line 314), which is worse than our results. Moreover, DMRG has been the SOTA in this field for decades, especially under open boundary conditions, which play to DMRG's advantage. Here, we benchmarked against DMRG with $\chi=1024$, which contains $10^8$ parameters, 2 orders of magnitude more than our model. In fact, the previous SOTA MPBS did not surpass DMRG with $\chi=1024$, and it can be viewed as a milestone that our results go beyond DMRG for large systems. We will adjust the writing to make these comparisons more apparent. To clarify the details of the setup, the PixelCNN used in this work follows the implementation mentioned in the main paper, which is implemented according to the gated PixelCNN but slightly modified for quantum wave functions. The neural networks are then trained using the Adam optimizer with gradients calculated from the variational Monte Carlo algorithm (Appendix A.2), and training details are listed in Appendix C. As the reviewer suggests, we will also include additional implementation details on the experimental setup in the updated version to make sure the outcome is verifiable.** >Theoretically, Theorem 1 is important...... **The authors thank the reviewer for the comment and the recognition of the importance of Theorem 1. The authors apologize that the writing in Section 5.2 may be too brief to highlight the expressivity of ANTN over both MPS and ARNN as an important motivation. To clarify, we have now provided a detailed proof showing that ANTN cannot be efficiently represented by either TN or ARNN, highlighting this important result. (See global response.)** >and section 5.3 is not tied to the specific experiments ......
**Thanks for the reviewer's comment. Section 5.3 mainly serves as an important theoretical result that the symmetries that can be implemented in ARNN can also be implemented in ANTN. As implementing symmetries in ARNN is already non-trivial, with many researchers working in this direction, explicitly developing ANTN with symmetries can be an important and promising direction for future work.** >Elaborating, Theorem 5.2 does not provide ...... **The authors thank the reviewer for the comment. We have provided an updated proof above, where it is shown that a general ANTN can be written as a TN or ARNN only with an exponential (in system size) number of parameters. The proof demonstrates that ANTN has stronger expressivity than ARNN and TN (see global response).** >Theorem 5.3 simply mentions...... **Indeed, Theorem 5.3 is a theoretical result that shows ANTN has the nice feature that it can inherit the symmetries from ARNN. The motivation here is not to show that ANTN is more expressive than ARNN, but how it can non-trivially incorporate various symmetries even with a tensor network on top of the ARNN.** >An experiment of ARNN...... **The authors thank the reviewer for the suggestion of adding more layers to the ARNN. In the global response, the authors attach additional benchmarking results for ARNN with more layers, where ARNN still lags behind ANTN, which further supports the updated proof of Theorem 5.2. Our results clearly demonstrate that ANTN is more expressive than ARNN both theoretically and experimentally (see global response).** >(as a remark ...... **The authors apologize for the confusion. All the experiments in Section 6.1 use the Transformer with 2 layers, and all the experiments in Section 6.2 use the PixelCNN with 7 layers.
The authors will update the captions to indicate the details of the neural network architectures.** >**Questions:** **The authors will answer the questions in the follow-up discussions due to the character limit.** --- Rebuttal Comment 1.1: Title: Additional Responses Comment: **We apologize for not being able to include all the responses above due to the character limit. Below, we answer the questions that the reviewer raises.** >**Questions:** >Multiple references to the “Inductive bias of the physics structure of MPS” is pretty vague. **Thanks for the question. In general, MPS captures low-entanglement wave functions, which tend to describe ground states in low dimensions, and is meanwhile flexible enough to represent the sign structures of the wave function. In this work, we took advantage of MPS's flexibility in representing sign structures and used its representation of the ground state as a prior for initialization. We have added more explanation and discussion in the updated paper.** >When you discuss initializing the MPS with the optimized DMRG results .... MPS in this “optimized initialization” regard. **The reviewer raises a good point about initializing ARNN with an optimized physics-related representation. In fact, in this work, we extensively used the transfer learning technique (mentioned in the Appendix). The PixelCNN was trained first on small systems, to learn the necessary representations easily, before training on large systems. In addition, the Marshall sign rule used in Table 1 also provides the PixelCNN with some (approximate) physics-related representation. As the table shows, this indeed improves the result in certain regimes, but it still lags behind the ANTN. We further checked whether MPS initialization could improve the result of ARNN: we pretrained the PixelCNN on learning DMRG results and present the results in the global response. We find that DMRG pretraining negatively affects the result.
This is understandable, as learning DMRG results with a complex sign structure is essentially the same task as learning a quantum state from a shallow random circuit, which has been shown to be a difficult task for ARNN in Figure 2(b) of the original paper. In addition, since the DMRG result is just an approximation of the actual ground state, learning this state can actually make for a worse initialization of the PixelCNN.** >As for randomly initialized MPS vs ARNN...... **Even a randomly initialized MPS could still improve ANTN over its underlying ARNN, because of the flexible sign structure that the TN part of ANTN permits. In the end, DMRG is just an efficient training algorithm for MPS, which makes it desirable to train the MPS with DMRG before integrating it into ANTN. However, gradient-descent-based approaches should still work when training both the TN part and the ARNN part together. In the global response, we support this argument with additional results from training the ANTN without initializing the MPS with DMRG. The result shows that the ANTN performs equally well even without DMRG initialization.** >In particular, all of the relevant physical symmetries....... **Thanks for the comment. ANTN inherits symmetries from ARNN, which is a nice feature, while at the same time ANTN also inherits the flexible sign structure from MPS that does not exist in ARNN.** >On the other hand, the experiments of section 6.1 mention .... contribute to the physics suitability of ANTN? **The authors thank the reviewer for asking for clarification. To make it clear, a quantum wave function can depend on the basis in which the wave function is expressed. In a physical system, a quasi-local change of basis should not change the relevant physics of the wave function (such as long-range entanglement) but could affect the sign structure.
More explicitly, a quantum wave function can be written as $\psi(x)=\sqrt{p(x)}\exp(i\theta(x))$, with $p(x)$ a probability distribution and $\theta(x)$ a phase factor, which only takes the values $0$ and $\pi$ for a real-valued Hamiltonian. The change of basis affects both $p(x)$ and $\theta(x)$, making the wave function consist of structured positive and negative values (i.e., the sign structure) that can be hard for ARNN to learn via the conditional wave functions. On the other hand, the additional bond dimension of the conditional tensors of ANTN allows the wave function to be "rotated" to the correct sign.** **To clarify the random circuit task, a quasi-local change of basis can be described using a shallow-depth quantum circuit. In addition, shallow-depth quantum circuits can generate quantum wave functions with short-range entanglement, which arises in many physical systems with local interactions. Therefore, the test on random shallow-depth quantum circuits can be viewed as a test of the neural networks for generic sign structures arising from physical systems. The details of generating the quantum wave functions under the shallow random circuit are described in Algorithm 2 in Appendix A.1. We have included more details to explain this task and the contribution of MPS inductive biases to ANTN in the updated version, as the reviewer suggests.** >**Further issues:** **Thanks for the corrections. We have updated the manuscript to include these changes as well as the experimental details.**
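For readers who want a concrete picture of the shallow-random-circuit task described above, here is a minimal state-vector sketch of a brickwork circuit of Haar-random two-qubit gates. This is our illustration of the general construction; the authors' exact procedure is their Algorithm 2, and all function names below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    # Haar-random d x d unitary via QR decomposition with phase fix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_2q_gate(psi, u, i, n):
    # Apply a 4x4 gate u on neighboring qubits (i, i+1) of an
    # n-qubit state vector psi (qubit 0 is most significant).
    psi = psi.reshape(2 ** i, 4, 2 ** (n - i - 2))
    psi = np.einsum('ab,xby->xay', u, psi)
    return psi.reshape(-1)

def shallow_random_circuit_state(n, depth):
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                      # start from |00...0>
    for layer in range(depth):
        start = layer % 2             # brickwork: alternate even/odd pairs
        for i in range(start, n - 1, 2):
            psi = apply_2q_gate(psi, haar_unitary(4), i, n)
    return psi

psi = shallow_random_circuit_state(n=6, depth=3)
# The resulting amplitudes carry a nontrivial phase/sign structure,
# while the entanglement stays short-range because the depth is small.
```

Each layer of two-qubit gates implements a quasi-local change of basis, so the amplitudes of `psi` acquire exactly the kind of generic sign/phase structure that the rebuttal argues is hard for a probability-only ARNN to capture.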
Rebuttal 1: Rebuttal: **We would like to thank all the reviewers for their comments and suggestions. We apologize that due to the character limit and a math rendering issue, some explanations are simplified/omitted. We will provide additional explanations in the follow-up discussion period.** **Reviewer 2bnN acknowledges that our novel blend of MPS and ARNN for ANTN takes the best of both worlds and achieves non-trivial experimental performance. Reviewer Vtcz acknowledges that our ANTN, which integrates both MPS and ARNN, is "undeniably compelling". Reviewer dUKK acknowledges that our proposed ANTN absorbs the advantages of two state-of-the-art methods (MPS and ARNN) with comprehensive theoretical analysis and experimental results. Reviewer wT6D acknowledges that our approach is novel and sound for an important problem with extensive experimental support.** **The main concerns of the reviewers are on the theoretical side of our theorems and the details of the experiments. We have addressed all the questions with both updated theory and new experimental data. In particular, our updated Theorem 5.2 clearly demonstrates that ANTN has stronger expressivity than TN and ARNN, which is further supported by our new experiments in terms of performance and number of parameters. (See one-page pdf.)** **We have also implemented all the writing suggestions from the reviewers and provided answers to each review. Based on our update, we would appreciate it if the reviewers could consider raising their scores.** **On the theory side, reviewers expressed the concern that Theorem 5.2 is not strong enough. In response, we updated Theorem 5.2 to show that our ANTN has stronger expressivity than TN and ARNN. More specifically, a general ANTN can be written as a TN or ARNN only with an exponential (in system size) number of parameters. The updated theorem (simplified due to the math rendering issue) is listed below.** Updated Theorem 5.2.
Autoregressive Neural TensorNet has generalized expressivity over tensor network and autoregressive neural network. Theorem: 1. Both TN and ARNN are special cases of ANTN. 2. A general ANTN can be written as a TN or ARNN only with an exponential (in system size) number of parameters. Detailed proof (to be added in the appendix): 1. As already shown in the paper. 2. (a) In ANTN, the base ARNN outputs the conditional tensors $\tilde{\psi}^{\alpha_{i-1} \alpha_i} (x_i|\boldsymbol{x_{<i}})$ to form the resulting wave functions. These conditional tensors, when explicitly written out, contain $\chi^2\cdot2^i$ elements (where $\chi^2$ comes from the two bond dimensions and $2^i$ from all the qubits at or before the current qubit $i$). Since it has been shown that ARNN can exhibit volume-law entanglement while TN cannot, the conditional tensors generated from ARNN do not permit an efficient tensor decomposition. Therefore, a tensor network essentially has to parameterize the full conditional tensor, resulting in $\sum_{i=1}^N\chi^2\cdot2^i\sim\mathcal{O}(\chi^2\cdot2^N)$ parameters in total for all qubits. (b) In ANTN, the conditional probability is generated from the marginal probability as shown in Eq. 4. The summation over the $\alpha_i$'s contains $\chi^{2i-1}$ terms when expanded, each of which can be viewed as a (quasi-)marginal probability $q^{\alpha_1,\alpha_1',\dots,\alpha_i}(\boldsymbol{x_{\le i}})$ generated by the underlying ARNN of ANTN. Using a conventional ARNN architecture, a weight matrix of shape $h_{\mathrm{dim}}\times\chi^{2i-1}\times 2$ (where $h_{\mathrm{dim}}$ is the hidden dimension, and 2 is the local dimension of the current qubit) is required to fully parameterize all the (quasi-)probabilities, leading to $\sum_{i=1}^N2\cdot h_{\mathrm{dim}}\cdot \chi^{2i-1}\sim\mathcal{O}(h_{\mathrm{dim}}\cdot\chi^{2N})$ parameters in the last layer in total. Thus, ANTN generalizes the expressivity over both TN and ARNN and cannot be efficiently represented with either TN or ARNN.
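The two parameter counts in the proof sketch can be tabulated directly; the following snippet (our illustration, not part of the proof) simply evaluates the stated sums for small systems to make the exponential scaling visible:

```python
# Parameter counts from the updated Theorem 5.2 argument:
# representing a generic ANTN as a TN needs ~ sum_i chi^2 * 2^i
# parameters, and as an ARNN (last layer) ~ sum_i 2 * h_dim * chi^(2i-1).
def tn_params(N, chi):
    return sum(chi ** 2 * 2 ** i for i in range(1, N + 1))

def arnn_params(N, chi, h_dim):
    return sum(2 * h_dim * chi ** (2 * i - 1) for i in range(1, N + 1))

for N in (4, 8, 12):
    print(N, tn_params(N, chi=2), arnn_params(N, chi=2, h_dim=16))
# Both counts grow exponentially in the system size N, matching the
# O(chi^2 * 2^N) and O(h_dim * chi^(2N)) scalings in the proof sketch.
```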
**On the experiment side, we answer the reviewers' questions by running additional experiments and providing additional details regarding the number of parameters and training time for the various algorithms in the attached PDF. The details are summarized below.** 1. **Does increasing the number of layers allow pure PixelCNN to beat ANTN?** In the attached Table R.1, we include additional results using pure PixelCNN with more layers. Although the added layers improve the result to some extent, the PixelCNN fails to beat ANTN in terms of energy calculation. This is consistent with the updated Theorem 5.2, i.e., that ARNN cannot efficiently parameterize a generic ANTN. 2. **How does the inductive bias/physics prior of MPS affect the result? Does the improvement come from the DMRG initialization or the MPS structure?** In the attached Table R.1, we perform two additional tests: a) pretrain pure PixelCNN by learning DMRG results; b) train ANTN (elementwise) without DMRG initialization. We find that DMRG pretraining negatively affects the result, while ANTN without DMRG initialization performs as well as with DMRG initialization. This is understandable, as learning DMRG results with a complex sign structure is essentially the same task as learning a quantum state from a shallow random circuit, which has been shown to be a difficult task for ARNN in Figure 2(b) of the original paper. Since the DMRG result is an approximation of the actual ground state, learning this state actually makes for a worse initialization of the PixelCNN. (We note that the PixelCNN was originally initialized from the result of smaller system sizes, as described in Appendix C under transfer learning.) 3. **How do the runtime and number of parameters for the different algorithms compare?** In Table R.1, Table R.2, and Figure R.1, we show the number of parameters and the runtime of each algorithm. In summary, DMRG has the most parameters and is the slowest algorithm, taking as long as 2 weeks.
The ANTN is much more efficient compared to DMRG while obtaining better energies, and it is comparable to pure ARNN while obtaining much better energies. Pdf: /pdf/77ab9c1381f4c6da97e3f1718fbaa0002c57a815.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Cross-Domain Policy Adaptation via Value-Guided Data Filtering
Accept (poster)
Summary: The authors propose a method that does domain adaptation for RL. Unlike other approaches that do this via domain randomization, offline interaction from the target env, or adjusting the sim with system ID from a small real dataset, the aim is to learn from a large source of sim transitions and a small number of **online** interactions. The closest comparison to the work is DARC, which was also evaluated in a similar setting. DARC is from a class of methods that learns a reward correction term $\Delta r$ that captures the change in environment dynamics between source and target domain, and estimates $\Delta r$ using classifiers. This work suggests that DARC's estimation method is too conservative. In particular, given two tuples $(s,a,s_1')$ and $(s,a,s_2')$, if $V^\pi_{tar}(s_1') \approx V^\pi_{tar}(s_2')$, then for the purposes of value estimation $s_1'$ and $s_2'$ can be considered similar, even if the individual states themselves are quite different. This is an example of a case where DARC could produce a larger reward penalty. This work aims to estimate $V^\pi_{tar}(s)$ from a small number of interactions in the target domain + many interactions in the source domain. To do so, an ensemble of dynamics models $T_\phi(s'|s,a)$ is learned from target data. Sampling target state $s_{tar}'$ from the $T_\phi(s'|s,a)$ family gives an ensemble of Q-value estimates $Q_{tar}(s,a)$, and this lets us model $V^\pi_{tar}(s)$ as a random variable with some amount of uncertainty. The value function is then updated with all target data, and some source data, where the source data are used only if they are measured as sufficiently "in-distribution" to the target environment. (In practice the authors use the top 25% most "in-distribution" source data.) Strengths: I found the theoretical arguments reasonably clear, modulo a few typos (see later section), and on double-checking them, everything looks correct.
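A sketch of the data-filtering step summarized above, as I read it (illustrative names and shapes, not the authors' implementation): fit a Gaussian to the ensemble of next-state values under the learned target dynamics, score each source transition by the likelihood of its actual next-state value, and keep the top fraction.

```python
import numpy as np

def filter_source_batch(src_states, src_actions, src_next_states,
                        value_fn, dynamics_ensemble, keep_frac=0.25):
    """Return indices of the source transitions whose next-state values
    look most 'in-distribution' under the target-dynamics value ensemble.

    dynamics_ensemble: callables (s, a) -> sampled s' from a learned
    target model T_phi; value_fn: callable mapping states to V(s).
    All names here are illustrative, not taken from the paper.
    """
    # Ensemble of target-side next-state values for each (s, a).
    ens_values = np.stack([value_fn(T(src_states, src_actions))
                           for T in dynamics_ensemble])  # (ensemble, batch)
    mu = ens_values.mean(axis=0)
    sigma = ens_values.std(axis=0) + 1e-8

    # Score each source transition by the Gaussian log-likelihood
    # (up to a constant) of its actual next-state value.
    v_src = value_fn(src_next_states)
    scores = -((v_src - mu) / sigma) ** 2

    k = max(1, int(keep_frac * len(scores)))
    return np.argsort(scores)[-k:]   # the most in-distribution transitions
```

With `keep_frac=0.25` this matches the top-25% rule mentioned in the summary; note the score compares *values*, not raw next states, which is the paper's central point.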
I think the core idea makes sense: that for the purposes of learning, if you assume your value estimate $V$ is accurate enough, then it is sufficient to compare environments based on your value estimate $V(s_{src})$ vs $V(s_{tar})$, since the value function encapsulates all future variation. This idea is compelling. Weaknesses: In terms of references, there are some approaches that use value function equivalence as an objective for learning their manipulable sim (e.g., RL-CycleGAN, if you view learning an image-to-image GAN as a manipulable sim). My more specific reservation is with the comparison to DARC, the most competitive of the baselines. The GridWorld example given shows an environment where DARC is overly pessimistic in bottleneck states, and fails to explore outside of that region. DARC has theoretical guarantees of correctness, so what is going on here? Perhaps in practice the theory is too conservative (the field has seen similar things with TRPO / PPO using less-conservative approximations of theory). In this scenario, my natural response would be not to derive a new method, but to try weighting the reward correction term $\Delta r$ in DARC by some hyperparameter. It is not very theoretically principled but would probably handle the pessimism alright. Does this approach meaningfully improve on DARC if the conservatism term were weighted? It's unclear. Setting this aside for now, if $V$ is learned well, then this paper (VGDF) makes sense. But to me this seems like a very strong "if" to make. It's a little chicken-and-egg: $V$ can only be learned well if we did not need the source data in the first place. To determine the closest section of source data, we must already learn a 1-step dynamics model of the target environment $T(s_{t+1}|s,a)$, and estimate $V_{tar}(s')$ to estimate the future value from $s'$. If both those prerequisites are true, it feels like we ought to just use a model-based method instead.
As implemented, the paper always uses the 25% closest segment of the source data, but there is no guarantee this 25% closest segment is actually that close to the target data. The ablations in Figure 6 suggest that the separation function is doing something, but I am a bit concerned that performance does not seem to differ much for many different thresholds $\xi$. I am also not so fond of the exploration policy $\pi_E$. It makes sense, but I believe the baselines do not use a separately trained exploration policy (please correct me on this if wrong). It doesn't seem like $\pi_E$ is a major component based on the ablation run, so this is a lesser concern for me. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are there any results suggesting the accuracy of the value function $V$ during training? Or at least suggesting that any inaccuracies in $V$ are correlated during training? I would also appreciate more details on how the Gaussian dynamics models $T_\phi$ were learned - given that the paper's central argument is to measure state distance by $V_{tar}(s)$, the method section for how $V_{tar}$ is learned seems oddly rushed. Misc questions: line 580 of appendix: it should be $W_{j+1} = ... + V(s_{j+1})$ instead of $V(s_j)$ right? line 565 of appendix: I'm not sure why the proof is 4 lines. Lemma C.1 gives $\eta_1(\pi) - \eta_2(\pi) = \frac{\gamma}{1-\gamma} E[ E_{P_1}[V_{tar}(s')] - E_{P_2}[V_{tar}(s')] ]$. The end of Theorem B.2 is equivalent to $\eta_1(\pi) - \eta_2(\pi) \le$ "the outer expectation, if you take the absolute value of the insides before integrating". So I'm not sure why there's a step where the $P_{src}$ and $P_{tar}$ terms are shuffled around before taking the absolute value; it seems like needless notation unless I'm missing something. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations seems addressed fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback, and we will respond to each one of your questions as follows. (**W and Q denote weakness and question, respectively.**) (W1) Thank you for reminding us of the related work RL-CycleGAN, which utilizes value equivalence for sim2real visual control. We will add RL-CycleGAN as one of the references. However, it's important to note that our work differs from RL-CycleGAN in several aspects: (1) We specifically focus on dynamics shifts between domains, while RL-CycleGAN generalizes across different visual or state spaces. (2) RL-CycleGAN requires bidirectional consistency, whereas our method solely addresses single-directional adaptation. (W2) In fact, DARC's theoretical guarantee requires the assumption that there is at least one policy that is near-optimal in both the source and target domains (Assumption 1 in the DARC paper). This assumption constrains the method to scenarios where the gap between the source domain and the target domain is not too significant, as discussed in Lines 152-159 of our paper. In the toy task of Section 4.1, the assumption does not hold. That is, there is no policy that can complete navigation in both the source and target domains. Thus, DARC fails due to overly pessimistic assessments of the bottleneck transitions. (W3) Regarding the idea of reweighting the correction reward to address the problem of DARC, we agree that it might alleviate the pessimistic assessments, but it may not entirely resolve the issue. As discussed in Lines 152-159, the reward correction of DARC (i.e., $\Delta r(s,a,s') = \log(P_{tar}(s'|s,a)/P_{src}(s'|s,a))$) tends to $-\infty$ when the source domain transition is not likely in the target domain (i.e., $P_{tar}(s'|s,a)\approx 0$).
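The divergence of the log-ratio correction just described is easy to check numerically (an illustrative sketch of the formula only, not DARC's classifier-based estimator; the probabilities are toy values):

```python
import math

def reward_correction(p_tar, p_src):
    # DARC-style correction term: log(P_tar(s'|s,a) / P_src(s'|s,a))
    return math.log(p_tar / p_src)

# As the target-domain likelihood of a source transition vanishes,
# the correction grows unboundedly negative; a positive scaling
# weight only shrinks it and cannot change its sign.
corrections = [reward_correction(p, 0.5) for p in (1e-2, 1e-6, 1e-12)]
alpha = 0.1
print(corrections)
print([alpha * c for c in corrections])
```

This is why rescaling alone leaves the qualitative pessimism intact, as argued next.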
Thus, even if we use a weight $0< \alpha < 1$ to scale down the reward correction, DARC still produces negative reward corrections (penalties) for such transitions, which means the pessimistic assessments are unchanged even if the transitions are transferable from the value-discrepancy perspective. An alternative solution is to replace the reward correction of DARC with dynamics-based data filtering, which removes the pessimistic effect on the reward, as discussed in Appendix F.5. However, the results in Fig.15 demonstrate that the method still underperforms VGDF. (W4) Theoretically, perfect data filtering does require an accurate value function, and the accurate value function might only be learned with pure target domain data. However, we believe that a small number of target domain samples can provide a good initialization for data filtering. In addition, the source domain data is selected only when the value equivalence holds, meaning the selected source domain data is nearly equivalent to the corresponding target domain data with respect to value function training, as discussed in Lines 160-165. Regarding the accuracy of the dynamics model, we acknowledge that it also influences data filtering. However, unlike model-based methods (e.g., MBPO) that use the generated samples for efficient training without considering their validity, VGDF only compares their values against others, without training on them. This alleviates the model exploitation problem common in model-based works. Furthermore, many model-based methods perform multi-step generation starting from collected samples, requiring a high level of prediction capability. In contrast, VGDF only requires one-step predictions from the dynamics model. Thus, VGDF can tolerate a less accurate dynamics model, while a model-based method might fail given the same model. (W5) We agree using 25% of batched samples does not guarantee value consistency, and the result of Fig.
6 verifies that it is helpful to restrict the usage of source domain data in some environments. The vertical axis of Fig. 6 is scaled too small, which makes the actual performance difference less obvious. In fact, the score gaps between different thresholds amount to hundreds of points in the two HalfCheetah environments. Thus, the threshold still influences performance to a non-trivial extent. (W6) It is true that the baselines do not incorporate optimistic exploration. Using the maximum of a Q ensemble has been shown to obtain an optimistic estimate or an approximate upper bound of the value in prior works [1,2,3]. Here we intended to encourage the agent to collect source domain transitions that could be potentially high-valued with respect to the target domain. Although the asymptotic performance improvement induced by the exploration technique is insignificant, the mechanism still produces a non-trivial sample-efficiency improvement in specific environments. (Q1) Here we analyze the value accuracy by plotting scatters of predictions against ground-truth Monte Carlo returns. Fig 1 in the **Global Rebuttal PDF** demonstrates that the value functions at different training stages approximate the expectation of the ground-truth returns, which further validates that filtering source domain data via our proposed value consistency does not significantly influence the accuracy of the value functions. (Q2) The dynamics models are learned through MLE (Eq. 13 in Appendix E.1) with all samples collected from the target domain, following prior model-based works [4,5]. (Misc Questions) Thank you for pointing out the presentation flaws in the theoretical parts. We will ensure to rectify these issues in the next version of the paper. We really appreciate your meticulous review. [1] Ciosek K, et al. Better exploration with optimistic actor critic. NIPS 2019. [2] Lee K, et al. Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. ICML 2021.
[3] Moskovitz T, et al. Tactical optimism and pessimism for deep reinforcement learning. NIPS 2021. [4] Janner M, et al. When to trust your model: Model-based policy optimization. NIPS 2019. [5] Chua K, et al. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. NIPS 2018. --- Rebuttal Comment 1.1: Title: reply Comment: Thank you for the reply, this helps clarify my concerns. I do not plan to update my score but I think the paper is quite reasonable. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We really appreciate your effort to review our paper and your recognition of our work! The constructive suggestions you gave during the rebuttal session are greatly helpful in improving the quality of our paper. Thanks again for your time and the meticulous review!
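As an aside on the MLE dynamics-model training mentioned in (Q2) of the rebuttal above, the idea can be sketched in miniature (a hedged illustration: a closed-form diagonal-Gaussian fit of state deltas stands in for the MLP ensemble of Eq. 13; the drift value, noise scale, and dimensions are made up):

```python
import numpy as np

def fit_gaussian_dynamics(states, next_states):
    """Closed-form MLE of a diagonal Gaussian over state deltas s' - s
    (a stand-in for a dynamics MLP trained by maximum likelihood)."""
    deltas = next_states - states
    mu = deltas.mean(axis=0)
    sigma = deltas.std(axis=0) + 1e-6
    return mu, sigma

def log_likelihood(mu, sigma, states, next_states):
    """Per-sample Gaussian log-likelihood of observed transitions."""
    d = next_states - states
    return (-0.5 * ((d - mu) / sigma) ** 2
            - np.log(sigma) - 0.5 * np.log(2.0 * np.pi)).sum(axis=-1)

# Synthetic target-domain transitions with a true drift of 0.3 per dim.
rng = np.random.default_rng(1)
s = rng.normal(size=(256, 4))
s_next = s + 0.3 + 0.05 * rng.normal(size=(256, 4))
mu, sigma = fit_gaussian_dynamics(s, s_next)
print(mu)  # recovered drift, close to 0.3 in each dimension
```

In the paper's setting the fitted model would also condition on actions and predict the reward; this sketch keeps only the likelihood-maximization core.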
Summary: The paper is concerned with the online dynamics adaptation problem, where an agent is tasked to generalize from a source domain with cheap access to a target domain with a limited online interaction budget. The approach filters what data from the source domain is used in the target domain based on whether the state-action pair will result in a state of similar value in both source and target domain. This approach is theoretically justified and motivated in a toy example. The authors show empirical results in modified versions of the Deepmind control suite. Strengths: The paper benefits from a relatively tight relation between theoretical motivation and practical implementation. The "value discrepancy perspective" is original in this context, and the authors carefully compare and ablate their method. The paper is easy to read and to understand. As sim2real transfer is a very real and pressing issue in robotics and control, I think the paper makes a significant contribution. Weaknesses: The paper could potentially benefit from additional experiments in environments that are not in the "short-term control" regime that is typical for the Deepmind control suite, but more long-horizon, like physical manipulation scenarios or a more complex version of the maze that is used for the motivating example. Since in such scenarios value assignment is generally harder to do correctly, it might be interesting to see how the performance of value-based data sharing is affected in such a case. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I did not really understand why the fact that DARC learns overly pessimistic value estimations should be (as argued in section 4.1 and elsewhere) a direct consequence of DARC being dynamics-based rather than value-based?
Would it not be possible to create a "more forgiving" version of DARC that, for example, uses a data-filtering mechanism rather than the (arguably over-pessimistic) reward shaping mechanism it currently uses? If I did not misunderstand the motivating example, I feel like it does not really disentangle the two aspects of (1) reward shaping vs. data filtering and (2) dynamics-based vs. value-based, but then the difference in results is mostly assigned to (2). The argument that value difference is more long-horizon than dynamics difference makes sense to me, though. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations section is quite brief. I appreciate that space is very constrained, but maybe it would improve the paper to expand this a little for the camera-ready version. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback, and we will respond to each one of your questions as follows. **(W1) The paper could potentially benefit from additional experiments in environments that are not in the "short-term control" regime ...** We appreciate your suggestion regarding additional experiments in long-horizon tasks to further investigate the performance of our method. Indeed, learning the value function in such environments can be challenging, and our method relies on the value function for data filtering. We think that addressing the difficulty of learning value functions in long-horizon tasks might be beyond the scope of the current paper. However, we believe that integrating additional mechanisms could be beneficial, and the simplicity of VGDF allows for these potential extensions to tackle the issue. For instance, we could explore using Episodic Control [1] or enhanced credit assignment techniques [2,3,4] to handle long-term credit assignment problems and improve performance in long-horizon tasks. These could be interesting future directions to enhance the applicability of VGDF in more complex environments. **(Q1) I did not really understand why the fact that DARC learns overly pessimistic value estimations should be ...** Thank you for pointing out the difference between DARC and VGDF in terms of using reward-shaping mechanisms. To disentangle these two aspects, we have introduced a variant of DARC that performs data filtering with estimated dynamics difference in Appendix F.5. The results presented in Fig.15 demonstrate the superiority of the value-guided perspective in three out of four environments, highlighting the effectiveness of our value-based approach in comparison to dynamics-based methods. **(Limitation) The limitations section is quite brief. 
I appreciate that space is very constrained, but maybe it would improve the paper to expand this a little for the camera-ready version.** We are grateful for your feedback regarding the limitations of our work. In the next version of this paper, we will extend the limitation discussion to address potential challenges and areas for improvement, as suggested. [1] Hu H, Ye J, Zhu G, et al. Generalizable Episodic Memory for Deep Reinforcement Learning. International Conference on Machine Learning. PMLR, 2021: 4380-4390. [2] Arjona-Medina J A, Gillhofer M, Widrich M, et al. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 2019, 32. [3] Raposo D, Ritter S, Santoro A, et al. Synthetic returns for long-term credit assignment. arXiv preprint arXiv:2102.12425, 2021. [4] Gangwani T, Zhou Y, Peng J. Learning guidance rewards with trajectory-space smoothing. Advances in Neural Information Processing Systems, 2020, 33: 822-832. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: I thank the authors for their response, and for pointing out the additional experiments in appendix F.5, which I did not find initially, and that are indeed interesting. I had no major concerns initially, and I maintain my original score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We sincerely thank you for your recognition of our work! We really appreciate your effort to review our paper and your valuable comments! Thanks a lot.
Summary: This paper considers a setting of online dynamics adaptation, where the goal is to train a near-optimal policy in the target domain using transition data from the source domain and the target domain with different dynamics. The authors propose to select source domain data to train Q-functions if the value discrepancy between the source and the target domain is minimal. The authors propose Fictitious Value Proximity (FVP) that represents the likelihood of the source domain state value and select source domain transitions within the top 25% quantile of FVP. The authors evaluated the proposed method on environments with different dynamics, including kinematics and morphology change. Strengths: 1. This paper tackles the problem of generalizing policies across dynamics mismatch, which is significant in reinforcement learning. 2. The writing and the clarity are good. Also, this paper includes a theoretical performance bound controlled by value difference to support the claim. Weaknesses: 1. The proposed idea seems to be simple and obvious. VGDF wants to select source domain data with similar state transition dynamics for additional training data, and it uses a value function to measure the similarity of state transition dynamics. 2. Although VGDF demonstrates superior performance to baselines in some environments, it has comparable or inferior results to a fine-tuning method on the target domain in other environments. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. There seems to be a discrepancy between the method described in the paper and the code implementation of it. In the paper, VGDF selects the source domain samples based on the likelihood of the next state value $V(s')$. On the other hand, in the code, VGDF selects the source domain samples based on the likelihood of $r + \gamma V(s')$, which approximates the current state value $V(s)$.
In the code, the ensemble of dynamics models predicts the reward $r$ and the next state $s'$ in the target domain (line 286 in vgdf.py). These predictions are then used to compute the Fictitious Value Ensemble (FVE) of $r + \gamma V(s')$ (lines 244 and 303 in vgdf.py) and the Fictitious Value Proximity (FVP) of $r + \gamma V(s')$ on source domain samples (line 261 in vgdf.py). This means that VGDF chooses source domain samples based on $V(s)$ instead of $V(s')$, which contradicts the description in the paper. It is unclear why the authors chose to use $r + \gamma V(s')$ instead of $V(s')$ in the code implementation. 2. According to Fig. 7, using $\pi_E$ seems insignificant. How exactly does the exploration policy behave in the source domain? I wonder whether maximizing $Q_{UB} = \max\{Q_1, Q_2\}$ actually leads to optimistic exploration. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors included the limitations in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
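On the question of whether $\max\{Q_1, Q_2\}$ yields an optimistic estimate, a quick simulation illustrates the effect (a toy sketch, not the paper's implementation; the true value, noise scale, and sample count are made up): for unbiased ensemble members with independent noise, the expected maximum over-estimates the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_q, sigma, n = 1.0, 0.5, 100_000
# Two ensemble members: unbiased estimates of the same true value,
# perturbed by independent Gaussian noise.
q1 = true_q + rng.normal(0.0, sigma, size=n)
q2 = true_q + rng.normal(0.0, sigma, size=n)
q_ub = np.maximum(q1, q2)
# For two iid Gaussian estimates, E[max] = true_q + sigma / sqrt(pi),
# i.e. roughly 1.28 here: an optimistic, upper-bound-flavoured estimate.
print(q1.mean(), q_ub.mean())
```

So the max operator is optimistic by construction whenever the members disagree; how useful that optimism is for exploration is the empirical question the ablation addresses.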
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, and we appreciate the opportunity to clarify and improve our paper based on your suggestions. **(W1) The proposed idea seems to be simple and obvious. VGDF wants to select source domain data with similar state transition dynamics for additional training data, and it uses a value function to measure the similarity of state transition dynamics.** Thank you for raising the question regarding the difference between value difference and dynamics difference and our approach to data filtering. We would like to clarify that dynamics difference refers to the discrepancy between a source domain transition and the corresponding target domain transition with respect to the transition probabilities. On the other hand, value difference measures the discrepancy between the target domain's next state and the source domain's next state with respect to their values. Thus, a minor dynamics difference of a transition can lead to a minor value difference, while a minor value difference does not necessarily imply a minor dynamics difference. Filtering transitions based on small value differences is not intended to obtain transitions with minor dynamics discrepancies. As discussed in Section 4.1, methods solely relying on dynamics discrepancies tend to provide overly pessimistic assessments of transitions, even though some transitions with significant dynamics differences could be beneficial for target domain policy training. Motivated by this observation, we devised an alternative approach, which involves filtering source domain transitions based on value discrepancies. These value discrepancies ensure that the selected transitions are equivalent to the corresponding target domain transitions with respect to value function training (as discussed in Lines 160-165). Additionally, the dynamics discrepancy considers only single-step shifts, whereas the value discrepancy takes into account the long-term influence of the transition.
Theoretical analysis reveals distinct performance bounds for the two discrepancies, and empirical results validate the soundness of the value discrepancy perspective. **(W2) Although VGDF demonstrates superior performance to baselines in some environments, it has comparable ...** As for the performance of Finetuning, we believe that finetuning can be regarded as the most fundamental method of behavior transfer. It is possible to collect high-valued transitions by performing the pretrained behaviors in the target domain, which further benefits the policy training during finetuning. In previous work [4], finetuning has even been shown to be stronger than several meta-learning algorithms. It is therefore not surprising that finetuning has comparable performance to VGDF in a few cases. **(Q1) There seems to be a discrepancy between the method described in the paper...** We appreciate your observation regarding the implementation comparison of TD targets ($r(s,a) + \gamma Q(s',a')$) and next-step values $V(s')$. As we focus on the setting without reward function shifts ($r: S\times A \rightarrow \mathbb{R}$), the reward of the current step can indeed be considered equivalent, since it derives from the same state-action pair $(s_{\text{src}},a_{\text{src}})$. Moreover, we can use the $Q$-function to approximate the $V$-value by taking the expectation over the current policy. However, to provide empirical evidence for the theoretical approximation, we perform additional ablation experiments on using TD targets versus next-step values. The results presented in Table 3 demonstrate that the performance is not significantly influenced by the implementation difference mentioned here.
| Algorithm | HalfCheetah – broken back thigh | HalfCheetah – no thighs | Hopper – big head |
| ---- | ---- | ---- | ---- |
| VGDF – TD target | $4735 \pm 340$ | $3579 \pm 198$ | $3154 \pm 121$ |
| VGDF – V | $5016 \pm 259$ | $3785 \pm 213$ | $3187 \pm 65$ |

Table 3: Performance of VGDF using different implementation mechanisms for data filtering. **(Q2) According to Fig. 7, using $\pi_E$ seems insignificant. ...** Using the maximum of a Q ensemble has been shown to obtain an optimistic estimate or an approximate upper bound of the value in prior works [1,2,3]. The goal behind employing this mechanism is to encourage the agent to collect source domain transitions that might be potentially high-valued with respect to the target domain. While the improvement induced by the exploration technique might not be significant, the mechanism still yields a non-trivial sample-efficiency improvement in specific environments. $\quad$ We hope these explanations address your concerns and improve the clarity of our paper. If you have any further suggestions or questions, please feel free to share them with us. We value your feedback and are committed to addressing all aspects to enhance the quality of our work. [1] Ciosek K, Vuong Q, Loftin R, et al. Better exploration with optimistic actor critic. Advances in Neural Information Processing Systems, 2019, 32. [2] Lee K, Laskin M, Srinivas A, et al. Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. International Conference on Machine Learning. PMLR, 2021: 6131-6141. [3] Moskovitz T, Parker-Holder J, Pacchiano A, et al. Tactical optimism and pessimism for deep reinforcement learning. Advances in Neural Information Processing Systems, 2021, 34: 12849-12863. [4] Zhao M, Abbeel P, James S. On the effectiveness of fine-tuning versus meta-reinforcement learning. Advances in Neural Information Processing Systems, 2022, 35: 26519-26531.
--- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I appreciate the author's responses. **Regarding (Q1) using TD target instead of next-step value in the implementation of VGDF** Thank you for clarifying the comparison between using Fictitious Value Ensemble (FVE) and Fictitious Value Proximity (FVP) of TD target $r(s, a) + \gamma Q(s', a')$ and using those of next-step value $V(s')$. To prevent any potential confusion among readers, there are a couple of aspects to be clarified in the paper. Firstly, it would be helpful if the authors could elaborate on their reason for choosing to learn the target domain reward function $r(s, a)$ and for providing results based on FVE and FVP of TD target $r(s, a) + \gamma Q(s', a')$ rather than next-state value $V(s')$, even under the assumption of equivalent rewards between domains at the current step. Additionally, it would be beneficial if the authors could address why the differences between the methodology proposed in the paper and its subsequent implementation were not explicitly discussed within the paper itself. The differences seem significant, but the only thing mentioned is that the dynamics model outputs the current-step reward in addition to the next state, as in the appendix. **Regarding (Q2) optimistic exploration $\pi_E$** Thank you for clarifying the approach of using the maximum of the Q ensemble for optimistic estimation. The references in the rebuttal demonstrate the application of the upper confidence bound of Q defined by $\mu_Q + \beta \sigma_Q$, utilizing both empirical mean ($\mu_Q$) and standard deviation ($\sigma_Q$) of the Q ensemble. Employing the maximum of the Q ensemble can be conceptually understood as setting the parameter $\beta$ to 1, a point emphasized in [1]. Given that this approach is already present in the literature, the authors are encouraged to reference the prior works in the paper. 
--- Reply to Comment 1.1.1: Title: Response to Reviewer zSdL Comment: We appreciate your valuable feedback, and we hope the following explanations help address your concerns. **(Q1)** Regarding using the TD-target instead of the value, there are two points of confusion, if we understand correctly: (1) Why learn the target domain reward function $r_{tar}(s, a)$ if we have already assumed that the reward functions across domains are identical? (2) Why are the differences between using the TD-target and using the value not explicitly discussed in the paper? For the first question, we would like to clarify that the reward function is not purposefully devised, and our decision to learn the reward function is rooted in the implementation commonly found in Model-based Reinforcement Learning (MBRL) works. To ensure compatibility and prevent unexpected issues during training, we followed the codebase of the MBRL work [1] to train the dynamics model, wherein the dynamics model is realized as an MLP $P_{\theta}(s',r|s, a): S\times A \rightarrow \mathbb{R}^{|S|+1}$. In their official implementation, the reward function is not learned via some individual module; instead, the reward of the current step $\hat{r}(s,a)\in \mathbb{R}^1$ is predicted as one of the elements of the output vector. In conclusion, the reason for learning the reward function is that we utilize the common implementation of the dynamics model to prevent unexpected problems during training. Since we focus on the setting with identical reward functions (i.e., $r_{src}(s, a)=r_{tar}(s, a)$) and shifted dynamics, comparing the paired TD-targets {$r_{tar}(s_{src}, a_{src}) + \gamma V_{tar}(s_{tar}'), r_{src}(s_{src}, a_{src}) + \gamma V_{tar}(s_{src}')$} is identical to comparing the paired values {$V_{tar}(s_{tar}'), V_{tar}(s_{src}')$}.
Assuming the paired values are equal, the equivalence between the paired TD-targets can be derived as follows: $$ V_{tar}(s_{tar}') = V_{tar}(s_{src}') \quad \Rightarrow \quad r_{tar}(s_{src},a_{src})+\gamma V_{tar}(s_{tar}') = r_{src}(s_{src},a_{src})+\gamma V_{tar}(s_{src}'). $$ Regarding the second problem, we would like to start with the motivation proposed in Lines 162-164: "*..the paired transitions are nearly equivalent for **temporal difference learning** if the induced value estimations are close (i.e., $|V (s_{src}') - V (s_{tar}')|< \epsilon$)*". Temporal difference learning fits the value of a state-action $Q(s, a)$ to the corresponding TD-target $r(s, a)+\gamma V(s')$. Supported by the problem setting stated in Lines 99-100 that the two MDPs share the same reward function $r(s, a)$, the closeness between the paired values is directly proportional to the closeness between the paired TD targets: $$|V (s_{src}') - V (s_{tar}')| \propto |(r(s,a) + \gamma V (s_{src}')) - (r(s,a) + \gamma V (s_{tar}'))|.$$ Thus, we respectfully disagree with the claim that the difference is significant, since the two implementations both align with our motivation and are two forms of the same insight. Furthermore, the supplemented experiments have demonstrated that the difference makes no significant variation in empirical performance, either. However, we do apologize for the inadequate discussion of the implementation details, which should have been mentioned in the Appendix, and we appreciate your valuable suggestions, which will be incorporated into the revised manuscript to strengthen the clarity of our work. **(Q2)** Regarding the optimistic exploration technique, we agree with you about the missing reference paper. We appreciate your feedback and will mention the related work within the context (Lines 216-223) when preparing the next version of the manuscript. We hope these explanations address your concerns and improve the clarity of our paper.
If you have any further suggestions or questions, please feel free to share them with us. [1] Janner M, et al. When to trust your model: Model-based policy optimization. NeurIPS 2019. --- Reply to Comment 1.1.2: Title: Supplementary responses to Reviewer zSdL Comment: As we approach the discussion deadline, we kindly mention that we have not yet received your feedback regarding the effectiveness of our rebuttal in addressing the concerns raised. In light of this, we would like to provide additional clarification to ensure that we have adequately addressed any lingering uncertainties. Considering your main concern about the difference between the TD-target and the V-value, we would like to provide additional explanations: 1. We focus on the problem setting wherein reward functions remain consistent across both the source and target domains. This principle is also evident in the Mujoco domains, where paired domains share an identical reward function regardless of the dynamics change. For instance, in HalfCheetah, the reward hinges on the robot's x-velocity and action norm, independent of morphology variations such as changes in robot height. 2. $V(s')$ equals $\mathbb{E}_{a'\sim \pi(\cdot|s')}[Q(s',a')]$, allowing us to use the Q-function to estimate V-values. Furthermore, since our SAC backbone already involves Q-function learning, we leverage the Q-function (as discussed in Lines 208-210) to avoid learning a separate V-function. In conclusion, though using the TD-target and using the V-value appear different, the discrepancy between paired TD-targets and that between paired V-values are equivalent in our problem setting and our empirical investigations, as formulated in our last response. We genuinely hope that this supplementary explanation enhances the clarity of our response and successfully addresses the concerns.
As the discussion deadline approaches, we eagerly await your feedback on whether our response resonates with your expectations and whether you would consider reevaluating our work. If you have any further suggestions or questions, we sincerely welcome you to share them with us.
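The Q-based value estimate mentioned in point 2 above, $V(s') = \mathbb{E}_{a'\sim\pi(\cdot|s')}[Q(s',a')]$, can be sketched as a simple Monte Carlo average; the policy and Q-function below are toy stand-ins, not the paper's networks:

```python
import numpy as np

def estimate_v(s_next, sample_action, q_fn, n_samples=16):
    """Monte Carlo estimate of V(s') = E_{a' ~ pi(.|s')}[Q(s', a')]."""
    actions = [sample_action(s_next) for _ in range(n_samples)]
    return float(np.mean([q_fn(s_next, a) for a in actions]))

# Toy stand-ins: a deterministic "policy" and a quadratic Q-function.
policy = lambda s: 0.5 * s
q = lambda s, a: -(a - s) ** 2
v = estimate_v(2.0, policy, q)  # Q(2.0, 1.0) = -1.0 for every sample
```

With a stochastic policy, averaging more sampled actions reduces the variance of the estimate; with SAC, `q_fn` would be the learned critic.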
Summary: Disclaimer: I found it challenging to fully comprehend the paper. This may be due to either inadequate clarity in the writing or my own limitations in understanding the subject matter. This work introduces a method called Value-Guided Data Filtering (VGDF), which aims to enable online dynamics adaptation. By utilizing a set of state-action pairs from a source domain, VGDF selects relevant transitions to train a policy that performs well in a target domain with differing dynamics, without requiring extensive interactions with the target domain. To assess the effectiveness of the proposed method, experiments are conducted on four Gym Mujoco environments that feature diverse dynamics shifts, such as kinematic changes and morphological variations. Strengths: - The main results depicted in Figure 4 exhibit considerable strength and indicate that VGDF outperforms the baseline methods. - The proposed method is supported by a theoretical analysis presented in Section 4. - The authors have conducted a good number of analyses, including ablation studies and quantification of dynamics mismatch. Weaknesses: The main weakness of this paper lies in its lack of clarity in communication and presentation. Overall, I found it challenging to follow the arguments and ideas presented. Furthermore, it remains unclear what the significant contributions of this work are and what the novel aspects of the proposed filtering mechanism are. For instance, in lines 52-58, the authors attempt to summarize their contributions. However, empirically demonstrating the superiority of their method (contribution number 4) is not a contribution in itself, as every claim requires proper evaluation. Another weakness of this work is the limited breadth of the experimental analysis. While the main results in Figure 4 undoubtedly showcase excellent performance for VGDF in Gym Mujoco environments, it is uncertain whether this generalizes to other environments beyond that scope.
Technical Quality: 3 good Clarity: 1 poor Questions for Authors: I strongly recommend that the authors undertake a significant revision of their paper, particularly focusing on Section 4 and Section 5. Additionally, it is important to ensure that the paper is self-contained and does not necessitate reading the appendix. Therefore, I suggest that the authors incorporate additional details regarding the experimental setup from the appendix into the main body of the paper. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The authors seem to have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, and we appreciate the opportunity to clarify and improve our paper based on your suggestions. **(W1) The main weakness of this paper lies in its lack of clarity in communication and presentation. ...** We apologize for any confusion regarding the contributions of our work. To address this, we provide a detailed description of the contributions as follows: 1. In Section 4, we present a motivating example that highlights the limitations of prior methods using dynamics-based measurements. We provide theoretical analysis to interpret the results and introduce our novel value-based perspective. Unlike dynamics-based measurements that explicitly evaluate domain differences, our proposed perspective quantifies the transferability of transitions with respect to the learning process itself. This enables superior performance in scenarios with significant domain differences, as demonstrated in Section 4.1 and Section 6. 2. In Section 5.1, we derive the practical algorithm VGDF based on the insights proposed in Section 4.2. 3. To extend the applicability of our method beyond the online-source-with-online-target setting, we introduce a variant of VGDF called VGDF+BC in Section 5.2, which is suitable for offline-source-with-online-target scenarios. 4. In Section 6, we conduct extensive experiments to investigate the performance of our method. To simulate diverse dynamics shift scenarios, we design kinematic shifts and morphology shifts. Additionally, we perform ablations to analyze the effectiveness of our method. **(W2) Another weakness of this work is the limited breadth of the experimental analysis. ...** Works on dynamics adaptation problems typically use Mujoco as a standard testbed to investigate performance, including a large number of peer-reviewed papers [1,2,3,4,5,6,7,8,9,10].
Since the empirical investigation of dynamics adaptation problems requires simulating dynamics shift scenarios, Mujoco turns out to be a well-suited platform thanks to the simplicity of changing the physical properties of the simulation model. Though we only perform experiments in Mujoco, we devise two different types of dynamics shifts (Kinematics and Morphology), which differ from those in prior peer-reviewed papers. Prior works typically only consider one type of dynamics shift, either Kinematics [1,2,3,4,5,6,7] or Morphology [8,9,10]. Thus, the empirical results over both types of scenarios further demonstrate the superiority of our method. The details of the environments are presented in Appendix D. **(Q1) I strongly recommend that the authors undertake a significant revision of their paper...** We understand your concern regarding the manuscript revision. After carefully considering your feedback and the feedback from other reviewers, we believe that our manuscript is self-contained, and the main contents are presented in the main paper in a well-organized form. Deferring the experimental settings to the Appendix is common practice due to page limitations. We have ensured that Sections 4 and 5 provide sufficient motivation, insights, and algorithm designs for a clear understanding of our work. Furthermore, each section starts with a conclusive paragraph for a comprehensible presentation. However, we would be grateful for any specific suggestions on revising Sections 4 and 5. [1] Kumar S, Kumar A, Levine S, et al. One solution is not all you need: Few-shot extrapolation via structured maxent rl[C]. Advances in Neural Information Processing Systems, 2020, 33: 8198-8210. [2] Eysenbach B, Chaudhari S, Asawa S, et al. Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers[C]. International Conference on Learning Representations. 2020. [3] Shen Q, Li Y, Jiang H, et al. Deep reinforcement learning with robust and smooth policy[C].
International Conference on Machine Learning. PMLR, 2020: 8707-8718. [4] Lee K, Seo Y, Lee S, et al. Context-aware dynamics model for generalization in model-based reinforcement learning[C]. International Conference on Machine Learning. PMLR, 2020: 5757-5766. [5] Lee S, Chung S Y. Improving generalization in meta-rl with imaginary tasks from latent dynamics mixture[C]. Advances in Neural Information Processing Systems, 2021, 34: 27222-27235. [6] Ball P J, Lu C, Parker-Holder J, et al. Augmented world models facilitate zero-shot dynamics generalization from a single offline environment[C]. International Conference on Machine Learning. PMLR, 2021: 619-629. [7] Mu Y, Zhuang Y, Ni F, et al. DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning[C]. Advances in Neural Information Processing Systems, 2022, 35: 27563-27575. [8] Liu X, Pathak D, Kitani K M. REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer[C]. International Conference on Machine Learning. 2022. [9] Chiappa A S, Marin Vargas A, Mathis A. DMAP: a Distributed Morphological Attention Policy for learning to locomote with a changing body[C]. Advances in Neural Information Processing Systems, 2022, 35: 37214-37227. [10] Hong S, Yoon D, Kim K E. Structure-aware transformer policy for inhomogeneous multi-task reinforcement learning[C]. International Conference on Learning Representations. 2021. --- Rebuttal Comment 1.1: Comment: I appreciate your response and clarifications. However, I believe that the rebuttal does not adequately address my concerns, specifically in relation to W2. Many previous works (such as [1,2,3]) have conducted experimental analyses across several different environments. Given this, I would uphold my current evaluation. [1] Lee, K., Seo, Y., Lee, S., Lee, H., & Shin, J. (2020, November). Context-aware dynamics model for generalization in model-based reinforcement learning. 
In International Conference on Machine Learning (pp. 5757-5766). PMLR. [2] Barekatain, M., Yonetani, R., & Hamaya, M. (2019). Multipolar: Multi-source policy aggregation for transfer reinforcement learning between diverse environmental dynamics. arXiv preprint arXiv:1909.13111. [3] Nagabandi, A., Clavera, I., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., & Finn, C. (2018). Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347. --- Reply to Comment 1.1.1: Title: Rebuttal to Reviewer Xugz Comment: Thank you for your review, and we appreciate your feedback. To address your concern about the limited breadth of empirical investigations, we tried our best to obtain results on two additional environments, PyBullet-Hopper and PyBullet-HalfCheetah, which use Bullet as the physics engine instead of Mujoco. We first provide the details of the dynamics gap in the new environments. In both environments, we regard the original environments as the source domains. In PyBullet-Hopper, we devise the target domain by increasing the torso size from 0.05 to 0.15 to simulate a morphology change. In PyBullet-HalfCheetah, we constrain the joint range of the front thigh from $[-1.5, 0.8]$ to $[-1.5, 0.4]$ and the joint range of the front shin from $[-1.2,1.1]$ to $[-1.2,0.1]$, to simulate the broken-joint scenario widely used in the related works mentioned in our first rebuttal. Due to the limited time left for the discussion, we compare VGDF to the main baselines: DARC and Finetune. The results are shown in the following two tables, where we denote all algorithms in the form **"Alg (the number of source domain samples, the number of target domain samples)"**. All results are **averaged across five runs with different seeds**.
The results demonstrate that VGDF outperforms or matches the baselines even with a smaller number of target/source domain samples in these environments, demonstrating the generalizability of our method. **As the discussion deadline is imminent, we will report the complete experimental results in the next version of our paper.**

| Return in the target domain | VGDF (200k, 20k) | DARC (1M, 100k) | Finetune (1M, 20k) | Finetune (1M, 100k) | Zero-shot (1M, N/A) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| PyBullet-Hopper | $854 \pm 52$ | $99 \pm 20$ | $240 \pm 189$ | $869 \pm 32$ | $37 \pm 3$ |

| Return in the target domain | VGDF (100k, 10k) | DARC (1M, 100k) | Finetune (1M, 10k) | Finetune (1M, 100k) | Zero-shot (1M, N/A) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| PyBullet-HalfCheetah | $658 \pm 60$ | $679 \pm 131$ | $653 \pm 51$ | $678 \pm 38$ | $-316 \pm 77$ |

Finally, we would like to point out that the work [1] only uses four Gym Mujoco environments and two classic control tasks (Pendulum and CartPole) that also derive from Gym. Besides, most simulation environments in the work [2] are variations of Gym Mujoco environments. [1] Lee, K., Seo, Y., Lee, S., Lee, H., & Shin, J. (2020, November). Context-aware dynamics model for generalization in model-based reinforcement learning. In International Conference on Machine Learning (pp. 5757-5766). PMLR. [2] Nagabandi, A., Clavera, I., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., & Finn, C. (2018). Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347. --- Rebuttal 2: Comment: Dear reviewer Xugz, Thank you for the review. As we near the end of the discussion, does the author response help clarify understanding or do you still have concerns with clarity? I agree that papers should be largely self-contained though it is inevitable that some details get pushed to the appendix in many works.
Are there some details you think are critical to move to the main body? Note that if the paper is accepted, the authors will have an additional page to add results or descriptive details. Thanks, Your AC
Rebuttal 1: Rebuttal: We appreciate valuable feedback and suggestions from all reviewers. If you have any further suggestions or questions, please feel free to share them with us. We value your feedback and are committed to addressing all aspects to enhance the quality of our work. Pdf: /pdf/ab14660129f582d9df02302ec13f245f0b29c2d3.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper focuses on the online dynamics adaptation problem, where the agent has access to a large number of samples or an offline dataset from a source domain and must adapt with a smaller number of samples in a target domain. To address this problem, the paper introduces a framework built on a value discrepancy perspective. Specifically, they consider transitions with consistent value targets equivalent, even if they have different dynamics. This is different from prior works that focus purely on dynamics discrepancy and encourage the agent to avoid state-actions with different dynamics. Toward this end, they show some theoretical analysis demonstrating a bound on policy performance given value consistency. Given this perspective, they present the Value-Guided Data Filtering (VGDF) algorithm, which uses value consistency to do selective data sharing from the source domain to the target domain. The practical implementation of this algorithm involves learning an ensemble of Gaussian dynamics models for the target domain. These different models are used to generate different fictitious transitions that can be used to estimate a Gaussian distribution of potential values in the target domain for a given state-action pair from the source domain. Thus, the consistency of state-actions from the source domain is evaluated by Fictitious Value Proximity (FVP), which computes the likelihood of the estimated value of the source next state under the estimated Gaussian distribution of potential values in the target domain. They train a Q function for the target domain using the samples from the target domain, plus samples from the source domain if they are in the top $\xi$-quantile of likelihood estimates within the sampled minibatch. They train an evaluation policy and an exploration policy with SAC, with the main distinction being that the exploration policy is trained to optimize the max of the two Q functions instead of the minimum.
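The FVP filtering step summarized above can be sketched roughly as follows. This is a NumPy illustration with made-up batch shapes and a simplified Gaussian likelihood, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fvp_filter(src_next_values, ensemble_values, xi=0.25):
    """Value-guided filtering sketch (hypothetical shapes, not VGDF's code).
    For each source transition i:
      - ensemble_values[i] holds values of fictitious next states predicted
        by the target-domain dynamics ensemble,
      - src_next_values[i] is the value of the actual source next state.
    Fit a Gaussian to each row of ensemble values and keep transitions whose
    source-value log-likelihood lies in the top-xi quantile of the batch."""
    mu = ensemble_values.mean(axis=1)
    sigma = ensemble_values.std(axis=1) + 1e-6
    # Gaussian log-likelihood of the source next-state value (constants dropped).
    log_lik = -0.5 * ((src_next_values - mu) / sigma) ** 2 - np.log(sigma)
    threshold = np.quantile(log_lik, 1.0 - xi)
    return log_lik >= threshold

# Toy batch: 8 source transitions, 5 ensemble members each.
ens = rng.normal(0.0, 1.0, size=(8, 5))
src = ens.mean(axis=1) + rng.normal(0.0, 0.5, size=8)
mask = fvp_filter(src, ens, xi=0.25)  # boolean mask over the minibatch
```

Source transitions selected by the mask would then be mixed into the target-domain Q-function update alongside the target-domain samples.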
To tackle the setting where the agent only has an offline dataset collected in the source domain, they introduce a BC term to their optimization, similar to TD3 + BC. They evaluate their results in several Mujoco environments where the target domain introduces a kinematic shift or morphological shift to the robot. They generally find that their method outperforms the baselines in both the online and offline settings. The ablations demonstrate the benefits of the filtering by FVP and of training the separate exploration policy. Finally, they demonstrate how FVP can be used to analyze different types of dynamics shifts in the target domain. Strengths: The paper is well written, clear, and flows well. I believe an expert could reproduce the algorithm given the details in the paper. I believe the discussion on the value discrepancy perspective is an interesting contribution to the field that would interest other researchers. I think in particular the theoretical results in Section 4.2 and the motivating examples and figures in Section 4.1 do a great job demonstrating the validity of this perspective. As far as I am aware, VGDF is a novel and interesting algorithm that is well explained and justified by their value discrepancy perspective. The authors include good experimental results and ablations that demonstrate the effectiveness of their algorithm and specific design choices. Additionally, I think the setting of doing online dynamics adaptation from an offline dataset is a currently underexplored topic that should get more interest from the research community. VGDF + BC is a seemingly simple yet effective extension of VGDF that seems to do well in this setting. Weaknesses: In my opinion, there are no major weaknesses in this paper, but I do have 2 small criticisms I would like to be addressed. I would appreciate comparisons in Section 6.3 to more offline RL algorithms that were designed for offline pretraining and online finetuning, like IQL.
I believe there are many real-world scenarios where online samples are prohibitively expensive compared to simulated samples or previously collected offline data. Thus, I would appreciate more ablations with higher numbers for $\Gamma$, but fewer online samples. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Do you think your method will be effective in doing sim-to-real transfer? It seems to be a motivating example, but there are no results that directly indicate that your method will be better at sim-to-real transfer than prior approaches. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No obvious limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback; we respond to each of your questions below. **(W1) I would appreciate comparisons in Section 6.3 to more offline RL algorithms that were designed for offline pretraining and online finetuning, like IQL.** We would like to highlight the difference between algorithms for Offline-Pretraining-Online-Finetuning (OPOF) and our algorithm, which is designed for Online Dynamics Adaptation. Algorithms for OPOF typically do not consider the dynamics gap problem: the environments for offline training and online finetuning are identical. In contrast, Online Dynamics Adaptation mainly focuses on the dynamics gap problem, where a dynamics gap exists between the source domain and the target domain. Nevertheless, our algorithm is applicable to the offline-online setting. We compare VGDF-BC with IQL in the OPOF setting with a dynamics gap; the results shown in Table 1 verify the deficient performance of IQL when facing a dynamics gap.

| Algorithm | HalfCheetah – broken back thigh | Hopper – broken joints |
| ---- | ---- | ---- |
| IQL ($10^6$ offline pretraining + $10^5$ online finetuning) | $2114 \pm 141$ | $896 \pm 134$ |
| VGDF-BC ($10^6$ offline data + $10^5$ online data) | $4834 \pm 250$ | $2785 \pm 75$ |

Table 1: Performance comparison between IQL and VGDF-BC in the OPOF setting with a dynamics gap. **(W2) I believe there are many real-world scenarios where online samples are prohibitively expensive compared to simulated samples or previously collected offline data. Thus, I would appreciate more ablations with higher numbers for $\Gamma$, but fewer online samples.** We agree that online samples from real scenarios can be scarce and expensive. Ablations on the data ratio $\Gamma$ in the offline-online setting are important for evaluating our method under data-shortage scenarios. We have performed such ablations, and the results are presented in Table 2.
These experiments demonstrate that the asymptotic performance of VGDF-BC is not significantly influenced by higher $\Gamma$ values with fewer online samples. This finding highlights the robustness and effectiveness of our algorithm even with limited online samples.

| Data ratio $\Gamma$ | Hopper – big head |
| ---- | ---- |
| $\Gamma = 10$ (# source = $10^6$, # target = $1\times 10^5$) | $3060 \pm 60$ |
| $\Gamma = 15$ (# source = $10^6$, # target = $6.7\times 10^4$) | $3074 \pm 74$ |
| $\Gamma = 20$ (# source = $10^6$, # target = $5 \times 10^4$) | $2995 \pm 36$ |

Table 2: Performance of VGDF-BC with various data ratios $\Gamma$. **(Q1) Do you think your method will be effective in doing sim-to-real transfer?** We believe our data-sharing method is well suited to the sim2real problem. Indeed, recent work [1] has demonstrated that simply sharing experiences from tasks with different dynamics can benefit sim2real transfer. Current sim2real frameworks can be summarized by the four categories shown in Fig. 1 of the main paper. We think different methods fit different problem settings. For instance, the prevailing domain randomization approach for sim2real is essential when the target/reality is inaccessible. In contrast, our method focuses on scenarios where limited target domain data are available, which can lead to more directed and effective policy learning than domain randomization. However, our method does require online interactions with the target domain, which can raise safety issues in sim2real problems. Considering the advanced performance of our method in the dynamics adaptation setting, we believe integrating safe exploration techniques [2] into our framework would be an interesting future direction for devising novel sim2real frameworks.
Once again, we appreciate your valuable feedback and suggestions, and we will incorporate these points into the revised manuscript to strengthen the clarity and contributions of our work. If you have any further questions or concerns, please feel free to let us know. [1] Smith L, Kew J C, Li T, et al. Learning and adapting agile locomotion skills by transferring experience. arXiv preprint arXiv:2304.09834, 2023. [2] Thananjeyan B, Balakrishna A, Nair S, et al. Recovery rl: Safe reinforcement learning with learned recovery zones[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 4915-4922. --- Rebuttal Comment 1.1: Comment: I thank the authors for including these results. Considering that my initial concerns were quite minor, I do not plan to update my score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We really appreciate your effort to review our paper and your recognition of our work! The constructive suggestions during the rebuttal session are indeed helpful in improving our paper. Thanks again for your time and hard work!
Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions
Accept (spotlight)
Summary: The paper proposes Parsel, a framework that decomposes algorithmic reasoning problems into subparts, samples programs for each subpart and verify them. To decompose a problem, Parsel transforms it into an intermediate language that describes the functionality for each subpart and how the subparts depend on each other. To sample programs, Parsel uses existing language models trained on code. To verify the subparts and the whole program, Parsel analyzes their dependencies and use existing test generation techniques. Parsel is able to improve the performance in solving code problems and virtual robot control. Strengths: 1. The overall idea of divide-and-conquer and verifying subparts is reasonable and intuitive. 2. Using an intermediate language makes both the reasoning and parsing easier. 3. The idea for involving human programmers to help in the loop is inspirational for designing better code assistants. Weaknesses: ### 1. The comparison between Parsel and Codex (codeT improved) is not fair. Therefore, the claim on L145 that Parsel “substantially improves over prior work, from 14.5% to 25.5%” does not hold. If I understand correctly, this comparison is demonstrated in Figure 4, where the number 14.5% comes from the blue point labeled **“50 (improved)”** and the number 25.5% comes from the green point labeled **“8x16”**. In my opinion, this comparison is unfair because of two reasons. - **First, the sample budgets are different.** As claimed on L142, the comparison uses an “effective number of complete programs” to represent the sample budget. So the “50 (improved)” setting has a sample budget of 50 while the “8x16” has 128 sample budget. On the own interpolated green curve in Figure 4, Parsel’s performance at sample budget 50 is similar to that of Codex. 
- **Second, Parsel uses significantly more evaluation budget.** Even if we ignored the first issue and assumed “50 (improved)” and “8x16” had the same sample budget, the comparison would still be unfair because of significant differences in evaluation budgets. “50 (improved)” achieved a pass rate of 14.5% with **50 evaluations**. While ”8x16” achieved 25.5% with **10^6 evaluations**. I do understand that one advantage of Parsel is the ability to increase the number of evaluations without more samples, but it’s still unfair to compare the result from 10^6 evaluations with that from 50 evaluations. Because arguably, evaluation budget is more important than sampling budget in code generation, as a failed evaluation would result in penalty in programming competitions and crashes in real production. No one would want a code generation system that gets the correct program after 10^6 failures. It’s both unrealistic and unfair. From the reported data, a fairer pass rate for “8x16” would be 4% (when the evaluation budget is 100), which is much worse than “50 (improved)”. To give more idea about how large a 10^6 evaluation budget is and how much improvement it gets, AlphaCode achieves >35% pass rate on CodeContests (Figure 8 from https://arxiv.org/pdf/2203.07814.pdf) with 10^6 evaluations and 6% pass rate with 100 evaluations. Judging from CodeT’s results, CodeContests is harder than competition-level APPS problems. Because of these two reasons, I find the claim on Parsel’s ability to solve more competition-level problems not reliable. ### 2. Lack of details and analyses on how verification of decomposed functions helps. Parsel emphasizes the importance of decomposition of the problem into modular functions (components) and the verification of these functions. 
While decomposition can be justified by the exponential evaluation budget (though it creates an unfair comparison) and the dropping pass rates when the number of chained components is large, how verifying these decomposed functions separately can help is not sufficiently justified, leaving several questions unanswered, to name a few: - **How are constraints (unit tests) for different components generated?** I understand that the test cases for the entire program are generated using CodeT, but how different components are verified is not clear. - **How many components have unit tests?** The paper claims that test-less component functions can be tested with the test cases for the entire program. I wonder how many components actually have unit tests or if most of them are test-less. - **Are unit tests mostly correct?** If not, what is the point of using them instead of just using the general test cases? - **Does testing components separately lead to better performance?** - **Do components that pass unit tests always build up to a correct program?** - **How much does partitioning the function reference graph into SCCs help?** The paper claims that for mutually dependent functions, grouping them into SCCs can help save some sampling budget. Does it really? To me, the results reported look like they are mostly from enumerating different combinations of functions and verifying them using global constraints. The claim in your TLDR that the problem is decomposed into subparts and solved by “verifying subparts, and then composing them” does not seem well supported. It’s more like “enumerating the combination of subparts and then verifying them as a whole”. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: I would love to see the answers to the questions in the Weakness section. I may consider raising the score if they are well answered. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review and helpful questions! > As claimed on L142, the comparison uses an “effective number of complete programs” to represent the sample budget. So the “50 (improved)” setting has a sample budget of 50 while the “8x16” has 128 sample budget. On the own interpolated green curve in Figure 4, Parsel’s performance at sample budget 50 is similar to that of Codex. Overall we agree that Codex 50 (Improved) shouldn’t be directly compared to Parsel 8x16. We believe the scaling properties of Parsel exceed those for Codex, but unfortunately it isn’t possible to verify this with, eg “Codex (Improved) 128”. Codex (Improved) refers to the version of Codex used in [11]. Unfortunately, it’s also hard to extrapolate a comparison here. They do not report more than 50 attempts; however, in their CodeContests results, there is little improvement from 50 to 100 attempts (7.1% to 8.8%). It is, of course, difficult to extrapolate, but a proportional improvement here would represent a change of 3.5% – even with 3x this, it would still underperform Parsel. We attempted to replicate their APPS results with Codex, but could not because 1) they do not provide their few-shot prompt and 2) we found we were rate limited substantially more often than when using Parsel and each generation took longer. (This is likely because we were generating longer completions each time. This made replicating their results at the same scale infeasible, especially given the new restrictions on Codex access.) > Second, Parsel uses significantly more evaluation budget. … Because arguably, evaluation budget is more important than sampling budget in code generation, as a failed evaluation would result in penalty in programming competitions and crashes in real production… From the reported data, a fairer pass rate for “8x16” would be 4% (when the evaluation budget is 100), which is much worse than “50 (improved)”. 
A key motivation behind this paper is the usefulness of tests in generating and identifying useful programs (i.e., test-driven development). Ultimately, whether evaluation or generation is more expensive will depend on use cases, but if the goal is to find a program passing standard unit tests, the compute necessary for generating hundreds of tokens from a large language model is many orders of magnitude more than that needed to evaluate an APPS program. There are many such problems where verification is far cheaper than generation. As you mention, given a similar evaluation budget, Parsel performs worse with Codex: this should come as no surprise when it is a format for representing algorithms that Codex has never been trained on. If anything, the fact that Parsel can overcome this gap is noteworthy. > How are constraints (unit tests) for different components generated? I understand that the test cases for the entire program are generated using CodeT, but how different components are verified is not clear… How many components have unit tests? The paper claims that for test-less component functions, they can be tested with the test cases for the entire program. There are multiple ways to generate constraints. First, one can just generate top-level constraints based on the task, as done in the original CodeT paper. We did this in the HumanEval experiments to make sure that the comparison to CodeT was fair. Second, in the case of human-written Parsel solutions, humans can write tests for relevant components. Third, some of the case studies, like the problem-solving example (Figure A.9), can be solved by generating constraints for all functions and applying the heuristic where we aim to maximize the minimum CodeT score of any function in an implementation. > Are unit tests mostly correct? If not, what is the point of using them instead of just using the general test cases?
We plan to include more discussion of CodeT [11] in the paper, but one of the key findings was that one doesn’t need all-correct tests to identify correct functions. It’s motivated by the Anna Karenina principle (“All happy families are alike; each unhappy family is unhappy in its own way”): if many functions pass the same tests, one is likely correct. [11] CodeT: Code generation with generated tests. (Chen et al., 2022) > Do components that pass unit tests always build up to a correct program? Not necessarily - the provided unit tests may not be comprehensive. As mentioned in Appendix S.2, one can also enable a backtracking flag, where a parent component will consider other implementations of child components if a solution cannot be found. However, we did not use this in the large-scale coding experiments, as these relied on unit tests only for the top-level functions. > How much does partitioning the function reference graph into SCCs help? The paper claims that for mutually dependent functions, grouping them into SCCs can help save some sampling budget… Does testing components separately lead to better performance? If n functions with m implementations each have no mutual dependencies, partitioning takes the search from O(m^n) combinations down to O(nm) evaluations; if the sizes of the SCCs are at most c, it takes it to O((n/c) m^c). So, as to whether it leads to better performance, the answer is certainly yes: some Parsel programs, like the Lisp interpreter (Appendix H), would be virtually impossible to synthesize without the decomposed testing. For complex programs with many functions, it is infeasible to consider combinations of all of their implementations at the same time. > The claim in your TLDR that the problem is decomposed into subparts and solved by “verifying subparts, and then composing them” does not seem well supported. This is a reasonable point.
Given that the primary focus of the paper is not on the verification, but rather on the composition and decomposition, we would be more than happy to update the TLDR to “Language models can solve algorithmic reasoning tasks by decomposing them, solving subparts, and composing them.” --- Rebuttal Comment 1.1: Comment: Thank you for the response! Most of my concerns are well addressed. However, I'm still not convinced by your argument on generation budget vs evaluation budget. I understand that in terms of computation cost, running large language models is way more expensive than running the programs they created. I think our disagreement might be on the meaning of tests here. It seems to me that you treat system tests (in codecontests) as merely a metric for correctness that language models and humans have access to during development. But to me, failing tests in a coding contest is the same as crashing in a production environment, which leads to serious consequences. It's not the computation cost of evaluation I'm talking about, it's the risk and consequences of program failure. Therefore, I'm not against using a large evaluation budget on your generated unit tests, but I am against an unfairly large evaluation budget on the system tests. --- Reply to Comment 1.1.1: Comment: There's a genuinely interesting philosophical question here. The question we (and, in our view, most prior work) were looking at with APPS is, given these particularly hard problems, limited by your generation budget, what's the chance you find a correct solution? But if you instead view each test as a successive submission, where any failure is like a production crash, you would almost certainly take a different approach in practice -- one more like the one we took in the HumanEval test generation experiments, one that uses interpreter feedback to revise solutions, or perhaps even one that uses the language model to simulate the result of tests. 
These two interpretations are important, but they highlight different concerns: if you can guarantee 100% success on the first submission on a problem, but it requires a trillion generated tokens, that's also not very useful in practice. A method's ability to efficiently generate reasonable solutions in the first place is one key bottleneck, and assuming an imperfect model, the ability to identify them is another. We believe a discussion around this would improve the paper, so we intend to add this - thank you! We could also run a pass@1 evaluation using generated tests, like in the current HumanEval experiment, for the camera-ready. Lastly, if you believe we've addressed most of your concerns, we would sincerely appreciate it if you raised your score.
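The combinatorial argument from this rebuttal thread (testing independent functions separately instead of jointly searching all whole-program implementation combinations) can be illustrated with a small counting sketch. The function names and implementation counts below are made up purely for illustration:

```python
from itertools import product

# Hypothetical setup: 4 independent functions, 3 candidate
# implementations each (names are illustrative only).
impls = {f: [f"{f}_v{i}" for i in range(3)] for f in ["a", "b", "c", "d"]}

# Joint search over whole-program combinations: m**n candidates.
joint_combinations = list(product(*impls.values()))
print(len(joint_combinations))  # 3**4 = 81

# With no mutual dependencies (trivial SCCs), each function can be
# validated on its own tests, so only n*m candidate evaluations are needed.
separate_evaluations = sum(len(v) for v in impls.values())
print(separate_evaluations)  # 4*3 = 12
```

With SCCs of size at most c, the same counting gives roughly (n/c) groups of m**c combinations each, which sits between the two extremes above.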
Summary: The paper presents a system called Parsel for program synthesis. First, an LLM predicts a Parsel program given a task specification or natural language plan. The Parsel program contains a hierarchy of functions, which may themselves contain nested functions. Each function is described with a function signature, a natural language description of what it does, and sometimes constraints (e.g., I/O examples) and references to other functions to call. Then, the Parsel synthesizer attempts to translate the Parsel program into a traditional program, e.g., a Python program. The LLM generates multiple implementations of each function and a combinatorial search is performed to find combinations of implementations that pass the constraints. Overall, the Parsel approach is shown to outperform directly generating solutions with Codex and AlphaCode on the APPS dataset of competitive programming problems, when controlling for the number of complete programs generated. Other experiments are performed on HumanEval (simple Python programming problems) and VirtualHome (robotic planning). Strengths: Originality: The Parsel language and overall approach is quite novel and creative, and in my opinion this is the paper’s biggest strength. Parsel’s motivation is clear: we want to decompose problems into multiple functions or groups of functions, such that each group can be implemented independently. However, it was previously not clear how this goal should be achieved, as it requires a planning step that decides how the problem should be decomposed. The Parsel approach nicely resolves this by providing a natural outline of the hierarchical decomposition, containing just the right amount of information to facilitate synthesis of individual functions, while allowing different implementations to be explored. Quality: The Parsel approach makes a lot of sense from a technical perspective. The experiments explore various aspects of Parsel.
Clarity: The motivation, intuitions, and related work are well written. Significance: These results are quite significant themselves, and the problem being tackled (program synthesis) is important too. Weaknesses: In my opinion, the paper’s main weakness is the lack of a deep analysis of *why* Parsel has its good performance. The approach has many parts and it is unclear which parts are the really important ones. Here are some potential reasons why Parsel is good, and potential experiments to gauge the importance of those reasons: 1. Predicting a **high-level sketch or plan** helps the model gain an overall understanding of the problem in a chain-of-thought kind of way, leading to better downstream predictions. * Reason 1 is partially analyzed in Lines 156 - 164 (when Codex is given the high-level plan, it performs much worse than Parsel given that plan). But I’m also interested in the other kind of ablation: how well does Parsel do without the high-level plan, going straight from the task specification to the Parsel program (on the APPS dataset)? 2. The **Parsel program itself** makes synthesis easier, since it contains a good decomposition with just the right amount of information for easy synthesis of the individual functions. * What if we replaced the Parsel program with a Codex prediction of the entire solution, which we then use as an outline? Specifically, we prompt Codex to implement a code solution that is decomposed into helper functions with docstrings. (If a function is not generated with a docstring, we can prompt an LLM to predict a docstring given the function and other relevant context from the task.) Then we take the function signatures, docstrings, and references (call graph) from the solution to serve as a replacement for the Parsel program. Thus, we sample new implementations of the individual functions and search for implementation combinations as usual. 3. 
The approach enables **trying many combinations of implementations** with a low cost (effective number of complete programs sampled). * What if we disallowed trying multiple combinations of implementations, i.e., only used pass@n x 1? 4. The authors might identify other potential reasons that I didn’t think of. By including these sorts of ablations, we would get a much clearer understanding of the relative importance of these factors. This could help readers gain a deeper intuition behind why Parsel works, so they may better adapt or extend the approach in future work. Even though my rating is positive right now, I *strongly encourage* the authors to perform these ablations, because I believe the paper will be much stronger (an amazing paper) with a deeper analysis of Parsel’s core ideas. Another weakness of the paper is that the writing could be made clearer about the algorithm and experiments. Please see the questions below. Please note: I will be happy to raise my score if I think these weaknesses are adequately addressed in a revision. Technical Quality: 3 good Clarity: 3 good Questions for Authors: ## About the algorithm If there is a function with constraints where none of the k implementations of the function pass the constraints, then what happens? I assume this causes the entire Parsel program to be unsuccessful, but this is not explicitly stated in the text. Does this lead to a tradeoff where, if the decomposition is too detailed with many individual functions, then there is a higher likelihood of a single function having no correct implementation? And, intuitively it seems useful to include constraints for all functions where the LLM can predict constraints confidently, but then more functions with constraints also leads to a higher likelihood of a single function having the wrong constraints. Can you comment on this “weakest link” issue? There is an exponential blowup in the number of function implementation combinations.
I understand that SCCs of the call graph and test dependency graph are used to reduce the number of functions considered at once, but still the exponential blowup remains. Do you impose some upper limit on the amount of combinations tried? Line 113 mentions *samples* -- when do you sample versus try all combinations, and how many samples are used? Does a Parsel program always define a single outermost function that serves as the solution to the problem specification? This would make sense given the rest of the algorithm but is not explicitly stated. Once we find an implementation for a function (or SCC) passing its constraints, is the implementation “locked in” permanently or does the algorithm also consider other implementations/combinations later? In other words, do we end up with at most 1 complete implementation for each Parsel program, which is then checked against the problem specification (possibly with hidden test cases)? Or, might we continue to search for other implementations of the Parsel program and evaluate those on the hidden tests too? The CodeT score is a core part of the approach in the case of generated tests and is referenced repeatedly for the HumanEval experiments. The paper should provide some background on CodeT. I know that Appendix F contains Parsel pseudocode in Parsel style, which might clear up some of the questions above. I did not read that pseudocode (Figure A.11 and A.12) carefully because it’s a full page long, and I hope the authors can write a more succinct version in a more traditional style in the main paper. I also think Parsel programs are not as clear as actual pseudocode, because they lack detail about how inner functions are used to implement the outer function. It’s as if one had to understand how a Python program worked, but only seeing the function signatures, docstrings, and call graph... many important details would be lost. 
## About the experiments What I/O examples are given in the problem specifications, and are there held out tests used to evaluate the correctness of synthesized programs? The description of the experiment setup is lacking these kinds of details. Is it expected that the LLM can predict *correct* constraints in terms of I/O examples for individual functions? It is commonly observed that LLMs are quite bad at executing code or even performing arithmetic. I would expect that many constraints would have subtle errors that lead to incorrect implementations being selected from the search over function combinations. For example, is the constraint in Figure 2 (ii) directly copied from the problem specification, or does the LLM generate it from scratch? Can multiple constraints be added to a single function? Line 150 mentions that using the LLM to generate programs is much more computationally expensive than running programs on a CPU. This is certainly true in general. However, it is unstated how many programs are actually run (considering the exponential number of implementation combinations being considered), and Figure 4 appears to have a datapoint close to 1 million program evaluations per problem. Because APPS is a dataset of competitive programming problems, I’d expect many programs to have long-running evaluations, either from implementation errors (infinite loops) or suboptimal algorithmic design. Even if only 1% of programs run until timeout, the time limit is set to 1 second (a small time limit in actual competitive programming competitions), and we assume that all other programs evaluate instantly, then 1 million programs would evaluate for 2.8 CPU-hours, per problem. Such an amount of compute cannot be swept under the rug and should be discussed! Certainly from an end-user perspective, waiting several seconds for an API call to GPT-4 feels much better than waiting for multiple CPU-hours of program executions. 
Appendix E.3 does mention parallelizing program executions but the resulting wallclock time per problem is not mentioned. For the HumanEval experiment, the writing is less clear about what the main conclusion or takeaway should be. For example, could you summarize Lines 175 - 194 in 1 sentence? The writing would be improved by adding that sentence, and cutting back on some of the details. Similarly, Lines 199 - 204 contain many details that are not useful for readers who do not dig up Parsel’s solutions to the problems mentioned. For the VirtualHome experiment, it is not clear how to interpret the numerical results. Does Figure 7 say that the Codex (baseline) plans were the *most preferred among the 3 plans* roughly 30% - 35% of time, which is actually not bad for the baseline? How exactly does the comparison work, considering that Parsel has 2 attempts but the baseline only has 1? Does “X is 70% more likely to be preferred over Y” (Line 283) imply P(X preferred) = 85% and P(Y preferred) = 15% such that 85% - 15% = 70%, or something else? Why not just report P(X preferred) directly? What exactly does “X preferred” mean: “one of the 2 Parsel programs was better than the baseline”, or “the indented Parsel program was better than the baseline”? Overall I am less excited about the VirtualHome experiments because of the lack of rigorous correctness checking (instead relying only on rankings of accuracy and clarity). The example in Figure A.49 does not alleviate my concerns as the Parsel solutions seem much worse than the baseline’s solution, which itself is not completely correct either. What is the main takeaway from the VirtualHome experiments that was not evident from the other experiments? I know it is hard to investigate contamination issues considering the opacity of some LLMs, but the potential issue should be discussed in greater detail. For example, is it possible that GPT-3 has seen solutions to the APPS problems while AlphaCode and Codex have not? 
For the Lisp example (Line 354), it seems likely that the code LLM has already seen example implementations of the Lisp interpreter, and it recognizes the general patterns even without any mention of the word “Lisp” in the Parsel program. ## Minor questions and suggestions * Line 99: “Parsel implements functions with **a** post-order traversal from **the** function at the root” * Figure 4 caption: a period is missing after “AlphaCode [35]” * Figure 4 caption: “we examine to understand the effect of evaluation number” is awkward phrasing * Line 165: missing space between “HumanEval” and “[12]” * Line 319: missing some punctuation (comma or period) between “model” and “and” * Line 347: missing space between $\ge$ and 0.99 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: One undiscussed limitation is that Parsel likely requires more expensive LLM calls for the same effective number of complete programs sampled, for three reasons: 1. Each transformation from task specification to the high-level plan and then to the Parsel program requires an LLM call, accounting for many tokens that would not be needed by the Codex baseline. 2. Each function is implemented separately which requires encoding a different prompt for each function with function-specific context. 3. The decomposition of the entire solution into many functions might lead to more total code tokens sampled compared to a solution obtained without explicit decomposition. The paper could use some discussion about these factors, potentially summing the total LLM cost per problem for Parsel versus Codex. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the thorough and helpful review. We sincerely appreciate the attention to detail and the encouraging and supportive framing. Note we are constrained to 6,000 characters here (a new rule…), but are happy to elaborate during discussion. > I’m also interested in the other kind of ablation: how well does Parsel do without the high-level plan? Great suggestion! On a 200-problem competition-level APPS sample we found that, without a high-level plan, the accuracy falls to 13% (from 25.5%, w/ the same configuration). This is better than the ablation where Parsel wasn’t used, suggesting that Parsel plays a larger role than the high-level plan on APPS, but both are necessary. Note that part of this gap stems from the challenge of prompting Codex to generate Parsel directly from long APPS problem statements. We’ll include the new prompt in the Appendix. > What if we replaced the Parsel program with a Codex prediction of the entire solution? This is an insightful proposal; we explored something very related. There are some challenges that initially limit its generality, many of which are solvable but collectively likely an entirely new work. The main ones are 1) extracting a call tree across many kinds of Python solutions and 2) getting Codex to generate meaningful standalone function descriptions. This may be possible by backtranslating generated Python solutions to Parsel (which does not require Codex to know anything about Parsel) – see Appendix K – but for these reasons there are many functions where that doesn’t work. > What if we disallowed trying multiple combinations[?] We touched on this in Fig. 8, but it should be much more clearly signposted. We interpret the results to suggest that the number of Parsel programs and the number of combined implementations play similarly important roles. In other words, one can compensate for implementation ability with Parsel-generation ability and vice versa (to a limit).
> [What if] there is a function with constraints where none of the k implementations of the function pass? … Once we find an implementation … is [it] “locked in”? There are a few things that happen, depending on flags and context. By default, if no implementation passes the provided constraints, this is a failure. But, we also support backtracking (see Appendix S.2) - if enabled, if a parent fails to pass its constraints, it can reattempt its children - otherwise child implementations are locked in. When generating all tests and using CodeT scoring, then aside from some naive heuristics (e.g., a function with no passed constraints), detecting failure is difficult. > Is it expected that the LLM can predict correct constraints in terms of I/O examples for individual functions? Not necessarily – CodeT found that one doesn’t need all-correct tests to identify correct functions. It’s motivated by the Anna Karenina principle (“All happy families are alike; each unhappy family is unhappy in its own way”): if many functions pass the same tests, one is likely correct. > Line 150 mentions that using the LLM to generate programs is much more computationally expensive than running programs... Do you impose some upper limit on the amount of combinations tried? Good point! We sample and test up to 100,000 items with a timeout of 0.04 seconds each, and also have a per-problem two-minute limit – noted in Appendix M, but we’ll highlight it. Note, if we’d used LLM-level compute per problem, this evaluation would’ve been near-instant (it is also easier to speed up evaluation via parallelization than LM generation). > For the HumanEval experiment, the writing is less clear about the takeaway… Could you summarize in 1 sentence?
Excellent point; “Given the same set of generated tests and program generation budget, and selecting only one best solution, Parsel significantly increases the probability that it’s correct.” > For the VirtualHome experiment, it is not clear how to interpret the numerical results. You’re right, this is confusing and we’ll clarify. Each comparison is pairwise (not best of three). Figure 7 compares only the (non-hierarchical) Parsel-generated plan and the Codex-generated solution. In other words, “the non-indented Parsel program was better than the baseline” (and note there was no preference on indentation). I.e., when people compared non-indented Parsel to the baseline, they said it was more accurate two thirds of the time. > What is the main takeaway from the [VH] experiments[?] There are two: First, they show Parsel's flexibility in handling more open-ended tasks beyond traditional code generation on an increasingly relevant LM reasoning domain. Second, they offer a chance to evaluate whether the Parsel-generated plans are clearer – the results suggest they are. However, we strongly agree more robust metrics are needed in this emerging domain of LLM-based robotic planning. > [Is] it possible [GPT-3] has seen solutions to the APPS problems while AlphaCode and Codex have not? [T]he code LLM has already seen examples of the Lisp interpreter. They’re based on one model so, while it’s sadly hard to know for sure, it’d be a bit surprising if code data was used to train the text version but not the code version. As for Lisp, good point – we’ll add this. But note, many descriptions are not obviously Lisp-related. 1. *Does a Parsel program always define a single outermost function?* Yep! 2. *The paper should provide some background on CodeT.* We'll be happy to elaborate in the related works. 3. *I hope the authors can write [the pseudocode in a] traditional style.* We’ll revise Figure A.11 - also allowing a useful comparison of Parsel to traditional pseudocode. 4.
*What I/O examples are given?* For APPS and HumanEval, there are public tests in problem statements and private tests for evaluation. 5. *Parsel likely requires more expensive LLM calls.* It depends! It’s discussed in App. S.6 but we agree that highlighting this clearly and discussing these costs more would be good. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and new ablation. > trying many combinations of implementations Figure 8 helps a bit, but I'm curious for more specifics. You might say something like: "With a sample budget of 50 programs, Codex (Improved) solves 14.5% of APPS problems. Instead, we could sample 5 Parsel programs with 10 implementations each. If we directly test these 50 implementations without mixing-and-matching functions, we would solve only **X%** of problems. However, after mixing-and-matching functions (trying **Y** combinations on average), the pass rate significantly improves to **Z%**." Is it easy to fill in those numbers? With this comparison (or a similar one), the paper can better quantify the benefit provided by mixing and matching functions. > We sample and test up to 100,000 items with a timeout of 0.04 seconds each, and also have a per-problem two minute limit – noted in Appendix M but we’ll highlight it. I'm confused about the 100K items. The highest point in Figure 4-left is the 25.5% highlighted on line 145, pass@8x16. If you claim this result is with only 100K items, then what is the highest point on Figure 4-right, which also looks like 25.5%, pass@8x16, but looks close to 1 million items on the log-scale x-axis? The point on Figure 4-right at 100K items looks around 22%, definitely not >25%. 2 minutes per problem is a reasonable CPU compute budget. 
Please do highlight this -- otherwise, it is easy for readers to dismiss the paper by thinking "they just swap implementations of helper functions, and _of course_ you'd get good results after spending (literally) hours trying a million different combinations". --- Reply to Comment 1.1.1: Comment: Thank you very much for the follow-up and the suggestions! > Figure 8 helps a bit, but I'm curious for more specifics. Got it; we've now run the suggested ablation. It is somewhat similar to the one described in lines 156-164, but instead of skipping the Parsel step, we generate the implementations based on the Parsel program (and consider entirely independently generated implementations), and consider 48 implementations. To maximize the use of cached generations, we evaluated 6 Parsel programs and 8 implementations each (48 implementations per problem). We believe the most salient comparison to be the following: with these 48 Parsel-generated implementations, the performance was only 3%; on the other hand, when combining them, the overall pass rate with 6 Parsel programs and 8 implementations was 14.6%, with the only difference being that we considered combinations (specifically, 34k combination evaluations on average per Parsel program, with a limit of 100,000 evaluations per Parsel program). Note that 3% is from a sample of 200 random competition-level problems, 34k is from a sample of 200 random Parsel programs, and 14.6% is across all competition-level problems. We also want to highlight that we believe the scaling properties of Parsel exceed those for Codex, but unfortunately, it isn't possible to verify this directly with, e.g., "Codex (Improved) 128". Unfortunately, they do not report more than 50 attempts; however, in their CodeContests results, there is little improvement from 50 to 100 attempts (7.1% to 8.8%). 
We attempted to replicate their APPS results with Codex but could not because 1) they do not provide their few-shot prompt, and 2) we found we were rate limited substantially more often than when using Parsel and each generation took longer. > I'm confused about the 100K items. We sincerely apologize for this miscommunication: we should have specified that that's 100,000 attempts per Parsel program (which is why that rightmost point is 800,000, corresponding to the 8x16 point) and that the 2-minute per-problem limit is also per Parsel program. Otherwise, a Parsel program with only ten combinations might never be run because the others have many more combinations. Thank you again for the questions - we hope these results help paint a clearer picture of Parsel's strengths and limitations! We'll also incorporate these details into the paper.
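The CodeT-style agreement heuristic referenced in this thread (“if many functions pass the same tests, one is likely correct”) can be sketched as follows. This is a simplified sketch, not the exact scoring from the CodeT paper or the Parsel implementation; candidate names and tests below are toy examples:

```python
from collections import defaultdict

def run_test(fn, test):
    """Return True if candidate fn passes a generated test; errors count as failure."""
    try:
        return bool(test(fn))
    except Exception:
        return False

def pick_by_agreement(candidates, tests):
    """Group candidates by the exact set of generated tests they pass,
    score each group by (group size) * (tests passed), and return a
    representative of the highest-scoring group."""
    groups = defaultdict(list)
    for fn in candidates:
        passed = frozenset(i for i, t in enumerate(tests) if run_test(fn, t))
        groups[passed].append(fn)
    best_tests, best_fns = max(groups.items(),
                               key=lambda kv: len(kv[1]) * len(kv[0]))
    return best_fns[0]

# Toy example: two independently sampled correct abs implementations
# agree on all tests; a buggy identity function passes only one test.
candidates = [abs, lambda x: x if x >= 0 else -x, lambda x: x]
tests = [lambda f: f(-2) == 2, lambda f: f(3) == 3]
chosen = pick_by_agreement(candidates, tests)
print(chosen(-5))  # 5
```

The point of the heuristic is that independently sampled correct implementations tend to agree on test outcomes, while each buggy one tends to fail in its own way, so even partially incorrect generated tests can still surface a correct candidate.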
Summary: This paper proposes Parsel, a framework for algorithmic reasoning with LLMs. Parsel can be seen as a kind of “programming language” implemented mostly in natural language that describes the functionality of the program -- that is, the algorithmic reasoning plan. Then, the Parsel synthesizer can translate the plans into corresponding languages like Python. Finally, the reasoning problem can be solved by executing the synthesized Python code. All the processes are completed by analyzing the Parsel program and prompting LLMs. Experiments show that Parsel can improve the LLM’s coding ability and generate preferred robotic plans. Strengths: 1. Parsel reveals the power of hierarchical planning and decomposition for complex reasoning tasks. 2. Parsel is a step towards a new kind of programming paradigm. That is, people can program more naturally with Parsel, which lowers the programming barrier. 3. Parsel might be a new kind of prompting method for complex reasoning tasks. Users (or the LLM) can write a Parsel program first, and then the Parsel synthesizer can automatically decompose the Parsel program into small function pieces and prompt the LLM with these pieces, and the synthesizer should compose these pieces to form a whole function. 4. The result on code generation is promising, although I still have some questions about it. Weaknesses: 1. Although Parsel can be generated by LLMs, the Parsel synthesizer is not reliable for human users. When coding with Parsel and the synthesized Python code is wrong, users do not know whether the LLM behind the synthesizer is not working correctly, or whether there are bugs in the Parsel code they wrote themselves. 2. Also, there are no debugging operations provided in Parsel. Users cannot locate the bugs in Parsel, and they cannot debug on Parsel. 3. The correctness of the Parsel synthesizer relies heavily on I/O constraints: it seems that I/O constraints are the only guarantee of correct generation.
This is a little bit weird because the main part of Parsel is the natural language description, which is where users spend a lot of time. However, to debug Parsel and ensure the final result is correct, a more effective approach is to provide more detailed and fine-grained IO constraints or allow the LLM to simply regenerate until the IO constraints are met, rather than making modifications to the natural language or code structure. 4. The main experiment on APPS (figure 4) doesn’t seem like a fair comparison: Parsel is provided with detailed plans generated with GPT-3 while baselines are not. Providing results like “plans generated by Codex (if possible)”, “baseline results on GPT-3/4”, or just more results with different budget settings in the “ablating the Parsel synthesizer” part would be good. 5. Experiment settings are not clear; see questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Baseline settings are not well explained in figure 4: What does “Codex improved” mean? The improvement relative to this baseline is very marginal. 2. Does Parsel use test generation in the evaluation? Do baselines use it? 3. Figure 1 shows Parsel's ability in multiple languages, but the main paper only showcased Python, making figure 1 overstated. 4. The GitHub link is 404. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are discussed well in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments – the use of Parsel by human users was a key motivation, so these questions are really useful! > Although Parsel can be generated by LLMs, the Parsel synthesizer is not reliable for human users. When coding with Parsel and the synthesized Python code is wrong, users do not know whether the LLM for synthesizer is not working correctly, or if there are bugs in the Parsel code they wrote themselves… Also, there are no debugging operations provided in Parsel. Users cannot locate the bugs in Parsel, and they cannot debug on Parsel. We absolutely recognize and acknowledge that there are changes that could be made to improve the usability for human users. Indeed, this is a direction we are actively pursuing. However, there are some useful debugging features already present. For example, because users can write constraints/unit tests, and the program is implemented from the leaves up, it is often clear which parts of the program are failing and which unit tests they are failing on. In other words, it is often unambiguous when functions are somehow underspecified, whether in language or in constraints. In addition, due to caching, errors are not generally stochastic. > The main experiment on APPS (figure 4) doesn’t seem like a fair comparison: Parsel is provided with detailed plans generated with GPT-3 while baselines are not. Providing results like “plans generated by codex (if possible)” “baseline results on GPT-3/4” or just more results with different budget settings on the “ablating the Parsel synthesizer” part would be good. Thank you for this point! There are two things worth noting here (and also worth highlighting in the paper). First, the GPT-3 model we used and Codex are both based on the same model, which makes it somewhat less likely that any improvements come from GPT-3 simply being better (though clearly not a guarantee). 
Second, since the ablation highlighted in the paper clearly indicates that the intermediate plan is not sufficient for the observed performance improvement, we felt it would also be valuable to test whether it was necessary. We thus conducted an additional experiment where we skipped the plan generation step and instead asked Codex to generate Parsel directly. We observed that on a sample of 200 random competition-level APPS problems, generating Parsel directly from the problem solved 13% of them. Given that this is a more substantial improvement than the Parsel ablation, this suggests that, on this dataset, Parsel plays a larger role than the high-level plan, but both are necessary. Note that we used three few-shot Parsel examples for this experiment, as we observed that including many few-shot Parsel translation examples would result in it disregarding the problem statement, which for APPS is often quite long (e.g., a page of plaintext). > Baseline settings are not well explained in figure 4: What does “codex improved” mean? The improvement relative to this baseline is very marginal. Codex (Improved) refers to the version of Codex used in [11]. Unfortunately, it’s hard to make a direct comparison here - from 10 to 50 solutions, their performance improves from 6.3% to 14.5%. They do not report more than 50 attempts; however, in their CodeContests results, there is little improvement from 50 to 100 attempts (7.1% to 8.8%). It is, of course, difficult to extrapolate, but a corresponding improvement here would only represent a change of 3.5% – even with 3x this improvement, this would still underperform our result. We attempted to replicate their APPS results with Codex, but could not because 1) they do not provide their few-shot prompt and 2) we found we were rate limited substantially more often than when using Parsel and each generation took longer. This is likely because we were generating longer completions each time. 
This made replicating their results at the same scale infeasible, especially after we were unable to split up evaluation between members of the team following the new restrictions on Codex. > Does Parsel use test generation in the evaluation? Do baselines use it? In the HumanEval comparison, we include test generation when comparing to CodeT pass@1, using the same generated tests for both. In the APPS eval, we do not include test generation. > Figure 1 shows Parsel's ability in multiple languages, but the main paper only showcased Python, making figure 1 overstated. The two examples in Figure 1 correspond to the programming and robotic planning sections of the paper. In addition, we include examples of Parsel generating Lean (formal theorem proving) in the appendix. > The GitHub link is 404. We anonymized the GitHub URL to maintain anonymity. However, we have released our code (indeed, it is actively being used by others outside our team, with over 300 stars). We would include the de-anonymized URL in a camera-ready version. [11] CodeT: Code generation with generated tests. (Chen et al. 2022) --- Rebuttal Comment 1.1: Comment: Thanks for the response! I've read the response and the other reviews; this is nice work and I vote for acceptance. Just one minor question for further discussion: How could Parsel potentially be applied to real-world scenarios, such as programming and robot control? --- Reply to Comment 1.1.1: Comment: Great question! We discuss this a bit in Appendix A. For programming, we think there are two main ways in which real-world programmers could use Parsel. First, writing the Parsel sketches from scratch, including some of the associated tests, should allow programmers to work at a higher level of abstraction while prioritizing test-driven development. People may still look "under-the-hood" at the programs generated, but as long as the language models continue to get more reliable, this should become less necessary.
This should support nearly arbitrarily large codebases, and, with good tests and future Parsel features like a more standard object-oriented syntax, could realistically speed up development. With generated tests, it may also facilitate more complete code coverage. The second way is with Parsel generation, for generating longer snippets of code that are beyond the ability of whatever the current best language model happens to be, but likely not in production. Alternatively, with Parsel generation, the initial sketch may be generated by a language model but then revised by programmers. For robot control, the Parsel generator can allow end users to describe an often-repeated task and then have the model generate a flexible algorithmic plan for that particular task - one future direction that would benefit this use substantially is mentioned in Appendix C, specifically incorporating more detailed robotic asserts like in [56] and testing potential programs in simulation. Real-time robotic use would likely require further advances in language models, and potentially balancing the compute used for plan generation and implementation generation, since implementing individual functions is often simpler than planning to solve a complex task. We also believe Parsel may be a valuable educational tool, as it may allow teaching students algorithmic problem solving removed from the details of programming syntax. Lastly, as we discuss briefly in Appendix D, we also anticipate that Parsel may be useful in formal theorem proving, decomposing proofs into lemmas and then using the formal proof checker as the constraint. [56] "Progprompt: Generating situated robot task plans using large language models." Singh et al. 2022
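The leaves-up, test-driven selection described in this rebuttal thread (each function gets several candidate implementations, and a combination is accepted only if it passes the user-written IO constraints) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; all names here (`solve`, `double`, `find_valid_combination`) are hypothetical.

```python
# Toy sketch of test-driven combinatorial selection over candidate
# implementations: a parent's IO constraints validate both the parent and
# the children it depends on.
import itertools

def passes_io_constraints(impl_src, helpers_src, tests):
    """Exec a candidate together with its helpers; check (args, expected) pairs."""
    env = {}
    try:
        exec(helpers_src + "\n" + impl_src, env)
        fn = env["solve"]  # hypothetical entry-point name
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

def find_valid_combination(parent_candidates, child_candidates, tests):
    """Search candidate (parent, child) pairs until one satisfies the tests."""
    for parent, child in itertools.product(parent_candidates, child_candidates):
        if passes_io_constraints(parent, child, tests):
            return parent, child
    return None
```

For example, with two candidate bodies for `solve` (one buggy) and two for its helper `double` (one buggy), only the correct pair passes the constraint `solve(3) == 7`, and the search returns exactly that pair.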
Summary: This paper proposes Parsel, a code generation framework that decomposes a problem specification into subproblems specified in an intermediate pseudocode-like language (the Parsel language) and then searches over combinations of subproblem solutions. The experiments show that decomposing and searching over subproblems leads to gains on standard code generation benchmarks and a robotic planning task. Strengths: - Very well-written and motivated - Thorough experiments on performance gain + analysis of inference cost / scale Weaknesses: No major weaknesses, besides the minor questions on clarity below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - When you set a budget of number of evaluations, how do you decide what to evaluate? - The "Functions without constraints" section was dense. - How does the parent enforce constraints? - Human expert comparison: this was a cool experiment, but the setup wasn't clear to me: how was the expert using Parsel? Were they initialized with a Parsel-generated program and then revised from there, or was it more of an iterative process? - l212-216: I wasn't clear on why this is the conclusion from the human expert results - nit Fig 4: it initially wasn't clear that 50 (Improved) was the Codex API and that this was the "prior work" you were referring to on line 145 - nit Fig 4: what is 5@50000? - It'd be helpful to have an example of mutual recursion. I see this is possible in theory but couldn't think of when this would occur and how common it is - why would two function implementations depend on each other? - How often are child functions shared between parents / called between different parts of the program? In cases where it is shared, it seems like it'd be helpful for the child description to be generated conditioned on _all_ the caller descriptions (whereas from my understanding, the child description is currently generated conditioned on the description of the first caller).
I could see this not being an issue for current code generation problems, but it seems like it'd be important in real-world programs. - Why is it sufficient to only consider strongly connected components? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your comments and questions! We intend to incorporate our responses into the revised paper. > When you set a budget of number of evaluations, how do you decide what to evaluate? We consider a random subsample of the possible combinations. > The "Functions without constraints" section was dense. How does the parent enforce constraints? The parents’ constraints are used to validate the children. If a parent passes its constraints, it’s assumed that the functions it depends on are correct (which can, of course, be an incorrect assumption in many situations). In other words, if function $f$ calls function $g$ and $f$ has tests but $g$ doesn’t, then we generate implementations of both $f$ and $g$ and then aim to find a pair of implementations that satisfy $f$’s tests. > Human expert comparison: this was a cool experiment, but the setup wasn't clear to me: how was the expert using Parsel? Were they initialized with a Parsel-generated program and then revised from there, or was it more of an iterative process? They started from scratch, so did not use the Parsel generator. Instead, they solved the problem and wrote their solution in Parsel, which was then synthesized into Python. > l212-216: I wasn't clear on why this is the conclusion from the human expert results There was a subset of problems that were hard, such that we were unable to generate Parsel solutions to them by prompting the language model directly. However, when asking a human to generate a Parsel solution, they were able to solve multiple new problems. This suggests that the language itself is not the primary bottleneck. > nit Fig 4: it initially wasn't clear that 50 (Improved) was the Codex API and that this was the "prior work" you were referring to on line 145 Thanks for the point! The prior work that result comes from is [11], but we’ll make this clearer. > nit Fig 4: what is 5@50000?
n@k is how AlphaCode referred to the pass rates, so they used this to refer to selecting 5 from 50,000 samples, and this was their highest-reported number. > It'd be helpful to have an example of mutual recursion. I see this is possible in theory but couldn't think of when this would occur and how common it is - why would two function implementations depend on each other? We have one example in Figure 1, e.g., the recursive Collatz conjecture. In practice, this is surprisingly common and almost all cases of recursion can be expressed this way. For example, in our lisp interpreter, get_procedure calls eval_procedure which itself calls get_procedure. Similarly, eval_exp depends on list_case which depends on eval_exp. > How often are child functions shared between parents / called between different parts of the program? In cases where it is shared, it seems like it'd be helpful for the child description to be generated conditioned on all the caller descriptions (whereas from my understanding, the child description is currently generated conditioned on the description of the first caller). I could see this not being an issue for current code generation problems, but it seems like it'd be important in real-world programs. Currently, children do not get information about their parents in their implementations. This has the additional benefit of caching: if you change only a function’s description, you do not need to regenerate its children. However, if a function has multiple children (i.e., depends on multiple functions) then all of those functions are listed. > Why is it sufficient to only consider strongly connected components? The strongly connected components correspond to the sets of functions whose behaviors are mutually dependent. If two functions depend on one another, they must be implemented together and they will be in the same strongly connected component. 
If the functions do not depend on one another, then either 1) neither depends on the other or 2) one depends on the other, in which case we can implement the dependency first. --- Rebuttal Comment 1.1: Title: Acknowledgment Comment: I've read the rebuttal response — thanks for answering my questions!
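The strongly-connected-component argument in the rebuttal above can be made concrete with a standard Tarjan SCC computation over a call graph. This is a generic sketch under the assumption that the call graph is given as a `{function: [callees]}` dict, not code from the paper; conveniently, Tarjan's algorithm emits SCCs in reverse topological order, so dependencies appear before the components that call them — exactly the "implement the dependency first" order described above.

```python
# Tarjan's algorithm: SCCs of a call graph, emitted dependencies-first.
def tarjan_sccs(graph):
    """Return the SCCs of a {node: [callees]} graph as a list of sets."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []

    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop(); on_stack.discard(w); scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs
```

On the lisp-interpreter example from the rebuttal, the mutually recursive pair `get_procedure`/`eval_procedure` lands in a single SCC that must be implemented together, while independent functions each form singleton SCCs ordered before their callers.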
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful, positive, and constructive comments! Your suggestions have helped strengthen the work and clarify key points. Based on the reviews, here are some of the main changes: 1. We conducted a new ablation study generating Parsel directly from problems, ablating the high-level plan, in addition to the earlier ablation where we ablated the Parsel synthesizer. This lowered performance to 13% from 25.5% (on a random sample of 200 competition-level APPS problems), showing that both the plan and Parsel decomposition are important. 2. We will clarify details and bring key information which is currently in the appendix into the main text, such as the number of evaluations and the mechanism behind CodeT [11]. 3. We will highlight more key takeaways for the sections. For example, for HumanEval, we will note that “Given the same set of generated tests and program generation budget, and selecting only one best solution, Parsel significantly increases the probability that the solution is correct.” Once again, we really appreciate the supportive feedback and strongly believe that these reviews have strengthened the work. [11] CodeT: Code generation with generated tests. (Chen et al. 2022)
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces Parsel—an intermediate programming language for large language model program synthesis. It decomposes a complex programming task into several strongly connected components and uses large language models to generate candidate code pieces for each component, combined with combinatorial search and test case generation. Evaluation on HumanEval and APPS demonstrates the method's outstanding performance when used with GPT-4 and Codex. Strengths: - I like the idea of combining language models for function generation and using combinatorial search to find valid programs. It is an approximate form of the fast-and-slow thinking of human problem-solving. The idea is intuitive enough, and the method works well. - The Parsel language is well-designed as an intermediate programming language in the program synthesis pipeline. It takes advantage of both the uncertainty modeling of human language and deterministic descriptive language (IO-specs). - Experiment results on HumanEval, APPS and VirtualHome are good, reaching outstanding performance with a lower budget (large language model API budget) by trading some local computation cost (combinatorial search), which is something good to see in the LLM era. - Analysis of Parsel on HumanEval is also valid and persuasive for proving that Parsel can generate longer and more complex programs. Weaknesses: Overall, I like the paper. I have a few concerns about weaknesses. - (minor) reproducibility: OpenAI recently ended their public API service for Codex (I believe before the NeurIPS submission deadline), so the experimental results of recent papers using Codex are hard to reproduce. I suggest using the publicly available GPT-3.5 (text-davinci-003 or gpt-3.5-turbo) to give additional reproducible results. - Parsel design: I wonder whether the Parsel language is expressive enough (for example, it may lack data-structure design, which is something that makes APPS hard).
Can Parsel generate data-structure specs? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weakness. I have no additional questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed limitations in the current draft. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging review! > Parsel design: I wonder whether the Parsel language is expressive enough (for example, it may lack data-structure design, which is something that makes APPS hard). Can Parsel generate data-structure specs? Thank you for this excellent point. This is definitely a challenge. One approach we’ve used is to manually apply a header (discussed in Appendix Q.2) for this - for example, you might specify that an object is a dictionary with certain keys. As we mention in line 356, for the lisp interpreter, we described the environment dictionary in the header. However, this is clearly an imperfect substitute for a proper object-oriented syntax. We’ll add this to the limitations of Parsel for programmers in Appendix A.1.1 and elaborate on it in the main text. > (minor) reproducibility: OpenAI is ending their public API service for Codex recently (I think it is before the submission of NeurIPS), resulting recent paper using Codex is hard to reproduce the experimental results after. I suggest using publicly available GPT-3.5 (text-davinci-003 or gpt-3.5-turbo) to give additional reproducible results. At the moment, OpenAI has (fortunately) extended researcher access to Codex. In the long term, we are optimistic that open-source language models (both for code in particular and code and natural language generally) will also become more competitive with these tools. We mention briefly that we attempted to apply these techniques on a 2.7B parameter Codegen model, but even with the same Parsel outlines, it was unsuccessful. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I thank the authors for the response! My rating remains the same, and I recommend acceptance for this work.
null
null
null
null
null
null
Multi-Head Adapter Routing for Cross-Task Generalization
Accept (poster)
Summary: This paper proposed a new parameter-efficient few-shot fine-tuning method for pretrained language models. The method is a follow-up work of Poly. The authors proposed fine-tuning the routing function while freezing the multi-head adapters. This way, the number of updated parameters is reduced significantly while achieving a similar accuracy level on the downstream tasks. Strengths: - This paper studied an important problem in efficient fine-tuning of a pretrained language model, i.e., how to achieve a better trade-off between updated parameters and the final accuracy. The paper is well-motivated. - Overall the paper is well-written, especially the experimental section. The authors conducted a comprehensive ablation study on some key questions raised in the paper. - Although the method adopted in the paper (updating routing parameters only) was simple and straightforward, it seemed to work well on multiple different downstream tasks. Weaknesses: - The main shortcoming of the paper is the limited novelty. I think in comparison to Poly, the main difference of this work is to fine-tune routing only without the adapters. The authors conducted some heuristic analysis in the experimental section to understand the intuition behind it. However, the paper may still not clearly distinguish itself from the prior work. I would like to see more quantitative results on why fine-tuning routing is essential and necessary, e.g., are the learned multi-head adapters orthogonal in the task space? - Some method parts are not fully clear to me. For example, the authors discussed (IA)^3 in Sec.2.1. However, the definitions of h^{k,v} and h^f are not properly introduced. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We address below your concerns point by point. #### **1. On the novelty of the proposed method** We refer the reviewer to the global comments. #### **2. On why finetuning (only) the router is necessary** We do not mean to claim that fine-tuning only the router is necessary. However, we show that, contrary to Poly, MHR offers this possibility: one of the motivations for fine-tuning only the router is to enable higher parameter efficiency for new tasks. If we misunderstood your question, please let us know and we will be happy to engage. #### **3. Further clarifications** We apologize for not properly introducing $h^{k,v}$ and $h^f$. These vectors represent the outputs of the key and value projections in the attention mechanism and the inner activations of the position-wise feed-forward networks. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I think overall this paper had a solid experimental section and might benefit the community working on model adaptation, although it has limited novelty. I keep my original rating (Borderline accept).
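To make the contrast between Poly-style routing and multi-head routing concrete, here is a toy numerical sketch (our own illustration, not the authors' code): Poly mixes entire modules with a single weight vector, while MHR splits each module's flattened parameter vector into equal blocks ("heads") and learns a separate mixing vector per block. All function names and the flat-list parameter representation here are hypothetical simplifications.

```python
# Toy contrast between single-vector routing (Poly-style) and per-head
# routing (MHR-style) over a set of module parameter vectors.
def poly_route(modules, weights):
    """One routing vector: the combined module is a single weighted average."""
    return [sum(w * m[i] for w, m in zip(weights, modules))
            for i in range(len(modules[0]))]

def mhr_route(modules, head_weights):
    """Per-head routing: block j of the output uses its own mixing vector."""
    heads = len(head_weights)
    block = len(modules[0]) // heads  # assumes dim divisible by head count
    out = []
    for j, weights in enumerate(head_weights):
        for i in range(j * block, (j + 1) * block):
            out.append(sum(w * m[i] for w, m in zip(weights, modules)))
    return out
```

With identical mixing vectors in every head, MHR reduces exactly to Poly; with distinct per-head vectors, different parameter blocks can be drawn from different modules, which is the extra expressivity MHR adds.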
Summary: The authors propose a new parameter-efficient routing method, Multi-head routing (MHR), which combines parameter subsets as opposed to averaging all weights together. They find this yields better performance, and even only finetuning the routing matrix after initial training works well. Furthermore, they explore the training dynamics of MHR and find its gradients are more aligned than other approaches, suggesting that routing-based training helps to mitigate negative transfer during multi-task training. Strengths: A straightforward approach, with well-performed experiments over a wide variety of tasks. Relaxing the routing to allow for different parameter subsets is a nice extension of the Poly work. Results seem consistent across settings. The exploration of why these routing approaches work well is interesting and poses some interesting questions that could be explored by future work. Weaknesses: - I think the adaptersoup baseline is a bit unfair since it uses a different parameter-efficient adaptation method to MHR/Poly. Ideally, you would apply the AdapterSoup approach to these other methods. Otherwise, it’s hard to say if adaptersoup is truly worse or if it is just that adapters are suboptimal compared to LoRA. - Gains are somewhat small over Poly, but this can be the case for parameter-efficient tuning. It would be useful to compute statistical significance. - The finding that gradient cosine similarity is enhanced is interesting, but is weak evidence (in my opinion) for task interference/transfer. The suggestion that MHR is helping with negative/positive transfer would be much better served with some experiment directly targeting this (e.g. 
examining MHR with two tasks known to cause positive/negative transfer, and examining the gains/gradients there) - The fact that using routing doesn't help in the few-shot stage (section 5.3) suggests that averaging and throwing away the modules learnt via MHR would be a better strategy than keeping the routing, no? Since keeping one set of PEFT weights and no routing is simpler than continuing the routing strategy. Overall, I think this paper is solid and well done. While the gains of the method are small, I think the idea is interesting and the results around routing-only finetuning and task gradient alignment are interesting and pose interesting questions for future works. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Do you think the adaptersoup baseline is hindered by its use of adapters? What would its performance be like if you used Lora as the PEFT module instead? - If averaging the learnt modules is better than routing during few-shot adaptation, why should we use the routing method after the multitask pretraining stage? Isn’t this simpler than the main MHR approach proposed? - Which T5 XL variant did you use? It would be useful to specify this, since some T5 variants had multi-task data inserted during pretraining (I’m assuming you used the v1.1 lm-adapted ones, which did not). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not explicitly discuss limitations, although they discuss impacts in the appendix. 
It would be good to directly add a section on limitations, including e.g., the need to perform training over few-shot data vs using an ICL approach, the fact that routing is not needed during few-shot training (it seems), or any other limitations you can think of. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing a thorough and constructive review. We will address the questions one by one. #### **1. AdapterSoup Baseline and Backbone Clarifications** We apologize for the lack of clarity regarding this. Indeed, our AdapterSoup baseline uses the same LoRA configuration (same base layers, same LoRA rank). As you pointed out, in order to properly assess the contribution of the routing mechanism of AdapterSoup, we kept other design choices fixed. Moreover, you are correct about our choice of backbone: we opted for the t5-xl-lm-adapt v1.1. Lastly, we agree with the limitations you raised regarding ICL vs adapter tuning. We will add this to the next version. #### **2. On whether to route vs average during few-shot adaptation** We agree that in settings where transfer involves only a small set of test tasks, averaging the learned modules is a more straightforward solution. However, in settings where high levels of parameter efficiency are required per task, options such as MHR-z require that the skills be kept. Moreover, we believe that an interesting future research direction would be to apply MHR to continual learning settings, where modular methods have been shown to work well [3]. In such a setting, the multi-task optimization step could be composed of multiple phases, in which case keeping the full set of skills may be beneficial. #### **3. On statistical efficiency and pertinence of results** We refer the reviewer to the global comments. #### **4. On gradient alignment as a proxy for transfer / interference during training** We agree with the reviewer that a more thorough investigation of how MHR aids transfer / mitigates interference would be beneficial. While we could not find clear task pairs which are known to cause interference, we can however look at alignment across tasks known to be similar. For this, we looked at the cosine similarity of the resulting adapters for all 36 **summarization tasks**. 
We found an average alignment of **0.76** for MHR and **0.73** for Poly. We repeated this experiment for all **question answering tasks**, and again found superior alignment for MHR over Poly (**0.77** vs **0.74**). [3] Ostapenko, Oleksiy, et al. "Continual learning via local module composition." Advances in Neural Information Processing Systems 34 (2021): 30298-30312. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Hi, thank you for the response and clarifications! I've carefully read your response and the other reviews and am satisfied and keeping my score as-is. I agree that the multi-head splitting over the parameters of the PEFT method itself is interesting and novel, and I think that the experiments and ablations provide interesting insight into multi-task and parameter-efficient learning. While it is somewhat incremental (the overall framework not being *that* different to Poly), I still think the paper provides useful and interesting findings for the field, and passes my bar for novelty. Following up on the response: > For this, we looked at the cosine similarity of the resulting adapters for all 36 summarization tasks. We found an average alignment of 0.76 for MHR and 0.73 for Poly. We repeated this experiment for all question answering tasks, and again found superior alignment for MHR over Poly (0.77 vs 0.74). What was the average/median alignment across all tasks, and between diff-task pairs (e.g. summarisation/question-answering)? On their own, it's hard to gauge if the difference between MHR and Poly here is significant, and if the alignment values here are actually high compared to orthogonal or diverging tasks. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to engage with us. Here are some clarifications regarding the adapter alignment results shared in the previous reply. We report the average adapter cosine similarity across different combinations of tasks. 
| Task Pairs                          | MHR  | Poly |
|-------------------------------------|------|------|
| all `(313 * 312 / 2)` task pairs    | 76.3 | 74.1 |
| all Q/A task pairs                  | 76.7 | 74.3 |
| all summarization task pairs        | 76.4 | 73.4 |
| all Q/A vs summarization task pairs | 75.7 | 72.1 |

While the gap across different task pairs is relatively small, we do see that overall, MHR tends to offer better alignment across similar (and less similar) task pairs. That being said, we agree with the reviewer that a more thorough investigation is needed to properly assess how MHR aids transfer across tasks. We will update the paper accordingly in the next version. Thank you.
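The alignment numbers discussed above can be reproduced in spirit with a small sketch. This is our own illustration, not the authors' code: we assume each adapter is flattened to a vector, and that "alignment" for a task group is the mean cosine similarity over all unordered pairs (which is why the first table row counts `313 * 312 / 2` pairs).

```python
import math

def cosine(u, v):
    # Cosine similarity between two flattened adapter vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_pairwise_alignment(adapters):
    # Average cosine similarity over all unordered pairs of adapters.
    n = len(adapters)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(adapters[i], adapters[j]) for i, j in pairs) / len(pairs)
```

For example, `mean_pairwise_alignment([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])` averages one fully aligned pair and two orthogonal pairs.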
Summary: This paper introduces a method called Parameter-efficient Fine-tuning (PEFT) to improve how pre-trained language models adapt to new tasks. They use small adapters and a routing function to select specific adapters for each task. The authors found that finer-grained routing provides better results and propose a method called Multi-Head Routing (MHR) that outperforms previous approaches. They also discovered that the success of their method is mainly due to improved multi-task optimization rather than specific adapter properties. They introduce a simplified variant called MHR-μ that achieves competitive performance with fewer parameters by discarding routing during fine-tuning. Strengths: ++ The paper demonstrates clear and concise writing, making it easily comprehensible. ++ The authors conducted a thorough ablation study to evaluate the effectiveness of their proposed approach. Weaknesses: -- The paper lacks significant technical novelty. The approach of decomposing the parameters in LoRA into different sets is not particularly interesting. Additionally, the designed modules bear strong resemblance to multi-head self-attention and MoE (Mixture of Experts). Moreover, the intuition behind these designs seems ad-hoc and lacks novel insight, which is crucial for assessing the value of the paper. -- The paper fails to provide specific details about the model parameters used in the experiments. Considering that the proposed model is substantially larger than the baselines, it is necessary to have more information in order to properly evaluate the proposal. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The authors are encouraged to address the mentioned concerns in order to provide a better understanding of their work. 
It would be beneficial for them to provide further clarification regarding the technical novelty of their proposed parameter decomposition approach in LoRA, highlighting any unique aspects that distinguish it from existing methods such as multi-head self-attention and MoE. Offering additional insights and motivations behind their design choices would also help in assessing the novelty and value of their work. Furthermore, it is important for the authors to address the lack of specific information about the model parameters used in their experiments. Providing details about the size and configuration of the proposed model in comparison to the baselines would enable a more accurate evaluation of its effectiveness. These additional facts would assist in making a fair assessment and potentially lead to a reconsideration of the paper's overall score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: -- No limitation is discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address the raised concerns point by point.

#### **1. On the novelty of the proposed method**

We refer the reviewer to the global comments.

#### **2. Additional information about model parameters and configuration**

For all experiments in the paper, all the reported methods besides IA3 use LoRA adapters, inserted **on exactly the same layers**. In other words, the number of adapted layers is the same. In the main results of the paper, we add adapters to the key, value, query and output linear mappings of the attention mechanisms, as well as to the two linear layers in the feed-forward transformer block. Except for “LoRA-big”, all methods using LoRA use a rank of 1. We refer to the additional parameters that must be conserved after multi-task pretraining as `Multi-Task Params`. We denote the additional parameters that must be kept for each new downstream task as the `Adaptation Params`. These results use T5-XL, the lm-adapted version with 3B params. Importantly, **MHR outperforms other baselines with more parameters** (see LoRA rank 16 and AdapterSoup).

| Model        | Multi-Task Params | Adaptation Params | Avg. Test Performance |
|--------------|-------------------|-------------------|-----------------------|
| IA3          | 540K              | 540K              | 62.4                  |
| AdapterSoup  | 84M               | 2.2M              | 62.1                  |
| LoRA         | 2.2M              | 2.2M              | 66.0                  |
| LoRA rank 16 | 35M               | 35M               | 65.4                  |
| Poly-Z       | 17M               | 3.5K              | 66.4                  |
| Poly         | 17M               | 2.2M              | 68.0                  |
| Poly-mu      | 2.2M              | 2.2M              | 67.8                  |
| MHR-z        | 17M               | 220K              | 68.3                  |
| MHR          | 17M               | 2.2M              | 69.1                  |
| MHR-mu       | 2.2M              | 2.2M              | 69.1                  |

We see that methods with MHR routing better optimize the adaptation parameter / test set performance tradeoff. Additional results with different configurations can also be found in Appendix A.1. Please let us know if you have any other questions. 
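As a rough cross-check of the counts in the table above: a rank-r LoRA adapter on a d_in × d_out matrix adds r·(d_in + d_out) parameters. The sketch below is our own back-of-the-envelope estimate; the adapted matrices follow the rebuttal (q, k, v, o plus the two feed-forward matrices), but the T5-XL dimensions (d_model = 2048, d_ff = 5120, 24 encoder and 24 decoder blocks, cross-attention also adapted) are assumptions, so the total only lands in the right ballpark of the ~2.2M reported for rank-1 LoRA.

```python
def lora_params(d_in, d_out, rank):
    # Rank-r LoRA factorizes the weight update as (d_in x r) @ (r x d_out).
    return rank * (d_in + d_out)

d_model, d_ff, rank = 2048, 5120, 1             # assumed T5-XL dimensions
attn = 4 * lora_params(d_model, d_model, rank)  # q, k, v, o projections
ff = lora_params(d_model, d_ff, rank) + lora_params(d_ff, d_model, rank)

encoder = 24 * (attn + ff)      # self-attention + feed-forward per block
decoder = 24 * (2 * attn + ff)  # self- and cross-attention + feed-forward
total = encoder + decoder       # roughly 1.9M, same order as the reported 2.2M
```

Multiplying by rank 16 similarly lands near the ~35M reported for "LoRA rank 16".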
--- Rebuttal Comment 1.1: Title: Borderline accept Comment: I'd like to express my gratitude to the authors for addressing my concerns, particularly regarding the parameter size. I believe the paper presents a compelling approach to efficient fine-tuning and demonstrates notable performance improvements. As a result, I've adjusted my rating to borderline accept.
Summary: In this work, the authors proposed MHR for cross-task generalization. To achieve extreme parameter efficiency, MHR-z and MHR-μ are proposed to balance performance and efficiency. Besides, this work emphasized the importance of the routing function, which is very insightful for the community. Strengths: 1. Excellent presentation of the motivation and experimental results. 2. I am impressed by the performance of MHR-z: the accuracy is close to Poly while only very few parameters need to be adjusted in the fine-tuning stage. Weaknesses: 1. The improvement of MHR is quite limited; it only outperforms Poly by 1.1%. Besides, there is no significance test, which is very important for such a marginal improvement. 2. Fig. 1 is not well presented. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Figure 3, why does MHR with 32 heads perform worse than the 8-head version? 2. In Figure 1, I would suggest the authors mark where the task-specific head is. In addition, more information is needed for MHR-z in Fig. 1 as well. 3. Is each task associated with a task-specific Z matrix? If not, how are active modules selected based on the input task? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors did not present any limitations in the manuscript. For the performance issue I pointed out in the weakness section, one potential reason could be that the subheads are combined via averaging in your case. In some Mixture-of-Experts papers, a set of dynamic weights can be generated to combine the experts dynamically based on the input information (the task, in your case). 
This could be helpful to further improve the performance, and related discussion can be conducted in the manuscript as well. [1] Condconv: Conditionally parameterized convolutions for efficient inference, NeurIPS 2019 [2] A mixture of h−1 heads is better than h heads, ACL 2020 [3] Attention over Self-attention: Intention-aware Re-ranking with Dynamic Transformer Encoders for Recommendation, TKDE 2022 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Below we address your concerns point by point.

#### **1. On statistical efficiency and pertinence of results**

We refer the reviewer to the global comments.

#### **2. Clarification of the task-module routing matrix $Z$.**

The task-module routing matrix has shape `(n_tasks, n_modules)`. Therefore, the information specific to task $t$ resides in the $t$-th row of this matrix. This row has shape `(n_modules,)` and selects the active modules for the $t$-th task. In the case of multiple routing heads, each head has its own routing matrix. Therefore, a given MHR adapter layer has `n_heads` $Z$ matrices, each of size `(n_tasks, n_modules)`. For a given task, we retrieve the appropriate row of each $Z$ matrix, and use the prescribed mixing weights to mix the shared modules in a task-specific way.

#### **3. Improving Figure 1.**

Thank you for the suggestions on our main figure. Let us try to better describe it. In Fig. 1, we are showing MHR with `n_heads=2`. Each head has its own task-module routing matrix $Z$. We assume in Fig. 1 that the input belongs to the first task, so we highlight the first row of $Z$ as the active task. This row shows that the first and third modules are selected, thus we select and average those modules for the first head. A similar process is done for the second head. Lastly, the outputs from the two heads are concatenated to form a LoRA adapter. For MHR-z, we show an example of transfer to a new downstream task. For each head, the task-module allocation matrix (vector) has shape (`n_tasks == 1 x n_modules`). In this example, only this $Z$ vector is trained (red) while the underlying modules are frozen (blue). To improve the figure, we have modified it to better highlight this process for the second head, and increased the visibility of the task-specific row of the $Z$ matrices. This version can be found in the updated pdf. 
Please let us know if you have further concerns or suggestions on how to improve the figure. #### **4. Clarification on Figure 3, performance of 32 heads vs 8 heads.** Actually, both methods perform very similarly (68.73 vs 68.79). #### **5. Addressing limitations** Thank you for pointing out relevant papers. Indeed, the proposed routing method can potentially benefit from leveraging additional input information (rather than task-only information). We agree that this is a promising direction for future work. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. Most of my concerns are addressed, and I would keep my rating.
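The routing scheme clarified in the rebuttal above — one $Z$ matrix per head, a row lookup per task, averaging of the selected modules, then concatenation across heads — can be sketched in a few lines. All dimensions and module values below are toy numbers of our own choosing, not from the paper:

```python
# Toy MHR routing: n_heads separate Z matrices of shape (n_tasks, n_modules).
n_heads, n_tasks, n_modules, head_dim = 2, 3, 4, 2

# One routing matrix per head; row t holds the module selection for task t.
Z = [
    [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]],  # head 0
    [[0, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 0]],  # head 1
]
# Each head owns n_modules small parameter blocks of size head_dim (toy values).
modules = [
    [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]],  # head 0's modules
    [[5.0, 5.0], [6.0, 6.0], [7.0, 7.0], [8.0, 8.0]],  # head 1's modules
]

def route(task):
    adapter = []
    for h in range(n_heads):
        row = Z[h][task]                        # task-specific row of head h's Z
        active = [i for i, w in enumerate(row) if w]
        # Average the selected modules for this head (uniform mixing here).
        avg = [sum(modules[h][i][d] for i in active) / len(active)
               for d in range(head_dim)]
        adapter.extend(avg)                     # concatenate head outputs
    return adapter
```

For task 0, head 0 averages its modules 0 and 2 and head 1 averages its modules 2 and 3, so `route(0)` returns `[2.0, 2.0, 7.5, 7.5]`.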
Rebuttal 1: Rebuttal: Dear Reviewers, we thank you for your valuable feedback! We really appreciate the time you spent providing us constructive advice to improve our submission. We list below our general response to all of you, and address individual concerns in separate threads.

#### **1. Novelty w.r.t. prior work**

*i)* MHR is an application of multi-head splitting for routing-based approaches like Polytropon. Although multi-head splitting has been used in other contexts (notably in self-attention), its application for combining adapters is, to our knowledge, novel. Moreover, we show that it unlocks the ability to trade off parameter efficiency and performance. This is not something that Polytropon enables by itself, as Poly-Z severely underperforms given that it relies on linear combinations of adapters.

*ii)* We also provide valuable insights on the role of routing in cross-task setups. We showcase: *a)* **when** routing is critical. Indeed, routing is critical to multi-task optimization, but not few-shot adaptation. In fact, we can average modules, which makes MHR-$\mu$ a better initialization for LoRA than standard multi-task training. We show that MHR-$\mu$ also outperforms AdapterSoup, which relies on fixed routing. This is a new finding, and no prior work has shown that this is possible with, e.g., MoEs. *b)* **why** routing-based methods such as Poly and MHR work, by showing that routing yields stronger gradient alignment during multi-task optimization. This finding opens the door to the design of new routing methods leveraging prior work on gradient alignment in multi-task learning [1-2].

#### **2. Statistical significance of reported improvements**

We performed a statistical analysis to assess whether there is a significant difference in performance between MHR/MHR-z and Poly. Specifically, we executed a matched-pairs Wilcoxon signed-rank (WS) test.
Our assessment protocol, as outlined in our primary findings (Figure 2, T5-XL backbone), was extended to encompass five separate test seeds. The outcomes reveal that MHR/MHR-z exhibit a statistically significant improvement (p < .05) over Poly, as evidenced by the WS-test results.

[1] Yu, Tianhe, et al. "Gradient surgery for multi-task learning." Advances in Neural Information Processing Systems 33 (2020): 5824-5836.

[2] Wang, Zirui, et al. "Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models." International Conference on Learning Representations (2021).

Pdf: /pdf/cb0c300d14bdfd21a306eb0b4896ae389c008d0f.pdf
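For reference, the test statistic behind a matched-pairs Wilcoxon signed-rank test can be computed from scratch as below. In practice one would use a library such as `scipy.stats.wilcoxon`, which also supplies p-values; this sketch returns only the statistic and is our own illustration, not the authors' analysis code.

```python
def wilcoxon_statistic(x, y):
    """Matched-pairs Wilcoxon signed-rank statistic W = min(W+, W-).

    Zero differences are dropped; tied |differences| receive average ranks.
    (Computing a p-value requires the null distribution or a normal
    approximation, which is omitted here.)
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Group ties on |difference| and assign them their average rank.
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

Here `x` and `y` would hold the per-task (or per-seed) scores of the two methods being compared, matched pairwise.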
NeurIPS_2023_submissions_huggingface
2023
One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning
Accept (poster)
Summary: This paper proposes a new model-based offline RL algorithm that learns a policy which is risk-averse (wrt aleatoric uncertainty measured by some dynamic risk measure) and pessimistic (wrt epistemic uncertainty from modeling). First, the algorithm learns a posterior distribution over MDP transitions given the dataset, using an ensemble of neural nets. Then, the algorithm samples successor states from the worst-case perturbation of the learned model, effectively modeling the Bellman equation for dynamic risk. Strengths: The method is clean and the high-level ideas are clearly explained. Extensive experiments are also encouraging and paper is pretty well written. Weaknesses: 1. My understanding is that this method is targeting the dynamic risk of the coherent risk measure with envelope B_p. However, all of the evaluations, as well as most of the paper, seem to suggest that 1R2R is a good algorithm for risk-neutral and static risk objectives (which the authors indeed show in experiments). It's not clear to me why optimizing for the dynamic risk should result in good performance for risk-neutral or static risk objectives, and so the method seems more hand-wavy. For example, if we assume that model learning succeeds, can we prove any PAC bounds wrt risk-neutral or static risk objectives? 2. It would be interesting to have some ablations on the success of 1R2R: are the improvements in performance mostly due to pessimism wrt aleatoric uncertainty, or epistemic uncertainty, or simply model-based offline RL? Also, please see Questions section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Why is Line 184 labeled "Problem 1"? I don't see a Problem 2. 2. Why do you take on the Bayesian perspective for learning \bar T? Another way of learning the model is the MLE, ie train a single neural net to maximize log likelihood of successor state, so I'm wondering why you choose to learn P(T|D) and then derive \bar T from that? 3. 
How is the adversarial perturbation actually computed in practice (Line 10)? Do you need to perform a two-stage optimization procedure? 4. How are the hyperparameters of the method selected, and how many online samples were used for hyperparameter tuning? (In theory, papers about offline RL should only be using offline samples for algorithm design, but in practice, this is almost always violated. So, it would be nice to report how many online samples were used, especially given that this paper introduces many more hyperparameters (i.e., Table 4)). 5. Have you compared with ATAC (Adversarially Trained Actor Critic for Offline Reinforcement Learning), which is one of the SOTA offline RL methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Please see weaknesses/questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable time you have spent reviewing our paper. We respond to your comments and questions below. ### Theory and Dynamic vs Static Risk We have **added theoretical justification** for why our approach avoids both epistemic and aleatoric uncertainty in Proposition 1 in the response to Reviewer RPWe. As you point out, our approach utilises dynamic risk and we evaluate our approach using static risk objectives. This is necessary because we cannot evaluate the performance for dynamic risk in practice. In Lines 355-365 of the paper we discuss this limitation. Dynamic and static risk can be related in the following way. Dynamic risk is equivalent to adversarially modifying the transition dynamics *independently* at every time step according to the risk envelope. Static risk is equivalent to adversarially modifying the transition dynamics at each time step subject to the constraint that the perturbation to each *entire trajectory* remains within the risk envelope (see e.g. [1, 2]). Thus, the optimal behaviour with respect to both static and dynamic risk is the optimal behaviour in an adversarially perturbed MDP, where the worst transitions are made more likely. This is why we expect optimal behaviour for dynamic risk to be similar to that for static risk. We are unaware of PAC bounds relating the two. We think this is an exciting direction for future work. [1] Chow, Yinlam, et al. "Risk-sensitive and robust decision-making: a CVaR optimization approach." NeurIPS, 2015. [2] Rigter, Marc, et al. "Planning for risk-aversion and expected value in MDPs." ICAPS, 2022. ### Additional Ablations As you have suggested, we have **expanded the ablations** in Tables 1 and 2 of the global response. The two key components of our approach are 1) risk-averse sampling and 2) using an ensemble of models. In the new results, we separately ablate each of these components. When either of these components is removed, we observe that training may diverge. 
This illustrates that both components are necessary to prevent value function instability resulting from backing-up out-of-distribution value estimates (i.e. avoiding epistemic uncertainty). The ablations also show that risk-averse sampling is necessary to achieve strong risk-averse performance to aleatoric uncertainty on the stochastic Currency Exchange domain. We have also added comparisons to more model-based RL algorithms, COMBO and MOPO. We observe that 1R2R easily outperforms these algorithms on the stochastic domains (Currency Exchange and HIV Treatment). This indicates that the strong performance of 1R2R is not solely due to being model-based. ### Question 1. In the final version we will use the name “Dynamic Risk Optimisation in Bayesian MDP” for our problem formulation, to make this clearer. ### Question 2. We define $\overline{T}$ using a distribution over models, $P(T | D)$, so that we can relate learning in an ensemble of models to risk-sensitivity. Recall that risk-measures require a *distribution* as input (they map distributions to scalar values), and that we require an ensemble of models to capture epistemic uncertainty. Given our ensemble of models, we need to be able to define $\overline{T}$: the distribution over successor states given the model ensemble. We do this using Equation 5 in the paper, which defines $\overline{T}$ using $P(T | D)$. In our practical implementation (Lines 246-247), we use a finite set of $M$ neural networks to define the model ensemble, and each of these networks is separately trained using MLE. We assume that $P(T | D)$ is a uniform distribution over these $M$ networks, i.e. $P(T | D) = \frac{1}{M}$ for all $T$ in the ensemble. This assumption means that when sampling from $\overline{T}$ in Line 8 of Algorithm 1, we first uniformly sample from the $M$ models in the ensemble, and then sample a successor from that model. ### Question 3. 
The optimisation problem in Line 10 of Algorithm 1 is over a discrete set of samples, making it straightforward. We first sort the samples from lowest to highest value successor states. Then, for the risk measures we consider (CVaR and Wang) the solution is computed in closed-form using Equations 11 and 12 in Appendix A.1. ### Question 4. To choose the hyperparameters, we evaluate each policy for 10 episodes in D4RL and 20 episodes in the stochastic domains at the end of training (for 5 seeds). Then, we choose the hyperparameters that obtained the best performance for the desired objective. We also optimise the hyperparameters for the baselines in this manner. ### Question 5. Additional Baselines In the Global Response, we have added **additional results** for ATAC as well as MOPO and COMBO (as requested by Reviewer RPWe). We observe that MOPO performs very poorly on all domains. 1R2R only slightly outperforms ATAC and COMBO on the D4RL domains. However, on the stochastic domains (Currency Exchange and HIV Treatment) 1R2R easily outperforms both ATAC and COMBO. This demonstrates that 1R2R outperforms current state-of-the-art algorithms. It also highlights that we should not rely on benchmarking only on deterministic environments. **Experiment Details** The D4RL results for ATAC, MOPO, and COMBO are from the original papers. For Currency Exchange and HIV Treatment, we ran the algorithms ourselves. For ATAC and MOPO we used the official implementations. For COMBO, there is no official implementation so we used the implementation from d3rlpy. We tuned the hyperparameters to obtain the best performance for the CVaR objective: - MOPO: rollout length and conservative weighting each tuned within {1, 5} following the original paper. - COMBO: the conservative weighting was tuned within {0.5, 1, 5} following the original paper. - ATAC: $\beta$ was tuned within {$0, 4^{-4}, 4^{-2}, 1, 4^2, 4^4$}.
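The closed-form minimisation over sorted samples described in Question 3 above can be illustrated with CVaR. For equally likely samples, CVaR at level alpha is the mean of the worst alpha-fraction of values, i.e. the expectation under the adversarially reweighted distribution. The sketch below is our own illustration of this idea, not the paper's Equation 11:

```python
def cvar(values, alpha):
    """CVaR_alpha of equally likely samples: the expected value of the worst
    alpha-fraction (an adversarial reweighting of the empirical distribution)."""
    assert 0 < alpha <= 1
    v = sorted(values)            # worst (lowest-value) successors first
    n = len(v)
    budget = alpha                # probability mass the adversary may place
    total = 0.0
    for x in v:
        w = min(1.0 / n, budget)  # each sample holds 1/n mass; last one partial
        total += w * x
        budget -= w
        if budget <= 0:
            break
    return total / alpha          # renormalise the perturbed distribution
```

With four sampled successor values `[1, 2, 3, 4]`, `cvar(values, 0.5)` averages the two lowest and returns 1.5, while `cvar(values, 1.0)` recovers the risk-neutral mean 2.5.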
Summary: > Our concerns are addressed by the authors. Thanks for your effort. We will upgrade the rating to weak accept. This paper introduces a model-based risk-averse algorithm that utilizes risk aversion as a mechanism to jointly address the distributional shift problem and risk-related decision-making problems in risk-sensitive offline RL. The authors employ a risk-averse risk measure to simultaneously perturb the belief distribution and transition functions in an adversarial manner, enabling risk aversion towards both epistemic uncertainty and aleatoric uncertainty. Risk aversion to epistemic uncertainty reduces transition probabilities to state-action pairs that are out-of-distribution, and risk aversion to aleatoric uncertainty helps avoid actions that are inherently risky. The authors conduct experiments in both deterministic and stochastic environments, confirming the superior performance of the 1R2R algorithm. Strengths: 1. The paper is well written. The original contributions are highlighted clearly. 2. The paper provides a concise and understandable introduction to the background and related work. It is highly reader-friendly, ensuring ease of comprehension for the intended readers. 3. The paper demonstrates a well-organized structure, and the approach of utilizing risk aversion to address the distributional shift problem is innovative and holds theoretical viability. Weaknesses: 1. While adopting a joint risk-aversion mechanism for both epistemic uncertainty and aleatoric uncertainty is simple and efficient, I think this approach is less flexible compared to previous methods which can address distributional shift and risk-sensitivity separately. 2. The design of ablation experiment merely removes the risk-sensitivity, which I think to be overly simplistic. 
The ablation experiments section requires expansion, such as comparing the computational costs with approaches using distributional value functions, and comparing the performance of dynamic risk measures against static risk measures. 3. I think 1R2R does not have a significant advantage over other baselines (e.g., RAMBO). Can the authors discuss this issue in more depth? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is the argmin performed in Line 10 of Algorithm 1? 2. In my opinion, instead of characterizing this approach as a risk-aversion method, I think it resembles a conservative value update method. By introducing adversarial perturbations to the transition function, it increases the likelihood of transitioning to low-value successor states. Can the authors clarify this? 3. For the experimental results of D4RL MuJoCo, how can you validate that the performance improvement of the 1R2R algorithm is indeed due to addressing the 'distributional shift' problem? 4. In Line 139, the equation is written as ($Z_{\mathrm{MDP}}=\sum_{t=0}^{\infty} \gamma R\left(s_t, a_t\right)$). Shouldn’t this be ($Z_{\mathrm{MDP}}=\sum_{t=0}^{\infty} \gamma^{t} R\left(s_t, a_t\right)$)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This approach is less flexible compared to the previous methods which can address distributional shift and risk-sensitivity separately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable time you have spent reviewing our paper. We respond to your comments and questions below. ### Flexibility of the Approach We agree that our approach is simpler yet less flexible than existing approaches that can separately adjust how they address each source of uncertainty. In Lines 370-374, we discuss this limitation. We also discuss that a straightforward extension of our work is to apply *composite* risk measures [1]. Composite risk measures allow the aversion to aleatoric vs epistemic uncertainty to be adjusted separately. This is a direction we wish to explore in future work. [1] Eriksson, Hannes, et al. "Sentinel: taming uncertainty with ensemble based distributional reinforcement learning." *UAI*, 2022. ### Ablations As you have suggested, we have **expanded the ablations** in Tables 1 and 2 of the global response. The two key components of our approach are 1) risk-averse sampling and 2) using an ensemble of models. We include an additional ablation so that we separately ablate each of these components. When either of these components is removed, we observe that training may diverge. This illustrates that both components of our approach are necessary to prevent value function instability resulting from backing-up out-of-distribution value estimates (i.e. avoiding epistemic uncertainty). The ablations also show that risk-averse sampling is necessary to achieve strong risk-sensitive performance to aleatoric uncertainty on the stochastic Currency Exchange domain. ### Advantage Over Baselines On the D4RL domains (which are deterministic) our approach obtains similar performance to some of the strongest baselines such as RAMBO. However, in the stochastic Currency Exchange domain (where the objective is to optimise CVaR) only 1R2R and CODAC are able to achieve strong risk-averse performance. In this domain, RAMBO performs well for average performance but not for the desired objective of CVaR. 
CODAC performs poorly on most of the other benchmarks. Thus, 1R2R is the only algorithm that achieves strong performance in the deterministic domains, as well as generating performant risk-averse behaviour in the stochastic domains. ### Question 1. Because the minimisation is performed over a discrete set of candidate successor states, it is a straightforward optimisation problem. To perform the minimisation, we first sort the sampled successor states from lowest to highest value. Then, the solution is computed in closed-form according to Equations 11 and 12 in Appendix A.1 for CVaR and the Wang risk measure, respectively. ### Question 2. Conservative value function methods (e.g. [2]) aim to learn a value function that lower-bounds the value function in the true environment. Risk-measures consider mappings from *distributions* to scalar values. This means that risk-measures can be viewed as the expectation under an adversarially perturbed distribution [3]. Our approach optimises the policy under an adversarially perturbed transition function, which is why we characterise it as a risk-sensitive approach. Under some conditions, it may also be the case that the value function that our algorithm computes is a lower bound on the true value function. If this is the case, our approach can also be considered conservative in this sense. Exploring this connection is an exciting direction for future work. [2] Kumar, Aviral, et al. "Conservative Q-learning for offline reinforcement learning." NeurIPS, 2020. [3] Artzner, Philippe, et al. "Coherent measures of risk." Mathematical finance, 1999. ### Question 3. If we naively apply online RL algorithms in the offline setting, the RL algorithm will back-up values estimated for out-of-distribution (OOD) state-action pairs, where the value function is inaccurate. This is referred to as the “distributional shift” problem. 
Because RL algorithms choose the highest value actions, this leads to the policy selecting OOD state-action pairs where the Q-value is overestimated. Repeatedly performing Bellman backups in this manner often leads to the value function diverging (see [4], Figure 1). In the results for our algorithm, we do not observe this issue of the value function diverging. However, if we ablate either component of our approach (either the ensemble or risk-averse sampling), then for some datasets the value function is unstable and diverges. These runs are labelled “div.” in the results when the value function reaches a magnitude greater than 1e9. See Table 1 of the Global Response. Thus, the fact that our algorithm avoids this issue of value function instability indicates that our algorithm prevents the value function being updated on transitions to out-of-distribution state-action pairs. This is because we do not see an alternative reason for the instability of the ablated algorithms, which operate using the same dataset and hyperparameters. Therefore, this is a strong indication that our algorithm mitigates the issue of distributional shift. [4] Kumar, Aviral, et al. "Stabilizing off-policy Q-learning via bootstrapping error reduction." NeurIPS, 2019. ### Question 4. Thank you for pointing out this typo; we will fix it. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing clear explanations and responses to each of my questions; this is a responsible and persuasive rebuttal. I believe that the updated ablation experiment section is more complete than before, and the addition of baselines has made the experimental results more convincing. Additionally, the authors have acknowledged the limitations of their work and have listed potential solutions for future study.
Summary: This paper considers the problem of offline reinforcement learning for risk-averse decision-making under distributional shift. The core insight of this paper is to incorporate epistemic uncertainty (from the distributional shift) and aleatoric uncertainty (from usual statistical errors) together and develop a way to simply penalize high uncertainty, which can handle both problems simultaneously. Strengths: I think the biggest strength of this paper is the novel idea of combining epistemic uncertainty from the distributional shift with aleatoric uncertainty, and using a simple risk-aversion algorithm to handle both problems. I think this key idea can be applied not only to offline reinforcement learning but also to other areas where distributional shifts are key, e.g., prediction and causal inference problems. Weaknesses: I do not think there is any obvious weakness in this paper. I only have some suggestions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **How to choose the distribution over MDPs to capture the epistemic uncertainty?** One of the key ideas in this paper is to use a distribution over MDPs to represent the epistemic uncertainty about the real environment. As far as I understand, on page 7 (Implementation Details), the authors suggest using an ensemble of neural networks to estimate different distributions over the MDPs. It seems to me that this choice of model classes is actually the fundamentally important part of capturing epistemic uncertainty. For example, if we include a large number of different neural networks (e.g., more than 100), the epistemic uncertainty is guaranteed to be larger than when we include a smaller number of neural networks (e.g., 5).
**But, without an explicit model or assumption about the distributional shifts, how can researchers choose what models to estimate MDPs for approximating the epistemic uncertainty?** **In Section 5 (Experiments), how did the authors choose the model class for capturing the epistemic uncertainty?** I think this is an important question as there is an inherent tradeoff --- if we include more models, we can be more robust to a wide range of potential distributional shifts, but if we include too many models, we might be too conservative. **Is there any theoretical guidance about how to approximate the epistemic uncertainty?** Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper clarifies its limitation clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable time you have spent reviewing our paper. We agree that deciding how to represent epistemic uncertainty in deep learning is an important problem, and a key aspect of our work. We used a small ensemble of 5 neural networks to ensure that our work is a fair comparison to previous works, which also use a small ensemble. MOPO [1], COMBO [2], and RAMBO [3] each use an ensemble of 5 models, and MOReL [4] uses an ensemble of 4 models. Like our work, MOPO and MOReL use these small ensembles to estimate epistemic uncertainty. We did conduct experiments using a larger ensemble of 15 models. These results can be found in Table 7 in Appendix C.4. We found that the results obtained with a larger ensemble are similar to our main results which utilise the small ensemble. The influence of the size of the model ensemble on the uncertainty estimate *depends on the type of uncertainty estimate used*. The recent work [5] analyses this empirically in the context of offline RL. The variance or standard deviation between the members of the ensemble is fairly stable as the size of the ensemble changes - see Figure 2 of [5]. However, other metrics, such as the maximum disagreement between any two members of the ensemble, increase significantly as the size of the ensemble increases (again, see Figure 2 of [5]). In the latter case, we would indeed expect the uncertainty estimate to be highly sensitive to the number of models in the ensemble. In our approach, the risk-sensitive Bellman equation (Equation 4) penalises the *variability* in samples drawn from the ensemble. As stated above, the variance between ensemble members is quite stable as the size of the ensemble increases. Thus, we do not expect our approach to be highly sensitive to the size of the ensemble, as supported by the results in Table 7. 
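The claim above, that the between-member standard deviation is comparatively stable in ensemble size while the maximum pairwise disagreement grows, can be illustrated with a toy simulation. The sketch below is purely illustrative: it uses synthetic Gaussian "predictions" rather than the paper's learned models, and the parameter values are arbitrary.

```python
import random
import statistics

random.seed(0)

# Toy setting: each ensemble member's mean prediction for a fixed (s, a)
# is a draw from N(0, 1). Draw 15 members; the first 5 form a nested
# smaller ensemble, mirroring the 5- vs 15-model comparison above.
members = [random.gauss(0.0, 1.0) for _ in range(15)]
small, large = members[:5], members

def std_between(preds):
    # Standard deviation across ensemble members: the kind of variability
    # penalised by a risk-sensitive backup over samples from the ensemble.
    return statistics.pstdev(preds)

def max_disagreement(preds):
    # Maximum disagreement between any two members of the ensemble.
    return max(preds) - min(preds)

print("std, 5 vs 15 members:", std_between(small), std_between(large))
print("max disagreement, 5 vs 15 members:",
      max_disagreement(small), max_disagreement(large))

# Adding members can only widen (never shrink) the max disagreement:
assert max_disagreement(large) >= max_disagreement(small)
```

Because the 15-member ensemble contains the 5-member one, the maximum disagreement is monotone in ensemble size, whereas the between-member standard deviation estimates a fixed population quantity and so stays comparatively stable, consistent with the empirical observation attributed to [5].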
Furthermore, we present a theoretical analysis in Proposition 1 in the response to Reviewer RPWe that thoroughly analyses how our approach mitigates epistemic and aleatoric uncertainty in an ensemble of models. [1] Yu, Tianhe, et al. "Mopo: Model-based offline policy optimization." NeurIPS (2020). [2] Yu, Tianhe, et al. "Combo: Conservative offline model-based policy optimization." NeurIPS (2021). [3] Rigter, Marc, Bruno Lacerda, and Nick Hawes. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." NeurIPS (2022). [4] Kidambi, Rahul, et al. "Morel: Model-based offline reinforcement learning." NeurIPS (2020). [5] Lu, Cong, et al. "Revisiting Design Choices in Offline Model-Based Reinforcement Learning." ICLR, 2022. --- Rebuttal Comment 1.1: Comment: Thank you so much for the rebuttal and your detailed clarification. These answered my questions well!
Summary: One Risk to Rule Them All (1R2R) is an offline-RL method that seeks to reduce aleatoric and epistemic uncertainty. Their method is simple; the authors introduce the notion of risk in the Bellman update by adding a learned adversarial perturbation to the learned transition dynamics models in the model-based setting. This is different from prior risk-based methods, which adversarially perturb the value function as opposed to the TD function. Strengths: - The narrative/flow is clear; this work focuses on minimizing both aleatoric and epistemic uncertainty, both of which hamper offline RL agent performance. - The proposed methodology is simple: just add a learned adversarial perturbation to the TD model. - Figure 1 is an excellent figure that illustrates that the learned risk-aware value function penalizes both uncertainties. Yet, it is difficult to absorb/fully understand, and it might be better to split it into two subfigures. For example, highlight regions with different colors outside of the dataset and regions where TD uncertainty is high. (It would also be nice to compare the value function to CQL's, to show that CQL might be unaware of aleatoric uncertainty.) - The evaluation is thorough and conducted over difficult environments with large continuous action spaces. Weaknesses: - The intro was not clear. At the end, it says that epistemic uncertainty is avoided through model-ensemble variance. However, the analogy to aleatoric uncertainty was not made clear. It might be better to state outright that risk-averse RL applied in the offline setting reduces both uncertainties for reasons X and Y. - The authors claim the method is simpler than prior work due to only considering risk; however, it incorporates model-based approaches, which are much more complicated and much harder to make work in practice. - Why does risk-averse RL reduce aleatoric uncertainty? (It would be good to show the math in an appendix.) It seems to be a miraculous cure to a problem.
- Methodologically, this paper is not novel; it seems more like A + B, where A = the model-based ensemble in MOPO and B = the risk-averse RL objective with CVaR, applied in the offline RL setting. The real novelty seems to stem from identifying the applicability of risk in offline RL. - The Risk background is difficult to understand; I suggest removing it and immediately diving into the Bellman risk-sensitive representation. - It'd be nice to have a figure illustrating the final algorithm architecture, as well as typical reward vs. training iteration graphs in the evaluation. - Where is the evaluation against MOPO/COMBO, other model-based offline RL methods? It'd be worthwhile to show that this work is the SOTA model-based offline RL approach. - Suggest removing one of the evaluation environments and focusing more on ablation and analysis. - Why does 1R2R flounder on the expert dataset (medium expert)? - Another way to avoid such uncertainty is the field of safe RL, which seeks to minimize constraint violations while maximizing environmental reward (Recovery RL). There is a lot of prior work in this field that is somewhat glossed over in this paper. - Nit: There are several typos, including in Algorithm 1 line 9, and in the Section 3 Static and Dynamic Risk MDP formulation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Unrelated, but it would be nice to have a model-free version, as I'd generally avoid model-based approaches (as does industry). For example, one could add the adversarial perturbation to the distributional value function (instead of the learned TD function), despite prior work.
- I'd upgrade this paper to strong accept (or possibly higher) if it explained WHY adding risk-aversion reduces epistemic and aleatoric uncertainty, as this is a very beautiful insight. Bonus if there is math. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time you have spent reviewing our paper. We respond to your main comments below. We are unable to respond to all comments due to space constraints. ### Why does our approach reduce both epistemic and aleatoric uncertainty? In Proposition 1, and the explanation that follows, we provide mathematical justification for why our approach reduces both epistemic and aleatoric uncertainty. We will add this to the final paper. In Proposition 1, we assume that each transition function in the ensemble is Gaussian, with standard deviation $\sigma_A$ representing the aleatoric uncertainty. We also assume that the distribution of the means of each member of the ensemble is Gaussian with standard deviation $\sigma_E$ (representing epistemic uncertainty). Proposition 1 and the corollary that follows demonstrate that our approach penalises taking actions that have *either* large $\sigma_A$ or $\sigma_E$. We discuss Proposition 1 in detail below. --- **Proposition 1:** Consider some state $s$ in a 1D state space, and some action $a$. Assume 1) that there is an ensemble of $M$ Gaussian transition functions, each denoted $T_i$ with mean $\mu_i$ and standard deviation $\sigma_A$: $\\{T_i(s' | s, a) = \mathcal{N}(\mu_i, \sigma_A^2) \\}_{i=1}^M$; and 2) that the mean of each Gaussian, $\mu_i$, is normally distributed with mean $\mu_0$ and standard deviation $\sigma_E$: $\mu_i \sim \mathcal{N}(\mu_0, \sigma_E^2)$. According to Eq. 5, to sample a successor state $s'$ from $\overline{T}$ we must sample a transition function $T_i$ from the ensemble, and then sample $s' \sim T_i(s' | s, a)$. Assume that the value of the successor state is linear around $\mu_0$ with some linearity constant, $K$: $$ V(s') = V(\mu_0) + K (s' - \mu_0) $$ Then $V(s')$ is distributed according to: $V(s') \sim \mathcal{N}\Big(\mu = V(\mu_0), \sigma^2 = K^2(\sigma_A^2 + \sigma_E^2) \Big)$. 
**Proof**: The joint probability of sampling an ensemble member with mean, $\mu_i$, and then sampling a successor state, $s'$, is: $$ P(\mu_i, s') = P(\mu_i) P(s'\ |\ \mu_i). $$ We marginalise over $\mu_i$: $$ P(s') = \int_{-\infty}^\infty P(\mu_i) P(s'\ |\ \mu_i) \mathrm{d} \mu_i  $$ $P(\mu_i)$ and $P(s'\ |\ \mu_i)$ are both Gaussian. Substituting the appropriate probability density functions, we have that $$ P(s') = \int_{-\infty}^\infty \frac{1}{\sigma_E \sqrt{2\pi} } \exp\left(- \frac{(\mu_i - \mu_0 )^2}{2 \sigma_E^2}\right) \frac{1}{\sigma_A \sqrt{2\pi} } \exp\left(- \frac{(s' - \mu_i )^2}{2 \sigma_A^2}\right) \mathrm{d} \mu_i \\ = \frac{1}{\sqrt{\sigma_A^2 + \sigma_E^2} \sqrt{2\pi} } \exp\left(-\frac{(s' - \mu_0 )^2}{2 (\sigma_E^2 + \sigma_A^2)}\right) = \mathcal{N}(\mu_0, \sigma_E^2 + \sigma_A^2). $$ Thus, $s'$ is normally distributed with mean $\mu_0$ and standard deviation $\sqrt{\sigma_E^2 + \sigma_A^2}$. The random variable $K(s' - \mu_0)$ is therefore normally distributed with mean 0 and standard deviation $|K|\sqrt{\sigma_E^2 + \sigma_A^2}$. Finally, $V(s') = V(\mu_0) + K (s' - \mu_0)$ is also normally distributed with mean $V(\mu_0)$ and standard deviation $|K|\sqrt{\sigma_E^2 + \sigma_A^2}$. $\square$ --- In Proposition 1, $\sigma_E$ defines the level of disagreement between the members of the ensemble, and therefore represents the level of epistemic uncertainty. $\sigma_A$ represents the aleatoric uncertainty. The proposition tells us that if either $\sigma_A$ or $\sigma_E$ is high, then there is high variability in the value of the successor state when sampling from the ensemble (i.e. sampling from $\overline{T}$ in Eq. 5). Risk measures penalise high variability. The risk-sensitive Bellman equation (Eq. 4) applies a risk measure to the value of the successor state. Therefore, applying Eq. 4 to samples from the ensemble penalises executing state-action pairs for which either $\sigma_A$ or $\sigma_E$ is high.
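As a numerical sanity check of Proposition 1 (not part of the paper; the parameter values below are arbitrary), the hierarchical sampling scheme can be simulated and the variance of the linearised value compared against $K^2(\sigma_A^2 + \sigma_E^2)$:

```python
import random
import statistics

random.seed(0)

mu_0, sigma_E, sigma_A, K = 0.0, 0.4, 0.3, 2.0
V_mu0 = 1.0  # arbitrary value at mu_0

def sample_value():
    mu_i = random.gauss(mu_0, sigma_E)    # sample an ensemble member's mean
    s_next = random.gauss(mu_i, sigma_A)  # sample a successor state from T_i
    return V_mu0 + K * (s_next - mu_0)    # linearised value V(s')

values = [sample_value() for _ in range(200_000)]

predicted_var = K**2 * (sigma_A**2 + sigma_E**2)  # Proposition 1: K^2 (0.09 + 0.16) = 1.0
empirical_var = statistics.pvariance(values)

assert abs(statistics.fmean(values) - V_mu0) < 0.02
assert abs(empirical_var - predicted_var) < 0.05
```

The Monte Carlo variance matches the closed form, reflecting that aleatoric and epistemic spread add in quadrature under the Gaussian assumptions of the proposition.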
Thus, our approach penalises actions with *either* high aleatoric or epistemic uncertainty. It therefore favours choosing actions that have *both* low aleatoric and epistemic uncertainty. The following corollary uses the example of conditional value at risk (CVaR) to show how the value computed in the risk-sensitive Bellman equation decreases as either $\sigma_A$ or $\sigma_E$ increases. --- **Corollary:** Under the assumptions in Proposition 1, the CVaR at confidence level $\alpha$ of the value of the successor state is: $$ \mathrm{CVaR}_\alpha\big(V(s')\big) = V(\mu_0) - \frac{|K|\sqrt{\sigma_E^2 + \sigma_A^2}}{\alpha \sqrt{2 \pi}} \exp\left(-\frac{1}{2}\big( \Phi^{-1}(\alpha) \big)^2\right) $$ where $\Phi^{-1}$ is the inverse of the standard normal CDF. **Proof:** This follows directly from Proposition 1 and the formula for the CVaR of a Gaussian random variable [1]. [1] Khokhlov, V. "Conditional value-at-risk for elliptical distributions." European Journal of Economics and Management (2016). --- ### Extra Figures In Fig 1 of the Global Response, we have added a summary figure. We will add this to the paper. We have also included example plots of the performance vs training iterations. We will add these plots for all datasets to the final paper. ### Extra Ablation We have added an **additional ablation**. See the response to Reviewer 8yT3 for details. ### MOPO/COMBO We have **expanded the results** to include MOPO, COMBO, and ATAC. See the response to Reviewer unPm for details. ### Medium-Expert Datasets Poor performance of model-based approaches on high-quality datasets is an established issue [1, 2]. The cause of this is an open question in the community which we wish to investigate in future work. Model-based methods are generally strongest on noisy sub-optimal data [1]. [1] Lu, C, et al. "Challenges and opportunities in offline reinforcement learning from visual observations." TMLR, 2023 [2] Rigter, M, et al. "Robust adversarial model-based offline reinforcement learning."
NeurIPS, 2022 ### Related Work We will expand the related work to discuss constrained RL and recovery RL.
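As an aside on the corollary above: the closed-form Gaussian CVaR can be cross-checked against an empirical tail average. This sketch is illustrative only (arbitrary parameter values), using the convention that CVaR at level alpha is the mean of the worst alpha-fraction of outcomes.

```python
import math
import random
from statistics import NormalDist, fmean

random.seed(0)
alpha, mu, sigma = 0.1, 0.0, 1.0

# Closed-form lower-tail CVaR of N(mu, sigma^2), matching the corollary
# with V(mu_0) -> mu and |K| * sqrt(sigma_E^2 + sigma_A^2) -> sigma:
z = NormalDist().inv_cdf(alpha)  # Phi^{-1}(alpha)
cvar_closed = mu - sigma / (alpha * math.sqrt(2 * math.pi)) * math.exp(-0.5 * z * z)

# Empirical check: mean of the worst alpha-fraction of Gaussian samples.
samples = sorted(random.gauss(mu, sigma) for _ in range(200_000))
cvar_empirical = fmean(samples[: int(alpha * len(samples))])

assert abs(cvar_closed - cvar_empirical) < 0.05
```

The empirical tail mean agrees with the closed form, and the formula makes the corollary's point explicit: the CVaR falls below the mean by an amount proportional to the total spread, here `sigma`, standing in for the combined aleatoric and epistemic term.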
Rebuttal 1: Rebuttal: Thank you for your reviews. Please find attached the Global Response. In the attachment, we provide updated results which include an **additional ablation**, as well as comparisons to **additional baseline algorithms**: MOPO, COMBO, and ATAC. We also include additional figures. The summary figure (Figure 1 in the global response) will be added to the final version of the paper. We have also provided examples of performance vs training iterations (Figure 2 in the global response). These figures will be added for all datasets in the final version. Finally, we have also added stronger **theoretical justification** for why our approach reduces both aleatoric and epistemic uncertainty. Please see Proposition 1 in the response to Reviewer RPWe. This theoretical motivation will also be added to the final version of our paper. Please let us know if there is anything further that you would like to discuss! Kind regards, The authors Pdf: /pdf/1323bcd0e29f7ed733156af7bc15df5b053447ad.pdf
NeurIPS_2023_submissions_huggingface
2023
Multi-Objective Intrinsic Reward Learning for Conversational Recommender Systems
Accept (poster)
Summary: In this paper, the authors propose an algorithm for learning the intrinsic reward function in order to address the problem of poor results caused by improper design of the reward function in the dialogue strategy module of current conversational recommender systems. Specifically, a multi-objective bi-level optimization problem is designed, where the inner level uses the learned reward function to optimize the selection strategy, and the outer level updates the reward function so as to optimize the system metrics. Excellent results are achieved on two conversational recommendation datasets. Strengths: It is an interesting research direction to design an internal reward based on the results of each interaction, in addition to the external reward generated by the conversation outcome, satisfying two recommendation goals through a multi-objective bi-level optimization framework. A hindsight internal reward function was designed to calculate the reward score of the current strategy after each target item is hit. In order to obtain successful recommendation results faster and thus reduce the number of conversation rounds, the authors propose the Recommendation Preference Matching module, which improves the likelihood of selecting the right decision. The authors designed a Multi-Objective Bi-Level Optimization strategy in which the inner loop, using the two rewards, optimizes the decisions of the conversation, while the outer loop optimizes the reward function. The experiment design is comprehensive and solid, and the result analysis is clear and convincing. Weaknesses: First of all, the baselines are not sufficient: the CRIF model published at SIGIR in 2022 is not compared, and judging from the results reported for CRIF, your performance is worse than it; this needs to be verified by adding more experiments.
I can't understand the example you showed in the Case Study: pop and rock are very common labels, and Franz Ferdinand also carries the pop rock label, so why was pop given a negative score? Moreover, the contribution of this article lies mainly in the conversation policy module, yet in the second step it asks a completely useless question; is this a policy failure? Your work covers only the dialogue policy module, and in the experimental part we can see that it achieves good recommendation performance, but the article does not describe how the recommendation results are obtained; please explain. It is suggested to include a significance test to prove the validity of the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive suggestions to strengthen the empirical support of our work by adding more advanced baselines and significance testing. --- **[Q1]** Compare with CRIF **[A1]** We discussed the CRIF model in our appendix and reported the experimental comparisons with it in Appendix B. Please kindly check it in the supplementary material. CRIF depends on a heuristic to identify the best action at each round of interaction. However, this heuristic exploits the design of the user simulator, in which all attributes of the target item are always accepted. This creates a form of information leakage and an unfair advantage over other solutions that do not exploit this specific knowledge. Thus we did not include it in the main paper. However, it is still interesting to understand whether the learned intrinsic rewards can augment this strong heuristic. The experimental results in Appendix B show that learning intrinsic rewards directly from user interactions offers an effective augmentation to CRIF, resulting in better performance. We will explicitly refer to this study and its conclusions in the main paper. --- **[Q2]** Recommendation results **[A2]** Building upon [1], we employ a unified policy in which the action space consists of both attributes and items. When the policy selects an attribute action, the agent inquires if the user prefers that specific attribute. Conversely, if an item action is chosen, the agent compiles a recommendation list for the user, with items ranked based on the scores assigned by the policy. A better policy is recognized by its ability to not only strategize questions more effectively (i.e., the average turn metric) but also produce an optimized recommendation list (i.e., the ranking quality). To evaluate the quality of the recommendation list, we utilize the hDCG metric. We will add a detailed description of how to obtain the recommendation results in the final version.
[1] Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. Unified conversational recommendation policy learning via graph-based reinforcement learning. arXiv preprint arXiv:2105.09710, 2021. --- **[Q3]** Clarification of the Case Study **[A3]** Thank you for your insightful observation regarding the Case Study! In the provided example, even though "pop" was accepted by the user, it received a negative intrinsic reward because it is a broad genre, associated with too many artists. This means that while "pop" may be liked, it is still not the most informative attribute for refining recommendations at that moment. As a result, the intrinsic reward function gives "pop" a slightly negative reward, despite Franz Ferdinand falling under "pop rock". While the questions asked may occasionally seem redundant, this is expected, as the policy tries to avoid jumping to conclusions based on limited information (e.g., recommending at the beginning of the conversation). We see it as laying a foundational understanding of a user's preferences. --- **[Q4]** Significance test **[A4]** Thanks for your suggestion! All the experiments are repeated 3 times with different random seeds, and we report the final averaged metrics. We will add the results of the significance test to ensure the validity of the results.
Summary: This paper studies the problem of reinforcement-learning-based conversational recommender systems. The paper claims that it is difficult to design a handcrafted reward function for each step of the conversations. Thus the paper proposes a multi-objective bi-level optimization method in which the inner level optimizes the policy with the real rewards and the learned intrinsic rewards, and the outer level optimizes two CRS objectives: maximizing the success rate and minimizing the number of turns in successful conversations. The paper validates the effectiveness of the proposed methods on 3 CRS datasets. Strengths: 1. The motivation of the paper is very clear. 2. The idea of hindsight reward shaping is sound and interesting. 3. The multi-objective bi-level optimization is also interesting and easy to follow. 4. The paper provides detailed introductions of the evaluation metrics, and the performance improvement is significant. 5. The paper shows the ablation study of the HRS and the RPM components. Weaknesses: 1. The major concern with the paper is that it is not clear how the proposed techniques work together. First of all, the intrinsic reward function defined in Eq.(3) is related to the rank of the target item under the state. How is it related to the parameter \phi? 2. How does the RPM module work? It seems we train the policy with Eq.(7). Is it related to the intrinsic reward? 3. The performance improvement of CRSIRL is marginal compared with MTL, which calls into question whether the MGDA algorithm is needed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See the weakness.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the motivation of our paper and for finding our idea interesting. We hope the following responses can address the reviewer's concerns. --- **[Q1]** Clarification about how the components of CRSIRL work together **[A1]** We develop a bi-level optimization framework to learn intrinsic rewards from users' explicitly provided reward signals for CRS. Our solution consists of an inner loop and an outer loop for the induced bi-level optimization problem. In the inner loop, we update the policy $\pi$ with the learned intrinsic reward (parameterized by $\phi$) to obtain an updated policy $\pi'$. In the outer loop, we update $\phi$ by calculating the meta loss $\mathbb{L}$ induced by $\pi'$ over the users' provided explicit reward signal. Intuitively, we hope the learned intrinsic rewards can help us find a better policy $\pi'$, leading to a lower meta loss. More specifically, to obtain the gradient of $\phi$, in the inner loop we create a differentiable connection between $\pi$ and $\phi$ via the gradient update (Eq.9). Such a connection allows us to calculate the gradient of $\phi$ w.r.t. the meta loss using the chain rule (Eq.10 and 13). After obtaining the gradient of $\phi$ in the outer loop, we are able to update it using Eq.14. To better leverage the explicit reward from user feedback, we design two CRS-specific objectives, HRS (Eq.3) and RPM (Eq.7), and construct the multi-objective meta loss $\mathbb{L}$. Hence, we do not directly use HRS and RPM to update the policy, but use them to guide the learning of the intrinsic reward. We will add the pseudocode of CRSIRL to facilitate better understanding in the final version. --- **[Q2]** Performance comparison between CRSIRL and MTL **[A2]** While MTL is a strong baseline that leverages our proposed HRS and RPM objectives, its static weight setting between the two objectives makes it less adaptable in different environments.
This rigidity can result in one objective dominating the other. For example, in Table 3, MTL is skewed towards optimizing the average turn on the LastFM dataset. In contrast, CRSIRL does not merely establish a simple trade-off between the two objectives via a manually set hyperparameter, but strives to optimize them concurrently (as explained in lines 221-234). By utilizing the MGDA algorithm, CRSIRL can flexibly adjust the weights between objectives, striving for a Pareto optimal solution, as supported by the findings in [1]. This ensures simultaneous optimization of both objectives without one excessively dominating the other. Our multi-objective framework not only theoretically avoids suboptimal outcomes but also demonstrates better empirical results over MTL, as observed in Table 3. Furthermore, our experiments were repeated three times with different random seeds, and the improvement brought by CRSIRL over MTL has been found to be statistically significant ($p<0.05$). We'll be incorporating the significance test results in the final version of the paper. [1] Désidéri, Jean-Antoine. "Multiple-gradient descent algorithm (MGDA) for multiobjective optimization." Comptes Rendus Mathematique 350.5-6 (2012): 313-318. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the rebuttal. I am now pleased with the paper and I have increased the score.
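For two objectives, the MGDA weighting discussed in this rebuttal admits a simple closed form: the minimum-norm point in the convex hull of the two gradients [1]. The sketch below is a generic illustration of that closed form, not the paper's implementation.

```python
def mgda_two_task(g1, g2):
    """Min-norm convex combination gamma*g1 + (1-gamma)*g2 of two gradient vectors.

    Closed-form solution for the two-task case:
      gamma* = clip( ((g2 - g1) . g2) / ||g1 - g2||^2, 0, 1 )
    """
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:  # identical gradients: any weighting gives the same update
        return 1.0, list(g1)
    gamma = sum((b - a) * b for a, b in zip(g1, g2)) / denom
    gamma = min(1.0, max(0.0, gamma))
    combined = [gamma * a + (1.0 - gamma) * b for a, b in zip(g1, g2)]
    return gamma, combined

# Equally conflicting gradients: both objectives receive equal weight, and the
# combined direction has non-negative inner product with each gradient.
gamma, g = mgda_two_task([1.0, 0.0], [0.0, 1.0])
print(gamma, g)  # 0.5 [0.5, 0.5]
```

Because `gamma` is adapted per step rather than fixed in advance, the combined update rebalances whenever one objective's gradient starts to dominate, which matches the rebuttal's point that MGDA avoids one objective excessively dominating the other.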
Summary: Mainstream reinforcement learning-based CRS solutions heavily rely on handcrafted reward functions, which may not be aligned with user intent in CRS tasks. Therefore, the design of task-specific rewards is critical to facilitate CRS policy learning, which remains largely under-explored in the literature. This paper proposes a novel approach to address this challenge by learning intrinsic rewards from interactions with users. Specifically, the paper formulates intrinsic reward learning as a multi-objective bi-level optimization problem. Extensive experiments on three public CRS benchmarks show that the developed algorithm significantly improves CRS performance by exploiting informative learned intrinsic rewards. Strengths: (1) The paper is well motivated, with a convincing motivating example. (2) The developed technique is solid and has clear intuitions. (3) Clear improvements on multiple datasets are observed in the experiments. Weaknesses: (1) More experimental details could be included. Previous works usually show curves reporting the performance over different training episodes and different turns of conversations, for example the baselines: Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. Unified conversational recommendation policy learning via graph-based reinforcement learning. arXiv preprint arXiv:2105.09710, 2021. Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In Proceedings of the 13th WSDM Conference, pages 304–312, 2020a. Learning a more generic reward function usually needs more samples than learning with a handcrafted reward function, although after convergence the generic reward function can lead to better performance.
It would be interesting if the paper could report SR@K, where K is a small number, because the user experience in early interactions is very important in conversational recommendation. (2) The mode of user interaction in Figure 1 (System Ask, User Respond) may become less interesting, especially nowadays, when users can progressively and flexibly describe what they want via LLMs with decent accuracy in understanding the desired attributes. For example, as shown in Table 2, with all the algorithms in this setting (System Ask, User Respond), even after 15 interactions, very few can achieve satisfying recommendations. However, if the user can progressively and flexibly describe what they want via LLMs, it is likely that the user can get properly recommended items within only very few interactions. Even if the user is not progressive in some cases, a hybrid setting (where the user can either respond yes/no or progressively describe what they want during the conversation) is more practical. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) What are the comparison results, using the metric SR@K, where K is a small number (e.g., 3 or 5)? (2) Are the inner loop optimization and outer loop optimization performed in an alternating fashion during training? If so, is there any insight or theoretical analysis about the convergence of the training procedure? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations seem not to be discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive comments on enriching the experiment results and on a more practical CRS paradigm. --- **[Q1]** What are the comparison results, using the metric SR@K, where K is a small number (e.g., 3 or 5)? **[A1]** We report the SR@5 of CRSIRL, compared with the strongest baseline UNICORN:

| | LastFM | LastFM* | Yelp* |
|:-------:|:------:|:-------:|:-----:|
| UNICORN | 0.104 | 0.215 | 0.068 |
| CRSIRL | 0.262 | 0.324 | 0.182 |

This notable performance gain can be attributed to CRSIRL's strategy of posing more informative questions early in the conversation, allowing it to make more accurate recommendations as the conversation progresses. We will include more experiment details in the final version. --- **[Q2]** Are the inner loop optimization and outer loop optimization in an alternating fashion during training? If so, is there any insight or theoretical analysis about the convergence of the training procedure? **[A2]** The inner and outer loops operate in an alternating fashion, as in other solutions for bi-level optimization. Please refer to the general response CQ1 for the convergence analysis of CRSIRL. --- **[Q3]** Progressive CRS paradigm **[A3]** Progressive CRS presents a more reasonable paradigm, allowing users to take a more active role in the conversation by explicitly describing their preferences. The idea of learning CRS with such actively participative users is interesting. However, at the time of our paper's submission, creating a progressive simulator based on LLMs was still an emerging field of study [1]. It is important to clarify that designing such a progressive user simulator, or any specific type of user simulator, is not the focus of our paper. Instead, our primary objective is the effective learning of optimal reward functions for CRS. For evaluations, we employed the widely-accepted "System Ask, User Respond" [2] paradigm.
While the fundamental mechanism of reward learning remains consistent irrespective of the CRS setting, we believe that the integration of our proposed method within the progressive CRS paradigm will be a seamless process. Exploring this integration further is a promising direction and we leave it as important future work. [1] Wang, Xiaolei, et al. "Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models." arXiv preprint arXiv:2305.13112 (2023). [2] Zhang, Yongfeng, et al. "Towards conversational search and recommendation: System ask, user respond." Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 2018. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: Thanks for the responses. I acknowledge that I have read the responses and would keep the score.
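For context on the SR@K numbers reported in [A1] above: SR@K is conventionally the fraction of conversations that reach a successful recommendation within the first K turns. A minimal illustrative sketch with hypothetical session data (the function name and data are ours, not from the paper):

```python
def success_rate_at_k(success_turns, k):
    """SR@K: fraction of sessions with a successful recommendation
    within the first k turns. `success_turns` holds, per session, the
    1-indexed turn of success, or None if the session never succeeded."""
    hits = sum(1 for t in success_turns if t is not None and t <= k)
    return hits / len(success_turns)

# Hypothetical session outcomes, for illustration only.
sessions = [3, None, 5, 12, None, 2]
print(success_rate_at_k(sessions, 5))   # 3 of 6 sessions succeed by turn 5 -> 0.5
print(success_rate_at_k(sessions, 15))  # 4 of 6 succeed within the 15-turn budget
```

Small K therefore rewards systems that ask informative questions early, which is the behavior the rebuttal attributes the gain to.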
Summary: The paper addresses the problem of designing effective reward functions for conversational recommender systems (CRS), which is critical but largely under-explored in the literature. The paper proposes a novel approach to learn intrinsic rewards from user feedback, which can better capture the user intent and optimize multiple CRS-specific objectives, such as success rate and conversation length. The paper formulates intrinsic reward learning as a multi-objective bi-level optimization problem, and develops an algorithm to solve it. The paper evaluates the proposed approach on three public CRS benchmarks, and shows that it significantly improves CRS performance by exploiting informative learned intrinsic rewards. Strengths: 1) Originality: The paper proposes a novel approach to learn intrinsic rewards from user feedback, which can better capture the user intent and optimize multiple CRS-specific objectives. This represents a creative combination of existing ideas and a new problem formulation in the field of conversational recommender systems. 2) Quality: The paper formulates intrinsic reward learning as a multi-objective bi-level optimization problem, and develops an online algorithm to solve it. The proposed approach is rigorously evaluated on three public CRS benchmarks, and shows significant improvement in CRS performance by exploiting informative learned intrinsic rewards. 3) Clarity: The paper is well-written and clearly presents the research problem, methodology, results, and conclusions. The technical details are explained in a clear and concise manner, making it accessible to a broad audience. 4) Significance: It is questionable whether the paper makes a significant contribution to the field of conversational recommender systems. I am not sure if any of this will actually be used in practical systems because the algorithms are evaluated on non-practical scenarios/variants of datasets.
Weaknesses: 1) The paper does not provide any details on the convergence of the algorithm or the computational complexity of the proposed method compared against existing methods. 2) Even though it seems like a useful strategy to use intrinsic rewards, it is unclear how the intrinsic rewards affect the policy learning process. This is not well motivated using experimental results. 3) How does the user simulator affect the overall results? Can the authors justify the specific choice of user simulator? Why is the attribute set Pv treated as the oracle set? 4) It is also unclear how the hyperparameters of the model are tuned and how they affect the performance. 5) Not exactly a weakness, but this would be an interesting study too: The paper does not compare the proposed method with any baselines that use real user feedback, such as ratings, reviews, or explanations. It is possible that the user simulator may not capture the true user preferences or behavior, and that the intrinsic rewards may not reflect the real user satisfaction. It is also possible that the user feedback may provide additional information or guidance for improving the recommendation quality and user satisfaction. 6) The paper does not evaluate the proposed method on a diverse set of datasets, such as e-commerce, music, or movies. It is unclear how the proposed method would generalize to different domains, contexts, or user types. It is also unclear how the proposed method would handle practical challenges, such as data sparsity, noise, or diversity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How does the user simulator approach compare to other methods of evaluating CRS solutions, such as human evaluation or simulation with real user logs? 2) How does the proposed CRSIRL handle the exploration-exploitation trade-off in policy learning, especially when the intrinsic rewards are uncertain or noisy?
3) How does the multi-objective bi-level optimization framework deal with the potential conflicts or trade-offs between the two objectives of maximizing success rate and minimizing number of turns? Authors mention this in section 3 but it's not analyzed in the results section. 4) Can CRSIRL incorporate user feedback on the generated recommendations, such as ratings, reviews, or explanations? 5) Can CRSIRL handle the cold start problem, when there is no or little prior information about the user preferences or behavior? 6) Can CRSIRL cope with the dynamic and evolving nature of the user preferences and behavior, especially in long-term interactions? 7) How does the CRSIRL leverage the contextual information, such as user profile, location, time, or mood, to improve the recommendation quality and user satisfaction? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments on our work, and for the constructive questions that will help enrich our work in the future. --- **[Q1]** How does the user simulator approach compare to other methods of evaluating CRS solutions? **[A1]** Due to the interactive nature of conversational recommendation, training and evaluating CRS with real users is costly. We use a user simulator, inspired by [1], for efficient training and evaluation. Assuming users favor all attributes of the target item, we treat the attribute set $P_v$ as the oracle, ensuring consistent feedback. While crafting a more realistic simulator is compelling, our paper focuses on reward learning. For fair comparisons, we choose to align our evaluation with established baselines. On the other hand, using logged data to learn or evaluate an RL algorithm is known to be challenging, due to the issue of distribution shift. Though various solutions have been proposed, they have different limitations (e.g., the variance vs. bias trade-off, and computational complexity), which would add additional complexity to our study. Given we are the first to study the reward learning problem in CRS, we choose simulations to evaluate our proposed solution. [1] Sun, Yueming, and Yi Zhang. "Conversational recommender system." The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 2018. --- **[Q2]** How does the proposed CRSIRL handle the exploration-exploitation trade-off in policy learning? **[A2]** While intrinsic rewards can be noisy, our framework relies on noise-free extrinsic rewards in the outer loop for stable feedback. These extrinsic rewards offer a clear training signal, enabling CRSIRL to refine intrinsic rewards and guide the policy to more effective actions. As a result, despite the noisy intrinsic feedback, CRSIRL maintains a balance between exploration and exploitation during training.
Additionally, as suggested by [1], noisy feedback can enhance exploration, potentially benefiting early training stages. [1] Kannan, Sampath, et al. "A smoothed analysis of the greedy algorithm for the linear contextual bandit problem." Advances in Neural Information Processing Systems 31 (2018). --- **[Q3]** Potential conflicts or trade-offs between the two objectives **[A3]** Our multi-objective optimization framework adeptly navigates these trade-offs, ensuring convergence to a point on the Pareto frontier, as evidenced in [1]. CRSIRL does not merely establish an arbitrary trade-off between the two objectives via a manually-set hyper-parameter, but strives to optimize them concurrently (as explained in lines 221-234). CRSIRL realizes this by dynamically adjusting the weights, denoted as $\alpha$, for each objective throughout the training process. As Table 3 illustrates, while the multi-task learning variant can indeed benefit from the dual objectives, CRSIRL still outperforms it by establishing a more desirable trade-off between the two objectives. [1] Désidéri, Jean-Antoine. "Multiple-gradient descent algorithm (MGDA) for multiobjective optimization." Comptes Rendus Mathematique 350.5-6 (2012): 313-318. --- **[Q4]** Can CRSIRL incorporate user feedback on the generated recommendations? **[A4]** CRSIRL is capable of integrating various types of user feedback, such as ratings, reviews, or explanations, as signals of extrinsic reward, and they can be included as additional optimization objectives within our multi-objective framework. However, it is worth noting that incorporating user feedback would not fundamentally alter the focus of this paper, which is to investigate effective reward learning strategies in CRS. We leave how to effectively utilize other types of user feedback for intrinsic reward learning as important future work.
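As a concrete illustration of the MGDA-style dynamic weighting mentioned in [A3]: for exactly two objectives, the min-norm convex combination of the two gradients has a closed form. This is a generic sketch of the Désidéri/MGDA idea, not the authors' implementation:

```python
import numpy as np

def mgda_two_task_alpha(g1, g2):
    """Min-norm weighting for two task gradients: argmin over alpha in [0, 1]
    of ||alpha*g1 + (1-alpha)*g2||^2 (closed form for the two-task case)."""
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:          # identical gradients: any weighting is optimal
        return 0.5
    alpha = float(np.dot(g2 - g1, g2)) / denom
    return min(1.0, max(0.0, alpha))

# Two conflicting toy gradients; the combined direction makes progress on both.
g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha = mgda_two_task_alpha(g1, g2)
d = alpha * g1 + (1 - alpha) * g2   # common descent direction
assert np.dot(d, g1) > 0 and np.dot(d, g2) > 0
```

Recomputing `alpha` at every step is what lets the trade-off between the two objectives shift dynamically during training rather than being fixed by a hand-tuned hyper-parameter.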
--- **[Q5]** Can CRSIRL handle the cold start problem, when there is no or little prior information about the user preferences or behavior? **[A5]** CRS is expected to handle cold-start problems in recommendation by profiling a new user on the fly via eliciting her preferences about item attributes. CRSIRL is expected to handle the cold-start problem better by learning more effective rewards to guide CRS policy learning. --- **[Q6]** Can CRSIRL cope with the dynamic and evolving nature of the user preferences and behavior, especially in long-term interactions? **[A6]** The dynamic and evolving nature of user preferences actually signals a shift in the reward distribution, which poses a non-stationary learning environment. This is known as a significant challenge in reinforcement learning in general: not only the learned reward function but also the policy learning algorithm has to cope with it. Currently, the CRSIRL framework is not designed to handle such a non-stationary environment, particularly in the context of long-term interactions where user behavior and preferences may evolve considerably. The reviewer's comment is very well taken, and we acknowledge this as an area for potential enhancement of our framework. --- **[Q7]** How does CRSIRL leverage the contextual information to improve the recommendation quality and user satisfaction? **[A7]** Currently, we learn the intrinsic reward function to maximize the recommendation quality. However, it is possible to further improve user satisfaction by learning context-aware reward functions, i.e., by encoding contextual information into the reward function for a context-aware CRS. For example, different contexts induce different conversation strategies. It would be an interesting research direction for us in the future. --- **[Q8]** Hyper-parameter setting **[A8]** We provide the search ranges for hyper-parameters in Section 5.1 under "Training Details."
Meanwhile, an in-depth discussion regarding the impact of different $\lambda$ is included in Section 5.3. --- **[Q9]** Convergence analysis **[A9]** Please refer to the general response CQ1. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the detailed rebuttal. Many of my questions were answered. The impact of convergence analysis was not clear and authors should connect that to experiments too. In any case, I am improving my score to 6.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their thoughtful comments and constructive suggestions, which will help us strengthen our paper. We are encouraged to find that the reviewers appreciate the clear presentation (reviewers reFg, vRxN), the motivation of our study (reviewers reFg, Kjom, vRxN, 5ZDc), the novelty of our approach (reviewers reFg, Kjom, vRxN), and the solid experiment design and improvements (reviewers reFg, Kjom, vRxN, 5ZDc). In the following, we first provide the answer to the common question regarding the convergence analysis of our method, and then provide individual responses to each reviewer. --- **[CQ1]** Convergence analysis of CRSIRL (Reviewers reFg and Kjom) **[CA1]** We provide a proof sketch of the convergence rate as follows. Based on the results in [1], to prove the convergence of CRSIRL we only need to show that our multi-objective meta loss function is Lipschitz smooth (i.e., has a Lipschitz-continuous gradient). Formally, we prove the following lemma. *Lemma 1*. Given Lipschitz smooth loss functions $f$ and $g$, the function $\alpha f+(1-\alpha) g$ is also Lipschitz smooth for any $\alpha \in [0, 1]$. *Proof*. Let $f$ and $g$ be Lipschitz smooth with smoothness constants $L_f$ and $L_g$ respectively. This means: 1. $\|\nabla f(x)-\nabla f(y)\| \leq L_f \|x-y\|$, for all $(x, y)$, 2. $\|\nabla g(x)-\nabla g(y)\| \leq L_g \|x-y\|$, for all $(x, y)$. Consider the function $h(x)=\alpha f(x) + (1-\alpha)g(x)$, whose gradient is $\nabla h(x)=\alpha \nabla f(x) + (1-\alpha)\nabla g(x)$; we need to show that $\nabla h$ is Lipschitz continuous. $ \|\nabla h(x)-\nabla h(y)\| = \|\alpha (\nabla f(x)-\nabla f(y))+(1-\alpha)(\nabla g(x)-\nabla g(y))\|. $ Using the triangle inequality, we have $ \|\nabla h(x)-\nabla h(y)\| \leq \alpha \|\nabla f(x)-\nabla f(y)\|+(1-\alpha)\|\nabla g(x)-\nabla g(y)\|. $ By applying the two smoothness inequalities above, we get $ \|\nabla h(x)-\nabla h(y)\| \leq \alpha L_f \|x-y\|+(1-\alpha)L_g\|x-y\| = (\alpha L_f +(1-\alpha)L_g)\|x-y\|. $ Thus, the function $h$ is Lipschitz smooth with smoothness constant $L_h=\alpha L_f +(1-\alpha)L_g$.
*QED.* By plugging *Lemma 1* into *Theorem 1* and *Theorem 2* of [1], we are able to prove the convergence guarantee of CRSIRL. We will provide the complete proof in the final version of our paper. [1] Liu, Runze, et al. "Meta-reward-net: Implicitly differentiable reward learning for preference-based reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 22270-22284.
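As a quick numerical sanity check of Lemma 1 (our illustration, not part of the rebuttal): for one-dimensional quadratics the smoothness constants are known in closed form, and the convex combination attains exactly $\alpha L_f + (1-\alpha)L_g$.

```python
import numpy as np

# For f(x) = a*x**2 and g(x) = b*x**2, the gradients 2*a*x and 2*b*x are
# Lipschitz with constants L_f = 2*a and L_g = 2*b. Lemma 1 predicts that
# h = alpha*f + (1-alpha)*g is smooth with L_h = alpha*L_f + (1-alpha)*L_g.
a, b, alpha = 3.0, 1.0, 0.25
grad_h = lambda x: alpha * 2 * a * x + (1 - alpha) * 2 * b * x
L_h = alpha * 2 * a + (1 - alpha) * 2 * b  # = 3.0 for these values

# Empirical Lipschitz ratios of grad_h over a grid never exceed L_h.
xs = np.linspace(-5.0, 5.0, 101)
ratios = [abs(grad_h(x) - grad_h(y)) / abs(x - y)
          for i, x in enumerate(xs) for y in xs[i + 1:]]
assert max(ratios) <= L_h + 1e-9
```

For these quadratics the gradient is linear, so every difference quotient equals `L_h` exactly, making the bound tight.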
NeurIPS_2023_submissions_huggingface
2,023
Are GATs Out of Balance?
Accept (poster)
Summary: The work derives a conservation law for the dynamics of the GAT gradient flow during training. The conservation law is used to explain why it is challenging to train deep GAT models, and in particular why a large portion of parameters do not change much throughout training. A new initialization scheme is introduced to mitigate these issues. Strengths: Training GATs is notoriously problematic and this is especially the case with many layers. The work takes an interesting angle of trying to further explain why this is the case by studying conservation laws induced by the gradient flow during training. The observation that weight gradients must be small in order to satisfy such a conservation law is very interesting and practically useful. The balancing algorithm proposed is also practically useful and seems to perform well especially with deep GATs. Weaknesses: While the work focuses on GAT, it would be interesting to have a statement on more general classes of MPNNs. There are some faint connections in the paper to GCN for instance, but it would be interesting to have a more general result as well. Experimentally it might also be interesting to see if such an initialisation can be adapted to different types of MPNNs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would the proposed initialisation technique work well for other types of MPNNs (with appropriate modifications)? Would you have experiments/theory to back this up? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address the fact that the derived theory defines a conservation law that does not explain detailed dynamics but still remains relatively coarse.
As such it is not able to explain some phenomena, such as why attention parameters change most in the first layer. I still believe that the work is a good step in helping explain phenomena in this direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer hDWR for their interesting comments on our work and the constructive feedback. In response to their question, we discuss the adaptability of the balanced initialization scheme to other MPGNNs, which we would be happy to add to the manuscript. While in principle the argument of imbalanced-ness is applicable to nearly all types of MPGNNs, initializing the model in a balanced manner requires deriving conservation laws inherent to the specific architecture. Nevertheless, we outline a number of cases in which our derived conservation law (and consequent balanced initialization) is directly applicable. Firstly, we note that GAT is a generalization of the GCN, and therefore all theorems are also applicable to GCNs (where the attention parameters are simply 0). We provide additional experiments in the following table, comparing the performance of GCN models of varying depths with imbalanced and balanced initializations trained on Cora using the SGD optimizer. We use the same hyperparameters that we reported in the paper. 
Table 1: **GCN** trained on **Cora** using SGD:

| Depth | Xavier | Xavier+Bal | LLortho | LLortho+Bal |
| :----: | :----: | :----: | :----: | :----: |
| 2 | 77.8 ± 0.9 | 80.5 ± 0.5 | 78.0 ± 0.3 | **80.9 ± 0.4** |
| 5 | 73.2 ± 3.4 | 78.3 ± 0.8 | **80.3 ± 0.8** | 79.6 ± 1.0 |
| 10 | 24.1 ± 4.5 | 77.6 ± 2.0 | 80.0 ± 1.1 | **80.0 ± 0.9** |
| 20 | 14.4 ± 11.2 | 62.8 ± 3.6 | 78.7 ± 0.5 | **78.8 ± 1.5** |
| 40 | 13.4 ± 0.9 | 65.9 ± 7.3 | 28.3 ± 16.2 | **77.1 ± 0.9** |
| 64 | 9.8 ± 5.4 | 33.0 ± 13.4 | 27.3 ± 12.7 | **76.7 ± 1.3** |
| 80 | 12.4 ± 19.3 | 33.8 ± 12.9 | 38.9 ± 21.3 | **77.1 ± 1.3** |

Table 2: **GCN** trained on **Citeseer** using SGD:

| Depth | Xavier | Xavier+Bal | LLortho | LLortho+Bal |
| :----: | :----: | :----: | :----: | :----: |
| 2 | 66.6 ± 20.0 | 71.3 ± 1.8 | 66.0 ± 3.2 | **72.3 ± 0.9** |
| 5 | 60.9 ± 12.3 | 66.9 ± 15.0 | 69.0 ± 6.4 | **70.1 ± 1.8** |
| 10 | 23.8 ± 36.8 | 66.0 ± 5.9 | **70.6 ± 0.9** | 69.8 ± 10.9 |
| 20 | 16.4 ± 18.2 | 47.9 ± 10.0 | 67.0 ± 8.6 | **69.7 ± 4.5** |
| 40 | 13.9 ± 56.8 | 37.3 ± 92.8 | 44.8 ± 6.8 | **64.7 ± 13.6** |
| 64 | 13.8 ± 41.4 | 29.5 ± 15.0 | 37.3 ± 79.6 | **66.3 ± 0.5** |
| 80 | 12.4 ± 42.7 | 25.8 ± 3.6 | 30.1 ± 21.8 | **64.1 ± 3.2** |

Secondly, we recall that we cater to several architectural variations within the GAT architecture itself. For example, both GATv1 [1] and GATv2 [2] have the same conservation law, and we derive more general versions of the law for two GAT variants: i) unshared weights for feature transformation of source and target nodes, and ii) multiple attention heads.
Furthermore, residual skip connections between layers are also supported in a balanced initialization, provided their parameters are initialized to zero. Lastly, our derived conservation law also holds for a more recent architectural variation of GAT, the **$\omega$GAT** [3], which also seems to benefit from a balanced initialization. We conduct additional experiments to verify this. The results follow a similar pattern as for GATs (see **Figure 1(b)** of the attached PDF). A balanced orthogonal initialization with looks-linear structure (LLortho+Bal) of $\omega$GAT performs the best, particularly by a wide margin at much higher depths (64 and 80 layers). In this work, we focus our exposition on GATs and take the first step in modeling the training dynamics of attention-based models for graph learning. An intriguing direction for future work is to derive modifications of the conservation law for other attention-based models such as SuperGAT [4] and Transformers, which both utilize the dot-product self-attention mechanism. [1] Petar Veličković et al. Graph Attention Networks. In International Conference on Learning Representations, 2018. [2] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In International Conference on Learning Representations, 2022. [3] Moshe Eliasof, Lars Ruthotto, and Eran Treister. Improving Graph Neural Networks with Learnable Propagation Operators. In International Conference on Machine Learning, 2023. [4] Dongkwan Kim and Alice Oh. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the response and the additional experiments with GCN. The smoothing results with GCN indeed seem quite promising. This is an interesting approach to a useful problem and I recommend accepting the work.
Summary: This paper focuses on the problem of parameters struggling to train in the well-known GNN structure GAT. The authors propose to alleviate the issue via parameter norm balancedness. A conservation law is derived for GATs with positively homogeneous activation functions as the theoretical support for the balancedness-based initialization method. The authors also conduct extensive experiments to prove its effectiveness and fast convergence property. Strengths: - The problem is well identified. - The proposed theory is solid and serves the problem well. - The proposed initialization method is simple yet effective in improving accuracy and speeding up convergence. Weaknesses: - Some explanations skip details and may cause confusion. For example, the equations in lines 140-142. - Some formatting could be improved for better illustration, such as larger figure sizes and a better-fitted table style. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: GAT mainly exploits the attention mechanism. Can this initialization be adapted to any other methods that utilize the attention mechanism? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The author may address the limitation from the generalization of the proposed method beyond the GAT structure. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer d9X8 for their constructive feedback. In line with their suggestions, we will i) include the details of the equations used in lines 140-142, which arise from the definition of Xavier initialization, and ii) increase figure sizes and improve the table style for better illustration. The underlying principle of a balanced network initialization holds in general. However, adapting the balanced initialization to different methods entails modifying the conservation law derived for GATs to correspond to the specific architecture of the other method. For example, the proposed balanced initialization can be applied directly to a more recent variant of GAT, the **$\omega$GAT** [1], for which the same conservation law holds. We conduct additional experiments on $\omega$GAT to verify this. The results follow a similar pattern as for GATs (see **Figure 1(b)** of the attached PDF). A balanced orthogonal initialization with looks-linear structure (LLortho+Bal) of $\omega$GAT performs the best, particularly by a wide margin at much higher depths (64 and 80 layers). However, the derived conservation law and consequent balanced initialization cannot be directly applied to SuperGAT [2], which employs a different kind of self-attention, namely the dot-product attention similar to the Transformer architecture. An intriguing direction for future work is to derive modifications of the conservation law for such other attention-based models. As suggested by the reviewer, we will mention this limitation in the main paper. [1] Moshe Eliasof, Lars Ruthotto, and Eran Treister. Improving Graph Neural Networks with Learnable Propagation Operators. In International Conference on Machine Learning, 2023. [2] Dongkwan Kim and Alice Oh. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2021.
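To make concrete why such balancing is always available for positively homogeneous activations (the property underlying the conservation-law discussion above): rescaling one layer by $c > 0$ and the next by $1/c$ leaves a ReLU network's function unchanged, so layer norms can be equalized at initialization without changing the model. A minimal two-layer sketch of ours, illustrative only and not the paper's balancing algorithm:

```python
import numpy as np

# ReLU is positively homogeneous: relu(c*z) = c*relu(z) for c > 0, so scaling
# W1 by c and W2 by 1/c preserves f(x) = W2 @ relu(W1 @ x). Balancing picks c
# to equalize the two layers' Frobenius norms.
rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(8, 4))    # deliberately small first layer
W2 = 10.0 * rng.normal(size=(2, 8))   # deliberately large second layer
f = lambda x, A, B: B @ np.maximum(A @ x, 0.0)

# Choose c so both rescaled layers share the same Frobenius norm.
c = np.sqrt(np.linalg.norm(W2) / np.linalg.norm(W1))
W1b, W2b = c * W1, W2 / c

x = rng.normal(size=4)
assert np.allclose(f(x, W1, W2), f(x, W1b, W2b))             # same function
assert np.isclose(np.linalg.norm(W1b), np.linalg.norm(W2b))  # balanced norms
```

Extending this rescaling consistently across many layers and the attention parameters is where the architecture-specific conservation law comes in.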
Summary: The authors propose a theoretical analysis of the initialization of GATs and its impact on the performance of such networks, focusing on the performance-vs-depth aspect. The authors propose an initialization algorithm that starts from a random initialization and modifies the initial random weights to adhere to the findings of the theoretical analysis. The authors then show the impact of their proposed initialization on several node classification datasets. Specifically, they show that by initializing GATs with the proposed algorithm, deep GATs can be trained to achieve better performance than standard (Xavier-)initialized GATs. Strengths: The paper addresses a real issue with GATs - the degradation of performance when more layers are added. The theoretical analysis seems correct to me and is supported with experimental results as well as the inspection of actual training artifacts and details (e.g., the gradients of the GAT layers). Weaknesses: The paper could be slightly better written in terms of organization. I think that adding more paragraphs/subsections to better separate the parts of the paper can help to ease the reader. The paper lacks a few relevant citations that also consider graph neural networks as gradient flow (see [1][2][3]). However, their focus is not on the initialization of GATs, and therefore the paper here is novel on its own. While the experimental results are compelling and show the benefit of the proposed method, I think that the authors should also include comparisons with other methods. [1] GRAND: Graph Neural Diffusion [2] Understanding convolution on graphs via energies [3] Improving Graph Neural Networks with Learnable Propagation Operators Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can the authors add results with more layers? For instance, 64 or 80 layers? Does the method still work with very deep networks? 2.
Regarding the initialization of graph neural networks, it is discussed in [3][4] that the GNN weights are initialized with identity matrices. Can the authors comment on this point? Is it related to the proposed method? 3. Can the proposed initialization scheme be applied to other graph attention layers, such as SuperGAT [5]? [4] PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations [5] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer yMby for recognizing the novelty and relevance of our work and appreciate their suggestions, for which we will be taking the following actions: i) introduce more subsections and break down larger paragraphs to improve the readability and comprehension of the paper; ii) cite the proposed relevant literature [1-5]; iii) add experiments in line with the suggestions. **Comparison with other methods:** We would like to highlight that the main contribution of our work is the derivation of a conservation law for GATs and the insight into how this law can explain trainability issues for standard initialization schemes. In line with this reasoning, it is not feasible to directly compare with [1,2,4], as they rely on different GNN architectures with potentially different inherent conservation laws. Since [3] proposes an architectural variant of GAT, the $\omega$GAT, our proposed balanced initialization would apply to this setting and could potentially improve the trainability of deep $\omega$GAT. Since the code for this very recent work has not been shared, we cannot provide extensive experiments in this short time frame. However, we do implement $\omega$GAT in our setup and the results follow a similar pattern to GAT (See **Figure 1(b)** of the attached PDF). A balanced orthogonal initialization with looks linear structure (LLortho+Bal) of $\omega$GAT performs the best, particularly by a wide margin at much higher depths (64 and 80 layers). Yet, a feasible comparison can be carried out with [6] which proposes a Lipschitz normalization technique aimed to improve the performance of deeper GATs in particular. 
We use the code provided by [6] to reproduce their experiments on Cora, Citeseer, and Pubmed for 2-, 5-, 10-, 20-, and 40-layer GATs with Lipschitz normalization and compare them with our results of LLortho+Bal initialization (Bal$_O$ Init.), reported in the main paper, as follows:

| Layers | Cora | | Citeseer | | Pubmed | |
| :-----: | :-----------: | :--------------: | :--------: | :--------------: | :--------: | :--------------: |
| | Lip. Norm. | Bal$_O$ Init. | Lip. Norm. | Bal$_O$ Init. | Lip. Norm. | Bal$_O$ Init. |
| 2 | **82.1** | 79.5 | 65.4 | **67.7** | 74.8 | **76.0** |
| 5 | 77.1 | **80.2** | 63.0 | **67.7** | 73.7 | **75.7** |
| 10 | 78.0 | **79.6** | 43.6 | **67.4** | 52.8 | **76.9** |
| 20 | 72.2 | **77.3** | 18.2 | **66.3** | 23.3 | **77.3** |
| 40 | 12.9 | **75.9** | 18.1 | **63.2** | 36.6 | **77.5** |

As the table shows, the balanced initialization yields much higher accuracy than applying Lipschitz normalization to a standard-initialized network, and the gap widens as the depth of the network increases. Note that Lipschitz normalization has been shown to outperform other previous normalization techniques for GNNs such as pair-norm and layer-norm. **Higher depth:** Additional results for 64- and 80-layer networks are presented in **Figure 1** in the additionally submitted PDF. The improved performance of models with balanced initialization over models with standard initialization holds even more strongly for very deep networks. **Initialization with identity matrix:** Regarding the initialization of GNNs with identity matrices, the identity matrix can be considered a special case of an orthogonal initialization. Given that the hidden layers are all of the same dimensions, a network initialized with identity weight matrices for the intermediate layers and zero attention parameters would be balanced for all the hidden layers (but not with respect to the first and last layers).
However, for ReLUs, a looks-linear structure where the submatrix is initialized as an identity matrix would be more effective. Note that identity initializations have also been explored in the context of standard feedforward neural networks. While they tend to work in practice, the lack of induced feature diversity can be problematic from a theoretical point of view (see [7] for a counterexample). We conduct experiments using identity matrices to initialize the hidden layers and Xavier initialization for the first (input) and last (output) layers. We compare this with a balanced version by adjusting the weights of the first and last layer to have norm $1$ (as identity matrices have row-wise and column-wise norm $1$). However, potentially due to the reasons regarding feature diversity and ReLU activation discussed above, the balanced looks-linear random orthogonal initialization (LLortho+Bal.) outperforms initialization with identity matrices (see **Figure 1** in the additional PDF). In most cases, the balanced versions outperform the imbalanced versions of the base initialization. **SuperGATs**: The proposed initialization cannot be directly applied to the self-attention layer used in the SuperGAT architecture [5]. SuperGAT combines the attention layer of the original GAT architecture with dot-product self-attention similar to the Transformer architecture. This requires a modification of the derived conservation law, which would be an intriguing direction for future investigation. [6] Dasoulas et al. Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks. In International Conference on Machine Learning, 2021. [7] Bartlett et al. Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks. In International Conference on Machine Learning, 2018. [8] Chen et al.
Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. --- Rebuttal Comment 1.1: Title: Thank you for the detailed rebuttal Comment: I would like to thank the authors for the detailed rebuttal. Your answers addressed all my questions and therefore I am happy to increase my score.
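To make the looks-linear idea from the rebuttal above concrete: with mirrored positive/negative sub-blocks, a ReLU network computes an exactly linear map at initialization, since $\mathrm{ReLU}(z) - \mathrm{ReLU}(-z) = z$. The sketch below is our own minimal NumPy illustration (the sub-blocks `B1`, `B`, `B2` stand in for the orthogonal or identity submatrices; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Hypothetical sub-blocks; in the paper's setting these would be orthogonal
# (or identity) matrices.
B1 = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
B2 = rng.standard_normal((d, d))

relu = lambda z: np.maximum(z, 0.0)

# Looks-linear structure: mirrored positive/negative pathways.
W_in = np.vstack([B1, -B1])            # (2d, d)  first layer
W_mid = np.block([[B, -B], [-B, B]])   # (2d, 2d) hidden layer
W_out = np.hstack([B2, -B2])           # (d, 2d)  output layer

x = rng.standard_normal(d)
y = W_out @ relu(W_mid @ relu(W_in @ x))

# Despite two ReLUs, the network computes the linear map B2 @ B @ B1 at init.
assert np.allclose(y, B2 @ B @ B1 @ x)
```

The mirrored pathways carry the positive and negative parts of each pre-activation separately, so the composed map stays linear at initialization, which is what enables signal propagation through very deep architectures.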
Summary: This work proves a conservation law for GAT architectures, which is similar to conservation laws shown for fully connected networks. The conservation law shows a simple connection between the norms of weights of two consecutive layers. Using this law, an intuitive explanation is given for the trainability issues of deep GAT architectures. To mitigate this issue, a balanced initialization is suggested which shows improvements in generalization and training speed of deep GATs. Strengths: 1. Novel conservation law for GAT architectures. 2. The paper and technical details are well written. Weaknesses: There are no major weaknesses. 1. The motivation for using the balanced initialization and why it should improve only for deep networks is not so clear. 2. The experimental results are not complete. See Questions for more details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Does LLOrtho without balancing work well for deep networks? It is not clear if the cause for improvement is only balancing or also the initialization with orthogonal vectors. 2. The trainability issue is explained intuitively in pages 4-5: (a) What is the reason that the trainability issue is amplified with depth? This is not explained. (b) Why does this trainability issue occur only for GATs and not for fully connected networks? It seems that the same explanation holds for FC nets as well. (c) Suggestion: It would be helpful to explain why the balanced initialization solves the trainability issue (it is not explained). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed. Maybe it would be helpful to add a section on this. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer cXS5 for their insightful review and constructive questions, which are answered below. We would be happy to include the additional experiments and clarifications in the main paper as well. **1. Effect of orthogonal initialization:** We include results to compare the LLortho and LLortho+Bal initializations (see **Figure 1(a)** in the additional PDF). We observe that balancing the LLortho initialization improves the generalization ability of the network in most cases and speeds up training, particularly for deeper networks. However, note that the orthogonal initialization itself also has a positive effect on trainability, in particular for deep networks. This follows from the comparison of Xavier+Bal and LLOrtho+Bal (or LLOrtho). Contributing to this fact is the mostly balanced nature of the LLortho initialization, i.e., given hidden layers of equal dimensions, the network is balanced in all layers except the first and last, which allows the model to train, as opposed to the more severely imbalanced standard Xavier initialization. This further reinforces the key takeaway of our work that norm imbalance at initialization hampers the trainability of GATs. In addition, the LLortho+Bal initialization also speeds up training over the LLortho initialization even in cases in which the generalization performance of the model is on par for both initializations. **2a) Amplification by depth:** The reason the trainability issue is amplified with depth can also be explained by our main theorem.
Recursive substitution of Theorem 2.2 on the first term of the left side of Equation 8 (Page 4) results in a telescoping series yielding: $$ \sum_{j=1}^{n_{1}} \sum_{m=1}^{n_{0}} {W_{jm}^{(1)}}^2 \frac{\nabla_{W_{jm}^{(1)}} \mathcal{L}}{W_{jm}^{(1)}} - \sum_{l=1}^{L-1} \sum_{o=1}^{n_{l}} {a_{o}^{(l)}}^2 \frac{\nabla_{a_{o}^{(l)}} \mathcal{L}}{a_{o}^{(l)}} = \sum_{i=1}^{n_{L-1}} \sum_{k=1}^{n_L} {W_{ki}^{(L)}}^2 \frac{\nabla_{W_{ki}^{(L)}} \mathcal{L}}{W_{ki}^{(L)}} $$ Note that $n_0$ is the input feature dimension. Generally, $2n_1 < n_0$ and thus $\mathbb{E} \left\lVert W^1[j:] \right\rVert ^2 = 2n_1/(n_1 + n_0) < 1$. Since the weights in the first layer and the gradients propagated to the first layer are both small, the gradients of the attention parameters of the intermediate hidden layers must also be very small in order to balance the equation. As the scalar product of attention parameters and gradients summed over the first and all intermediate layers must be less than the scalar product of weights and gradients of the first layer, the attention parameters and gradients in each intermediate layer do not have much room to change, which hampers trainability. Evidently, the problem worsens with depth, where the same value must now be distributed over the parameters and gradients of more layers. **2b) Connection to fully-connected networks:** While it is true that a similar explanation to the one given in the paper also holds for fully connected networks (FCs), where the attention parameters are missing in Equation 8, the problem is more severe for GATs for two reasons: i) Firstly, as explained above in (a), the summation over all intermediate layers of attention parameters is not a concern in FCs. ii) Secondly, FCs can achieve perfect dynamical isometry by employing an orthogonal looks-linear structure of feature weights, which enables signals to pass through very deep architectures [2].
However, due to the peculiarity of neighborhood aggregation in GATs (or GNNs), the same technique does not induce perfect dynamical isometry. Exploring how dynamical isometry can be achieved or approximated in general GNNs is an interesting follow-up direction of this work. **2c) Effect of balanced initialization:** On pages 4 and 5, we use Equation 8 to explain how norm imbalance is a cause of the trainability issue. We revisit this to analogously see how the balanced initialization mitigates the problem. A balanced initialization implies that the weight norms of the last and second-to-last layers are equal on both sides of the equation (as the attention parameters $a$ are set to 0). This allows larger relative gradients in the second-to-last layer (the left side of the equation) than when the weights on the left were much larger than the weights on the right, which can enhance gradient flow to earlier layers in the network. In other words, the gradients on both sides of the equation have equal room to drive parameter change. **3) Limitations:** We mention inline in the paper (line 159) that our theory defines a coarse-level conservation law, and thus cannot completely explain fine-grained training dynamics, such as why the attention parameters change most in the first layer. Secondly, the conservation law applies only to the self-attention defined in the original GAT and GATv2 models and their architectural variations such as $\omega$GAT [3]. Note that the conservation law also holds for the non-attentive GCNs, which are a special case of GATs (where the attention parameters $a$ are simply zero). Modeling different kinds of self-attention, such as the dot-product self-attention in [4], entails a modification of the conservation law, which is left for future work. Following the reviewer's suggestion, we will dedicate an independent section to discussing these limitations. [1] Brody et al. How attentive are graph attention networks?
In International Conference on Learning Representations, 2022. [2] Burkholz et al. Initialization of ReLUs for dynamical isometry. In Advances in Neural Information Processing Systems, volume 32, 2019. [3] Eliasof et al. Improving Graph Neural Networks with Learnable Propagation Operators. In International Conference on Machine Learning, 2023. [4] Kim et al. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2021.
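As a self-contained illustration of the rebalancing principle discussed in 2c) above (a simplified two-layer sketch, not the exact GAT construction from the paper): because ReLU is positively homogeneous, two consecutive layers can be rescaled to equal Frobenius norm without changing the function the network computes.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))  # first layer
W2 = rng.standard_normal((3, 8))  # second layer
relu = lambda z: np.maximum(z, 0.0)

x = rng.standard_normal(4)
y_before = W2 @ relu(W1 @ x)

# Rescale so the Frobenius norms of the two layers match.
# ReLU is positively homogeneous: relu(c*z) = c*relu(z) for c > 0,
# so scaling W1 by c and W2 by 1/c leaves the function unchanged.
c = np.sqrt(np.linalg.norm(W2) / np.linalg.norm(W1))
W1b, W2b = c * W1, W2 / c

assert np.isclose(np.linalg.norm(W1b), np.linalg.norm(W2b))  # balanced
assert np.allclose(W2b @ relu(W1b @ x), y_before)            # same function
```

This is why balancing is purely an initialization choice: it changes where gradient descent starts, not the function the network represents at that point.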
Rebuttal 1: Rebuttal: We thank the reviewers for acknowledging the importance and relevance of our work and appreciate their constructive feedback, insightful questions, and encouraging comments. We take several actions to improve the paper in line with the reviewers' suggestions, summarized as follows: i) We provide a more detailed explanation of some key ideas in the paper such as why the increasing depth of the network amplifies the trainability issue, how balancing the network mitigates the problem, and the effects of an orthogonal initialization. ii) We conduct additional experiments as requested by the reviewers. Firstly, we include comparisons between imbalanced and balanced versions of Looks-Linear-Orthogonal, Identity, and Looks-Linear-Identity initializations. We also show the benefits of a balanced initialization on two other GNN models: the standard GCN and an architectural variation of GAT, the $\omega$GAT. We present these results in an additionally attached PDF and inline within individual responses to reviewers. iii) We discuss the limitations of our work in more detail such as the need for modifications of the derived conservation law for other attention mechanisms such as self-attention in SuperGAT and Transformers. We are grateful to the reviewers for their suggestions that enhance the presentation of our work. We will update our manuscript accordingly and would be happy to engage in discussions with the reviewers and answer any further questions. Pdf: /pdf/a45279feb8b39b77999868cdd0b27ce4833164ca.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CoPriv: Network/Protocol Co-Optimization for Communication-Efficient Private Inference
Accept (poster)
Summary: This paper presents a framework that simultaneously optimizes the 2PC inference protocol and the neural network architecture to achieve a significant reduction in communication. The framework outperforms state-of-the-art (SOTA) approaches by achieving communication reduction. Strengths: 1. The paper is well-written and easy to understand 2. The paper highlights the current scenario where nonlinearity no longer dominates the communication overhead. Weaknesses: 1. It would be beneficial to include more ablation experiments to demonstrate the individual contributions of the ReLU pruning and re-parameterization approaches. 2. Providing the percentage or number of multiplication reduction resulting from the Winograd algorithm would provide additional insights into the overall communication reduction. 3. Please ensure that the legends are appropriately positioned within the figures, specifically those that cover certain data points. 4. Empirically comparing the proposed ReLU pruning method with prior methods would provide a deeper understanding of its effectiveness. 5. The most recent work on the ReLU reduction is as follows, the author could consider including it in the introduction: "S. Kundu, et al., Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference, CVPRW 2023." Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weaknesses section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer P7t8 for your thoughtful feedback! --- **Q1:** Include more ablation experiments to demonstrate the individual contributions of ReLU pruning and re-parameterization. **A1:** Thanks for your suggestion! In Section 5.5, we perform an ablation study by adding our proposed techniques step by step. We also add more ablation experiments with 60% and 30% of ReLUs remaining, shown in the tables below. For 30% ReLU, CoPriv achieves 1.4$\times$ online communication reduction after pruning, and 3.2$\times$ and 3.6$\times$ online/total communication reduction after re-parameterization, compared with the baseline MobileNetV2. All of these results indicate that our proposed optimizations are indispensable for improving communication efficiency.

Model (60% ReLU) | Online Comm. (GB) | Total Comm. (GB)
------ | ------ | ------
Baseline MobileNetV2 | 0.82 | 8.00
+Pruning | 0.64 | 7.81
+Re-parameterization | 0.43 | 5.62

Model (30% ReLU) | Online Comm. (GB) | Total Comm. (GB)
------ | ------ | ------
Baseline MobileNetV2 | 0.82 | 8.00
+Pruning | 0.58 | 7.76
+Re-parameterization | 0.26 | 2.21

--- **Q2:** Providing the percentage or number of multiplication reduction resulting from the Winograd algorithm would provide additional insights into the overall communication reduction. **A2:** For ResNet-18 and 32 with regular convolutions, the Winograd algorithm reduces the number of multiplications by 2.25$\times$ with the $F(2\times2, 3\times3)$ transformation, both theoretically and empirically (shown in our micro-benchmark in Section 5.2), resulting in $\sim 2.1\times$ communication reduction. For our proposed CoPriv, we compute the number of multiplications of the original MobileNetV2 and of CoPriv with 40% of ReLUs remaining, shown in **Figure 1 in the rebuttal PDF**. From this, we observe that the number of multiplications is significantly reduced to 68% with our Winograd algorithm, resulting in lower communication.
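The $2.25\times$ figure quoted above follows from the standard Winograd operation count: $F(m\times m, r\times r)$ needs $(m+r-1)^2$ elementwise multiplications per output tile, versus $m^2 r^2$ for direct convolution. A quick sketch of the arithmetic (our own illustration, not the CoPriv protocol code):

```python
def direct_mults(m: int, r: int) -> int:
    # Direct convolution: m*m outputs per tile, each needing r*r multiplications.
    return (m * r) ** 2

def winograd_mults(m: int, r: int) -> int:
    # F(m x m, r x r) needs (m + r - 1)^2 elementwise multiplications per tile.
    return (m + r - 1) ** 2

m, r = 2, 3  # F(2x2, 3x3), the transformation used in the paper
assert direct_mults(m, r) == 36
assert winograd_mults(m, r) == 16
assert direct_mults(m, r) / winograd_mults(m, r) == 2.25
```

Since the OT-based protocol's communication scales with the number of secret multiplications, this per-tile reduction translates almost directly into the ~2.1× communication reduction reported above (the small gap comes from transformation overheads).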
--- **Q3:** Ensure that the legends are appropriately positioned within the figures. **A3:** Thanks for your kind suggestion on our figures! We will carefully improve our writing and figures in the revised version. --- **Q4:** Empirically comparing the proposed ReLU pruning method with prior methods would provide a deeper understanding of its effectiveness. **A4:** The key insight of our ReLU pruning is 1) to directly use communication to guide the pruning instead of proxy metrics, e.g., ReLU count, and 2) to enforce two ReLUs within the same block to share $\alpha$ so that the two ReLUs can be removed simultaneously to enable re-parameterization and Winograd-based optimization. In contrast, SENet/DeepReDuce/SNL just focus on reducing the ReLU count and thus can barely reduce the total communication. So, our focus is mainly on the optimization pattern rather than the pruning algorithm itself, and note that other pruning methods can easily be plugged into our proposed framework. In this paper, we have compared CoPriv with ReLU-optimized methods including SNL, SENet, and DeepReDuce in Sections 5.3 and 5.4. Among them, SNL and SENet propose NAS (SENet further uses pruning sensitivity to analyze ReLU importance) to prune ReLUs, while DeepReDuce manually decides which ReLUs to remove. One big difference between our pruning method and prior methods is $L_{comm}$, which means ReLUs in different layers are not equivalent, and which makes our method communication-aware rather than ReLU count-aware. We discover that $L_{comm}$ helps us better trade off efficiency and accuracy. To show the importance of introducing $L_{comm}$, we compare our pruning method w/ $L_{comm}$ and the pruning methods of SNL/SENet w/o $L_{comm}$ in **Figure 2 in the rebuttal PDF**.
As we can see, $L_{comm}$ helps to focus the pruning on the later inverted residual blocks, which incur more communication overhead, and our method effectively penalizes the importance of the costly blocks (e.g., blocks #16 and #18), achieving significantly lower communication. By adjusting $L_{comm}$, CoPriv can better trade off efficiency and accuracy. --- **Q5:** The author could consider including the most recent work (CVPRW 2023) on ReLU reduction in the introduction. **A5:** Thanks for pointing out the valuable work. We will include this paper in our revised version. We also make the following comparison: 1) Similarity: both papers reduce the ReLUs and the network depth. 2) Motivations are different: the CVPRW paper still regards ReLU as the main latency bottleneck and removes convolution layers to reduce computation. In contrast, we fuse neighboring convolution layers to reduce truncations and better leverage our Winograd-based optimization for communication reduction. The difference in motivation leads to different criteria when selecting convolutions to remove. The CVPRW paper selects convolutions based on ReLU sensitivity, while we consider both accuracy and communication cost. 3) Methods are different: the CVPRW paper first determines which convolutions to remove and then uses the gated branching mechanism to train new convolutions. In contrast, we simultaneously train the architecture parameters $\alpha$ with the model weights, and the re-parameterization is conducted **post training**, enabling us to better leverage the benefits of over-parameterization, as shown in the RepVGG paper (RepVGG: Making VGG-style ConvNets Great Again). As shown in the table below, our method achieves both better accuracy and lower communication compared to the CVPRW paper.

Method | Top-1 Acc. (%) | Online Comm. (GB) | Total Comm. (GB)
------ | ------ | ------ | ------
CVPRW 2023 | 69.10 | 0.81 | 46.5
CoPriv (ours) | 70.58 | 0.43 | 5.14
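As a simplified illustration of why fusing neighboring convolution layers post training is exact once the intermediate ReLU is pruned (we reduce the convolutions to $1\times1$, i.e., plain channel-mixing matrix products, for brevity; the actual method merges the full $1\times1$-depthwise-$1\times1$ inverted residual block):

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_mid, C_out, T = 4, 16, 8, 10   # hypothetical channel/tile sizes
W1 = rng.standard_normal((C_mid, C_in))   # expansion 1x1 conv
W2 = rng.standard_normal((C_out, C_mid))  # projection 1x1 conv

X = rng.standard_normal((C_in, T))  # activations at T spatial positions

# With the intermediate ReLU pruned, the two convolutions compose linearly.
Y_seq = W2 @ (W1 @ X)

# Post-training re-parameterization: fold both into a single 1x1 conv.
W_fused = W2 @ W1
assert np.allclose(W_fused @ X, Y_seq)
```

Because the fusion is an exact algebraic identity, it incurs no accuracy loss, while the merged layer needs fewer truncations and secret multiplications in the 2PC protocol.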
Summary: The paper presents optimizations to secure two-party computation of convolutional network inference. There are optimizations for both linear and non-linear layers, resulting in an overall single-digit factor improvement. Strengths: The optimizations look interesting and are underlined well with benchmarks. I particularly appreciate the trade-off visualization in Figure 8. Weaknesses: Table 1 doesn't make sense to me. I don't think there is merit in pointing out that prior work hasn't optimized certain aspects because it might be that the work is efficient without extra effort. If anything, the table should contain numerical speed-ups. Using the acronym ASS for arithmetic secret sharing might put off readers as it's identical to a vulgar word. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: n/a Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer AJR8 for your thoughtful feedback! --- **Q1:** Table 1 should contain numerical speed-ups. **A1**: We thank the reviewer for the valuable feedback. As shown in Figure 1 in the paper, operations like ReLU, truncation, and convolution are major contributors to the online or total communication. To reduce their communication, we hope to emphasize the importance of both protocol and network optimization, in which all the components of the neural network should be fully considered; we also compare our CoPriv with existing methods. Hence, we have a qualitative comparison in Table 1 and leave the quantitative comparison to the experimental results. We do agree a simple qualitative comparison may not be very useful. We augment the table in **Table 2 in the rebuttal PDF** following your suggestion to include numerical speed-ups. We will think of better ways to make the comparisons as well. --- **Q2:** Using the acronym ASS for arithmetic secret sharing might put off readers as it's identical to a vulgar word. **A2:** Thanks for your advice! We agree with your opinion, and we will consider a more appropriate acronym for arithmetic secret sharing, like SS or ArSS, in our revised version. --- Rebuttal Comment 1.1: Comment: Q1: I'm not arguing against the benefit of optimization, I'm just saying that I cannot think of an objective definition of what constitutes optimized or not. The improvement figures are much appreciated, but I still don't see the point of having ticks for optimization. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable suggestion and comment! We admit that simply using ticks for optimization is not appropriate. Therefore, we modify this table as shown below to include more details and make the comparison clearer. The table below compares our CoPriv with prior works in terms of the optimized algorithms as well as the techniques used.
| Method | Protocol Opt | Network Opt: Conv | Network Opt: Trunc | Network Opt: ReLU |
| ---- | ---- | ---- | ---- | ---- |
| [33, 32, 30, 6, 29] | ReLU, Trunc, Conv | - | - | - |
| [25, 5, 16, 31, 28, 4] | - | - | - | ReLU count-aware/sensitivity-aware NAS |
| [37, 20] | Conv (Online Comm. to Offline Comm.) | - | - | ReLU count-aware NAS |
| [27] | - | Channel Reduction | Channel Reduction | Channel Reduction |
| CoPriv (ours) | Conv (Winograd-based Protocol) | Re-parameterization | Re-parameterization | Communication-aware NAS |

The descriptions and comparisons of the mentioned works are included in our Related Works in Appendix A.
Summary: The paper introduces CoPriv, a framework that optimizes the 2-party computation (2PC) inference protocol and the deep neural network (DNN) architecture to reduce communication overhead. CoPriv features a new 2PC protocol for convolution based on Winograd transformation and develops DNN-aware optimization to reduce inference communication. Strengths: The authors highlight a significant point that pruning ReLU may no longer be the most effective method for reducing computational and communication costs in private inference. This is due to the increasing prominence of linear and truncation operations. This insight is of considerable importance to the community. Weaknesses: 1. Figure 1 could benefit from more specific details. The left figure should include specific numbers for each portion and the amount of communication cost reduction achieved by each technique. It's also unclear which private inference method is used in the left figure. 2. The paper does not clearly explain why the DNN-aware adaptive convolution protocol can reduce communication costs. As a major contribution, it would be beneficial if the authors could provide a detailed explanation of how the selection of protocol initializer impacts communication costs and the criteria for selecting the initializer. 3. The novelty of the proposed ReLU pruning method is not clear unless the authors can explain how L_{comm} affects the training results. It would be valuable to compare the proposed ReLU pruning method with DeepReduce/SNL/SENet in terms of which ReLUs remain in the network and final accuracy. 4. It would be beneficial to differentiate the proposed re-parameterization method from the following work: "Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference" (https://openaccess.thecvf.com/content/CVPR2023W/ECV/papers/Kundu_Making_Models_Shallow_Again_Jointly_Learning_To_Reduce_Non-Linearity_and_CVPRW_2023_paper.pdf). 5. 
From Figure 9, it appears that the authors do not apply ReLU pruning and re-parameterization to each block. The criteria for deciding which blocks are suitable for ReLU pruning and re-parameterization are not clear. 6. In Table 4, the accuracy remains the same for MobileNetV2 with and without pruning+re-parameterization. Since pruning and re-parameterization typically lead to accuracy degradation, could the authors provide an explanation for these results? 7. Given that Cheetah outperforms CrypTFlow2, it would be more valuable and informative to compare the proposed protocol optimization method with Cheetah rather than CrypTFlow2. This comparison could provide a more accurate assessment of the proposed method's performance relative to the current state-of-the-art. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: While the authors acknowledge the limitation of Winograd convolution, stating that it can only be applied to 3x3 depth-wise convolution, they do not discuss the limitations of their entire work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer vLvc for your thoughtful feedback! --- **Q1:** Figure 1 could benefit from more details. **A1:** Thanks for your advice! We will improve this figure carefully to include more details. For ReLU, [22] represents Gazelle which uses garbled circuit (GC), [33] represents CrypTFlow2 which uses IKNP OT, and [19] represents Cheetah which uses VOLE OT. We get these numbers directly from each paper. --- **Q2:** Not clearly explain the DNN-aware adaptive convolution protocol. **A2:** As introduced in Section 4.1, in the Winograd-based convolution protocol, with tile aggregation, the server and client need to jointly run the OT-based matrix multiplication protocol for $(m+r-1)^2$ times. For each OT-based matrix multiplication, the server and the client hold the weight and activation of shape $(K, C)$ and $(C, T)$, respectively, where $K$, $C$, and $T$ denote the number of output channels, input channels, and # tiles, respectively, and are impacted by the DNN architecture. We observe the cost of the OT-based matrix multiplication depends on the OT initializer. More specifically, when the server initializes the OT, the round of communication and the communication of each round are $O(CK)$ and $O(\lambda + T)$, respectively (the complexity is derived based on CrypTFlow2). In contrast, when the client initializes the OT, the round and communication of each round become $O(CT)$ and $O(\lambda + K)$, respectively. As $K$, $C$, and $T$ are known before inference, our DNN-aware adaptive protocol will select the optimal OT initializer for each network and each layer to minimize the communication cost. --- **Q3:** How $L_{comm}$ affects training. 
**A3:** The key insight of our ReLU pruning is 1) to directly use communication to guide the pruning instead of proxy metrics, e.g., ReLU count, and 2) to enforce two ReLUs within the same block to share $\alpha$ so that they can be removed simultaneously to enable re-parameterization and Winograd-based optimization. In contrast, SENet/DeepReDuce/SNL just focus on reducing the ReLU count and thus can barely reduce the total communication. To show the importance of introducing $L_{comm}$, we compare our pruning method w/ $L_{comm}$ and the pruning methods of SNL/SENet w/o $L_{comm}$ in **Figure 2 in the rebuttal PDF**. As we can see, $L_{comm}$ helps to focus the pruning on the later layers, which incur more communication cost, and penalizes the costly blocks (e.g., blocks #16/#18). In contrast, SENet focuses on pruning early layers with more ReLU counts. --- **Q4:** Differentiate the proposed re-parameterization from CVPRW 2023. **A4**: Thanks for pointing out the valuable work. We will cite this paper in our revised version. We also make the following comparison: 1) Similarity: both papers reduce the ReLUs and the network depth. 2) Different motivations: the CVPRW paper still regards ReLU as the main latency bottleneck and removes convolution layers to reduce computation. In contrast, we fuse neighboring convolution layers to better leverage our Winograd-based optimization for communication reduction of all operators. The difference in motivation leads to different criteria when selecting convolutions to remove. The CVPRW paper selects convolutions based on ReLU sensitivity, while we consider both accuracy and communication cost. 3) Different methods: the CVPRW paper first determines which convolutions to remove based on ReLU sensitivity and then uses the gated branching method for training.
In contrast, we simultaneously train the architecture parameters with the model weights, and the re-parameterization is conducted **post training**, enabling us to leverage the benefits of over-parameterization shown in RepVGG (RepVGG: Making VGG-style ConvNets Great Again). As shown in **Table 1 in the rebuttal PDF**, CoPriv achieves both better accuracy and lower communication compared to the CVPRW paper. --- **Q5:** The criteria for ReLU pruning and re-param are not clear. **A5:** We prune ReLUs based on the architecture parameter $\alpha$ of each block. $\alpha$ is trained jointly with model parameters and considers both accuracy and communication cost. During the search, we gradually fix small $\alpha$ to 0 until the required communication is achieved. After ReLU pruning, there are only three sequential convolutions in the inverted residual block. Then, we further re-parameterize the block into a single convolution. --- **Q6:** Explain the accuracy degradation of pruning and re-param. **A6:** MobileNetV2 with pruning degrades the accuracy compared with the original MobileNetV2, but here we directly report the accuracy of MobileNetV2 after pruning. In contrast, re-parameterization will not hurt the accuracy because it **equivalently** merges the convolutions within an inverted residual block into a single convolution **post training**. We show the details of re-parameterization in Appendix C. --- **Q7:** Compare the protocol optimization with Cheetah. **A7:** Thanks for the valuable advice. Cheetah uses an HE-based protocol for convolution instead of the OT-based protocol in CrypTFlow2. Cheetah achieves lower communication compared to CrypTFlow2 at the cost of more computation overhead for both the server and client. Hence, we believe Cheetah and CrypTFlow2/CoPriv have different applicable scenarios. For example, for less performant clients, Cheetah may not be applicable, while when the bandwidth is low, CrypTFlow2/CoPriv may not be the best choice.
When comparing with Cheetah, we observe a similar trend. As shown in **Table 3 in the rebuttal PDF**, we find that while CoPriv always outperforms CrypTFlow2, CoPriv achieves lower latency than Cheetah at high bandwidth, while Cheetah incurs lower latency at low bandwidth. Hence, in our paper, we focus on comparisons with CrypTFlow2 and other OT-based baselines. **Limitation:** There is still an efficiency gap between ciphertext and plaintext inference, and we leave closing it as future work. --- Rebuttal Comment 1.1: Title: Thanks! Comment: Thank you to the authors for the clarifications. My concerns are now addressed, and I have accordingly raised my score.
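The per-layer initializer selection described in A2 can be sketched as follows. This is a minimal illustration, not the actual protocol: only the asymptotic shapes ($O(CK)$ rounds with $O(\lambda + T)$ per round for a server-initialized OT, and $O(CT)$ rounds with $O(\lambda + K)$ per round for a client-initialized one) come from the rebuttal; the unit constants and function names are ours.

```python
# Sketch of the DNN-aware adaptive OT initializer choice from A2.
# Cost model (asymptotics from the rebuttal, constants illustrative):
#   server-initialized OT:  C*K rounds, (lam + T) units per round
#   client-initialized OT:  C*T rounds, (lam + K) units per round
# K, C, T = output channels, input channels, tiles; lam = security parameter.

def ot_comm_cost(K, C, T, lam=128, initializer="server"):
    """Approximate total communication (in arbitrary units) of one matmul."""
    if initializer == "server":
        rounds, per_round = C * K, lam + T
    else:  # client-initialized
        rounds, per_round = C * T, lam + K
    return rounds * per_round

def pick_initializer(K, C, T, lam=128):
    """Per-layer adaptive choice: whichever side yields the lower cost."""
    s = ot_comm_cost(K, C, T, lam, "server")
    c = ot_comm_cost(K, C, T, lam, "client")
    return ("server", s) if s <= c else ("client", c)

# Example layer shapes: few output channels and many tiles favor the server;
# the reverse favors the client.
print(pick_initializer(K=32, C=64, T=1024))   # server-initialized wins
print(pick_initializer(K=1024, C=64, T=32))   # client-initialized wins
```

Since all three shape parameters are fixed by the architecture before inference, this selection can be done offline per layer, which is the point of the "DNN-aware" design.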
Summary: This paper presents CoPriv that jointly optimizes 2PC protocols and DNN architectures. It argues that SOTA 2PC protocols mainly focus on minimizing ReLU-based metrics, even though ReLU no longer contributes the majority of communication. It proposes a new protocol for convolution with Winograd transformation and proposes a series of other DNN-aware optimizations. It shows communication reduction compared to SOTA protocols, CrypTFlow2, and other network optimization methods. Strengths: (1) The observations in the motivation section are informative, well written and supported. (2) The Winograd-based transformation with tile aggregation is well motivated and well illustrated (Figure 4). (3) The adaptive convolutional protocol is novel, insightful and effective. (4) The ablation study, which adds optimizations step by step, is conclusive and well-conducted. (5) The proposed methods consistently achieve communication reduction compared to SOTA. Weaknesses: The implication of end-to-end speedup is not well studied. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper shows a consistent communication reduction, which the reviewer appreciated. However, it would be better if the authors could provide some analysis on the end-to-end speedup using common setups. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper only analyzes the communication volume, while the effects on communication time and inference speedup have not been well discussed. It can be the case that with higher bandwidth, this communication volume reduction will not be significant. However, the reviewer appreciates the communication volume reduction alone, and would not consider this as a major limitation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer yVeD for your thoughtful feedback! --- **Q1:** The implication of end-to-end speedup is not well studied. **A1:** Thanks for the meaningful suggestion on the inference speedup! In our experiments, we compare the inference latency of MobileNetV3 (with different capacities), ReLU-optimized networks (including SNL and SENet), and SOTA pruning methods (including uniform pruning and MetaPruning) on the ImageNet dataset with a widely used LAN communication setting (i.e., 377 MBps bandwidth and 0.3ms echo latency) following CrypTFlow2. As shown in Figure 8(c), on the ImageNet dataset, 1) our proposed CoPriv outperforms SNL with 6% higher accuracy and 1.4$\times$ latency reduction, and 2) CoPriv outperforms SENet with 2.8% higher accuracy and 2.2$\times$ latency reduction. These results demonstrate a consistent accuracy improvement with latency reduction. When the communication bandwidth becomes lower, e.g., in the wireless setting, we expect our latency reduction can be further improved. In brief, we do agree with your valuable suggestions on the inference speedup, and we will add more results about the speedup in the wireless setting with a lower bandwidth in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the great response. My concerns are addressed. I will keep my positive rating. Please consider accepting it.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for the thoughtful feedback and helpful comments! **Rebuttal One-page PDF:** we attach the one-page PDF here to include the figures and tables mentioned in the following responses. We give a brief introduction of these figures and tables below for the reviewers' convenience during quick review: * Figure 1: Comparison of the number of multiplications in each block between the original MobileNetV2 and CoPriv with Winograd transformation (based on the comments of Reviewer P7t8). * Figure 2: Comparison of different pruning methods and the influence of $L_{comm}$ during the search in each block (based on the comments of Reviewer vLvc and P7t8). * Table 1: Accuracy and communication comparison between the CVPRW paper and our CoPriv (based on the comments of Reviewer vLvc and P7t8). * Table 2: Comparison with prior-art methods with qualitative optimization level (based on the comments of Reviewer AJR8). * Table 3: Latency comparison with Cheetah for different blocks and bandwidths (based on the comments of Reviewer vLvc). Pdf: /pdf/c3e7a9bb7e1d56a8044068f27cf66c70752bf60b.pdf
NeurIPS_2023_submissions_huggingface
2023
LambdaBeam: Neural Program Search with Higher-Order Functions and Lambdas
Accept (poster)
Summary: The paper extends the CrossBeam method with synthesizing intermediate lambda functions to solve the programming by example task. The authors introduce the Merge operator to construct new lambda functions by choosing an operator from the DSL, and its arguments from existing terms (variables or lambda functions). The inputs to the Merge operator are predicted by the neural model, so the lambda functions are built step-by-step bottom-up, similarly to the whole synthesized program. Lambda functions cannot be executed on the input/output examples so they are executed on hardcoded canonical argument tuples instead and are encoded using property signatures computed on the results, the expected output of the program, and the arguments of the lambda function. Strengths: As the authors claim and also as far as I know this is the first neural search method to synthesize general-purpose (not hardcoded) lambda functions, which has been an open problem for years. DreamCoder (cited in the Related Work) can also synthesize lambda functions but it does so by finding common program fragments in synthesized programs in a Bayesian framework. Weaknesses: The paper - understandably - refers to the CrossBeam paper many times. It contains a summary of CrossBeam in lines 100-121, but reading that paper still helped a lot to understand this paper. Also, I think that Figure 2 from the CrossBeam paper should be included as Section 3.3 talks about parts of that Figure. It would be good to include CrossBeam without lambda functions as a baseline for the evaluations; currently it is hard to know how much of the improvement is due to the lambda functions. Part 3.1 could be clearer: - the example is at the end of the section, maybe an example-first approach could be better - Merge ensures that there are no free variables and also unifies the arguments.
When reading through the section it seems from line 128 to 147 that the only criterion we need is that we have no free variables and that alone ensures the unification of the arguments. - I'm not sure these are correct: - line 157 says that Merge runs the function $f$, - line 160 says that $a_k(i_k)$ evaluates to a concrete value, I think it should be an expression - line 217 says that $S$ contains variable tokens, but I think it also contains lambda expressions. I'm not sure how to interpret line 223: "an embedding of the weight of this value". I couldn't find which LLM the authors used as a baseline, it should be cited. I couldn't find the range of the inputs and outputs, it would be good to include them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: There were 16 canonical argument tuples for each combination of tuple length and argument types. To me it seems somewhat low. Wouldn't more be helpful, or does the slowdown of the search counteract the improvement? Why do most of the methods perform worse on the handwritten tasks compared to the synthetic tasks? Is it because the generation of the synthetic tasks is similar to their algorithm? Figure 4 (Synthetic tasks) shows significant improvement in the LLM results as the task weight grows from 9-10 to 11-12. Is that an anomaly or is there an explanation? It would be interesting to see what happens after task weight 11-12. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are addressed at the end of the Results, maybe they could have their own section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
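The lambda-encoding idea summarized in this review — run the lambda on hardcoded canonical argument tuples and compute properties of the results — can be sketched as a toy example. The canonical arguments and the four boolean properties below are hypothetical stand-ins, not the paper's actual signature set:

```python
# Toy sketch of property-signature-style lambda encoding (hypothetical
# canonical arguments and properties; not the paper's actual feature set).

CANONICAL_ARGS = [(-2,), (-1,), (0,), (1,), (2,), (5,), (10,), (100,)]

def property_signature(fn, args=CANONICAL_ARGS):
    """Encode a unary integer lambda as a fixed-length 0/1 feature vector."""
    sig = []
    for (x,) in args:
        try:
            y = fn(x)
            sig += [1,                   # executed without error
                    int(y == x),         # identity on this input
                    int(y > x),          # output exceeds input
                    int(y % 2 == 0)]     # even output
        except Exception:
            sig += [0, 0, 0, 0]          # errors become all-zero features
    return sig

# Behaviorally different lambdas get different signatures, which is what
# lets a neural search policy condition on a lambda before its call site
# (and thus its real inputs) are known.
sig_inc = property_signature(lambda v: v + 1)
sig_sq  = property_signature(lambda v: v * v)
assert sig_inc != sig_sq
```

Note the error branch: a lambda that crashes on some canonical input still yields a well-defined signature, so search never has to special-case partial functions.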
Rebuttal 1: Rebuttal: Thank you for your review! > include CrossBeam without lambda functions as a baseline CrossBeam would not perform well because 85/100 handwritten evaluation tasks and 53/100 synthetic evaluation tasks use a lambda function in the solution. CrossBeam would not be able to solve those problems and would end up being the worst in our comparison. > Part 3.1 could be clearer We used “unify” loosely here. We mean that the arguments $a_k$ are alpha-renamed using $i_k$ so that we can choose which variables are used in which locations. For example, if we have a term $T = \lambda v_1. (v_1 + 1)$, then this alpha-renaming step is needed to distinguish between $Merge(\times, T, [v_1], T, [v_2]) = \lambda v_1, v_2. (v_1 + 1) \times (v_2 + 1)$ and $Merge(\times, T, [v_1], T, [v_1]) = \lambda v_1. (v_1 + 1) \times (v_1 + 1)$. We will replace our usage of “unify” with “alpha-rename”. Line 157: Instead of “Merge runs $f$ on the arguments”, we should say “Merge creates an expression that calls $f$ on the arguments”. Line 160: The argument $a_k(i_k)$ is indeed an expression. We mean to point out that this expression is not immediately usable as a lambda in a higher-order function and needs to be wrapped with an explicit lambda. For example, suppose we have the term $T = \lambda v_1. v_1 + 1$. We’d use this in a first-order function as follows: $Merge(\times, T, [v_1], x, []) = \lambda v_1. (v_1 + 1) \times x$. If we use a higher-order function in the exact same way, we get $Merge(map, T, [v_1], x, []) = \lambda v_1. map(v_1 + 1, x)$ which is not valid since the first argument to map must be a function. We instead need $Merge(map, T, [u_1], x, []) = map(\lambda u_1. T(u_1), x) = map(\lambda u_1. u_1 + 1, x)$, where the $\lambda u_1.$ part is explicitly added by our definition of Merge because we know that $map$ requires its first argument to be a function with arity 1. Line 217: This sentence has ambiguous parsing which we will revise. 
$S$ contains variable tokens for constructing lambdas, and $S$ also contains lambda expressions and non-lambda expressions. Line 223: The weight of the value is an integer, and we look up the integer in a learned embedding table to obtain $z$. This is done in the same way as in CrossBeam (Section 3, “Value module” paragraph, in the CrossBeam paper), for both lambda and non-lambda values. > I couldn't find the range of the inputs and outputs For inputs and outputs in our handwritten and synthetic tasks, all integers are in the range [-256, 255] as in DeepCoder, and lists have lengths in the range [0, 10] which we felt was reasonable for PBE users to specify. > There were 16 canonical argument tuples ... it seems somewhat low Through profiling, we found that the majority of LambdaBeam’s time is spent computing property signatures (not running the model!), especially for lambda functions since they must be run many times. For this reason, we didn’t include too many canonical argument tuples. > Why do most of the methods perform worse on the handwritten tasks compared to the synthetic tasks? There are two potential reasons. One is actually just visual: in Figure 4, the handwritten tasks have weight buckets shifted leftward compared to the synthetic tasks, because we have handwritten tasks of weight 13-19 but all synthetic tasks have weight at most 12. For tasks of weight <= 10, LambdaBeam+Restarts performs equally well on handwritten and synthetic tasks, and $\lambda^2$ even performs better on the handwritten tasks. The other reason is that the tasks are distributed differently in the handwritten vs synthetic datasets. 
For example: ``` The handwritten evaluation tasks include: * 20 tasks of weight 7 - 8: 17 with lambdas, 3 without * 25 tasks of weight 9 - 10: 25 with lambdas, 0 without * 20 tasks of weight 11 - 12: 17 with lambdas, 3 without The synthetic evaluation tasks include: * 20 tasks of weight 7 - 8: 13 with lambdas, 7 without * 20 tasks of weight 9 - 10: 16 with lambdas, 4 without * 20 tasks of weight 11 - 12: 9 with lambdas, 11 without ``` The handwritten tasks include more tasks that use lambdas, and there are abnormally many synthetic tasks of weight 11-12 without lambdas which appear to be easier overall. There are likely other distribution shifts such as in the shape of the solution (deep vs wide expression trees), the operations used, or the I/O types. It is thus an impressive result that LambdaBeam+Restarts performs the best on both evaluation datasets. > Figure 4 (Synthetic tasks) shows significant improvement in the LLM results as the task weight grows from 9-10 to 11-12 This is an anomaly. In fact, for synthetic tasks of weight >= 8, all of the LLM’s “solutions” are false positives with some form of “if the input is <hardcoded> then return <hardcoded>” logic. Indeed, Figure 5 shows the LLM has a very high false positive rate on synthetic tasks. This solution pattern is easier to implement when the output is an integer as opposed to a list, and there are abnormally many synthetic tasks of weight 11-12 with integer outputs. 
``` The handwritten evaluation tasks include: * 20 tasks of weight 7 - 8: 11 have int output, 9 have list output * 25 tasks of weight 9 - 10: 4 have int output, 21 have list output * 20 tasks of weight 11 - 12: 8 have int output, 12 have list output The synthetic evaluation tasks include: * 20 tasks of weight 7 - 8: 7 have int output, 13 have list output * 20 tasks of weight 9 - 10: 6 have int output, 14 have list output * 20 tasks of weight 11 - 12: 14 have int output, 6 have list output ``` --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your insightful response. In Part 3.1 I was not confused by the use of the word "unify", but the structure of the beginning of the section: lines 124-127 talk about equivalent expressions with different variable names (renaming), then from line 128 the paper is about free variables. The two parts are not connected. Renaming (or unifying) is talked about again only from line 148. So my problem was that free variables appear suddenly out of nowhere and seemingly there is a connection but it's not explained. After reading the whole section it cleared up, but I feel it could be improved. I don't think the word "unify" should be changed. I believe that the differences between the handwritten and synthetic tasks should be mentioned in the paper and their distributions should be included in the Appendix. --- Reply to Comment 1.1.1: Comment: Thanks again for all of the helpful suggestions. We will revise the text of Section 3.1 and also include our discussion on handwritten vs synthetic tasks in an appendix.
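The alpha-renaming role of Merge discussed in this thread can be illustrated with a toy sketch. This is a simplified stand-in, not the paper's implementation: the dict-based term representation, `make_lambda`, and `merge` signature are ours; the example reproduces the rebuttal's $T = \lambda v_1.(v_1+1)$ case, where the argument index lists decide which outer variables feed each sub-term.

```python
# Toy sketch of Merge's alpha-renaming (hypothetical representation).
# A term is either a plain value or a lambda: {"arity": n, "fn": callable}.

def make_lambda(arity, fn):
    return {"arity": arity, "fn": fn}

def merge(op, args):
    """op: n-ary function; args: list of (term, var_indices) pairs.
    var_indices pick which outer variables (1-based) each sub-term sees."""
    arity = max((i for _, idxs in args for i in idxs), default=0)
    def body(*outer_vars):  # outer_vars[0] plays the role of v_1, etc.
        vals = []
        for term, idxs in args:
            if isinstance(term, dict):  # lambda term: apply to chosen vars
                vals.append(term["fn"](*(outer_vars[i - 1] for i in idxs)))
            else:                       # constant / concrete value
                vals.append(term)
        return op(*vals)
    return make_lambda(arity, body)

T = make_lambda(1, lambda v1: v1 + 1)       # T = λv1. v1 + 1
mul = lambda a, b: a * b

# Distinct index lists -> λv1,v2. (v1+1) * (v2+1)
two_var = merge(mul, [(T, [1]), (T, [2])])
# Shared index list  -> λv1. (v1+1) * (v1+1)
one_var = merge(mul, [(T, [1]), (T, [1])])

assert two_var["fn"](2, 3) == 12   # (2+1) * (3+1)
assert one_var["fn"](2) == 9       # (2+1) * (2+1)
```

The key point mirrored from the rebuttal: the same sub-term `T` yields two different merged lambdas depending only on the variable indices, which is exactly the distinction the alpha-renaming step exists to make.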
Summary: The paper presents LAMBDABEAM, an nn-based search method for program synthesis which is built upon CROSSBEAM and can handle lambda functions and higher-order functions. Specifically, to build lambda terms, LAMBDABEAM enforces that every term constructed during search has no free variables by introducing a novel operator called MERGE. Furthermore, to learn lambda expressions, LAMBDABEAM constructs a new generalization of property signatures to represent lambda expressions. The paper shows that LAMBDABEAM outperforms existing techniques in the integer list manipulation domain (a modified DeepCoder). Strengths: 1. This addresses a meaningful problem, that is how to search for lambdas and higher-order functions, which would enable the synthesis of arbitrary looping computations and extend the boundary of neural program synthesis. 2. The experimental results are promising. 3. The writing is clear. 4. In the current era dominated by LLMs in the field of program synthesis and code generation, this paper makes a good attempt towards small and meaningful works. I believe that this type of work and LLM-related works will inspire and complement each other. Weaknesses: 1. Placing a figure that illustrates the LAMBDABEAM model architecture would be better. Although the design largely follows CROSSBEAM, a LAMBDABEAM figure is necessary for showing the differences and for readers unfamiliar with CROSSBEAM. 2. Experimental settings are somewhat confusing. For example, what causes the differing number of I/O examples for list output and integer output in the hand-written tasks? What is the name of the pre-trained LLM, since the performance gap among different LLMs on program synthesis is significant? Can the authors further explain these settings? 3. Can the authors fine-tune the LLM on the proposed DSL, which might be a better comparison? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Will CROSSBEAM be improved with restarts? 2.
Considering the elapsed time with quantitative computing resources used would be better if possible. 3. How might this kind of nn-based search method be combined with LLMs? 4. An experimental comparison with DreamCoder is needed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations in the results part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Weakness 1. We will include more info about CrossBeam (see global response). > what causes the differing number of I/O examples for list output and integer output in the hand-written? In general, PBE tasks can be specified with fewer examples if the examples are more constraining. It would be easy for a program to coincidentally output the correct integer using the wrong approach, but it is less likely that an entire list matches by coincidence. For this reason, we used fewer examples (3) to specify the problem when the output is a list, and more examples (5) when the output is an integer. These numbers were chosen to avoid underspecifying the problem, while also being realistic in the number of examples a program synthesis user might want to provide. > Can the authors fine-tune the LLM on the proposed DSL It would cost a large amount of compute resources to train the LLM (a very large sequence model) on our DSL. Instead, we have trained a smaller sequence model from scratch on our DSL, which is the RobustFill approach already included in our experiments. > Will CROSSBEAM be improved with restarts? We hypothesize that CrossBeam will also improve with restarts for an appropriate restart frequency (a hyperparameter), but it is unclear how much improvement will be gained. The amount of improvement could vary by the domain or dataset used, as was the case in our experiments for handwritten vs synthetic tasks. (Also, we note that any such improvement to CrossBeam would not affect our experimental conclusions, since CrossBeam as a baseline would fail to solve a good majority of our evaluation problems which require using lambda expressions.) > Considering the elapsed time with quantitive computing resources used would be better if possible. 
We agree it would be better, but this is very hard to do considering the different hardware used to run CPU-only approaches ($\lambda^2$ and enumeration), CPU with GPU (LambdaBeam), mainly GPU (RobustFill), and multiple accelerators in parallel (the LLM). > How might this kind of nn-based search method be combined with LLMs? This is an excellent question for future work! There are some very recent works that use LLMs to generate programs iteratively, e.g., to self-debug their predictions (https://arxiv.org/abs/2304.05128). In a similar vein, it would be very interesting to see whether LLMs can be made to perform program synthesis search guided by other sources of info such as program evaluations. > A experimental comparison with Dreamcoder is needed. DreamCoder is fundamentally an algorithm for enriching an impoverished DSL, and shows how that enrichment process can synergize with neurally-guided program search. Therefore, it does not make sense to compare *against* DreamCoder, but to experimentally consider *augmenting* DreamCoder with LambdaBeam (using our work as DreamCoder’s neurally-guided search strategy). While that would be a fascinating avenue to explore, we believe it is sufficiently involved to not be a reasonable piece of work to include within the scope of this paper. However, we will revise the paper to include this explanation of how DreamCoder and LambdaBeam could synergize in future systems. --- Rebuttal Comment 1.1: Comment: Thanks for the response! Most of my concerns have been addressed. My remaining concern is about "the differing number of I/O examples for list output and integer output in the hand-written". I understand that lists need fewer I/Os than integers. I'm confused about the specific choice of "3" and "5". Why not choose both of them to be "5"? Is there any cherry-pick on this choice? Also, I'm curious about the performance of prompt + GPT-3.5/4. 
(And that's why I asked questions about the name of the LLM and the combination of search+LLMs) Overall, it is a good paper. --- Reply to Comment 1.1.1: Comment: Thank you for your helpful comments. > My remaining concern is about "the differing number of I/O examples for list output and integer output in the hand-written". I understand that lists need fewer I/Os than integers. I'm confused about the specific choice of "3" and "5". Why not choose both of them to be "5"? Is there any cherry-pick on this choice? We wanted the benchmarks to have a good balance between being realistic (using a small number of examples, because users might not want to specify many examples) and being well-specified (we need enough examples to avoid under-specifying the task). While creating the first few handwritten tasks, we found that 3 list outputs gave a good balance, and 5 integer outputs to similarly give a good balance (note it is easier to provide examples of integer outputs). We did not use 5 list outputs because we felt the extra 2 examples were unnecessary. There was **no cherry-picking** on this choice or in the benchmarks overall. These choices were set before we started using the benchmarks during development and initial research. During development, the benchmarks were only altered to resolve clear issues (e.g., mistakes in handwritten examples) or to add more benchmark tasks.
Summary: In this work the authors introduce LambdaBeam, a method crafted to explicitly handle lambda functions and higher-order functions for neurally guided program synthesis. Towards this goal, the authors first introduce a method to represent lambda functions which enables a variable-order-independent canonical representation, and eases creation of lambda functions by merging other lambda functions. Then, the authors adapt CrossBeam, a pre-existing method for program synthesis, to synthesize lambda functions, while also employing property signatures to represent lambda functions. When deployed on integer list manipulation tasks, LambdaBeam surpasses other competitive baselines in terms of both speed and success rate. Strengths: ### Originality Previous works do not explicitly model lambda functions, or learn how to compose them. This is an original contribution of the paper. ### Quality & Clarity The paper at a paragraph level is well written. ### Significance The approach towards representation of lambda functions is an interesting, useful, and novel contribution that may be useful for many future program synthesis approaches. Furthermore, the approach is able to beat strong baselines such as lambda^2, an off-the-shelf program synthesis tool, and a 60B parameter large language model. Weaknesses: ### Weaknesses 1. I think the paper does not clearly justify *why* prior works cannot model lambda functions or higher-order functions. Particularly, it states that programming by examples (PBE) demands a more systematic search strategy but it's unclear why that is the case. The paper also doubles down on the belief that other methods *cannot* synthesize programs with arbitrary looping computation, but it's unclear why that is the case. For example, large scale models (GPT 3.5) can indeed produce programs with higher-order functions and lambda functions. It's unclear why the paper strongly posits that other methods cannot do this. 2.
The paper is not easy to understand and seems to depend on the reader being familiar with CrossBeam. On that note, LambdaBeam strongly depends on CrossBeam, which is a small drawback as well (though the other contribution, the representation of lambda functions, is general and useful). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Why did the authors use the DeepCoder benchmark? Is an integer manipulation benchmark the right choice when solutions may involve a lot of hierarchical reasoning? I am especially concerned since the input contains only 3-5 examples, which might not be sufficient to triangulate the true function. Furthermore, the False Positive rate might be higher simply because of the benchmark used. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper does not discuss limitations or potential negative societal impact. Adding information regarding both these aspects in the appendix might further improve the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Weakness 1. We were careful with our wording, but we will revise to make it more clear. When we discuss previous methods that cannot handle lambda functions (line 23), we are referring specifically to the types of prior works listed on lines 22-23, NOT referring to LLMs. We refer to this line of work, on deep learning for PBE, as “neural synthesis search”. Previous work in neural synthesis search (as cited on lines 22-23) indeed does not generate arbitrary looping computations. Why not? We discuss a key difference in lines 32-26, and two difficulties in lines 41-49 and 50-56. In short: When explicitly searching over programs, the search space is very large. The best way that has been discovered in neural synthesis search is to evaluate partial programs, and use the result as input to the neural search policy. When building up a lambda function during search, the search method does not know yet what the inputs to the lambda will be, so we cannot directly evaluate the lambda. How might we represent lambdas efficiently (pruning the search space) and provide evaluation information to the neural search policy? These two difficulties are addressed by key contributions of our paper. As for LLMs: Yes, LLMs *can* produce programs with loops, and quite impressively, but they perform synthesis from _natural language_ which is very different from PBE where we have only input-output examples. LLMs seem to depend strongly on natural language for their success. When given input-output examples alone (i.e., the PBE setting), we find in our results that LLMs perform poorly. This is why we say “PBE demands a more systematic search strategy”. To our knowledge, LambdaBeam is the first neural synthesis search method that handles lambda functions or arbitrary looping computations. 
Note that $\lambda^2$ handles lambdas but is not neural, and language models also handle loops and lambdas but do not perform any clever search (other than beam search or rejection sampling), and hence have trouble with PBE. > Weakness 2. We will include more info about CrossBeam (see global response). > Why did the authors use DeepCoder benchmark? We needed a benchmark that included arbitrary lambda functions, and the DeepCoder benchmark was the closest to that since it had hardcoded lambda functions. Thus, we extended DeepCoder to allow arbitrary lambda functions. For the 100 handwritten evaluation tasks, we generally found that 3-5 examples are enough to sufficiently describe the task, and that false positive solutions are generally complicated unnatural programs that are accidentally correct on the examples. However, a higher proportion of the synthetic tasks are actually ambiguous, which can be seen from the significantly higher false positive rate on synthetic tasks compared to handwritten tasks. > The paper does not discuss limitation We discuss limitations on lines 342 - 347. --- Rebuttal Comment 1.1: Title: Post-Rebuttal update Comment: I thank the authors for the detailed rebuttal. This rebuttal, along with other reviewer's notes has been useful to improve my understanding of this work. > Weakness 1 "Neural synthesis search" generally covers a much wider space than "deep learning for PBE", and is likely to be misleading for readers. I believe the paper would be improved by making the claims more specific. Example Excerpt from lines 29-31: "The fundamental question explored in this paper is whether a neural program synthesis search policy can learn to reason about lambdas and higher-order functions, which would enable the synthesis of arbitrary looping computations that were not previously possible with neural synthesis search" -> not possible with neural synthesis search techniques which relies on intermediate expression evaluation. 
Also, any auto-regressive token-wise prediction approach which does not rely on intermediate expression evaluations can model lambda functions (**not just LLMs**). I.e., transformer models which perform program synthesis via a simple next-token-prediction task (as done in PLAD [1]), **without relying on natural language**, can indeed predict lambda functions (if trained on examples containing lambda functions). > Weakness 2 I appreciate the authors' response! > DeepCoder Benchmark I appreciate the authors' response! It's indeed true that the handwritten benchmark has a smaller false-positive rate (which reflects well on the proposed method). I can understand the reasons for using this benchmark. Hopefully, future works will employ more suitable benchmarks. > Limitations I thank the authors for correcting my statement. The paper does mention the limitations of the proposed approach. I would still suggest the authors add a discussion of potential negative societal impact in the appendix. The authors have addressed my queries. Therefore, I am raising my rating to 7 Accept (on the expectation that authors will edit the draft to be more specific about the paper's contribution w.r.t. prior work such as in Lines 29-31). ### Reference [1] PLAD: Learning to Infer Shape Programs with Pseudo-Labels and Approximate Distributions, R. Kenny Jones et al., CVPR 2022. --- Reply to Comment 1.1.1: Comment: Thanks again for your helpful suggestions and discussion. We will clarify lines 29-31. Yes, autoregressive sequence models can predict lambda functions if trained on them; after all, our RobustFill comparison does exactly this, so we agree that it's important to be clear about this.
Summary: This paper presents a method for training a neural module to guide a search-based program synthesis procedure that supports lambda functions. This is accomplished by leveraging the existing technique of property signatures, which essentially represent program constructs using a hand-designed vector of features. The authors design features for representing lambda functions, including evaluating the lambda function on a hardcoded set of inputs. This allows lambda functions to be incorporated in a prior bottom up program search technique called CrossBeam, and they introduce a new Merge operator to build lambda terms from the bottom up. The results on a modified version of the DeepCoder benchmark show their approach outperforms several strong baselines, including symbolic search and an LLM fine-tuned on Python. Strengths: Representing and synthesizing (lambda) functions is a significant step forward for neural program synthesis. Although the individual techniques are mostly prior art, I consider the combination of techniques to be novel. The writing is generally clear, though I found the implementation details to be quite sparse in places (see Weaknesses) Weaknesses: The biggest weakness is that the approach is tested on only one synthetic dataset and also relies on hand designed features. Furthermore, the lambda functions only contain 2 variables. It remains to be seen if this approach could scale to more realistic datasets with more data types and more complex functions. Additionally, there are almost no examples in the paper. At the very least, the appendix should include some examples of the synthesis dataset as well as programs synthesized by LambdaBeam. Finally, many of the implementation details are not listed out fully, which impedes reproducibility (e.g., the architecture of the models) and in severe cases, the reader's ability to contextualize the results (e.g., a list of the property signatures used and the evaluation tasks). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors:
- How many programs are there of weight at most 12 (i.e., that were sampled from for the training set)?
- What size is the property signature? (or how many properties are there)
- Can you elaborate on why you add an embedding of the weight of a value for embedding lambda expressions?
- Is Merge complete for lambda functions? This should be addressed in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors should also address the extent to which the property signatures are tailored to the DSL, and the implications for broader applicability / scalability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > there are almost no examples in the paper This is a great point. Please see the global response for some example tasks and synthesized programs. > many of the implementation details are not listed out fully, which impedes reproducibility (e.g., the architecture of the models) We provide more details below, and we will add them to the paper: * I/O encoder: it encodes the property signatures of I/O examples using a 2-layer relu-MLP with hidden size and output size of 512. * Value Module: it encodes the property signatures of a value using a 2-layer relu-MLP with hidden size of 512 and output (embedding) size of 256, with layer-norm applied after each linear projection. We have different MLPs for non-lambda and lambda expressions, as mentioned in Line 224 in the paper. * Argument Selector Module: we use an operator-specific 3-layer LSTM with hidden size 256. The prediction head is a 2-layer MLP of hidden size 512. During training, we use beam size 10 to generate on-policy data, where the effective batch size is 32 and a constant learning rate of 5e-4 is used with the Adam optimizer. We will release our code and model checkpoints if accepted, to aid in reproducibility. > The biggest weakness is that the approach ... also relies on hand designed features > a list of the property signatures used > What size is the property signature? We indeed use some hand-designed features. However, our approach actually requires less manual design than it might seem, because we devised a system of combinatorially combining property functions to greatly expand the richness of the property signatures. Appendix A (in the supplementary material) describes this in full detail. In particular, in total across types in the DSL, we defined 20 “basic properties” of objects, 8 other objects “relevant” to understanding an object, and 24 properties for comparing two objects of the same type. These are all explicitly listed in Appendix A. 
With only these building blocks, using the compositional approach in Appendix A, we encode lambda values with property signatures of length 558, and encode non-lambda values with property signatures of length 359. > How many programs are there of weight at most 12 (i.e., that were sampled from for the training set)? When generating training data, we first randomly generate some inputs to a synthesis problem, i.e., the input variables’ values across multiple examples (line 258). Then, the cardinality of the space of training programs varies depending on the random inputs (the number of inputs and their types), and because we exclude suboptimal solutions from the training set. That is, a program is excluded from the training set if there is a different program found earlier in our enumerative search that evaluates the same way on the input variables for all examples. That said, we can still provide ballpark numbers. Consider the “map:replace” task from the global response. Then, starting from those inputs and ignoring the task’s outputs, our baseline enumeration finds the following programs (all with different behavior when run on the examples): * 313,842 programs with weight at most 8, in 5 minutes * 1,573,527 programs with weight at most 9, in 30 minutes * 8,390,593 programs with weight at most 10, in 170 minutes When generating training data, not all searches reach weight 12 within the 1 hour time limit (depending on the random inputs). Of course, there would be many more programs if we relax the constraint of solution optimality, and there are even more programs that are syntactically valid but fail to typecheck. > Can you elaborate on why you add an embedding of the weight of a value for embedding lambda expressions? This is carried over from CrossBeam. Every value in CrossBeam and LambdaBeam (including lambda and non-lambda expressions) has an embedding of the value’s weight added to the embedding of the value. 
This helps the model understand the “cost” associated with using this value which may influence the model’s decisions, e.g., to not use values that have too high of weight. This may help the model avoid getting stuck exploring a “rabbit hole” of larger and larger expressions that ultimately do not lead to progress. > Is Merge complete for lambda functions? Yes, Merge is complete, in the sense that we can use it to generate any function of the inputs in our DSL. Let $x_1 \dots x_n$ represent the inputs to the programming-by-example (PBE) task. All solutions to the PBE task are a function $\lambda x_1 \dots x_n. t$, where $t$ has no unbound variables other than $x_1 \dots x_n$. We can show that Merge generates the set of all terms $t$ in our DSL that have no unbound variables other than $x_1 \dots x_n$. Proof sketch: Use structural induction. The recursive case is where the expression has the form $t = \lambda v_1 \dots v_n. f(a_1 \ldots a_K)$. For each of the $a_k$, we define a new term $b_k$ which binds all of the free variables in $a_k$. Inductively, $b_k$ can be generated using Merge. We then set variable tuples $i_k$ appropriately such that $Merge(f, b_1, i_1, b_2, i_2, \dots)$ produces $t$. Intuitively, the $i_k$ can be seen as undoing any alpha-renaming when going from $a_k$ to $b_k$. We will clarify this in the paper, and add a formal proof in an appendix. --- Rebuttal Comment 1.1: Comment: Thanks for the response! This has addressed substantially all my concerns and I will increase my score to a 7.
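To make the architecture details above concrete, here is a minimal numpy sketch of the Value Module as we understand it from the rebuttal: a 2-layer ReLU MLP with hidden size 512 and output (embedding) size 256, with layer-norm applied after each linear projection, over a property signature of length 359 (non-lambda values). The random weights and the exact ordering of LayerNorm vs. ReLU are our assumptions, not the authors' implementation:

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    # Normalize each row to zero mean and unit variance.
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def value_module(sig, rng):
    # Hypothetical sketch: linear -> LayerNorm -> ReLU -> linear -> LayerNorm.
    # sig: property signatures, shape (batch, 359) for non-lambda values.
    W1 = rng.standard_normal((sig.shape[-1], 512)) * 0.02
    W2 = rng.standard_normal((512, 256)) * 0.02
    h = np.maximum(layer_norm(sig @ W1), 0.0)
    return layer_norm(h @ W2)

rng = np.random.default_rng(0)
emb = value_module(rng.standard_normal((8, 359)), rng)
assert emb.shape == (8, 256)
```

The same shape of module would apply to lambda expressions with signature length 558, per the rebuttal.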
Rebuttal 1: Rebuttal: We appreciate all of the insightful reviews! We will revise our paper to incorporate our clarifications and new information wherever appropriate. This global response includes information helpful for multiple reviewers, and we also respond to each reviewer individually. **The paper could use more background on CrossBeam** (reviewers c5as, WhTE, JwfZ) If accepted, we will use the extra page to include more background on CrossBeam, and LambdaBeam’s relationship to CrossBeam (e.g., with a figure), to help alleviate this weakness. **Which pre-trained LLM is used?** (reviewers WhTE, JwfZ) We omitted the name and citation for double-blind purposes. We will certainly include more details and citation if accepted. **Minor correction** Line 260 says each synthetic training task has between 3 and 5 I/O examples. Actually, synthetic tasks have between **2** and 5 I/O examples. **The paper could use examples of evaluation tasks and synthesized programs** (reviewer ErYC, but we think this context will be helpful to everyone) We will add some evaluation tasks and synthesized programs to an appendix. We cannot upload new revisions during the review period, so we provide some interesting examples here. The task names are only for convenience and are not used in our comparisons. 
**“map:replace”** This handwritten task has 3 inputs (`x`, `f`, and `r`), 3 examples demonstrating the task (“in `x`, find instances of `f` and replace them with `r`”), and a handwritten ground-truth solution using a relatively complicated lambda function: ``` inputs_dict={ 'x': [[7, 2, 4, 6, 4, 2, 5], [-6, -3, 4, 3, -5, -3, 2, 1, 5], [18, 48, 27, 26, 27, 27, 28, 17, 27, 33]], 'f': [4, -3, 27], 'r': [-1, 7, 99], } outputs=[[7, 2, -1, 6, -1, 2, 5], [-6, 7, 4, 3, -5, 7, 2, 1, 5], [18, 48, 99, 26, 99, 99, 28, 17, 99, 33]] solution='Map(lambda u1: If(Equal(u1, f), r, u1), x)' ``` In `inputs_dict`, each of the entries for `'x'`, `'f'`, and `'r'` is a list of length 3, which contains the input for each of the 3 examples. LambdaBeam+Restarts finds the same solution (weight 10) in each of the 5 trials: `Map(lambda u1: (lambda v1: If((lambda v1: Equal(f, v1))(v1), r, v1))(u1), x)`, taking a median time of 202 seconds. The solution looks complicated due to the Merge operation causing lots of variable renames (i.e., $a_k(i_k)$ in the Merge definition). We have implemented an algorithm to simplify the solution by statically resolving these renames. In this case, the solution simplifies to `Map(lambda u1: If(Equal(f, u1), r, u1), x)` which is essentially identical to the ground-truth solution. **“multi:multiply_odds”** This handwritten task has 1 input and uses multiple higher-order functions to compute a running product of only the odd elements: ``` inputs_dict={ 'x': [[3, 5, 8, 2, 1], [5, 2, 1, 3, 3, 1, 4], [3, -4, -1, 8, 2, 0, -3, 0, 9, -1]], } outputs=[[3, 15, 15], [5, 5, 15, 45, 45], [3, -3, 9, 81, -81]] solution='Scanl1(lambda u1, u2: Multiply(u1, u2), Filter(lambda u1: IsOdd(u1), x))' ``` In each of the 5 trials, LambdaBeam+Restarts finds the same solution (weight 11) that simplifies to the ground-truth solution, taking a median time of 75 seconds. 
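The two handwritten tasks above can be sanity-checked by rendering the simplified DSL solutions in plain Python. This is our own hypothetical rendering of the DSL semantics, with Map/If/Equal modeled by a list comprehension and Scanl1/Filter/IsOdd/Multiply by `itertools.accumulate` over a filtered list:

```python
from itertools import accumulate
from operator import mul

def map_replace(x, f, r):
    # Map(lambda u1: If(Equal(f, u1), r, u1), x)
    return [r if u1 == f else u1 for u1 in x]

def multiply_odds(x):
    # Scanl1(lambda u1, u2: Multiply(u1, u2), Filter(lambda u1: IsOdd(u1), x))
    return list(accumulate((u1 for u1 in x if u1 % 2 != 0), mul))

# "map:replace" examples from the task definition above
assert map_replace([7, 2, 4, 6, 4, 2, 5], 4, -1) == [7, 2, -1, 6, -1, 2, 5]
assert map_replace([-6, -3, 4, 3, -5, -3, 2, 1, 5], -3, 7) == [-6, 7, 4, 3, -5, 7, 2, 1, 5]
assert map_replace([18, 48, 27, 26, 27, 27, 28, 17, 27, 33], 27, 99) == [18, 48, 99, 26, 99, 99, 28, 17, 99, 33]

# "multi:multiply_odds" examples
assert multiply_odds([3, 5, 8, 2, 1]) == [3, 15, 15]
assert multiply_odds([5, 2, 1, 3, 3, 1, 4]) == [5, 5, 15, 45, 45]
assert multiply_odds([3, -4, -1, 8, 2, 0, -3, 0, 9, -1]) == [3, -3, 9, 81, -81]
```

Note that Python's `%` returns a non-negative remainder for a positive modulus, so `u1 % 2 != 0` correctly classifies negative odd numbers.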
**“synthetic:weight_9_function_7”** This synthetic task clips every element to the range [0, 4]: ``` inputs_dict={ 'x1': [[-9, -2, -10, -6, 0, -10, -6, 3, 1], [-1, -5, 8, 5]] } outputs=[[0, 0, 0, 0, 0, 0, 0, 3, 1], [0, 0, 4, 4]] solution='Map(lambda u1: Min(4, Max(0, u1)), x1)' ``` LambdaBeam+Restarts finds a correct solution in all 5 trials with a median time of 38 seconds, but the solutions are slightly different (the simplified solutions are listed): * `ZipWith(lambda u1, u2: Min(4, Max(0, u2)), x1, x1)` * `ZipWith(lambda u1, u2: Min(4, Max(0, u1)), x1, x1)` * `Reverse(ZipWith(lambda u1, u2: Min(4, Max(0, u2)), x1, Reverse(x1)))` * `Reverse(Map(lambda u1: Min(4, Max(0, u1)), Reverse(x1)))` (this solution is found in two trials) Note that these are not the shortest solutions, but nevertheless all of these solutions are equivalent to the ground-truth solution. LambdaBeam solutions could benefit from a postprocessing simplification step (line 339) but this is orthogonal to our contributions.
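The synthetic clipping task can be checked the same way; below is our own hypothetical Python rendering of the simplified ground-truth `Map(lambda u1: Min(4, Max(0, u1)), x1)` and of one of the equivalent `ZipWith` solutions found by LambdaBeam+Restarts:

```python
def clip_0_4(x1):
    # Map(lambda u1: Min(4, Max(0, u1)), x1) — clip every element to [0, 4]
    return [min(4, max(0, u1)) for u1 in x1]

assert clip_0_4([-9, -2, -10, -6, 0, -10, -6, 3, 1]) == [0, 0, 0, 0, 0, 0, 0, 3, 1]
assert clip_0_4([-1, -5, 8, 5]) == [0, 0, 4, 4]

def clip_zipwith(x1):
    # ZipWith(lambda u1, u2: Min(4, Max(0, u2)), x1, x1): zipping x1 with
    # itself and using only one argument degenerates to the Map above.
    return [min(4, max(0, u2)) for _, u2 in zip(x1, x1)]

assert clip_zipwith([-1, -5, 8, 5]) == clip_0_4([-1, -5, 8, 5])
```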
NeurIPS_2023_submissions_huggingface
2023
Multi-resolution Spectral Coherence for Graph Generation with Score-based Diffusion
Accept (poster)
Summary: This paper proposes Wave-GD, a score-based generative model, to generate graphs with high fidelity. By capturing the dependency between nodes and edges at multiple resolutions in the spectral space, it claims to overcome the over-smoothing problem and achieve real-like frequency characteristics of nodes and edges. Strengths: 1. The paper is well-written and easy to follow. 2. The paper's motivation is clear, and the paper has shown some limitations that previous models may have. 3. The idea of the paper is simple and straightforward. Weaknesses: 1. The experiment is not sufficient to support the proposed method. (1) Lack of baselines: some baselines that utilize spectral information/graph characteristics are missing. It would be great to see the authors compare with them. [1,2,3] (2) Lack of more challenging graph datasets: Ego-small and Community-small graphs are too simple; I think over-smoothing can't be a major issue for graphs at such scales. And how should "multi-resolution" be defined in such small graphs? I suggest the authors run experiments on more complex graphs such as Planar graphs, SBM graphs, and large networks if possible with hundreds of nodes. (3) I suggest the authors also include DiGress in Figure 5 -- since it also utilizes spectral information. (4) Another comment on Figure 5: it's not clear whether the performance drop when increasing #GNN layers is really due to the over-smoothing problem; more investigation should be conducted to further prove this phenomenon. 2. While the authors propose to also diffuse on the edge weights obtained from SGWT, there are also other ways to define the edge importance (e.g., edge conductance). Justification should be provided for why SGWT is chosen over others. [1] Luo, Tianze, Zhanfeng Mo, and Sinno Jialin Pan. "Fast Graph Generative Model via Spectral Diffusion." arXiv preprint arXiv:2211.08892 (2022). [2] Martinkus, Karolis, et al. 
"Spectre: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators." International Conference on Machine Learning. PMLR, 2022. [3] Chen, Xiaohui, et al. "Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling." arXiv preprint arXiv:2305.04111 (2023). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address the weaknesses above and: 1. How exactly can the SGWT address the over-smoothing problem? 2. Can you give a concrete example of why it can capture the "multi-resolution" of the graph, and can you provide a formal definition of multi-resolution? 3. What is the motivation for conducting the experiment with a larger sample size? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. The result on molecule generation is not that impressive. 2. The contribution may be insignificant: there are many GNN designs that may alleviate the over-smoothing problem. Any baseline model with a better score network may overcome this limitation easily. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1) Compare with more baselines. A) We thank the reviewer for introducing great references. We compared Wave-GD with Martinkus et al (ICML 2022) and Chen et al (ICML 2023) and reported the results in the pdf of the global response. As shown in the result, our method outperformed both baselines on three datasets. We will add these results in the revision and discuss their methods. W2) Compare with additional datasets with larger graphs. A) As suggested by the reviewer, we conducted an experiment on the Planar data and presented both qualitative and quantitative results in the global response. We compared MMDs of test and generated graph sets on degree, clustering coefficients, and orbit counts. Our method outperformed five baselines including SPECTRE and EDGE on the average of the three MMDs. The resultant optimal scales for the Planar dataset were also reported in the general response, which shows that their optimal scales are relatively smaller than for other generic graph datasets. These results indicate that capturing high-frequency and localized features from individual nodes was more informative than capturing cluster-related features in this dataset. Note that we rushed these experiments with limited time and resources, and hyperparameters such as the learning rate and $J$ were not fine-tuned. But still, we obtained results with Wave-GD outperforming many recent baselines. Regarding the SBM dataset, we were not able to pull reportable results given limited time but confirmed them to be promising. We plan to prepare a journal version, and these additional experiments in more complete form will be included there. W3) Include DiGress in Figure 5. A) We thank the reviewer for the suggestion. However, in Figure 5, we compared our method with GDSS for multiple Graph Multi-Head attention (GMH) layers to observe robustness against oversmoothing caused by repetitive graph convolutions (within the GMH layers). 
As DiGress uses neither graph convolutions nor GMH layers, we cannot directly compare our method with DiGress to examine its performance against the oversmoothing problem; we believe the reviewer will easily see why DiGress cannot be compared directly within Figure 5. W4/Q1) It is not clear that the performance drop in Figure 5 is caused by the increase in the number of GNN layers. How can the SGWT address the over-smoothing problem? A) The performance drop in Figure 5 is caused by repetitive graph convolutions with increasing GMH layers. As shown in lines 190-192, the query, key, and values in the GMH layer were made up of graph convolutions, which aggregate features from neighboring nodes. Many previous works [1, 2] have shown that repetitive graph convolutions with multiple layers cause the oversmoothing problem. Both our method and GDSS used GMH layers and graph convolutions; however, SGWT in our method allows a model to flexibly preserve discriminative characteristics of graph representations. This is because filtered eigenvalues $k(s\Lambda)$ and corresponding eigenvectors restrict the extent of message propagation and preserve the unique characteristics of localized signals. This spectral filtering with limited eigenvectors/eigenvalues prevents the over-smoothing issue in deep layers compared to using unfiltered raw data. We conducted an empirical analysis to support this claim by comparing Mean Average Distance (MAD) [1] between $AX$ and $A^sX$, where the MAD is a metric to measure the smoothness of the graph representation. Given the template graph shown in Fig. 1a in the main paper, $X_{i+1} = AX_{i}$ (i=0,1,2) was calculated as in a 3-layer graph convolution (without weights), where $X_0$ is a one-hot encoded degree matrix. The MAD value of $AX_3$ was 0.079 and that of $A^sX_3$ with s=20 was 0.724, which is 9 times larger. 
The low MAD value without SGWT indicates that the node representations become indistinguishable, and the higher MAD with SGWT shows that the local and discriminative features were preserved. Q2) Provide an example and definition of multi-resolution. A) Multi-resolution is a hierarchical concept [3] for capturing information in data at varying levels of granularity. This concept is well established for the wavelet transform (Mallat, 1999), and the Spectral Graph Wavelet Transform (Hammond et al., 2011) extends it to graphs, which is what we utilize in our framework. Basically, it is the band-pass filter $k(s\Lambda)$ which covers different bands, i.e., scales, in the frequency space. Given a signal $x$, controlling $s$ in Eq (1) and (2) in the main paper yields the representation of $x$ at different resolutions. We will clarify these points in the preliminary section of our revision. For empirical evaluation, in Sec. 7 of the supplementary material and the general response, we provided the values of the actual converged scales for the generic graph datasets in our experiments, which show the capability of Wave-GD to capture multi-resolution graph representations considering the characteristics of the datasets. Q3) What is the motivation for the experiment with larger samples? A) As in EDP-GNN [4] and GDSS [5], we performed the experiment with more samples to extensively assess the quality of generated samples. With more data, the larger sample set may contain more abnormalities or high-fidelity samples. Therefore, the experiment aims to assess how the quality of generated samples changes with greater diversity. [1] Chen et al. "Measuring and relieving the over-smoothing problem for graph neural networks from the topological view." AAAI, 2020. [2] Zhao and Akoglu. "Pairnorm: Tackling oversmoothing in GNNs." ICLR, 2020. [3] Rosenfeld, “Multiresolution image processing and analysis.” Springer Science & Business Media, 2013. [4] Niu et al. 
“Permutation invariant graph generation via score-based generative modeling.” AISTATS, 2020 [5] Jo et al. “Score-based generative modeling of graphs via the system of stochastic differential equations.” ICML, 2022 --- Rebuttal Comment 1.1: Comment: My concerns are mostly addressed, I'd like to raise my score
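For context on the MAD metric discussed in W4/Q1 above, here is a minimal numpy sketch of MAD (mean cosine distance between node representations) and of the smoothing effect of repeated neighborhood averaging. The toy graph (a path with self-loops), the row-normalized propagation, and the feature sizes are illustrative choices of ours, not the paper's setup:

```python
import numpy as np

def mad(H, eps=1e-12):
    # Mean Average Distance (Chen et al., AAAI 2020): average cosine distance
    # over all distinct pairs of node representations; lower = smoother.
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
    dist = 1.0 - Hn @ Hn.T
    n = H.shape[0]
    return (dist.sum() - np.trace(dist)) / (n * (n - 1))

rng = np.random.default_rng(0)
n = 6
# Path graph with self-loops, row-normalized: each step averages neighbors.
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A = A / A.sum(axis=1, keepdims=True)

X = rng.standard_normal((n, 4))
H = X.copy()
for _ in range(50):
    H = A @ H  # repeated graph convolution (no weights): oversmoothing

# After many propagation steps the representations collapse toward a
# common vector, so MAD drops sharply.
assert mad(H) < mad(X)
```

In the rebuttal's experiment, the analogous comparison is between plain propagation $AX$ (low MAD, oversmoothed) and its SGWT-filtered counterpart $A^sX$ (high MAD, features preserved).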
Summary: In this work the authors tackle the problem of graph generation learning, where the goal is to learn the key features of a set of graphs and be able to generate graphs with similar properties. To that end, the authors extend the GDSS (Jo 2022) method through an additional loss term (Eq. 7). This loss term encourages the employed GNNs to learn to reconstruct spectrally modified matrices A^s_i in addition to the normal adjacency matrices. The spectrally modified matrices are obtained through an SVD/PCA-like approach where some spectral properties of the adjacency matrix are accentuated. This spectral accentuation is learnable as part of the training procedure. The authors evaluate their approach on two real-world and two synthetic datasets also used in previous studies. In terms of MMD (Maximum Mean Discrepancy) their approach frequently outperforms other methods employed for the task of graph generation learning. On molecule data, the employed procedure still ranks among the best. Strengths: The authors provide an interesting extension to an existing approach (GDSS), which allows it to better learn the scales that are important for a specific set of graphs. The greatly increased stability during training compared to GDSS (Figure 3) seems promising. The paper contains extensive comparison to other methods. The presented approach is on par with/outperforms these other methods on the presented datasets. Most computational overhead is in the training phase; inference is as fast as GDSS. Weaknesses: Overall the core weakness of the paper is its presentation, which has a lot of room for improvement, and the weak support for the main claims of the paper (last point). The presentation of the SDE learning (lines 155-179) is not understandable without reading the GDSS paper. I think it could be drastically improved by highlighting the differences/improvements in comparison to GDSS rather than repeating all the definitions/equations. 
The description of Figure 2a) could be much improved as it is not well understandable without reading the rest of the paper first. Some of the equations and notation do not further the main cause of the paper: Section 3.1 beyond the first paragraph can almost entirely be cut. The introduced transformation $X^{s_i}$ on the edge signal X is not really used. Lemma 1 seems to be misplaced as it is introduced (lines 138ff) quite far away from where it is used (lines 190ff). Also it seems Lemma 1 is likely not a new result. The bold highlights in Table 1 are not correct: in column “orbit”, three other methods outperform the presented approach but are not highlighted in bold. Also in lines 289f the text mismatches the table. It is also unclear from the text how much hyperparameter tuning was done to achieve the results presented. Last but not least, the experimental section shows, from an application point of view, that the proposed changes lead to an empirical performance increase on the graph generation task. On the other hand, the experimental section does not well support whether the introduced changes actually had the desired effect of learning different “scales” better. It might as well be that the introduced changes simply increase training robustness. More on this in the “Questions” section. It seems that one would need additional support for the main claim. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be interesting to see which scales lambda (as in lines 219, 220) have actually been obtained from the training. Do they differ significantly for different datasets? Would an equidistant choice of lambda be sufficient? Similarly it would be interesting to see the relative strengths of the lambda_A/ lambda_A^{s_i} (as in eq.7). From a theoretical perspective (thinking of spectral clustering) one would assume that mostly low frequency modes are relevant for the community dataset; is that really the case? 
Lastly, it would be interesting to see whether the performance increase obtained from the spectrally filtered adjacency matrix is any better than just introducing A^{s_i} Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: see questions above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1) Improve clarity on SDE learning, Fig. 2a, Sec 3.1, Lemma 1, and Tab. 1. A) We thank the reviewer for the detailed constructive comments. We presented a revised version of Fig. 2a with an improved description in the pdf of the general response, so please check it. We will also clarify all other places based on the reviewer’s suggestions, such as highlighting the difference between our method and GDSS by adding the description of the loss in Eq. 7 and modifying the bolds in Table 1 in the revision. Specifically, we will add the following description in line 180, before the Remark section: “Both our method and GDSS perform denoising score-matching to the partial scores of $X$ and $A$. However, our method additionally models the joint probability space of $X$ and $A^s$ via SDEs. This is realized by the loss in Eq. (7), and this operation allows a model to flexibly estimate the complex dependency between nodes and edges with multi-resolution SGWT. Also, note that the scales of SGWT {$s_i$}$_{i=1}^J$ in Eq. (7) are trainable so that multi-level granularities that characterize the graph distribution can be adaptively captured during training.”. The $X^s$ in Sec 3.2 is not used in our method; however, it can be used by replacing $A^sX$ with $AX^s$. As shown in the proof of Lemma 1 in the supplementary material, $W_{A}(s) \cdot W_{X}(s) = AUk^2(s\Lambda)U^TX = A^sX = AX^s$. Therefore, either $A^sX$ or $AX^s$ can be used to capture the spectral coherence between node features and graph structures. We will add this explanation to the revised manuscript for better understanding. W2) Lemma 1 is likely not a new result. A) To the best of our knowledge, Lemma 1 and its proof are novel, as it shows that computing the coherence as a dot product of multi-resolution nodes and edges in the spectral space is equivalent to a graph convolution in the original graph space. 
Please let us know of any references that suggested the same idea so that we can properly credit them. W3) How much hyperparameter tuning was done? A) Basically, we followed most hyperparameter settings of GDSS, as Wave-GD is built upon GDSS. The hyperparameters we mainly tuned were the learning rate and $J$, i.e., the number of scales. As we used two wavelet filters (i.e., low-pass and band-pass filters), we conducted experiments with at least two scales ($J=2$). Subsequently, we increased $J$ by 1 for the band-pass filter until the optimal results were achieved. For the Grid dataset, the maximum $J$ we could try was $6$ due to computational limitations. We provided ablation studies on $J$ in Table 4 in the main paper and Section 4 in the supplementary material for all datasets. While the optimal $J$ may differ across datasets, the results within each dataset are not very sensitive but rather robust to the choice of $J$. For the learning rate, we performed a grid search in {$0.0008, 0.001, 0.002, 0.005, 0.01$} for each dataset. W4/Q1) Which scales were obtained? The values should be presented in the experimental section. A) In the pdf of the global response, we reported the table of converged scale values. We will add the table in the revised paper. In Section 7 of the supplementary material, we provided all scale convergence flows along epochs and analyzed the converged scale ranges for generic graph datasets. We observed that the scales differ for different datasets. Interestingly, the scales of the large Grid dataset (with $100 \leq |V| \leq 400$ nodes) were generally smaller compared to the Ego-small ($4 \leq |V| \leq 18$) and Community-small ($12 \leq |V| \leq 20$) datasets. Specifically, the converged scales of Ego-small ranged within $[20, 47]$ and the scales of Grid ranged within $[12, 36]$. Note that smaller scales capture local graph features with higher frequencies. 
Therefore, the results demonstrate that capturing local and detailed graph representations is more critical when dealing with complex and large graphs than with smaller graphs. In other words, for the Community-small dataset, features in relatively low frequency (e.g., cluster-related features) were captured with larger scales. Q2) Relative strength of lambda in Eq. 7. A) The $\lambda$'s in Eq. 7 were set to 1 so the relative strength between $A$ and $A^{s}$ was the same. Q3) Performance comparison between the spectrally filtered adjacency matrix and $A^{s_i}$. A) Using the spectrally filtered adjacency matrix (i.e., wavelet coefficient $W_{A}(s)=\psi_s \cdot \mathbf{A}$) with spectrally filtered node features (i.e., $W_{X}(s)$) will produce the same result as using $A^s$ and $X$. As shown in the proof of Lemma 1 in the supplementary material, $W_{A}(s) \cdot W_{X}(s) = A^{s}X$, and therefore the result of using either form of the data will be equivalent. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. In light of the responses by the authors I have increased my score.
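The Lemma 1 identity invoked in the rebuttals above, $W_{A}(s) \cdot W_{X}(s) = AUk^2(s\Lambda)U^TX = A^sX = AX^s$, can be sanity-checked numerically. The sketch below is not from the paper: the toy graph, the heat-kernel filter $k$, and the scale value are illustrative assumptions, and the adjacency is filtered on the right to match the product form stated in the proof.

```python
import numpy as np

# Toy symmetric adjacency A and node features X (illustrative, not the paper's data)
rng = np.random.default_rng(0)
n, d = 6, 3
A = rng.integers(0, 2, (n, n)).astype(float)
A = np.triu(A, 1)
A = A + A.T
X = rng.standard_normal((n, d))

# Graph Laplacian and its eigendecomposition L = U diag(lam) U^T
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)

s = 2.0                          # hypothetical wavelet scale
k = np.exp(-s * lam)             # heat kernel as an example spectral filter k(s*Lambda)
F = U @ np.diag(k) @ U.T         # the filter psi_s
F2 = U @ np.diag(k**2) @ U.T     # the squared filter, defining A^s = A F2 and X^s = F2 X

W_A = A @ F                      # wavelet coefficients of the adjacency, W_A(s)
W_X = F @ X                      # wavelet coefficients of the features,  W_X(s)

# Lemma 1: W_A(s) . W_X(s) = A U k^2(s*Lambda) U^T X = A^s X = A X^s
assert np.allclose(W_A @ W_X, (A @ F2) @ X)   # A^s X
assert np.allclose(W_A @ W_X, A @ (F2 @ X))   # A X^s
```

The check relies only on the orthonormality of the Laplacian eigenvectors $U$ (so $F F = F2$), so any spectral kernel could be substituted for the heat kernel used here.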
Summary: The paper introduces a graph generative approach that leverages diffusion models and wavelet theory. The key concept revolves around utilizing the wavelet transform of the adjacency matrix across various scales, and learning a joint backward diffusion process that remains valid at all considered scales simultaneously. Consequently, the proposed approach exhibits a multi-resolution characteristic. The proposed approach is evaluated on the graph generation task using four benchmarks and is compared against autoregressive and one-shot approaches from previous works. Strengths: The adaptation of diffusion models to the graph generation field is a timely and intriguing topic, considering the challenges posed by the discrete nature of graph data. The experimental evaluation provides substantial support for the claims made in the paper. The proposed approach outperforms recent methods on three benchmarks in the graph generation task and achieves comparable results in the molecule generation task. The experimental setup and the metrics considered for the evaluation are clearly presented. Weaknesses: Having to choose the parameter *J* without any insight or guidance for different datasets can be a drawback in practical applications. I would suggest further investigating the relationship between performance and *J* among different datasets, along with the impact of graph statistics on the optimal *J*. Also, it is not clear how many scales were used to obtain the results reported in Tables 1-3. The time complexity of the proposed approach, as indicated in Table 3, could present challenges in certain settings. Further discussion or potential mitigation strategies for addressing this issue would be beneficial. Clarity could be improved in certain sections. For instance, Figure 1's purpose is unclear to me, and the role of spectral coherence between nodes and edges in the proposed approach needs better explanation.
Providing a clear definition of the scale (s) domain and kernel function (k) in the preliminary sections would also help readers understand the concepts more quickly. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be helpful if the authors could provide clarification on why only $A^s$ and not $X^s$ is considered for learning the diffusion process. Regarding Table 4, should the lower numbers indicate better results? In the QM9 part of the table, higher numbers are bold. Is my interpretation wrong? The following paper, which explores diffusion in the wavelet coefficient space for 3D shape generation, could be an interesting reference: \ _Hui, Ka-Hei, et al. "Neural wavelet-domain diffusion for 3d shape generation." SIGGRAPH Asia 2022 Conference Papers. 2022_ Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1) Ablation study on $J$ and analyses of the impact of graph statistics on the optimal $J$ are needed. Also, how were the $J$'s set to obtain the main results? A) We provide our answers in the following, and we will discuss them in the main manuscript to make the paper clearer. * We agree that the number of scales $J$ needs to be carefully chosen for each dataset. We have already done the ablation studies on $J$, which are given in Table 4 of the manuscript for the Community-small and QM9 datasets and Section 4 of the supplementary material for the Ego-small and Grid datasets. We observed that the results are not too sensitive to $J$, but we had to try varying numbers to obtain the optimal result. * Regarding the relationship between $J$ and graph statistics, we empirically observed that a relatively smaller $J$ ($J=4$) performs better on large Grid graphs (with $100 \leq |V| \leq 400$) and a larger $J$ ($6 \leq J \leq 11$) showed the best performance on smaller graphs such as Ego-small ($4 \leq |V| \leq 18$), Community-small ($12 \leq |V| \leq 20$), and QM9 ($1 \leq |V| \leq 9$). * As we mentioned in line 224, to obtain the results in Tables 1-3, $J$ was set to $11$, $6$, $4$, and $6$ for the Ego-small, Community-small, Grid, and QM9 datasets, respectively. Also, we analyzed the convergence of the $J$ scales along epochs for each dataset and reported the results in Section 7 of the supplementary material. The exact trained scale values were reported in the pdf of the general response. W2) Time complexity could present challenges. A) As in GDSS, PC sampling was used with a reverse diffusion predictor and Langevin MCMC corrector for the QM9 dataset. By omitting the correction step and using a predictor-only method, the sampling time was reduced from 154s to 61s. However, we observed a trade-off between sampling time and sample quality, as validity slightly decreased (~1.7%p drop) for the predictor-only method.
Regarding the complexity challenge for the spectral decomposition of large graphs, as we mentioned to Reviewer 6cNy, there are approximations available. W3) Need to improve clarity (e.g., Figure 1, description of the spectral coherence, scales, and kernel function). A) Thank you for the suggestion. We will improve the clarity of the points the reviewer raised, such as a description of the kernels we used and the definition of scales, in the preliminary section as well as in Figure 1. The intention of Figure 1 was to show that certain connections/disconnections are accentuated at specific scales, which can be better characterized in the diffusion model, and we will connect the dots between Figure 1 and the design of our model to help future readers. Q1) Why is only $A^s$ used and $X^s$ not considered? A) As shown in the proof of Lemma 1 in the supplementary material, using either $A^s$ or $X^s$ is enough to capture the spectral coherence between node features and graph structures. Specifically, Eq. (2) and (3) in the proof show that $W_{A}(s) \cdot W_{X}(s) = AUk^2(s\Lambda)U^TX = A^sX = AX^s$. Therefore, either $A^s$ or $X^s$ should be used with an unfiltered counterpart. We will add this explanation to the revised manuscript for better understanding. Q2) Interpretation in Table 4 is confusing. A) As in Table 3, higher numbers indicate better results for QM9. Note that the metrics we used for the molecular QM9 dataset, i.e., validity, uniqueness, and novelty, are different from those for the generic graph datasets. Unlike QM9, lower MMD values indicate better results for the generic graph datasets such as Ego-small, Community-small, and Grid. We will improve the clarity of Table 4 by adding up/down arrows beside the metrics. Q3) Suggestion on additional reference. A) Thank you for suggesting an interesting reference on 3D generative modeling with wavelet representations.
The suggested reference and our work use different wavelet constructions, and we will discuss it in the Related Work section. --- Rebuttal Comment 1.1: Comment: I thank the authors for their time and effort in answering my questions. After having considered both the authors' response and the other reviews, I maintain my original score.
Summary: This paper claims that node features and graph topology are not coherent in most previous generative graph models and that high-frequency signals in node features and graph topology may be neglected during the generation process. Therefore, they propose a Wavelet graph diffusion model (Wave-GD) with score-based diffusion. Specifically, it uses different graph wavelet bases to get graph signals in different frequency ranges. The overall model diffuses node features, the original graph, and the adjacency matrices constructed by graph wavelet bases. To improve the coherence between node features and graph topology, the score-based models are based on graph multi-head attention layers, which take the product of node features and adjacency matrices, including the original one and ones learned by different bases, which alleviates the gap between node features and graph structures. Performance on three small synthetic datasets and one real-world molecular dataset shows that the proposed model can generate graphs that are not only realistic in shape but also obey chemical rules with high fidelity. Strengths: 1. The novel part is that the paper considers high frequencies and discovers the coherence between node features and graph structures in diffusion models, though it still uses a simple dot product to address it. 2. The empirical results on real-world molecular datasets show the effectiveness in terms of generating realistic graphs with high fidelity and high novelty in a relatively fast time. 3. The model enjoys high flexibility regarding different tasks where frequency graph signals at different scales are needed. Weaknesses: 1. "nodes and edges" in line 26 is misleading; it would be better if you mention that "node" means "node features" and "edges" means "graph structures" in the introduction and then use "node" and "edge" for simplicity. 2. Multi-resolution and coherence are two major claims in your paper, but it is not verified that coherence improves performance. 3.
the proposed model is limited to small graphs due to the decomposition in spectral graph wavelet bases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Would you give some experimental analysis that can show if the main contribution comes from coherence or multi-resolution? Q2: how do you define $s$ for graph wavelet bases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. It may produce molecules that may harm the body. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1) Change descriptions of the node and edge in line 26. A) We will use the terms ‘node features’ and ‘graph structure’ properly in the intro, as the reviewer suggested. We appreciate the reviewer's comment. W2/Q1) Does the performance improvement come from coherence or multi-resolution? A) As we have shown in the proof of Lemma 1 (in the supplementary material), the coherence, as a dot product between the wavelet representations $W_{A}(s)$ and $W_{X}(s)$ at scale $s$ in the spectral domain, is equivalent to a simple matrix multiplication $A^{s}X$ in the graph domain. Although the concept of coherence was not used in the previous literature, GDSS used the formulation $AX$ to compute scores for joint distribution learning, and we are essentially deriving its multi-resolution representations and using them as the multi-resolution coherence. Therefore, we conclude that it is the multi-resolution that improves graph generation. W3) The method is limited to small graphs due to spectral decomposition. A) We agree that online spectral decomposition can be challenging for extremely large graphs due to its computational cost, i.e., $O(N^3)$ with $N$ nodes. To mitigate this issue, first of all, conventional polynomial approximations for the transforms are readily available based on [1,2], which have been cited in the original manuscript. Moreover, for exact computation, the decomposition does not necessarily have to be performed online. It can be done in the data preprocessing stage to save all eigenvectors and eigenvalues before model training, which can be a reasonable strategy for a population of graphs with relatively small sizes. To empirically assess the computational cost of the decomposition, we examined the actual decomposition time of randomly generated fully connected graphs of diverse sizes and presented the results in the pdf of the general response (Fig 1(b)).
The decomposition was performed using PyTorch (with the torch.linalg library) on one Nvidia T4 GPU. As shown in the result, a random graph with 10 nodes requires 0.0004s and a random graph with 5000 nodes requires 20.1s for decomposition. If there are 1000 graphs with 5000 nodes, the required time is probably <6 hours. We think this computational cost with ~5000 nodes may be reasonable and applicable in many practical settings, as it only needs to be done once, at the preprocessing stage. [1] Xu et al. "Graph wavelet neural network." International Conference on Learning Representations (ICLR), 2019. [2] Ma et al. "Learning multi-resolution graph edge embedding for discovering brain network dysfunction in neurological disorders." Information Processing in Medical Imaging (IPMI), 2021. Q2) How do you define $s$? A) The scales for the graph wavelet bases were randomly initialized within the range $[10, 50]$, and they converged to different values after training. We provided figures of the convergence of the scales for each dataset in Section 7 of the supplementary material. Also, the optimal values are presented in the pdf of the general response, which we will add in the revision once the paper is accepted. --- Rebuttal Comment 1.1: Title: Thanks for the authors' response. Comment: Thank all authors for clarifying my concerns. I'd like to keep my score.
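The timing experiment described above can be approximated with a short script. This is a hedged CPU sketch using NumPy rather than the authors' torch.linalg-on-GPU setup, so absolute numbers will differ; it only illustrates measuring the $O(N^3)$ cost of a dense symmetric eigendecomposition of a random graph Laplacian.

```python
import time
import numpy as np

def eig_time(n, seed=0):
    """Wall-clock time of a full eigendecomposition of an n x n graph Laplacian."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, (n, n)).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                      # random undirected graph
    L = np.diag(A.sum(1)) - A        # combinatorial Laplacian
    t0 = time.perf_counter()
    np.linalg.eigh(L)                # dense O(n^3) symmetric decomposition
    return time.perf_counter() - t0

for n in (10, 100, 1000):
    print(f"n={n}: {eig_time(n):.4f}s")
```

As in the rebuttal, the eigenvalues and eigenvectors can be cached once at preprocessing time, so this cost is paid only once per graph rather than at every training step.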
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive reviews with unanimously positive evaluations. In the pdf of the general response, we present $\bf{1) }$ a revised version of Fig. 2a and its description, $\bf{2) }$ an analysis of the computational time of eigendecomposition, $\bf{3) }$ optimal scale values after training, $\bf{4) }$ a comparison with additional baselines, and $\bf{5) }$ a comparison on an additional dataset. Pdf: /pdf/5d2a5cfa2f6b27074637c24bb4c133863e5559d5.pdf
NeurIPS_2023_submissions_huggingface
2023
Alleviating the Semantic Gap for Generalized fMRI-to-Image Reconstruction
Accept (spotlight)
Summary: This paper presents a new approach to generalized fMRI-to-image reconstruction with a focus on incorporating image semantics and addressing semantic gaps during reconstruction. To address the inside-space semantic gap, a CLIP based feature space is utilized. To address the outside-space semantic gap, a structural information guided diffusion model is used to transfer semantics. An adaptive strategy to integrate the semantic and structural information is also used. Experiments are conducted on the GOD and NSD datasets with several existing baselines. Strengths: The paper is well motivated and the methodology components are clearly described. The focus on incorporating semantics and addressing semantic gaps is interesting, and the adaptive integration with LDM is novel. The experiments are relatively comprehensive, and the margins of improvement, especially on GOD, appear to be significant. The ablation study provides clear evidence on each module of the methodology, especially their adaptive integration. Weaknesses: It is not clear why the authors considered only one baseline for comparison on NSD. The performance metrics lack sufficient statistics on this dataset. It is not clear if the training and testing is done separately per subject, or for all three subjects simultaneously but without held-out subjects? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please respond to the questions raised above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discussed briefly some limitations associated with the current methodology, without discussing potential future directions to address such limitations or their impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## To Reviewer AK4m ### W1. About only considering one baseline model for comparison on NSD. In Section 4.5, we want to illustrate that, following the dataset split method in [a], even a very simple baseline model (k-nearest-neighbor) can achieve a comparatively good decoding result. This could indicate that such a random split cannot ensure that the model has learned the visual representation mechanism and maintains generality. Therefore, we propose a more realistic zero-shot learning (ZSL) division method. The aforementioned baseline methods fail to generalize (Figure 4, 52% in the ZSL split), but GESS can generalize well in this complex scenario. For sufficient statistics, we provide the variance across repeated experiments (10 trials with different seeds) in Table R2 and will provide more details of the experiments in the supplementary materials. ### W2. Details of the training and testing split. Following [b], we train a model on one subject's training trials (1200 samples from 150 categories) and test it on the test trials (50 samples from 50 categories) of the same subject. The training and testing images come from different categories to construct a semantic gap (or ZSL scenario). We repeat our experiments on three subjects to demonstrate the generality and average the performance for the final results. Cross-subject generalization remains challenging. Because brain signals are highly personalized, with different signal dimensions and visual area locations across subjects [c], effectively aligning data from multiple subjects into a shared space is difficult. Addressing cross-subject decoding is thus important but complex, and beyond the scope of the current work. We leave exploring methods to achieve cross-subject prediction for future work. ### Limitations. Discussions of future directions to address the limitations.
(1) Regarding the low SNR of fMRI, combining EEG [d] and fMRI [b] may capture better brain signals and provide complementary information, yielding more robust and accurate models for decoding semantic representations. Integrating multi-modal neural data has the potential to significantly advance performance. (2) Cross-subject generalization is an important next step. Collecting experimental recordings is expensive, and the current models [a] require larger datasets, so training a model on several subjects' neural signals could provide further gains in accuracy. (3) A more efficient inference strategy. The current model samples one image over multiple time steps, which has greater computational cost compared to GANs. With the development of LDMs, a more efficient solver is expected. We will add more descriptions of future directions to the paper. All of the details mentioned above will be added to the paper or supplementary material. ### Table R2: Effectiveness of different modules by perceptual similarity (CLIP). | Subject | Sub1 (%) | Sub2 (%) | Sub3 (%) | AVG (%) | |---------|---------:|---------:|---------:|--------:| | Full Method | 78.0±0.7 | 84.8±0.8 | 80.4±0.6 | 81.1±0.7 | | w.o.MOE | 69.2±1.0 | 75.2±3.0 | 77.2±1.0 | 73.9±1.7 | | w.o. momentum alignment | 63.6±0.1 | 60.2±2.7 | 68.2±0.4 | 64.0±1.1 | | w.o. data augmentation | 72.4±3.0 | 76.4±0.6 | 78.2±2.4 | 75.7±2.0 | | w.o. CycleGAN feat. | 75.6±1.8 | 78.0±0.8 | 78.0±0.3 | 77.2±1.0 | | w.o. linear reprojection | 68.4±1.4 | 70.8±2.7 | 76.2±1.0 | 71.8±1.7 | ## References [a] Takagi, Yu, and Shinji Nishimoto. "High-resolution image reconstruction with latent diffusion models from human brain activity." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [b] Tomoyasu Horikawa and Yukiyasu Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. Nature communications, 8(1):15037, 2017. [c] Rieck, Bastian, et al.
"Uncovering the topology of time-varying fMRI data using cubical persistence." Advances in neural information processing systems 33 (2020): 6900-6912. [d] Bai, Yunpeng, et al. "DreamDiffusion: Generating High-Quality Images from Brain EEG Signals." arXiv preprint arXiv:2306.16934 (2023).
Summary: This paper addresses the problems of the semantic gap between training and testing fMRI neural responses and the generalization of fMRI-to-image reconstruction models. A pre-trained CLIP model is leveraged to map the training data to a latent feature space in which sparse semantics are extended into dense semantics, thereby alleviating the semantic gap within known semantic spaces. Overall, it is an interesting paper, and the empirical studies show some improvement. Strengths: Please refer to the question section Weaknesses: Please refer to the question section Technical Quality: 3 good Clarity: 3 good Questions for Authors: The following are the major concerns and minor comments: 1) In this paper, the notations are confusing. In regular papers, scalars are denoted by small letters, vectors are denoted by small letters (highlighted in bold), and matrices are denoted by capital letters in bold. In this paper, there are a lot of conflicts. It is so hard to trace what is a set, a matrix, or even a distribution. 2) The proposed method can be summarized in the form of an algorithm or pseudocode. 3) This paper is hard to follow. To explain the technical details of the proposed method clearly, some sections should be revised and reorganized. 4) Some abbreviations are presented before their definitions – e.g., CLIP in the abstract. 5) There are some minor linguistic and typo problems in this paper. E.g., “Alleviating”, not “Allievating”, in the title of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the question section Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## To Reviewer qNjJ ### Q1. About the confusing Notations. We acknowledge the potential confusion caused by the notations employed in the paper and recognize the need for adopting clearer conventions. Accordingly, we will make revisions to define vectors with bold lowercase letters and represent matrices with uppercase letters using bold font, to achieve overall harmony in the final version. Additionally, for the sake of clarity (Algorithm R1), we have modified the subscript notation in the original text to superscript. Furthermore, we will ensure that the paper adheres to standard conventions. Concretely, (1) we will add subscripts to denote the conditional variable $h$ (in line 207) for clarity and provide more descriptions for each variable to make their meanings intuitive. (2) Some variables are noted with the subscript 'te' while others are not (like in line 235), depending on the context, which can be confusing. To standardize the notation, we will use superscripts uniformly across all variables. (3) We notice that in our paper, functions, modules, and matrices are all denoted using uppercase letters (e.g. lines 139, 171, 236, etc.), which is confusing. To follow standard conventions, we will revise the notations. In summary, we will make revisions to make the final version easier to follow for readers. ### Q2. To summarize as a Pseudo code description. We realize that presenting the proposed method in the form of pseudo code could aid clarity and we provide it in Algorithm R1. Kindly note that we have modified the subscript notation in the original text to superscript for clarity. We will include this description in the revised paper. ### Q3 About technical details for reproducibility. Due to page limits, some details were omitted or put in the supplementary materials like the detailed parametric settings (Section 3.4). 
To improve reproducibility and readability, we will add more details: (1) In Section 3.2.1, we will provide more context for the momentum alignment and linear reprojection methods. This includes the motivations, assumptions and parameter values. (2) The main text lacked details of the structural information extraction module, such as the transformer size [d] and codebook size. Section 3.2.2 will include more details on the feature extraction using CycleGAN [a]. (3) Section 3.3 omitted some details on the normalization, kernel density estimation and mixture of experts methods, as well as parameters of our proposed conditioning strategy. We will expand this discussion in the main text and supplementary materials. (4) Section 4.1 lacked details on the dataset and preprocessing steps, which can be found in references [b][c]. We will add the necessary information in the supplementary materials. Later, we will provide intermediate features and additional results to enable reproducibility. We will also **release the code** to facilitate understanding of the work and enhance reproducibility. ### Q4. Q5. Some correctness. We acknowledge that some sections could be improved by correcting all the spelling errors in the revised version. We will also revise the case where abbreviations are presented before their definitions (line 45 for CLIP, line 117 for VQGAN, and some others in line 71, etc.). Additionally, we will correct spelling errors such as "alleviating", and address inaccuracies in word choices, such as revising "sub-class." These revisions will be made in the final version. Thank you for your advice and reminders. We will add more details and revise the paper accordingly to improve the reading experience. ## Algorithm R1 Image reconstruction from fMRI using GESS (component constitution strategy). **Input**: Paired fMRI $X$ and Image $Y$ Dataset: $D^{tr} = {(x_i^{tr}, y_i^{tr})}^N_{i=1}$ and $D^{te} = {(x_i^{te})}_{i=1}^N$, N is the number of samples. 
**Output**: Reconstructed images $\hat{y}^{te}$ from fMRI $x^{te}$. **Training**: Initialization: Constructing CLIP [a], VQGAN [b] and LDM [c] by pretrained parameters $\phi$, $\gamma$ and $\theta$. Training $\beta_c$ of **semantic module $M_c$**: 1. Extracting semantic features $c_i^{tr}$ from $y_i^{tr}$ by CLIP: $c_i^{tr}=f_ {\phi}(y_i^{tr}) $. 2. Training ridge regression parameters $\beta_c$ by {$c_i^{tr}, x_i^{tr}$}. Training $\beta_s$ of **structural model $M_s$**: 1. Extracting structural features $s_i^{tr}$ from $y_i^{tr}$ by VQGAN: $s_i^{tr}=f_ {\gamma}(y_i^{tr}) $. 2. Flattening $s^{tr}_i$, and training ridge regression parameters $\beta_s$ by {$s_i^{tr}, x_i^{tr}$}. **Inference**: 1. **Semantic module.** Predicting semantics from fMRI: $\hat{c_i}^{te} = \beta_cx_i^{te}$. Using momentum alignment and linear reprojection to get $\hat{c}^{te}_{i,r}$ (Section 3.2.1). 2. **Structural module.** Predicting structure from fMRI: $\hat{s}^{te}_i = \beta_sx_i^{te}$. 3. **MOE.** Estimating weighting parameters $\pi_c$ by KDE and $\pi_s = 1-\pi_c$ (Section 3.3.1). 4. **LDM.** Reconstructing by CS strategy (Section 3.3.2): 1. Initializing $z_T$ by Gaussian. 2. For $t = T, T-1, ..., 1$: 1. Conditioned by cross attention: $z_{t-1}' \sim p_{\theta}(z_{t-1}'|z_t,\hat{c}^{te}_{i,r})$. 2. Conditioned by CS strategy: $z_{t-1}=z_{t-1}'-\pi_sF_s(z_{t-1}')+\pi_s F_s(\hat{s}^{te}_{i})$ 3. $\hat{y_i}^{te} = f_{\gamma}(z_0)$. [a] Roman Beliy, Guy Gaziv, Assaf Hoogi, Francesca Strappini, Tal Golan, and Michal Irani. From voxels to pixels and back: Self-supervision in natural-image reconstruction from fmri. [b] Tomoyasu Horikawa and Yukiyasu Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. [c] Takagi, Yu, and Shinji Nishimoto. "High-resolution image reconstruction with latent diffusion models from human brain activity." [d] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis.
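The component substitution update in step 4 of Algorithm R1 above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the real $F_s$ operates on LDM latents, while here a crude global-mean low-pass operator is assumed purely for demonstration.

```python
import numpy as np

def component_substitution(z_prime, s_hat, pi_s, F_s):
    """CS update: z_{t-1} = z' - pi_s * F_s(z') + pi_s * F_s(s_hat),
    i.e., swap a pi_s-weighted structural component of the denoised
    latent z' for the same component of the predicted structure s_hat."""
    return z_prime - pi_s * F_s(z_prime) + pi_s * F_s(s_hat)

# Hypothetical structural extractor: a global-mean low-pass operator.
def mean_lowpass(z):
    return np.full_like(z, z.mean())

z_prime = np.arange(4.0)   # toy denoised latent
s_hat = np.ones(4)         # toy predicted structural feature
z = component_substitution(z_prime, s_hat, pi_s=0.5, F_s=mean_lowpass)
```

With $\pi_s = 0$ the update is a no-op (pure semantic conditioning), and with $\pi_s = 1$ the structural component of $z'$ is fully replaced, matching the MOE weighting $\pi_s = 1 - \pi_c$ in step 3.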
--- Rebuttal 2: Comment: I have read all the reviews and rebuttals. I am satisfied with the author's responses to my concerns and still find the manuscript above the acceptance threshold. I raise my score to 6.
Summary: This paper proposes the GESS model to address the semantic gap between the training and testing data in the generalized fMRI-to-image reconstruction task. A CLIP based method is used to alleviate the semantic gap for instances within the known semantic space, and a structural information guided diffusion model is used to alleviate the semantic gap for instances in the unknown semantic space. In addition, this paper quantifies the semantic similarity between a given instance and the training data. Strengths: Originality: The design of the generalized fMRI-to-image reconstruction task is interesting. It considers both the known and the unknown subspace, and proposes corresponding solutions. Clarity: The motivation of the method is clearly addressed. Significance: The proposed method not only explicitly extracts semantic and structural information, but also adaptively integrates the features based on the semantic uncertainty, alleviating the semantic gap and achieving general and vivid reconstructions. Weaknesses: 1. The comparison is insufficient, which may be partially due to the specificity of the task. 2. The quantitative results are insufficient, and the ablation experiment is incomplete, which cannot well explain the quality of the proposed method. For example, "Momentum alignment" and "Linear reprojection" described in 3.2.1, and "VQ-GAN" and "CycleGAN" in 3.2.2, etc. 3. The quantified semantic confidence is not given in the experiment. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: There is a confusion between "VQGAN to extract the latent representation" on line 190 and the decoding part of VQ-GAN in Figure 2. As far as I understand, this part of the figure uses VQ-GAN to encode features, and the encoding part should be used. Is it possible to provide a computational efficiency comparison of the models? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: It's better to include the discussions about limitations on experiment validation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## To Reviewer NuyD ### W1. About the limited comparison experiments. When we submitted the paper, we could only find a limited number of open-source methods ([a], [b], [c], etc.), among which [c] is the state of the art from CVPR 2023. We will continue searching and include more up-to-date methods for comparison. ### W2. More quantitative experiments and ablation studies. Due to space constraints, we did not include many ablation studies. To demonstrate the effectiveness of the proposed modules in Sections 3.2 and 3.3, we add more quantitative results (including the mean and standard deviation of the comparison metrics) and ablation studies (the effectiveness of momentum alignment, linear reprojection, CycleGAN features, data augmentation, the MOE strategy, etc.) in Figures R1 and R2 and Table R2. ### W3. About the quantified semantic confidence. In our approach, the semantic confidence is implicitly measured by kernel density estimation (KDE) in the MOE component. To quantify the semantic confidence, we perform a 100-class classification task and use the maximum of the estimated posterior probabilities as the instance confidence. As shown in Table R3, the semantics estimated by our model are comparatively confident across different subjects (93.6% on average). To further demonstrate the effectiveness of our estimated semantic confidence, we compared the performance of our model reconstructed with MOE-allocated weights (confidence) to that with a constant weight in Table R2. The results (73.9% with a constant weight vs. 81.1% with MOE) show the benefit of our estimated semantic confidence. ### Q1. Details of the VQGAN module. In our approach, we treat the VQGAN model as a perceptual compression method and use its encoder to extract compressed image features, similar to CNNs. The subsequent diffusion process and other calculations are all performed in the compressed image feature space. 
The VQGAN decoder is responsible for ultimately decoding the compressed features into the image space for reconstruction. We will add the above details to the paper to clarify our usage of VQGAN's encoder and decoder. ### Q2. About the computational efficiency comparison. We provide a list of the computational costs of the individual modules in our method and a computational cost comparison between our proposed GG and CS strategies in Table R1. The results demonstrate that CS is approximately 3 times faster than GG. Further analysis indicates that the most computationally demanding components of the model are the inference and embedding stages. We will add the above results to the Supplementary Material. ### Limitations. More about the experiments. Regarding the experimental results, the current model is limited in its ability to generalize across subjects. As large differences exist across different subjects' signals, we cannot validate whether the model can accurately decode semantic representations for new subjects. Following [b], we had to average the repeated fMRI recordings to improve their signal-to-noise ratio, which is an inefficient use of the available data. We will expand the discussion of these limitations and avenues for future improvement in the paper. ### Table R1: Time consumption of each module. | Method | Time | |-|:-:| | Gradient Guided strategy | 679.87 s | | Component Substitution strategy | 215.45 s | | Fitting Semantic Module $M_c$ | 0.158 s | | Fitting Structural Module $M_s$ | 2.459 s | | CLIP embedding (pre-processing) | 47.553 s | | VQVAE embedding (pre-processing) | 72.366 s | ### Table R2: Effectiveness of different modules by perceptual similarity (CLIP). | Subject | Sub1 (%) | Sub2 (%) | Sub3 (%) | AVG (%) | |---------|---------:|---------:|---------:|--------:| | Full Method | 78.0±0.7 | 84.8±0.8 | 80.4±0.6 | 81.1±0.7 | | w.o. MOE | 69.2±1.0 | 75.2±3.0 | 77.2±1.0 | 73.9±1.7 | | w.o. 
momentum alignment | 63.6±0.1 | 60.2±2.7 | 68.2±0.4 | 64.0±1.1 | | w.o. data augmentation | 72.4±3.0 | 76.4±0.6 | 78.2±2.4 | 75.7±2.0 | | w.o. CycleGAN feat. | 75.6±1.8 | 78.0±0.8 | 78.0±0.3 | 77.2±1.0 | | w.o. linear reprojection | 68.4±1.4 | 70.8±2.7 | 76.2±1.0 | 71.8±1.7 | ### Table R3: Average confidence across subjects on the test set. | Subject | Confidence (%) | |-|:-:| | Subject1 | 93.2±14.2 | | Subject2 | 94.4±10.1 | | Subject3 | 93.3±12.3 | | Average | 93.6±12.2 | ## References [a] Roman Beliy, Guy Gaziv, Assaf Hoogi, Francesca Strappini, Tal Golan, and Michal Irani. From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI. In Advances in Neural Information Processing Systems, pages 6514–6524, 2019. [b] Tomoyasu Horikawa and Yukiyasu Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications, 8(1):15037, 2017. [c] Zijiao Chen, Jiaxin Qing, Tiange Xiang, Wan Lin Yue, and Juan Helen Zhou. Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding. arXiv preprint arXiv:2211.06956, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for providing the feedback. The responses have clarified some of my concerns and I would like to raise my score to weak accept.
Summary: This paper's objective is to enhance the generalization performance of the fMRI-to-image reconstruction task through dense representation learning. To achieve this, a pre-trained CLIP is utilized to establish a semantic space, thereby bridging the gap between the training and test sets. Specifically, the paper presents an adaptive method for integrating semantic and structural information. A latent diffusion model is developed to align the semantic and structural data using the proposed gradient-guided method. Finally, the method's effectiveness is evaluated on both the GOD and NSD datasets. Strengths: The paper is easy to read, and both the idea and the proposed method look novel. Weaknesses: While I recognize that the paper presents a straightforward and novel extension, I am not completely persuaded by its benefits. In its current state, the manuscript lacks clarity regarding the distinct advantages of each section. Furthermore, I believe additional comparisons with other image reconstruction methods would be beneficial to fully substantiate the paper's contributions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: While I appreciate the use of data augmentation for regularization in momentum alignment, could you clarify how this might be viewed as a novelty of the paper in addressing the semantic domain shift? Could you provide more insight into the assumption of a linearly weighted sum of neighboring elements mentioned in line 169? Why is this assumption necessary? Regarding line 240, the authors examine the computational complexity of GG. It seems that ablation studies in terms of both computational and generalization performance would be beneficial. The discussion on the outside-space gap is interesting, yet I feel some experimental evaluations demonstrating this problem in real-world cases would strengthen the argument. In line 282, the paper considers the performance of other methods using the paper's new split strategy. 
Why should a new training and test split cause a decrease in performance? What would the method's accuracy be under the previous split? I believe a more comprehensive comparison is needed for a fair assessment. And finally, what about between-subject accuracy? Do you think that the current method can manage the domain shift and perform well in between-subject contexts? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Indeed, the paper discusses the issue of significant variance in generation quality when dealing with noisy data. Could you please check the spelling of "Allievating" in the title? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## To Reviewer ciXJ ### Q1. About the novelty of data augmentation in momentum alignment. Unlike previous works [b][d], we explicitly define the semantic gap and reduce it through momentum alignment. The alignment requires accurate descriptions of the data distribution, while accurately estimating statistics from the limited fMRI data, which deviates from real-world distributions, remains a challenge. Therefore, the data augmentation used in momentum alignment is important for our novel contribution 1 (Section 1), as it helps learn well-aligned CLIP features. Ablation studies in Table R2 demonstrate its effectiveness (75.7% vs. 81.1% for the full method). Kindly note that our contribution 1 involves several components (momentum alignment, linear reprojection, etc.). ### Q2. About the importance of the linear weighting assumption. $c_{te}$ is reprojected from $c_{te}^*$ as a linear combination of its nearest neighbors. Practically, we found that optimizing based on the MSE loss in Section 3.2.1 does not ensure that $c_{te}^*$ lies on the manifold, and this deviation leads to performance degradation (Table R2, 71.8% vs. 81.1% for the full method). To project the vector onto this unknown manifold, following [a], we make the assumption that after the aforementioned augmentation, the feature space $C_a$ is locally continuous, such that a linear combination of its vectors approximately lies on the manifold. With this non-parametric assumption, we do not need to explicitly define the manifold or fit an additional model to approximate it, making our approach more efficient and effective. ### Q3. The computational complexity of the GG and CS methods. To compare the computational costs of the two conditioning strategies, we measured the inference time of the GG and CS methods when generating 50 test images of size 768 x 768 pixels using 50 DDIM steps on an NVIDIA RTX 4090 GPU, AMD Ryzen 5950X CPU, and 64GB RAM. 
As shown in Table R1, the CS method requires approximately one third of the time of the GG method, demonstrating significantly better computational efficiency. ### Q4. More experiments and examples of outside-space cases. In our model, the outside-space problem is addressed by the MOE strategy (Section 3.3.1). As shown in Table R2, when outside-space examples are not explicitly handled (using only a constant-weighting assumption), performance degrades significantly (73.9% vs. 81.1% for the full method). Concretely, in the third row of plots in Figure 4, when the concept of building photos is absent from the training set, the baseline model predicts according to prior experience and generates an irrelevant reconstruction (fruit), while GESS, which considers the zero-shot learning (ZSL) scenario, works comparatively well. ### Q5. More discussion and a comprehensive comparison of split strategies. The generalized split in our paper differs from the random split in that our split strategy considers the zero-shot learning (ZSL) scenario, where the training and test sets come from different categories. This split follows [b] and aligns with reality: in experiments, collecting brain signals is expensive and time-consuming, which leads to a limited sampling scope. When a random split is considered (the training and test sets have high semantic overlap), both our model and the baseline model perform well (Figure 4, 81% vs. 88.8% accuracy by perceptual similarity). Regarding the performance degradation (81% vs. 52% for the baseline model, in Figure 4), some methods trained on a limited number of samples tend to overfit without learning the underlying visual mechanisms. As a result, they fail to generalize to real-world images under the more challenging ZSL split. ### Q6. About applying GESS in the cross-subject case. Generalizing GESS to the cross-subject scenario remains challenging. 
Brain signals are highly individualized, with different signal dimensions and visual area locations across subjects [b][c], so it is difficult for a shared network to process them directly. Addressing the lack of cross-subject generalization is complex and beyond the scope of the current work. We leave exploring methods to achieve cross-subject decoding for future work. ### Table R1: Time consumption of each module. | Method | Time | |-|:-:| | Gradient Guided strategy | 679.87 s | | Component Substitution strategy | 215.45 s | | Fitting Semantic Module $M_c$ | 0.158 s | | Fitting Structural Module $M_s$ | 2.459 s | | CLIP embedding (pre-processing) | 47.553 s | | VQVAE embedding (pre-processing) | 72.366 s | ### Table R2: Effectiveness of different modules by perceptual similarity (CLIP). | Subject | Sub1 (%) | Sub2 (%) | Sub3 (%) | AVG (%) | |---------|---------:|---------:|---------:|--------:| | Full Method | 78.0±0.7 | 84.8±0.8 | 80.4±0.6 | 81.1±0.7 | | w.o. MOE | 69.2±1.0 | 75.2±3.0 | 77.2±1.0 | 73.9±1.7 | | w.o. momentum alignment | 63.6±0.1 | 60.2±2.7 | 68.2±0.4 | 64.0±1.1 | | w.o. data augmentation | 72.4±3.0 | 76.4±0.6 | 78.2±2.4 | 75.7±2.0 | | w.o. CycleGAN feat. | 75.6±1.8 | 78.0±0.8 | 78.0±0.3 | 77.2±1.0 | | w.o. linear reprojection | 68.4±1.4 | 70.8±2.7 | 76.2±1.0 | 71.8±1.7 | ## References [a] Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu. DeepFaceDrawing: Deep generation of face images from sketches. ACM Transactions on Graphics (TOG), 39(4):72–1, 2020. [b] Tomoyasu Horikawa and Yukiyasu Kamitani. Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications, 8(1):15037, 2017. [c] Bastian Rieck, et al. Uncovering the topology of time-varying fMRI data using cubical persistence. Advances in Neural Information Processing Systems, 33:6900–6912, 2020. [d] Yu Takagi and Shinji Nishimoto. High-resolution image reconstruction with latent diffusion models from human brain activity. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. --- Rebuttal Comment 1.1: Comment: I'm grateful for the authors' thoughtful responses to my feedback. As a result, I've upgraded my rating from 'Weak Accept' to 'Accept'. I would be glad to see the revised version of the paper.
Rebuttal 1: Rebuttal: ## To Reviewers We sincerely appreciate all reviewers devoting time to our paper and providing valuable comments. We are also encouraged that all reviewers agree with our contributions in addressing the semantic gaps, introducing the adaptive confidence-weighted approach, and presenting the specially designed conditioning strategy for the diffusion process. We have taken meticulous care in addressing each of the concerns raised by the reviewers. Our responses have been crafted to address the questions effectively and provide comprehensive explanations. Furthermore, we have included a PDF file containing additional results (Figures R1–R2) to substantiate our responses, particularly for the additional ablation studies. We will merge the details and figures from our responses into both the main text and the supplementary materials. Once again, we extend our gratitude for your valuable feedback, and we firmly believe that these refinements will significantly enhance the quality and impact of our work. Pdf: /pdf/cbdcf555b05b036e98fb99029278603846d767a1.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
Accept (poster)
Summary: This paper studies the implicit bias of gradient descent for two-layer ReLU networks on clustered data distributions, and shows that the implicit bias is towards solutions that generalize well but are vulnerable to adversarial examples. Strengths: The paper builds upon previous work by Vardi et al., removing the orthogonality assumption on the training data and considering robustness on test data instead of training data. Although the analysis proceeds by understanding the KKT points, leveraging the previous result on the implicit bias of gradient flow for homogeneous NNs under the exponential or logistic loss, this regime belongs to the rich regime rather than the lazy regime, which should be more interesting to understand. Therefore I believe the paper provides good contributions to understanding the generalization and robustness of two-layer NNs trained via gradient methods. Moreover, the paper provides a result complementary to Bubeck and Sellke, showing that even if the network is overparameterized, the implicit bias of gradient flow prevents convergence to robust solutions. This paper gives concrete examples in which the network generalizes well but is non-robust, and the perturbation is independent of the network width. Note that the Bubeck and Sellke paper considers the regression setting whereas this paper considers the classification setting. Weaknesses: Instead of the orthogonality of the training samples assumed by Vardi et al., this paper makes a clustering assumption on the data distribution. I understand that such an assumption comes from the proof technique, but it still restricts the setting too much to capture real scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I'm wondering whether Assumption 2.2 (3) can be further simplified to a relationship between k and d; currently the LHS contains both d and k. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. Regarding the reviewer’s question: Assumption 2.2 (3) essentially requires that $k$ cannot be too large and the correlations between cluster means cannot be too large. We will discuss this inequality in the camera-ready version to make it a bit easier to digest.
Summary: In this paper the authors study the implicit bias of two-layer neural networks with ReLU activation, in the setting where the data is composed of independent clusters that "don't overlap" (i.e., the probability of a point of one cluster falling into the support of another is small, e.g., Gaussian or subgaussian clusters). They study both the generalization properties and the robustness of the solution obtained by training such a network on these data. They show that if the network succeeds in fitting the data (i.e., the train loss falls below some threshold) and if we follow the gradient flow: * then the weights converge *in direction* * the network generalizes well * however it converges to a non-robust solution (if the dimension is high, the number of clusters is small, and the number of points is small too) * whereas in the same setting robust solutions exist! But they are not "found" by following the gradient flow * the associated adversarial attack is universal and transferable Strengths: The paper does a good job of explaining the technical hypotheses, their significance, and the practical consequences. The "proof idea" sections contain the right level of detail to grasp the ideas. The paper is easy to read despite its technical content. While I am not familiar with the topic, the literature review seems thorough; in particular, the contributions of the paper compared to existing works are clear. The main conclusion is significant in my opinion: "However, the implicit bias also leads to non-robust solutions (susceptible to small adversarial $\ell_2$-perturbations), even though robust networks that fit the data exist.", with the additional property that "the adversarial attack associated is universal and transferable". Definition 4.1 seems to be new, and it is an interesting concept for defining robustness. Weaknesses: The main weakness of the contribution lies in the hypotheses, which are extremely restrictive, and sometimes even more so than it seems at first glance. 
It is not clear if those hypotheses have a chance to hold on a high-dimensional image space, and whether the main steps of the reasoning can be adapted to that setting, or if this is just specific to this particular setting and problem (see the section "Limitations" below about the setting). ## Thm 4.2 In Thm 4.2 the condition $\min{(Q_{+}/k,Q_{-}/k)}\geq c$ can be discussed a bit. In essence, the condition $c^2k=\omega(1)$ (l332) means that the number of clusters should be **big**, especially if many clusters have the same label (in which case $c\approx 0$). But on the other hand Assumption 2.2 (third item) says that the number of clusters should be small. Moreover, the dimension in which the theorems hold should be high (from what I understand of Theorem 3.1), but there shouldn't be too many points per cluster (otherwise $n$ would be too big). Overall, beyond the artificial Example 2, are there practical settings that fulfill those hypotheses? I'm afraid that the results only apply to a high-dimensional setting with very few points per cluster, where the clusters are all orthogonal and do not overlap. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: ## Suggestion Based on the quality of the literature review, I think a "summary" table that contains a high-level description of some significant prior works, with their associated hypotheses and main conclusions, would help to situate the paper better, in the spirit of: | Paper | Hypothesis on model | Hypothesis on data | Conclusion | ----------- | ----------- | ----------- | ----------- | Paper 1 | two-layer ReLU | none | converges | Paper 2 | homogeneous | almost orthogonal | converges and generalizes | Ours | etc... | clusters | generalizes but not robust ## Question How much do the results depend on the hypothesis that the activation is a ReLU? This is part of the requirements of prior work, and clearly the inequality $\phi(z)\geq z$ (l236) is used. 
Do you think that extension to similarly shaped activations is straightforward, or very difficult? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The limitations of the hypotheses are discussed, but this part could be detailed in more depth. I have a few remarks and questions. l207: I think the formula can be simplified as $$k(\max_{i\neq j} |\langle \mu_i,\mu_j\rangle|+C+1)\leq \frac{1}{10}(d - C + 1)$$ where $C$ depends on $d$ and $\sigma$. An interpretation can be given: "The centroids must be almost orthogonal to each other, and there shouldn't be too many of them." Not sure if $\frac{1}{10}$ plays a special role, or if any other constant smaller than $1$ would have done the trick in the proofs. It is worth mentioning, because this condition is hard to interpret. The remark "it is worth noting that Assumption 2.2 implies that the data is w.h.p. linearly separable" shows that the hypothesis that the clusters are almost orthogonal is quite strong. What I understand from "Finally, we remark that when k is small, our results may be extended to the case where σ > 1." (l227) hints that what is needed is an "almost orthogonal clusters that don't overlap, so they are separable" kind of hypothesis. I am not sure how much additional insight this brings compared to the work of Vardi, Yehudai, and Shamir [VYS22]. In particular, the upper bound on $n$ in Thm 3.1 is quite surprising; fortunately the discussion l265 "It is noteworthy that all existing non-vacuous generalization bounds for interpolating nonlinear neural networks in the presence of label noise require n < d [FCB22; Cao+22; XG23; Fre+23a; Kou+23]." helps to understand why this is required. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We respond to their main questions and comments below. ### Assumptions on $Q_{\pm}$, $k$ and $c$: We apologize for the confusion here. Although our analysis can accommodate non-constant $c$, for simplicity one can take $c$ to be a constant, in which case the assumption that $\min(Q_+/k, Q_-/k)\geq c$ means that at least a constant fraction of the clusters have positive labels and a constant fraction of the clusters have negative labels. The comment about $c^2 k= \omega(1)$ can then be thought of as a statement about the number of clusters $k$ rather than $c$. Thus, the lower bound on $k$ is $\omega(1)$, while the upper bound on $k$ from Assumption 2.2 is typically much larger (see the first two items in Example 1). We will be sure to revise our manuscript to make this more clear. ### Assumptions: high-dimensionality, orthogonality etc. The reviewer is concerned that the results only apply to high-dimensional settings with few points per cluster that are all orthogonal and non-overlapping. We want to emphasize that our assumptions can accommodate low-dimensional settings (i.e., where $n \gg d$) with correlated clusters and many points per cluster. For instance, as in the first bullet in Example 1 and in Example 2, we can permit $k = \tilde{O}(\sqrt{d})$ clusters with cluster correlations of order $\tilde{O}(\sqrt{d})$ and with $\sigma=1$. Since $\sigma=1$, the cluster radius is roughly $\sqrt{d}$, so that the cluster radius is of the same order as the distance between clusters. And as we mention in lines 253--255 and 328--330, our results hold when $n = \mathrm{poly}(d)$ for any polynomial $\mathrm{poly}()$, so we for instance could have $n = d^{50}$ which is far from a `high-dimensional’ setting. 
The only setting not covered is when $n$ is super-polynomial in $d$, which we think is a fairly restricted setting but which also would require significant new technical innovations as we mention in line 265. Of course, one can obtain many additional examples such as the above, that satisfy our assumptions. Regarding practicality of our assumptions, we agree that the setting we consider is somewhat stylized, but we wish to emphasize that in order to characterize whether or not KKT conditions for margin-maximization imply generalization or susceptibility to adversarial attacks for test data, we must make some type of distributional assumption. We think that any distributional assumption comes with benefits and pitfalls, and we are not aware of widely-accepted definitions of ‘real-world distributions’ or ‘practical’ settings. We think the assumptions in our work are fairly natural and uncontrived. We believe the possibility of a ‘double-edged sword’ of the implicit bias of GF in this setting is a novel and remarkable finding that would be of interest to the NeurIPS community. ### Suggestion on the ​​literature review: We thank the reviewer for the suggestion. In the camera-ready version we will try to situate the paper better in the spirit of the suggested table. ### Assumption of ReLU activation: We think it would be relatively straight-forward to generalize our results to the leaky ReLU, which is also homogeneous and satisfies a similar inequality to the one you state ($\phi(z) \geq \gamma z$ for $\gamma >0$ would suffice in many places), at the expense of additional notation and dependence on the $\gamma$ parameter. However, we think there would be significant difficulties with extending the result to non-homogeneous settings, since there are essentially no known characterizations of the implicit bias of gradient flow/descent for such cases. ### Additional remarks: Thank you for your additional suggestions. 
We will elaborate more on Line 207 in the camera-ready as the reviewer suggested. Regarding the comparison with VYS22, as we have mentioned to Reviewer 8zLE, at a high level the reviewer is correct that there are parallels between clusters in our setting and samples in their setting. However, there are important conceptual differences. First, their results require that the ambient dimension d is much larger than the number of samples n; the high-dimensional setting often exhibits different generalization behavior than the low-dimensional setting (e.g., overfitting can be 'benign' in the high-dimensional setting but harmful in the low-dimensional setting [1]). Second, their analysis held for arbitrary labels $y_i$ of the training data, and it was unclear how an underlying distribution over x and a signal (in the form of a conditional distribution of $y|x$) would affect both generalization and the existence of adversarial test examples. We will be sure to add details in the revision which make this more explicit. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answer and for your clarifications. > so we for instance could have $n=d^{50}$ which is far from a `high-dimensional' setting. While I agree with your overall answer, I believe that as long as the number of points is polynomial in the dimension, we are *asymptotically* in a high-dimensional setting. Here, for $d\gg 50$, $n=d^{50}$ is far lower than the number of corners $2^d$ of the hyper-cube. > Assumption of ReLU activation: Thank you for your answer. I have an additional question: do you think it extends to the case of non-elementwise activation functions that are homogeneous? I am thinking of MaxMin for example, which operates on pairs of consecutive neurons. 
$$\text{MaxMin}([x,y])=[\min(x,y),\max(x,y)].$$ --- Reply to Comment 1.1.1: Comment: > While I agree with your overall answer, I believe that as long as the number of points is polynomial in the dimension, we are asymptotically in a high-dimensional setting. Here, for $d\gg 50$, $n=d^{50}$ is far lower than the number of corners of the hyper-cube. We are struggling to understand the reviewer's definition of 'high-dimensional'. Our understanding of 'high-dimensional' refers to settings where the dimension is of the same order as, or much larger than, the number of samples; this is the way 'high-dimensional' is used in Wainwright's textbook on *High-Dimensional Statistics*, for example. With this definition, in the high-dimensional setting samples are nearly orthogonal and there are few samples per cluster, which are some of the phenomena the reviewer expressed concern about in their review. But these phenomena do not hold when $n$ is a large polynomial in $d$, as our settings allow. We are also confused by what the reviewer means by 'asymptotic', as our results can be applied for finite $n$ and finite $d$. > Thank you for your answer. I have an additional question: do you think it extends to the case of non-elementwise activation functions that are homogeneous? I am thinking of MaxMin for example, which operates on pairs of consecutive neurons. We think extending our analysis to non-elementwise homogeneous activations like max-pooling is an intriguing direction for future research. We are not sure whether our results would hold for the MaxMin activation.
Summary: The paper studies the implicit regularization induced by training the neural network itself. The authors theoretically prove that in two-layer ReLU networks trained with the logistic loss or the exponential loss, the implicit bias leads to solutions that generalize well but are non-robust, regardless of the size of the network. Strengths: - The topic is valuable and may shed some light on the field. - The theoretical proof seems to be sound. Weaknesses: - Lack of even toy experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
null
Summary: The authors show that, under a special clustered data model, the implicit bias of gradient flow converges to KKT points that generalize well but are not robust under $\ell_2$ perturbations, even when robust solutions to the problem exist. Their results are built upon earlier works on KKT points by LL20 and JT20, and the more recent work by VSS22, and extend these results further. Strengths: - The authors study the gradient flow solution of a 2-layer ReLU network on a mixture of Gaussians, and prove that while the solution generalizes well (classifies almost every test point correctly with high probability), it is not robust (its classification can be changed by a perturbation much smaller than $\sqrt{d}$, the optimal achievable robustness). This is a very nice result following up on the previous line of work on this topic. - The paper is well-written and clear. While the result is technical, the presentation is easy to follow and the main proof ideas are succinctly presented in the limited space. Weaknesses: - Given the assumption of large cluster separation ($\sqrt{d}$ between cluster centers) and small variance of the clusters ($\sigma<1$), it feels like the clusters behave very much like individual isolated points/samples. In view of this, the differentiation from the results of VSS22 in the introduction section appears weaker than the authors claim, as the cluster means take the role of the training samples in VSS22, which still requires near-orthogonality. Also, results w.r.t. test data are easier under the data assumptions used in this paper, because test data essentially take the same label as their respective cluster means. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Although not strictly necessary for a theory paper, it would be nice to have some simulations on synthetic data, since it is easy to set up with the data model the authors assume. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: - The authors state clearly the assumptions used for their results. Negative societal impacts not applicable here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review of our work. We respond to two comments/questions below: ### Difference with VYS22: At a high level, the reviewer is correct that there are parallels between clusters in our setting and samples in their setting. However, there are a few important differences. The first is that when $\sigma=1$ the distance between points in the same cluster is of the same order as the distance between points in different clusters (or the distance between cluster means). So the clustered setting is significantly different from the isolated points of VYS22. The second is a more conceptual difference, which is that their results require that the ambient dimension $d$ is much larger than the number of samples $n$; the high-dimensional setting often has different generalization behavior than the low-dimensional setting (e.g., overfitting can be 'benign' in the high-dimensional setting, but harmful in the low-dimensional setting [1]). Moreover, their analysis held for arbitrary labels $y_i$ of the training data, and it was unclear how an underlying distribution over $x$ and a signal (in the form of a conditional distribution of $y|x$) would affect both generalization and the existence of adversarial test examples. ### Experiments: As we mention to Reviewer DAg1, although we agree that a thorough experimental investigation of our results would be beneficial, we are not convinced the time and paper space needed to do them well would be worthwhile. In particular, although it would be easy to verify the generalization properties and that the universal perturbation $z$ we discover indeed succeeds in adversarial attacks, we believe that simply verifying our theorems are correct with experiments would add little value.
However, we agree that a thorough empirical study that extends our setting and evaluates whether the "double-edged sword" effect of the implicit bias occurs more generally is an intriguing topic for future research. [1] Guy Kornowski, Gilad Yehudai, Ohad Shamir. From Tempered to Benign Overfitting in ReLU Neural Networks. arXiv preprint 2305.15141 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for clarifying the differences of the current work with VYS22.
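The distance comparison in the rebuttal above (that at $\sigma = 1$, within-cluster distances are of the same order as distances between cluster means) is easy to illustrate numerically. The following is a sketch under an assumed toy setup; the dimension, separation scale, and seed are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 500, 1.0

# Two cluster means separated by sqrt(d), the scale used in the paper's model.
mu1 = np.zeros(d)
mu2 = np.ones(d)                             # ||mu2 - mu1|| = sqrt(d)

x1 = mu1 + sigma * rng.standard_normal(d)
x2 = mu1 + sigma * rng.standard_normal(d)    # same cluster as x1
x3 = mu2 + sigma * rng.standard_normal(d)    # different cluster

intra = np.linalg.norm(x1 - x2)   # concentrates around sigma * sqrt(2d)
inter = np.linalg.norm(x1 - x3)   # concentrates around sqrt(d + 2 * sigma**2 * d)

print(f"intra-cluster: {intra:.1f}, inter-cluster: {inter:.1f}")
# At sigma = 1 the two are of the same order (ratio roughly sqrt(3/2)),
# unlike well-separated isolated points.
```

This illustrates why, at $\sigma = 1$, the clusters do not simply collapse to the isolated-sample picture the reviewer described.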
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the generalization and robustness of solutions obtained by gradient flow on two-layer ReLU networks. Under a distributional setting where the data is sampled from a Gaussian mixture distribution, this paper shows that gradient flow is biased towards solutions that generalize well but are vulnerable to adversarial examples. The theorems are built on LL20 and JT20 (Theorem 2.1). The authors also show the existence of a robust solution in this setting and prove that any solution obtained via gradient flow is non-robust. Although the assumption is a bit restrictive, the result is novel and interesting. The authors also provide a few examples which help to understand the assumptions. Strengths: 1. This paper is well-written and easy to follow. The problem is well-motivated. The authors provide a thorough discussion of the related work. 2. The theoretical results are solid and interesting, and the proof ideas help to understand the results. 3. This is an active area of research. The contribution is relevant. Weaknesses: 1. The authors assume that when the training loss is small, gradient flow will start to converge to a KKT point of some maximum-margin problem. If we have a random initialization, how is this small training loss guaranteed? 2. This might be relatively minor, but numerical experiments would help to demonstrate the vulnerability (non-robustness) of trained two-layer ReLU networks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The authors emphasize that the results hold in the rich regime. Could the authors be more specific, as there is no discussion of the rich regime in the results? 2. Does such a result hold for regression; in other words, what about other loss functions, such as the squared error? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors assume that the data is clustered and the number of clusters is not too large, therefore the data is linearly separable. This is a relatively strong assumption. The relaxation of this assumption and extensions to other settings would be interesting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions. We respond to specific points below. ### Assumption of small training loss: Indeed, the implicit bias result kicks in when gradient flow reaches a sufficiently small training loss. One of the advantages of relying on the KKT conditions of the max-margin problem instead of analyzing the full trajectory of gradient flow is that it allows us to separate the convergence question from the generalization and robustness questions. Hence, even in settings where proving convergence is difficult, we may prove generalization and non-robustness. Nevertheless, as the reviewer mentioned, proving convergence in our setting is an interesting question. For wide networks convergence can be shown by an NTK analysis, and in the general case this question is open. We will add a discussion on this issue. ### Experiments: Although we agree that a thorough experimental investigation of our results would be beneficial, we are not convinced the time and paper-space needed to do them well would be worthwhile. In particular, although it would be easy to verify the generalization properties and that the universal perturbation $z$ we discover indeed succeeds in adversarial attacks, we believe that simply verifying our theorems are correct with experiments would add little value. More broadly, we think it is worth emphasizing that this work is a part of a series of works on understanding the theoretical foundations of robustness in deep learning. Many of these works do not include experiments and have been published at NeurIPS, including Bubeck and Sellke’s Outstanding Paper Award-winning work at NeurIPS 2021. However, we agree that a thorough empirical study that extends our setting and evaluates whether the "double-edged sword" effect of the implicit bias occurs more generally is an intriguing topic for future research. 
### Rich regime: Here we are re-using the terminology from, e.g., [1], where "rich regime" refers to neural network training that does not lie in the "kernel regime". The "kernel regime" requires a number of assumptions regarding the network width, initialization, etc., and since our setting makes no such assumptions, our results hold for networks in the "rich regime". We shall clarify this in the camera-ready. ### Other losses: We are quite interested in exploring the questions in this paper in the regression setting. However, one difficulty is that there is a much less developed theory of the implicit bias of gradient flow/descent in neural networks with regression losses, which is provably not minimum-$\ell_2$-norm regularization in ReLU networks [2]. This makes it difficult to use our approach, which relies upon rather explicit characterizations of the implicit bias of optimization algorithms to characterize the behavior of trained neural nets. [1] Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro. Kernel and Rich Regimes in Overparametrized Models. COLT 2020 [2] Gal Vardi and Ohad Shamir. Implicit Regularization in ReLU Networks with the Square Loss. COLT 2021 --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed explanation. I would like to increase my rating from 6 to 7.
null
null
null
null
null
null
Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability
Accept (poster)
Summary: The paper proposes the VP-learning algorithm to solve offline goal-conditioned RL in the context of general function approximation. The algorithm is based on a previous empirically successful algorithm proposed by [1], and the authors prove a finite-sample complexity for VP-learning under mild assumptions. The proposed algorithm avoids minimax learning by using a duality formulation first identified by [1], which makes the proposed VP-learning algorithm friendly to implementation while also enjoying theoretical guarantees. **References:** [1] Ma J Y, Yan J, Jayaraman D, et al. Offline goal-conditioned reinforcement learning via $f$-advantage regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 310-323. Strengths: - The VP-learning algorithm proposed in the paper enjoys finite-sample guarantees under a mild single-policy concentrability assumption. Compared with previous works on offline (single-task) RL algorithms with general function approximation, VP-learning does not involve minimax optimization, potentially making the algorithm easier and more stable to implement. Weaknesses: - This theoretical paper aims to study the problem of offline goal-conditioned RL. But it seems that the whole theory does not essentially rely on the setting of goal-conditioned RL. The existing theoretical works this paper compares with are also for standard offline RL, which makes the motivation of the paper confusing. It's true that standard offline RL can be interpreted as a special case of goal-conditioned RL, but I hope to see more intuitions and messages on the problem of goal-conditioned RL itself. - The key path through which the VP-learning algorithm avoids solving minimax optimization problems is based on [1], which weakens the theoretical contributions and insights of the paper.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Regarding Assumption 4 (single-policy realizability), what is the $\alpha$ in the equation $\pi_{\alpha}^{\star}\in\Pi$? Since in Theorems 2 and 3 this assumption is imposed before the choice of $\alpha$, the $\alpha$ in the assumption seems undetermined. I think this needs clarification, since the paper assumes that the policy class $\Pi$ is finite, which necessitates a specific choice of $\pi_{\alpha}^{\star}$. - Regarding Assumption 5 (lower bound of policy), this seems to be quite a strong assumption that is not needed by previous single-task offline RL algorithms with general function approximation, e.g., [2, 3, 4]. The authors argue that the parameter $\tau$ can be very small, but a small $\tau$ will also increase the upper bound on the suboptimality of VP-learning due to Theorems 2 & 3. Is this assumption actually necessary for offline goal-conditioned RL, or is it only required by the specific analysis of the VP-learning algorithm? **References:** [2] Zhan W, Huang B, Huang A, et al. Offline reinforcement learning with realizability and single-policy concentrability[C]//Conference on Learning Theory. PMLR, 2022: 2730-2775. [3] Uehara M, Sun W. Pessimistic model-based offline reinforcement learning under partial coverage[J]. arXiv preprint arXiv:2107.06226, 2021. [4] Xie T, Cheng C A, Jiang N, et al. Bellman-consistent pessimism for offline reinforcement learning[J]. Advances in neural information processing systems, 2021, 34: 6683-6694. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The above questions are potential limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses. >But it seems that the whole theory does not essentially rely on the setting of goal-conditioned RL We provide the following reasons (motivations) for why our paper uses the goal-conditioned setting. 1. The original algorithm GoFAR we analyze is designed for the GCRL setting, so we keep their setting. Also, as we compare our VP-learning and the GoFAR algorithm empirically (see the “global” response for details), the environments are goal-conditioned. 2. GCRL is a more general setting that includes single-task RL as a special case, and our algorithm “can also be applied in non-goal-conditioned settings” as Reviewer 39fV mentioned. Since it can be analyzed and applied in both GCRL and single-task settings, we choose the more general one to make our result applicable in a wider scope. >The key path through which the VP-learning algorithm can avoid solving minimax optimization problems is based on [1], which makes the theoretical contributions and insights of the paper weakened. As Reviewer fB34 said, our algorithm “achieves an effective and reasonable improvement from the lens of theory”, and our theory further provides insightful guidance on the practical implementation of algorithms (as shown in the “global” response and the attached pdf). We disagree that the above step weakens our theoretical contribution and insight. Instead, this is one of the important steps that ensure our algorithm has a theoretical guarantee and outperforms [1] empirically. Note that the GoFAR algorithm in [1] lacks a theoretical guarantee, and we provide a finite-sample guarantee under a careful choice of $\alpha$. An appropriate value of $\alpha$ not only ensures that our algorithm has a finite-sample guarantee (Theorem 1) but also helps to improve over the previous algorithm GoFAR empirically (see the “global” response for details).
>What is the $\alpha$ in the equation $\pi_\alpha^\star \in \Pi$? $\alpha$ can be chosen as in Theorem 2 or 3. Our Assumption 4 is actually similar to Assumption 2,3 in [2]. We will mention it in our assumption that $\alpha$ can be chosen as in Theorem 2 or 3. Also, the policy class can have infinite cardinality, and our results still hold as long as the policy class $\Pi$ has a bounded log-covering number. We assume a finite cardinality only for the convenience of presentation (also similar to [2]). >Regarding Assumption 5 (lower bound of policy), this seems a quite strong assumption which is not needed by previous single task offline RL algorithms with general function approximations, … Is this assumption actually necessary for offline goal-conditioned RL? or is it only required by the specific analysis for VP-learning algorithm? First, although some previous single tasks offline RL algorithms do not require the lower bound assumption, they require other strong assumptions. For example, [3] requires a completeness-type assumption, where for all policy $\pi \in \Pi$, it requires the $Q$ function of $\pi$ is realized in a value function class $\mathcal{F}$, and $\mathcal{F}$ further needs to satisfy the completeness assumption. [2] does not require a lower bound of policy, but they assume that the behavior policy $\mu$ is known; note that if this is true for our algorithm, then we can directly calculate the policy using $\mu$ and the learned $U_V$ after $V$-learning and thus does not even require a policy class. However, this method is not realistic and thus we use a more practical method (i.e., weighted MLE) to calculate the policy. Also, we argued that $\tau$ can be very small and even depends on $\alpha$. When $\tau$ depends on $\alpha$, e.g., $\tau = \alpha^c$ for some constant $c > 0$, we can choose a different $\alpha$ between line 588 and 589 in the proof of the main theorem s.t. 
the two terms in the third line between lines 588 and 589 are equal to each other, and we still obtain a suboptimality rate polynomial in $n$. Roughly speaking, we require that $\alpha = \frac{1}{\alpha^c}\cdot \frac{1}{\alpha^{1/4}N^{1/8}}$, and thus the suboptimality is $O(\alpha) = O(1/N^{\frac{1}{10+8c}})$, which equals $O(1/N^{1/10})$ when $c = 0$ as in Theorem 2. When $c$ is positive but small, our lower-bound assumption on the policy class is mild and only makes the sample complexity slightly worse. In practice, we don’t need this lower-bound assumption and can directly apply the weighted MLE algorithm in the policy-learning procedure. We add this assumption for the theoretical analysis. To the best of our knowledge, it remains an open question whether a lower-bound assumption on the policy class is necessary for a finite-sample guarantee if one uses MLE. **Reference:** [1] Ma J Y, Yan J, Jayaraman D, et al. Offline goal-conditioned reinforcement learning via f-advantage regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 310-323. [2] Zhan W, Huang B, Huang A, et al. Offline reinforcement learning with realizability and single-policy concentrability[C]//Conference on Learning Theory. PMLR, 2022: 2730-2775. [3] Xie T, Cheng C A, Jiang N, et al. Bellman-consistent pessimism for offline reinforcement learning[J]. Advances in neural information processing systems, 2021, 34: 6683-6694. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I have read the rebuttal and the comments from other reviewers. Thanks for all your efforts dealing with my concerns. I agree now that the theoretical contribution of the paper is sufficient, especially given that the technical concerns from Reviewer quHd can be addressed. Regarding my other questions, I hope that the authors can make them clearer in the revision. Given all that, I am pleased to raise my score to 6.
--- Reply to Comment 1.1.1: Comment: Thanks for your time reviewing our paper, reading our response, and providing helpful feedback! We will make the points you mentioned more clear in the revision.
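The $\alpha$-balancing step in the $\tau = \alpha^c$ discussion above can be checked numerically: solving $\alpha = \alpha^{-c} \cdot \alpha^{-1/4} N^{-1/8}$ indeed gives $\alpha = N^{-1/(10+8c)}$. A quick sketch (the values of $N$ and $c$ are arbitrary):

```python
# Verify that alpha = N**(-1/(10 + 8c)) balances the two error terms,
# i.e. satisfies alpha = (1/alpha**c) * (1/(alpha**0.25 * N**0.125)).
N = 10 ** 6
for c in [0.0, 0.1, 0.5, 2.0]:
    alpha = N ** (-1.0 / (10 + 8 * c))
    rhs = (1.0 / alpha ** c) * (1.0 / (alpha ** 0.25 * N ** 0.125))
    assert abs(alpha - rhs) / alpha < 1e-9
print("alpha = N^(-1/(10+8c)) satisfies the balance condition")
```

The exponent arithmetic behind this is $\alpha^{1 + c + 1/4} = N^{-1/8}$, so $\alpha = N^{-1/(8(5/4 + c))} = N^{-1/(10+8c)}$.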
Summary: This paper provides a rigorous theoretical analysis of a modified version of an existing offline goal-conditioned RL algorithm, and proves that it has an $\tilde O(\mathrm{poly}(1/\epsilon))$ sample complexity, where $\epsilon$ is the desired suboptimality of the learned policy. The algorithm requires minimal assumptions on the dataset and the function class and does not involve minimax optimization. Strengths: The paper makes a novel contribution by providing a theoretical analysis of an existing offline GCRL algorithm. The paper is clearly written and well-organized. The theoretical analysis is well-explained. Weaknesses: The derivation of this paper is flawed due to a mistake in Proposition 3.2. This mistake leads to the conclusion that strong duality holds and that we can recover the optimal policy by solving the dual problem. However, it can be seen that, in the limit $\alpha\to0$, the dual problem (Equation (6)) solves for a value function that evaluates the behavior policy. Therefore, solving the dual problem cannot recover the optimal value from suboptimal data, and strong duality does not hold. As a result, all subsequent derivation is invalid, and the claims are unsupported. The mistake in Proposition 3.2 stems from a citation of Proposition 4.2 of [1], which incorrectly establishes strong duality. Overall, I think the paper would have been a valuable contribution to the field of offline RL. However, the flawed derivation is a serious issue that needs to be addressed. [1] Ma, Jason Yecheng, et al. "Offline goal-conditioned reinforcement learning via $f$-advantage regression." Advances in Neural Information Processing Systems 35 (2022): 310-323. ===Post-rebuttal Update=== Previously raised concerns have been effectively addressed. Overall, I am optimistic that, with some revisions, this paper could meet the standards for acceptance. I am pleased to raise my score from 3 to 6 and look forward to seeing your further refinement of this paper.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you explain more on the raised issue? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors of the paper have discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses. >Overall, I think the paper would have been a valuable contribution to the field of offline RL. However, the flawed derivation is a serious issue that needs to be addressed. We really appreciate the reviewer’s work in identifying the issue that is caused by “a citation of Proposition 4.2 of [1] which incorrectly establishes strong duality”. We carefully checked the derivation of Proposition 4.2 of [1], and found that their result indeed has some issues theoretically. Below we provide a simple fix to this issue with details and show that our theoretical results still hold. The issue is caused when we solve $\max\_{d(s,a;g) \geq 0} \mathbb{E}\_{(s,a,g)\sim d} [r(s;g) + \gamma \mathcal{T} V(s,a;g) - V(s;g)] - D\_g(d(s,a;g) || \mu(s,a;g)).$ For notation convenience, we denote $A_V(s,a;g) = r(s;g) + \gamma \mathcal{T} V(s,a;g) - V(s;g)$, and denote $w(s,a;g) = d(s,a;g)/\mu(s,a;g)$. Then the above problem can be rewritten as $ {\max}\_{w \geq 0} \mathbb{E}\_{(s,a,g) \sim \mu}[w(s,a;g) A\_V(s,a;g) - g(w(s,a;g))].$ Note that we can solve $w(s,a;g)$ separately for each individual $(s,a,g)$ pair. If we don't have the constraint that $w \geq 0$, then equation (22),(23) in [1] is correct, i.e., the maximum of the above object is $\mathbb{E}\_{(s,a,g) \sim \mu}[g\_\*(A\_V(s,a;g))]$ and the optimal $w$ satisfies $w\^\star\_V(s,a;g) = g'\_\*(A\_{V}(s,a;g))$. However, we have the constraint that $w \geq 0$, which makes the above solution incorrect. 
Under this constraint, one can solve that when $g$ is convex, we have $ d\_V^\star(s,a;g) = w\_V^\star (s,a;g) \cdot \mu(s,a;g) = g'\_\*(A\_V(s,a;g))\_+ \cdot \mu(s,a;g)$ where $x\_+ \triangleq \max$ {$x, 0$}, and the maximum of the above object is $\mathbb{E}\_{(s,a,g)\sim \mu}[I$ { $g'\_\*(A\_V(s,a;g)) \geq 0$ } $\cdot g\_\*(A\_V(s,a;g)) + I$ { $g'\_\*(A\_V(s,a;g)) < 0$ } $\cdot \min_{u \in \mathbb{R}} g\_\*(u)].$ If we further define $\tilde{g}\_\*(x) = g\_\*(x) - \min\_{u \in \mathbb{R}} g\_\*(u)$ which is a constant shift of $g\_\*$, then the above objective can be expressed as $ \mathbb{E}\_{(s,a,g)\sim \mu}[ I$ { $g'\_\*(A\_V(s,a;g)) \geq 0$ } $ \cdot \tilde{g}\_\*(A\_V(s,a;g)) ] + \min\_{u \in \mathbb{R}} g\_\*(u).$ Therefore, the $V$-learning should be corrected as $\min_V L_\alpha (V)$ where $L\_\alpha(V) = \alpha( (1-\gamma)\mathbb{E}\_{(s,g)\sim(\rho,p(g))}[V(s;g)] + \mathbb{E}\_{(s,a,g)\sim \mu}[ I$ { $g'\_\*(A\_V(s,a;g)) \geq 0 $ } $ \tilde{g}\_\*(A\_V(s,a;g)) ]).$ Note that this is similar to the previous $L_\alpha(V)$ except that we only consider the $\tilde{g}\_\*$ of which $g'\_\* \geq 0$ in the second term. Under our choice of $f(x) = \frac{1}{2}(x-1)^2$, $g = \alpha \cdot f$, we have $g_*(x) = \frac{(x+\alpha)^2-\alpha^2}{2\alpha}$. Note that in our paper we defined $U_V(s,a;g) = r(s;g) + \gamma \mathcal{T} V(s,a;g) - V(s;g) + \alpha = A_V(s,a;g) + \alpha$. Therefore, the $V$-learning objective is also equivalent to $ L\_\alpha(V) = \alpha(1-\gamma)\mathbb{E}\_{(s,g)\sim(\rho,p(g))}[V(s;g)] + \frac{1}{2} \mathbb{E}\_{(s,a,g)\sim \mu}[ (U\_V(s,a;g)\_+)^2 ].$ Similarly, in the policy learning procedure, we need to modify the algorithm to $ \pi^\*\_\alpha = \arg\max\_{\pi} \mathbb{E}_{(s,a,g)\sim \mu}[(U\_\alpha^*(s,a;g)\_+/\alpha) \cdot \log \pi(a|s,g)].$ Note that this is also similar to the previous form, except that we ignore the term with a negative $U$ value. 
With the above modification, our algorithm is correct and the whole proof still goes through. Note that for the most part, the proof remains unchanged, and the only place we need to pay attention to is Lemma 1. Since now our policy learning procedure uses $U_+$ as the weight instead of $U$, we only need to control $|| \hat U\_+ - (U^\*\_\alpha)\_+ ||\_{2,\mu}$. The previous proof uses the property that $\tilde{L}\_\alpha(U\_V) \triangleq L\_\alpha(V)$ is strongly convex w.r.t. $U\_V$ and $|| \cdot ||_{2,\mu}$. Then we can upper bound $|| \hat U - U^\*\_\alpha ||\_{2,\mu}^2$ using $\tilde{L}\_\alpha(\hat U) - \tilde{L}\_\alpha(U\_\alpha^*)$. After modification, $\tilde{L}\_\alpha(U\_V) \triangleq L\_\alpha(V)$ is no longer strongly convex, but is ``semi''- strongly-convex, i.e., $\tilde{L}\_\alpha(U\_V) - \frac{1}{2}\mathbb{E}\_{(s,a,g)\sim \mu}[ (U\_V(s,a,g)\_+)^2]$ is linear (and thus convex). Therefore, if we define $h(U\_V) = \tilde{L}\_\alpha(U\_V) - \frac{1}{2}\mathbb{E}\_{(s,a,g)\sim \mu}[ (U\_V(s,a,g)\_+)^2]$, we have by the definition of convexity that $h(y) \geq h(x) + \nabla h(x)^{\mathsf{T}} (y-x)$ which implies that $ \tilde{L}\_\alpha(y) \geq \tilde{L}\_\alpha(x) + \nabla \tilde{L}\_\alpha(x)^{\mathsf{T}}(y-x) + \frac{1}{2}|| y\_+ - x\_+ ||\_{2,\mu}^2.$ Therefore, we can use the same method to upper bound $|| \hat U\_+ - (U^\*\_\alpha)\_+ ||\_{2,\mu}^2$ using $\tilde{L}\_\alpha(\hat U) - \tilde{L}\_\alpha(U\_\alpha^\*)$ and all the other proof remains unchanged. We thank the reviewer again for identifying the issue so that we can fix it and make our theoretical results more solid. We are also encouraged that the reviewer thought our paper would be “a valuable contribution to the field of offline RL” if we could address the issue. We are happy to discuss this further if the reviewer still has any unaddressed concerns. 
**Short Version** $V$-learning should be $L\_\alpha(V) = \alpha( (1-\gamma)\mathbb{E}\_{(s,g)\sim(\rho,p(g))}[V(s;g)] + \mathbb{E}\_{(s,a,g)\sim \mu}[ I$ { $g'\_\*(A\_V(s,a;g)) \geq 0 $ } $ \tilde{g}\_\*(A\_V(s,a;g)) ]).$ and policy learning should be $ \pi^\*\_\alpha = \arg\max\_{\pi} \mathbb{E}_{(s,a,g)\sim \mu}[(U\_\alpha^*(s,a;g)\_+/\alpha) \cdot \log \pi(a|s,g)].$ **Reference:** [1] Ma J Y, Yan J, Jayaraman D, et al. Offline goal-conditioned reinforcement learning via f-advantage regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 310-323. --- Rebuttal Comment 1.1: Title: I am pleased to raise my score from 3 to 6 Comment: I appreciate the effort you've put into addressing the concerns raised in my review. Upon considering your response, I am pleased to acknowledge that the primary concern I had previously expressed has been effectively addressed. However, the proposed fix raises an additional problem: the $V$-learning now involves a minimax optimization, which eliminates a bright spot of the VP-learning algorithm. Furthermore, I am aligned with the viewpoints of Reviewer vRJf concerning the adoption of the GCRL setting. It appears that the inclusion of the GCRL setting introduces complexity to the notation without yielding any benefits. Contrary to the assertion in your rebuttal, I think this setting does not enhance the generality of the approach. The goal can easily be formulated as additional state dimensions, making GCRL a special case of RL. Therefore, I recommend moving to the RL setting, which could also amplify the potential impact and reception of this paper. Overall, I am optimistic that, with some revisions, this paper could meet the standards for acceptance. I am pleased to raise my score from 3 to 6 and look forward to seeing your further refinement of this paper. --- Reply to Comment 1.1.1: Title: Thanks for your valuable feedback!
Comment: We thank the reviewer again for the helpful comments that make our results more solid, and we are pleased that we have addressed the concern! For GCRL vs single-task RL setting, it's a very good suggestion to view GCRL as a special case of RL and simplify the notation to avoid introducing complexity to readers. We will make corresponding modifications in the revision. For the additional problem that the reviewer mentioned regarding our proposed fix, actually, the $V$-learning is still a minimization problem. Note that our $V$-learning still has the form of $\min\_{V} L\_\alpha(V)$, and the objective $L\_\alpha(V) = \alpha( (1-\gamma)\mathbb{E}\_{(s,g)\sim(\rho,p(g))}[V(s;g)] + \mathbb{E}\_{(s,a,g)\sim \mu}[ I$ { $g'\_\*(A\_V(s,a;g)) \geq 0$ } $ \tilde{g}\_*(A\_V(s,a;g)) ])$ in our proposed fix does not involve a maximization problem. Therefore, after the fix, our VP-learning algorithm still enjoys the property that it does not involve a minimax optimization.
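As a numerical sanity check of the corrected derivation in this thread (a sketch, not part of the rebuttal; the value of $\alpha$ and the grid are arbitrary): under the paper's choice $g = \alpha f$ with $f(x) = \frac{1}{2}(x-1)^2$, the constrained maximum $\max_{w \ge 0}\,[w y - g(w)]$ should match the closed form $((y+\alpha)_+^2 - \alpha^2)/(2\alpha)$ used in the fix.

```python
import numpy as np

alpha = 0.5

def g(w):
    # g = alpha * f with f(x) = 0.5 * (x - 1)**2
    return 0.5 * alpha * (w - 1.0) ** 2

def conj_plus(y):
    # Closed form of max_{w >= 0} [w*y - g(w)] from the corrected derivation:
    # ((y + alpha)_+^2 - alpha^2) / (2 * alpha)
    return (np.maximum(y + alpha, 0.0) ** 2 - alpha ** 2) / (2.0 * alpha)

w = np.linspace(0.0, 50.0, 400001)   # dense grid over the constraint set w >= 0
for y in (-2.0, -0.6, -0.4, 0.0, 1.3, 5.0):
    numeric = np.max(w * y - g(w))   # brute-force constrained maximum
    assert abs(numeric - conj_plus(y)) < 1e-6
print("constrained conjugate matches the closed form")
```

For $y < -\alpha$ the constrained maximizer is $w = 0$ (value $-\alpha/2$), which is exactly the regime the original unconstrained conjugate in [1] handled incorrectly.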
Summary: This paper aims at improving the theoretical understanding of offline goal-conditioned RL (GCRL). In particular, this paper modifies an existing offline GCRL algorithm and shows an $\tilde{O}(\mathrm{poly}(1/\epsilon))$ sample complexity under minimal assumptions of single-policy concentrability and realizability. Their algorithm, called VP-learning, consists of two uninterleaved optimization steps and has good empirical performance while retaining computational stability. Moreover, it can also be applied in non-goal-conditioned settings. There seems to be a theory-practice gap that this paper addresses. Namely, most provably efficient algorithms seem to require minimax optimization, while in practice that is not effective. Ideally, an algorithm should be practical with good sample complexity guarantees, which is a gap this paper aims to fill. In particular, this paper provides guarantees for a modified version of an algorithm, GoFAR, with modifications in the deterministic and stochastic MDP settings. They call their modified algorithm VP-learning. While other algorithms have been shown to be efficient under single-policy concentrability and realizability assumptions, they require solving minimax optimization problems. Strengths: While other algorithms have been shown to be efficient under single-policy concentrability and realizability assumptions, they require solving minimax optimization problems. This algorithm does not. The algorithm they develop is built off of an algorithm with good empirical performance. Weaknesses: While part of the pitch of the paper is that they desire an algorithm that has provable efficiency whilst also having good empirical performance, they do not have empirical results to demonstrate that their modified algorithm performs well. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How do you expect the modified algorithm, VP-learning, to compare to GoFAR in terms of empirical performance? 
How do you expect the modifications will impact things adversely? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: To my knowledge, the authors do adequately address the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses. >How do you expect the modified algorithm, VP-learning to compare to GoFAR in terms of empirical performance? How do you expect the modifications will impact things adversely? The modified algorithm, VP-learning, outperforms GoFAR in terms of empirical performance; we provide details in the “global” response. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for answering my question. --- Reply to Comment 1.1.1: Comment: Thanks again for your time and effort in reviewing our paper and reading our response.
Summary: This paper establishes a rigorous theoretical analysis for offline goal-conditioned reinforcement learning (GCRL) algorithms. To achieve that, the authors made a slight modification on top of an existing offline GCRL algorithm (GoFAR), achieving a polynomial sample complexity via regression instead of minimax optimization. Strengths: 1. This paper is well-organized and provides a thorough survey of related work. 2. It seems to achieve an effective and reasonable improvement from the lens of theory. Weaknesses: There is no experiment to support the correctness of the theoretical analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful and insightful comments. Below are our responses. >There is no experiment to support the correctness of the theoretical analysis. We provide experiments to support the correctness of our theoretical analysis (especially the choice of $\alpha$). Please see the “global” response for details. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I thank the authors for their response and the uploaded experiment results, which improve the credibility of this work. I do not doubt the correctness of their contribution to the theory, so I gave such a suggestion as I think it would be helpful for the readers to have a straightforward understanding of their work. I'll improve my score from 5 to 6. However, I still have some suggestions about their uploaded experiment results: I cannot get any information about the environment settings, e.g., what are FetchReach, FetchPick, FetchPush, FetchSlide, and HandReach? I hope the authors can add this part of the environment introduction and also the learning curve in their future revision. --- Reply to Comment 1.1.1: Title: Thanks for your suggestion Comment: Thanks for your time reviewing our paper and reading our response, and thanks for raising your score! For the experiment settings, FetchReach, FetchPick, FetchPush, FetchSlide, and HandReach are all environments in the d4rl benchmark. We did not provide details of the environment in the global response since it is the same as [1], and thus we omit the details to keep the response more concise. We will add the environmental details in the revision. Thanks for your suggestion! **Reference:** [1] Ma J Y, Yan J, Jayaraman D, et al. Offline goal-conditioned reinforcement learning via f-advantage regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 310-323.
Rebuttal 1: Rebuttal: We thank all the reviewers for their helpful and insightful comments. Below we first address common issues. Since several reviewers mentioned that our paper does not provide empirical results of the modified algorithm, we provide experimental results of our VP-learning algorithm with different choices of $\alpha$ under five environments (FetchReach, FetchPick, FetchPush, FetchSlide, and HandReach) used in [1] (see the two tables in the attached pdf file ). All the implementation details of our VP-learning are the same as the GoFAR algorithm [1], except for the value of $\alpha$. Note that our VP-learning algorithm with $\alpha=1$ is equivalent to the GoFAR algorithm. Table 1 contains the discounted returns and Table 2 contains the final distances of the policies trained after 100 epochs and evaluated over 10 runs. For each environment and each $\alpha$, the result was averaged over 3 random seeds as in the GoFAR paper [1]. The best results of each environment are in bold. The empirical results demonstrate the correctness of our theoretical analysis: choosing $\alpha=1$ will result in a large suboptimality of $\pi_\alpha^\star$ and thus the learned policy $\hat \pi$. Instead, we should carefully choose the value of $\alpha$ to ensure a vanishing suboptimality. In practice, we should tune the value of $\alpha$ and typically it should be less than 1. In our experiments, the best $\alpha$ ranges over $0.05-0.5$. **Reference:** [1] Ma J Y, Yan J, Jayaraman D, et al. Offline goal-conditioned reinforcement learning via f-advantage regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 310-323. Pdf: /pdf/221346def9071a6f8254e9fa901c63be9748a8b0.pdf
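The $\alpha$ selection described above amounts to a simple grid search over candidate values. A hypothetical sketch; the candidate grid and the return numbers below are made up for illustration, not the paper's results:

```python
# Sketch of tuning alpha for VP-learning: run the algorithm for each
# candidate alpha and keep the best average discounted return.
# The dictionary values here are illustrative, NOT real experiment numbers.
def pick_alpha(returns_by_alpha):
    """Return the alpha with the highest evaluation return."""
    return max(returns_by_alpha, key=returns_by_alpha.get)

candidate_returns = {0.05: 11.2, 0.1: 12.8, 0.5: 12.1, 1.0: 9.7}
best_alpha = pick_alpha(candidate_returns)  # typically < 1, matching the observation above
```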
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints
Accept (poster)
Summary: The paper extends the traditional complex query-answering task into an eventuality-centric complex query-answering task to understand reasoning at the eventuality level. Specifically, the paper divides the discourse relations into two types of implicit constraints: occurrence constraints and temporal constraints. The occurrence constraints determine whether a certain eventuality happens or not. The temporal constraints provide the order of the eventualities' occurrence. In addition, the paper proposes a new memory-enhanced query encoding to reason over eventuality-centric knowledge graphs. The model first introduces a computational graph that encodes queries, including operations such as relational projection and intersection. The paper proposes a memory-enhanced encoding component that utilizes a memory module to encode constraint information. The operation output is used as a query to access each head eventuality based on the relevance score. The paper then uses an attentional aggregation to sum over the constraint relations and tails. The final model is optimized based on similarity scores with a cross-entropy loss. Strengths: 1. The paper introduces a new eventuality-centric complex query-answering task to better model reasoning at the eventuality level. The paper proposes a new way to provide discourse relations with two implicit logic constraints. The idea of occurrence constraints and temporal constraints is interesting. 2. The paper introduces a new memory-enhanced query encoding to update the query representation with relevance-based constraint representations. 3. The paper tests the new framework with a new dataset sampled from ASER. The model shows strong performance over multiple different baselines. The paper also provides code and dataset construction details in the Appendix. The paper also includes a case study in the Appendix. Weaknesses: Some parts of the paper are not very clear: 1. 
In Section 3.1, what function is used for the relation projection and intersection operations? The paper says that the intersection is a permutation-invariant neural network; however, this needs to be clarified in detail for readers. 2. In Section 3.2, what relevance score is used for Equation 5? Suppose the paper used a semantic relevance score such as cosine similarity. In that case, the motivation for this part is unclear because the constraints with higher similarity might not be the ones with the closest relevance. Moreover, Figure 4 needs to be clarified. The paper needs to briefly explain the right part (the constraint memory) in Figure 4. I suggest going through the walkthrough example in Section 3.2. The code does not include a ReadMe file. 3. In Section 4.1, the paper only focuses on answers with constraints. Has the paper also tested the new model for answers without any contradictions? 4. The abbreviations in Table 4 need to be clarified. It seems that p is the projection and i is the intersection; however, those abbreviations are not clarified in the caption. Readers also find it hard to figure out what e represents. The analysis of Table 4 is superficial. The paper needs to add more qualitative analysis with more concrete examples. The paper needs to conduct an ablation study to show the contribution of each component. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Section 3.1, what function is used for the relation projection and intersection operations? 2. In Section 3.2, what relevance score is used for Equation 5? 3. Has the paper also tested the new model for answers without any contradictions? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper provides a limitation section and broader impact in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Re W1 & Q1: The relation projection and intersection operations are adopted from the backbone models. For different backbone models, we have different parametrizations of intersection/union. The GQE [1] model uses a feed-forward layer, followed by average pooling, and then a matrix multiplication for intersection; its relational projection is modeled by a matrix multiplication. Query2Particles [2] uses self-attention for intersection modeling and a gated transition function for relational projection. FuzzQE [3] uses element-wise fuzzy-logic operations in the embedding space for intersections and unions; meanwhile, it uses a feed-forward layer, a layer normalization, and a sigmoid function for relational projection. Neural-MLP [4] uses MLP-Mixer as both the intersection module and the relation projection module. Re W2 & Q2: All existing query encoders use the similarity between the query embedding and the answer embeddings to retrieve answers; in our case, this similarity score is an inner product. Because of this, we can use the same similarity measure to compute the relevance between the query embedding and the head eventuality embedding as a relevance score to the constraint. If the constraint is relevant to the query embedding, then the relation and tail information of this constraint is added into the query representation. However, when a constraint is relevant but the information of its relation and tail should be excluded from the query embedding, we are motivated to add a feed-forward layer to adjust the direction of the memory value, which contains the relation and tail embeddings. For example, in Figure 4, “Food is bad” in the constraint memory has a high relevance score to “the event that happens before X complains and leaves the restaurant”. 
Then, when the constraint “Food is bad before PersonY adds soy sauce” is given, we add the information of the relation type “Precedence” and the tail node “PersonY adds soy sauce” from the memory value to the query embedding. We are motivated to do this because we want to exclude the answer “PersonY adds soy sauce” if the following relation projection is “Reason”, “Condition”, or “Succession”, so as to avoid a temporal contradiction. We further verify the correctness of the intuition behind this idea through the ablation study conducted in Re W4. Re W3 & Q3: We do not fully understand this question, because “the answers with constraints” has the same meaning as “the answers without contradictions”. We filtered out the contradictory answers according to the theorem prover, and only kept the answers that satisfy the logical constraints during the dataset construction process. Re W4: Yes, in the query type notations, “p” is for projection, “i” is for intersection, and “e” is for eventuality. We will include a detailed introduction of the query types and abbreviations in the paper. To demonstrate the effectiveness of the relevance score and the feed-forward module mentioned in Re W2, we conducted an ablation study on our proposed MEQE method; here are the results:

| Models | Occ. Hit@1 | Occ. Hit@3 | Occ. MRR | Temp. Hit@1 | Temp. Hit@3 | Temp. MRR | Avg. Hit@1 | Avg. Hit@3 | Avg. MRR |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| GQE | 8.92 | 14.21 | 13.09 | 9.09 | 14.03 | 12.94 | 9.12 | 14.12 | 13.02 |
| + CMQE | 10.20 | 15.54 | 14.31 | 10.70 | 15.67 | 14.50 | 10.45 | 15.60 | 14.41 |
| + CMQE - Constraints | 8.29 | 12.87 | 11.62 | 8.80 | 13.02 | 12.17 | 8.54 | 12.95 | 11.90 |
| + CMQE - FFN | 0.67 | 1.17 | 1.13 | 0.74 | 1.23 | 1.12 | 0.70 | 1.19 | 1.08 |
| Q2P | 14.14 | 19.97 | 18.84 | 14.48 | 19.69 | 18.68 | 14.31 | 19.83 | 18.76 |
| + CMQE | 15.15 | 20.67 | 19.38 | 16.06 | 20.82 | 19.74 | 15.61 | 20.74 | 19.56 |
| + CMQE - Constraints | 14.16 | 20.00 | 18.86 | 14.72 | 19.92 | 18.79 | 14.44 | 19.96 | 18.82 |
| + CMQE - FFN | 12.77 | 16.63 | 15.89 | 12.74 | 16.83 | 14.75 | 12.76 | 16.73 | 15.32 |
| Neural MLP | 13.03 | 19.21 | 17.75 | 13.45 | 19.06 | 17.68 | 13.24 | 19.14 | 17.71 |
| + CMQE | 15.26 | 20.69 | 19.32 | 15.91 | 20.63 | 19.47 | 15.58 | 20.66 | 19.40 |
| + CMQE - Constraints | 13.33 | 19.15 | 17.94 | 13.49 | 19.18 | 14.48 | 13.41 | 19.16 | 18.08 |
| + CMQE - FFN | 10.35 | 14.67 | 13.71 | 10.94 | 14.67 | 12.74 | 10.64 | 14.67 | 14.53 |
| FuzzQE | 11.68 | 18.64 | 17.07 | 11.68 | 17.97 | 16.53 | 11.68 | 18.31 | 16.80 |
| + CMQE | 14.76 | 21.12 | 19.45 | 15.31 | 21.01 | 19.49 | 15.03 | 21.06 | 19.47 |
| + CMQE - Constraints | 12.69 | 19.92 | 17.68 | 13.53 | 18.25 | 17.91 | 13.11 | 19.08 | 17.80 |
| + CMQE - FFN | 9.81 | 15.26 | 14.46 | 10.17 | 15.37 | 14.87 | 9.99 | 15.31 | 14.66 |

When we removed the feed-forward network and directly added the relation and tail embeddings to the query embedding, the performance was negatively affected. This is because the query embedding is more likely to have higher similarity to the answers that should be excluded. This effect was more significant in the GQE model, as GQE uses the simplest element-wise addition for relation projection. We conducted another ablation replacing the constraints with random triples, so that there are no contradictory answers; we then observed that the performance of the baseline models is comparable with the MEQE model. This indicates that the performance improvement is gained from the constraints instead of the structural changes of the query encoder. This experiment proves two things. First, the relevance score is effective in finding the corresponding constraints. 
Second, the feed-forward layer is useful and necessary to adjust the direction of the memory contents incorporated into the query embedding. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: The authors have answered most of my questions. I have raised my score from 4 to 5. I want to clarify W3 and Q3. So basically, I wonder about the performance on the dataset without any constraints. Will the model cause a performance drop on those instances? --- Reply to Comment 1.1.1: Comment: Thank you for your reply and clarification. To address the issue you raised, we have sampled another round of evaluation data with informational atomics (i.e., contents in the memory module) that do not have constraints on the answers. We have ensured that there are no contradictory answers in any of these instances. The performances are as follows:

| Models | Occ. Hit@1 | Occ. Hit@3 | Occ. MRR | Temp. Hit@1 | Temp. Hit@3 | Temp. MRR | Avg. Hit@1 | Avg. Hit@3 | Avg. MRR |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| GQE | 10.06 | 15.87 | 14.51 | 9.56 | 14.68 | 13.68 | 9.81 | 15.27 | 14.10 |
| + CMQE | 10.98 | 16.87 | 15.37 | 11.03 | 15.34 | 14.41 | 11.00 | 16.10 | 14.89 |
| Q2P | 11.94 | 17.61 | 16.11 | 11.36 | 16.69 | 14.89 | 11.65 | 17.15 | 15.50 |
| + CMQE | 13.13 | 17.78 | 16.53 | 12.72 | 16.78 | 15.75 | 12.93 | 17.28 | 16.14 |
| Neural MLP | 16.58 | 22.00 | 20.78 | 16.24 | 21.15 | 20.09 | 15.23 | 21.57 | 20.43 |
| + CMQE | 16.88 | 21.60 | 20.38 | 16.85 | 20.93 | 19.91 | 15.52 | 21.26 | 20.14 |
| FuzzQE | 15.71 | 22.17 | 20.44 | 15.15 | 21.61 | 19.42 | 15.43 | 21.89 | 19.93 |
| + CMQE | 16.50 | 23.01 | 21.10 | 15.46 | 20.70 | 19.75 | 15.98 | 21.86 | 20.43 |

Generally, the performance of these models is comparable. However, MEQE performs slightly better when used together with GQE, FuzzQE, and Q2P, while it is comparable to Neural MLP. 
Although the memory contents do not have constraints on the query answers, the subtle performance improvement can be explained from two perspectives. First, the informational atomics are sampled from the edges related to the queries, providing additional information about the entities in the query, even though they do not have a direct impact on the answers. Second, MEQE has more parameters than the baseline models. However, as we explained in our previous reply, the structural changes are not the main reason for the performance improvement, as shown in our ablation study. We hope that this explanation clarifies the performance comparison between the backbone and MEQE models. Thank you again for your valuable feedback.
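The relevance-gated memory read discussed in this thread (inner-product relevance against the constraint memory, attentional aggregation over the stored relation/tail values, and a feed-forward layer that adjusts the value's direction before it is added to the query embedding) could be sketched as follows. The tensor shapes, the softmax aggregation, and the tanh feed-forward adjustment are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sketch of a relevance-gated memory read (shapes are assumptions):
#   query:      (d,)   current query embedding
#   mem_keys:   (m, d) head-eventuality embeddings of the m stored constraints
#   mem_values: (m, d) combined relation + tail embeddings of those constraints
#   w_ffn:      (d, d) feed-forward weights that adjust the read value's direction
def memory_read(query, mem_keys, mem_values, w_ffn):
    relevance = mem_keys @ query       # inner-product relevance to each head eventuality
    weights = softmax(relevance)       # attentional aggregation over constraints
    read = weights @ mem_values        # weighted sum of (relation, tail) memory values
    adjusted = np.tanh(w_ffn @ read)   # feed-forward layer adjusts the value's direction
    return query + adjusted            # updated query embedding
```

The direction adjustment is what lets the model push the query embedding *away* from answers that a relevant constraint rules out, rather than always toward them.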
Summary: The paper proposed a reasoning task, "Complex Eventuality Query Answering" (CEQA). CEQA is performed over an EVKG and is different from traditional CQA over an entity-centric KG. The authors of the paper clearly explain the new task, and further discuss a memory-augmented method to improve models' performance on CEQA. Strengths: The paper made two contributions. First, it proposed the CEQA task and discussed its difference from the traditional CQA task. Then, the paper proposed a memory augmentation method for query encoding. The authors clearly discussed the CEQA task and emphasized its importance in logical reasoning. Weaknesses: I have a few quick comments on the paper. It may be worth briefly discussing System 1 and 2 for the completeness of the paper; readers may be unfamiliar with the terms. Please consider discussing your dataset created from ASER earlier in the paper and maybe go through a few examples. This task is new, to my knowledge, to many readers including myself. Maybe also discuss ASER a bit more. How is ASER constructed? I have another concern about the quality of ASER, the backbone database of the proposed task. Some statements and/or reasoning can be ambiguous or debatable. For example, someone can say "PersonX adds soy sauce" is the cause of "Food is bad" due to some other implicit information in addition to the temporal one. How did ASER (or other similar databases) resolve this problem? How is ASER related to and different from Commonsense QA? Both require reasoning with implicit information. Are there other relevant tasks in addition to the ones you described in the paper? Please include them in the Related Work section. Are you able to experiment with a few more datasets? I understand this is a new task with limited resources. However, the experimental results in this paper are too limited to fully prove the effectiveness of your proposed method. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see above. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper proposed a novel reasoning task potentially useful for building more powerful and general reasoning systems. The paper has made substantial contribution in proposing new tasks, but is limited in modeling and experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions and effort in reviewing this paper. W1 System 1 and System 2: In short, the theory of System 1 and System 2 reasoning, proposed by Daniel Kahneman, suggests that human thinking can be divided into two systems. System 1 operates automatically and intuitively, making quick judgments and performing routine tasks effortlessly. System 2, on the other hand, engages in deliberate and analytical thinking, requiring conscious effort. In the context of logical query answering, System 2 reasoning is engaged when we need to carefully analyze the question, consider relevant information, and apply logical rules to arrive at a correct answer. We will incorporate a brief discussion in our introduction. W2 Curated benchmark and ASER: We will add a subsection presenting more detailed statistics, the knowledge format, and examples of ASER and our constructed benchmark. For your information, ASER is constructed via two main steps: extraction and conceptualization. The extraction step involves extracting eventualities (events or situations) from a collection of large corpora. Then, in the conceptualization step, the authors use a graph-based approach to represent the extracted eventualities and their relations as a knowledge graph. This involves identifying and linking semantically related eventualities and relations to form a large-scale eventuality knowledge graph. W3: The question posed is a good one, and to answer it, we need to understand how the edges of ASER are constructed from text using information extraction methods. When a statement like "PersonX adds soy sauce" is identified as the cause of "Food is bad," and both phrases appear in the text corpus above a certain frequency, they are recorded in the ASER graph. Similarly, phrases like "PersonX sleeps" before and after "PersonX takes a shower" can both be plausible high-frequency edges that are stored in the KG, despite being contradictory in a specific situation. 
It's important to note that, in a random subgraph of ASER or a similar database, not all edges can hold simultaneously in a specific situation. This is a characteristic of a database describing events and activities, not a shortcoming of the database itself. The KG is designed to provide all possible answer candidates, while logical verification is left to the reasoning process. This is the main motivation behind the approach taken in this paper. W4 Difference from commonsense QA: As we discussed with Reviewer DAya, our task differs from other QA or implicit reasoning tasks. Our task is distinct from traditional question answering tasks due to its wide scope, encompassing various relationships, including non-commonsense relations found in Treebank 2.0. This resource provides additional relations, making our task more complex. In particular, we encounter queries that involve relationships unique to the eventuality level, such as co-occurrence, conjunction, and contradiction. These intricate connections cannot be effectively addressed using commonsense question answering methods. Our main focus is on complex query answering, where queries center around intricate relationships between eventualities. Unlike existing commonsense knowledge graphs (CSKGs), which typically handle relations involving two events in a triple, our task involves multiple events within a single query-answer pair. This presents a challenge in formulating our task as either a knowledge graph completion (KGC) or question-answering (QA) task, as such formulations would require discarding most query constraints, reducing complexity, and simplifying it into a basic query answering task. While commonsense knowledge may play a role in answering our queries, it is not as prevalent as in other tasks. Additionally, our task does not heavily rely on the semantic information of the query itself; instead, it relies on learning graph structures to perform query answering and reasoning. 
We utilize the inherent structure of the graph rather than relying solely on natural language processing. As for relevant tasks, there are several complex query answering tasks that share similar settings with the one in our paper, like the EFO1 benchmarks [2]. We will make sure these differences are more clearly discussed in our paper. W5 More dataset: Thank you for your suggestion. Incorporating additional datasets would undoubtedly strengthen our paper. However, at this stage, we have been unable to identify other viable datasets. The main challenge lies in finding knowledge bases or graphs that encompass both (i) eventualities, including states, actions, and events, as nodes, and (ii) comprehensive relationships between these eventualities, as edges. We will remain vigilant and explore any new resources that become available in the future. Reference: [1] Daniel, K. (2017). Thinking, fast and slow. [2] Wang, Z., Yin, H., & Song, Y. (2022). Benchmarking the Combinatorial Generalizability of Complex Query Answering on Knowledge Graphs. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021).
Summary: This work aims to conduct the complex logical query task over eventuality-centric KGs (EVKGs) and proposes the Complex Eventuality Query Answering (CEQA) setting, which considers the implicit constraints of the temporal order and occurrence of eventualities. The authors also propose a memory-enhanced query encoding method and achieve state-of-the-art performance on the CEQA task. Strengths: 1. very interesting and important research problem 2. the proposed MEQE module enhances the state-of-the-art query encoders on the CEQA task 3. the paper is overall well-organized and well-written Weaknesses: 1. the novelty of the memory-enhanced module is limited, since the memory mechanism has been proposed in many prior works such as [1] 2. Recently, path-based KG reasoning methods such as GNN-QE [2] have achieved much research progress and show stronger reasoning ability than embedding-based methods. However, the proposed MEQE seems unable to incorporate path-based methods, which do not learn a representation for each node and relation in KGs. [1] Rossi, E., Chamberlain, B., Frasca, F., Eynard, D., Monti, F., & Bronstein, M. (2020). Temporal graph networks for deep learning on dynamic graphs. arXiv preprint arXiv:2006.10637. [2] Zhu, Z., Galkin, M., Zhang, Z., & Tang, J. (2022, June). Neural-symbolic models for logical queries on knowledge graphs. In International Conference on Machine Learning (pp. 27454-27478). PMLR. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: please refer to the weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Re W1: Thank you for pointing out this reference; we will cite the corresponding papers. Meanwhile, we would like to argue that we are the first work to use memory modules in the problem of query encoding, and the first work to propose using memory modules to encode the logical constraints during the query encoding process. This is a novel idea worth noticing in the community of knowledge graph reasoning. Re W2: Yes, we admit that the current design of our approach cannot be directly used together with GNN-QE. In the current research context, it is the only method that uses a GNN over the underlying knowledge graph. However, GNN-QE obtains its superior performance at a high computational cost. Here are the reasons: 1. The projection operation of GNN-QE relies on GNN operations, with time complexity $O(|V|d^2 + |E|d)$, where $|V|$ is the number of vertices, $|E|$ is the number of edges, and $d$ is the GNN hidden size. The inference time grows linearly with the size of the KG. However, the projection operations of other QE methods are $O(d^2)$, independent of the KG size. 2. The query representation size of GNN-QE is $|V|$ instead of $d$ as in previous methods. In previous work, its query embedding size is around 15,000 (the number of vertices), while other QE methods have a query embedding size of around 300-400. Moreover, previous query encoding methods can be scaled up to graphs with 86,000,000 nodes [1], yet we are not confident that GNN-QE can also scale up to large graphs. Meanwhile, we also evaluate the performance of FuzzQE, another fuzzy-logic-based method, with a fixed query embedding size. [1] Ren, H., Dai, H., Dai, B., Chen, X., Zhou, D., Leskovec, J., & Schuurmans, D. (2022, August). Smore: Knowledge graph completion and multi-hop reasoning in massive knowledge graphs. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1472-1482).
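The complexity argument above can be made concrete with a back-of-the-envelope calculation; the graph and embedding sizes used here are illustrative assumptions, not measurements:

```python
# Per-projection cost comparison following the complexity argument above.
# V, E, d below are hypothetical sizes for a mid-sized KG, not benchmarks.
def gnn_projection_cost(num_nodes, num_edges, d):
    return num_nodes * d ** 2 + num_edges * d   # O(|V| d^2 + |E| d), grows with KG size

def embedding_projection_cost(d):
    return d ** 2                               # O(d^2), independent of KG size

V, E, d = 15_000, 500_000, 400                  # illustrative node/edge counts, hidden size
ratio = gnn_projection_cost(V, E, d) / embedding_projection_cost(d)
```

Under these assumed sizes the GNN-based projection is several orders of magnitude more expensive per operation, which is the scaling concern raised in the response.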
Summary: This paper proposes an approach to address the challenge of complex query answering on eventuality knowledge graphs by integrating implicit logical constraints. The authors introduce the task of complex eventuality query answering (CEQA), which requires considering the occurrence and temporal order of eventualities. Methodologically, the paper encodes the edges of the knowledge graph containing these constraints as key-value pairs, which are then integrated into the attention mechanism. The authors extracted eligible data from the ASER dataset and conducted experiments combining their proposed method with various query encoding models, demonstrating improved performance. Strengths: - The paper's motivation is solid, aiming to incorporate logical information from eventualities into complex query answering. - The method serves as an effective way to exploit additional information, adaptable to various query encoding models. - The constructed CEQA dataset can provide insights for related tasks. Weaknesses: - The paper dedicates excessive description to the introduction. The use of multiple representations of first-order logic may be misleading, as the actual encoding is relational. - The work assumes the existence of a knowledge graph and constructs datasets using theorem provers, limiting its generalizability to other tasks. - The paper introduces two types of constraints and presents contradictory answers in the data, but lacks analysis of these categories in the methods and experiments. While Table 2 provides statistics on constraints and contradictory answers, conducting further experiments specifically targeting these categories would provide valuable insights into the functioning of the model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Figure 3, please provide a brief explanation of the conflict of the "Precedence" relation in the two subfigures. 
- Could you provide a concise overview of the structure of the permutation-invariant neural network for the intersection operation in Section 3.1? - Consider reducing the number of method-independent first-order logic examples and representations in order to allocate more space for comprehensive experimental and methodological analyses of the defined categories in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no concerns about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! I would like to address your concerns one by one. Re: W1 We clarify that the proposed problem is formally defined in logical form. Meanwhile, relational encoding is part of the query encoding method; it is possible that other methods for this task will not use relational encoding. Moreover, the definition in logical form is necessary, because we use the logical expression as input to the theorem prover to verify the correctness of the answers. This is an important part of our problem definition. Re: W2 In this paper, our proposed method is a logical query answering method on KGs, so we do need a knowledge graph, as it is part of our problem definition. However, we argue that it is a novel and generalizable idea to use a theorem prover to verify the consistency of the relations among multiple events. This is because events can also be obtained from other sources, for example text, instead of being sampled from a KG. As a specific example, we could use this idea to verify the logical consistency of stories generated by language models: first extract events from the stories and use discourse parsers to create edges among the events, then apply the theorem prover on this graph of events to check for contradictions in the occurrence and the order of occurrence of events. Re: W3 We would like to clarify that we do indeed include the corresponding results for the two types of constraints and contradictory answers in Table 3. This table has nine columns: the three columns on the left show the results with occurrence constraints, the three columns in the middle show the results on temporal constraints, and the last three columns show the averaged results. Re: Q1 In Figure 3, V is something that happens before a person complains and leaves the restaurant; according to the KG, V could be either "Service is bad" or "Food is bad". If V? 
is the reason of V, then according to the graph, V? could be "Staff is new", "PersonY adds ketchup", "PersonY adds soy sauce", or "PersonY adds vinegar". However, from the query we also know that "PersonY adds vinegar" does not happen, and that "PersonY adds soy sauce" happens after "Food is bad" and thus cannot be its reason. The conflict here is that causality implies precedence, and a specific event cannot happen both before and after another event. Re: Q2 The relation projections and intersection operations are adopted from the backbone models. Different backbone models use different parametrizations of intersection/union, but they are all deep-set functions [1]. * The GQE [2] model uses a feed-forward layer, followed by average pooling and then a matrix multiplication. * Query2Particles [3] uses self-attention for intersection modeling. * FuzzQE [4] uses element-wise fuzzy logic operations in the embedding space for intersections and unions. * Neural-MLP [5] uses an MLP-Mixer as its intersection module. All the baseline models have their unique designs, but they share one thing in common: their intersection operations are all invariant to permutation. They are all special types of deep set functions [1]. We will include their formulas in the appendix of this paper. Re: Q3 Thank you for your advice; we will move parts of the logical definitions to the appendix so that we can include the newly added experiment results in the paper. Reference: [1] Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., & Smola, A. J. (2017). Deep sets. Advances in neural information processing systems, 30. [2] Hamilton, Will, et al. "Embedding logical queries on knowledge graphs." Advances in neural information processing systems 31 (2018). [3] Jiaxin Bai, Zihao Wang, Hongming Zhang, and Yangqiu Song. 2022. Query2Particles: Knowledge Graph Reasoning with Particle Embeddings. 
In Findings of the Association for Computational Linguistics: NAACL 2022. Association for Computational Linguistics, Seattle, United States, 2703–2714. [4] Chen, Xuelu, Ziniu Hu, and Yizhou Sun. "Fuzzy logic based logical query answering on knowledge graphs." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022. [5] Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, and Ce Zhang. Neural methods for logical reasoning over knowledge graphs. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=tgcAoUVHRIB. --- Rebuttal Comment 1.1: Comment: I have carefully reviewed the comments from other reviewers and the authors' responses. The authors' replies have addressed most of my concerns. Concerning W3, I would appreciate a more comprehensive analysis beyond what is presented in Table 3. Specifically, I am interested in how models constrained exclusively by occurrence can avoid conflicts related to both occurrence and temporal aspects. This curiosity arises from the already provided distribution of contradictory answers in Table 2 (by the way, there are spelling errors for "occurrence" in Tables 2 and 3). My primary recommendation revolves around the structure of the paper. While the introduction of a new task necessitates the provision of background information, I believe that certain content could be drawn from references and the Appendix. What I am emphasizing is a requirement for more intricate model details and insightful experimental analysis in the primary content. Thus, I am inclined to maintain my current score. --- Reply to Comment 1.1.1: Comment: We appreciate your time and effort in reading our rebuttal. To address your concerns, we would like to provide further clarification and analysis. We will address each comment individually. 
Comment: How can models constrained exclusively by occurrence avoid conflicts related to both occurrence and temporal aspects? Response: We would like to clarify that our model is not exclusively constrained by occurrences. As depicted in Figure 3, the model is constrained by informational atomics, which may include either occurrence or temporal constraints. These constraints are constructed and filtered using the z3 prover. As explained in the paper (lines 229-240), we create our data using theorem provers for occurrence constraints (true or false) and a linear program solver that treats the occurrence time of events as continuous variables (floating-point numbers) to detect potential temporal contradictions. Our MEQE model captures the constraints within informational atomics by first computing relevance scores and then adding the corresponding constraint information to the query embedding. The effectiveness of this method is demonstrated in Table 3: our model can capture implicit constraints, and the results show that it works for both occurrence and temporal constraints. Comment: Model details and experimental analysis Response: As mentioned earlier, Table 3 shows the improved performance of MEQE across four different backbones on a dataset that includes queries with both occurrence and temporal constraints. To further explain the performance improvement of MEQE, we conducted an ablation study, as detailed in our reply to reviewer YBho. Here are the key findings from the ablation study: the relevance score effectively identifies corresponding constraints, and the feed-forward layer is essential for adjusting the direction of the memory contents incorporated into the query embedding; removing the FFN layer significantly reduces performance. We hope our additional explanations and ablation study results provide further insights into Table 3 and adequately address your concerns. 
| Models | Occ. Hit@1 | Occ. Hit@3 | Occ. MRR | Temp. Hit@1 | Temp. Hit@3 | Temp. MRR | Avg. Hit@1 | Avg. Hit@3 | Avg. MRR |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| GQE | 8.92 | 14.21 | 13.09 | 9.09 | 14.03 | 12.94 | 9.12 | 14.12 | 13.02 |
| + CMQE | 10.20 | 15.54 | 14.31 | 10.70 | 15.67 | 14.50 | 10.45 | 15.60 | 14.41 |
| + CMQE - Constraints | 8.29 | 12.87 | 11.62 | 8.80 | 13.02 | 12.17 | 8.54 | 12.95 | 11.90 |
| + CMQE - FFN | 0.67 | 1.17 | 1.13 | 0.74 | 1.23 | 1.12 | 0.70 | 1.19 | 1.08 |
| Q2P | 14.14 | 19.97 | 18.84 | 14.48 | 19.69 | 18.68 | 14.31 | 19.83 | 18.76 |
| + CMQE | 15.15 | 20.67 | 19.38 | 16.06 | 20.82 | 19.74 | 15.61 | 20.74 | 19.56 |
| + CMQE - Constraints | 14.16 | 20.00 | 18.86 | 14.72 | 19.92 | 18.79 | 14.44 | 19.96 | 18.82 |
| + CMQE - FFN | 12.77 | 16.63 | 15.89 | 12.74 | 16.83 | 14.75 | 12.76 | 16.73 | 15.32 |
| Neural MLP | 13.03 | 19.21 | 17.75 | 13.45 | 19.06 | 17.68 | 13.24 | 19.14 | 17.71 |
| + CMQE | 15.26 | 20.69 | 19.32 | 15.91 | 20.63 | 19.47 | 15.58 | 20.66 | 19.40 |
| + CMQE - Constraints | 13.33 | 19.15 | 17.94 | 13.49 | 19.18 | 14.48 | 13.41 | 19.16 | 18.08 |
| + CMQE - FFN | 10.35 | 14.67 | 13.71 | 10.94 | 14.67 | 12.74 | 10.64 | 14.67 | 14.53 |
| FuzzQE | 11.68 | 18.64 | 17.07 | 11.68 | 17.97 | 16.53 | 11.68 | 18.31 | 16.80 |
| + CMQE | 14.76 | 21.12 | 19.45 | 15.31 | 21.01 | 19.49 | 15.03 | 21.06 | 19.47 |
| + CMQE - Constraints | 12.69 | 19.92 | 17.68 | 13.53 | 18.25 | 17.91 | 13.11 | 19.08 | 17.80 |
| + CMQE - FFN | 9.81 | 15.26 | 14.46 | 10.17 | 15.37 | 14.87 | 9.99 | 15.31 | 14.66 |

Re: Restructuring the paper. Thank you for your suggestions on restructuring our paper. 
As we are unable to modify the paper directly this year, we will implement your suggestions by drawing the details of model definitions and parameterizations of backbone models from the reference and appendix and incorporating them into the primary content. Additionally, we will move the detailed logical definitions and examples to the Appendix.
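The data-construction step described in this rebuttal thread verifies occurrence constraints with the z3 theorem prover and treats occurrence times as continuous variables to detect temporal contradictions. As a simplified, hypothetical analogue (the prover is replaced here by plain graph cycle detection), a set of "A before B" constraints admits real-valued timestamps if and only if the "before" graph has no directed cycle:

```python
# Simplified stand-in for the temporal-consistency check described above.
# The paper uses a solver (z3 / linear programming); this sketch uses the
# equivalent observation that "A before B" constraints are satisfiable by
# real-valued timestamps iff the "before" graph is acyclic.

def temporally_consistent(before_edges):
    """Return True iff constraints {(a, b): a happens before b} are satisfiable."""
    graph = {}
    for a, b in before_edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def has_cycle(v):
        color[v] = GRAY          # v is on the current DFS path
        for w in graph[v]:
            if color[w] == GRAY:  # back edge -> directed cycle
                return True
            if color[w] == WHITE and has_cycle(w):
                return True
        color[v] = BLACK
        return False

    return not any(has_cycle(v) for v in graph if color[v] == WHITE)

# Consistent ordering: "food bad" -> "complain" -> "leave"
ok = temporally_consistent([("food bad", "complain"), ("complain", "leave")])
# Adding "leave" before "food bad" closes a cycle -> contradictory
bad = temporally_consistent([("food bad", "complain"),
                             ("complain", "leave"),
                             ("leave", "food bad")])
print(ok, bad)
```

The same idea underlies the "causality implies precedence" conflict discussed earlier: an event appearing both before and after another event creates exactly such a cycle.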
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a new task of complex event knowledge graph completion, curates datasets from an existing event knowledge graph, and designs a solution. However, the task's motivation is not very clear regarding the formal logic form, as each event is still expressed in natural language, especially considering that event chains are likely to break down as reasoning paths grow. The authors need to further discuss the difference and importance of the proposed task compared with commonsense reasoning or commonsense knowledge graph completion. Weaknesses 1. The paper is hard to follow and needs further polish. For example: 1. Lines 53-60: what's the difference between entity-centric and event-centric KGC? The given example is not convincing and seems artificial. If the event does not occur, it may not need to be included in the KG. So the difference lies in the open-world versus closed-world assumption, not in being entity- or event-centric. 2. What is the motivation for formulating commonsense knowledge in formal logic form? Commonsense knowledge holds true only with some probability; that is, as paths grow longer, the reasoning chains are more likely to be false. 2. The proposed task looks similar to commonsense reasoning, so it is necessary to discuss and compare with the datasets and baseline methods for commonsense question answering or commonsense knowledge graph completion. 3. What is the quality of the curated benchmark? Strengths: see summary Weaknesses: see summary Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: see summary Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: see summary Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

| | Commonsense KG | Commonsense KG | Eventuality KG |
|---|---|---|---|
| KG | Atomic | ConceptNet | ASER |
| Node | Event-Centric (mostly) | Entity-Centric (mostly) | Eventuality |
| Edges | If A then B; A because B; A as a result B; A before B; A after B. | Entity relations … | If A then B; A because B; A as a result B; A before B; A after B. Narrative Relations: **Although A, B; A but B; A and B; A for example B; A in other words B; A or B; Instead of A, B; A except B.** |

First of all, we would like to clarify that **not all relations between events/eventualities are commonsense relations**. As we have explained in Section 2, the scope of relations we adopt in this paper is discourse relations, which include four general types: **temporal** (before/after), **contingency** (because/result), **comparison** (but/although), and **expansion** (and/or/except/instead). Commonsense relations, on the other hand, mainly focus on two of these types: **contingency** and **temporal**. The occurrence constraints discussed in this paper primarily exist in the **expansion** type, which does not appear in commonsense KGs but does exist in eventuality KGs. Based on this clarification, we will address your concerns: Re W1.1: As shown in lines 33-34 and 44-45, the major difference is that the vertices of entity-centric KGs are entities, while the vertices of event-centric KGs are events. One simple way to distinguish events from entities is that events contain verbs and can be associated with true/false values indicating whether the event happens in a specific situation, whereas entities are all nouns. Meanwhile, the open-world assumption means that the missing edges in a knowledge graph are unknown rather than false. 
Regardless of the kind of KG, this assumption is adopted in almost every knowledge graph task, such as KG completion, complex query answering, and rule mining. Re W1.2: Yes, as you have said, commonsense knowledge is something that mostly happens in a situation. But commonsense modeling does not go down to the level of our event modeling, in which we care about the occurrence of events under a specific situation. This is why we need logical reasoning in our task. Re: W2 Difference between Commonsense Question Answering and Complex Eventuality Query Answering (CEQA): CommonsenseQA is a benchmark dataset for testing the ability of NLP models to reason with commonsense knowledge, posed as a multiple-choice problem. Commonsense QA problems do not necessarily include events and the relations between events. Here is an example: * Where can I stand on a river to see water falling without getting wet? * A. waterfall, B. bridge, C. valley, D. stream, E. bottom Commonsense KG completion is the task of predicting the tail node given the head and relation; the evaluation is a ranking task. For ConceptNet: * Question: (bacteria, causes, V) * Answers: [tooth decay, infection in cut] For Atomic: * Question: (X repels Y's attack, Xattribute, ?) * Answers: [X is strong, X is skilled, X is brave] Our task differs from traditional question answering tasks because it encompasses a broad range of relationships, including non-commonsense relations as seen in Treebank 2.0, which provides numerous additional relations. Some complex queries include relations that exist only at the eventuality level, such as co-occurrence, conjunction, and contradiction, and these cannot be effectively simulated by commonsense question answering. As a result, our focus lies in complex query answering, where queries primarily revolve around intricate connections between eventualities. 
Unlike existing commonsense knowledge graphs (CSKGs), which can only represent relations involving two events in a triple, our task involves far more than two events in a single query-answer pair. This makes it difficult to formulate our task as either a knowledge graph completion (KGC) or a question-answering (QA) task, since such formulations would discard most query constraints, diminish the complexity, and reduce it to a simpler query answering task. Although commonsense knowledge may come into play when answering our queries, it is not as prevalent as in other tasks. Rather than relying on human-level commonsense reasoning or on the semantic information of the query, our task relies on learning graph structures to perform query answering and reasoning, utilizing the structure of the graph itself instead of natural language. Thus, we believe that our task is significantly different, and methods for those tasks cannot simply be migrated to it. Re: W3 Our benchmark dataset is derived directly from ASER without any complex transformations. ASER has been demonstrated to possess exceptional eventuality and discourse extraction quality, with an accuracy of over 90%. This means that the extracted information is highly accurate and representative of the original semantic meanings. As a result, we are confident in the reliability of our benchmark's query-answer pairs. We used the Amazon Mechanical Turk platform to conduct human annotations and assess the plausibility of each query-answer pair. The workers received thorough training, including detailed instructions and qualification rounds to ensure their accuracy was above 90%. After annotating 1200 query-answer pairs, we calculated the statistics and found that 86.5% of the answers were considered plausible. These results align with our expectations and validate our hypothesis. 
We assure you that we will carefully consider these discussions and incorporate them into our paper's final version. Additionally, we plan to conduct case studies to further demonstrate the quality of our benchmark.
Statistical Insights into HSIC in High Dimensions
Accept (poster)
Summary: The paper investigates the performance of HSIC for testing the independence of two random vectors. The focus is on high- but not ultra-high-dimensional scenarios, where theory is lacking. More specifically, the paper presents convergence rates for HSIC as the dimensions grow at different rates, and demonstrates how HSIC's capacity to measure nonlinear dependence evolves as the dimensions increase. The paper also shows that the rescaled HSIC converges in distribution to a standard normal distribution under the null hypothesis and provides the conditions needed to have nontrivial power in high dimensions. The theory is validated by simulations and real-world data involving stock prices from the energy sector and raw material sector in the US stock market. Strengths: This is a solid paper with strong theory and convincing numerical support. A key advantage of Theorem 1, which provides the asymptotic distribution (with rates) of the HSIC test statistic under the null hypothesis, is that it avoids the use of a permutation test to decide critical values, a requirement commonly observed in other tests of independence. The phase transition of the convergence rates in Theorem 3 is illuminating. Weaknesses: None. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How different is the theory between yours and the 2021 paper by Gao et al. in the Annals of Statistics? What are the differences in technical tools and level of difficulties? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Some constraints are imposed in the theory but they are fairly mild and reasonable. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for taking the time to read our paper and for your valuable feedback and constructive suggestions. Comment 1: How different is the theory between yours and the 2021 paper by Gao et al. in the Annals of Statistics? What are the differences in technical tools and level of difficulties? Response 1: We appreciate your interest in our paper and the work of Gao et al. (2021). Indeed, our first results are parallel to theirs. In particular, we prove that the rescaled HSIC converges in distribution to a standard normal distribution under the null hypothesis. We also derive a general condition for the HSIC based tests to have power asymptotically approaching one. However, the main difference between our theory and theirs is that we focus on the HSIC, which includes the distance correlation as a special case. In contrast to their paper, our paper provides a much more extensive analysis of HSIC. We demonstrate that HSIC can detect different kinds of dependences that depend on the dimensionality and sample orders, which is a novel and important insight that has not been explored in the literature before. Furthermore, our proof technique in this part is completely different from theirs, which indicates a higher level of difficulty. This is the main contribution of our paper and sheds light on the performance of HSIC in high dimensions. References Gao, L., Fan, Y., Lv, J., and Shao, Q.-M. (2021). Asymptotic distributions of high-dimensional distance correlation inference. The Annals of Statistics, 49(4):1999–2020.
Summary: The paper provides insights into the properties of HSIC in high dimensions, more specifically the rate at which sample size must grow in order to detect non-linear correlations if data is high dimensional. The results are categorized based on scenarios where either one or both variables have a "growing" dimension, and they express various types of nonlinearity using conditional expected values of higher orders. A great summary of the results can be found in lines 61 to 64 of the paper. Strengths: The paper effectively presents the main results and is written with clarity. In my opinion, the results are significant within the sub-area of independence testing. It has been recognized for some time that the power of HSIC diminishes in high dimensions, but as far as I know, no one has provided conditions on the sample size, dimension, and degree of non-linearity for the test to detect a signal. Formulating the results in the language of conditional expected values of higher moments yields an insightful and concise characterization of the limitations of HSIC in high dimensions. Weaknesses: I'm happy to see an empirical study (section 5.3) in a theoretical paper. Having said that, I would be surprised if a linear test would not reject the null (pool the returns per sector and run a correlation-based test). Edit: I read the response, and this still doesn't fully make sense to me. I would have chosen a dataset with a non-linear dependence only and shown HSIC failing. However, ultimately, this does not detract from the quality of the findings and I stand by 7 (8 makes sense too). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is there related work for MMD? Seems like similar results should/could hold for MMD. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for taking the time to read our paper and for your valuable feedback and constructive suggestions. Comment 1: I'm happy to see an empirical study (section 5.3) in a theoretical paper. Having said that, I would be surprised if a linear test would not reject the null (pool the returns per sector and run a correlation-based test). Response 1: Indeed, in this example, the monthly mean stock prices of energy companies are without a doubt linearly dependent on those of the raw material companies. We use this example to confirm the assertions we made in Theorem 3. In particular, according to the second part of Theorem 3 and the discussions at the end of Section 4, HSIC can only have nontrivial power if $p^{(s_1-1)\kappa_x}q^{(s_2-1)\kappa_y}=o(n)$. Because in this data set both p and q are much larger than n, this condition is only satisfied if $s_1=s_2=1$, which corresponds to the covariance between x and y. This, together with the fact that the HSIC-based test rejected the null hypothesis, leads us to conclude that there exists a linear dependence relationship between x and y. This type of linear dependence is also confirmed by the RV coefficient, as well as by the $R^2$s we computed for Denison Mines Corp. and Energy Fuels Inc., and for Uranium Energy Corp. and Energy Fuels Inc. We will emphasize the motivation of this analysis more precisely in the revised version. Thank you for your comment. Please let us know if you have any further questions. Comment 2: Is there related work for MMD? Seems like similar results should/could hold for MMD. Response 2: Our results focus on dependence measures between two random vectors, which can be similarly adapted to the two-sample test context using MMD. In fact, our results can shed light on the performance of MMD in high dimensions, as we can translate two-sample problems into independence problems. 
To illustrate this idea intuitively, we define two new random variables U and V as follows: V is either identical to X or Y, and U is set to be 1 if V=X and 0 if V=Y. Then, $f_X(t)=f_Y(t)$ is equivalent to $f_{V\mid U=1}(t) = f_{V\mid U=0}(t)$. This means that U and V are independent, since the conditional distribution of V given U does not depend on the value of U. Therefore, testing whether X and Y have the same distribution is equivalent to testing whether U and V are independent. Note that when X and Y are high-dimensional covariates, V is also high-dimensional and U is univariate. Hence, we can apply the first part of Theorem 3 to understand the behavior of MMD in high dimensions.
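The U/V construction above can be made concrete with a small numerical sketch (illustrative only, not from the paper): pool samples of X and Y into V, label them with U, and compute a biased empirical HSIC, $\mathrm{HSIC} = \operatorname{tr}(KHLH)/n^2$, between U and V with Gaussian kernels. The bandwidth, sample sizes, and distributions below are arbitrary choices for illustration.

```python
# Hypothetical sketch of the two-sample-to-independence reduction described
# above. A large HSIC(U, V) relative to a permuted baseline indicates that
# the group label U and the pooled sample V are dependent, i.e. P != Q.
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels on the rows of X and Y."""
    n = X.shape[0]
    def gram(Z):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T  # squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 5))   # sample from P
Y = rng.normal(3.0, 1.0, size=(100, 5))   # sample from Q != P (mean shift)
V = np.vstack([X, Y])                      # pooled samples
U = np.vstack([np.ones((100, 1)), np.zeros((100, 1))])  # group labels

stat = hsic(U, V)                          # statistic under dependence
perm = hsic(U[rng.permutation(200)], V)    # one permuted (null-like) baseline
print(stat, perm)
```

In practice one would compare the statistic against many permutations (or the normal limit discussed in the rebuttal) rather than a single shuffle; the single permutation here is only meant to show the direction of the effect.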
Summary: A paper providing tighter analysis and tests for HSIC statistics for independence in some regimes of interest. NB I have only a nodding acquaintance with this statistic, but have used it and regard it as of high importance. I have done my best to learn the background in the time available. Strengths: The model provides interesting and non-trivial bounds for HSIC to test for dependence of high-dimensional covariates, which is an important and useful setting. If I understand correctly, they constrain the complexity of polynomial mean dependence that can be detected given covariate sizes and dimensions. Weaknesses: The conditions are very stringent, and so we are left to wonder if the results apply to real problems. The isotropic kernel choice is a very strong restriction, and one that I would never use in practice - heuristically we expect low discrimination power under isotropy for "most" kernel methods; it would not be surprising if this were true for HSIC in particular. I am not clear how essential the isotropy is, but the results start to look trivial if that is all they can handle: we might think of these as a "gaussian thin shell"-type result in that case. Independence is particularly important when it is conditional, at which point it gives us Bayesian networks; do any of these results survive for conditional independence? The authors mention such applications (l34-l37) but I believe thereafter discard them. I am taking the validity of the proofs here largely on faith. Nothing "looks" odd, but I have not stringently checked. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is the isotropy essential to these bounds or can we use other kernels? Do the bounds still hold if we relax isotropy to some "better" distance metric? How about if we use a kernel that incorporates some prior knowledge of the domain? How about other kernels, incl. nonstationary ones? 
Can I improve my bounds by using a polynomial kernel, which is not characteristic but might be useful for certain types of dependence? Or dot-product-type kernels? Can we apply these results to conditional independence testing, i.e. inferring graphical models? If not, the results are still cool, but they do not, IMO, offer anything practical. They would still be a useful improvement in our understanding, however, and thus I favour publishing this. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors are clear about necessary conditions for their theorems to hold, but sufficient conditions are not obvious to me. See above for questions about generality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for taking the time to read our paper and for your valuable feedback and constructive suggestions. Comment 1: Is the isotropy essential to these bounds or can we use other kernels? Do the bounds still hold if we relax isotropy to some "better" distance metric? How about if we use a kernel that incorporates some prior knowledge of the domain? How about other kernels, including nonstationary ones? Can I improve my bounds by using a polynomial kernel, which is not a characteristic kernel but might be useful for certain types of dependence? Or dot-product-type kernels? Response 1: Indeed, the isotropic kernel assumption is essential to the bounds derived in this paper. This kind of kernel is very commonly used in the literature and includes many positive-definite kernels such as the Gaussian kernel, the Laplacian kernel, the rational quadratic kernel, and kernels generating Sobolev spaces. We believe that our results can be similarly generalized to many other kinds of kernels, including non-stationary ones. However, this would require some additional technical assumptions and modifications of our proofs. We leave this as an open problem for future research. That said, there do exist some "better" distance metrics that could potentially improve the performance of our method. We are currently working in this direction, but we cannot share the details publicly at this moment. As further motivation, we can provide some references that explore some of these ideas in different settings. For example, Zhu et al. (2020) suggest aggregating marginal sample HSIC statistics as the test statistic instead of using HSIC over the full feature vectors. Chakraborty and Zhang (2021) propose a new distance for Euclidean space that is capable of detecting marginal nonlinear dependences in high dimensions. We hope this answers your question. 
Comment 2: Independence is particularly important when it is conditional, at which point it gives us Bayesian networks; do any of these results survive for conditional independence? The authors mention such applications (l34-l37) but I believe thereafter discard them. Can we apply these results to conditional independence testing, i.e. inferring graphical models? Response 2: Thank you for this inspiring suggestion. We believe that the results summarized in this work can be applied to conditional independence testing methods such as the conditional distance correlation (Wang et al., 2015) in high dimensions. However, this would require some extra effort to prove rigorously. Therefore, we leave this as an open problem for future research. References Chakraborty, S. and Zhang, X. (2021). A new framework for distance and kernel-based metrics in high dimensions. Electronic Journal of Statistics, 15(2):5455–5522. Wang, X., Pan, W., Hu, W., Tian, Y., and Zhang, H. (2015). Conditional distance correlation. Journal of the American Statistical Association, 110(512):1726–1734. Zhu, C., Zhang, X., Yao, S., and Shao, X. (2020). Distance-based and RKHS-based dependence metrics in high dimension. The Annals of Statistics, 48(6):3366–3394.
Summary: The authors provide statistical properties of HSIC, a measure of dependence between two random variables. When the random variables are high-dimensional and exhibit nontrivial dependence, the authors provide conditions on the number of samples required to successfully detect the dependence. Strengths: For the two cases when only one variable is high-dimensional and when both variables are high-dimensional, the authors provide sample-size conditions under which nontrivial dependence can be reliably detected. In particular, when there is no lower-order dependence, the authors show that HSIC has difficulty measuring the dependence appropriately. Because the analysis is asymptotic, it is not guaranteed that the tendency appears in finite dimensions. However, in the experiments shown in this manuscript, the tendency is clearly visible in both synthetic and real data. Weaknesses: The choice of kernels should be related to the argument that HSIC only measures linear dependences (l336). What is the effect of a change of \gamma on the equations in Theorem 3 and the experiments? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Is the derived condition for Gaussians (l58-l59)? The authors should mention those conditions in every theorem explicitly. 2. In Theorem 2, what happens to n*hCorr^2 when n^{1/2}hCorr^2 is infinite but (A1) or (A2) does not hold? 3. What are the curves shown in Figure 1 and Figure 2? What are the horizontal and vertical axes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The comparison with other methods such as distance correlation could provide more information about the difficulty in each case provided in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for taking the time to read our paper and for your valuable feedback and constructive suggestions. Comment 1: Choice of kernels should be related to the argument that HSIC only measures linear dependences (l336). Response 1: Thank you for raising this important point about the choice of kernels. The choice of kernels does not influence the theoretical results, provided that the chosen kernels are isotropic and satisfy Assumption (A2). This covers many commonly used kernels such as the Gaussian kernel, the Laplacian kernel, the rational quadratic kernel, and kernels generating Sobolev spaces. In the empirical study, both dimensions p and q are much larger than n, and this is exactly the case where the conditions of the second part of Theorem 3 hold when $s_1=s_2=1$. In this circumstance, HSIC degenerates to the covariance, which only measures linear dependence. That explains why "HSIC only measures linear dependences" in line 336. Comment 2: What is the effect of the change of $\gamma$ on the equations in Theorem 3 and the experiments? Response 2: We appreciate your insightful question about the effect of the change of $\gamma$ on our results. Theoretically, the bandwidth parameter $\gamma$ can be chosen from a wide range of values, as long as it satisfies condition (A2). In practice, we use the median of $||z_1-z_2||$ as a default value for $\gamma$, since it ensures that Assumption (A2) holds for many common kernels. Moreover, our method is robust to different choices of $\gamma$. We will emphasize this in the revised version of the paper. We varied $\gamma$ from $0.5\gamma_m$ to $2\gamma_m$, where $\gamma_m$ is the median of $||z_1-z_2||$, and report the performance of our method on Examples 1 to 3. The results are consistent across different values of $\gamma$, showing the stability and reliability of our approach. 
For illustration, we present the empirical type-I error rates at the significance level $\alpha=0.05$ for Example 1 with $p=q=10$ in the table below. We hope this answers your question and clarifies our method. | | 0.5$\gamma_m$ | $\gamma_m$| 1.5$\gamma_m$| 2$\gamma_m$| |:------|:------|:------|:------|:------| |Gaussian| 0.0548 | 0.0580 | 0.0570 | 0.0554| |Laplacian| 0.0530 | 0.0590 | 0.0588 | 0.0586| Comment 3: Is the derived condition for Gaussians (l58-l59)? The authors should mention those conditions in every theorem explicitly. Response 3: We would like to clarify that we do not assume that the random vectors are Gaussian in our paper. In lines 58 to 59, the zero mean and identity covariance matrix assumptions are only for illustrative purposes, and they are not essential for our theoretical results. In all the theorems in our paper, we do not impose any assumptions on the distributions of the random vectors. We will clarify this point in the revised version of the paper. Comment 4: In Theorem 2, what happens to $n \cdot hCorr_n^2$ when $n^{1/2} hCorr^2$ is infinite but (A1) or (A2) does not hold? Response 4: Thank you for this question. Theorem 2 shows that if $n^{1/2}hCorr^2\to\infty$ and Assumptions (A1) and (A2) hold true, the test based on HSIC has power approaching 1: $n \cdot hCorr_n^2$ diverges to infinity under the alternative hypothesis, whereas it converges to a normal distribution under the null hypothesis. Assumption (A1) restricts the dependence structures within the coordinates of $z$, while Assumption (A2) imposes some conditions on the kernels and the bandwidth parameters. For example, Assumption (A1) would be violated if all coordinates of $z$ are identical, and Assumption (A2) would be violated if the Gaussian kernel is used and $\gamma_z$ is small enough such that $E||z_1^*-z_2^*||^2\to\infty$. We discuss these cases below Assumptions (A1) and (A2) in the paper. 
As for $n^{1/2} hCorr^2$, it goes to infinity as long as the dependence measured by $hCorr^2$ does not decay to zero too fast. We show in Section 4 that it captures different types of dependences, depending on the dimensionality and sample orders. Comment 5: What are the curves shown in Figures 1 and 2? What are the horizontal and vertical axes? Response 5: We apologize for not providing enough details in the captions of Figures 1 and 2. The figures show the kernel density estimates of the test statistics under the null hypothesis, computed from 5000 simulations. The horizontal axes represent the observed values of the test statistics, and the vertical axes represent the kernel densities of those values. We use two different kernels to implement the tests, namely Gaussian (dashed line) and Laplacian (dotted line). The solid line is the reference curve, which is the density of the standard normal distribution. We will revise the captions to include this information.
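As background for the statistic discussed throughout this rebuttal, a minimal NumPy sketch of the biased empirical HSIC with Gaussian kernels and the median-of-pairwise-distances bandwidth $\gamma_m$ described in Response 2; the function names and the exact kernel parameterization $\exp(-\|x-y\|^2/(2\gamma^2))$ are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def median_bandwidth(Z):
    # median of pairwise Euclidean distances ||z_i - z_j||
    # (the default gamma_m mentioned in Response 2)
    D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1))
    return np.median(D[np.triu_indices_from(D, k=1)])

def hsic_biased(X, Y):
    # biased empirical HSIC: trace(K H L H) / n^2, H = I - (1/n) 11^T,
    # with Gaussian kernels using median-heuristic bandwidths
    n = X.shape[0]
    gx, gy = median_bandwidth(X), median_bandwidth(Y)
    Dx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Dy = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-Dx / (2 * gx ** 2))
    L = np.exp(-Dy / (2 * gy ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2
```

With this sketch, strongly dependent samples (e.g. Y equal to X plus small noise) yield a visibly larger statistic than independent draws, mirroring the power discussion in the responses above.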
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This article deals with the problems of measuring nonlinear dependence between random vectors living in Euclidean spaces and testing for their independence. The authors provide statistical insights into the performance of one of the two major criteria, the Hilbert-Schmidt independence criterion (HSIC), when the dimensions of the random vectors grow at different rates. Their theoretical contribution is completed with an empirical study involving both artificial and real-world data sets. Strengths: The major strong point of the contribution seems to be the real data application, which could be of interest even to the non-specialist. Weaknesses: The paper will not appear as self-contained to the non-specialist (like me). The naive reader will find it contrary to intuition that the computation of a criterion measuring a basic statistical connection between random vectors should involve the choice of two kernel functions. This is all the more strange as the two vectors take their values in Euclidean spaces, but the Euclidean dot product is not an option to be favoured. Could this be motivated in a simple way? The originality of the contribution is difficult to assess, all the more since the other major criterion, the distance correlation (DC) criterion, has already been the subject of a similar study. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could the authors provide the definition of technical concepts, like the "degree of conditional mean of x given y", or the "n-th Kronecker power of x"? More generally, could they make the paper technically more self-contained for the non-specialist? Could the comparison between HSIC and DC be developed? The typos should be corrected. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: This criterion does not apply here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for taking the time to read our paper and for your valuable feedback and constructive suggestions. Comment 1: The paper will not appear as self-contained to the non-specialist (like me). The naive reader will find it contrary to intuition that the computation of a criterion measuring a basic statistical connection between random vectors should involve the choice of two kernel functions. Response 1: We agree that the choice of two kernel functions may seem unintuitive and complicated to some readers, especially those who are not familiar with the HSIC test. However, for the sake of generality and completeness, we allow them to be different in our theoretical study. In practice, one can choose the same kernel function for both variables, or use a common kernel function such as the Gaussian kernel, which has been shown to perform well in many previous studies (see, e.g., Albert et al., 2022). We hope this clarifies our motivation and rationale for choosing two kernel functions in our paper. Comment 2: This is all the more strange as the two vectors take their values in Euclidean spaces, but the Euclidean dot product is not an option to be favored. Could this be motivated in a simple way? Response 2: The Euclidean dot product is not an option to be favored because it does not capture nonlinear dependence between random variables. For example, the covariance uses the Euclidean dot product, and hence can only measure linear dependences. Kernel functions, on the other hand, can measure the dependence between the variables in a high-dimensional feature space, where nonlinear dependence can be better detected. This is the essence of the kernel trick, which is widely used in machine learning and statistics. We hope this explains why we do not use the Euclidean dot product in our paper. 
Comment 3: The originality of the contribution is difficult to assess, all the more since the other major criterion, the distance correlation (DC) criterion, has already been the subject of a similar study. Response 3: We appreciate the reviewer’s comment on the originality of our contribution. We agree that the distance correlation (DC) has been studied in the literature, such as in Zhu et al. (2020) and Gao et al. (2021). However, our paper is different from theirs in the following ways. - We study the HSIC, which is more general than DC. We prove its asymptotic normality under the null and a general condition for its consistency under the alternative. - We provide a much more comprehensive analysis of the HSIC based test in high dimensions than previous works. We show that HSIC can capture different types of dependences, depending on the dimensionality and sample orders, which have not been realized before. Our results characterize a full picture of the HSIC based test in high dimensions, while previous works only focused on some specific cases. We hope this clarifies the originality and contribution of our paper. Comment 4: Could the authors provide the definition of technical concepts, like the "degree of conditional mean of x given y", or the "n-th Kronecker power of x"? More generally, could they make the paper technically more self-contained for the non specialist? Response 4: Thank you for the kind reminder. The degree of conditional mean of x given y quantifies the difference between $E(x\mid y)$ and $Ex$, which is measured by $MD(x\mid y)$ in the paper. The n-th Kronecker power of x is defined as $x^{\otimes n} = x\otimes x^{\otimes(n-1)}$, $x^{\otimes 1} = x$, and $\otimes$ denotes the Kronecker product. We sincerely apologize for any lack of clarity in the previous version of the paper. We will make significant efforts to rephrase technical terms and provide more intuitive explanations to aid non-specialist readers. 
Comment 5: Could the comparison between HSIC and DC be developed? Response 5: We appreciate your interest in comparing HSIC and DC. As we mentioned in the paper, DC is a special case of HSIC when the distance-induced kernel is used, and our results also apply to DC. Therefore, the comparison between HSIC and DC depends on the choice of kernels. However, choosing an appropriate kernel for HSIC is not trivial. Therefore, we believe that there is no definitive answer to which method is better in theory, and the performance may vary depending on the data and the application. We hope this clarifies our point of view. Comment 6: The typos should be corrected. Response 6: We appreciate the reviewer's comment on correcting the typos in our manuscript. As suggested, we went through the whole manuscript carefully and made every effort to correct typos and grammatical errors. For instance, we corrected "fiar" to "fair" in line 44 and changed the first parenthesis to braces in line 299. References Albert, M., Laurent, B., Marrel, A., and Meynaoui, A. (2022). Adaptive test of independence based on HSIC measures. The Annals of Statistics, 50(2):858–879. Gao, L., Fan, Y., Lv, J., and Shao, Q.-M. (2021). Asymptotic distributions of high-dimensional distance correlation inference. The Annals of Statistics, 49(4):1999–2020. Zhu, C., Zhang, X., Yao, S., and Shao, X. (2020). Distance-based and RKHS-based dependence metrics in high dimension. The Annals of Statistics, 48(6):3366–3394.
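Response 4's recursive definition of the n-th Kronecker power can be made concrete in a few lines; a minimal NumPy sketch (the helper name `kron_power` is ours, not from the paper):

```python
import numpy as np
from functools import reduce

def kron_power(x, n):
    # x^{(x)n} = x (x) x^{(x)(n-1)}, with x^{(x)1} = x,
    # where (x) denotes the Kronecker product
    return reduce(np.kron, [np.asarray(x)] * n)
```

For example, `kron_power([1, 2], 2)` gives `[1, 2, 2, 4]`, the Kronecker product of the vector with itself.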
null
null
null
null
null
null
Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context
Accept (poster)
Summary: Authors propose to use static analysis to guide the LLM decoding process to improve generation of code that may not be available in the context or training data. Authors show that their approach improves identifier generation compared to the baseline and, in certain cases, compared to larger models. Strengths: - Authors propose static-analysis-driven LLM decoding, which improves code generation (identifier generation) in situations when relevant code is not part of the context or training data - Using static analysis to drive decoding, rather than adding additional code information to the prompt/context, preserves the context for other needed information while improving identifier generation - Authors also create a PRAGMATICCODE dataset and DOTPROMPTS testset - Monitor guided decoding shows significant improvement vs the same model without MGD. - Authors also show that MGD is beneficial even when prompt augmentation is used. Weaknesses: - Unlike prompt/context augmentation, the MGD approach can only improve output in specific cases where static analysis provides additional information. Although the paper shows significant improvement on the authors' PRAGMATICCODE dataset and DOTPROMPTS testset, this testset is limited and geared towards identifier completion. Authors do not show a comparison on a general test set where identifier completion may be a small part of accuracy. On such a testset (and in real-life usage), the MGD approach may not contribute much improvement. This would have to be known for real-life implementations. I have read the author’s rebuttal. Authors have addressed my concerns by explaining the DOTPROMPTS testset and its metrics. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the cost/overhead of static analysis during decoding? How much does this slow down the result generation? - In Table 1, could you explain the row ordering? Why CG is mixed with CG-X-MGD, but for SC, all SC-X-MGD lines are at the end of the table? 
I have read the author’s rebuttal. Authors have addressed my concerns by providing cost/overhead of static analysis and explaining the Table 1 row ordering. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be good to have a longer discussion of static analysis limitations. What static analyses can be applied and how costly they could be? As implemented MGD can only be applied for identifier completion. It would be good to know what other static analyses could be used and for what tasks. I have read the author’s rebuttal. Authors have addressed my concerns by further discussing static analysis techniques. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > this testset is limited and geared towards identifier completion ... Authors do not show a comparison in a general test set where identifier completion may be small part of accuracy. We would like to clarify that each testcase in DotPrompts is obtained by identifying a dereference location in a method having at least 7 lines of source code, and the task for the model is to generate the rest of the method after the dereference (line 70-75 in Appendix). In DotPrompts, on average, the number of lines of code in the ground truth completion of a testcase is 12.7, with the 75% quantile being 15 lines of code. Further, among the evaluation metrics, all metrics except NIM evaluate the complete method-level generation by the model. These metrics are Compilation Rate (CR), Identifier Sequence Match (ISM) and Prefix Match (PM). As shown in our results, SOTA LMs struggle with the problem of generating type-consistent code, and our technique ensures type consistency without hurting (in fact improving) the match with the ground truth as measured by ISM and PM. Unlike contest-style benchmarks, for example, HumanEval and CodeContests, we target the pragmatic scenario encountered by real-world developers in IDEs, where they are in the middle of implementing a method in the context of a repository, rather than solving a standalone algorithmic problem. > What is the cost/overhead of static analysis during decoding? How much does this slow down the result generation? We acknowledge the importance of measuring the performance impact of MGD in real-time code generation. On average, the decoding time overhead of MGD compared to the same model without MGD is a modest 1.83x (section G in Appendix). A tighter coupling with the Language Server Protocol architecture can further reduce this overhead. > It would be good to have a longer discussion of static analysis limitations ... As implemented MGD can only be applied for identifier completion. 
It would be good to know what other static analyses could be used and for what tasks. We posit MGD to be a general framework, which inherits the strengths and weaknesses of the underlying static analysis techniques. MGD can be applied to many coding scenarios, like usage of the correct number of arguments to methods and generic types, correct use of named-constant values, valid class instantiations, or to enforce richer code constraints like typestate or session-type validity. All of these require different static analysis techniques, but can be captured by the MGD formalism. We have discussed these in the common response and will include them in the paper, including the experimental results on MGD-for-Rust case studies. However, to extend MGD to more sophisticated properties, such as general preconditions and post-conditions, advanced constrained decoding methods (including backtracking) might be needed. This is an exciting direction for future work and we will mention this in the main paper. > In Table 1, could you explain the row ordering? Why CG is mixed with CG-X-MGD, but for SC, all SC-X-MGD lines are at the end of the table? We thank the reviewer for highlighting this and realize that readability can be improved following your suggestion, and we will make the changes accordingly. At the time of writing, the rationale had been that each of the CG-X configurations pertained to different models (350M, 2B, etc.), whereas all of the SC-X configurations pertained to the same model, with augmentation in the space of FIM modality or prompting, and we resolved to keep augmentations of the same model together, while separating different models. We will make the necessary changes. --- Rebuttal Comment 1.1: Title: Thanks to the authors Comment: Thank you for addressing my questions and comments. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging that our response has addressed their questions and comments.
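To make the mechanism discussed in this rebuttal concrete, a toy sketch of the logit-masking step that monitor-guided decoding performs at a trigger point. This is not the authors' implementation: it assumes a vocabulary of plain strings and a monitor state holding type-consistent member names, and the function name `mask_logits` is ours:

```python
import math

def mask_logits(logits, vocab, state):
    # suppress every token inconsistent with all allowed completions:
    # a token survives if it is a prefix of some member of `state`,
    # or some member of `state` is a prefix of it
    out = list(logits)
    for i, tok in enumerate(vocab):
        if not any(s.startswith(tok) or tok.startswith(s) for s in state):
            out[i] = -math.inf
    return out
```

With state `{"withIp", "withPort", "newServerNode"}`, only tokens compatible with those member names keep finite scores, so sampling can only begin a type-consistent identifier.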
Summary: The paper proposes a framework that modifies the output logits of a language model using a monitor-guided decoding approach. The monitor consists of type-guided method invocations across the repository obtained via static analysis tools and heuristics to update the contents of the monitor along with when to trigger it. The authors show performance improvements when their approach is used in conjunction with different LMs and when augmented with different prompt augmentation techniques. Strengths: - This paper is in line with some recent works that combine the strengths of static analysis tools (that can help incorporate domain knowledge and that are well-studied) with LLMs (that have inbuilt world knowledge). - The paper is clear in writing and easy to understand. - The experimental results are convincing. Weaknesses: - My main concern with the paper is the scope of its applicability to other settings. Even though the authors describe their framework as a general framework in Section 2, they have shown results in a very narrow setting of generating identifiers informed by type constraints. In the paper, the authors have used a pre-condition corresponding to the occurrence of the token "." which only represents method invocation in Java. Also, I do not see how the monitor will work in its current form when this pre-condition is made general, for example, say when the user hits a new line. It is not clear to me how this framework can be adapted to a more general code completion setting that is free of the notion of a pre-condition or to applications other than code completion (such as bug repair) or even to a different programming language where it is difficult to define this pre-condition or the state corresponding to the monitor. I would like a discussion of the generalization capability of MGD to complex settings with concrete examples and experiments. - I didn't see an analysis of the time and memory overhead due to MGD in the main paper. 
As per my understanding, this can be significant given that the static analysis tool is triggered at each invocation point and no form of caching is done to save time. I would like to know the time taken by MGD to generate an accurate prediction and see whether it is practical from the point of view of deploying it in conjunction with a real-time end-user application. This factor becomes increasingly important since the authors are performing repeated sampling to obtain gains. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 223-224: "For our experiments, we use n = 6 independent trials. For a budget of k ∈ [1, n] samples, we compute the aggregate score score@k..". Does this mean that you perform n forward passes through the LLM or sample n times? If the former, is the input to the LLM same during these passes? - It is not clear to me as to what is the stopping criteria used for decoding. Can it generate multiple lines of code following the "." or is it a single line of code? - Line 172-173: "The update function removes the members in s that are not prefixed by xn+1, and those prefixed by xn+1 are updated by pruning the prefix string xn+1." I didn't quite understand this. - Line 206: " ..to adapt RLPG to DOTPROMPTS". Does this mean that RLPG was trained on DOTPROMPTS? Can you describe the process of adaptation and how the repository contexts selected by RLPG were incorporated with SantaCoder? - Minor formatting issues: (a) The sentence is not clear at the end of the caption of Figure 1; (b) Extra spaces in between in the abstract. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Both limitations and potential negative societal impact are mentioned. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Main concern: scope of applicability to other settings. Authors describe framework as general ... have shown results in narrow setting of type constraints As discussed in the common response, MGD is generalizable across programming languages, coding scenarios, as well as different static analyses. We emphasize that compilability is an important desirable characteristic of code, and we demonstrate that LMs across parameter scales struggle with generating compilable code. We show that generation of type-consistent dereferences plays a significant role in improving the compilability and match with ground truth of code generated by LMs. > "." as precondition ... only represents method invocation in Java. How monitor will work when pre-condition is made general. How framework can be adapted to general code completion setting free of notion of a pre-condition Our precondition check (pre) is syntax driven, which in general can be thought of as syntactic pattern matching. These checks can be made robust to whitespace such as newlines. While we use `"."` as the precondition trigger for our monitor, other triggers are possible, for example, `"->"` and `"::"` in C++, `["new", " "]` or `["case", " "]` in Java, etc. These can be instantiated with a simple change to the precondition check (pre). It is a language-specific implementation that can be easily changed. > Application other than code completion (such as bug repair) or a different PL ... Discuss generalization capability of MGD to complex settings MGD is agnostic to the downstream coding application, and can be used in all coding scenarios where LMs are used generatively. For example, consider the development scenario of code refactoring, where the repositories are in a transient state. MGD can be useful to have the LM generate code using API names with the latest updates to the codebase. 
Further, we have discussed several generalization aspects of MGD in the common response and will include experimental results on the same. > Time taken by MGD to generate ... is it practical for deploying in a real-time end-user application Yes, MGD is practical for real-time IDE deployment. Our implementation is based on the Language Server Protocol; language servers are highly optimized implementations designed specifically for real-time IDE usage, and we inherit those optimizations. Specifically, we find that the end-to-end time overhead of generation with MGD is a modest 1.83x on average compared to a model without MGD (appendix G). > Memory overhead of MGD Language servers used for MGD are already a part of IDEs like VSCode and Sublime Text, and are loaded when a file of a particular language is opened. The instantiated monitor is a very thin client bridging the LM and the language server, and is not expected to have any additional memory overhead beyond the language servers. > Since the authors are performing repeated sampling to obtain gains We clarify that we do not perform repeated sampling to obtain gains. MGD can enhance the quality of generation even with a single sample. Following the common practice of evaluating models with different sampling budgets, we allow each model to sample multiple generations and compare the models across different sampling budgets. Our results show that MGD helps across all budgets (in [1,6]), including a single sample, for every model considered. The cost of sampling a single generation from larger models is higher than that of smaller models. Thus, it might be feasible to sample multiple generations from a smaller model at lower/comparable cost than generating a single generation from a larger model. It is in this context that we highlight that selecting from the best of 3 samples of a 1.1B model (SantaCoder) with prompting and MGD, on average, has a better prefix match with ground truth than the much larger 175B model (text-davinci-003). 
This is in addition to showing that every model improves with MGD given the same sampling budget, including for one sample. Please let us know if further explanation about our setup is needed from our end.

> Q1: Line 223-224 ...

Given a testcase from DotPrompts, we use nucleus sampling with a top-p value of 0.95 to generate $n=6$ samples using the same prompt (Lines 93-95 in Appendix).

> Q2: Stopping Criteria ...

Yes, the LM is required to generate multiple lines of code following the dereference, until end-of-method, with a budget of 512 tokens (on average, the ground truth completion of a testcase in DotPrompts is 12.7 lines of code, and the 75% quantile is as high as 15). The decoding terminates when a closing brace that matches the method's opening brace is generated or the generation budget is exhausted.

> Q3: Line 172-173 ...

We explain this with an example; kindly refer to Figure 4 in the Appendix, which shows the decoding steps for the motivating example. Consider the transition from state $s_1$ = `{"withIp", "withPort", "newServerNode"}` to state $s_2$ = `{"Ip", "Port"}`. Following one step of Monitor-Guided Decoding at state $s_1$, the token `with` is sampled. Next, the `update` function removes all the strings in $s_1$ that cannot be generated following `with`, so `newServerNode` is removed. For the remaining strings, `update` truncates the part that has been sampled, so `with` is removed from `withIp` and `withPort`, leaving $s_2$ = `{"Ip", "Port"}`. We will improve the description in the paper.

> Q4: RLPG ...

RLPG considers a single-line completion task (and hence a single-line hole), whereas DotPrompts is a multi-line, method-level generation task (and hence we consider the method suffix following an object dereference as a hole). We make the necessary changes to adapt the retriever for the change in hole granularity.
As the RLPG model was trained for Java, and the RLPG paper shows performance generalization to new repos, we reuse their officially released model checkpoints. We thank the reviewer for also suggesting formatting improvements, and we will make the necessary changes.

---

Rebuttal Comment 1.1:

Comment: Thank you for your thorough response! I am happy with the explanations provided by the authors regarding the generalization and applicability of MGD as well as the Rust experiments. I would say that setting the pre-condition for each language and thinking about what trigger tokens to use still requires some effort. For future work, I would encourage the authors to think of ways of removing this constraint. One way could be to let a classifier predict what trigger tokens to use for each language based on important keywords and the use-case. I would also recommend the authors include a discussion about the stopping criteria, sampling budget, and memory/latency overhead in the main paper.

RLPG: Can you please give details of how the RLPG retriever was adapted for multi-line completion?

I have increased my score to 6 after reading the authors' response.

---

Reply to Comment 1.1.1:

Comment: We are happy that our response addressed the reviewer's concerns. Thank you for revising the score and for the suggestions for future improvements. We will update the main paper as suggested by the reviewer.

> RLPG: Can you please give details of how the RLPG retriever was adapted for multi-line completion?

The RLPG implementation masks a single line, whereas we mask all the lines following the object dereference (to prevent ground-truth leakage into the RLPG-generated prompt). In our implementation, this is done through a simple wrapper around the released RLPG codebase. As the RLPG model was trained for Java, and the RLPG paper shows performance generalization to new repositories, we reuse their officially released model checkpoints. We will release our code.
We note that our results show that MGD is complementary to prompt augmentation techniques like RLPG (section 4.2 in the paper). We would be happy to provide any additional details.
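The prefix-filtering `update` step walked through in the Q3 answer of this thread can be sketched as follows (an illustration, not the authors' code): the monitor state is the set of identifier suffixes still consistent with the tokens decoded so far.

```python
def update(state: set[str], sampled_token: str) -> set[str]:
    """Keep only identifiers consistent with the sampled token,
    truncating the part that has already been generated."""
    return {ident[len(sampled_token):]
            for ident in state
            if ident.startswith(sampled_token)}

# Mirrors the Q3 example: sampling "with" drops "newServerNode"
# and strips the matched prefix from "withIp"/"withPort".
s1 = {"withIp", "withPort", "newServerNode"}
s2 = update(s1, "with")  # {"Ip", "Port"}
```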
Summary: This paper proposes to use the output of a code static analysis tool, of the sort used in IDE code completion tools, to constrain the output of an LLM and improve code generation. Concretely, the paper focuses on type consistency in object dereferences for Java code. When an LLM is called to generate a use of an identifier from a class (i.e., using the '.' operator in Java), the static analysis tool is invoked so that the LLM may only generate identifiers accessible in that class which are consistent with the current typed context. This is implemented by masking the probabilities in the LLM to zero out any tokens inconsistent with the set of valid identifiers returned by the analysis tool, and sampling from these constrained probabilities. The paper compiles a new dataset of open-source Java repositories with dependencies and development environments (to allow applying the static analysis tool), and a typed-dereference code completion task on this dataset. The method consistently improves the ability of black-box code LLMs (CodeGen and SantaCoder) to produce code that successfully compiles and matches the ground truth.

Strengths: The approach is simple but well-motivated, and can be applied on top of black-box code LLMs. I could easily see this approach being deployed effectively in IDEs, if it isn't already. As a type of constrained decoding technique, it should easily be compatible with approaches that e.g. augment the model prompt. While the experimentation was limited to a single task and dataset, the evaluation on it was fairly thorough, evaluating two different base LLM families (CodeGen and SantaCoder) with FIM and non-FIM modes for SantaCoder, and comparing to reasonable baselines. The approach demonstrates solid improvements across metrics, with large improvements in compilation rate and next-identifier match.

Weaknesses: The contribution felt a bit thin to me for a NeurIPS paper.
I think that the general idea of using monitors is well-motivated and potentially widely applicable, but here it is only demonstrated on a single task, which IMO is particularly well-suited to the approach --- dereference constraints should really narrow down the space of compilable identifiers that the LLM can choose from, and the largest improvements from the method are on compilation rates and single-identifier match. I think the paper would be much stronger if it showed that the approach can also be effectively applied on another task which affords static analysis, ideally one that LLMs are a clearer fit for (i.e. generating longer-form code, such as classes, while ensuring the generated code can compile), or one of the ones mentioned in the Future Work. This might require more sophisticated methods of integrating the static analysis constraints into the LLM decoding procedure, beyond the simple prefix-consistency filtering used here.

I felt that the baselines could be improved. One missing baseline that I think is important would just choose randomly from the set of possible identifier candidates produced by the static analysis tool, without using an LLM at all. This seems like it could do pretty well on CR (although it's unclear to me how much code beyond the next identifier the task requires generating, see below), and potentially also on NIM (it would at least give a sense for how many identifiers are output by the tool). I was also pretty unclear on what the classExprTypes baseline is doing; see questions below.

The writing of the paper was somewhat unclear, and I had trouble resolving a number of important details, in particular about the experimental setup. The dataset and new task definitions could potentially be a contribution, but many details about them were unclear; see questions below.

--- Update after response ---

The author response largely addressed these concerns; see comment below.
Technical Quality: 3 good

Clarity: 2 fair

Questions for Authors:

Q1) What static analysis tool is being used? Is it from some IDE (the discussion mentions Eclipse and Visual Studio), and if so which one?

Q2) What code does the model need to generate in the DotPrompts task, after the post-dereference location? Is it the rest of the method? Since this is a new dataset, it would help to provide some stats on how long the ground-truth completions are, to get a sense for their difficulty.

Q3) I didn't understand the classExprTypes approach from the explanation in lines 202-205. What is being included in the prompt -- type definitions, or methods/expressions that have matching types, or something else?

Q4) It felt odd to me to not apply either MGD or the baseline methods to the text-davinci-003 model if I understood right; could some explanation be given for this (API costs)?

*Other comments* (not necessary to respond to in response)

- Clarify what sampling strategy is being used (e.g. top-p with a temperature? what hyperparameters?)
- I think the definition of pass@k in 225 is wrong -- it's the expected # of times that >=1 success occurs in the list of k candidates.
- Clarify how FIM interacts with truncation to fit within the context window.
- The description of the metrics in 211-221 was unclear to me, in particular how "ordered set" deals with repeated identifiers.
- The plots in Figure 3 were hard to read, with all lines being red.
- Figure 1 was a bit confusing --- do both text-davinci-003 and SantaCoder generate the same code (in red, with X)?
- "Basic concepts and notation" would be clearer if it were tied to the motivating examples, e.g. give a concrete example of a property (compilation?)
- Lines 154-173 spend a lot of space describing the bookkeeping necessary due to vocabulary mismatches, but it's pretty wordy and I'm worried it might not make things clearer to someone who doesn't already have a sense of how this would need to be done.
I think this could be moved to the appendix, or perhaps made more precise with an algorithm block (possibly in the appendix).

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good

Presentation: 2 fair

Contribution: 3 good

Limitations: The paper should do a better job of indicating that the evaluation of the proposed monitor framework is pretty limited in the current work, evaluating only on a single task (and a single dataset). I'm worried that the token-level constrained decoding presented here might not work well when combined with more complex static analysis settings.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> Q2) ... is it rest of the method?

Yes, the task is to generate the rest of the method, starting from the dereference location. Since the methods in DotPrompts consist of at least 7 lines of source code, a typical completion consists of multiple lines (Appendix line 75). On average, the ground truth completion of a testcase is 12.7 lines of code, and the 75% quantile is as high as 15 lines of code.

> Show applicability of MGD on task LLMs are a clearer fit for (generating longer-form code ... while ensuring generated code compiles)

Among the evaluation metrics, all except NIM -- Compilation Rate (CR), Identifier Sequence Match (ISM), and Prefix Match (PM) -- evaluate the complete method-level generation. The significant jump in compilation rate in our results demonstrates that MGD guidance consistently improves ground-truth match for longer-form code generation, while significantly improving the compilability of the generated code.

> experiments limited to single task & dataset, evaluation thorough ... different LLM families with FIM and non-FIM ... reasonable baselines.

We thank the reviewer for recognizing the thoroughness of our evaluation. We would like to mention that DotPrompts was sourced from a large number of real-world repositories, with a long-form code generation task. While the metrics NIM and ISM capture the specific scenario of type consistency, PM and CR are method-level metrics, used commonly in the literature, evaluating a broad range of real-world coding scenarios and properties.

> Paper would be stronger if showed approach can apply on another task that affords static analysis, or ones mentioned in Future Work

Thank you for this suggestion. In the common response section, we demonstrate a concrete instantiation of MGD on a different programming language (Rust), utilizing 2 different static analyses (typestate & session types), both of which are mentioned as future work.
We will include these scenarios in the paper. We also show a scenario of joint monitoring by combining multiple static analyses together (also mentioned in future work) to ensure generation of type-correct method calls with the right number of arguments passed (please refer to the common response).

> How many identifiers are output by the static analysis?

On average in DotPrompts, the static analysis returned 44.86 identifiers, with a median of 25 identifiers.

> missing baseline: choose randomly from possible ... identifier candidates

We implement the proposed baseline over TD-3 and evaluate it on a subset of DotPrompts with 533 test cases across 77 repos. Results are reported in **Table 1 and Figure 1 (TD-3-Random) in the attachment**. We observed a significant decrease across all metrics, likely due to random sampling's inability to capture the intent implicit in the context, which the LLM captures.

## Questions

> Q1)

We use Eclipse JDT.LS, a Language Server for Java. It is an IDE-agnostic tool used to provide Java support in several IDEs (Appendix lines 84-92). As our implementation builds upon the Language Server Protocol, instantiating MGD for other languages can be achieved by changing the Language Server from Eclipse JDT.LS to another Language Server, as we did for the Rust PL discussed in the common response.

> Q3)

Given a testcase to complete method M within class C, we mask out M from C, run a static analysis on the remaining contents of C to identify all possible expressions in it (for example, dereference expressions, function calls, concatenation, etc.), and list the types of all such expressions. We then identify the files where each of these types is declared, and concatenate the content from those files, truncating to fit the allocated context budget.

> Q4)

We thank the reviewer for the suggestion and implement MGD for TD-3. Detailed results are in **Table 1 in the attachment** and discussed in the common response.
### Other Questions

> What sampling strategy is used ...

We use nucleus sampling, with a top-p value of 0.95, generating 6 samples: 1 each at temperature 0.2 and 0.4, and 2 each at temperature 0.6 and 0.8 (Appendix line 93).

> How FIM interacts with truncation to fit the context window

We assign 20% of the prompt token budget to ClassExprTypes and 50% to FIM, truncating from the left for the standard and ClassExprTypes prompts, and from the right for the FIM prompt. In the case of joint FIM and ClassExprTypes, we assign a budget of 20% to ClassExprTypes and 40% to FIM (Appendix lines 95-102).

> ... how "ordered set" deals with repeated identifiers

We thank the reviewer for suggesting an improvement in the writing of the evaluation metrics. We are using "ordered sequences" and not "ordered sets"; we will update this in the writeup. Since we are using sequences, repeated identifier names are also matched. For example, if the ground truth identifier sequence is ["withIp", "withPort", "withIp", "withIp"], and the model generates the sequence ["withIp", "withPort", "withIp"] or ["withIp", "withPort", "withIp", "withPort"], then it gets a score of 3/4; but if it generates ["withIp", "withPort", "withIp", "withIp"], then it gets a score of 4/4.

> Fig 1 ... text-davinci-003 and SantaCoder generate the same code?

Yes, both text-davinci-003 and SantaCoder generate the same code present in the red box with X.

> "Basic concepts and notation" ...
> Definition of pass@k ...
> The dataset and new task definitions could potentially be a contribution ...

We thank the reviewer for their suggestions and for highlighting the dataset as a contribution. We will incorporate your valuable feedback. We will:
1. Adapt the discussion in section A of the Appendix to be clearer and fit with the "Basic concepts and notation" section of the paper.
2. Rectify the wording on the pass@k metric.
3. Rectify the dataset section to explain details related to the dataset, including relevant statistics like the typical length of testcases, as reported here and in Appendix section B.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the extremely thorough response, which addressed my major concerns! The Rust experiments are a definite strength that help show the generality of the approach. I'm also much more convinced that the approach can be applied to a broader range of tasks given the clear examples of trigger words and the LSP in the general response. It was helpful too to clarify that the Java task does involve method-level completion, and to show that the method outperforms the random identifier baseline and gives improvements on TD-3.

I still have a bit of a concern that the constrained decoding method used might not work well when trying to integrate MGD into tasks where the static analyzer constrains the tokens a substantial distance into the generated output (if I understand right, the Java tasks constrain at the beginning of the generated output, and the Rust setting seems to involve short outputs), but this could likely be addressed through a backtracking or beam-search-like approach.

I've raised my score to a 6 (from a 4).

---

Reply to Comment 1.1.1:

Comment: We are glad that our response addressed the reviewer's major concerns and thank the reviewer for revising their score accordingly. We agree with the reviewer that advanced decoding schemes using backtracking, beam search, or look-ahead could be combined with MGD, particularly to target richer constraints. It is an exciting future direction to pursue. Thank you!
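One plausible positional scoring consistent with the identifier-sequence examples discussed in this thread (an illustrative sketch, not necessarily the paper's exact metric) is to count position-wise matches against the ground-truth sequence:

```python
def ism_score(ground_truth: list[str], generated: list[str]) -> float:
    """Fraction of ground-truth positions whose identifier is
    reproduced at the same position in the generated sequence.
    Illustrative reconstruction from the rebuttal's examples."""
    matches = sum(1 for gt, gen in zip(ground_truth, generated) if gt == gen)
    return matches / len(ground_truth)

gt = ["withIp", "withPort", "withIp", "withIp"]
assert ism_score(gt, ["withIp", "withPort", "withIp"]) == 3 / 4
assert ism_score(gt, ["withIp", "withPort", "withIp", "withPort"]) == 3 / 4
assert ism_score(gt, ["withIp", "withPort", "withIp", "withIp"]) == 1.0
```

Because the comparison is positional over sequences rather than set membership, repeated identifiers are matched individually, as the rebuttal describes.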
Summary: Inspired by the fact that IDE static analysis helps in code writing, this work proposes an MGD approach that uses a monitor to guide the decoding of a code language model. The authors start from the motivation that the code language model's generation process does not perceive the global information of the repository, leading to hallucination errors. The proposed MGD method uses static analysis to obtain the global context at repository scope instead of the local context, and uses this monitor as a stateful interface to the LM. Experimental results show that the MGD method can significantly improve the quality of Java code generation.

Strengths:
1. Compared to introducing global information through prompt engineering, language model architecture modification, incremental training, etc., MGD is a simpler and more effective way of limiting the output by utilizing constructed masks. However, such an idea is not innovative.
2. The experimental results given are good and can support some of their claims, especially the effectiveness of the MGD method for type-consistent identifiers.

Weaknesses:
1. Constructing masks to constrain the output space using grammar information, compilation information, and code definition information is classic in code generation and the Text-to-SQL task. The authors also mention in related work that Text-to-SQL works such as PICARD have a similar constraint strategy, but MGD is targeted at general-purpose programming languages and focuses on static analysis of repository-level context. However, MGD is not theoretically justified to support general-purpose programming languages, and may only be able to effectively limit output in some coding scenarios. In this work, MGD utilizes "." as a trigger to update the state; is it possible to find a single unique trigger for all coding scenarios? Are there scenarios that require multiple consecutive words as triggers, or spanning words as triggers?
Also, can MGD support nested programming languages? Again, this would require setting up special triggers and complex state management.

2. The experimental design, including the construction of the dataset and the evaluation metrics, appears to be specific to monitoring type-consistent identifiers. I understand that this is an instantiation of the MGD method, but my concern is that the MGD method only supports special instantiations, both in its methodological idea and in the experimental validation.

Technical Quality: 2 fair

Clarity: 3 good

Questions for Authors: As mentioned in the weaknesses, can MGD be extended to other programming languages and other coding scenarios?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 2 fair

Presentation: 3 good

Contribution: 2 fair

Limitations: The authors discuss the limitations of their work and give ideas for improvement.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> Authors mention ... MGD is targeted at general-purpose programming languages and focuses on static analysis through repository-level context. However, MGD cannot be theoretically justified to support general purpose programming languages, or even to effectively limit output in only some coding scenarios

We have demonstrated that MGD does support general-purpose programming languages (as shown by large improvements in both compilation and method-level match for Java, on a dataset consisting of real-world projects); however, we target specific properties that can be checked statically and enforced during decoding, like those supported by type systems. We have provided examples of additional coding scenarios and static analyses (typestate and session types) in the common response. We have also built MGD for the Rust programming language and evaluated it on a mini-benchmark, whose results (in the attached PDF) show that MGD can guide LMs in generating code that adheres to richer properties. We acknowledge that functional correctness specified through pre/post conditions and invariants may not be enforceable through MGD in its current form. We will make these limitations explicit in the paper, and thank the reviewer for pointing them out.

> Experimental design, dataset construction and evaluation metrics appear to be specific to type-consistent identifiers. I understand that this is an instantiation of the MGD method, but my concern is that the MGD method only supports special instantiations from the methodological idea and experimental validation.

We clarify that the task in DotPrompts is to generate the complete method following a dereference location, with the average ground truth completion in the testset being about 12.7 lines of code. Hence, the task is long-form code generation. Compilability is a necessary characteristic of any code and a general evaluation metric.
In this paper, we demonstrate that LMs across parameter scales struggle with generating compilable code. While Next-Identifier Match (NIM) and Identifier Sequence Match (ISM) capture the specific scenario of type consistency for MGD, Prefix Match (PM) and Compilation Rate (CR) are method-level metrics that are used commonly in the literature and are not specific to MGD. We show that generation of type-consistent dereferences plays a significant role in improving the compilability of code generated by LMs (relative improvement of 22.67% over SantaCoder and 22.81% over text-davinci-003 with MGD). While MGD is used in this paper to target a specific problem (generating type-valid dereferences), it is shown to improve match with the ground truth (PM) as well. In the common response, we have discussed the generality of MGD along several dimensions.

> In this work, the MGD utilizes the "." as a trigger to update the state, is it possible to find a single unique trigger for all coding scenarios

As correctly pointed out by the reviewer, different coding scenarios may require different triggers; however, the same MGD framework can support all of them. For example, for object instantiation in Java, the trigger could be `["new", " "]`, and for named `enum` values in switch statements, the trigger could be `["case", " "]` (details in the common response). In both cases, the function pre in the formalism can be modified to handle these triggers.

> Can MGD support nested programming languages? ... require setting up special triggers, and complex state management.

We believe the reviewer is referring to nested functions or nested classes (kindly correct us if our interpretation is wrong). Languages with advanced type systems that support these features can be leveraged by MGD. For example, Java and C# have such advanced features, and their Language Servers are able to provide type-consistent identifiers even with nesting.
Further, our datasets PragmaticCode and DotPrompts indeed contain such evaluation scenarios. The monitor instantiated in the paper already handles this scenario, including nested function calls and definitions. We will highlight these interesting cases with concrete examples in the Appendix.

---

Rebuttal Comment 1.1:

Comment: Thanks for the response from the authors. I would like to clarify that I agree that the MGD method is effective in its proposed programming languages and scenarios. What I would like to discuss with the authors is whether the MGD method can be reused for languages with other programming paradigms. I try to give two examples of SQL code generation here.

> SELECT AVG(salary) FROM employees WHERE salary > ( SELECT budget / 2 FROM departments WHERE name = 'Engineering' );

This one simple example seems to involve a number of questions. The nested query of the above SQL statement is supposed to return a scalar value; can the MGD method handle that? The MGD method does not seem to have nested state management; how does it return to the previous level after ending the nesting? "(" can be used as a trigger to limit the fields following the AVG aggregator and also to trigger a nested query; is there a conflict?

> SELECT T2.name, COUNT(*) FROM concert AS T1 JOIN stadium AS T2....

This is also a simple example. T2 is an alias; if MGD utilizes "." as a trigger, can it guide the subsequent generation to stadium columns, given that "stadium" itself has not yet been generated at that point?

Also, can the authors explain more about the innovativeness of the MGD method compared to past methods that used external structural information to guide code generation?

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for providing us the opportunity to discuss the generalizability of MGD to other programming languages, coding scenarios, and usage of richer static analyses.
Following their review, we have discussed different coding scenarios (for example, class instantiation), extension to other static analyses (for example, typestate and session types), and programming languages (for example, Rust) in our response. Thank you for acknowledging the effectiveness of MGD in the proposed programming languages and scenarios.

> The nested query of the above SQL statement is supposed to return a scalar value, can the MGD method handle that?

As discussed in the paper and rebuttal, MGD inherits the strengths and weaknesses of the underlying static analyses. In the case of SQL, several static analyses are available through the Language Server Protocol (with which our monitor interfaces). For this specific example, we created a schema for the `employees` and `departments` tables based on the example given by the reviewer. We then checked whether, given the table name, a static analysis could suggest schema-valid identifier names. Specifically, given the prompt `SELECT AVG(salary) FROM employees WHERE salary > ( SELECT departments.`, a static analysis for SQL returns `[budget:INT, name:VARCHAR]`, which can then be used to select only the type-valid identifiers. This shows that nested expressions in SQL can be handled by MGD using static analysis.

For the next scenario, prompting the language server with `SELECT T2.name, COUNT(*) FROM departments AS T1 JOIN employees AS T2 on T2.` returns `[name:VARCHAR, salary:INT]`; hence, MGD is able to handle aliasing of SQL variable names as well.

> how does it return to the previous level after ending the nesting?

This is a great question. Handling arbitrary nesting will require a stack-based monitor. We are implementing one to handle the case of monitoring for the correct number of arguments to methods (where each argument itself can be a nested subexpression) - a scenario described in the common response above.
Please note that with appropriate definitions of the `pre` and `update` functions, handling nesting is possible within the MGD formalism introduced in the paper.

> can the authors explain more about the innovativeness of the MGD method compared to past methods that used external structural information to guide code generation?

We broadly fall in the space of constrained decoding techniques; however, the use of rich static analysis (beyond structural properties, for example, typestate) for constraining code generation on-the-fly is novel. As discussed in the "Related work" section, structural information has been used for generating code in domain-specific languages like SQL, SMCalFlow and Vega-Lite. We target general-purpose programming languages (like Java and Rust), and bring the benefits of years of static analysis research and LSP-based IDE integration to code generation using LMs. Other works that also try to constrain general-purpose programming languages include GNN2NAG and NSG. Unlike GNN2NAG and NSG, we do not need specialized architectures/modifications or additional training to guide the models with the results of rich static analyses. This immediately makes our technique applicable to off-the-shelf LMs, which we think is the main, pragmatic innovation of our method. In the paper, through extensive experimentation, we have shown this to be true for multiple models (CG, SC, TD-3) of varying parameter scales (350M-175B). Further, MGD nicely complements prompt augmentation techniques such as RLPG, which have been specifically designed to capture repository-level context.
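The core constrained-decoding step discussed throughout this thread - zeroing out the probability of tokens inconsistent with the identifiers returned by the static analysis - can be sketched as follows (an illustrative simplification over a toy string vocabulary, not the authors' implementation):

```python
import math

def mask_logits(logits: dict[str, float], valid_suffixes: set[str]) -> dict[str, float]:
    """Keep only vocabulary tokens that are a prefix of some
    statically valid identifier suffix; mask the rest to -inf so
    they receive zero probability after softmax."""
    return {tok: (lp if any(s.startswith(tok) for s in valid_suffixes)
                  else -math.inf)
            for tok, lp in logits.items()}

logits = {"with": -0.5, "stop": -0.1, "close": -1.2}
masked = mask_logits(logits, {"withIp", "withPort"})
# Only "with" remains sampleable; "stop" and "close" are masked.
```

Sampling then proceeds from the renormalized distribution over the unmasked tokens, which is what makes the approach applicable to black-box LMs without retraining.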
Rebuttal 1:

Rebuttal: We appreciate the reviewers' constructive feedback and suggestions. We first answer some common questions and present individual responses later.

# Generalization and Applicability of MGD

We demonstrate MGD's applicability to more coding scenarios, programming languages, and different static analyses & properties. We will include this discussion in the paper.

## Generalization Dimensions

### Programming languages

MGD can be applied to most programming languages (PL). For instantiation, it requires static analyses that help infer and enforce semantic constraints on the code under development. Such analyses are available in IDEs (e.g., clangd for C/C++, Jedi for Python & Rust Analyzer for Rust) through the standard Language Server Protocol (LSP). We build a monitor as a thin client around LSP (Appendix lines 84-92). Supporting new PLs is easy and doesn't necessitate changes to the monitor's static analysis interface, as it is a generic LSP client. *We were able to develop an MGD implementation for Rust using Rust Analyzer in just two hours following reviewer comments* (discussed below).

### Coding scenarios

MGD can be extended to several coding scenarios, including those requiring spanning token sequences as triggers. Example scenarios realizable with the MGD formalism introduced in the paper are as follows:

1. **Class instantiation**: MGD can trigger on a 2-token span, `['new', ' ']`, to ensure only valid classes are instantiated. The trigger invokes a static analysis that identifies instantiable classes from the local & global context at the trigger location.
2. **Method call and arguments**: As an example of joint monitoring based on multiple static analyses, consider a monitor for 2 properties: $M_a$) type-consistent dereferences (from the paper) and $M_b$) correct number of arguments to calls. $M_b$ triggers on the final state of $M_a$ when the last token contains `'('` (start of a call).
A static analysis determines the number of arguments that the decoded method takes, and $M_b$'s states correspond to the number of arguments left to be decoded. The `update` function transitions on arguments to the current function call, accounting for nested parentheses. $M_b$ prevents the generation of a token with `')'` (end of the call) until the right number of arguments has been decoded. 3. **`switch` over `enum`**: A switch-case statement over an `enum` uses named enum values in `case <val>` to match. A monitor with `pre` triggering on the multi-token sequence `['case', ' ']` is used to generate valid named values. ### Static analysis techniques We now provide examples of deeper semantic properties (& static analyses) that can be used with MGD (concretely instantiated for Rust as discussed next): 1. Typestates [1], often expressed as finite state machines (FSMs), define valid sequences of operations that can be invoked on objects of a given type. For example, a type representing a file handle would have a typestate that disallows calling `read` after `close` has been called. 2. Session Types [2] ensure that messages between concurrent programs are sent and received in the expected order, following a specified protocol, and are specified as communicating FSMs. ## Concrete instantiation of MGD for Rust with Typestate & Session Type consistency To concretely show that MGD can be applied to other programming languages & use results from other static analyses, we instantiated MGD for Rust using Rust Analyzer [3]. A mini-benchmark of small code-completion scenarios that require conformance to typestate or session-type specifications for valid generation, along with results, is reported in **Table 2 of the attachment**. Despite using the strongest SantaCoder configuration from our paper, SC-FIM-classExprTypes, the generated code doesn’t satisfy the typestate and session-type properties (e.g., generating `stop()`, which is not valid in the state `Stopped` of `MusicPlayer`).
Use of MGD ensures generation of correct invocations. We will include these results in the paper. # Additional Results Following reviewer LEhe’s suggestion, we instantiate MGD over text-davinci-003 (TD-3), and add a baseline that randomly selects among valid identifiers (TD-3-Random). Detailed results are in **Table 1 & Figure 1 of the attachment**. Notably, TD-3-MGD achieves a compilation rate of **73.73% (22.81% improvement over TD-3)** and all other metrics see significant improvements as well. TD-3-Random suffers across all metrics, likely due to random sampling's inability to capture the intent implicit in the context, which the LLM captures. # Performance Overhead Language Servers are optimized for real-time use in IDEs. MGD integrates with Language Servers (which perform static analysis) and inherits optimizations such as caching and reuse of pre-computed values. In our implementation, MGD's average decoding time overhead is a modest 1.83x (appendix G). In a production setting, tight integration with Language Servers can reduce the overhead further. # Generality of Dataset and Metrics The DotPrompts test-set task requires generating the *complete method after the dereference location*, spanning multiple lines. Test cases are derived from methods with $\geq7$ lines of code (Appendix, lines 70-75). The mean length of the ground-truth code to be generated is 12.7 lines, with a 75% quantile of 15 lines, making DotPrompts a method-level code generation task. The LM is allowed to decode up to 512 tokens and MGD monitors every dereferenced identifier during decoding. All evaluation metrics except NIM assess method-level generation: Compilation Rate, Identifier Sequence Match, and Prefix Match. To achieve a successful compilation, the model not only has to generate a type-correct next identifier, but all subsequent method calls and field accesses must be valid as well. [1] Strom et al., Typestate: A programming language concept for enhancing software reliability.
IEEE TSE 1986 [2] Jespersen et al., Session Types for Rust. In WGP 2015 (pp. 13-22). ACM [3] “Rust Analyzer.” (online) Pdf: /pdf/46148719fb1628d40c313a1b827158bf29985a5b.pdf
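The typestate property discussed in this rebuttal can be made concrete with a small FSM. The `MusicPlayer` states and transitions below are assumptions invented for illustration (only the fact that `stop()` is invalid in state `Stopped` comes from the rebuttal); a real monitor would obtain the FSM from a typestate analysis via the language server.

```python
# Illustrative typestate FSM for a MusicPlayer-like type: which methods may
# be called in which state. The concrete states/transitions are assumed.
TYPESTATE = {
    ("Stopped", "play"): "Playing",
    ("Playing", "pause"): "Paused",
    ("Playing", "stop"): "Stopped",
    ("Paused", "play"): "Playing",
    ("Paused", "stop"): "Stopped",
}

def allowed_calls(state):
    """Methods a monitor would permit the LM to generate in `state`."""
    return sorted(m for (s, m) in TYPESTATE if s == state)

def step(state, method):
    """Monitor `update`: advance the typestate, rejecting invalid calls."""
    if (state, method) not in TYPESTATE:
        raise ValueError(f"{method}() is not valid in state {state}")
    return TYPESTATE[(state, method)]

# `stop` is masked out in state Stopped, matching the failure case above.
assert "stop" not in allowed_calls("Stopped")
state = step(step("Stopped", "play"), "stop")  # play() then stop() is valid
```

A decoding-time monitor would consult `allowed_calls` when the trigger fires (a method call on a typestate-carrying object) and mask all tokens outside that set.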
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
Accept (poster)
Summary: RL has a notorious sample efficiency problem, and the authors propose to tackle this problem by having RL agents read instruction manuals. The authors propose a novel framework called Read and Reward (R&R), which is composed of two modules. The first module is QA, which extracts and summarizes relevant information in the manual, and the second module is reasoning, which evaluates events in the game and provides appropriate rewards for the RL agents to learn with algorithms such as A2C and Agent 57. The authors found R&R to significantly improve the agent's sample efficiency in several challenging games, such as Skiing. Strengths: The paper is well-written and has the following strengths: * **Interesting approach**: as the authors mentioned, this is the first work that demonstrates improving RL performance by reading instruction manuals. Many pure RL methods suffer significantly in Skiing, so seeing other approaches that improve training is encouraging. * **Further contribution to NLP + RL**: the authors further explore the space of using NLP to improve RL agents. This line of research will help reduce the need to train agents from scratch and provide useful priors to the agent. * **No labeling required**: the amount of human labor required in the proposed framework is minimal — they only provide some generic questions, such as "what is the objective of the game?" Weaknesses: Despite its strengths, the paper has the following weaknesses: * **Ad hoc setting**: the setting seems ad hoc, stitching together various modules such as QA and object detection. A multi-modality model such as GPT-4 can probably serve as a general replacement for these modules, unlocking new capabilities, such as learning from image/video tutorials, in addition to instruction manuals. That said, there has not been an excellent open-source multi-modality model available.
* **Slightly insufficient evaluation**: the authors only tested their approach on four games, but maybe it's worthwhile to evaluate some other games, such as Montezuma Revenge, which RL algorithms usually struggle with. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm curious about the delayed reward schedule. Would it not artificially create a sparse reward problem? Misc: * The citation format seems slightly problematic. `However, in many real-world scenarios (Kolve et al., 2017)` feels more common and readable compared to `However, in many real-world scenarios Kolve et al. (2017);` * Lines 44 and 48 seem repetitive. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing that our proposed framework is interesting and promising. Here are our responses to the questions and concerns: W1 Multi-modality models: Thank you for recognizing the important future direction to train and incorporate multi-modal LMs. We have been actively experimenting with the latest VLMs. Although our internal experiments demonstrate that VLMs still lack compositionality and cannot ground interactions as accurately as our current framework, we are hopeful that the community will soon discover more reliable VLMs that are suitable for this task. W2 Evaluation: Currently, reliable ground-truth object labels only exist for Tennis, Ms. Pacman, and Breakout. However, we believe deeply in the impact of creating a general solution for all Atari games, and have been actively researching more general solutions for visual-language grounding. In addition, we have initiated experiments and have attached std for all algorithms except Agent57 in the rebuttal. We will update the table with full results for the final version. Q1 Delayed reward schedule: The delayed reward schedule introduces 'sparse reward' for both Read & Reward and baselines (similar to the original setting for the Skiing game). Therefore, the effect of Read & Reward is more visible. Misc: Thank you for pointing out minor improvements in our paper. We will correct them in the final version. --- Rebuttal Comment 1.1: Title: response Comment: I thank the authors for the clarification on the choices of Atari games.
Summary: The authors introduce the Read and Reward framework, where RL agents accelerate their learning of a new environment by interpreting user manuals. Specifically, the framework consists of a QA Extraction module that extracts and summarizes relevant information and a Reasoning module that evaluates object-agent interactions based on the information. The algorithm accelerates learning by using auxiliary rewards whenever the interactions are detected by the reasoning module. The method empirically boosts the efficiency and performance of many Atari games. Strengths: The topic in focus is very relevant because of the recent surge in the abilities of LLMs. The solution of utilizing LLMs to map auxiliary rewards in RL is novel. The performance of speeding up RL training for the four games investigated is apparent. The ablations performed are thorough and complete. Weaknesses: - document length and unstructuredness are overstressed in the paper as a method/contribution. However, a) many NLP papers have already addressed them, and the method employed in the paper is simple summarization; b) long-context models are already available to ingest more than the text needed to reason - the overall idea is quite nice; however, due to the difficulty of grounding knowledge in the environment, the paper only considered "hit" interactions and required unsupervised object detectors (which might be unreliable) OR game states, and the interaction is quite primitive in many scenarios and therefore cannot describe most tasks/useful interactions. - Table 1 needs error bars/std Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - have you tried using VLMs to do the reasoning instead of object detectors? This way, the set of interactions can also be expanded - does the scale of the aux reward matter in the games investigated? - is there an intuition on the speedup ratio (Table 3) vs. game?
(Breakout appears to really benefit from R&R) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Both limitations and broader impact sections are included in the draft Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the potential of our work. Here are our responses to the questions and concerns: W1 Document Length: We would like to reiterate that our *main contribution* is to sketch out a framework making use of LLMs for exploiting external textual data that future RL algorithms may build upon (also recognized by reviewers qAA2 and 7mzB), by making use of recent NLP techniques and long-context models. While recent works within a few months have somewhat transformed NLP and in-context reasoning, they are orthogonal to the main focus of this work, and long-context problems (for example, reading a paper) may still exist. Our RR framework is compatible with all recent LLMs and will likely open up more possibilities. W2 Grounding: As mentioned in Section 5, while there is a strict requirement on grounding, VLMs may open up more potential for better interaction tracking and widen the set of interactions. Although our internal experiments demonstrate that VLMs still lack compositionality and cannot ground interactions as accurately as our current framework, we are hopeful that the community will soon discover more reliable VLMs that are suitable for this task. W3 Table 1: Thank you for pointing out the insufficiency in our evaluation. We have initiated experiments and have attached std for all algorithms except Agent57 in the rebuttal. We will update the table with full results for the final version. Q1 VLMs: We have been actively experimenting with current VLMs. Although our internal experiments demonstrate that VLMs still lack compositionality and cannot ground interactions as accurately as our current framework, we are hopeful that the community will soon discover more reliable VLMs that are suitable for this task. Q2 Scale of Aux reward: The framework is quite robust to the scale of the auxiliary reward since reward clipping has been implemented by the RL algorithms. Any reward in the range of (2,50) should result in the same behavior.
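The reward-clipping argument in Q2 above can be checked in a few lines. The clip range [-1, 1] is the common Atari convention and an assumption here, not something stated in the rebuttal:

```python
# Sketch of why the auxiliary-reward scale barely matters: standard Atari
# RL pipelines clip rewards (commonly to [-1, 1]) before learning, so any
# positive auxiliary reward in (2, 50) reaches the agent as the same +1.
def clipped(reward, lo=-1.0, hi=1.0):
    return max(lo, min(hi, reward))

# Every candidate scale collapses to the same training signal.
signals = {clipped(r) for r in (2, 5, 10, 50)}
assert signals == {1.0}
```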
Q3 Breakout benefiting from RR: This is due to the mechanism of the game and the content of the instruction manual: an auxiliary reward is provided very densely, every time the paddle hits the ball. Therefore, the RL algorithms learn fastest in Breakout because the auxiliary reward is provided most often. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thank you again for the insightful review. We hope that our response has addressed your concerns. Please let us know if there are any additional questions or concerns.
Summary: This paper proposes Read and Reward (RR), a method to incorporate prior human knowledge about the environment to achieve performance and efficiency gains in RL environments. The paper instantiates RR in several Atari environments by reading the information from the instruction manual. A full end-to-end pipeline is demonstrated on Skiing and a pipeline without object detection is demonstrated on Tennis, Pacman, and Breakout. Strengths: *Originality*: While incorporating prior knowledge into RL training is a previously investigated topic, this paper provides an end-to-end pipeline and demonstrates its usefulness on Atari Skiing. The method introduced is more general than methods previously investigated. *Quality*: There are thorough experiments with ablation studies, such as training the QA module on a different set of instructions. *Clarity*: the paper is well-written. *Significance*: as RL agents are eventually deployed in real-world scenarios, it will be increasingly important to have them learn efficiently and incorporate prior knowledge, which is often in the form of text. This paper provides a complete demonstration of a method leveraging natural language data to improve performance. More generally, it sketches out a framework for exploiting external data that future algorithms may build upon. Such agents may be rapidly deployed to learn in an unsupervised fashion. Weaknesses: Although RR is an interesting proof-of-concept of automated reward shaping, it is unclear whether it scales to more complex environments or works on noisier sources of textual knowledge. In particular: 1. Does RR generalize to noisier sources of textual data? While the Wikipedia ablation is interesting, the data itself is rather clean and contains similar information. One interesting experiment to try might be with text-based RL environments, such as the Jericho [1] or Machiavelli [2] environments. 
Oftentimes these environments have publicly available [walkthroughs online](https://forum.choiceofgames.com/t/guides-for-all-games/15569/4), although they are often incomplete and noisy, as the instructions are intended for humans. Is RR able to extract the rewards and guide the agents in this setting? (I realize that this would be a significant undertaking so am not expecting it in the rebuttal) 2. The reward shaping only surrounds object detection. As a result, the QA prompt, even though it is hand-designed, still generalizes. However, there are several RL environments where the rewards are more heuristics rather than strict rules. For example, dialogue in the Diplomacy environment [3] does not have clear positive or negative reward but instead depends on the context. One could imagine gaining better understanding of dialogue in Diplomacy by leveraging knowledge on the internet, but it is unclear how RR might be applied in that scenario, as the reward shaping must center on text and the QA prompt likely could not be hand-designed. 3. It seems like it was quite difficult to train a bounding-box detector for most games. However, this would imply that RR's capabilities are bottlenecked by limitations of such bounding-box detectors. This might make RR difficult to scale to more complex environments with more ambiguous objects. [1] Hausknecht, M., Ammanabrolu, P., Coté Marc-Alexandre, & Yuan Xingdi (2019). Interactive Fiction Games: A Colossal Adventure. CoRR, abs/1909.05398. [2] Pan, A., Chan, J., Zou, A., Li, N., Basart, S., Woodside, T., Ng, J., Zhang, H., Emmons, S., & Hendrycks, D. (2023). Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark. ICML. [3] Meta Fundamental AI Research Diplomacy Team (FAIR) et al. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science 378,1067-1074 (2022). 
DOI:10.1126/science.ade9097 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Would it be possible to have more results on other Atari games? I understand that the bounding-box detection was challenging, but are Tennis, Pacman, and Breakout the only other Atari games that allow for ground truth calculation of the objects from the RAM state? 2. How robust is the automated reward shaping to the quality of the bounding box detector in Skiing? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: While some limitations are addressed, it would be interesting if there were additional discussion about the limitations of extracting information from natural language data. For example, humans' understanding of the physical world may not be adequately encoded in natural language. Does this limit the application scope of agents using the RR method? Additionally, if the LM's reasoning abilities are erroneous, will this irreparably break RR? Finally, it would be helpful to directly note that as RR is fundamentally a reward shaping method, it is difficult to apply it to situations where the reward itself is unclear. This happens often with delayed reward. In particular, RR would likely not improve performance on Go or Chess. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty and significance of our work. Here are our responses to the questions and concerns: W1 Generalization to noisier data sources: The current implementation of RR uses small language models (RoBERTa and Macaw), which already demonstrate some degree of robustness to noisy manual information in the official Atari instruction manuals intended for human readers (Section 3, lines 125~127). To improve performance, one could adopt a more powerful language model like GPT-4. W2 Limitation of reward shaping for dialogue games: Thank you for pointing out this important limitation. Our main contribution is a framework to transfer external knowledge to RL agents in games with well-defined observation and action spaces. It indeed takes more work to apply to specific scenarios like Diplomacy. W3 Bounding-box generation: This is indeed one of the main challenges for future work. We hope that advances in visual-language models (VLMs) could aid our progress. Q1 Additional Atari games: Currently, reliable ground-truth object labels only exist for Tennis, Ms. Pacman, and Breakout. However, we believe deeply in the impact of creating a general solution for all Atari games, and have been actively attempting to integrate recent visual-language models (VLMs). Although we do not have a working solution yet due to the lack of compositionality in VLMs, we believe that our approach will lay the groundwork for a future solution that reads the instruction manual and solves all Atari games. Q2 Robustness to bounding box detection error: Thank you for raising this important question. In practice, we find RR quite robust to bounding-box detection errors: although the bounding-box detector causes at least one observable error every 10 frames, almost all detection errors merely result in a missing automated reward.
Assuming uniform probability of missing reward for any interaction, the expectation of reward is still positive/negative, and therefore the RR framework should demonstrate a reasonable degree of robustness to noise in detectors. Limitations: Thank you for the insightful suggestions on scenarios where RR may not apply, and where LLMs may fail. We will include more discussions on scenarios where RR does not apply in the final version of our paper. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: While the method is a bit problem-specific, I still think it is an interesting demonstration of incorporating prior knowledge into RL environments. I would like to keep my score
Summary: This paper presents a novel method for Single-Agent Reinforcement Learning which utilises computer game instruction manuals to enhance learning efficiency and performance. The Atari game manuals are used to accelerate RL algorithms in learning to play four different games. The framework comprises a Question-Answer Extraction module and a Reasoning module. The QA Extraction module summarizes salient information from the manual. The Reasoning module evaluates in-game events using the extracted information from the manual and then assigns auxiliary rewards when such interactions are detected. These auxiliary rewards are provided to standard RL agents. The paper's results show improvements in both performance and training speed for various RL algorithms when aided by this novel framework. It is asserted that this is the first successful attempt to use instruction manuals in a fully automated framework for solving the Atari RL benchmarks. It is noted that even small open-source QA language models could effectively extract useful gameplay information from human-written documentation for RL agents. Strengths: The results in this paper are certainly extremely promising, and the ability to use large language models to create additional rewards from the instruction manual is very encouraging. That such methods are so much more sample-efficient is clearly going to be important in realms beyond simple game play. Weaknesses: The main, and very problematic, issue with this paper is the lack of robust evaluation. There are no statistics given, no averaging over random seeds provided, and no code made available. It is thus entirely unclear how much the results themselves can be trusted under scrutiny. Thorough and transparent evaluation is absolutely vital in all RL scenarios, and because this has been left out completely, this paper is, I believe, not ready for publication.
Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Without a full analysis of the variation in results over a large number of random seeds, how can the results be trusted? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: I believe that the limitations provided are fair. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing that our method is novel and the results are promising, and thank you for pointing out concerns about the reliability of our evaluation (W1, Q1). We would like to reiterate that our *main contribution* is to sketch out a framework making use of LLMs for exploiting external textual data that future RL algorithms may build upon (also recognized by reviewers qAA2 and 7mzB). The Read and Reward framework is independent of the 4 RL algorithms we evaluated. Our experiments (Table 1) involving 3 games and 4 different algorithms already demonstrate the robustness of our framework. Because our framework provides a dense reward for the RL agent, we observe consistent performance across different trials. For completeness, we will include additional experiments (attached to the rebuttal) involving 3 random seeds in the final paper. Our computational resources are limited, so we will not be able to finish the Agent57 experiments before the end of the rebuttal period. --- Rebuttal Comment 1.1: Title: No attachment Comment: There appear to be no revisions as yet. Clicking on the button on the rebuttal does not link to the paper with additional experiments. --- Reply to Comment 1.1.1: Comment: Hello, please refer to the common rebuttal for the table of results. We did not attach a pdf. Please note that we are not allowed to revise the paper in the NeurIPS rebuttal period. Please find the common rebuttal right below the submission on this page, or by searching for “On robustness of evaluation” in the browser.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments. We are encouraged by *all reviewers'* appreciation that our proposed framework achieves promising results (Reviewers 3Ei6, qAA2, LzHL, 7mzB). In addition, we are encouraged by acknowledgements for our contributions on 1) an important connection between the rapidly advancing LLM field and RL. (Reviewers LzHL, qAA2, 7mzB) 2) getting an end-to-end system together that clearly makes use of text data (Reviewers qAA2, 7mzB). ## Our contribution We would like to reiterate that our main contribution is to sketch out a framework making use of LLMs for exploiting external textual data that future RL algorithms may build upon (also recognized by reviewer qAA2, 7mzB). ## On robustness of evaluation Our experiments (Table 1) involving 3 games and 4 different algorithms already demonstrate the robustness of our framework on different tasks and algorithms. | | Tennis | Pacman | Breakout | | :----------- | :-----------: | :-----------: | :-----------: | | A2C | -23 (1.3) | 387.1 (66.2) | 2.1 (0.3) | | A2C + R & R | -5.2 (2.2) | 455.2 (63.2) | 4.5 (3.0) | | PPO | -23 (1.0) | 200.1 (65.4) | 1.9 (0.5) | | PPO + R & R | -8.1 (2.3) | 284.2 (67.3) | 10.2 (0.1) | | R2D1 | -23.0 (1.3) | 1999.2 (109.2) | 2.1 (0.2) | | R2D1 + R & R | -2.2 (3.1) | 3001.3 (203.2) | 10 (3.0) | To further address reviewer concerns, we have conducted experiments with 3 random seeds for each entry in the table and reported the average and (standard deviation of the experiments). Note that we were unable to attach results for Agent57 due to the fact that each Agent57 experiment takes on average 1 month to run on our machines. We will update the final version of our paper with the results in this table.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
From Comprehensive Study to Low-Rank Compensation: Exploring Post-Training Quantization in LLMs
Reject
Summary: This paper conducts a comprehensive analysis of various quantization methods for large language models (LLMs). Some interesting takeaways were shared; for example, activation quantization is generally more sensitive than weight quantization, and none of the current quantization methods can achieve the original model quality. Based on such insights, the paper also proposes an optimized method called Low-Rank Compensation (LoRC), which employs low-rank matrices to enhance model quality recovery with a minimal increase in model size. Strengths: - The motivation of the paper is solid. With the rapid development of LLMs, it is essential to study methodologies for deploying LLMs on more accessible hardware, where quantization is an important category of approach. Therefore, a comprehensive study of these methods is necessary. - The insights shared by this paper are helpful; e.g., it is interesting to know that activation quantization is generally more sensitive than weight quantization. - The proposed improvement based on the observation is reasonable. Weaknesses: - The proposed method is based on low-rank approximation, which can be viewed as a sparsification-based method. It is somewhat out of the scope of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible to open-source the code for reproducibility? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the importance of our research to LLM deployment and the relevance of our insights on quantization. We're pleased our proposed improvements resonated and welcome further feedback. *Q1:* The proposed method is based on low-rank approximation, which can be viewed as a sparsification-based method. It is somewhat out of the scope of the paper. *A1:* Thank you for pointing out the connection between our method and sparsification through low-rank approximation. Our primary intention was to leverage the benefits of such approximations to enhance the quality of quantization in LLMs. We believe that utilizing tools from related compression domains can provide innovative solutions, even if they originate from seemingly different approaches. Other quantization work also utilizes various methods to reduce the quantization error; e.g., [1] uses both int8 and fp16 to represent a single weight matrix. [1] Dettmers, Tim, et al. "LLM.int8(): 8-bit matrix multiplication for transformers at scale." arXiv preprint arXiv:2208.07339 (2022). ----- *Q2:* Is it possible to open-source the code for reproducibility? *A2:* Yes, we will release the code.
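For intuition, the LoRC idea discussed above (compensating quantization error with low-rank matrices) can be sketched with NumPy. The round-to-nearest quantizer, the rank, and the matrix sizes below are illustrative choices for the sketch, not the paper's configuration:

```python
import numpy as np

def rtn_quantize(W, bits=3):
    """Simple symmetric round-to-nearest quantizer (illustrative)."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def lorc(W, W_hat, rank=8):
    """Low-rank compensation sketch: approximate the quantization error
    E = W - W_hat with a rank-`rank` factorization, so the deployed weight
    is W_hat + U @ V at the cost of two thin matrices."""
    E = W - W_hat
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]  # thin factors

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_hat = rtn_quantize(W)
U, V = lorc(W, W_hat, rank=8)

err_q = np.linalg.norm(W - W_hat)                 # plain quantization error
err_lorc = np.linalg.norm(W - (W_hat + U @ V))    # error after compensation
assert err_lorc < err_q
```

Storing the two thin factors costs only 2 * 64 * 8 extra values versus the 64 * 64 weight matrix, which matches the "minimal increase in model size" characterization in the review summary.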
Summary: This paper analyzes post-training quantization (PTQ) techniques in large language models, exploring various schemes, model families, and bit precisions. The authors propose an optimized method called Low-Rank Compensation (LoRC) to enhance model quality recovery with minimal size increase. Strengths: 1. An evaluation and comparison of existing PTQ methods provide some insights for the community. 2. The paper is well-written. Weaknesses: Reading the paper provides a satisfying experience, but I must admit that my understanding of the LLM field is limited. Thus, my suggestions may be wrong; please directly point them out. 1. The points in Figure 1 are too dense and hard to distinguish. 2. Although the method is proposed for LLMs, it's also better to compare it with some traditional quantization methods on the ResNet-series. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors could refer to the Weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for finding our paper easy to read and understand. We appreciate your positive feedback. ---- *Q1:* The points in Figure 1 are too dense and lack recognition. *A1:* Thanks for the suggestions. We thank the reviewer for pointing out that Figure 1 makes it hard to differentiate between the different methods. We will do the following for the final revision: - (1) add higher-resolution images in the appendix for better visual quality; - (2) add tables for several bin sizes for easy reading. ---- *Q2:* Although the method is proposed for LLMs, it's also better to compare it with some traditional quantization methods on ResNet-series. *A2:* Thank you for the suggestion to compare our method with traditional quantization techniques on the ResNet series. We recognize the value of such a comparison in demonstrating the versatility and breadth of our method. However, - (1) Applying ResNet-series methods to LLMs might be challenging since most of them incur heavy compute cost and are time-consuming. Please see the literature Reviewer wnse mentioned. - (2) Our primary focus in this study has been on Large Language Models (LLMs) due to their unique characteristics and challenges in quantization (heavy compute cost and long runtimes). Integrating a comparison with the ResNet series on computer vision tasks would require extensive evaluation in that domain, which is beyond the scope of our current work. - (3) Also, the key component LoRC is used to maintain good accuracy while introducing a minimal memory footprint. CNNs are usually more compute-heavy, but their model size is relatively small. LoRC can be used as a simple add-on component for different quantization methods. As such, we applied it to ViT-Large on ImageNet (google/vit-large-patch16-224 from Hugging Face) using PTQ (particularly, we here use RTN). Table 1 shows the Top-1 accuracy results of per-row weight quantization with or without LoRC (rank=8).
As can be seen, there is a significant accuracy boost from LoRC. In particular, W2A16 with LoRC achieves even better accuracy than W3A16 with RTN. Table 1: The accuracy improvement from LoRC for the ViT-Large model. | Using LoRC | W16A16 (baseline) | W4A16 | W3A16 | W2A16 | |--------------------|-------------------------|-------------|-------------|-------------| | No | 82.878 | 82.642 | 68.858 | 0.126 | | Yes (rank 8) | N/A | 82.754 | 81.902 | 73.480 | --- Rebuttal Comment 1.1: Comment: I thank the authors for the explanation; all my concerns are addressed. Since I am not an expert in this field, I have decided to keep my score.
Summary: This paper studied post-training quantization methods for 4-bit weight quantization and W4A4 quantization. The authors further proposed Low-Rank Compensation (LoRC) to enhance model quality with low-rank matrices. Strengths: The paper is well-written and easy to follow. Weaknesses: The novelty is not very significant. Post-training quantization with finer granularity and zero-shift is not a new idea. The sensitivity analysis is conducted with one particular quantization method, and the conclusion should be conditioned on that quantization method; it is not general enough to conclude that PTQ exhibits the same behavior. I would suggest the authors compare different quantization functions, such as minmax/percentile, to deliver a more comprehensive conclusion. The accuracy improvement is marginal. Figure 1 didn’t show much improvement compared to the previous naïve baseline of RTN. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Have the authors tried other baseline quantization methods, e.g., minmax/percentile quantization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your comments and appreciate the opportunity to address your concerns. As we strive to maintain the highest level of integrity in our research, we welcome any feedback that will help us improve our work. --- *Q1:* The novelty is not very significant. PTQ with finer granularity and zero-shift is not a new idea. *A1:* Sorry for the confusion. We did not claim fine-grained quantization and zero-shift as contributions of our paper. Those help us conduct a comprehensive study of LLM quantization. The novel part of our work is described in Section 5, which provides a simple but effective way to further boost quantization performance. Also, we want to emphasize that the comprehensive study itself provides non-trivial value to the community for further exploration. --- *Q2:* The conclusion should be conditioned on that quantization method, but not general enough to conclude that PTQ exhibits the same behavior. I would suggest the authors compare different quantization functions, such as minmax/percentile, to deliver a more comprehensive conclusion. *A2:* Thanks a lot for the suggestions. We are not sure how minmax quantization differs from the baseline RTN used in the paper (RTN uses the min/max to perform the quantization). Please let us know if we misunderstand minmax quantization. - For percentile quantization, in computer vision it normally outperforms RTN on outlier quantization (particularly for activation quantization), as shown in the HAWQ series of work and their GitHub repositories ([1-3]) (RTN for weight quantization and optional percentile for activation quantization). However, for LLMs this has been shown to be the opposite. We give more details, backed up by our additional experiments, below.
- Also, we want to emphasize that fine-grained quantization (either block-wise or per-row/token-wise) is also an effective way to reduce the outlier effect in activation quantization, as demonstrated in Table 5 for activation quantization. - Finally, we want to note that percentile quantization for activations is usually not used together with dynamic token-wise quantization, as the initial goal of percentile quantization is to reduce the dynamic/outlier effect of the activations. Using both percentile and dynamic (per-token or block-wise) quantization would significantly increase the on-the-fly activation quantization cost. [1] Dong, et al. "Hawq: Hessian aware quantization of neural networks with mixed-precision." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. [2] Dong, et al. "Hawq-v2: Hessian aware trace-weighted quantization of neural networks." Advances in Neural Information Processing Systems 33 (2020): 18518-18529. [3] Yao et al. "Hawq-v3: Dyadic neural network quantization." International Conference on Machine Learning. PMLR, 2021. Additional experiments: As can be seen in the following table (Table 1), weight-only percentile quantization does not provide better accuracy than standard RTN quantization. For activation-only and weight-and-activation percentile quantization, the accuracy drop of percentile is much larger than that of standard RTN. We have spent some time trying to understand this phenomenon and find that the outliers of activations play a crucial role in preserving the accuracy of LLMs. A similar finding can be found in the SmoothQuant work (2211.10438, arxiv.org). In Table 5, SmoothQuant compares their results with Outlier Suppression (2209.13325, arxiv.org); one of the main differences between the two algorithms is that, besides both migrating activation quantization difficulty to the weights, Outlier Suppression also clips the activation range (similar to percentile quantization), and this leads to significant accuracy degradation.
Table 1: Comparison between percentile (0.1% and 99.9%) and min/max. For W4A8, percentile quantization is applied to both weights and activations. For W16A8 (W4A16), percentile quantization is applied to activations (weights). All the quantization results below use the configuration: row-wise weight RTN quantization and token-wise activation RTN quantization. | precision | quantization percentile | 1.3b | 6.7b | 13b | 65b | |-------|-------|---------|---------|---------|---------| | W4A16 | no | 19.77 | 13.44 | 12.09 | 11.52 | | W4A16 | yes | 23.27 | 14.58 | 13.96 | 11.74 | | precision | quantization percentile | 1.3b | 6.7b | 13b | 65b | |------------|-------------------------|---------|---------|---------|---------| | W4A8 | no | 21.21 | 14.81 | 26.34 | 84.41 | | W4A8 | yes | 7958.96 | 7452.49 | 7649.71 | 5338.85 | | precision | quantization percentile | 1.3b | 6.7b | 13b | 65b | |-------|------------|---------|---------|---------|---------| | W16A8 | no | 15.99 | 12.55 | 15.38 | 23.74 | | W16A8 | yes | 5002.91 | 8100.55 | 7619.81 | 5539.92 | --- *Q3:* The accuracy improvement is marginal. Figure 1 didn’t show much improvement compared to the previous naïve baseline of RTN. *A3:* Sorry for the confusion; due to the image size, it is hard to differentiate between the methods. A clearer conclusion can be found in Table 2/Table 4 (and their full versions in Table E.15/E.16) for RTN and Table 7 for LoRC. For OPT-6.7B (per-row/block-size 256) quantization, RTN’s PPL is 13.44/12.57 and LoRC’s PPL is 12.10/11.99. The improvements are 1.34 and 0.58, respectively. For OPT-66B (per-row/block-size 256) quantization, RTN’s PPL is 31.52/10.80 and LoRC’s PPL is 10.34/10.29. Note that the full-precision OPT-66B's PPL is 10.33; LoRC achieves almost no degradation. We will add a short discussion comparing our method and RTN in the final revision for easier cross-comparison. We will also improve the figures and the way we present our results.
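The trade-off behind these numbers can be illustrated on synthetic data (a hypothetical sketch, not the paper's evaluation code): min/max-range quantization versus 0.1%/99.9% percentile clipping on a distribution containing a few large outliers.

```python
import numpy as np

def uniform_quantize(x, lo, hi, num_bits=8):
    """Uniform quantization of x onto the clipped range [lo, hi]."""
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
x[:10] *= 100.0                                # inject a few large outliers

mm = uniform_quantize(x, x.min(), x.max())     # min/max range (RTN-style)
lo, hi = np.percentile(x, [0.1, 99.9])
pct = uniform_quantize(x, lo, hi)              # percentile-clipped range

inlier = np.abs(x) < 5
bulk_mm = np.mean((x[inlier] - mm[inlier]) ** 2)
bulk_pct = np.mean((x[inlier] - pct[inlier]) ** 2)
out_mm = np.mean((x[~inlier] - mm[~inlier]) ** 2)
out_pct = np.mean((x[~inlier] - pct[~inlier]) ** 2)
print(f"bulk MSE: min/max={bulk_mm:.4f} percentile={bulk_pct:.6f}")
print(f"outlier MSE: min/max={out_mm:.4f} percentile={out_pct:.1f}")
```

Percentile clipping makes the bulk of the values far more precise but destroys the clipped outliers; this matches the observation above that activation outliers are crucial for LLMs and is consistent with the large perplexity blow-up under percentile activation quantization.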
See our A1 to Q1 from Reviewer **iFFk**.
Summary: This work focuses on a systematic examination of various post-training quantization techniques in large language models. The experimental analysis includes comparisons of different model sizes, different numerical precisions, and quantization of only weights vs. activations. In addition, the Low-Rank Compensation method is proposed to enhance model quality recovery. Strengths: The research on LLMs is rapidly growing, but computational and memory capabilities are still limited when deploying huge models. Therefore, quantization is a must. The analysis presented in this work is definitely needed to advance LLM deployment in multiple use cases. The paper flows logically and is well structured. The proposed compensation method is interesting. Weaknesses: 1. Quantization may be input-data specific. The same model topology, when trained on different data, may behave differently. It’s mentioned that you “use the zero-shot validation perplexity (PPL) differential on three datasets, namely, Wikitext-2 [23], PTB [22], and C4 [27], before and after the quantization”; however, the results presented in the following tables don’t indicate which error was achieved on which dataset. Could you clarify? 2. Quantization may be operator specific. It would be interesting to list the operators present in both model topologies and identify layers that were especially sensitive to quantization. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please clarify the finding about the layer norm in the OPT family. Were all weights and biases 1 and 0 in all layer norm layers? Are you sure the model was initialized correctly? 2. Does your compensation technique require any fine-tuning? It wasn’t clear from the text. 3. How does your compensation technique compare to, e.g., LAPQ https://arxiv.org/pdf/1911.07190.pdf, where quantization error is used for layer smoothing in a similar way? 4. Does your compensation technique work in a layer-by-layer manner?
How can you ensure that error doesn’t accumulate across layers? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Listed limitations and future work directions are clear and make sense. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that you find our work significant & timely in advancing the deployment of LLMs. We appreciate your positive feedback on the logical flow & structure of our paper, and on our proposed compensation method. Please find our responses below: *Q1:* Results in tables are not clear *A1:* The results presented in the main text are the average over the three datasets. For detailed results, please refer to the appendix; e.g., the corresponding detailed results of Table 2 are given in Table E.1, as described in the caption. --- *Q2:* List operators & layers that are sensitive to quantization. *A2:* Thank you for the suggestions. We conducted experiments on OPT-1.3B & 6.7B. The structure of our tests was as follows: (i) sensitivity of four components (QKV, Attn-out, MLP1, MLP2), with results in Table 1; (ii) sensitivity of three layer groups (first/middle/final 1/3 of layers), with results in Table 2. Table 1. Results for W4A8 and W4A16 using RTN across various linear modules. Observations suggest that the QKV component has the highest sensitivity during quantization. Following QKV, MLP1 exhibits significant sensitivity, while Attn-out appears to be the least sensitive. The baseline for 1.3b is 15.44 PPL, and for 6.7b is 11.89 PPL. Note the values are the mean over the three datasets C4, PTB & Wikitext-2, consistent with the tables in our submission. | size | bits | QKV | ATT-out | MLP1 | MLP2 | |------|-------|-------|---------|-------|-------| | 1.3b | w4a16 | 16.25 | 15.52 | 16.24 | 15.82 | | 1.3b | w4a8 | 17.04 | 15.52 | 16.29 | 15.82 | | 6.7b | w4a16 | 12.29 | 11.92 | 12.26 | 12.01 | | 6.7b | w4a8 | 13.28 | 11.92 | 13.15 | 12.01 | Table 2. Outcomes of W4A8 and W4A16 using RTN on different layer divisions. The data indicate pronounced sensitivity to quantization in the initial one-third of the layers, which shows the worst perplexity. Conversely, the middle and final thirds don't follow a definitive pattern.
| size | bits | first 1/3 | middle 1/3 | last 1/3 | |------|-------|------|------|------| | 1.3b | w4a16 | 17.21 | 16.07 | 16.11 | | 1.3b | w4a8 | 17.52 | 16.10 | 17.54 | | 6.7b | w4a16 | 12.46 | 12.31 | 12.12 | | 6.7b | w4a8 | 12.80 | 12.52 | 12.17 | The above observed potential of mixed-precision quantization across different layers and components suggests a tailored approach might be more effective. Instead of a uniform quantization method, a customized strategy for each layer, based on its sensitivity, could optimize computational efficiency without major accuracy trade-offs. The additional experiments suggested by the reviewer pave the way for future research on formulating dynamic quantization algorithms and understanding the reasons behind these sensitivities. --- *Q3:* Clarify the LN in OPT *A3:* We double-checked our evaluation, and the initialization is correct. We further checked whether the LayerNorm layers of the OPT family are well trained using the following code snippet (due to the space limit, we only show one LN, but the rest are similar). We will release code for reproducibility. We mistakenly said the bias is not well trained for these models and will correct this in the final revision. The code snippet for OPT:
```
from transformers import OPTForCausalLM

model = OPTForCausalLM.from_pretrained('facebook/opt-6.7b')
for n, p in model.named_parameters():
    if 'layer_norm' in n:
        print(n, "end=:")
        if "weight" in n:
            # count weight entries that moved away from the init value of 1
            print(((p - 1).abs() > 1e-4).sum())
        else:
            # count bias entries that moved away from the init value of 0
            print((p.abs() > 1e-4).sum())
```
Our results for the 6.7b model:
```
....
model.decoder.final_layer_norm.weight end=:
tensor(0)
model.decoder.final_layer_norm.bias end=:
tensor(4094)
```
For the 66b model:
```
...
model.decoder.final_layer_norm.weight end=:
tensor(0)
model.decoder.final_layer_norm.bias end=:
tensor(9200)
```
--- *Q4:* Does LoRC require any fine-tuning? *A4:* Thanks for the question; our method LoRC actually works for both post-training quantization and quantization-aware training.
Due to the huge resource and time requirements, this paper focuses only on post-training quantization exploration, as indicated in the title of our manuscript. Thus, the results shown in Table 7 require no fine-tuning (see lines 290-296 and footnote 6). --- *Q5:* Compare LoRC to, e.g., LAPQ. *A5:* Thanks for pointing out this work; we will include it in the related work in our final revision. The LAPQ work focuses on the quantization step size and how to jointly optimize it using the Hessian and iterative Powell optimization. Our work LoRC, in contrast, is about how, after quantization, we can further boost the accuracy by approximating the error matrix with a low-rank decomposition (see A2 of Reviewer iFFk). The two methods should be easily composable. Beyond that, LAPQ might be too expensive (both compute- and time-wise) for LLMs: (1) it needs to compute the Hessian across different layers, and LLMs have a much larger parameter space and more layers compared to ResNet & MobileNet; (2) similarly, the iterative optimization is also not cheap. These drawbacks may limit the application of LAPQ to LLMs. --- *Q6:* Error accumulates across layers. *A6:* This is a great point. Yes, our method works in a layer-by-layer manner. There is no guarantee about error accumulation across different layers. However, this is a common issue in LLM quantization, e.g., ZeroQuant [36], SmoothQuant [35] and LLM.int8 [6]. None of these works deal with cross-layer error accumulation, and one of the core reasons is the complexity and cost of LLMs. Also, we note that although this is not taken into current algorithmic consideration, the overall quantization performance for LLMs is good in most cases. Nevertheless, this is an interesting direction to explore, particularly for extremely low-precision quantization, e.g., 2/3 bits. We will include this as a future opportunity in our discussion.
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the clarification and the very detailed explanation. All my questions were addressed.
NeurIPS_2023_submissions_huggingface
2023
TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion
Accept (poster)
Summary: The paper concerns catastrophic forgetting in neural networks and draws from neural mechanisms contributing to the absence of such phenomena in the brain to propose a method to overcome CF through targeted neuron retraining, task knowledge revision, and enhanced learning for less active neurons. The authors present an impressive suite of experimental results, benchmarking their proposed method as superior or comparable to notable existing approaches in class- and task-IL scenarios and with further analysis of the different stages of the method. Strengths: + The work creatively and effectively links to biological motivations toward continual learning (CL) capabilities. + Fairly inclusive summary of existing CL approaches with contributive discussion as to their strengths and weaknesses toward integration in the proposed approach. + Figures are well-crafted and easily understandable. + Performance is comparable to SOTA methods. + It's nice to see in the experimental data the significance of dynamic masking, retaining, rewinding. Weaknesses: - (As mentioned by the authors:) The work is primarily applicable only to CNN-based architectures in its present state, and additional tuning is required for the increased number of hyperparameters. While the paper is nice, one can see how TriRE won't work on transformers due to computational expense. - Regarding Related Work, the authors could elaborate on whether there are any other notable CL approaches that draw upon strengths of previous works as TriRE does and clarify how TriRE is superior/comparable (if applicable). - The authors could perhaps move discussion of limitations more heavily to the main text as opposed to the Appendix. The limitations discussed in Appendix C pertain to the practical applicability of the work and thus have merit for inclusion. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * I am curious as to why rewinding improves accuracy, perhaps this could be another direction for exploration. * Small clarifications regarding the method itself: * Learn: finite replay buffer with loss-aware experience rehearsal? * Retain: parameter isolation where important connections and weights are learned? * Revise: How are the subnetworks combined? I wonder if there is a way to specifically focus on free neurons when learning new tasks, so during the combination stage they are orthogonal to the combined network? * Rewind: Does relearning target specific unactivated neurons/parameters? This step is perhaps redundant if simply taking weights from a few epochs before and training more on new mini-batches to relearn. Minor: * Lines 159-160: Put “k” in math mode (top-k, k-winner) * Line 215: “Retatin” -> “Retain” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors touch upon the limitations of their work in the main text and more so in the Appendix. Social impact — not directly applicable but is mentioned in broader impacts discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work in detail. We appreciate your encouraging words on our manuscript. Our responses to the weaknesses and questions are as follows: `While the paper is nice, one can see how TriRE won't work on transformers due to computational expense.` We agree with the reviewer that added computational complexity is one of the major limitations of our approach. Therefore, a naive extrapolation of TriRE to transformer-based architectures might face hurdles such as computational complexity, hyperparameter tuning, etc. We will extend our Limitations section to include these in the next revision. However, from a distinctive standpoint, TriRE is a novel training paradigm that enables effective assimilation of multiple orthogonal CL approaches and is one of the earliest works in this direction. Although TriRE suffers from some limitations, it has been successful in showcasing the combined efficacy of multiple orthogonal CL approaches. Also, it is to be noted that the other baseline methods mentioned in Table 1 and Figure 3 would also not work on transformers, considering the architecture's innate complexity and need for more resources and training time. `whether there are any other notable CL approaches that draw upon strengths of previous works as TriRE does` To the best of our knowledge, we are one of the earliest works in this direction. However, we might have missed some recent publications. As suggested by you and other reviewers, we intend to build upon our related work to provide more information on any other notable CL approaches that, like TriRE, draw upon the strengths of previous works. `move discussion of limitations more heavily to the main text as opposed to the Appendix` Due to space constraints, we moved the Limitations section to the Appendix. We intend to expand the limitations section further by incorporating suggestions from you and other reviewers in the revised version.
Considering that, we would appreciate your guidance on identifying specific sections of the main paper that could potentially be moved to the appendix to make space for the Limitations section. `why rewinding improves accuracy, perhaps this could be another direction for exploration` Existing literature shows that active forgetting is a part of biological learning and that neuron decay due to disuse is one of the reasons for catastrophic forgetting. Inspired by that, we designed the rewind and subsequent relearning to act as a warm-up for the less active neurons that are not in the cumulative subnetwork S, making them relevant again for the learning circuit and engaging them to be more receptive to learning the next task. By keeping the cumulative subnetwork S intact, we also make sure that the rewinding of weights does not cause any performance drop. Moreover, we concur with the perspective that exploring the concept of "rewind" as a potential research direction holds promise and could open up new avenues for further investigation. `Learn: finite replay buffer with loss-aware experience rehearsal?` (Assuming the clarification is to understand loss-aware experience rehearsal better) In loss-aware balanced reservoir sampling, we compute a score vector proportional to the number of items of each class and estimate an importance score given by the opposite of the loss value for each example. Then, we normalize these two terms to ensure an equal contribution and sum them to form a single score vector. Finally, we assign each item a replacement probability proportional to the combined score. This ensures balance within the buffer in terms of the number of examples per class and helps us identify and replace the elements displaying low loss values to make space for harder examples.
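The replacement rule just described can be sketched in a few lines (a simplified toy version; the exact normalization and scoring details here are our own stand-ins for illustration, not the method's implementation):

```python
import random
from collections import Counter

def replacement_index(buffer, rng):
    """Pick which buffer slot to overwrite: slots holding an over-represented
    class AND a low (easy) loss get a higher replacement probability."""
    counts = Counter(label for label, _ in buffer)
    class_term = [counts[label] for label, _ in buffer]   # balance term
    ease_term = [-loss for _, loss in buffer]             # opposite of the loss

    def normalize(v):
        lo, hi = min(v), max(v)
        return [(val - lo) / (hi - lo + 1e-12) for val in v]

    scores = [c + e for c, e in zip(normalize(class_term), normalize(ease_term))]
    return rng.choices(range(len(buffer)), weights=scores)[0]

# toy buffer of (class_label, loss) pairs: class 0 is over-represented and easy
rng = random.Random(0)
buffer = [(0, 0.1), (0, 0.2), (0, 0.1), (1, 2.0)]
picks = Counter(replacement_index(buffer, rng) for _ in range(2000))
print(picks)  # slot 3 (rare class, hard example) is replaced least often
```

Because slots with the over-represented, low-loss class dominate the replacement probability, rare and hard examples survive longer in the buffer.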
`Retain: parameter isolation where important connections and weights are learned?` In the Retain phase, neurons and weights are learned for the current task. At the end of that stage, we employ k-WTA and heterogeneous-dropout-based activation pruning, with subsequent weight pruning, to retain the best weights and activations for that task. `Revise: How are the subnetworks combined?` In the Revise stage, the cumulative subnetwork containing knowledge from past tasks and the subnetwork containing knowledge from the current task are first trained/fine-tuned together on the rehearsal buffer to learn their joint distribution. After this revising process, we update the cumulative set $S = S \cup S_t$. This makes sure that at the end of the current task, the cumulative set contains the best weights and activations of both past tasks and the current task, which helps preserve knowledge. `if there is a way to specifically focus on free neurons when learning new tasks, so during the combination stage they are orthogonal to the combined network?` We don't encourage completely mutually exclusive subnetworks for each task, as the empirical analysis in Figure 4 shows that there is neuron overlap while learning different tasks. This knowledge sharing is particularly beneficial when it comes to scaling the model to longer task sequences. Furthermore, if all subnetworks were orthogonal to each other, this could lead to capacity saturation and task-discovery problems during inference. `Rewind: Does relearning target specific unactivated neurons/parameters?` In the Rewind phase, as correctly inferred by the reviewer, we rewind and relearn the less active neurons. The cumulative subnetwork S, which contains the most important weights and activations, remains unchanged; the remaining, less active neurons go through the Rewind phase. We hope that the provided clarification has addressed your concerns and inquiries to your satisfaction.
However, if further assistance or elaboration is required, we would be more than happy to provide additional information to ensure a complete comprehension of our work. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for the detailed response. Regarding making space for limitations: I am not sure what would be feasible to remove while preserving the intended message and impact of each section -- however, the initial descriptions of each of the three stages (Section 3) could be made more concise to create more space overall; the Broader Impacts section could be shortened (last 2 sentences) and/or combined with the conclusion; and perhaps Algorithm 1 could be moved to the appendix, although I am not sure this is in line with best practice. --- Reply to Comment 1.1.1: Title: Reply to Reviewer 3Tuq Comment: Thank you for your swift response. In line with the reviewer's suggestion, we will move the Limitations section to the main paper. Furthermore, we have expanded the scope of the limitations to encompass the potential challenges that could arise from a naive extrapolation of TriRE to transformer-based architectures, notably with regard to computational complexity and memory overhead (refer to our 'revision plan' in the official comment for Reviewer tURo).
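The mask bookkeeping behind the three stages discussed in this exchange can be condensed into a small numerical sketch (our own simplification over a toy layer of 10 neurons: the k-WTA selection, the union $S = S \cup S_t$, and the rewind-to-checkpoint step are stand-ins for the method's actual training dynamics):

```python
import numpy as np

def kwta_mask(activations, k):
    """Retain: k-winner-take-all keeps only the k most active units."""
    mask = np.zeros_like(activations, dtype=bool)
    mask[np.argsort(activations)[-k:]] = True
    return mask

rng = np.random.default_rng(0)
n = 10
S = np.zeros(n, dtype=bool)               # cumulative subnetwork mask
checkpoint = rng.standard_normal(n)       # weights saved a few epochs earlier
weights = checkpoint.copy()

for task in range(3):
    weights = weights + 0.1 * rng.standard_normal(n)  # stand-in for training
    S_t = kwta_mask(rng.random(n), k=3)   # current task's subnetwork
    S = S | S_t                           # Revise: S grows by union
    # Rewind: weights outside S return to the checkpoint; S stays intact
    weights = np.where(S, weights, checkpoint)
    print(f"task {task}: |S_t|={S_t.sum()}, |S|={S.sum()}")
```

Because the union only grows, retained weights are never rewound, while the free neurons outside S are repeatedly reset and stay plastic for future tasks.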
Summary: The paper proposes a new continual learning (CL) method that updates a subset of neurons while rewinding other neurons to previously stored weights. To do this, the authors apply sparsity constraints to the network weights and select highly activated neurons. Consequently, the proposed method outperforms the baselines. Strengths: 1. The proposed method is generally technically sound. 2. The ablation study in Table 2 helps us understand the contribution of each component. Weaknesses: 1. The proposed method has three components: retain, revise, and rewind. The key contribution is the rewind part, while the other parts are peripheral or lack novelty. 2. What is the overhead cost of rewinding, such as training time, memory cost for holding previous weights, and the performance drop on the current task? 3. There are 7 different parameters, and it seems they are arbitrarily chosen to obtain the best performance for each task (Appendix D), as the authors mention in Appendix C, which is appreciated. 4. The statement in L211-212, “This is helpful because studies show that in the human brain, less active neurons follow a ‘use-it-or-lose-it’ philosophy”, is difficult to understand: why is it “helpful”? It seems that the authors blindly assume that “human-like” is good. 5. It would be better if the authors included recent baselines [1-2] and checked whether the proposed method is state-of-the-art in the current setting. [1] Fu-Yun, et al. "Foster: Feature boosting and compression for class-incremental learning." ECCV. 2022. [2] Zhou, Da-Wei, et al. "A model or 603 exemplars: Towards memory-efficient class-incremental learning." ICLR. 2023. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your thorough review of our paper. Below, we have addressed each of your concerns to the best of our understanding, aiming to enhance the paper's overall contribution. `What is the overhead cost for rewinding, such as training time, memory cost for holding previous weights, and the performance drop for the current task?` We appreciate the reviewer's insightful observation regarding the absence of a computational cost analysis. Furthermore, we concur with the interpretation regarding the potential drawback of the Rewind phase in terms of memory consumption and training time. Yet, we wish to emphasize that the novelty of our paper lies in effectively integrating existing neuro-inspired concepts, aiming to guide the CL research community toward a less-explored direction. We are actively considering an efficient interpretation and implementation of TriRE as a natural progression of this work. However, we acknowledge the present version's drawbacks, including memory and training costs, which will be duly addressed in the limitations section. Additionally, we would like to address the concern regarding a possible performance drop on the current task because of the Rewind phase. The analysis for this can be found in Section B.2 in the Appendix, where we analyze it from the perspective of the stability-plasticity dilemma. As shown in Figure 7, while other baselines like ER and DER++ suffer from recency bias and are more plastic, TriRE clearly balances the stability-plasticity trade-off, managing to preserve existing knowledge from past tasks while acquiring new knowledge from current tasks. This is because while rewinding the weights, we only rewind the weights that are not in the cumulative subnetwork S. And to reiterate, by the time we reach the Rewind stage, S contains the most important weights and activations for the past and current tasks.
So, by keeping them intact and only rewinding the rest of the weights, we preserve what we have already learned and bring the less active neurons back into the learning circuit. `It seems that the authors blindly think that “human-like” is good.` Existing literature [1, 2] shows that active forgetting is part of biological learning and that neuron decay due to disuse is one of the causes of catastrophic forgetting. Therefore, we designed the rewind and subsequent relearning to act as a warm-up for the less active neurons that are not in the cumulative subnetwork S, making them relevant again for the learning circuit and more receptive to learning the next task. This also ensures that the model is scalable and utilizes the available parameters efficiently. Moreover, we support this hypothesis with the empirical analysis in Section 5 (under Ablation Study), which separately examines the impact of the Rewind phase on the overall algorithm. [1] Shors, Tracey J., Megan L. Anderson, D. M. Curlik II, and M. S. Nokia. "Use it or lose it: how neurogenesis keeps the brain fit for learning." Behavioural Brain Research 227, no. 2 (2012): 450-458. [2] Almond, N. M. "The Use-It-Or-Lose-It Theory; The Cognitive Reserve Hypothesis and the Use-Dependency Theory: Methodological Issues, Previous Research, Current Research and Future Perspectives." In K. Edison (Ed.), Episodic Memory: Formation, Clinical Disorders and Role of Aging. New York: Nova Science Publishers, Inc. (2014). `It will be better if the authors include recent baselines [1-2]` We extend our gratitude to the reviewer for bringing to our attention the possibility of establishing new baselines. That being said, the experimental setups of the suggested papers and TriRE differ, so it would be difficult to incorporate them in the current version.
However, your suggestion has enriched our perspective, and we will extend our Related Works section to include these newer methods in the final version. Currently, we have made an earnest effort to consider baselines that are representative of each category of existing CL methods that combat task interference and catastrophic forgetting. Once again, we thank you for your thoughtful evaluation and consideration. We have diligently tried to address each concern you raised, and we are committed to ensuring that our paper contributes meaningfully to the conference proceedings. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my concerns. I have follow-up questions. `Related to overhead of rewind` I recommend measuring or computing the complexity of the overhead cost in terms of memory and runtime. `Related to "It will be better if the authors include recent baselines [1-2]"` It would be better if the authors specified how the experimental setting differs in a way that hinders a fair comparison. Neither is in a setting very different from that of DER++ used in Table 1. --- Reply to Comment 1.1.1: Title: Reply to Reviewer 6DDf Comment: We thank the reviewer for taking the time to review our rebuttal. With regard to measuring computational and memory overhead, we plan to include this in the final revision (see our 'revision plan' in the official comment for Reviewer tURo). As these experiments need to be standardized (with the same experimental settings, software environments, and underlying hardware) to make sure we are comparing apples to apples, we expect them to take some time. We hope to provide these results before the discussion period ends, but unfortunately, time and limited computational capacity are proving to be a constraint. In any case, we will include these results in our final revision. We considered the most important baselines from different CL approaches to evaluate the efficacy of TriRE.
We intend to compare and contrast Foster [1] and Memo [2] analytically in our final revision. Some of the key differences hindering a fair comparison are: (1) different backbones (ResNet-18 in TriRE vs. a ResNet-32 variant in Foster for the CIFAR100 experiments), (2) different buffer sampling strategies, (3) different buffer sizes (500 vs. 2000), and (4) different training schedules. The same arguments hold for Memo, as it has similar experimental settings to Foster. With sufficient time, these differences can be addressed and the methods can be compared on common ground. As per the reviewer's suggestion, we will include an experimental evaluation entailing a comparison between these methods in the final revision. Our revision plan can be found in the official comment for Reviewer tURo. [1] Wang, Fu-Yun, et al. "Foster: Feature boosting and compression for class-incremental learning." ECCV. 2022. [2] Zhou, Da-Wei, et al. "A model or 603 exemplars: Towards memory-efficient class-incremental learning." ICLR. 2023.
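For concreteness, the selective Rewind step discussed in this thread (keeping the weights in the cumulative subnetwork S intact while restoring the remaining weights to an earlier checkpoint) can be sketched as follows. This is our own illustrative NumPy sketch, not the authors' implementation: the dict-of-arrays layout and the names `rewind`, `cumulative_mask`, and `checkpoint` are assumptions made for the example.

```python
import numpy as np

def rewind(current, checkpoint, cumulative_mask):
    """Rewind only the weights OUTSIDE the cumulative subnetwork S.

    Weights inside S (marked True in the mask) are the most important ones
    for past and current tasks and are kept intact; the rest are reset to
    their earlier checkpointed values so the less active neurons can
    re-enter the learning circuit.
    """
    rewound = {}
    for name, w in current.items():
        in_S = cumulative_mask[name]  # True where the weight belongs to S
        rewound[name] = np.where(in_S, w, checkpoint[name])
    return rewound

# Tiny usage example with hypothetical weights
current = {"layer1": np.array([1.0, 2.0, 3.0])}
checkpoint = {"layer1": np.array([9.0, 8.0, 7.0])}
mask = {"layer1": np.array([True, False, True])}
result = rewind(current, checkpoint, mask)  # → [1.0, 8.0, 3.0]
```

The memory overhead the reviewer asks about is visible here: a second copy of the non-S weights (the checkpoint) must be retained until the Rewind stage.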
Summary: The paper proposes a method for avoiding catastrophic forgetting in continual supervised learning that operates in three stages. In the first stage (retain), a subnetwork for the current task is identified by detecting the most/least activated neurons of the main network. In the second stage (revise), both the main network and the extracted subnetwork are re-trained / fine-tuned with examples from the current task and examples from a replay memory. The subnetwork is later integrated into the main network. In the final stage (rewind), the weights belonging to non-cumulative subnetworks are fine-tuned for a few epochs. The paper presents experimental results on three benchmark continual learning datasets, in both the task-incremental and class-incremental settings, also comparing with counterpart methods in the area. --Rebuttal-- I read the rebuttal, along with other reviewers' comments, and increased my score accordingly (to borderline accept) during the rebuttal phase. Strengths: - The originality of the paper lies in a new multi-stage method to tackle catastrophic forgetting which relies on the important concepts of modularity and example replay. - The paper is in general easy to follow. - The paper is properly aligned with literature in the area. Weaknesses: - Although the proposed approach touches on a lot of important points in catastrophic forgetting and continual learning, it seems to be a mere combination of existing ideas into a so-called three-stage approach, therefore undermining the novelty of the proposed mechanism. Furthermore, what seem to be novel aspects of the proposed approach are not explained in full detail, and therefore it is difficult to estimate their impact. For instance, in the retain phase, activation pruning is performed by using the existing heterogeneous dropout, while weight pruning uses the existing CWI approach.
For activation pruning, in lines 158-159, it is stated that a counter is used to determine the top-k winner activations. However, no technical details are provided regarding how this counter works. In the revise stage, lines 191-193 mention that the learning rate is "considerably" reduced at this phase; however, there is no insight into how to decide what reduction rate to use. For this same stage, in lines 195-199 it is mentioned that the S and S_{t} subnetworks are eventually merged; however, no technical details of how this merge occurs are provided. Finally, for the rewind stage, in lines 209-214, it is unclear how to decide for how many epochs (k?) to rewind the network, and what criteria or requirements govern this decision. - The experiments lack analysis of computational cost (e.g. memory consumption, training time). As can be inferred from Algorithm 1 and Section 3, the multi-stage procedure proposed in this paper involves several calculations and passes through the network/subnetwork. What is the cost of this multi-stage procedure? How does this compare to counterpart methods? - In the experiments, some results are left unexplained while some others have an ambiguous explanation. For example, why does the proposed method seem to underperform on Task-IL for two of the three datasets evaluated? The explanation for the "miserably" (please change this word) performance of methods such as LwF and SI provided in lines 240-246 looks ambiguous and incorrect, since these methods have been used previously in Task-IL. - From the results in Figure 3, it can be implied that the proposed method actually improves performance as the number of tasks increases. Is that the case? Did you run multiple task orders? Are tasks 10 to 20 naturally easier? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to questions listed in the "weaknesses" section. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations in terms of the network architecture used are clearly stated in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we would like to express our gratitude for the time and attention you dedicated to reviewing our paper. Below, we have carefully addressed each of your concerns to the best of our knowledge to enhance the paper's overall contribution. `no technical details are provided regarding how the activation counter works.` For a given task, each neuron in a layer is monitored for its frequency of activations during training. In essence, each neuron is given an activation counter that increases whenever its activation is among the top-k activations in its layer. In the Retain phase, the likelihood that a neuron will be dropped is inversely proportional to its activation counts. As a result, the model is not only encouraged to preserve the knowledge of the current task but also to learn the new task utilizing neurons that were less active during earlier tasks. At the end of each task, we reset this counter to make sure that we track the neuron activity of every task separately. `the learning rate is "considerably" reduced at Revise phase; however, no insights into how to decide what reduction rate to use.` We understand that the instruction to decrease the learning rate "considerably" in the Revise stage could be misleading for the reader and appreciate you pointing it out. Our intent was that, depending on the optimizer used, the model architecture, and the complexity of the dataset involved, slowing down the learning in the Revise stage helps by not drastically overwriting the existing weights while simultaneously reaping the benefits of joint-distribution-based training. We acknowledge that this is a hyperparameter that needs tuning, but for the datasets mentioned in the paper, we have provided the learning rates that gave us the best results in the Appendix, which could be a good starting point for anyone in the community who wants to build on this idea.
Also, as a rule of thumb, from our experiments we found that $\eta'$ should be approximately 1/20th of the initial learning rate $\eta$ for smaller datasets and 1/10th for larger datasets. `mentioned that the S and S_t subnetworks are eventually merged; however, no technical details of how this merge occurs are provided.` By merging we mean updating the cumulative set S = S $\cup$ S_t at the end of the Revise step. Specifically, the new cumulative mask includes the weights that were already part of the cumulative mask together with the new set of weights identified as most active by S_t. This information is part of Algorithm 1, line 16. In any case, we will update Figure 1 and provide more clarity in the next revision. `it is unclear how to decide for how many epochs (k?) to rewind the network` This has been addressed in Section 5 under "How much to Rewind?". As explained in the paper, rewinding to very early and very late stages of training tends to decrease the accuracy. The former does not work because the network has not learned enough meaningful features by then to regain the lost accuracy, and the latter does not work because there is not enough time for relearning. The empirical analysis shown in Figure 5 indicates that rewinding to between 70% and 90% of the training time in the Retain phase results in the best accuracy. `explanation for performance of methods such as LwF and SI provided in lines 240-246 looks ambiguous and incorrect` We regret that the explanation for the unsatisfactory performance of LwF and SI when compared to our work seems ambiguous. However, we do want to point out that our method does considerably better (see Table 1) than weight-regularization methods like LwF and SI, and we were simply trying to bring out the difference in accuracy between the two scenarios. There is a relative increase of ~40% between LwF and TriRE in terms of Task-IL accuracy on Seq-CIFAR10.
Furthermore, TriRE's Task-IL accuracy shows a relative increase of ~200% on Seq-TinyImageNet, which is a harder dataset considering its low buffer-to-class ratio. This shows that our method, which uses a combination of weight regularization, experience replay, and parameter isolation, is better than using weight regularization alone. However, following the reviewer's suggestion, we will remove the word "miserably", as it is an unfairly harsh description, and add more explanation. `implied that the proposed method actually improves performance as the number of tasks increases. Is that the case? Did you run multiple task orders? Are tasks 10 to 20 naturally easier?` We would like to clarify that Figure 3 provides results on all 20 tasks after training on Seq-CIFAR100 with 20 tasks, i.e., Figure 3 depicts the final accuracies on the 1st, 2nd, ..., 20th tasks after training. Therefore, it cannot be inferred that tasks 10 to 20 are naturally easier. On the contrary, catastrophic forgetting worsens as the number of tasks in a sequence increases, which is referred to as long-term catastrophic forgetting. The number of samples in the buffer representing each previous task drastically reduces in longer task sequences, resulting in poor performance. With the Revise phase forcing joint-distribution-based training, thus preserving forward transfer, and with the Rewind phase forcing less active neurons to participate in task learning, TriRE combats task interference better. We employ the same task order as the compared baselines to maintain fairness in comparison. Therefore, Figure 3 contains a single task order averaged over multiple random seeds. We once again thank the reviewer for the detailed feedback. We have made our utmost effort to address all the concerns raised. Please let us know in case we have missed something.
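The activation-counter mechanism described in the reply above can be sketched as follows. This is an illustrative NumPy sketch of our own, not the authors' code: the function names, the `base_rate` parameter, and the exact `1 / (1 + count)` scaling are assumptions, chosen only to match the stated behaviour (the counter increments when a neuron's activation is among the top-k in its layer, and the drop probability is inversely proportional to the counts).

```python
import numpy as np

def update_counters(activations, counters, k):
    """Increment the counter of each neuron whose activation is among the
    top-k activations of its layer. Counters are reset at the end of each
    task so neuron activity is tracked per task."""
    topk_idx = np.argsort(activations)[-k:]  # indices of the k largest activations
    counters[topk_idx] += 1
    return counters

def keep_mask(counters, rng, base_rate=0.5):
    """Sample a dropout mask whose drop probability is inversely
    proportional to the activation counts (illustrative scaling):
    frequently active neurons are dropped less often."""
    p_drop = base_rate / (1.0 + counters)
    return rng.random(counters.shape) >= p_drop

# Usage with a hypothetical 4-neuron layer
counters = np.zeros(4)
acts = np.array([0.1, 0.9, 0.5, 0.7])
counters = update_counters(acts, counters, k=2)  # neurons 1 and 3 incremented
mask = keep_mask(counters, np.random.default_rng(0))
```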
--- Rebuttal Comment 1.1: Title: Given author responsiveness re limitations and other points, I now more strongly favor acceptance Comment: If there is an official way to revise my rating of the ms, I hope someone will let me know. Otherwise, I'll just say here that I'm hopeful the paper can be accepted and presented at least as a poster at the conference. --- Reply to Comment 1.1.1: Comment: Thank you for advocating for our paper. In your official review, there is an "Edit" button that will enable you to modify the initial review and rating. Your support is greatly appreciated. --- Rebuttal 2: Title: Authors' rebuttal Comment: Thanks to the authors for their responses to my questions and concerns. After reading those, along with other reviewers' comments and the authors' responses to them, I am happy to increase my score as long as the authors commit to including in the final version of their paper the explanations and results of the missing elements, in particular the computational cost of the method compared to others and details of hyperparameter tuning (considering the large number of hyperparameters that are needed). --- Rebuttal Comment 2.1: Title: Reply to reviewer Soxi Comment: We thank the reviewer for their response. As suggested by you and other reviewers, we fully commit to including experiments on computational overhead, hyperparameter tuning, and explanations for the missing elements. As can be seen in the official comments for Reviewer 6DDf, we have made some progress with respect to the suggested changes. Please find our complete revision plan under an official comment for Reviewer tURo. Please let us know in case any of your concerns are missing from the final revision plan.
Summary: This article introduces a multi-faceted approach to continual learning, combining features of many other approaches and relying on a three-phase training process, such that, as each new task in a sequence of tasks is encountered, subsets of weights in the network are 'retained', 'revised' or 'rewound' to values that maintain old knowledge while beginning to learn the new task, facilitate integration of weights important for the new task with those important for previous tasks, and maintain plasticity for future learning, respectively. The authors find advantages for this approach relative to a large range of other approaches, especially with more complex task settings, and perform several ablations helping to clarify the roles of the three stages, along with a few other explorations. Strengths: The paper cites a wide range of relevant related work and situates its approach within the context established by other approaches. The other approaches considered seem fairly extensively sampled. I appreciated the consideration of the effects of the different ablations, which show the importance of the 'rewind' phase, as well as the stability-plasticity analysis shown in the appendix. Weaknesses: The work seems thoughtful in relation to the relevant existing literature, but it may be that the CL paradigm as explored in this project is in need of re-framing if new breakthroughs are to be achieved. While the advantages relative to many of the baselines seem clear, I felt that the additional complexity of the TriRE scheme made the gaps between it and some of the other approaches relatively difficult to get excited about. Some of the features that seem relatively important, such as CWI, are direct importations from previous work, making it difficult to assess whether TriRE really advances our understanding, given its additional complexity compared to most other work.
To me the greatest weakness of the approach is one that appears to be widely shared across the CL literature: this is the fact that this and all of the cited work rely on triggering complex meta-processes at the boundaries between tasks, in ways that seem very distant from what might occur in biological networks or in naturalistic continual learning settings where task boundaries are not announced. We need approaches that can address the continual learning problem when tasks shift more gradually and task boundaries are not available. The approach also exploits features limited to the setting in which tasks are defined by the fact that they use completely non-overlapping sets of output neurons (i.e. distinct class labels). For example, the loss on the new-task items in Eq. 4 only considers the class labels relevant to the current task. Finally, continual learning as explored in this literature is restricted to tasks with no intrinsically cumulative structure. Catastrophic forgetting also occurs in settings where new learning could productively build on structure learned in previously learned tasks (in the way that addition builds on counting, multiplication builds on addition, etc.). The whole body of work thus seems narrowly focused on paradigms of limited general interest. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I had difficulty understanding the CLS-ER results, which are almost as good as TriRE's. A clearer understanding of what the CLS-ER model shares with TriRE and how it differs from it would be useful in evaluating the contributions of TriRE over and above the use of an EMA model smoothing weights across tasks. The note in Table 1 is insufficient for me to understand the relationship between CLS-ER and TriRE. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I initially wrote: "The paper mentions limitations in the appendix. As stated they hint at some of the same limitations I see in the work as described above. Per the importance placed on limitations in instructions to authors and reviewers, these ought to be placed in the main text." The authors have addressed this concern through the discussion during the rebuttal, leading me to increase my rating to 'Accept'. While I think the weaknesses described above still apply, they are, as I have said, largely shared by a mini-paradigm in which these issues have been addressed. I hope the authors will be encouraged by acceptance of this work to work on a new paradigm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for providing thoughtful feedback and sharing a constructive evaluation of our work. Your valuable input has greatly contributed to the enhancement of our paper. `complexity of the TriRE scheme made the gaps between it and some of the other approaches relatively difficult to get excited about.` We agree with the implication that TriRE has additional complexity when compared to the other mentioned baseline methods. But the exciting element of what we are proposing is that there is a very plausible research direction which was underexplored and has shown promise. Although the work before ours has pushed the boundary in the CL domain, it has missed out on the benefits of effectively integrating several neuro-physiological processes emulating how biological learning works. We manage to coherently combine the concepts of meta-plasticity, active forgetting, relearning, neurogenesis, and context gating to form a new CL paradigm and empirically show that it is a viable direction. `the greatest weakness of the approach is one that appears to be widely shared across the CL literature:` We appreciate the reviewer for highlighting a common gap observed in both our proposed work and the related literature. Your discerning analysis has emphasized an area of significance that merits careful consideration and refinement. That being said, we would argue that CL as a topic is in its early phase, and what you suggested (task-boundary-free training, rehearsal-free training) is part of the desiderata of CL [1]. Therefore, as the domain progresses, we believe that there is a lot of scope for improvement. For instance, neuro-symbolic continual learning [2] is a subtopic within CL that broadly deals with the reviewer's suggestion: the concept of new learning productively building on structure learned in previously learned tasks.
However, in this work, we reiterate that our focus was on demonstrating the efficacy of harmoniously combining existing CL methods to better tackle catastrophic forgetting, and it achieves this goal. Ours is one of the earliest works in this direction, and we hope that it spurs the CL research community to explore more methods which combine neuro-physiological processes as in the biological brain. [1] Farquhar, Sebastian, and Yarin Gal. "Towards robust evaluations of continual learning." arXiv preprint arXiv:1805.09733 (2018). [2] Marconato, Emanuele, Gianpaolo Bontempo, Elisa Ficarra, Simone Calderara, Andrea Passerini, and Stefano Teso. "Neuro-symbolic continual learning: Knowledge, reasoning shortcuts and concept rehearsal." arXiv preprint arXiv:2302.01242 (2023). `A clearer understanding of what the CLS-ER model shares with TriRE and how it differs from it would be useful` CLS-ER emulates the interplay between fast and slow learning systems by incorporating two supplementary semantic memories that aggregate the weights of the working model in a stochastic manner via an exponential moving average. That is, CLS-ER operates with three models: the working model, the stable model, and the plastic model. Conversely, the TriRE framework consists of two models: the working model and a stable model (referred to as the EMA model). Therefore, the similarity is that both methods use the concept of progressively aggregating the weights of the working model as it sequentially learns tasks, allowing the information to be consolidated efficiently. However, there are two main differences: 1. CLS-ER uses two supplementary semantic memories whereas TriRE uses only one supplementary memory, and our results are still better than CLS-ER's on all three datasets, as shown in Table 1 of the main paper. 2.
CLS-ER only uses experience replay to tackle catastrophic forgetting, whereas our method harmoniously integrates all the families of CL methods, i.e., weight regularization, parameter isolation, and experience replay. That is, CLS-ER only focuses on one aspect of biological learning whereas we effectively consider multiple aspects. We once again thank the reviewer for the detailed and insightful feedback. Please let us know in case we have missed any open points. --- Rebuttal 2: Title: What revisions do the authors plan? Comment: I hope I'm not missing it, but I didn't see signs of any intention to revise either in the overall rebuttal or in the response to my comments. Before re-confirming my belief that the paper deserves consideration for acceptance, I'd like to see the authors present a revised ms with a limitations section included in the main paper, specifically addressing the concerns I have raised with the whole continual learning setup of using announced task boundaries to gate complex meta-processes. --- Rebuttal Comment 2.1: Title: Reply to Reviewer tURo Comment: We thank the reviewer for the swift response. We take reviewers' feedback seriously and intend to accommodate all their concerns in the final revision. As uploading a revised manuscript is not possible under NeurIPS guidelines and the rebuttal had a limited word count, we did not provide a revised version of the limitations section.
Considering the feedback from all reviewers, we intend to update our manuscript with the following changes: - Computational and memory overhead comparison - Robustness to the choice of hyperparameters - Information pertaining to the differences between CLS-ER and TriRE - Clarifications regarding Figure 3 - Extending our evaluation with more recent baselines (as proposed by other reviewers) - Other miscellaneous / minor changes and clarifications - Limitations and Future Work: We proposed TriRE, a novel CL paradigm that leverages multiple orthogonal CL approaches to effectively reduce catastrophic forgetting in CL. As orthogonal CL approaches may not always be complementary, the selection of such approaches needs careful consideration in TriRE. In addition, having multiple objective functions naturally expands the number of hyperparameters, thereby requiring extensive tuning to achieve optimal performance. Therefore, the additional computational complexity and memory overhead due to the staged approach and extensive hyperparameter tuning are among the major limitations of the proposed method. For the same reason, we highlight that TriRE is not directed towards compute-intensive architectures such as vision transformers. As TriRE involves different stages of training within each task, it assumes knowledge of task boundaries. Moreover, in line with state-of-the-art methods in CL, each task entails a non-overlapping set of classes, with data within each task shuffled to guarantee i.i.d. data. However, in the case of online learning, where data is streaming and the distribution shifts gradually, TriRE cannot be applied in its current form. Therefore, additional measures such as task-boundary approximation and modifications to the learning objectives are necessary to enable TriRE to work in such scenarios. Furthermore, the traditional CL datasets considered in this work entail independent tasks and data points without intrinsic cumulative structure.
As TriRE does not leverage structures learned in previously encountered tasks, structure learning forms another limitation of the proposed method. Reducing computational and memory overhead, extending to task-free CL scenarios with recurring classes, and leveraging intrinsic structures within the underlying data are some of the future research directions for this work.
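Two mechanics discussed in the rebuttals above, the cumulative mask merge S = S ∪ S_t and the slow EMA aggregation of the working model's weights into the stable model, can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions (dict-of-boolean-arrays masks, dict-of-arrays weights, and the function names `merge_masks` and `ema_update` are hypothetical); the decay value 0.999 is the one reported in the rebuttal.

```python
import numpy as np

def merge_masks(S, S_t):
    """Cumulative mask update S = S ∪ S_t (cf. Algorithm 1, line 16): keep
    the weights already in the cumulative mask and add the weights newly
    identified as most active for the current task."""
    return {name: np.logical_or(S[name], S_t[name]) for name in S}

def ema_update(ema_weights, working_weights, decay=0.999):
    """Stable (EMA) model slowly tracks the working model; a decay close
    to 1 mimics the slow acquisition of structured knowledge."""
    return {
        name: decay * ema_weights[name] + (1.0 - decay) * working_weights[name]
        for name in ema_weights
    }

# Usage with hypothetical one-layer masks and weights
S = {"layer1": np.array([True, False, False])}
S_t = {"layer1": np.array([False, True, False])}
S = merge_masks(S, S_t)  # layer1 → [True, True, False]
```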
Rebuttal 1: Rebuttal: In this section, we aim to address the overarching concerns raised by reviewers, ensuring a comprehensive response to the broader themes highlighted in their feedback. `The key contribution part is rewind while other parts are peripheral or have no novelty / It seems to be a mere combination of existing ideas into a so-called three-stage approach.` We respectfully disagree with this characterization. We intend to showcase that the novelty of the proposed method lies in the fact that it is able to effectively combine the existing families of CL methods to form a sophisticated new CL paradigm. The key contribution of the work is not the stages or phases by themselves but our effective interpretation of how these stages can harmoniously work together to create a novel CL method. As mentioned in the 'Related Works' section, each of these categories of methods has a plethora of research available individually, but they also have their drawbacks. For instance, weight regularization (inspired by meta-plasticity) alone only imposes a soft penalty, which does not entirely prevent catastrophic forgetting. Rehearsal methods (inspired by experience replay) do better but suffer from overfitting in low-buffer regimes, whereas parameter isolation (inspired by context gating and/or neurogenesis) alone is not a good candidate for longer task sequences due to capacity saturation. Notably, the mammalian brain does not depend on a singular concept but on an amalgamation of the above-mentioned neuro-physiological phenomena working coherently together. As obvious as it may seem, current CL research is not focused on this direction, and with this work, we are trying to bring attention to this idea by showing its effectiveness and plausibility. To the best of our knowledge, ours is one of the earliest works in this direction, and we have demonstrated its potential with empirical results.
`Weight pruning uses the existing CWI criterion` While we recognise that CWI is an existing weight-pruning criterion, it is essential to note that the proposed work focuses on a novel CL paradigm and considers weight pruning as a means to an end. The purpose here is not to propose a new pruning method. Nonetheless, in the process, other pruning criteria were considered, such as magnitude pruning and Fisher-information-based pruning (refer to the attached file). As shown in the table, it is evident that CWI contributes only marginally to the overall accuracy improvement. The method is conceptually compatible with other criteria as well. The rationale behind selecting CWI is primarily rooted in its capacity to assess the weight significance relative to data stored in the rehearsal buffer. As shown in the table, even the other pruning criteria yield better results than CLS-ER, even without hyperparameter tuning due to time constraints. Also, as weight pruning is an active research area, we believe newer pruning criteria could also work in conjunction with our method. `There are 7 different parameters and it seems that it is arbitrarily chosen` As this work is one of the earliest in the direction of effectively combining existing CL methods, it is not the most optimal method, and the number of hyperparameters adds a perception of complexity. However, we can assure you that they are not arbitrarily chosen to produce the best performance for each task. Although the combination of hyperparameters that gave the best accuracy has been tabulated in Table 4 as a means for reproducibility, our method is not highly sensitive to the particular choice of hyperparameters and is robust enough to handle different settings and attain similar performance. Regarding the intuition behind the hyperparameters: for instance, the rewind percentile, which decides how much to rewind, has been analyzed, and the intuition for choosing values in the 70-90% percentile range has been discussed in Section 5.
Furthermore, the EMA decay parameter is fixed at 0.999 for all experiments to mimic the slow acquisition of structured knowledge by the EMA model. Also, the learning rate in the Revise phase ($\eta'$) depends on the initial learning rate ($\eta$). From our experiments, we found that for smaller datasets, $\eta'$ is approximately 1/20th of $\eta$, and for larger datasets, approximately 1/10th of $\eta$. This criterion is a safe starting point for finding the most suitable learning rate for the Revise stage. `The experiments lack analysis of computational cost and what is the cost of multi-stage procedure?` We thank the reviewers for identifying the lack of computational cost analysis, and we have made a note to include a comprehensive analysis in the final version of the paper. We would also agree that the multi-stage procedure proposed in this paper is not the most optimal solution in terms of memory and computational efficiency and has a lot of room for improvement. However, we would like to reiterate that efficiency was not a goal of this work; the novelty of this paper lies in the effective combination of existing neuro-inspired concepts. We are trying to spur the CL research community towards this less explored direction. An efficient interpretation and implementation of TriRE is a natural progression of this work that we are also considering. `Why does the proposed method seem to underperform on Task-IL for two of the three datasets evaluated?` Given that Task-IL is considered an easier CL scenario than Class-IL, TriRE’s Task-IL performance is not statistically significantly better or worse than that of the competing methods. However, as the CL scenario gets harder, the difference between the competing methods becomes clearer, reflecting their varying ability to mitigate catastrophic forgetting. For instance, the competing methods perform much worse than TriRE on Seq-TinyImageNet, where the buffer-to-class ratio is low. 
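For concreteness, the slow EMA update discussed above can be sketched as follows; this is a minimal illustration under our own naming assumptions (the function and variable names are not from the paper), with the decay fixed at 0.999 as in our experiments:

```python
def ema_update(ema_weights, working_weights, decay=0.999):
    """One slow EMA step: the stable model moves toward the working
    model at rate (1 - decay), mimicking gradual consolidation of
    structured knowledge. Illustrative sketch only; names are ours."""
    return [decay * e + (1.0 - decay) * w
            for e, w in zip(ema_weights, working_weights)]

# With decay = 0.999, moving most of the way toward a new target takes
# on the order of 1 / (1 - decay) = 1000 updates.
ema = [0.0]
for _ in range(1000):
    ema = ema_update(ema, [1.0])
```

After 1000 updates toward a fixed target of 1.0, the EMA sits at 1 − 0.999^1000 ≈ 0.63, which is why the stable model only slowly absorbs new-task knowledge.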
We regret the lack of clarity in the current version and will update the manuscript as per your suggestion in the next revision. Pdf: /pdf/ab5a37c23b6d0769647bbd15e7b5ade5198151e5.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
VaRT: Variational Regression Trees
Accept (spotlight)
Summary: The authors propose a non-parametric Bayesian model for regression trees. Variational inference is used to approximate the posterior distribution. Extensive experiments are conducted to show the superiority of the method. Strengths: The authors proposed a non-parametric method to learn regression trees. The paper is well-written and easy to follow. Experimental results on 18 datasets are provided. Weaknesses: 1, The basic idea is too simple. The authors proposed several prior distributions for the variables in the tree. Then the parameters in the prior distributions are optimized using SGD with some reparametrization tricks. The priors as well as the reparametrization tricks have been proposed already. So the contribution of this paper is limited. 2, No introduction is provided regarding the 18 datasets. No discussion is provided for the prediction results. The highlighted RMSEs in Table 1 are not always the lowest. 3, The datasets used in 4.1 are too small. It would be interesting to show the performance on large-scale datasets. The computational complexity of the proposed method seems to be high. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1, what is the function g(\phi,\epsilon) in equation (16)? 2, what is the symbol \beta_{pa_k} in equation (27)? 3, Are the RMSEs of VaRT reported in Table 1 obtained using a single tree? Please provide some discussion to explain why VaRT with a single tree outperforms BART with 50 trees. It would be interesting to provide the prediction results of CART and random forests. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: When compared with CART, the proposed method seems time-consuming Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we extend our gratitude for your assessment of our paper. Your feedback has been instrumental in shaping our work, and we value the insights you've shared. We acknowledge your observation about the apparent simplicity of our approach, which centers around non-parametric Bayesian models for regression trees. While the foundational concept of using tree-based models for regression might seem straightforward, we wish to emphasize that the novelty lies in the intricate interplay of the proposed prior distributions, the design of the variational family, and the utilization of these elements within the broader context of Variational Inference to approximate the posterior distribution. We believe that this unique amalgamation contributes to the field by providing a methodologically distinct approach to tackling the challenges associated with Bayesian regression tree modeling. As of now, we are not aware of any prior research that encompasses the distinctive combination of techniques featured in our method. Given your expertise in the field, we would greatly appreciate any guidance you can provide in identifying references that might shed light on parallel developments. In your review, you noted the application of a reparametrization trick that has been introduced elsewhere in the literature. We acknowledge this point, and we do not claim originality in this aspect. In fact, we explicitly mention its prior introduction in our manuscript (lines 94-96). Our emphasis was on demonstrating how this established technique fits seamlessly into our broader framework. We also acknowledge your concerns about the choice of datasets and the absence of discussions regarding prediction results in our paper. 
We hope to have addressed some of these concerns in the "global" rebuttal and remain open to answering any questions you may have. Regarding the prediction results, our intention was to provide a transparent evaluation. The notation in Table 1, indicated by "∗," aimed to highlight instances where VaRT's posterior predictive distribution excelled or overlapped with other algorithms. This approach is aligned with our commitment to unbiased reporting. In response to your suggestion regarding the dataset sizes employed in our experiments, we took your advice into consideration and extended our evaluation to include datasets comprising more than 40,000 data points. Notably, our method demonstrated seamless scalability to handle these larger datasets, owing to its inherent parallelizability. The computational efficiency of our algorithm was evident, with fitting times spanning a range of one to three minutes. It's worth noting the significant contrast with Bayesian MCMC methods like BART, which are constrained by their sequential nature and lack support for parallelization. While we acknowledge the existence of rapid frequentist boosting algorithms such as XGBoost and CatBoost, we recognize the potential for further exploration in this direction. We are excited about the prospect of future advancements in this area and acknowledge that the initial implementation of our algorithm might not outpace these established algorithms in terms of speed. Regarding your specific questions, the function $g(\phi,\epsilon)$ is the function utilized in the reparametrization trick. While we adopted this notation from (Kingma and Welling, 2014), we regrettably omitted its explicit definition in our manuscript. We appreciate your attention to this detail and will rectify this oversight by including a dedicated line that provides a clear definition of this function within our paper. Furthermore, we understand that the notation $\beta_{pa_{k}(i)}$ might have caused confusion. 
To clarify, this notation represents the $\beta$ coefficient associated with the $k^{th}$ parent of node $i$ in the tree. We recognize the potential for ambiguity and will address this concern by adding a clarification in our paper to ensure proper understanding. Regarding the observation that a single tree occasionally outperforms an ensemble of trees, we recognize that this might appear unexpected. The distinction lies in our approach's capacity to endow individual trees with the ability to capture intricate data relationships. This capability is rooted in our algorithm's inherent adaptability, facilitated by more complex splitting and prediction rules. Moreover, as we learn a distribution over the space of trees, each posterior sample can yield a different prediction, resulting in an ensemble-like effect. On average, this allows our model to encapsulate inherently intricate functions. This nuanced aspect likely contributes to instances where our singular VaRT posterior distribution over trees showcases superior performance compared to other boosting methods reliant on ensembles of trees. We acknowledge the importance of accentuating this distinctive attribute within our paper to provide a comprehensive context for the observed outcomes. We want to highlight that we really value your critical insights and are eager to address any concerns you may have to ensure the accuracy and quality of our work. We believe that your perspective will greatly contribute to enhancing our manuscript, and we hope that this clarifying response provides a more accurate portrayal of our contributions and intentions. Thank you once again for your thorough review. Respectfully, The Authors. --- Rebuttal Comment 1.1: Comment: The additional numerical results are quite amazing to me, where a single tree of depth 3 provides competitive results with ensemble methods on about 9 datasets. 
I agree that the piecewise linear regression tree used here is more expressive than the trees used in RF and GBDT, but the results are still quite amazing. [A1] uses the same architecture, but requires multiple trees to achieve similar AUC to CatBoost/XGBoost/LightGBM. Besides, the authors claim that each posterior sample of VaRT can yield a different prediction, resulting in an ensemble-like effect. Can you provide more evidence to support this claim? One of the main drawbacks of Bayesian methods is their higher computational complexity. The results on the runtime of VaRT on large datasets are surprising; the authors claim that this is attributed to their vectorized implementation. Does this vectorized implementation benefit from the specific prior design? If so, I think this is good work. [A1] Shi, Yu, Jian Li, and Zhize Li. "Gradient boosting with piece-wise linear regression trees." *Proceedings of the 28th International Joint Conference on Artificial Intelligence*. 2019. --- Reply to Comment 1.1.1: Title: Clarifications on Rebuttal Comment: We want to thank you for your detailed response and engagement with the review process. You brought to our attention a couple of very important points that helped us improve the quality of our work and highlight our contributions. The reference you are pointing us to is both interesting and relevant to our work. It is reassuring to see that piecewise linear trees offer an empirical advantage over regular decision trees. To showcase the ensemble-like effect of our method, we decided to sample several trees from the posterior distribution for the toy data of Figure A (left) and look at their topology. 
We include samples from the posterior distribution below, written as parent → (left child, right child) node indices:

```
Tree 1: 1 → (2, 3); 2 → (4, 5); 3 → (6, 7); 4 → (8, 9); 5 → (10, 11); 9 → (18, 19)
Tree 2: 1 → (2, 3); 3 → (6, 7); 6 → (12, 13); 12 → (24, 25)
Tree 3: 1 → (2, 3); 2 → (4, 5); 3 → (6, 7); 6 → (12, 13); 12 → (24, 25)
Tree 4: 1 → (2, 3); 3 → (6, 7); 6 → (12, 13); 7 → (14, 15); 12 → (24, 25); 15 → (30, 31)
Tree 5: 1 → (2, 3); 2 → (4, 5); 4 → (8, 9); 5 → (10, 11); 10 → (20, 21)
```

We hypothesize that the variation in tree topology, coupled with the stochasticity of node and leaf parameters, contributes to the ensemble-like effect; we also believe this brings some level of smoothness to the posterior predictive mean. This insight is valuable and merits a discussion in the main body of our paper. We intend to incorporate the reference you've shared, alongside a discussion of this ensemble-like phenomenon. Your guidance in this matter is greatly appreciated. Regarding your second point, we acknowledge your accurate observation. A core advantage of our prior design lies in its compatibility with vectorized computations. Parallelizability is facilitated by the fact that both sampling a tree and traversing a data point down to a leaf are the result of a collection of independent computations, thus enabling efficient execution on GPU architectures. This parallelization attribute underscores the rationale behind our utilization of Variational Inference and offers a significant departure from other MCMC-based Bayesian methods. We concede that our initial manuscript may not have highlighted this aspect of our method adequately, and we remain committed to revising our paper to emphasize this pivotal contribution. We believe that your perspectives have greatly contributed to enhancing the quality of our manuscript. Once again, we thank you for your review and expertise and remain open to addressing any other questions you may have. Best, The Authors
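To make the parallelizability point concrete, here is a minimal NumPy sketch (our own illustration, not the actual implementation; the function and variable names are assumptions) that routes a whole batch through a complete soft binary tree at once, since each point's probability of reaching a leaf is a product of independent sigmoid gate values:

```python
import numpy as np

def leaf_probabilities(X, betas):
    """Soft routing for a complete depth-d binary tree (illustrative).
    X: (n, p) data; betas: (2**d - 1, p) gating coefficients, one row
    per internal node in breadth-first order. Returns an (n, 2**d)
    matrix of leaf-membership probabilities. Every step below is a
    batched, independent computation -- the property that makes this
    scheme efficient on GPU architectures."""
    n = X.shape[0]
    d = int(np.log2(betas.shape[0] + 1))
    # Gate value at every internal node for every point, in one matmul.
    gates = 1.0 / (1.0 + np.exp(-X @ betas.T))   # (n, 2**d - 1)
    probs = np.ones((n, 1))                      # prob of reaching root
    for level in range(d):
        start = 2 ** level - 1                   # first node of level
        g = gates[:, start:start + 2 ** level]   # (n, 2**level)
        left, right = probs * g, probs * (1.0 - g)
        # Interleave left/right children to keep breadth-first order.
        probs = np.stack([left, right], axis=2).reshape(n, -1)
    return probs
```

Each row of the returned matrix sums to one, and the whole computation is matrix products and elementwise operations with no per-point branching.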
Summary: This paper introduces a non-parametric Bayesian model taking the form of a stochastic decision tree and proposes to approximate the posterior distribution with variational inference. The prior involves three parts, a prior on the tree structure, a prior on the probabilistic splitting criteria, and a prior on the conditional distribution at each leaf. To conduct variational inference, reparameterization tricks of normal and binary variables are used. Prediction can be made based on the posterior predictive distribution and inference can be made with Bayesian credible intervals. Numerical experiments are supportive, including a comparison with benchmarks on prediction and application to toy datasets as well as real datasets. Strengths: 1. The paper is clear and well written. 2. The proposed method is strong in originality. 3. Numerical experiments are supportive. 4. It addresses the long-lasting problem that Bayesian tree methods (Bayesian CART or BART) are infeasible for larger datasets and their MCMC chains may never really converge. The combination of stochastic decision trees with variational inference suggests a promising direction in handling this problem. Weaknesses: There don't appear to be any major weaknesses to me. The following are some minor ones that could be improved: 1. For Section 4.1, since the task is purely predictive, it would be better to compare with aggregated decision trees or random forest methods as well. Or alternatively, if the comparison is restricted purely to Bayesian models, a Bayesian neural network should be used with parameters inferred either with variational inference or an HMC sampler. 2. It would be nice to compare the posterior distribution approximated by variational inference with the true posterior distribution. 
For instance, with low-dimensional covariates and response, a data-augmented Gibbs sampler (with binary latent variables at each splitting point) along with reversible jumps could also be designed and used to sample from the true posterior distribution. 3. There seem to be too many parameters introduced with gamma, mu, zeta. Some structural formulations such as a cumulative shrinkage prior with just a few prior parameters might be better. Of course, all of these would require lots of additional effort, so as a first paper in this direction the lack of them is not a problem. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Since the data generating model could be viewed as a mixture model, the posterior distribution of the tree also gives a posterior distribution over the clusters of data. Does this posterior distribution over the clusters concentrate around the true clusters when the true data generating model is a mixture model? 2. As mentioned in the paper, the variational regression tree can be generalized to a variational classification tree with a binary response variable. How does it generalize to classification of response variables with more than two levels? On a side note, for doing variational inference instead of Gibbs sampling, why would Polya-Gamma augmentation still be needed? 3. The soft splitting criterion is based on a logistic regression of the covariates, so the split regions are no longer rectangular. While this may be easier for implementation, how does it compare to the case where the soft splitting criteria are all restricted to be parallel to the axes (a hierarchical prior on beta could do this)? Also, for the obtained posterior samples, do the splitting criteria at different nodes tend to be more parallel or more orthogonal? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Not much, see Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your exceptional review of our manuscript. Your insightful feedback has significantly impacted our research, and we are grateful for your valuable insights. Your evaluation not only validates our efforts but also provides us with invaluable perspectives that will undoubtedly shape the trajectory of our paper and future work, regardless of the review outcome. Your observation regarding the comparison with other frequentist methods is well-taken. To address this concern, we conducted additional experiments comparing the performance of VaRT against established ensemble methods like Random Forests, XGBoost, and CatBoost. Furthermore, we expanded our experiments to include two datasets, each containing over 40,000 points, to address scalability concerns raised by other reviewers. We are excited to report that our method exhibited seamless scalability and comparable performance to other boosting algorithms (see Table A). As you rightfully pointed out, we believe methods based on Variational Inference offer a distinctive advantage due to their inherent parallelizability. This is in stark contrast to other Bayesian methods based on Markov Chain Monte Carlo. Your suggestion regarding exploring a data-augmented Gibbs sampler for posterior distribution comparison is insightful. We are currently investigating similar analyses in the literature to gain a better understanding of this approach. If you have any specific references you could suggest, it would greatly aid our exploration. We were also particularly intrigued by your mention of cumulative shrinkage priors. Building on this, we plan to explore the application of cumulative shrinkage priors and their implications for our model in future work. We would greatly appreciate your suggestions on relevant references in this regard! (Were you thinking of hierarchical priors, like horseshoe priors, on the splitting/leaf nodes to induce sparsity?) 
Addressing your specific questions, we conducted experiments on toy data generated from a piecewise continuous function. The data was generated from a uniform mixture of Gaussians centered on the midpoint of each interval. Our results suggest that VaRT correctly "routes" each point to the appropriate node with high probability, as evidenced by our analysis on 100 and 1000 datapoints. This suggests that the posterior concentrates around the "true" clusters. Regarding the use of Pólya-Gamma augmentation, while it may not be strictly necessary, we have encountered cases where Pólya-Gamma random variables have been integrated into Variational Inference schemes to get closed-form updates of the natural gradients. Notably, the paper "Efficient Gaussian Process Classification Using Pólya-Gamma Data Augmentation" by Wenzel et al. discusses a similar application. However, we now recognize that their use is not strictly necessary and will explicitly note this in our paper and acknowledge your insight. For generalizing our method to response variables with more than two levels, we propose incorporating a matrix β ∈ ℝ^(c × d), attaching a categorical distribution to each leaf node as follows $\text{Categorical}(\text{SoftMax}(βX))$, and employing discrete reparameterization techniques to fit the variational family. In response to your question about hierarchical priors and orthogonal splitting criteria, we believe hierarchical priors, such as horseshoe priors on the variances of the β coefficients, could indeed induce sparsity and enhance model interpretability, as they would bias the model towards axis-aligned splits. We believe this is an exciting area for future work! Currently our method does not have any particular bias for this, and most of the splits end up being non-axis-aligned. Thank you once again for your detailed review! Your feedback has undoubtedly enriched our understanding of our work. 
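For concreteness, the multi-class leaf extension described above, $\text{Categorical}(\text{SoftMax}(βx))$ with β ∈ ℝ^(c × d), could be sketched as follows (a hypothetical illustration; the function name and the max-subtraction stabilization are our own choices, not from the paper):

```python
import numpy as np

def multiclass_leaf_probs(x, beta):
    """Class probabilities at a single leaf for the proposed multi-class
    extension: SoftMax(beta @ x), with beta of shape (c, d) and x of
    shape (d,). Illustrative sketch only."""
    logits = beta @ x
    z = np.exp(logits - logits.max())  # subtract max for numerical stability
    return z / z.sum()
```

Sampling the leaf's class label then amounts to drawing from a categorical distribution with these probabilities, which is compatible with discrete reparameterization techniques.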
Regardless of the review outcome, we intend to acknowledge your valuable contributions in the acknowledgments section of our paper. We'll remain available to answer any other questions you may have during the discussion period. Sincerely, The Authors --- Rebuttal Comment 1.1: Comment: Thank you for the further explanations and responses to several questions. I will keep the score as is but I now have a higher confidence for the score.
Summary: The paper develops a new generating process for soft decision trees. The proposed process is then adopted as a prior model in a Bayesian nonparametric regression setting, and a novel variational inference algorithm is developed using the truncated version of the tree generating process as the variational distribution. Experiments suggest that the proposed method has comparable performance with BART in real data sets and causal inference applications. Strengths: * A good addition to the literature of Bayesian soft decision tree models. * The soft decision tree generating process and the variational inference algorithm are technically sound and novel. * Competitive performance with BART. Weaknesses: * Some important baselines are missing. The proposed method is not compared with other soft decision tree based models such as soft BART (Linero and Yang, 2018; Linero, 2022) and the frequentist version of soft decision trees. * Experiments are only based on some default hyperparameters for both VaRT and the baselines; hyperparameter sensitivity is not investigated. * The paper can benefit from careful proofreading. For instance, a few notations are used without any definition. Formatting and presentation of the paper can also be improved. Reference: [1] Linero, A.R. and Yang, Y., 2018. Bayesian regression tree ensembles that adapt to smoothness and sparsity. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(5), pp.1087-1110. [2] Linero A.R. 2022. SoftBart: Soft Bayesian Additive Regression Trees. arXiv preprint arXiv:2210.16375. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I think it would be beneficial to provide more comparison between the proposed VaRT and conventional Bayesian decision tree models like BART and Bayesian CART, such as: 1.1 One well-known advantage of BART is that it can perform variable selection to some extent. Does the proposed model have a similar capability? 
1.2 What will be the computation benefits of the proposed method compared with the MCMC approach for conventional Bayesian decision tree models? 1.3 In what scenario will the proposed VaRT outperform/underperform conventional Bayesian decision tree models? 2. Questions/comments on the experiments: 2.1 Details of the experiments should be included. For example, is the RMSE in Table 1 computed on the test split? If so, how do you obtain the training/test splits from the data sets? 2.2 In Table 1, I think it will be more fair to also compare with the 0.9 and 0.1 posterior percentiles of BART. 2.3 Very limited discussion on the experiment results, especially for the UCI datasets. 3. Minor issues: 3.1 In (3), is the denominator equal to 1? If so, the denominator can be trivially omitted. 3.2 In Line 74, it is more rigorous to write "$s_k \sim \text{Bernoulli}(\gamma_k)$ independently for $k = 1, 2, \ldots$". 3.3 Citation in Line 191 does not seem correct. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Some limitations and future work are discussed in the paper. I don't foresee any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude for taking the time to review our manuscript. Your insightful comments and suggestions have been instrumental in guiding our work towards refinement, and we appreciate your thorough evaluation. Regarding your recommendation to evaluate our method against these models and other soft decision tree-based approaches, we want to clarify our approach. In the interest of providing a thorough evaluation within our current constraints, we chose to focus on established boosting methods like XGBoost, Random Forest, and CatBoost when conducting new experiments. The comparisons we've conducted in this rebuttal with widely recognized boosting methods offer insights into the strengths and performance of our approach. We believe these comparisons contribute valuably to our research and the broader field of Bayesian nonparametric regression. We fully understand your concern about the comparability of our method and acknowledge the challenges associated with implementing additional benchmarks and locating or implementing specific Python versions of certain models in such a short time-frame. Despite these challenges, we remain committed to addressing your valuable feedback in a meaningful way. In our revised manuscript, we will incorporate a detailed discussion of soft BART models and other relevant soft decision tree-based models. This will underscore the significance of these models in the context of Bayesian soft decision trees and contribute to a comprehensive understanding of the research landscape. Addressing your specific questions, we do believe our model possesses the capacity for variable selection. We believe this could be achieved by incorporating global shrinkage priors into the splitting and prediction rules of VaRT. This clever observation was brought to our attention by reviewer Ya4w. 
We provide further clarification on this issue in the "global" rebuttal section but remain open to answering any lingering questions. Reviewer 2ow5 suggested that we run experiments on larger datasets. We took their advice and extended our evaluation to include datasets comprising more than 40,000 data points. Notably, our method demonstrated seamless scalability to handle these larger datasets, owing to its inherent parallelizability. The computational efficiency of our algorithm was evident, with fitting times spanning a range of one to three minutes. It's worth noting the significant contrast with Bayesian MCMC methods like BART, which are constrained by their sequential nature and lack support for parallelization. While we acknowledge the existence of rapid frequentist boosting algorithms such as XGBoost and CatBoost, we recognize the potential for further exploration in this direction. We are excited about the prospect of future advancements in this area and acknowledge that the initial implementation of our algorithm might not outpace these established algorithms in terms of speed. Moreover, we hope to have addressed your concerns on the experiments in the "global" rebuttal section. In particular, we have included a thorough discussion of our original and new experiments. Please feel free to reach out with any questions; we are open to any feedback you may have. We intend to incorporate these discussions into our paper. Lastly, we appreciate your keen attention to detail, which led to the identification of the denominator's equivalence to 1 in equation (3). We have diligently revised our manuscript to reflect this clarification. Your suggestions in points 3.2 and 3.3 have also contributed to improving the clarity of our paper, and we have successfully incorporated these changes. Once again, we extend our heartfelt thanks for your invaluable feedback. 
Your expertise and insights have been instrumental in shaping the trajectory of our work, and we are dedicated to refining our manuscript in accordance with your suggestions. Respectfully, The Authors. --- Rebuttal Comment 1.1: Comment: I thank the authors for the additional numerical results and their detailed response. The additional results show very competitive performance and demonstrate appealing computational benefits, which definitely strengthen the paper. I lean towards accepting this work. My conjecture is that the “soft” nature in the proposed method allows it to outperform conventional tree ensembles when the true mean function is relatively smooth. I encourage the authors to include comparisons with soft BART (open source R implementation available) and other tree ensemble methods with a larger number of trees (which can produce smoother prediction) in the revised version. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful response and your engagement with the review process. Your observation regarding the potential advantage of our method in cases where the true mean function exhibits smoothness is quite astute. We found your conjecture intriguing, and your suggestion aligns well with the trends observed in our supplementary experiments. These initial findings are certainly encouraging for us to dive deeper into this hypothesis and explore its implications more comprehensively in our research. Once again, thank you for your expertise and thoughtful evaluation. Your insights have been incredibly helpful in strengthening the results of our paper. Best, The Authors
Summary: The authors propose to use variational inference to train regression trees. The innovation lies in using variational inference as the optimization process rather than Markov Chain Monte Carlo. The authors demonstrate this method on 18 ML UCI problems and some causal inference and toy problems. Strengths: The paper is clearly written and specific about its contribution. The figures are generally good, although they could use better captions (especially Figure 1). Weaknesses: It's hard for me to judge the novelty and potential impact of this work. There are so many decision tree boosting algorithms out there that are blazingly fast (e.g. XGBoost, Catboost), whereas this method takes a few minutes to train. There could be an advantage if the resultant model was more interpretable than others, but that isn't really demonstrated. It is not clear to this reviewer whether BART is really a go-to, off the shelf ML algorithm like XGBoost has become, but the authors focus their side by side comparisons on this algorithm (which makes sense since it is perhaps the closest philosophically). In other words, the comparison would be stronger if a candidate boosted tree algorithm was included in the comparisons. MLPs are known to underperform most tree ensembles on UCI. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors mention depth 5-7; what happens at smaller depths? A smaller tree would greatly aid interpretability. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: there is no limitations section, although the authors discuss limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you've dedicated to reviewing our manuscript. Your thorough evaluation offers valuable insights that will undoubtedly enhance the quality and impact of this work. Regarding the concern on training time, we acknowledge that there are blazingly fast boosting algorithms available in the field. Your observation regarding the underperformance of MLPs on tabular datasets is also well-founded. To address these concerns and establish the feasibility and performance of our method across a diverse range of scenarios, we conducted a series of additional experiments. These experiments involved comparing the performance of our approach to well-established boosting algorithms, namely XGBoost, CatBoost, and Random Forest. We are excited to share that the performance of VaRT remained competitive and, in some cases, even outperformed these boosting algorithms across a diverse set of situations. We have included the results of these new experiments in Table A of the attached PDF for your reference and want to thank you for pointing out this gap in our evaluation. To showcase the scalability of our method, we also ran experiments on two new datasets containing more than 40,000 datapoints. Despite the increased data volume, the training time remained within the order of one to three minutes (see attached pdf on "global" rebuttal). This notable efficiency is attributed to the parallelizability of our approach, which offers a clear computational advantage over other Bayesian algorithms reliant on Markov Chain Monte Carlo. We hope that these additional experiments and insights provide a more comprehensive view of the potential impact of our work. Thank you once again for your valuable feedback and insights. Your suggestions have been instrumental in shaping our approach to further align with the expectations and demands of the field. We remain open to answering any other questions. Best, The Authors. 
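The head-to-head comparison described in this rebuttal can be sketched in a few lines of scikit-learn; this is a generic illustration of the protocol only, with `RandomForestRegressor` and `GradientBoostingRegressor` standing in for the Random Forest/XGBoost/CatBoost baselines and a synthetic dataset standing in for the UCI tasks — it is not the authors' code.

```python
# Generic sketch of the ensemble-baseline comparison described in the
# rebuttal: fit several tree ensembles and compare held-out RMSE.
# The synthetic dataset and the specific models are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=600, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0
)

baselines = {
    "RandomForest": RandomForestRegressor(n_estimators=50, random_state=0),
    "GradientBoosting": GradientBoostingRegressor(n_estimators=50, random_state=0),
}
rmse = {}
for name, model in baselines.items():
    model.fit(X_train, y_train)
    rmse[name] = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(rmse)
```

Swapping in `xgboost.XGBRegressor` or `catboost.CatBoostRegressor` would follow the same fit/predict pattern.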
--- Rebuttal Comment 1.1: Comment: The additional comparisons to ensemble tree algorithms strengthen the paper. I encourage the authors to integrate this into the main text. I like the additional figure of model interpretation as well. I think this should be accepted.
Rebuttal 1: Rebuttal: We extend our heartfelt gratitude to the reviewers for their insightful evaluations of our paper. The feedback we received has played a pivotal role in the evolution of our work, and we have approached our revisions with utmost seriousness and dedication. One area that emerged as a significant opportunity for improvement was the lack of experimental comparison with frequentist tree-based boosting methods, a concern raised by reviewers AUTX, Ya4w, and MwKc. As emphasized by reviewer AUTX, our work draws philosophical inspiration from BART, which initially led us to benchmark our algorithm against it. However, we now recognize the necessity of providing a more inclusive comparison. In light of this, we have expanded our experiments to encompass three well-regarded boosting methods: Random Forests, XGBoost, and CatBoost. We are excited to share that even with the introduction of these additional benchmarks, our algorithm remains competitive with these state-of-the-art boosting methods. The results of these experiments are showcased in Table A of the attached pdf. In the new experiments of Table A, we compared an ensemble of fifty trees of BART, Random Forests, XGBoost, and CatBoost to a single tree of our algorithm at depths of 3, 5, 7, and 10. Because of our commitment to transparency and rigorous evaluation, we have decided to report the results across all four runs of VaRT, as opposed to just reporting the best one. By adopting this approach, we aim to provide a thorough representation of VaRT's performance across various configurations, showcasing its consistency and versatility in diverse scenarios. Our proposed method showcases notable strengths in comparison to established boosting methods as evidenced by the RMSE values presented in Table A. 
Across a diverse range of datasets, VaRT consistently demonstrates competitive performance, often outperforming or closely aligning with well-regarded boosting techniques such as CatBoost, RandomForest, and XGBoost. The versatility and effectiveness of VaRT's single-tree approach are evident from its ability to yield compelling results on various datasets, highlighting its potential to provide accurate predictions in different problem domains. While our experiments were comprehensive, a couple of reviewers rightfully pointed out a lack of depth in our result discussions. To provide context, we employed a 90%/10% train-test split on 18 standard regression datasets from the UCI Machine Learning Repository. For fair comparison, features were standardized within (0,1) and (-1,1) ranges for Table 1 and Table A, respectively. The reported RMSE values in both tables are averaged from five test set runs, accompanied by their standard deviations. Closer examination of Tables 1 and A reveals that noticeable performance challenges for VaRT arise in the 'airfoil,' 'autos,' and 'sml' datasets. This prompts us to scrutinize the dataset attributes influencing VaRT's performance in these cases. As we analyzed the experimental results, we observed variations in VaRT's performance across different datasets. While the majority of the datasets demonstrated strong performance, we noted that the 'autos' and 'airfoil' datasets posed unique challenges. Moreover, it is worth pointing out that the 'sml' dataset constitutes a time-series task, which introduces complexities that may not align optimally with VaRT's current framework. These challenges have highlighted specific areas that require further investigation and improvement. In comparison to other tree-based Bayesian algorithms, VaRT exhibits distinct advantages that enhance its practical utility.
Notably, VaRT's framework is inherently parallelizable, a feature that sets it apart from MCMC-based methods like BART, which are inherently sequential and lack this parallelization potential. This parallelizability not only accelerates the model training process but also empowers efficient exploration of large datasets, making VaRT particularly well-suited for modern computation demands. Furthermore, VaRT leverages gradient-based optimization techniques, making it compatible with widely used automatic differentiation engines such as PyTorch. We corroborated this by running new experiments on datasets with 40k+ datapoints where training took < 3 mins (see Table B on the attached pdf). Reviewer Ya4w gave us a couple of phenomenal suggestions that have greatly enriched our perspective on our work. One suggestion that particularly piqued our interest was the incorporation of cumulative shrinkage priors into the splitting and regression parameters. We believe this has the potential to significantly enhance the performance and interpretability of VaRT. As these priors are known for inducing sparsity in the posterior distribution, we believe that their addition could also address a question raised by reviewer MwKc on variable selection, as sparse solutions should lead to a model that uses only a subset of the features. We believe this to be a very promising and exciting avenue for future work. Moreover, reviewer Ya4w's insight into viewing our model as a mixture model has been enlightening. We conducted experiments using simulated data from a mixture of Gaussians and found that the posterior distribution of our model indeed concentrates around the "true" clusters of the underlying data-generating model (see attached pdf). We hope our interpretation of this insight aligns with reviewer Ya4w's intention, and we remain open to addressing any questions.
In conclusion, we extend our heartfelt appreciation to the reviewers for their invaluable insights and feedback that have significantly contributed to refining our paper. Your comments have been instrumental in shaping the trajectory of our research and enhancing its quality. In this regard, we will provide a detailed response to each of these points in the "individual" rebuttal section. We sincerely thank you for your time, expertise, and thoughtful guidance. Pdf: /pdf/b2055b75f4c4bf8614dddcdc2af8888f90da08ac.pdf
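The evaluation protocol the rebuttal describes (90%/10% split, features scaled to a fixed range, RMSE averaged over five runs with standard deviation) could be sketched roughly as follows; `Ridge` is an arbitrary placeholder model and the synthetic data is an assumption, since VaRT itself is not reproduced here.

```python
# Hypothetical sketch of the evaluation protocol described above:
# 90%/10% train-test split, features scaled to (0, 1), and RMSE
# reported as mean +/- std over five runs with different splits.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge  # placeholder for VaRT / the baselines
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X, y = make_regression(n_samples=500, n_features=8, noise=3.0, random_state=0)

rmses = []
for seed in range(5):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=seed)
    scaler = MinMaxScaler(feature_range=(0, 1)).fit(X_tr)
    model = Ridge().fit(scaler.transform(X_tr), y_tr)
    pred = model.predict(scaler.transform(X_te))
    rmses.append(mean_squared_error(y_te, pred) ** 0.5)

print(f"RMSE: {np.mean(rmses):.3f} +/- {np.std(rmses):.3f}")
```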
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Is Heterogeneity Notorious? Taming Heterogeneity to Handle Test-Time Shift in Federated Learning
Accept (poster)
Summary: The authors propose a scheme to handle the test-time shift at each FL client. To achieve this, during training, contrastive learning is adopted to extract invariant information within the same class (class-level invariant information) at each client. The updated models at the clients are also aggregated via FedAvg. During testing, at each client, the model is further fine-tuned using the test dataset and the corresponding unsupervised loss function. Strengths: 1. The paper is generally well written and easy to understand. The figures also describe the high-level concepts of the proposed methods well. 2. This paper proposes strategies for both training and testing during FL, to handle the test-time shift issue, which is practical. 3. The authors consider various datasets for experiments including DomainNet. Weaknesses: 1. Section 4.1.1: First of all, in 4.1.1, the authors are trying to extract invariant information within the same class, which shares the same concept with domain generalization that learns domain-invariant features during training. However, comparison with these methods is lacking. For example, one can apply any domain generalization scheme for local training to learn domain-invariant features, and then aggregate the models according to 4.1.2. Note that in the domain generalization literature, “domain shift” generally includes covariate shift, attribute shift, and the domain shift the authors mentioned in this paper. Existing domain generalization methods can generally handle all these shifts. 2. Related to the comment above, there are many works that focus on adopting contrastive loss for domain generalization. It is not clear what the authors’ novelty is compared to the existing works on contrastive learning loss in Section 4.1.1. 3. Section 4.1.2: This part is just about FedAvg. Hence, from my point of view, Section 4.1 is about performing updates locally to extract invariant information, and then aggregating the models.
However, I do not find the novelty compared to applying existing works in Section 4.1. 4. Section 4.2: During testing, a test dataset needs to be available at each client for fine-tuning. This may not be practical when test samples arrive one-by-one. Moreover, focusing on a specific client (and given the well-trained models), what is the novelty of Section 4.2 compared to other test-time adaptation methods? 5. Experiments: For experiments, as mentioned in my first comment above, important baselines are missing. In particular, the authors’ scheme is the only method that simultaneously adopts (i) generalization during training and (ii) test-time adaptation at inference. To validate the effectiveness of the idea in Section 4.1.1, the authors need to consider schemes that apply domain generalization methods instead of Section 4.1.1, combine them with Section 4.2, and compare them with FedICON. Similarly, to validate the effectiveness of the idea in Section 4.2, the authors may combine other test-time methods (e.g., Tent, T3A) with the idea in Section 4.1.1, and compare the scheme with FedICON. 6. Experimental setup: It is not clear how data are distributed across clients. Moreover, during testing, does the proposed method assume that the full test dataset is accessible at each client to fine-tune the model? Overall, this paper can be viewed as a combination of domain generalization learning (or contrastive learning) and test-time adaptation in FL, but I find the novelty of the paper to be relatively weak, since there are no clear descriptions showing the difference compared to various existing works and there are no experiments comparing with the corresponding baselines. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Another question is: what happens if the classifier is not personalized but shared by aggregation? Moreover, what is the experimental setup of Fig. 1(b) showing the advantage of heterogeneous data compared to a homogeneous setup?
Regarding other questions, please refer to the weakness above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
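For context on the aggregation step the summary and Weakness 3 refer to: FedAvg simply replaces the global parameters with a (typically data-size-weighted) average of the client updates. A minimal sketch, with plain numpy vectors standing in for client models and purely illustrative numbers:

```python
# Minimal FedAvg aggregation sketch: the server computes the
# data-size-weighted average of per-client parameter vectors.
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_params = fedavg(clients, client_sizes=[100, 300])
print(global_params)  # weighted toward the larger client
```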
Rebuttal 1: Rebuttal: Thanks for all the valuable comments and questions. **Comparison with the combination of DG and FL.** We used three DG-originated test-time adaptation (TTA) methods as baselines because they are more aligned with our setting and can be easily implemented in an FL framework. Note that not all existing DG methods can fit into FL settings and solve the test-time shift problem appropriately. **Concept confusion issue.** We would like to explain that the concept of “domain shift” in our paper refers to the more general cases of feature-level shifts compared with the covariate and attribute shifts, which can be seen as special cases of the general domain shift. In general domain shift cases, the training and test data are from different sub-datasets of the multi-domain datasets. We will add a clarification in the final version. We take the first step to comprehensively investigate this problem in FL. **Limited novelty compared with existing DG methods using contrastive loss.** We have unique novelty compared with existing contrastive learning-based DG methods in terms of **target problem** and **technical design**. *First*, we focus on the test-time shift problem in FL rather than the DG problem where the source domain and the target domain are usually clearly pinpointed. There are some unique challenges to solve in FL, e.g., how local training is conducted, how the knowledge across clients should be aligned, and how privacy is guaranteed. *Second*, existing works [1,2] usually use contrastive learning as a regularization term to the original cross-entropy loss instead of using a pure contrastive learning-based framework to optimize the model so as to resolve the test-time shift or domain shift issues. But in our method, we establish the framework of FedICON based on the representation learning scheme. We focus more on extracting and sharing the invariance knowledge by contrastive learning in FL setting. 
[1] Self-supervised Contrastive Regularization for Domain Generalization. In ICCV 2021. [2] Proxy-based contrastive learning for domain generalization. In CVPR 2022. **Novelty of Sec 4.1.** Compared to existing works, Sec 4.1 has the following novelty. First, the model architectures are different. We utilize the representation learning framework to solve the problem, while most FL methods train the whole model and do not take the intermediate output into consideration. Second, the loss functions are different. Most existing FL methods utilize cross-entropy as their loss function, while our method computes the loss based on the output of the feature encoder in a contrastive learning manner. Third, the communication protocols are different. In FL, some methods average all the learnable parameters at the server, while others may average part of the model parameters to achieve a specific goal, e.g., personalization. The selection and design of the communication protocol is an important component of the FL methodology. It is non-trivial to specify the concrete global update/sharing strategy in the proposed method. **Test samples arrive one-by-one.** Please refer to **C1** in the general response. **Novelty of Sec 4.2 compared to other TTA methods.** Sec 4.2 is proposed to keep the losses of the training and test phases unified and make the whole process consistent. To the best of our knowledge, we have not seen any existing works solving the test-time shift problem in this way, not to mention formulating the problem and developing a solution for federated frameworks. **Missing baselines.** We have considered three DG-originated test-time adaptation (TTA) methods and implemented them in the vanilla federated learning framework to validate the performance of our method. However, it is hard to directly combine TTA methods with Sec 4.1.1 because most of them (e.g., Tent, EATA, T3A) require the output of the classifier head, which is not involved in Sec 4.1.1.
Also, it is hard to directly combine Sec 4.2 with DG methods, because Sec 4.2 is kept in a unified form with the loss in Sec 4.1.1. If the training loss is not in a pure contrastive learning manner, the operation in Sec 4.2 may lead to severe performance degradation or divergence. Note that we focus on the test-time shift problem in FL and compare with ten baselines that can fit into our setting. We will add more discussion on the selection of baselines in the final version. **Data setup details.** Since we followed the benchmark setting in [1,2] as mentioned in the paper, we just illustrate how the test-time shifts are simulated and the values of hyperparameters in Appendix A.2. We will add more details in the final version to make the data setup clearer. [1] Federated learning on non-iid features via local batch normalization. In ICLR 2021. [2] Test-time robust personalization for federated learning. In ICLR 2023. **Full test set requirement.** Our method does not require access to the full test dataset. Please refer to **C1** in the general response for details. **The case where the classifier is not personalized but shared.** We compared several variants for classifier training on top of the frozen feature encoder and selected the case achieving the best performance. We chose not to present the results in the main text to 1) help isolate the effects of the proposed key components and the classifier part; 2) emphasize that our method is open to different kinds of additional classifier training strategies according to the specific data setting. We add the experimental results to Table 1 in the uploaded file and will provide a sufficient illustration in the final version. We hope the new results can efficiently address your concerns about the classifier alignment. **Detailed Setting of Figure 1(b).** We provided the detailed specification of the setting of Fig. 1(b) in Appendix A.1, titled “Detailed Setting of Figure 1(b)”, as shown below.
We will add a brief specification in the main text in the final version. --- Rebuttal Comment 1.1: Comment: I would like to appreciate the authors for the detailed response. At the same time, I am sorry to say that I am still leaning against rejection, especially due to the experiments. Although the authors mention that they focus on test-time shift instead of DG, they both share some similar philosophy in finding the invariant knowledge during training to gain robustness during testing. And if my understanding is correct, the authors are actually doing this in Sec. 4.1. More specifically, the authors are (i) extracting invariant information during training and (ii) conducting test-time adaptation at inference. To reiterate my comment, the authors’ scheme is the only method that simultaneously adopts (i) generalization during training and (ii) test-time adaptation at inference. And thank you for letting me know that Sec. 4.1 and Sec. 4.2 are not compatible with other schemes. However, the baselines are currently only focusing either training or testing. For example, Tent, T3A only focus on the testing-phase. Can their performance get improved by adopting DG methods during training, to better learn invariant features/characteristics? (There are many DG methods that are applicable to local updates for FL clients to learn invariant knowledge). At the same time, the 6 baselines on FL only focus on the training stage. Can their performance get improved by doing test-time adaptation during inference? A natural question that people might be interested in is, can existing combinations perform better? If not, what makes the authors’ scheme to perform the best? I believe these results as well as the comprehensive discussions should have been included in the original submission. Without these comprehensive analysis, I feel that the current results are less surprising, especially considering the bar of NeurIPS. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank you for the prompt reply. We would like to clarify that we focus on the test-time shift problem in the **federated learning** setting. It is essential and reasonable to compare the proposed method with existing typical generic FL and personalized FL algorithms, which perform better on either generalized or personalized data distribution and can be implemented as a direct solution for our target problem. We admit there are many possible combinations among 1) the training-phase DG methods, 2) the test-phase TTA methods, and 3) different generic FL and personalized FL methods. However, it is non-trivial to develop an algorithm that not only fits the test-time shift scenarios in FL but also outperforms the existing state-of-the-art method, FedTHE [3], on various test-time shift settings. By providing a new state-of-the-art solution, we sincerely hope our paper can shed light on exploring the utilization of unique properties of FL systems compared with centralized systems, e.g., the inherent data heterogeneity across clients, to help alleviate the practical test-time shift problems in FL. [1] In search of lost domain generalization. In ICLR 2021. [2] Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization. In NeurIPS 2021. [3] Test-time robust personalization for federated learning. In ICLR 2023.
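One concrete way to read the rebuttal's point about communication protocols (averaging only part of the model, e.g. the feature extractor, while keeping the classifier head personalized) is the following sketch; the dict layout and the names "encoder" and "head" are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of partial parameter sharing: only "encoder" parameters
# are averaged across clients, while each client keeps its own "head".
# Dicts of numpy arrays stand in for the real client models.
import numpy as np

def share_encoders(client_models):
    """Average 'encoder' parameters across clients; keep 'head' local."""
    avg_encoder = np.mean([m["encoder"] for m in client_models], axis=0)
    return [{"encoder": avg_encoder.copy(), "head": m["head"]} for m in client_models]

clients = [
    {"encoder": np.array([0.0, 2.0]), "head": np.array([1.0])},
    {"encoder": np.array([2.0, 4.0]), "head": np.array([-1.0])},
]
updated = share_encoders(clients)
print(updated[0]["encoder"])  # [1. 3.] -- shared encoder
print(updated[0]["head"], updated[1]["head"])  # heads stay personalized
```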
Summary: This paper proposes FedICON, which uses inter-client heterogeneity to handle intra-client heterogeneity. During training, FedICON uses contrastive learning locally to extract invariant class-conditional information, and performs global invariance sharing under inter-client heterogeneity. During testing, the feature extractor is adapted with contrastive learning. Extensive experiments validate the effectiveness of FedICON. Strengths: - The paper makes clear definitions of inter-client and intra-client heterogeneities. Meanwhile, it is insightful to propose that inter-client heterogeneity in FL can be used to tackle intra-client heterogeneity. - FedICON has good performance on a variety of experimental settings. In particular, the authors tried different types of heterogeneities. Weaknesses: - Global invariance sharing. It is unclear why sharing the model parameters of the feature extractor can achieve global invariance sharing. Since FedICON does not compare representations from different clients, FedICON only guarantees local invariance on each client, but not global invariance. For example, consider a binary classification problem: a feature extractor might map positive/negative examples on client 0 to (0, 1), (0, 3), and map positive/negative examples on client 1 to (0, -2), (0, -4). In this way, although all positive samples on client 0 are mapped to the same representation (0, 1), positive samples on clients 0 and 1 are mapped to different representations (0, 1) and (0, -2). - Since this paper focuses on how inter-client heterogeneity can help tackle intra-client heterogeneity, I believe the authors may consider emphasizing the type of distribution shifts for each heterogeneity in the main text, instead of in the supplementary material. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Could you provide more information about whether FedICON learns local and global invariances?
E.g., by numerical results of the variance of representations, or visualization (not required since it could be hard during rebuttal) - Other questions that I am interested in: - In line 190, the authors mentioned that they generate two random augmented views for each sample. Usually this is for constructing positive pairs for unlabelled data (e.g., in self-supervised learning). However, here the setting is supervised. Also, in equations (3) and (4) it seems the augmented data is not fed into the feature encoder. I am confused about whether the augmentation is used during training. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
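For background on the construction the reviewer asks about: a supervised contrastive loss builds positive pairs from same-class samples in the (augmented) batch rather than from augmentation alone. The following numpy sketch follows the normalization of the supervised contrastive loss of Khosla et al. (NeurIPS 2020), which the authors cite; it is an illustration, not the paper's Eq. 3/4.

```python
# Illustrative supervised contrastive loss: for each anchor, positives
# P(x) are other same-class samples in the batch, and the denominator
# ranges over all other samples A(x). Not the paper's actual code.
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = np.exp(z @ z.T / tau)                        # pairwise exp-similarities
    np.fill_diagonal(sim, 0.0)                         # exclude self-contrast
    loss = 0.0
    for i in range(len(z)):
        pos = (labels == labels[i])
        pos[i] = False                                 # P(x): same class, not self
        if not pos.any():
            continue
        log_prob = np.log(sim[i, pos] / sim[i].sum())  # denominator over A(x)
        loss += -log_prob.mean()                       # average over positives
    return loss / len(z)

# Toy batch: two classes, two (e.g. augmented) views per class.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(supcon_loss(z, y))
```

In this supervised form, the label information guarantees at least one positive per anchor even without augmentation, which is the point the rebuttal below makes.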
Rebuttal 1: Rebuttal: **The reason why global invariance sharing can work.** Thanks for the valuable comments. There might be some misunderstanding regarding the global invariance sharing part. There is no actual global invariance extracted among participating clients. The shared global invariance in the paper refers to the invariance encoding ability underlying the model parameters. We claim that sharing the model parameters of the feature extractor can achieve global invariance sharing because the parameter sharing process broadcasts the invariant encoding knowledge acquired during local training and hence mutually boosts invariance extraction globally. As for the binary classification example, after aligning the model parameters of client 0 and client 1, it is highly probable that the positive samples on them are mapped to the same representation, e.g., (0, -0.5), even if there is a certain degree of deviation between their data. **Emphasizing heterogeneity types in the main text instead of supplementary material.** Thanks for the valuable comments. To make it convenient for the readers to capture key information about the heterogeneity setup, we will move the specification of several distribution shifts (inter- and intra-heterogeneity) to the main text in the final version. Due to the space limit, we still leave some detailed specifications of each feature-level shift, i.e., the number of clients, the referenced previous works, and the hyperparameters for data setup, to Appendix A.2, but we provide a summary of them in Sec 5.1 to increase the readability. **More information about the learned local and global invariance.** Thanks for the valuable comments and questions. In Table 5 of Sec 5.3, we show the performance improvement achieved by the local invariance extraction and global invariance sharing components. Furthermore, we show how the local and global invariance are learned in the following.
In the Table below, we report the average standard deviation of representations within the same class in a client and across client, respectively. The experiments are run on Digit-5 dataset under covariate test-time shift. A lower standard deviation means that the representations within a class are much more similar. It can be seen that the representation variance of FedICON is lower than those in other methods, illustrating the ability of FedICON to extract and share invariance and hence alleviate the test-shift problem. |Method|In A Client|Across Clients| |-|-|-| |Local|0.036|0.079| |FedAvg|0.045|0.064| |FedRep|0.041|0.065| |FedICON(Ours)|0.024|0.058| We hope these new results can efficiently address your concerns about whether FedICON extracts local invariance and acquires global invariance encoding ability in our proposed method. We will add the complete results in the final version. Also, it would be beneficial to use some visualization techniques to present this point in a more indicative manner. We will spend some time on visualization and present the results in the final version. **Whether the augmentation is used during training.** Thanks for the valuable comments and questions. As mentioned in line 190, given an input batch of data, we first apply data augmentation to obtain two random augmented views of each sample, which doubles the size of the input dataset. Then, for each sample $\mathbf{x}$, we conduct Eq. 3 to obtain its feature embedding and compute the supervised contrastive loss in Eq. 4. It should be noted that both the original data and the augmented data are involved in $P(\mathbf{x})$ and $A(\mathbf{x})$ and fed into the feature encoder so that they contribute to the model training process. Some previous works also construct positive pairs for contrastive learning in the supervised setting [1,2,3], which ensures the existence of at least one positive sample within the input data batch when contrasting the anchor $\mathbf{x}$. 
We will add more explanations to make it clearer in the revised version. [1] Supervised contrastive learning. In NeurIPS 2020. [2] Targeted supervised contrastive learning for long-tailed recognition. In CVPR 2022. [3] Federated learning from pre-trained models: a contrastive learning approach. In NeurIPS 2022. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks a lot for your rebuttal. Your experiments regarding the learned local and global invariance show that your proposed FedICON can reduce both the intra-client and inter-client variance, which answers my question 1, thanks! After reading your rebuttal and re-reading the paper, I am still very confused regarding my question of weakness 1. As stated in the abstract (line 10), the purpose of this paper is to use “inter-client heterogeneity in handling intra-client heterogeneity”. However, in the algorithm design, - In 4.1.1, each client’s local training extract local invariance. I believe it is expected that the representation should be “invariant” to (1) augmentation and (2) intra-class natural variance. - In 4.1.2, the parameter of feature extractor is shared across clients. I believe it is only expected that the representation will be “invariant” to the aforementioned (1) and (2) on all clients, in other words, the union of random augmentations and intra-class natural variance. I am very confused how inter-client heterogeneity is exploited in the algorithm. It seems to me that FedICON only use the union of intra-client variance. The only module in FedICON regarding multiple client is the feature extractor parameter averaging, which is also used in FedAvg (although FedAvg averages all the parameters). 
The statement of “it is of great probability that the positive samples on them are mapped to the same representation, e.g., (0, -0.5) even if there is a certain degree of deviation between their data” lacks evidence, and I highly disagree with it: if simply sharing parameter can achieve invariance, what is the purpose of domain adaptation and generalization algorithms? I recognize that FedICON can achieve great performance across datasets and distribution shifts. However, I believe it is also important to figure out “why FedICON works”, and whether it matches with your motivation of using inter-client heterogeneity. Here are some suggestions for ablation study: 1. Is inter-client heterogeneity really exploited? You may consider running experiments with FedICON under both IID partition and non-IID partition, training to a same accuracy (to avoid optimization challenge under non-IIDness), and testing on shifted data. If inter-client heterogeneity is really exploited, FedICON under non-IID client distribution should have higher accuracy. 2. Is the high performance a result of data augmentation? I recognize that data augmentation is widely used. However, we know that using data augmentation during training can obviously improve the robustness of model to test-time shifts (especially when augmentation and distribution shifts share many similarities). When comparing FedICON to baselines, I notice that baseline methods do not use data augmentation (correct me if I am wrong), which is unfair to baselines since augmentation can be easily incorporated into them. It may be more meaningful if you can (1) adding data augmentation to baselines, or (2) removing data augmentation from FedICON. I believe it will be a fairer comparison. I apologize for proposing new experiments during the discussion. However I do believe that it is very important to test whether the proposed algorithm matches with you motivation. 
You don’t need to follow my suggestion exactly, as long as my question of “whether and how inter-client heterogeneity is exploited” is answered. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the reply. We are glad to hear that part of your concern about the simultaneously learned local and global invariance is alleviated. As for your further concern about how inter-client heterogeneity is exploited to solve the intra-client heterogeneity problem, we hope the following explanation and experimental results can help address it. **Whether the inter-client heterogeneity is really exploited.** We agree with your suggestion of running experiments under both IID and non-IID partitions to verify the positive effect of inter-client heterogeneity. In fact, we have conducted motivating experiments that consider both the IID and non-IID partitions in **Fig. 1(b)**. **Standalone** refers to the case where each client owns the same dataset from Digit-5 (IID partition), while **Inter-Client Heter** refers to the case where each client owns a different dataset from Digit-5 (non-IID partition). Usually, it is hard to obtain the same training accuracy under different data settings because of objective inconsistency [1]. A fairer way is to fix the number of communication rounds and compare the accuracy on the shifted test set. *The results show that inter-client heterogeneity can truly improve the ability of clients to handle test-time shift problems compared to training on homogeneous data.* Based on this observation, we further developed our method FedICON to fully take advantage of the benefit brought by inter-client heterogeneity during training, and then, in the test phase, tune the local model of each client on its private test data to conquer the intra-client heterogeneity. [1] Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. In NeurIPS 2020.
**The effect of data augmentation.** The performance improvement benefits from data augmentation but is not entirely due to it. We did not perform an independent ablation study on the data augmentation component because we view it as part of the local invariance extraction. However, our ablation study (Table 5) considers a variant that removes the data augmentation module. When the variant further replaces the whole local invariance extraction module with a conventional supervised learning objective (cross-entropy loss), the accuracy drops from 62.23% to 59.54%, which is still higher than the baselines, whose best average accuracy is 59.14% (achieved by Tent). Following your suggestion, we provide the following tables under *covariate* test-time shift to explicitly show the independent effect of data augmentation. The upper table reports the average accuracy from Table 1 of the main text and the bottom table reports supplementary results to Table 5 of the main text, suggesting that the performance improvement only partially results from the data augmentation module.

|**Method**|Local|FedAvg|FedAvg-FT|FedProx|FedRep|FedBN|FedTHE|Tent|EATA|T3A|FedICON(Ours)|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|**Avg Acc**|34.00|55.04|54.55|55.23|38.14|54.80|45.93|*59.14*|58.88|51.60|**62.23**|

|**Component**|**Variant 1**|**Variant 2**|**Variant 3**|**Variant 4**|
|-|:-:|:-:|:-:|:-:|
|Local Invariance Extraction - Data Augmentation|&check;|&check;|||
|Local Invariance Extraction - Local Contrastive Loss|&check;||&check;||
|**Avg Acc**|62.23|60.87|61.64|59.54|

We hope the existing ablation study and the above additional results can well address your concern about the effect of the data augmentation module. --- Rebuttal 2: Comment: Dear Reviewer pJGu, We sincerely appreciate the feedback and suggestions you have provided on our paper. We have tried our best to address the concerns you raised.
We sincerely hope our responses have addressed your concerns, and we remain fully available to respond to any extra inquiries that may arise during the discussion phase. Best Regards, Authors --- Rebuttal Comment 2.1: Title: Thanks for your response Comment: Thanks a lot for your response. I apologize for my late response, as I have been seriously ill recently. Regarding the effect of data augmentation, I think the additional experiment answers my question. Thanks! Regarding whether the inter-client heterogeneity is really exploited in FedICON, I still believe that experiments on FedICON, with the setting in Table 1, while changing the IID partition to a non-IID partition, are important. I understand the motivation behind Fig. 1(b); however, the algorithm used there seems not to be FedICON. --- Reply to Comment 2.1.1: Comment: Dear Reviewer pJGu, We are sorry to hear that you are ill and hope you get better soon. We again really appreciate your feedback. We are sorry for not showing the results of FedICON under both the IID and non-IID partition cases. What we tried to verify is the generally existing phenomenon that inter-client heterogeneity can help alleviate the test-time shift problem. In fact, we have conducted the experiments of FedICON under the IID and non-IID partition cases but omitted the results from the motivating experiments. We will follow your suggestion and add the following results in the form of a bar chart to the experimental part. We do believe this will address the readers' concern about whether inter-client heterogeneity is exploited in FedICON. (Each cell reports mean±std accuracy.)

|**Partition**|**Method**|mn|sv|ys|syn|mm|
|-|-|-|-|-|-|-|
|IID|FedAvg|77.18±8.15|15.30±3.30|42.35±4.36|18.47±2.45|21.00±1.92|
|IID|FedICON|84.00±0.88|18.45±0.17|47.08±2.12|29.55±2.23|28.66±1.84|
|non-IID|FedAvg|86.18±0.72|38.60±1.84|58.19±2.78|51.50±1.13|40.72±1.15|
|non-IID|FedICON|89.67±1.48|45.23±0.35|72.13±2.22|56.13±0.63|47.98±0.50|
Summary: To deal with the feature-level test-time shift problem in federated learning, this paper proposes to leverage the inherent heterogeneity across clients via a contrastive learning method, named FedICON. Clients acquire invariance encoding ability on heterogeneous source data and further boost performance in the test phase. Experiments on various datasets show the effectiveness of FedICON compared with baseline methods. Strengths: 1. It is a novel idea to leverage heterogeneous properties underlying FL systems to deal with the commonly existing test-time shift problem within a client. Compared with centralized test-time adaptation studies, the heterogeneity problem in the training phase is a characteristic of FL. 2. Sufficient experiments are conducted and the main results show that FedICON achieves improvement for almost all participating clients under various feature-level shifts. 3. The paper is easy to follow and well-organized. The authors provide clear and novel definitions of inter-client and intra-client heterogeneity. The motivation for utilizing inter-client heterogeneity to deal with intra-client heterogeneity is well demonstrated. Weaknesses: 1. The concepts of test-time shift and intra-client heterogeneity seem to refer to the same thing, which is not well clarified in the main paper. 2. It would be better to give a clearer explanation of the fundamental difference between label-level and feature-level test-time shifts, which would make it easier for readers to understand why the two shifts should be studied separately. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Is it possible to deploy the proposed method in an online manner, e.g., the online test deployment in [1], to handle the dramatic shifts during the test phase? [1] FedTHE: Test-Time Robust Personalization for Federated Learning. In ICLR 2023. 2. Given a local model, how should one choose an appropriate feature space for contrastive learning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Difference between test-time shift and intra-client heterogeneity.** Thanks for the valuable comments. The intra-client heterogeneity we define in this paper refers to the test-time shift issue, especially in federated learning. The concept of test-time shift is commonly used in general machine learning, e.g., in both centralized and decentralized frameworks. Since test-time shift is a well-known challenge recognized by the community, we use it to pinpoint the core challenge addressed by this paper. However, to better formulate the problem of test-time shift in FL and make the notation more unified, we propose the concept of intra-client heterogeneity, which exists in the inference stage of a specific client, as a counterpart of inter-client heterogeneity in the training stage among all the participating clients. We will add more clarification on the concepts of test-time shift and intra-client heterogeneity in the revised version. **Difference between label-level and feature-level test-time shifts.** Thanks for the valuable comments. During the federated training stage, label-level and feature-level shifts across clients are two different kinds of statistical heterogeneity issues in federated learning, occurring in the output space and input space, respectively. Usually, different strategies are designed to solve these two issues. For example, to alleviate the feature-level shift issue, the feature encoder is kept personalized or finetuned at each client; to alleviate the label-level shift issue, the classifier is tuned instead. As for test-time shift, feature-level and label-level shifts are still two different challenges that deserve specific technical designs to alleviate them respectively. Experimental results in [1] show that a method effective for most feature-level shift cases may still not work well for the label-level shift case. Hence, it is better to discuss these two shift cases separately.
We will add more discussion on the explanation of the fundamental difference between these two kinds of test-time shifts in the revised version. [1] Test-time robust personalization for federated learning. In ICLR 2023. **Deployment in an online manner.** Thanks for the valuable question. In our current method, according to Eq. 7, the size of the test set should be larger than one to guarantee that self-supervised contrastive learning can be conducted for test-time adaptation. However, it can still be easily extended to an online manner where the test samples arrive one by one. First, a set of training samples is selected from the training set, replacing the $A(x)$ in Eq. 7. Then, after performing the optimization for each test sample, we replace one training sample in $A(x)$ with this test sample for future test samples. This turns the whole pipeline into an online manner and allows the model to gradually adapt to the distribution of the test data. We will add a more detailed discussion on this part in the revised version. **How to choose the feature space.** Thanks for the valuable question. Given a specific model, we can decouple it into two parts. One is the feature encoder that maps the raw input image into a latent embedding, while the other is the classifier that further maps the latent embedding into a probability vector over classes. Usually, for vision tasks, the classifier is composed of several fully-connected layers with a softmax layer as its final layer. We can select the features fed into the classifier for contrastive learning, which is common in existing works [1,2]. We will add a detailed specification of this in the revised version. [1] Supervised contrastive learning. In NeurIPS 2020. [2] Model-Contrastive Federated Learning. In CVPR 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses.
After the explanation, the difference between test-time shift and intra-client heterogeneity, and the difference between label-level and feature-level test-time shifts, have become very clear. I don't have any remaining concerns; I will keep my original score of 8 and recommend acceptance.
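For readers trying to picture the online extension discussed repeatedly in this thread (seed the contrastive neighborhood $A(x)$ with training samples, then swap in each arriving test sample), the following is a minimal sketch. It is only an illustration, not the authors' implementation: `adapt_step` is a hypothetical stand-in for one optimization step of the Eq. 7 contrastive objective, and all names are invented for this example.

```python
import random

def online_adaptation(model, train_samples, test_stream, buffer_size=32, adapt_step=None):
    """Online variant of test-time adaptation: the neighborhood A(x) starts
    as a buffer of training samples and is gradually replaced by observed
    test samples, so the model can drift toward the test distribution
    one sample at a time."""
    buffer = random.sample(train_samples, buffer_size)  # initial A(x)
    predictions = []
    for x in test_stream:
        if adapt_step is not None:
            adapt_step(model, x, buffer)  # one contrastive update against A(x)
        predictions.append(model(x))      # predict with the (adapted) model
        buffer[random.randrange(buffer_size)] = x  # swap one buffered sample for x
    return predictions
```

The key design point matches the rebuttal text: each test sample is used for adaptation once and then enters the buffer, so later samples are contrasted against an increasingly test-like $A(x)$.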
Summary: This paper focuses on the federated learning (FL) scenario with the test-time shift problem, which is a practical yet challenging research topic. Through an empirical study, the authors find that the inter-client heterogeneity in personalized FL can be further leveraged to build a robust FL framework against the test-time shift problem. Motivated by this, the authors propose a novel FL approach termed FedICON that uses train-time and test-time contrastive learning to boost the ability of local models to learn invariant information amid inter-client heterogeneity and test-time shifts. Extensive experiments show the effectiveness of FedICON in FL scenarios with various test-time shifts. Strengths: Strengths 1. Valuable research problem. This paper addresses a new research problem, namely federated learning with feature-level test-time shifts. The exploration of this research topic has significant potential for application in various industry scenarios. The authors discuss the problem from the perspective of inter- and intra-client heterogeneity, providing valuable insights for future works. 2. Novel method. The proposed method, FedICON, is innovative in addressing the FL problem with test-time shifts. The incorporation of supervised and unsupervised contrastive learning during the training and testing phases to acquire invariant knowledge is a meaningful approach. 3. Extensive experiments. The authors have conducted an extensive set of experiments to evaluate the effectiveness of the proposed method. They have considered multiple scenarios, such as covariate shift, domain shift, attribute shift, etc., which enhances the persuasiveness of the experiments. The experimental results demonstrate a significant performance improvement achieved by FedICON. Weaknesses: Weaknesses: 1. Lack of discussion of the motivating experiment.
In Sec 1, the authors present evidence that highlights the positive impact of inter-client heterogeneity in FL when addressing test-time shift problems. However, the paper lacks a thorough discussion of the underlying reasons behind this observation. It would be beneficial for the authors to provide deeper insights and further explanations regarding the reasons or deductions supporting this finding. 2. The design motivation needs to be clearer. Contrastive learning plays a crucial role in the proposed method. However, the authors have not sufficiently discussed why they choose contrastive learning to provide the pivot supervision signals instead of other supervised or unsupervised objectives, such as cross-entropy or self-reconstruction. It is important for the authors to provide a clearer explanation and justification for this choice, allowing readers to understand the design motivation of using contrastive learning in the proposed method. 3. There are some instances where unexpected periods (".") appear at the end of subheadings in certain subsections, such as Sec 3.1, 3.2, and 3.3. The authors should carefully review the paper to identify and correct any typos or format errors throughout the manuscript. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: As I pointed out in "Weaknesses", more discussions are expected: 1. The discussion of the motivating experiment. 2. The motivation for using contrastive learning in FedICON. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have discussed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Lack of discussion of the motivating experiment.** Thanks for the valuable comments. In the paper, we empirically demonstrate that inter-client heterogeneity in FL can help alleviate the test-time shift problem. As for the underlying reason, the naturally existing heterogeneity among FL clients potentially improves the generalization ability of the model when data of diverse feature distributions contribute to the learned model. Moreover, the inherent heterogeneity allows the model to capture invariance among heterogeneous feature distributions by exploring the common feature/representation properties. We will add more discussion and explanation regarding the reasons and insights in the revised version. **More explanation about the motivation.** Thanks for the valuable comments. We utilize contrastive learning to provide pivot supervision signals because contrastive learning is designed to learn representations that are invariant to different transformations of the same instance, which aligns with our goal of extracting local invariance. As for other types of objectives or learning pipelines, some of them lack discriminative ability, e.g., self-reconstruction, which makes them less useful for distinguishing between different instances. Also, some of them cannot be extended to unsupervised learning scenarios, e.g., cross-entropy, making them less flexible during the inference stage. We will add more explanation about our motivation to use contrastive learning in the revised version. **Some typos.** Thanks for the valuable comments. We will correct these typos in the revised version. --- Rebuttal Comment 1.1: Title: thanks for the response Comment: Thanks for the response. All my concerns have been properly addressed. I will raise my score.
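To make the authors' contrast concrete (invariance to transformations of the same instance plus discriminative ability, which reconstruction objectives lack), here is a minimal NumPy sketch of a supervised-contrastive (SupCon-style) loss in the family of [1] from the rebuttal above. It is a generic illustration, not the paper's exact Eq. 7 objective; function and variable names are invented for this example.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Minimal SupCon-style loss: L2-normalized representations sharing a
    label are pulled together; all other representations are pushed apart."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # normalize
    sim = z @ z.T / temperature  # scaled cosine similarities
    n = len(labels)
    total, counted = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        # log of the denominator: sum over all a != i (positives and negatives)
        denom = np.log(np.exp(np.delete(sim[i], i)).sum())
        total += np.mean([denom - sim[i][j] for j in positives])
        counted += 1
    return total / max(counted, 1)
```

The loss is small when same-label features are already aligned and different-label features are well separated, which is the "invariance plus discrimination" property the rebuttal appeals to.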
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We are glad that the reviewers found that the problem we are solving is valuable and practical in federated learning (Reviewers 9n3S, CniS, v7Ae); our idea of leveraging inter-client heterogeneity to handle the test-time shift problem is novel and insightful (Reviewers 9n3S, CniS, pJGu); our experiments are comprehensive and extensive (Reviewers 9n3S, CniS, pJGu). Both Reviewer CniS and Reviewer v7Ae think the paper is well-written and easy to understand. We respond below to concerns shared by more than one reviewer. Detailed responses to each reviewer are provided in the following. We will incorporate all the feedback in the final version. **C1. The case where test samples arrive one-by-one (Reviewer CniS and Reviewer v7Ae).** Thanks for your valuable comments and questions. In our current method, according to Eq. 7, the size of the test set should be larger than one to guarantee that self-supervised contrastive learning can be conducted for test-time adaptation. *It does not require access to the full test dataset to conduct the test-time adaptation.* Moreover, it can still be easily extended to an online manner where the *test samples arrive one by one*. First, a set of training samples is selected from the training set, replacing the $A(\mathbf{x})$ in Eq. 7. Then, after performing the optimization for each test sample, we replace one training sample in $A(\mathbf{x})$ with this test sample for future test samples. This turns the whole pipeline into an online manner and allows the model to gradually adapt to the distribution of the test data. We will add a more detailed discussion on this part in the final version to clarify the property of the testing scenarios. Pdf: /pdf/c4adc90c5d1c8b33eaf5c095ed41a1094a66c033.pdf
NeurIPS_2023_submissions_huggingface
2023
Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming
Accept (poster)
Summary: The paper studies the sparse dictionary learning and k-means clustering problems, using tools from sketching. Various results are obtained under different settings and assumptions. The first part of the paper considers lower bounds in the streaming setting. Here, the main technical result, which is a lower bound for k-means, uses a reduction from the communication complexity of the multiparty set-intersection problem. This is a nice set of results that translates prior ideas to k-means clustering problems. A new upper bound is also established under certain input sensitivity assumptions in the random order model. The second part of the paper gives approximation schemes for both problems. The key technical ingredient is the use of dimensionality reduction. Here, the use of dimensionality reduction follows essentially from prior work on projective clustering. The final part of the paper considers space complexity in the turnstile model. At a high level, sketching tools are used to discretize the appropriate matrix optimization problem, followed by brute force. Strengths: The paper studies two fundamentally important problems. The work introduces several ideas from sketching in this setting. The authors appear to be very well versed in the related literature. The paper is well-written and the presentation is quite accessible. Weaknesses: In the statements of several of the Theorems, the running time is not given (either at all, or not with enough precision). This makes me think that the algorithms are probably not very practical. The section on the turnstile model is labeled as "space complexity". I find this confusing. Your algorithm has some space complexity and some time complexity, and it would be best to state both clearly. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you provide any bounds on the exponent in the poly(n) term in Theorem 3.2? What's the running time of the algorithm in Theorem 4.3?
Can you comment on possible applicability of your algorithms? Is there a bottleneck in performance? Can you briefly discuss whether your results on k-means imply anything for k-center or k-median? Your lower bound constructions seem general enough that perhaps other problems can be addressed too. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >In the statements of several of the Theorems, the running time is not given (either at all, or not with enough precision). This makes me think that the algorithms are probably not very practical. We will add more precise time complexity results to our upper bounds, so that our work is easier to compare to in future work. Regarding the practicality of our methods, please see our later response regarding the "bottleneck" of our methods. > The section on the turnstile model is labeled as "space complexity". I find this confusing. Your algorithm has some space complexity and some time complexity, and it would be best to state both clearly. We will add time complexity bounds to the statements of Theorem 4.1 and 4.2 as well as rename the section to "Turnstile Streaming Algorithms". > Can you provide any bounds on the exponent in the $\operatorname{poly}(n)$ term in Theorem 3.2? Ignoring lower order terms, the complexity of Theorem 3.2 will be $\exp((8k^{3r}b \log d)^{O(k^{r+1})}\log n)$. > What's the running time of the algorithm in Theorem 4.3? The dominant time complexity term for Algorithm 4.3 would be $n^{\tilde{O}(k^2/\epsilon)}$, since the cost with respect to $d$ is amortized over the turnstile updates. > Can you comment on possible applicability of your algorithms? Is there a bottleneck in performance? The main bottleneck generally comes from the fact that, once we reduce the dimension of a problem, solving the reduced problem has a high computational complexity in terms of parameters such as $k$ and $r$. This likely cannot be avoided in the worst case, since both $k$-means and sparse dictionary learning are NP-Hard. However, there has been extensive work studying heuristic algorithms and algorithms leveraging additional statistical/structural assumptions for these problems. Our dimensionality reduction approaches could potentially be paired with these methods to achieve more practically efficient algorithms. 
> Can you briefly discuss whether your results on k-means imply anything for k-center or k-median? Your lower bound constructions seem general enough that perhaps other problems can be addressed too. Thank you for pointing this out! Indeed, we expect our lower bound argument to hold for Euclidean $k$-median clustering, and more generally, $(k,p)$-clustering for $p \leq 2$. For $p > 2$ (including $k$-center clustering), our lower bounds may also generalize to this setting, but we may also expect stronger lower bounds since $\ell_p$ norm estimation in $d$ dimensions requires $\operatorname{poly}(d)$ bits of space for $p>2$. In general, generalizing our results from Euclidean metrics to $\ell_p$ metrics is an interesting question.
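The finite-precision point in the rebuttal above (one-bit sign matrices suffice where exact Gaussians would need infinite precision) can be illustrated with a minimal NumPy sketch of a JL-style map. This is only an illustration of the general technique being discussed, not the paper's construction; the constant in the target dimension is arbitrary.

```python
import numpy as np

def jl_sketch(A, eps, seed=0):
    """Left-multiply A (n x d) by a scaled random sign matrix G (m x n).
    Rademacher (+/-1) entries take one bit each, sidestepping the bit-
    complexity issue of exact Gaussians, while column norms are still
    preserved up to a (1 +/- eps) factor with high probability."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = int(np.ceil(8 * np.log(max(d, 2)) / eps**2))  # m = O(eps^-2 log d)
    G = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    return G @ A
```

Note that for standard basis inputs the sketch preserves norms exactly (each column of the sign matrix has unit norm after scaling); for general inputs the guarantee is probabilistic.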
Summary: This theoretical paper discusses algorithms and lower bounds regarding time and space complexity for two interesting machine learning problems: 1) Euclidean k-means clustering 2) Sparse dictionary learning I briefly summarize the results based on the order in which they are presented in the main paper (which does not fully agree with the order presented in the introduction): 1. Space complexity **lower bounds** for approximate k-means in the **(a)** turnstile streaming, **(b)** row-arrival streaming 2. Upper bound for a special case of k-means, where the so-called sensitivities are bounded, which goes beyond the aforementioned lower bounds. 3. PTAS for dictionary learning and k-means based on random dimension reduction. 4. Space complexity **upper bounds** for the two problems in the turnstile streaming model. All the results are supported with detailed proofs. Admittedly, I only briefly checked very few of the proofs due to the limited time, but from the general impression of the paper I expect them to be robust. Strengths: The major strengths that I can highlight are the following: 1) The problems being studied are related to important machine learning problems that have drawn a lot of attention in the past 2) The results that are presented reveal many interesting insights for these two problems, from an algorithmic perspective, and the algorithms communities can benefit from such a detailed analysis. 3) Many results are presented and heavily supported with theoretical analysis (this is mostly a strength, but the amount of work makes it hard to thoroughly review in the limited time that is available) 4) It is evident that there has been a lot of work preparing the paper and it seems that the majority of the main claims are robust. Weaknesses: The major weakness of the paper that I can mention, unfortunately, is the way that it is written...! 1) There is no clear-cut summary of the results in the introduction.
There are many interesting results and improvements, but it takes significant time to identify them. E.g., the order in which the results are presented in the introduction does not agree with the order in which they are presented in the sections thereafter. The first mention of contributions is the PTAS, but this only appears in Section 3. Section 2, which precedes the PTAS, is about space complexity lower bounds. I spent a lot of time trying to locate results and connect them with each other, which I could have spent verifying proofs. 2) Many results in the 20 pages of additional content seem to be new, non-trivial, and crucial parts of the paper, e.g., Algorithms 1 and 2, the polysolver, .... They should be part of the main paper; it is not ideal that they remain hidden in the Appendix. 3) There are a few things that I think need to be clarified / definitions that are missing; see "Questions". **Note**: Items 1. and 2. did not affect my score. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I have the following questions for the authors that could help me better understand the paper and potentially modify the score. **Questions**: 1) Introduction, page 3: It is mentioned that a Gaussian JL sketch only needs $O(ε^{-2}n)$ bits of space, could you explain a bit more? Maybe another JL sketch is required? (e.g. Rademacher,...) 2) Theorems 2.3 and 2.4 mention "a constant number of passes...". Does the model support more than one pass over the stream? This is not mentioned in Definition 1.3 of a turnstile stream. 3) Theorem 3.1: why is it needed that rank(A)=Ω(poly(k/ε))? How is it ensured later on? (an assumption of d=poly(k/ε) is made for d, not for rank(A)) 4) Theorem 3.2: $b$ is not defined. Also vec(A), which is first used on page 9, is not defined 5) Abstract: I am not sure why the bounded sensitivity assumption is natural. Could you explain a bit more? 6) Line 342: S seems to also depend on ε. 7) Line 26: $r$-sparse is not defined.
It might be confusing to someone who is not familiar with the term. 8) I am not sure I understand the derivation of the PTAS (e.g. Section 3.4). The polysolver in the appendix seems to run in time exponential in the size of the input problem. From what I understood, this should not be a problem because we already reduce the matrix problem size with a sketch to something like $O(k \log k)$, and if $k$ is fixed then $\exp(k)$ is fine for a PTAS. Is this more or less correct? Could you provide a brief description that could help me understand the proofs of that section better? **Typos / Other suggestions**: 1) Theorem 2.1: "k-maens" 2) Line 64: Should it be "efficient" instead of "inefficient"? 3) The title of Section 4 is a bit too generic; it took me a while to understand why it is any different from Section 2, which also describes space complexity for turnstile. **Main recommendation**: I think that the structure of the paper can be significantly improved with not too much effort, and it would substantially improve the overall impression of the paper. I.e., there should be a clear summary of the new (and interesting) results that can guide the reader to locate them easier. I really think the best fit for this (large) paper is to submit a revised full version to an ML or TCS journal, where the reviewers can take the appropriate amount of time to provide insightful feedback, and the suggestions could be incorporated in a revision. In any case, this is up to the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: I cannot see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
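As background for the turnstile model the review asks about (Definition 1.3 in the paper), the defining property is that additive entry updates commute with any linear map, so an algorithm need only maintain $S\,\mathrm{vec}(A)$ rather than $A$ itself. A minimal sketch of this bookkeeping follows; the dense random sign matrix is an arbitrary placeholder for whatever sketching matrix a concrete algorithm would use, and the class name is invented for this example.

```python
import numpy as np

class TurnstileSketch:
    """Maintains S @ vec(A) under turnstile updates (i, j, delta) to the
    entries of an n x d matrix A, without ever storing A: by linearity,
    each update just adds delta times one column of S to the sketch."""
    def __init__(self, n, d, m, seed=0):
        rng = np.random.default_rng(seed)
        self.d = d
        self.S = rng.choice([-1.0, 1.0], size=(m, n * d)) / np.sqrt(m)
        self.sketch = np.zeros(m)

    def update(self, i, j, delta):
        # A[i, j] += delta  <=>  sketch += delta * S[:, i*d + j]
        self.sketch += delta * self.S[:, i * self.d + j]
```

Since updates only ever add to the sketch, entries of $A$ may be incremented and decremented in any order over the stream, which is exactly what the turnstile model allows.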
Rebuttal 1: Rebuttal: > There is no clear-cut summary of the results in the introduction. There are many interesting results and improvements, but it takes significant time to identify them. E.g., the order in which the results are presented in the introduction does not agree with the order that the results are presented in the sections thereafter. The first mention of contributions is PTAS, but this only appears in Section 3. Section 2, which precedes PTAS, is about space complexity lower bounds. I spent a lot of time trying to locate results and connect them with each other, which I could have spent in verifying proofs. Thank you for pointing out that the order of sections makes our work harder to read, and we apologize for the inconveniences caused by this. We have reordered the sections of the main body of our work to follow the flow of the introduction. > Introduction, page 3: It is mentioned that a Gaussian JL sketch only need $O(\varepsilon^{-2}n)$ bits of space, could you explain a bit more? Maybe another JL sketch is required? (e.g. Rademacher,...) You are correct that exact instances of Gaussian random variables cannot be represented in finite space. However, Gaussian matrices truncated to finite bit complexity will suffice. We will avoid referring to $\mathbf{G}$ as a Gaussian matrix and instead we can refer the reader (among other places) to Definition 1.2 in [Makarychev et al., 2020]. > Theorems 2.3 and Theorem 2.4 mention "a constant number of passes...". Does the model support more than 1 passes over the stream? This is not mentioned in Definition 1.3 of a turnstile stream. While turnstile streams can indeed be defined for more than one pass over the stream, our algorithms only need one pass, while our lower bounds work against a constant number of passes. We will clarify that our lower bounds work against this broader class of algorithms compared to our upper bounds in our revision. > Theorem 3.1: why is it needed that rank(A)=Ω(poly(k/ε))? 
How is it ensured later on? (an assumption of d=poly(k/ε) is made for d, not for rank(A)) Good point. We do not need this assumption since, if $rank(\mathbf A) < s$, then we can just directly reduce the dimension of the problem using SVD. We will change the theorem to remove this assumption. We do not need to change any later theorems, as we did not use this rank assumption anywhere else. > Theorem 3.2: $b$ is not defined. Also vec(A), which is first used in page 9, is not defined. Thank you for pointing this out. We write $b$ to denote the bit complexity of the input matrix $\mathbf A$, that is, each entry of $\mathbf A$ can be represented by $b$ bits. This is stated in Theorem D.1 of our initial draft, and we will move this definition to the main text in our revision. We have also clarified that vec(A) refers to the $nd$-dimensional vector obtained by flattening $\mathbf A$. > Abstract: I am not sure why the bounded sensitivity assumption is natural. Could you explain a bit more? The sensitivity quantity captures how sensitive the objective function is with respect to a given point $i\in[n]$, and is defined as the largest fraction of the objective function captured by the training example $i\in[n]$, ranging over all centers $c^1, c^2, \dots c^k\in\mathbb R^d$. The bounded sensitivity assumption states that there are no points that can take up a significant fraction of the objective function, and can also be interpreted as a way to formalize a ``well-clustered'' instance. In particular, this is one assumption which makes uniform sampling a good algorithm for sampling a small representative subset of examples. We have given a more in-depth discussion of this assumption in our revision. > Line 342: S seems to also depend on $\epsilon$. Thank you for catching this, we have fixed this in our revision. > Line 26: $r$-sparse is not defined. It might be confusing to someone who is not familiar with the term. 
We have clarified that an $r$-sparse linear combination is a linear combination of at most $r$ vectors. > I am not sure I understand the derivation of the PTAS (e.g. Section 3.4). The polysolver in the appendix seems to run in time exponential in the size of the input problem. From what I understood, this should not be a problem because we already reduce the matrix problem size with a sketch to something like $O(k\log k)$, and if $k$ is fixed then $\exp(k)$ is fine for a PTAS. Is this more or less correct? Could you provide a brief description that could help me understand the proofs of that section better? Yes, what you have described is indeed correct. Although the stated time complexity of PolySolver is $2^{O(nr + kd)}$ for input matrix $\mathbf{A}$, we never call the PolySolver on the full matrix, but instead call it on $\mathbf{W}\mathbf{A}$, which has a smaller dimension that is only logarithmic in $n$. To briefly give the intuition behind the proof of our sparse dictionary learning PTAS, we first assume $d = \operatorname{poly}(k/\epsilon)$ (this is justified since we give a way to efficiently reduce to this case in Theorem 3.1). We then reduce the size of $\mathbf{A}$ to be logarithmic in $n$ by applying a projective clustering coreset construction. We then show that, using existing work on polynomial system solvers, we can solve this smaller problem in $\operatorname{poly}(n)$ time. The technical difficulty comes from rigorously interfacing the projective clustering bounds, polynomial system solver complexity, and the specific constraints of the sparse dictionary learning problem, and this is what causes the technical complexity in the proof despite the intuitive simplicity of the approach. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses and for taking all the reviews into consideration. I was not able to find the revised manuscript (the "revisions" page is empty). Could you point me to it? 
--- Reply to Comment 1.1.1: Comment: While we did our best to describe in detail the revisions that we plan to make, per NeurIPS policy, no revisions are allowed until the camera-ready stage (https://neurips.cc/Conferences/2023/PaperInformation/NeurIPS-FAQ), and we refrained from uploading a revised paper.
Summary: The paper considers the well studied $k$-means clustering problem and the $r$-sparse dictionary learning problem. The paper has multiple contributions: (1) It presents a new approach for obtaining a PTAS for $k$-means clustering which matches the time complexity of previous algorithms for the problems. This approach generalizes to give the first PTAS for the sparse dictionary problem. (2) Within turnstile streaming algorithms, they consider the setting where the algorithm has to output both the assignments to the clusters/dictionaries as well as the cluster centers or dictionary elements. Previous work, even in the case of the simpler $k$-means, has either focused on one or the other, so this is a more challenging setting. Omitting logarithmic factors, the paper provides an $O(nr/\varepsilon^2+dk/\varepsilon)$ space algorithm for the $r$-sparse dictionary learning problem with dictionaries of size $k$ and an $O(n/\varepsilon^2+dk/\varepsilon)$ space algorithm for the $k$-center problem. They also present an $O(n)$-space algorithm when the points are inserted in a random order. On the lower bound side, they present an $\Omega(n/\varepsilon+dk/\varepsilon)$ space bound for $k$-means clustering as well as an $\Omega(n/\varepsilon^2)$ bound for algorithms that can estimate the cost for a fixed set of candidate centers. Technically most interesting seems to be the former lower bound for $k$-means clustering, which is via a reduction from the multi-party set disjointness problem. Strengths: I found the paper to be quite strong. It is indeed surprising that the setting where both the assignments and the centers/dictionaries must be output has only received limited attention. I found the ideas for the lower bound via multi-party set disjointness interesting (they are sketched well in the first 9 pages) and I also think it is nice that they design a PTAS for dictionary learning. 
I went through a few of the proofs in the appendices but far from everything, so I cannot vouch for correctness. However, the paper is well written and the proofs seem clear. Given that both the $k$-center problem and sparse dictionary learning are of interest to a good chunk of the NeurIPS community, I think the paper should be accepted. Weaknesses: It seems that some of the algorithms proposed by the paper might be implementable and it would be nice to see some experiments on their performance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: l64: Should "inefficient" be "efficient"? l104-105: you should probably say "after applying $G$" somewhere. Definition 2.3. This definition is strange. Saying that the algorithm outputs an $\varepsilon$-approximation does not have anything to do with the rows arriving one at a time. It seems that it should be a definition of the model and not say anything about the approximation. l236: Appending $k$ rows of what? l238: "but a $k$ rows". Please check the writing. l263: What is an indicator matrix? l320: I am a bit confused why the dimensionality has been reduced to logarithmic in $n$. The new dimension seems to only depend on $k$ and $r$. l331: Has $b$ been introduced? Lemma 3.1: Please check the statement. Is the order of quantifiers correct? Should the bound on $s$ be moved further back? l360: I think $S$ is $m\times n$. Same for l380. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: None as far as I can tell Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It seems that some of the algorithms proposed by the paper might be implementable and it would be nice to see some experiments on their performance. While we agree that experimental inquiry on these streaming/sketching algorithms would be interesting, we believe they would be best situated in a work dedicated to the topic. Our results are focused on algorithmic techniques that will achieve the best worst-case complexity, which is related to, but not the same as, optimizing for practical performance. Rather than directly apply our proposed algorithms, we believe our sketching methods to reduce the problem size could be paired with existing heuristic algorithms to efficiently solve the reduced problems in practice, at the cost of worst-case guarantees. > l64: Should "inefficient" be "efficient"? We have clarified this as "where even an inefficient algorithm will be tractable due to the smaller size of the instance". > l104-105: you should probably say "after applying $G$" somewhere. Done. > Definition 2.3. This definition is strange. Saying that the algorithm outputs an $\epsilon$-approximation does not have anything to do with the rows arriving one at a time. It seems that it should be a definition of the model and not say anything about the approximation. Indeed, this definition is intended to specify the model in which an input of $n$ vectors arrives one at a time. One could solve a variety of problems in this model, such as $\epsilon$-approximate $k$-means clustering, exact $k$-means clustering, or any other problem whose inputs are $n$ vectors in $d$ dimensions. We have clarified this in our revision. > l236: Appending $k$ rows of what? 
We have clarified this as follows: "The result of Woodruff (2014) constructs a distribution over $O(k/\epsilon)\times d$ matrices such that one can recover an arbitrary random bit among $\tilde\Omega(dk/\epsilon)$ random bits by appending a set of $k$ ``query'' rows and then computing a $(1+\epsilon)$-approximately optimal low rank approximation to the resulting matrix." > l238: "but a $k$ rows". Please check the writing. We have fixed this to be "all but $k$ rows". > l263: What is an indicator matrix? We have clarified this as follows: "let $\mathcal X$ be the set of matrices $\mathbf X\in\mathbb R^{n\times k}$ with standard basis vectors as rows". > l320: I am a bit confused why the dimensionality has been reduced to logarithmic in $n$. The new dimension seems to only depend on $k$ and $r$. We were not precise in saying the size and dimensionality were at most logarithmic in $n$, as this is all we need for the intuition of the argument to hold. However, you are correct that, more specifically, the dimension is reduced to be independent of $n$ and the size of the input is reduced to be logarithmic in $n$. We will improve the clarity of this sentence by focusing only on the size reduction. > l331: Has $b$ been introduced? Thank you for pointing this out. We write $b$ to denote the bit complexity of the input matrix $\mathbf A$, that is, each entry of $\mathbf A$ can be represented by $b$ bits. This is stated in Theorem D.1 of our initial draft, and we will move this definition to the main text in our revision. > Lemma 3.1: Please check the statement. Are the order of quantifiers correct? Should the bound on $s$ be moved further back? Thanks for bringing this to our attention. The bound on $s$ does not need to be moved, since the set $\mathcal{S}$ only depends on $n$, $k$, and $\epsilon$. However, it should say: ``for every $\mathbf{A}$ and $\mathbf{B}$ there exists $\mathbf{S} \in \mathcal{S}$...''. We will fix this. > I think $S$ is $m\times n$. Same for l380. 
Thank you for catching this; we have fixed it in our revision.
Summary: This paper presents results for the k-means and sparse dictionary problems, both of which ask to summarize an $n$ point data set in $d$ dimensions in terms of $k$ points. In the former we map each point to a center; in the latter, we are allowed sparse linear combinations of points. The paper considers two models, the streaming model (various versions of it) and the "standard" model where the goal is to come up with an algorithm that is polynomial in $n, d$, but $k, \epsilon$ are treated as constants, and the dependence on these can be arbitrary. They present both upper and lower bounds. Their lower bound results are: - An $\Omega(n/\epsilon)$ streaming lower bound for $k$-means clustering. This beats the trivial $\Omega(n)$ lower bound but it falls short of the $O(n/\epsilon^2)$ upper bound from JL. The proof is by a reduction from set disjointness, as is standard in streaming. The authors argue that their reduction is delicate and uses the structure of the hard instances from BYJKS'04. - An $\Omega(dk/\epsilon)$ lower bound which follows from earlier work by Woodruff. - They give some other lower bounds for restricted models. They also give results on PTASs for both problems. The idea behind both is to reduce the dimensionality of the points using various kinds of sketches. The exact sketches needed for these are chosen with some care. In low dimensions, one can afford a brute-force enumeration or similarly costly algorithm (this general idea goes back to the early work on coresets). They also give some results in the turnstile streaming model, but the results seem to have some caveats about the parameters/solution space. Strengths: - The problems considered are important and well-studied in the literature; the results will be of interest to people working in the general area of sketching/streaming. - I like the fact that they give unified results for $k$-means and the sparse dictionary problems. 
- The results seem to rely on a deep understanding of the prior work in the area, and on using exactly the right tools needed in each setting. Weaknesses: - The paper has too many results, at least some of them rather partial or for rather restricted models. I have a hard time deciding what the main contribution of the paper is. No one result stood out either in terms of the statement, or in terms of new techniques. - Some of the results seem a touch incremental, they come from applying prior ideas in a new setting. I realize that knowing what tools are applicable is no mean feat, given the vast literature. But I could not discern too much originality. Technical Quality: 3 good Clarity: 3 good Questions for Authors: If you wanted the reader to focus on one result or key idea which you see as the main contribution of your work, what would it be? I would suggest that the writeup focus on one or two main results, and defer the other results for the expert reader. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper has too many results, at least some of them rather partial or for rather restricted models. I have a hard time deciding what the main contribution of the paper is. No one result stood out either in terms of the statement, or in terms of new techniques. > I would suggest that the writeup focus on one or two main results, and defer the other results for the expert reader. Because the main contribution of our work is an exploration of a problem/setting which has not received much attention in the past, we believe that this work would be incomplete without including discussions about some of our more straightforward or partial results. We have attempted to highlight two technical highlights (discussed further in the next response) in Section 1.1. We will try to rewrite this section, as well as subsequent discussions, to focus more on these technical highlights, while still mentioning our other results and their connections to the landscape of results for PTAS and streaming algorithms for sparse dictionary learning and k-means clustering. > If you wanted the reader to focus on one result or key idea which you see as the main contribution of your work, what would it be? The overall message is that we initiate the study of k-means clustering and sparse dictionary learning (with assignment) in the PTAS and turnstile streaming settings, which we show are both amenable to techniques from the sketching literature. We would also like to reiterate that, surprisingly, prior work has not considered the task of outputting assignments for $k$-means or sparse dictionary learning in a stream despite its practical importance, and we hope our work will bring much-needed attention to this important problem. 
Technically, perhaps the two most interesting results (requiring the most "technical novelty") we would like to highlight are the first PTAS for sparse dictionary learning, as well as an $\Omega(n/\epsilon)$ lower bound for k-means clustering in turnstile streams. The first result uses a new reduction to coresets for projective clustering, while the latter uses a reduction to multi-party set disjointness (rather than the standard two-party problem), which requires delicate arguments to reason about optimal solutions to random instances of k-means clustering. > Some of the results seem a touch incremental, they come from applying prior ideas in a new setting. I realize that knowing what tools are applicable is no mean feat, given the vast literature. But I could not discern too much originality. While our techniques indeed rely on several standard results for our main technical results, such as the use of coresets for the sparse dictionary learning PTAS and set disjointness lower bounds for the k-means clustering communication lower bound, we argue that several innovations are still needed to make our proofs go through. For our sparse dictionary learning PTAS, the use of coresets to design a PTAS is indeed a well-known idea. However, the standard idea of building coresets for this problem by first computing an approximately optimal solution does not work in our setting, as the computation of an approximate solution is our goal to begin with. Instead, we show a reduction to coresets for projective clustering, which is a new connection to the best of our knowledge, and furthermore use the fact that there exist constructions for such coresets that do not need an approximately optimal solution, which uses a recent work of Tukan et al. [2022]. Note that older coreset constructions for projective clustering (e.g. https://people.csail.mit.edu/dannyf/stoc11.pdf) do not have this property. 
For our k-means clustering communication lower bound, one of the main challenges we face is to understand the optimal cost of a k-means clustering instance up to a $(1+\epsilon)$ relative factor, which is highly nontrivial since k-means clustering is an NP-hard problem in general, and we must exploit the structure of the specific instance at hand in order to characterize the cost of instances. This challenge is further complicated by the fact that our instance must be a dense random instance in order to use the set disjointness lower bounds, unlike some other lower bound results in other models shown in earlier work which have simpler instances to reason about that are supported on standard basis vectors (e.g., https://arxiv.org/abs/1905.06394, https://arxiv.org/abs/2202.12793). We introduce new techniques such as boosting the "signal" of the clustering instance with only a small decrease in the communication lower bound by using multi-party set disjointness, which we believe is a new interesting application of multi-party lower bounds. Note that with a standard two-party set disjointness argument, a nearly optimal k-means clustering may not necessarily solve the set disjointness problem, since the cost savings from clustering this coordinate correctly is only two, while the cost savings from finding the optimal clustering of the random bits could be much larger. Additionally, our turnstile streaming algorithms rely on a delicate argument combining a sketched multiple linear regression guarantee, affine-embedding, and JL-embedding. This combination must be done carefully to balance the space of the sketch with the strength of the guarantee needed, which we do by conceptually breaking the problem into disjoint pieces of 1) reducing the problem using the linear regression guarantee, 2) solving the reduced problem with the affine embedding, and 3) checking if we have solved the overall problem with a JL-embedding guarantee. 
Achieving this decomposition relies on leveraging a less well-known ``guess-the-sketch'' approach.
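As a loose, self-contained illustration of the JL-embedding ingredient referenced in this rebuttal thread (this is not the authors' construction; the dimensions, random seed, and distortion bound below are arbitrary choices made for the example), a Gaussian sketch in numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 2000, 200   # m sketch rows; distortion concentrates like sqrt(log n / m)
eps = 0.5                   # generous distortion bound for this toy instance

A = rng.standard_normal((n, d))                # n points in d dimensions
G = rng.standard_normal((m, d)) / np.sqrt(m)   # Gaussian sketch: E[||G x||^2] = ||x||^2
SA = A @ G.T                                   # sketched points, now m-dimensional

# Norm preservation on a few sampled points
for i in rng.integers(0, n, size=20):
    ratio = np.linalg.norm(SA[i]) / np.linalg.norm(A[i])
    assert 1 - eps <= ratio <= 1 + eps
```

This only conveys how a random sketch shrinks the point dimension while approximately preserving geometry; the sketches in the paper (affine embeddings, regression sketches, and finite-bit-complexity variants) carry stronger, problem-specific guarantees.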
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models
Accept (poster)
Summary: This paper proposes the use of nonmonotone line search methods to speed up the optimization process of modern deep learning models, specifically Stochastic Gradient Descent (SGD) and Adam, in over-parameterized settings. The proposed method relaxes the condition of a monotonic decrease in the objective function and allows for larger step sizes. The authors introduce a new resetting technique that reduces the number of backtracks to zero while still maintaining a large initial step size. The proposed POlyak NOnmonotone Stochastic (PoNoS) method combines a nonmonotone line search with a Polyak initial step size. The paper proves the same rates of convergence as in the monotone case. The experiments show that the nonmonotone methods improve on SGD/Adam in both convergence speed and generalization. Strengths: - **Originality:** The use of nonmonotone line search methods to relax the condition of a monotonic decrease in the objective function is a stochastic generalization of [Zhang and Hager, 2004], which was proposed initially for deterministic optimization. The initial step size is chosen on the basis of previous work [Vaswani et al., 2019]. The paper also introduces a new resetting technique that reduces the number of backtracks to zero while still maintaining a large initial step size. Overall, the paper's originality is a significant strength. - **Quality:** The paper provides rigorous proof that the proposed nonmonotone line search method has the same rate of convergence as in the monotone case despite the lack of a monotonic decrease. The experiments show that the nonmonotone method converges faster and generalizes better than SGD and Adam. Computational time comparison experiments also show the outperformance of the proposed method. The theory is solid and the experimental results are strong. 
- **Clarity:** The paper is well-written and easy to understand, with clear explanations of technical terms and concepts. Qualitative explanations of the theorems are provided to help the readers understand the main messages. The authors provide detailed descriptions of the proposed method and the experiments conducted to evaluate its performance. Comparisons with other methods are presented clearly. - **Significance:** The proposed method outperforms existing state-of-the-art algorithms in both computational time and generalization. Weaknesses: - The proposed method includes many parameters to be chosen artificially, such as $\eta_{\rm max}$, $c$, $c_p$, $\delta$, and $\xi$. Although ranges for them are provided in the theorems, the influence of different choices of these parameters on the method's performance is not clear. Are the specific values used in a real experiment not so important? If so, to what extent? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Same as stated in the **Weakness** part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the proposed method are not stated sufficiently; only future perspectives are given. For example, considering the local PL assumption. The claims that the proposed method outperforms many other state-of-the-art methods from several perspectives are quite strong. Are there any drawbacks, or points to be improved, of the proposed method? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the accurate comments and the time spent reading the paper. Weaknesses: The performance of PoNoS is actually not sensitive to hyperparameters. In fact, the same values work across experiments and there was no need to fine-tune them. Most of PoNoS's hyperparameters were set to very standard values, while others were either inherited from recent papers or fixed by the theory: - $\delta = 0.5$, the classical step-size reduction factor [Nocedal and Wright, 2006]. SLS employs an unusual value of 0.9, and this choice is connected to the use of their resetting technique (3). We checked the results of SLS with $\delta=0.5$ and they indeed turned out to not be as good as with $\delta=0.9$. - $\xi = 1$, the fully nonmonotone version of Zhang and Hager [2004]. - $\eta^{\text{max}} = 10$, a very classical value [Vaswani et al., 2019, Loizou et al., 2021]. We conducted an ablation study in Section E.4, which shows that larger values of $\eta^{\text{max}}$ do not have a remarkable impact on the results. These results show that PoNoS is more robust than SLS and SPS to these changes. - $c = 0.5$, suggested by the theory. Both our theory and that of Vaswani et al. [2019] suggest employing 0.5 for $c$, rather than the classical 0.1 or lower [Nocedal and Wright, 2006]. The numerical results in Section E.1 of the supplementary support this choice. In particular, they show in Figure VII that $c=0.1$ might bring PoNoS and its monotone counterpart to diverge. - $c_p = 0.1$, half of the inherited value [Loizou et al., 2021]. In Loizou et al. [2021], the value 0.2 was suggested for SPS; however, in our case, the initial step size is not the final step since the backtracking procedure might reduce this value to half (or less). For this reason, we decided to employ a step that is initially double that of SPS. The results show that PoNoS|0.1 is consistently better than PoNoS|0.2 (see Section E.5 of the supplementary). 
Thus, we also checked whether SPS|0.1 would be consistently better than the original SPS, but this is not the case. Limitations: - In the case of transformers, PoNoS's advantage over Adam is not consistent with the one obtained for convolutional neural networks. The reason for this seems to be the reduced dynamics of the loss and of the norm of the gradient in the case of transformers. These two values directly determine the Polyak step, which consequently remains very flat along the training (see section D.5 of the supplementary). This observation suggests that (Po)NoS might need to be paired with a new initial step size and/or a different preconditioner for training transformers. - Given the local PL result by Liu et al. [2022], our Theorem 3 can only be considered to hold locally and only for neural networks with some specific properties that are still not completely realistic. In fact, Theorem 8 by Liu et al. [2022] holds only for very wide networks with a squared loss function and whose tangent kernel at initialization is strictly positive definite. To the best of our knowledge, other works on neural networks use the same assumptions since they are still the closest available to applications. - The $\xi$ permitted by our theorems is very small. On the other hand, the value employed in practice is large ($\xi = 1$). We conjecture that the bound on $\xi$ might be relaxed, even if the final shape of this bound might depend on the solution to the above-mentioned issue on Theorem 3. --- Rebuttal Comment 1.1: Comment: Your reply addresses my questions. Thank you very much. I will keep my score as "Weak Accept".
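The line-search mechanics discussed in this rebuttal can be sketched in a few lines of code. The following is a minimal illustration (not the authors' implementation) of a nonmonotone backtracking step with a Polyak initial step size, using the hyperparameter values stated above ($c=0.5$, $c_p=0.1$, $\delta=0.5$, $\xi=1$, $\eta^{\text{max}}=10$); the helper names and the toy quadratic objective are our own:

```python
import numpy as np

def zhang_hager_update(f_k, C_prev, Q_prev, xi=1.0):
    # Zhang-Hager running average of past loss values, used as the
    # nonmonotone reference; with xi = 1 it averages all losses so far.
    Q_k = xi * Q_prev + 1.0
    C_k = (xi * Q_prev * C_prev + f_k) / Q_k
    return C_k, Q_k

def ponos_step(f, grad, x, C_k, c=0.5, c_p=0.1, delta=0.5, eta_max=10.0, f_star=0.0):
    g = grad(x)
    g_sq = float(g @ g)
    # Polyak initial step size, capped at eta_max (f_star = 0 under interpolation)
    eta = min(c_p * (f(x) - f_star) / max(g_sq, 1e-12), eta_max)
    # Nonmonotone Armijo backtracking against the reference value C_k
    # (monotone line search would use f(x) in place of C_k)
    while f(x - eta * g) > C_k - c * eta * g_sq:
        eta *= delta
    return x - eta * g

# Toy run on a quadratic, where interpolation (f_star = 0) holds exactly
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x = np.array([3.0, 4.0])
C, Q = f(x), 1.0
for _ in range(50):
    x = ponos_step(f, grad, x, C)
    C, Q = zhang_hager_update(f(x), C, Q)
```

Because the reference value $C_k$ is at least the current loss, the nonmonotone condition is weaker than the monotone Armijo one, so the Polyak initial step is accepted more often and fewer backtracks are performed; on this toy quadratic the initial step always passes without backtracking.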
Summary: This paper proposes a non-monotonic line search method for choosing step sizes in stochastic optimization. Convergence rates are proved for strongly convex, convex, and PL functions, and the rates match those of previous work. Experimental results show that (1) for MLPs and CNNs, the proposed algorithm outperforms SGD, Adam, and previous line search methods, and (2) for kernel models and transformers, the proposed algorithm outperforms SGD and previous line search methods, and is competitive with Adam. Strengths: 1. The question is significant. Given the observations of the "edge of stability" and non-monotonic decreases in loss when training deep networks, it seems natural that incorporating non-monotonicity into line search methods may yield significant performance improvements. 2. The presentation is clear and easy to follow. 3. The theoretical results (Theorems 1, 2, 3) can recover convergence rates from previous work. 4. The experimental evaluation is very broad, covering many datasets and neural network architectures. Weaknesses: 1. The proposed algorithm appears to be a direct combination of existing techniques (non-monotonic line search with Polyak initial step size). While this isn't necessarily a problem in itself, as a result the technical novelty of the paper is not very high. 2. The theoretical results recover the previous convergence rate, but they do not exhibit any improvement over baselines. Recovering the previous convergence rates is natural and the proofs don't appear to contain any new techniques. Therefore, the theoretical contribution is not significant. 3. The main text contains no information about the tuning procedure for baselines or for the proposed algorithm, and the appendix contains very little information about hyperparameters. 
It's uncertain whether the experimental comparison is fair, and since the theoretical results do not exhibit improvement over baselines, the experimental performance is the only substantial contribution. Some previous baselines require additional hyperparameters (e.g. SPS with $\gamma$), but there is no assurance that this parameter was properly tuned. It is also difficult to see whether PoNoS was tuned more extensively than baselines, which would of course not be a fair comparison. 4. Evaluation for RBF kernel models and transformers does not compare by wall-clock time, and does not include results for test loss. Since PoNoS is competitive with Adam when measured by epochs, it is natural to assume that PoNoS lags behind Adam in terms of wall-clock time, which raises the question of whether existing line search methods are useful for training Transformers. Line 344 says that only the training procedure is considered (following previous work), but this is not completely satisfying to me. If we compared test performance for MLPs and CNNs, why not also for RBFs and Transformers? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why is interpolation important to achieve the theoretical results? The introduction discusses interpolation in detail, but Section 4 only mentions interpolation as a condition for the theorems, and does not discuss why interpolation is necessary. Do previous line search methods also require interpolation to recover the same convergence rates? 2. How were the hyperparameters chosen? In particular, were all algorithms fairly tuned? 3. Why does PoNoS outperform other line search methods empirically when the theoretical guarantees of PoNoS do not improve over baselines? 4. How much do the two individual components of PoNoS (non-monotonicity and choice of initial step size) affect the performance? In particular, how would PoNoS perform if we used non-monotonicity with a classical initial step size? 
As a miscellaneous suggestion, please use lower resolution images for Figures 1-3, or if you're using a vector format please remove some data points. The provided PDF is unusually large and viewing the figures in Chrome's PDF viewer is quite slow. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors include some discussion of limitations and future work in the conclusion, though it would be nice to see some more discussion of the weaknesses of the proposed method instead of just directions for future work. Discussion of potential negative societal impact is, in my opinion, not necessary for this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments and the time spent reading the paper. As a general answer, it appears that many of his/her statements are influenced by the reviewer's belief regarding our theory not containing any novelty. In the reply to Weakness 2 below, we clarify that our proof does contain a new technique, which is a central contribution of our work and which already seems to have a candidate for reuse. Weaknesses: 1. Below, we provide an example of a valuable technical novelty of our work. Please also see the reply to Reviewer CWjK. 2. We understand the concern of the reviewer; however, the derived theorems are not a straightforward consequence of Theorems 1-2 of Vaswani et al. [2019]. In particular, the proof structure is similar to theirs; on the other hand, the idea of summing the monotone and nonmonotone sequences is original and is a non-trivial contribution of our study. This proof technique allows us to show for the first time in the stochastic setting that\ (*) the difference between the nonmonotone and the monotone terms converges geometrically to 0.\ A few pieces of evidence for the non-triviality of our achievement are: - all the existing convergence rates for stochastic nonmonotone methods assume (*), instead of proving it (since [Krejić and Krklec 2015]); - in the preprint by Hafshejani et al. [2023], the convergence proof seems incomplete because the authors did not show (*) but only an asymptotic version of it; - one of the main contributions of Grippo et al. [1986] is the inductive proof showing (*) for the deterministic setting. 3. PoNoS's hyperparameters were not fine-tuned; they were either set to very standard values or fixed by theory (see reply to Reviewer 9zy1 for a detailed discussion). The reviewer refers to $\gamma$ of SPS. If the reference is to the maximum step size of Loizou et al. [2021], the robustness analysis for SPS, SLS and PoNoS can be found in Section E.4 of the supplementary. 
If the reference is to $\gamma$ in (3) (which is only affecting SLS and SPS), our experiments were not conclusive towards changing it, so we decided to keep the default value chosen by Loizou et al., [2021]. Notice that we performed a per-problem fine-tuning of hyperparameters only for the learning rates of SGD and Adam. 4. Concerning transformers, as the reviewer pointed out, PoNoS is competitive with Adam in terms of epochs, but slower than it in terms of wall-clock time. On the other hand, Adam's learning rate has been fine-tuned, while PoNoS has been used off-the-shelf. In fact, as soon as a hyperparameter selection is performed to select its learning rate, Adam is overall slower also in terms of cumulative-training time. It is true, however, that the improvement of PoNoS over Adam for training transformers is not as noticeable as that over SGD for CNN. In Section D.5 of the supplementary, we discuss the limitation of the Polyak step in this set of experiments. As in the case of SGD, it seems that transformers need to be treated differently also for (Po)NoS, especially in terms of its initial step size. Regarding the wall-clock time of training RBF kernel models, these values are in the order of fractions of seconds, so we decided to not report them. The accuracy of RBF kernel models on the binary classification tasks can be found in Figure IV of the supplementary. Questions: 1. We would like to thank the reviewer for the question, we will clarify this point. The interpolation assumption is the property that allows stochastic methods to achieve linear rates [Ma et al. 2018] without needing to grow the batch size. Together with L-smoothness, this assumption allows the weak growth condition to be replaced by an alternative bound on the variance of the mini-batch gradients. A large chunk of the methods designed after Ma et al. 
[2018] assume interpolation because it is a realistic property of neural networks and because it allows us to avoid designing a hand-crafted sequence of step sizes. This includes the previous monotone line search [Vaswani et al., 2019] and the stochastic Polyak step size method [Loizou et al., 2021]. 2. See above. 3. Our theory is consistent with the literature. In fact, none of the existing nonmonotone rates is faster than its monotone counterpart [Grippo et al. 1986, Raydan 1997, Dai 2002, Zhang and Hager 2004], not even in terms of constants. Nevertheless, non-monotonicity has been shown to improve practical performance in a variety of settings. Please also see the reply to all reviewers. 4. This question is answered partly in Section D.2 and partly in Section E.2 of the supplementary. In Section D.2, we fix the use of the proposed nonmonotone stochastic line search and study different initial step sizes. In particular, the Polyak step is by far the best initial step among those explored. In Section E.2, we fix the use of the Polyak initial step and analyze different line search methods. The results show that the adapted stochastic Zhang and Hager [2004] line search achieves the best performance, especially given its dominance in terms of test accuracy and the heavy computational cost of single_batch_grippo. Miscellaneous: Thanks for the suggestion, we will lower the resolution of all images. Limitations: The limitations of PoNoS were partially discussed in the answer to the reviewer's comment on transformers, but a complete picture is given in the reply to Reviewer 9zy1. References not in the main paper: [Ma et al., 2018] Siyuan Ma, Raef Bassily, and Mikhail Belkin. "The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning". In International Conference on Machine Learning, pages 3325–3334. PMLR, 2018. [Dai, 2002] Dai, Yu-Hong. "On the nonmonotone line search." 
Journal of Optimization Theory and Applications 112 (2002): 315-330. --- Rebuttal Comment 1.1: Comment: Thank you for your response, which cleared a few things up. There is slightly more theoretical contribution than I originally thought, and the discussion of the condition (*) clarified this contribution. However, I still believe that recovering the previous rate (as opposed to improving it) is a little unsatisfying. It is true that some papers, like Adam, only recover the previous rate in theory while improving the performance in practice. However, I don't think that the empirical results in this paper are enough to make up for the lack of theoretical improvement in the same way as e.g. Adam. The authors claim that the proposed algorithm is "the first stochastic nonmonotone line search method able to train neural networks efficiently while simultaneously achieving state-of-the-art generalization performance". I do not agree that PoNoS exhibits SOTA generalization performance, at least not for all tasks. The results in Figures 1-3 are good, but the main body contains no results on the test set for kernel models or transformers. The appendix does contain some test results for kernel models, but PoNoS does not outperform other methods. My biggest concern is that there are no test results for transformers anywhere, and without these results I don't believe the claim that PoNoS has SOTA generalization. With all of this in mind, the empirical improvement does not feel substantial enough to make up for the lack of theoretical improvement. I will keep my rating. --- Reply to Comment 1.1.1: Comment: "However, I still believe that recovering the previous rate (as opposed to improving it) is a little unsatisfying. It is true that some papers, like Adam, only recover the previous rate in theory while improving the performance in practice." It is true that in an ideal world we would be able to prove a faster rate. 
However, the (almost 4 decades of) impressive practical performance of non-monotone methods has thus far resisted analysis. On the other hand, it is reassuring that we can still match the rate of SOTA methods like SLS and SPS. Note that Adam did *not* recover the previous rates in theory (the proof in the original Adam paper is obviously wrong). The same is true for other important practical methods like AdamW and cosine annealing, which have become the most popular optimization algorithms in the field. These practical contributions inspired later theoretical works exploring how to justify their empirical performance. "However, I don't think that the empirical results in this paper are enough to make up for the lack of theoretical improvement in the same way" We are confused by this comment. Our experiments indicate that PoNoS matches or outperforms a variety of existing methods, across a range of tasks and datasets (the reviewer also acknowledged the breadth of our experiments within the Strengths of our paper). All our evidence points to PoNoS being a method that is useful in practice. They also indicate that PoNoS is particularly effective for CNNs, one of the most important model classes in computer vision. Our experiments are more extensive than most empirical works presenting new optimization algorithms, with most empirically-motivated methods only first showing success on a small-but-important class of problems (the AdamW paper with >10,000 citations only presented results on CIFAR-10 and downscaled ImageNet datasets). "The appendix does contain some test results for kernel models, but PoNoS does not outperform other methods" In these settings PoNoS outperformed the other methods on one dataset while matching SOTA performance on the other 3. This is in contrast to Adam, for example, which can perform poorly on convex problems [Reddi et al., 2018]. We should not expect any method to strictly outperform all other methods in every scenario. 
"My biggest concern is that there are no test results for transformers anywhere, and without these results I don't believe the claim that PoNoS has SOTA generalization." In the transformer experiments, all methods achieve the same test error while methods that decreased the training error faster achieve this error faster. We will update the paper to include these results.
Summary: This paper presents a non-monotone line search method for optimizing over-parameterized models. The method is equipped with some theoretical support for strongly convex, convex and the PL condition. Furthermore, experimentally, the method is shown to have favorable performance when optimizing various deep learning models of practical interest. ==> post rebuttal: increased score from 4 to 5. Strengths: Obtaining convergence results with non-monotone line search strategies appears to be a novel contribution - though, I don't know if some variant of this result has appeared in existing literature dealing with non-monotone line search. Weaknesses: - The proposed approach seems incremental compared to existing approaches. - The theory doesn't adequately capture why the proposed method outperforms existing approaches. The bounds suggest identical convergence rates as ones that use monotone line search methods. This suggests all these bounds are fairly worst case (loose upper bounds) that do not help quantify why these methods work well in practice in the first place. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the theorem, particularly for the PL case, the degree of (non-) monotonicity that can be tolerated seems to be rather small (the permitted \zeta appears to be super small)? Can the authors clarify this? - The theorem for the strongly convex case doesn't appear to be particularly applicable in the over-parameterized case unless a regularized objective is being solved for, which doesn't reflect on how the behavior manifests on the loss function we truly care for. - Can the authors clarify why the proposed stepsizes tend to increase as a function of iterations? Is this related to using normalization layers (e.g. batch norm/layer norm) within the architectures? How do these stepsizes look like for convex case in the experiments conducted in the paper? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments and the time spent reading the paper. Weaknesses: · We understand the reviewer's position on this matter; however, we would like to stress a few factors that make our contribution non-incremental. PoNoS is the first stochastic nonmonotone line search method able to train neural networks efficiently while simultaneously achieving state-of-the-art generalization performance. To the best of our knowledge, the following contributions did not exist in the literature before:\ &nbsp;&nbsp;1.1 the stochastic Zhang and Hager [2004] line search (2);\ &nbsp;&nbsp;1.2 the memory-based resetting technique (5);\ &nbsp;&nbsp;1.3 the combination of a Polyak initial step size and a Zhang and Hager [2004] line search (not even in the deterministic setting);\ &nbsp;&nbsp;1.4 Theorems 1-3 and the proof technique employed to show them.\ Of the above elements, 1.2 and 1.4 are non-incremental contributions of our work. In fact, (5) is very different from (3), and the idea of summing the monotone and nonmonotone sequences is original and is a non-trivial contribution of our study. Regarding 1.1 and 1.3, we agree with the reviewer that they seem more easily reachable from the existing methods. On the other hand, the process of designing them included creative and non-conventional elements that in the end were replaced by adaptations of more consolidated techniques. We believe that this is actually a strength of our method and not a weakness. In fact, simple changes with big practical impact are often very appreciated contributions in the long term. · Even though we do not improve on the worst-case theoretical rate, we match it while significantly improving practical performance. Please also see the reply to all reviewers. Questions: · The reviewer is right: the $\xi$ permitted by Theorem 3 is very small. 
This bound is a consequence of the fact that the nonmonotone term needs to be "squeezed" between the monotone sequence and the linear bound. In fact, also in the deterministic case, the monotone and nonmonotone sequences need to converge geometrically to the same value. We conjecture that this bound does not need to be this tight, but we leave this exploration to future works. · We are not sure we understood the reviewer's comment correctly: which are the loss functions we truly care for? In deep learning, it is true that neural networks do not employ any regularization; on the other hand, one might be interested in applying our method to more classical machine learning models (e.g., linear Support Vector Machines (SVM)) on a dataset with many more features than instances. In this case, the over-parametrization property would be satisfied and also the strong convexity (SVM models are regularized models). · The step size increases because the squared norm of the gradient decreases as we get closer to a stationary point (see fourth column of Figure I of the supplementary). This is a consequence of the Polyak step size, whose denominator is the squared norm of the gradient. The same happens in the case of the convex experiments (see the third column of Figure IV of the supplementary). --- Rebuttal Comment 1.1: Title: Re. author response Comment: Thanks to the authors for their clarifications. I will increase my score to a 5. My comment about the loss function was that even if we solve the regularized loss, the eventual loss that we care about is the one without regularization, so the rates of convergence to the regularized loss don't really matter all that much. Nevertheless, thanks for your response.
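The Polyak-step behavior described in the rebuttal above (the step grows as the squared gradient norm in the denominator shrinks) can be sketched in a few lines. This is our own illustration, not the authors' code: the name `sps_step` and the defaults `c=0.5`, `gamma_max=10.0` are hypothetical, and under interpolation the per-sample optimum `f_i_star` is taken as 0.

```python
import numpy as np

def sps_step(f_i, grad_i, c=0.5, gamma_max=10.0, f_i_star=0.0):
    """Stochastic-Polyak-style step size (f_i - f_i^*) / (c * ||g_i||^2),
    capped at gamma_max. Because the squared gradient norm sits in the
    denominator, the step grows as the iterates approach a stationary
    point, matching the behavior discussed in the rebuttal."""
    g_sq = float(np.dot(grad_i, grad_i))
    if g_sq == 0.0:
        # Under interpolation a zero gradient means the sample is solved.
        return gamma_max
    return min((f_i - f_i_star) / (c * g_sq), gamma_max)
```

For example, on a sample with loss 2.0 and gradient (2.0,), the step is 2.0 / (0.5 * 4.0) = 1.0, while a near-flat loss with a tiny gradient is capped at `gamma_max`.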
Summary: This submission proposed a new line search method to ensure convergence without the monotone decrease condition of the (mini-)batch objective function. The method is quite suitable for modern DNN training, which prefers larger learning rates. Strengths: - The explanation of motivation is very clear. - The related work has been extensively discussed. Weaknesses: - The discussion of the differences between the proposed and the previous methods is inadequate, especially since the existing methods inspire some steps. - The theoretical benefits of the proposed method are not shown/discussed in the convergence rate results. The proposed method seems to share a similar rate with the previous results. However, since PoNoS prefers a larger learning rate, its rate should at least demonstrate an advantage at the level of constants. - There is no numerical comparison in the main part, and just having the curve figures cannot fully display the results. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - The experimental results are not entirely convincing. With the development of neural network training technology in recent years, the Adam optimizer can achieve better training losses than SGD in most cases. It is strange that SGD beats Adam consistently, especially on the transformer model for the NLP task. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments and the time spent reading the paper. Weaknesses: · We don't understand the reviewer's comment: in what sense is our discussion of the differences between the proposed and the previous methods inadequate? · While it is true that nonmonotone methods take larger steps, they do not ensure that the function value will decrease at each iteration (but only every $W$ iterations). Even if practically this is a real advantage (i.e., it grants line searches more freedom of choice), this property is not captured by the theory. In fact, none of the existing nonmonotone rates is faster than its monotone counterpart [Grippo et al. 1986, Raydan 1997, Dai 2002, Zhang and Hager 2004], not even in terms of constants. Nevertheless, non-monotonicity has been shown to improve practical performance in a variety of settings. For a discussion on the provable advantages of one method over another, see the general response to all reviewers. · We don't understand the reviewer's comment: what does the reviewer mean by numerical comparisons? Questions: · Our results agree with the reviewer's comment: Adam is always faster than SGD for training transformers (see the last two columns of Figure 4 of the main paper and Figure V of the supplementary) and comparable with SGD for training convolutional neural networks (e.g., Figure I of the supplementary). This observation is consistent with other works [Kunstner et al., 2023]. References not in the main paper: [Dai, 2002] Dai, Yu-Hong. "On the nonmonotone line search." Journal of Optimization Theory and Applications 112 (2002): 315-330.
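The every-$W$-iterations decrease discussed in the rebuttal above can be made concrete with a minimal sketch of the classical window-based nonmonotone Armijo test in the style of Grippo et al. [1986]. This is our own illustration, not the paper's algorithm; the name `nonmonotone_accept` and the default values are hypothetical.

```python
def nonmonotone_accept(f_new, f_hist, step, grad_sq, W=10, c=1e-4):
    """Accept a trial point if its loss is sufficiently below the *maximum*
    loss over the last W iterations, rather than below the last loss.
    With W = 1 this reduces to the classical monotone Armijo rule."""
    f_ref = max(f_hist[-W:])  # reference value over the window
    return f_new <= f_ref - c * step * grad_sq

# A step that slightly increases the loss can still be accepted, as long
# as it stays below the worst recent iterate.
hist = [5.0, 3.0, 2.0]
accepted = nonmonotone_accept(2.5, hist, step=1.0, grad_sq=1.0)        # nonmonotone: accepted
rejected = nonmonotone_accept(2.5, hist, step=1.0, grad_sq=1.0, W=1)   # monotone rule: rejected
```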
Rebuttal 1: Rebuttal: We would like to thank all six reviewers for their feedback and the time spent reading the paper. Below we comment on several reviewers' concerns regarding the lack of a theoretical result showing PoNoS's advantages over the existing methods. We understand the reviewers' opinion; however, we would like to clarify our point of view on this matter. For DL researchers, and especially for the possible users of PoNoS, it is of great interest to know that it is numerically faster than state-of-the-art methods while still being backed up by the same convergence rates. This theoretical achievement is not trivial, and it is more than what can be obtained, for instance, by Adam [Reddi et al., 2018]. Despite the lack of a theoretical justification showing why Adam is faster than SGD [Kunstner et al., 2023], Adam has had a huge practical impact and has replaced SGD for training transformers. In fact, if we always required a theoretical justification of the superiority of a method, Adam would probably not have been published, while it is becoming one of the most cited works of all time. Another example that is very appropriate for our case is the spectral projected gradient by Birgin et al. [2000]. In this paper, the authors introduced nonmonotonicity and clever step sizes in a different setting and showed that the new method works much better in practice without showing a faster convergence rate. This paper was published in a top optimization journal and has more than one thousand citations. References not in the main paper: [Reddi et al., 2018] Reddi, S. J., Kale, S. & Kumar, S. On the convergence of Adam and beyond. 6th International Conference on Learning Representations (ICLR), pp. 1–23 (2018).
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a new line search method for determining the step size in SGD within the interpolation regime. In contrast to the previous approach called SLS, which relies on the monotonically decreasing Armijo condition, the proposed method adopts the non-monotone Zhang & Hager line search. The authors establish convergence guarantees for the proposed Stochastic Zhang & Hager line search when an upper bound is placed on the initial step size, considering strongly convex, convex, and PL problems. Additionally, they introduce several enhancements to improve empirical performance: (1) utilizing the Stochastic Polyak Step (SPS) to set the initial step size of the line search and (2) introducing a new resetting technique to reduce the number of backtracking steps. Strengths: - Numerical experiments are extensive and the performance of PoNoS looks quite promising. It is great to see a provably convergent optimization algorithm (with other heuristics) working well in large-scale experiments while incurring only a minor computation overhead. - The proof techniques look new and interesting. Weaknesses: - There are certain expressions within theorems that may lead to confusion. I have elaborated on these concerns in the "Questions" section of my review. - The Polyak step and the resetting technique are only heuristics. It would be valuable to provide further analysis regarding how these heuristics influence the convergence properties of the algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - While reading the main text, I was initially skeptical about Theorems 1 and 2 due to their requirement of Lipschitz continuity for function $f$. It has been shown that the bounded gradient assumption contradicts the strong convexity [1]. Upon reviewing the proof in the appendix, it seems that the Lipschitz continuity of $f$ may not be necessary for Theorems 1 and 2. 
Additionally, in Theorem 3, it is unclear why the smoothness of $f$ needs to be separately assumed given that $f_i$ is $L_i$-smooth. It would be helpful if the authors could provide clarification. - Furthermore, the notation "LC" used in Theorems 2 and 3 lacks a definition in the main text. The corresponding definition can be found in Appendix B. - In theory, how can you ensure that backtracking (Lines 10 - 12) halts in finitely many steps? [1] Nguyen, Lam, et al. "SGD and Hogwild! convergence without the bounded gradients assumption." International Conference on Machine Learning. ICML, 2018. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
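For reference, the nonmonotone rule of Zhang and Hager [2004] mentioned in this review replaces the last loss in the Armijo test with a weighted average of all past losses. A minimal sketch of that reference value (our own illustration, not the paper's code; the function name is hypothetical):

```python
def zhang_hager_reference(losses, eta=0.9):
    """Zhang & Hager [2004] reference value C_k, computed via
        Q_k = eta * Q_{k-1} + 1,
        C_k = (eta * Q_{k-1} * C_{k-1} + f_k) / Q_k,
    with Q_0 = 1 and C_0 = f_0. eta = 0 recovers the monotone rule
    (C_k = f_k), while eta -> 1 averages over the entire loss history."""
    Q, C = 1.0, losses[0]
    for f in losses[1:]:
        Q_next = eta * Q + 1.0
        C = (eta * Q * C + f) / Q_next
        Q = Q_next
    return C
```

At the extremes, `eta=0.0` returns the last loss, and `eta=1.0` returns the plain mean of all losses seen so far.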
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the accurate comments and the time spent reading the paper. Weaknesses: Additional studies on the Polyak step alone show its connections to the "better model" of Asi and Duchi [2019] in Gower et al. [2021] and to the "passive aggressive" optimization framework [Gower et al. 2022]. However, we agree with the reviewer that it would be valuable to study what theoretical influence the Polyak step has when employed as an initial step for line search methods. Regarding the new resetting technique, we obtained some preliminary results linking $l_k$ and $l_{k-1}$, but they were not conclusive and were left out of the paper. We will consider both these directions in future works. Questions: · We would like to thank the reviewer for the very constructive feedback; we didn't realize that the assumption on the Lipschitz continuity of $f$ may not be necessary. We will study this extension in future works. Moreover, it is true that the smoothness of $f$ is not needed once we assume that of the $f_i$. The reason for this redundancy was just that of defining $L$ ($L$ would be upper-bounded by the average of the $L_i$). · We noticed this issue and replaced the notation "LC" directly with "Lipschitz smoothness". · The fact that backtracking halts in finite steps is a consequence of Lemma 1 from the supplementary. This lemma shows that the step size yielded by the backtracking algorithm has a lower bound. This, together with the fact that the initial step size has an upper bound, shows that the number of backtracks is bounded. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I am keeping my current rating to champion this paper.
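The finite-halting argument for backtracking given in the rebuttal above can be illustrated with a minimal monotone backtracking sketch (our own simplification of a generic Armijo backtracking loop, not the paper's nonmonotone variant; all names are hypothetical). For an L-smooth objective the Armijo test must pass once the step drops below 2(1 - c)/L, so a bounded initial step allows only finitely many backtracks.

```python
def backtrack(f, grad, x, step0, beta=0.5, c=0.5, max_iter=100):
    """Shrink the step by beta until the Armijo sufficient-decrease test
    holds. L-smoothness guarantees acceptance once step <= 2*(1 - c)/L,
    so with a bounded initial step the loop terminates after finitely
    many backtracks (max_iter is only a safety cap)."""
    g = grad(x)
    g_sq = g * g
    fx = f(x)
    step, backtracks = step0, 0
    while f(x - step * g) > fx - c * step * g_sq and backtracks < max_iter:
        step *= beta
        backtracks += 1
    return step, backtracks

# Quadratic f(x) = 2 x^2 (so L = 4): with c = 0.5 the test is guaranteed
# to pass once step <= 2*(1 - 0.5)/4 = 0.25, reached after two halvings
# from step0 = 1.
step, n_backtracks = backtrack(lambda x: 2.0 * x * x, lambda x: 4.0 * x, x=1.0, step0=1.0)
```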
Summary: The paper presents a proposed Polyak nonmonotone stochastic (PoNoS) method which combines a nonmonotone line search with a Polyak initial step size. It builds on the work of Vaswani et al. [2019] by modifying the monotone line search to incorporate a nonmonotone approach. Strengths: Originality: The introduction of nonmonotone line search applied to Deep Learning is noteworthy. Clarity: The paper is generally well-written. However, the authors should emphasize their specific methodological innovation and contribution in the conclusion section to highlight the significance. Quality: The overall quality is good. The paper demonstrates the outperformance of the proposed methods over other methods in regard to runtime. Significance: The experiments show the advantages of the methods in terms of convergence speed and runtime when applied to Deep Learning. Weaknesses: The authors admitted the adverse impact on convergence speed of the proposed method. This matter should be further investigated to have a more comprehensive intuition of the robustness of the method. Additionally, the authors should conduct a further comparison with the state-of-the-art line search approaches and the base model of Vaswani et al. [2019]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Lines 12-23 are about a reduction of the number of backtracks to zero. Can you specify which parts of the experiments support this argument? 2. Did you compare with the base model of Vaswani et al. [2019]? 3. Can you explain further or provide intuition about the observation in lines 317-319? 4. In figure 4, data for SLS-prec, SPS-prec and PoNoS-prec seems to be missing. Is there a reason for not including this data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do address some of the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments and the time spent reading the paper. Strengths: We will follow the reviewer's suggestion and highlight our contributions in Section 6. Weaknesses: · We are not sure we understood the comment of the reviewer regarding the adverse effect on the convergence speed. We assume the reference is to our sentence "Our theory matches its monotone counterpart despite the use of a nonmonotone term" (line 367). What we meant there is that despite the convergence proof being more complicated, our method still achieves the same rate as its monotone counterpart. The additional complexity comes from the fact that nonmonotone methods only ensure the decrease of the (mini-batch) function value every $W$ iterations, instead of every iteration. In practice, this is a great advantage since it provides the line search with more freedom of choice. Despite the more complex analysis, nonmonotone rates have always been shown to be consistent with those of their monotone counterparts [Grippo et al. 1986, Raydan 1997, Dai 2002, Zhang and Hager 2004], and the same can be said for PoNoS. For a discussion on provable advantages of one method over another, see the general response to all reviewers. · The comparison with Vaswani et al. [2019] was reported in Figures 1, 3 and 4 of the main paper and in the corresponding Figures I, III and IV of the supplementary materials. The method is there denoted SLS. Additional experiments on other stochastic line search methods are reported in Figure VIII of the supplementary. Questions: 1. This argument is supported in Figure 2 and in the corresponding Figure II of the supplementary. Moreover, a zoom-in on the amount of backtracks is reported in Figure IX and Section E.3 of the supplementary. 2. See the reply above regarding the comparison with Vaswani et al. [2019]. 3. 
We would like to thank the reviewer for this question; there is a mistake in the sentence: instead of "In fact, we can notice that the additional per-epoch runtime of PoNoS in the first 5-25 epochs is less than one half that of the later stage of the training." it should have been "In fact, ... is less than double that of the later...". Also, what we meant is that the time of the last forward step (which in the case of "cifar10 | resnet34" is 4s/#batches-per-epoch) is less than that of the second forward step (6s/#batches-per-epoch). This observation supports the thesis that the first forward step is more expensive than the others, because of the loading of the data into the GPU memory. We will clarify this sentence and add a more rigorous timing comparison. 4. We understand the reviewer's confusion here: in Figure 4 we reported a convex experiment (first column) and two experiments on transformers (last two columns). SLS_prec, SPS_prec and PoNoS_prec were not tested on convex problems. We will clarify this point. References not in the main paper: [Dai, 2002] Dai, Yu-Hong. "On the nonmonotone line search." Journal of Optimization Theory and Applications 112 (2002): 315-330. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions.
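As a concrete illustration of the nonmonotone acceptance rule discussed above (decrease enforced against the maximum of the last $W$ function values rather than the current one), here is a minimal sketch; the function name, constants, and plain-GD driver are illustrative and are not the actual PoNoS implementation:

```python
import numpy as np

def nonmonotone_backtracking(f, grad, x, history, W=10, c=1e-4,
                             eta0=1.0, beta=0.5, max_backtracks=50):
    """Backtracking with a Grippo-style nonmonotone Armijo rule: a step is
    accepted when the new value falls below the MAX of the last W function
    values (minus a sufficient-decrease term), not below f(x) itself."""
    g = grad(x)
    ref = max(history[-W:])  # reference value over the last W iterations
    eta = eta0
    for _ in range(max_backtracks):
        x_new = x - eta * g
        if f(x_new) <= ref - c * eta * np.dot(g, g):
            break
        eta *= beta  # backtrack
    return x_new, eta

# Usage on a toy quadratic f(x) = ||x||^2 / 2
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x
x = np.array([3.0, -4.0])
history = [f(x)]
for _ in range(20):
    x, _ = nonmonotone_backtracking(f, grad, x, history)
    history.append(f(x))
```

The extra "freedom of choice" mentioned in the rebuttal is visible in the `ref` term: early steps may temporarily increase the loss relative to the previous iterate, as long as they stay below the recent maximum.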
null
null
null
null
Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective
Accept (oral)
Summary: The paper mainly focuses on theoretically proving the effectiveness of CoT in autoregressive Transformer models for solving fundamental mathematical and decision problems through generating intermediate steps. It demonstrates that any finite-depth Transformer model cannot directly output correct answers to these tasks unless the model size grows super-polynomially with the input length. The paper also includes experimental validation using a constructed dataset. Strengths: 1. The paper provides a solid theoretical proof of the effectiveness of CoT, making a valuable contribution to further exploring the underlying mechanisms of CoT operation. 2. The theoretical proofs presented in the paper are comprehensive and well-organized. 3. The research problem addressed in the paper is clearly significant. CoT has shown strong empirical performance but lacks theoretical analysis. Thus, this study is timely and necessary. Weaknesses: 1. The experimental section seems insufficient; for example, the length extrapolation experiment only provides results for one task. 2. The exploration of model architectures is lacking. (see my questions) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Is there a specific reason for choosing the autoregressive Transformer? Were other models, such as encoder-decoder architectures like T5, considered? Would CoT be equally effective in those models? 2. In the experiments, a Transformer with three layers achieves near-perfect accuracy on each task, but the proofs utilize five layers. How much does the number of layers affect the performance of CoT, and are there necessary layer numbers for different tasks? Additionally, the difficulty of the current datasets seems insufficient to explore this question. 3. In the Length Extrapolation section, only Arithmetic Expression Extrapolation is explored. Does CoT also possess the ability to learn the underlying mechanisms for other tasks? 
The conclusion seems to lack sufficient experimental verification and analysis. 4. Have you considered the impact of the quality of intermediate steps on the performance, such as inserting some invalid intermediate steps or omitting a certain number of intermediate steps? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Section 7 of the paper provides a thorough discussion of the limitations. Additionally, while CoT demonstrates excellent performance across various tasks, this paper mainly focuses on mathematical tasks and could further explore other types of tasks, such as logic reasoning tasks represented in natural language. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 7MZe for the careful reading, thoughtful inquiries, and positive feedback. Below, we are happy to provide further elaboration on each of the points you raised: > Q1: Is there a specific reason for choosing the autoregressive Transformer? Were other models, such as encoder-decoder architectures like T5, considered? Would CoT be equally effective in those models? Thanks for the question. We chose the autoregressive Transformer as it is the de facto standard for LLMs (e.g., GPT). It is also simpler than encoder-decoder architectures, which makes our analysis and proof cleaner. Yet, our theoretical results can be easily transferred to an encoder-decoder model (like T5) using the following argument. (a) On one hand, given a bounded-depth polynomial-size log-precision encoder-decoder Transformer, its parallel complexity is still bounded by $\mathsf{TC}^0$, so our negative results hold. (b) On the other hand, any finite-depth autoregressive Transformer can be mimicked by a finite-depth encoder-decoder Transformer of roughly the same size, since causal masking for the input sequence can be mimicked through the use of positional encoding and joint attention can be mimicked by an integration of cross-attention and self-attention. Thus, the results in this paper are not exclusive to autoregressive Transformers. We can add these discussions to the paper if you think they are helpful. > Q2: How much does the number of layers affect the performance of CoT, and are there necessary layer numbers for different tasks? Additionally, the difficulty of the current datasets seems insufficient to explore this question. Thanks for the question. We studied simple tasks in the paper and found that from both theoretical and practical perspectives, a shallow LLM with CoT already has enough capacity to solve them. We agree and believe deeper LLMs are required to solve more complicated tasks with CoT. 
For example, it can be easily imagined that a deeper LLM is needed to complete a task if it is composed of solving linear equations and arithmetic with dynamic programming. We are working on more general and advanced reasoning tasks and investigating the dependency between task and LLM parameter complexity. > Q3: Does CoT also possess the ability to learn the underlying mechanisms for other tasks beyond Arithmetic Expression Extrapolation? Thanks for the question. During the tight discussion period, we have completed experiments for the longest increasing subsequence (LIS) task. The results of the LIS task are shown in the table below. Here, we train the autoregressive Transformer model with various input lengths ranging from 1 to 80, and test the model using longer input sequence lengths unseen during training. It can be seen that the model can still extrapolate well to longer sequences. | **Length** | 82 | 84 | 86 | 88 | 90 | | ------------ | -- | -- | -- | -- | -- | | **Accuracy** | 94.3% | 92.8% | 90.7% | 89.6% | 87.4% | We are currently running experiments on the linear equation task. But it cannot be finished before the rebuttal deadline given very limited GPU memory resources (the required CoT length is very long even for solving linear equations with 7 variables). We will report the performance when it is finished using more advanced GPUs. > Q4: Have you considered the impact of the quality of intermediate steps on the performance, such as inserting some invalid intermediate steps or omitting a certain number of intermediate steps? This is a good catch as real-world language model training data often involves corrupted or omitted intermediate steps. To assess this, we conducted experiments on the arithmetic task with varying rates of corruption and omission, denoted as γ. Specifically, γ = 0.1 indicates that we skip 10% intermediate steps and corrupt 10% steps with a single-token random replacement. 
The table below presents the experimental results: | γ | Accuracy | | ---- | -------- | | 0.1 | 98.5% | | 0.2 | 97.6% | | 0.3 | 95.8% | The results clearly demonstrate the robustness of training with CoT demonstrations, showcasing the model's ability to maintain high accuracy even in the presence of imperfect intermediate steps in the training datasets. We will add these robustness evaluations to the final version of the paper. We hope these clarifications can address your questions satisfactorily and we are happy to delve further into any of these aspects.
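For concreteness, the corruption/omission protocol described above (rate γ for skipping a step and for single-token random replacement) could be implemented along the following lines; this is only a plausible sketch of such a procedure, not the authors' actual code, and `perturb_cot`, the toy vocabulary, and the example steps are all invented for illustration:

```python
import random

def perturb_cot(steps, gamma, vocab, rng=random.Random(0)):
    """Illustrative perturbation of a CoT demonstration: each intermediate
    step is dropped with probability gamma, and each surviving step has one
    token replaced by a random vocabulary token with probability gamma."""
    out = []
    for step in steps:
        if rng.random() < gamma:       # omit the step entirely
            continue
        tokens = step.split()
        if rng.random() < gamma:       # corrupt a single token
            i = rng.randrange(len(tokens))
            tokens[i] = rng.choice(vocab)
        out.append(" ".join(tokens))
    return out

# A toy arithmetic CoT perturbed at rate gamma = 0.3
steps = ["( 3 + 4 ) * 2 = 7 * 2", "7 * 2 = 14"]
noisy = perturb_cot(steps, gamma=0.3, vocab=["1", "2", "+", "*"])
```

With γ = 0 the demonstration is returned unchanged, and with γ = 1 every step is dropped, matching the two extremes of the corruption rate.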
Summary: The paper contributes to theoretical and empirical understanding of Chain-of-thought, i.e. intermediate process generation to assist desired output generation. In the theory part, authors show - log-precision Transformers with bounded depth cannot solve simple math tasks (arithmetic, solving linear equations) unless the model size grows super-polynomially w.r.t. input length. The proof is based on circuit complexity and a bottleneck of parallel complexity (assuming $TC^0 \neq NC^1$). However, a constant-size Transformer can solve both by generating common math intermediate steps. - for dynamic programming (DP), Transformers with CoT can generate the correct answer intuitively. In contrast, it is proven that Context-Free Grammar Membership Testing cannot be solved with a bounded-depth Transformer of polynomial size. Experiments then validate these results using two math and two DP tasks, showing CoT > no CoT (with more layers even). Strengths: - CoT has been important in language processing, and its theoretical and empirical understanding is important and timely. - As far as I can tell, both the theory and experiment parts of the paper are solid and well-written. - I like the connection to circuit complexity theory. Though the conclusion and "intuition" are not hard to grasp, the theory part is technical and non-trivial to establish. Weaknesses: - As noted by authors, it's still limited to expressivity (not learning with large corpora, large model). - Some missing references around Dyck language recognition: Self-attention networks can process bounded hierarchical languages. ACL 2021 RNNs can generate bounded hierarchical languages with optimal memory. EMNLP 2020 Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Seems "Self-attention networks can process bounded hierarchical languages" proved that (2-layer) Transformers can recognize Dyck, yet this paper proves CFG recognition is in general hard? Some discussion would be nice. 
- Would results hold for RNN or other architectures? Some discussion would make the paper stronger. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Authors write about limitations fairly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer P6n5 for the positive feedback, valuable suggestions, and two insightful questions regarding the related work and other architectures. Below, we would like to give detailed responses to each of your comments and questions. **Regarding related work**. Thanks for the valuable suggestions on the related work. We will follow your advice and include these two papers in our related work section. These two papers focus on the Dyck language, a special case of the CFG. The first one shows that hard-attention Transformers can recognize the Dyck language with bounded depth. Moreover, a two-layer soft-attention Transformer can generate the Dyck language. The second paper uses RNNs to generate the bounded-depth Dyck language. It demonstrates that RNNs can generate the Dyck language with hidden units of reasonable size. Furthermore, the paper proves that the size of the hidden units is optimal. The conclusions of these two papers complement our theorems on the general CFG. We also thank you for posing two insightful questions that deserve careful discussion. > **Question**: Seems "Self-attention networks can process bounded hierarchical languages" proved that (2-layer) Transformers can recognize Dyck, yet this paper proves CFG recognition is in general hard? Yes, the problem of general CFG recognition is much harder than the special case of the Dyck language. The paper “Self-attention networks can process bounded hierarchical languages” employs a Transformer encoder that is similar to the auto-regressive Transformer without CoT, except for the causal mask. The complexity of both the Transformer encoder and the autoregressive Transformer without CoT is upper bounded by the circuit complexity $\mathsf{TC}^0$. For the special case of the Dyck language, it has been proved that the recognition problem is actually in the complexity class $\mathsf{TC}^0$ [1]. 
Therefore, it is possible for a transformer model to recognize the Dyck language. However, general CFG recognition is $\mathsf{P}$-complete, which is intrinsically hard for a Transformer without CoT to solve. [1] On the relative complexity of some languages in NC1. Barrington et al. > **Question**: Would results hold for RNN or other architectures? First, most of the theorems in this paper can be naturally extended to other popular Transformer settings, such as the encoder-decoder architectures (e.g., T5). However, for RNNs this is actually not the case. We can show that RNNs cannot generate the CoT sequence using the same format proposed in our paper for the arithmetic formula task and the linear equation task unless the hidden dimension of the RNN is at least $\Omega(\frac{n}{\log n})$, where $n$ is the input length. The reason is as follows. When the RNN generates the first equal sign, it has to compress the input sequence into a hidden state of $O(D\log n)$ bits, where $D$ is the hidden dimension and each element is represented by $O(\log n)$ bits (by definition of log-precision). On the other hand, the first step of the CoT needs to output a sequence of length $O(n)$, which contains $O(n)$ numbers. Therefore, there are at least $2^{\Omega(n)}$ different output sequences, and thus by the pigeonhole principle, to be able to generate all $2^{\Omega(n)}$ different output sequences, we must have $D=\Omega(\frac{n}{\log n})$. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks! Would be nice to see these incorporated into the revision. --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thank you! We will definitely incorporate them into the next version of this paper.
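The pigeonhole argument in the RNN discussion above can be written out with explicit constants (a back-of-the-envelope derivation; the constants $c, c' > 0$ are generic placeholders, not values from the paper):

```latex
% Hidden state at the first equal sign: D elements of O(\log n) bits each, so
\#\{\text{distinct hidden states}\} \;\le\; 2^{c D \log n}.
% The first CoT step must spell out O(n) numbers, hence
\#\{\text{required output sequences}\} \;\ge\; 2^{c' n}.
% Since the continuation is a function of the hidden state alone,
2^{c D \log n} \;\ge\; 2^{c' n}
\quad\Longrightarrow\quad
D \;\ge\; \frac{c'}{c} \cdot \frac{n}{\log n} \;=\; \Omega\!\left(\frac{n}{\log n}\right).
```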
Summary: This paper presents various separation results, showing that a Transformer with CoT can solve certain formally-defined reasoning tasks, but a Transformer *without* CoT cannot (assuming bounded depth). This sheds light on the power of CoT. The formal results are supplemented with empirical results that support the claims. Strengths: - Understanding CoT is an important and timely question. This paper is therefore tackling a significant question. - To the best of my knowledge, this is the first paper to provide a theoretical explanation of the power of CoT (i.e., originality). - In general, the paper is quite clear and high-quality, though I think the notation could be improved (see below). Weaknesses: - I think calling it CoT but focusing on "CoT generation" is perhaps misleading. Many people think of CoT as a prompting technique. In my opinion, the generation aspect that this paper focuses on is actually bigger/more important than CoT suggests, and I actually think that - although buzzwordy - CoT diminishes the general power of generation that this paper is getting at (i.e., this paper transcends CoT). I would suggest looking into alternative, more general titles. - Section 4.1 has a lot of notation, including double subscripts. I think there's a significant lack of accessibility created by the heavy notation. I would really encourage the authors to think about whether the notation could be simplified. I think doing so could substantially increase the long-term impact of this paper. - Relatedly, equations (4) and (5) would be clearer if their new objects were introduced and explained conceptually before jumping into equations (4) and (5). - Overall, I think the paper could do a better job explaining the significance of the theoretical results. Although this is an important problem to study theoretically, and this paper does a good job of initiating that study, it's not entirely clear what can be gained conceptually from this analysis. 
I think many people already intuitively grasp that generating more tokens gives an LLM more power, and various methods that allow more tokens to be generated before arriving at a final answer are more powerful. It would be nice to understand whether there's a conceptual message here that goes any deeper than the aforementioned intuition. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - See CoT naming comment above. Do you agree? - See notation comment above. Is there a way to simplify the notation in Section 4.1? Is there a reason it has to be this complicated? If it seems necessary, maybe there's a simpler version that can be presented in the body of the paper, with the full version moved to the appendix? - My main question is also discussed in the Weaknesses section. Is there any conceptual takeaway from this theoretical analysis beyond "generating more tokens is more powerful"? Even without that, I think this is a strong paper. However, I think this could be an especially powerful paper if the intuitions from the theoretical analysis could be further synthesized in this direction (and made accessible to readers). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do a nice job of discussing some of the limitations of this work. There is no discussion of societal impact, but I do not think this is a problem for this particular paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer QoTB for the careful reading, positive feedback, valuable suggestions regarding presentation, and detailed comments. Below, we would like to give detailed responses to each of your comments and questions. **Regarding the scope of the paper**. Thanks for the suggestion. We connect our work to "CoT" because our theory suggests that to solve math/reasoning problems, "generating intermediate derivations in an autoregressive way" is easier and much more parameter-efficient. As this generation process is regarded as "Chain of Thought" by the community, we leverage the concept during writing. On the other hand, we fully agree that intrinsically, this work is more about the way of using LLMs rather than how to develop specific CoT prompts. We will make this clear in the introduction and are happy to illustrate more and try to come up with a more accurate and appropriate title. **Regarding notations in Section 4**. We appreciate the constructive suggestions and will revise our manuscript accordingly. In particular, we intend to simplify the notations in Section 4.1 by using a vectorized notation. For example, in equation (5), we will use $s_{\mathbf{g}(i)}$ to represent the vector $(s_{g_1(i)},\cdots,s_{g_J(i)})$. In this way, equation (5) can be rewritten in a more concise form, avoiding the problem of double subscripts. The original equation and the updated equation are shown below: $$ \mathsf{dp}(i)=f(i,s_{g_1(i)},\cdots,s_{g_J(i)},\mathsf{dp}(h_1(i)),\cdots,\mathsf{dp}(h_K(i))) $$ $$ \mathsf{dp}(i)=f(i,s_{\mathbf{g}(i)},\mathsf{dp}(\mathbf{h}(i))) $$ > **Question**: Is there any conceptual takeaway from this theoretical analysis beyond "generating more tokens is more powerful"? ...... However, I think this could be an especially powerful paper if the intuitions from the theoretical analysis could be further synthesized in this direction (and made accessible to readers). Thanks for raising this good question. 
It makes us realize that there is still much room for us to improve the writing of the paper. Indeed, our theoretical analysis has a bunch of conceptual takeaways beyond the surface conclusion that "generating more tokens is more powerful". We detail these takeaways below: * **The key role of self-attention in CoT generation**. Our analysis points out two key components that enable CoT generation, which we call **COPY** and **MEAN**. Specifically, the COPY operation extracts the hidden information of a specific previous position that satisfies certain conditions, e.g., extracting the hidden embedding of the last equal sign (=), or the last non-number token. The MEAN operation averages the hidden embedding for a set of previous positions that satisfy certain conditions, e.g., the average hidden embedding of all *number* tokens between the last equal sign and the current token. We prove that **both COPY and MEAN can be realized by a self-attention layer**. We then use exactly the two operations to build entire **parallel algorithms** that can generate CoT sequences for all math and DP problems. Our result highlights the crucial role played by the self-attention and Transformer architecture, and may inspire future research on architectural design in Large Language Models (e.g., more efficient LLMs). * **Regarding the length of CoT generation**. Our theoretical analysis also gives insights into how many intermediate steps are needed in the CoT generation. In particular, when the Transformer model generates a CoT sequence, we have to ensure that the complexity of generating each step is within $\mathsf{TC}^0$. Using this argument, one can easily check whether the length of a specific CoT format is sufficient. For more complex problems, we need to decompose it into more subproblems (or more steps), so that each step is within the complexity of $\mathsf{TC}^0$. 
Moreover, we give a standard criterion to check whether the $\mathsf{TC}^0$ expressivity is satisfied: if each CoT step can be represented by a finite composition of **COPY** and **MEAN** operations, then each CoT step will be within the complexity of $\mathsf{TC}^0$. * **Regarding dynamic programming**. We theoretically show that CoT allows Transformers to solve general DP problems. Thus, as a direct consequence, Transformers are even capable of solving extremely hard problems that are $\mathsf{P}$-complete. This provides a deeper understanding of why popular large language models can be so powerful in reasoning. We believe our proposed dynamic programming framework may also be useful for studying and measuring the expressive power of other architectures in the future. We will incorporate these discussions into the next version of our paper. We hope our response can clarify your concerns and will improve the paper writing according to your suggestions and questions. We are happy to go into more detail regarding any of the above questions and we look forward to your reply. --- Rebuttal Comment 1.1: Comment: Thanks for the very thoughtful response! I think incorporating these takeaways into the body of the paper and highlighting them as contributions significantly increases the impact of the paper. To what extent do you think this theoretical construction is capturing what's happening empirically? Do you have any evidence in either direction that you could provide in the body of the paper? I strongly support highlighting these takeaways in the body either way, but I also want to make sure the community doesn't latch onto them too prematurely (especially since the focus here is on what can be represented, not what's actually learned through gradient descent). Although negative evidence, to whatever extent it exists, seems like an important caveat, positive evidence would certainly be a very compelling addition to this paper. 
With the additions provided in the authors' response, I'm raising my score to a 7 (under the assumption that they'll be included in the final version). --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thank you for your positive feedback and additional comments, and for asking the good question. We are happy to provide additional responses to the insightful question you raised: > Question: To what extent do you think this theoretical construction is capturing what's happening empirically? Do you have any evidence in either direction that you could provide in the body of the paper? We believe our theoretical construction can to some extent capture what's happening empirically and is more meaningful than several prior works. This is due to the following reasons. First, we use log-precision Transformer instead of infinite precision, which can be precisely implemented in modern GPU architectures in practice. Moreover, the values of weight elements in our constructions are often not large and thus are likely to be learned by gradient descent. Second, the size of the Transformer architecture in our construction is reasonable. For example, we only use no more than 5 layers, 5 heads and a hidden dimension of $O(1)$. Third (and most importantly), prior work has pointed out that the COPY operation, which forms the basic building block in our construction, does appear empirically [1]. Specifically, the authors proposed the concept of "induction head", which is a module that can find the position in the sentence where the current token previously appeared and then extract the next token after that position. The authors visualized the attention score matrix for various tasks and found that the Transformer model is indeed performing the induction head. Therefore, based on the above points, we argue that the theoretical construction is capturing some intrinsic characteristics in practical scenarios. 
Nevertheless, the above evidence is still kind of "indirect" since we cannot prove that gradient-descent-based optimizers can learn the construction. But we believe adding the above discussion into the main paper, especially the connection to the "induction head" mechanism, can benefit our community and may further enhance the impact of our paper. For other suggestions, we will definitely incorporate these insights into the final version of our paper. Finally, we really appreciate your effort in making our paper better and we will be more than happy to discuss more if you have other questions or suggestions. Thank you! [1] In-context Learning and Induction Heads. Olsson et al.
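To make the COPY/induction-head connection above concrete, here is a toy numpy sketch of how a single softmax attention head can approximate the COPY operation (retrieving the value stored at the last position satisfying a condition) as the score scale grows; `copy_last_match` and the example sequence are illustrative inventions, not the paper's exact construction:

```python
import numpy as np

def copy_last_match(values, is_match, beta=50.0):
    """One softmax attention head that (approximately) COPYs the value at the
    last position where is_match is 1: the score grows with both the match
    indicator and the position, so as beta -> inf the head concentrates on
    the most recent matching position (hard attention in the limit)."""
    n = len(values)
    pos = np.arange(1, n + 1) / n                  # normalized positions in (0, 1]
    scores = beta * (is_match * (1.0 + pos))       # matches dominate; later matches win
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ np.asarray(values, dtype=float)  # weighted average ~= copied value

# Token stream with two '=' positions; COPY should retrieve the value at the last one
values   = np.array([1., 0., 2., 10., 3., 0., 4., 20.])
is_match = np.array([0,  0,  0,  1,   0,  0,  0,  1 ])  # indicator of '=' tokens
copied = copy_last_match(values, is_match)  # approximately 20.0
```

The MEAN operation described above admits a similar sketch: giving all matching positions the same score makes the softmax uniform over them, so the head returns their average instead of the last one.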
Summary: This paper studies the theoretical power of Chain-of-Thought (CoT) prompting. In particular, this paper mathematically confirms that two well-chosen tasks (e.g., arithmetic and linear equations) and the problem of Dynamic Programming are beyond bounded-depth Transformer models without CoT (unless their size grows prohibitively large); with CoT and generated intermediate derivations, those problems become solvable. The mathematical derivations are under mild assumptions, and empirical results confirm the mathematical study (on four representative tasks). Strengths: Given the prevalence of CoT, theoretical study of the limit of CoT becomes extremely valuable. This paper overcomes several limits in previous studies (e.g., assuming infinite precision) and focuses on the setting of autoregressive Transformers, which is close to the scenario of real-world LLMs. Moreover, the proposed empirical tasks are illustrative and easily reproducible. Weaknesses: I don't find any noticeable weakness in this paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Line 245, the success of solving Dynamic Programming problems critically depends on the input sequences being laid out in a topological order. In the two DP experiments (Longest Increasing Subsequence and Edit Distance), the topological order appears to be learnable from the CoT dataset. Could there be any theoretical study about how hard it is to learn this order? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer C8fV for the positive feedback, appreciation for our work, and insightful questions. > **Question**: Line 245, the success of solving Dynamic Programming problems critically depends on the input sequences being laid out in topological order. In the two DP experiments (Longest Increasing Subsequence and Edit Distance), the topological order appears to be learnable from the CoT dataset. Could there be any theoretical study about how hard it is to learn this order? We thank the reviewer for the insightful question. In our paper, we mainly study CoT from an expressivity perspective, i.e., whether there exists a model that can generate the topological ordering of the DP states and find the correct solution, and we prove that the answer is yes. Your question relates to the generalization ability of CoT training, i.e., why the model behaves well on unseen data. For this question, we can offer some intuitive insights into why generating the topological ordering is easy for an autoregressive Transformer. In our formulation, the topological ordering can be determined by a function $F$ that takes the current state $i_k$ and the problem scale $n$ as inputs and outputs the next state $i_{k+1}$, i.e., $F(i_k,n)=i_{k+1}$. Assumption 4.4 guarantees that $F$ can be efficiently approximated by a constant-sized perceptron with GeLU activation, by which **the topological order can be easily generated in an autoregressive way by this simple function $F$**. In the Appendix, we give an explicit expression of the function $F$ for each DP problem considered in this paper, all of which are simple: $$ \text{LIS}: F((j,k),n)=\begin{cases} (j,k+1)\ \ \text{if}\ \ k<j-1\\\\ (j+1,0)\ \ \text{if}\ \ k=j-1 \end{cases} $$ $$ \text{ED}: F((j,k),(n_1,n_2))=\begin{cases} (j,k+1)\ \ \text{if}\ \ k<n_2\\\\ (j+1,0)\ \ \text{if}\ \ k=n_2 \end{cases} $$ It is easy to see that $F$ can be represented by a small MLP with ReLU (or GeLU) activation. 
Therefore, it may not be difficult for the model to learn the topological order given sufficient training demonstrations. On the other hand, we believe theoretically understanding the generalization ability of the CoT learning process is extremely important, and we leave it as future work. Although there are some works trying to explain the generalization of modern deep learning models [1,2], they may not provide useful insights for the CoT setting. Studying how large language models trained on massive data generalize in their reasoning ability is a fundamental and practical problem that deserves more attention in the future. [1] Li, Yingcong, et al. "Transformers as algorithms: Generalization and implicit model selection in in-context learning." [2] Xu, Keyulu, et al. "How neural networks extrapolate: From feedforward to graph neural networks." --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you very much for the detailed and enlightening reply! I really appreciate the analysis.
Rebuttal 1: Rebuttal: We would like to express our sincere thanks to the reviewers and the area chair for taking the time to review our paper. We have responded to each reviewer's comments separately and will incorporate their suggestions into the next version of our paper. We hope that our response adequately addresses the reviewers' concerns, and we're happy to provide more details about any questions they may have.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy
Accept (poster)
Summary: The authors propose a new analysis method that unifies the analysis of noisy gradient descent-based algorithms, where the noise is added to the matrix factorization, which provides a tight analysis of both the algorithms simultaneously, and hence would give the analysis for convergences for a general class of such matrix-factorization based gradient descent algorithms. Based on this new analysis, the authors also propose a new matrix factorization objective to further tighten the upper bound and empirically show it. Strengths: - The method of analysis is based on a very simple yet essential idea and it opens the possibilities of a unified method of analysis for a good class of algorithms which are based on gradient descent algorithms based on matrix factorization. - The transition from proposing a tighter bound to a new matrix factorization objective shows powerful empirical applications of this class of algorithms to private optimization. - The paper is theoretically very sound. - Evaluation done on the paper is simple and easy to understand and evaluation has been done not only with respect to utility but also to visualize the trend that the gradients follow across training. Weaknesses: - The novelty and the reach of this problem are limited to a very specific class of algorithms on a practical scale. - Although the authors have already addressed this in their future work, it is still important to note that the average case analysis for differentially private algorithms might not be a correct metric to judge the performance of the model since the final iterate of the algorithm is returned. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can the authors elaborate on what they exactly want to denote using Figure 1 and maybe provide a more clear diagram for it so that the process can be understood better? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Authors have mentioned most of the limitations in their future work and additional points have been discussed above in the weakness section (novelty and final iterate analysis). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and positive review! Below we address the raised points. ## Weaknesses 1. While our analysis is restricted to the specific class of algorithms, this class of algorithms captures significant advances in differentially private optimization which were recently shown to strictly generalize (and dominate) the major methods of differentially private optimization, practically improving over a popular and widely-used DP-SGD algorithm [1]. Our work analyzes the dynamics of a general matrix mechanism applied to model training with gradient descent, and is therefore applicable to any of the matrix-based mechanisms recently designed for improved scalability. To the best of our knowledge, none of the related work has moved beyond the Frobenius-norm formulation of error, which we show fails to be predictive of optimization performance in a variety of cases. This is to say: our work inherits the applicability, algorithmic and scalability improvements pursued in the rest of the literature, but addresses a fundamentally orthogonal problem. For this reason, we expect the domain of application of the analysis presented here to continue to grow in the near future. [1] Choquette-Choo et al, (Amplified) Banded Matrix Factorization: A unified approach to private training. ## Questions 1. In Figure 1 we aimed to illustrate the workflow of MF-DP-FTRL algorithms. There are two underlying optimization problems: a one-time matrix factorization problem (left) that is performed to obtain a noise correlation pattern B, depending on the workload matrix A and a factorization objective; and an ERM minimization (right) that is data- and loss-dependent, performed using the found noise correlation pattern B. Our work focuses on understanding the right part of this diagram: specifically, we aim to understand which factorization objective one should use for the one-time offline matrix factorization problem.
We will make our diagram clearer in the next version. In particular, we will explain the connection of our diagram to the MF-DP-FTRL algorithm in (4). --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks to the Authors for addressing the questions. I do not have further questions.
Summary: This paper conducted a theoretical study of gradient descent with linearly correlated noise, with strong motivation from the theory and applications of differential privacy (in particular, the popular DP-SGD algorithm). The main contribution is a new convergence analysis based on the idea of "restart iterations". The paper argues that it unifies two previous "incompatible" analyses for two different noisy gradient descent algorithms, namely perturbed gradient descent (PGD) and anti-PGD. Inspired by the new analysis, this paper proposes a new matrix factorization method for noisy SGD. The matrix factorization coordinates noise across different iterations of gradient descent. The new factorization refines a previous method. Experiments show that it compares favorably to prior works. Strengths: * The paper is very well-written and looks both educational and professional. * It tackles an important and popular question of optimization and private learning. The "restart iteration" analysis looks nice. However, since I don't work in this area myself, it is hard for me to evaluate the novelty of the new analysis. Weaknesses: * I am slightly dissatisfied with the hyper-parameter "\tau" required in your new algorithm and analysis (more on this in the question section). Technical Quality: 3 good Clarity: 3 good Questions for Authors: * If possible, it would be nice to discuss why the two analyses for PGD and anti-PGD are incompatible. Also, I am curious if the two different proofs directly inspire your analysis. * Your paper shows that the "restart" in analysis and algorithm design does help. However, could you provide some intuition on why "restart" is beneficial in both theory and applications? Do you have evidence that this method yields tight analysis? * Are there good heuristics to find an appropriate "tau" in practice? Is there a way to come up with an algorithm that is equally good, but does not require the knowledge of the hyperparameter?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations and future directions are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Questions 1. Thanks for your question. Due to the space limitations, we discussed why the two analyses for PGD and anti-PGD are incompatible in Appendix F, where we showed that - using the real iterate sequence $x_t$ we cannot prove the tight convergence bound of Anti-PGD (12); - using the virtual iterate sequence $\tilde x_t$, we cannot prove the tight convergence bound of PGD. We also show in Appendix E.3. that for the PGD algorithm the virtual sequence $\tilde x_t$ provably converges slower than the real iterate sequence $x_t$, making it fundamentally impossible to get tight rates of PGD through $\tilde x_t$. While we do not see any fundamental limitations of analyzing Anti-PGD using the real sequence $x_t$, we are unaware of how to prove a tight rate (12) using the real sequence $x_t$. The two different proofs directly inspired our analysis: our approach combines virtual and real iterates through the restart trick. Every $\tau$ iterations the virtual iterate $\tilde x_t$ is restarted to the real iterate $x_t$, allowing for the tight analysis of both algorithms simultaneously. 2. Our analysis is tighter than the prior rate in (5). Indeed, it holds that $|| \Lambda_{\tau} B||^2_F \leq 4 ||B||^2_F$, making our rates in Theorem 4.7. strictly tighter than the one in (5). Moreover, our derived convergence rate in Theorems 4.6. and 4.7. is tight for some notable special cases of matrix B such as PGD, Anti-PGD and Chess-PGD (discussed in Appendix A.2), while the prior convergence rate based on $||B||^2_F$ cannot be tight for these cases (see discussion in lines 130-137, as well as in Appendix A). Therefore, minimizing a tighter bound based on $||\Lambda_{\tau} B||^2_F$ (under the fixed privacy) should directly translate into better optimization properties than minimizing a bound based on the loose bound $||B||^2_F$. 3.
While in our experiments we found that setting $\tau = T$ gave the best results in practice, our experiments are relatively small-scale. We envision that for larger-scale experiments, one should set $\tau$ as large as possible while computing the factorization of a $\tau \times \tau$ matrix remains computationally feasible. On the other hand, our experiments in Figure 2 revealed a mismatch between the average- and the last-iterate behavior. We believe that analyzing our method for the last-iterate behavior can help to understand which $\tau$ we should pick to achieve the best performance in practice. This question is out of the scope of our work, where we focused on understanding the convergence properties of the average iterate. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thanks to the authors for answering my questions!
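For intuition about the two noise-correlation patterns contrasted in this exchange, here is a small illustrative numpy sketch of our own (not the paper's code, scalar noise for simplicity): PGD corresponds to the identity correlation matrix $B = I$, while Anti-PGD injects new noise and subtracts the previous step's noise, so the noise accumulated in the iterate telescopes:

```python
import numpy as np

T = 6
rng = np.random.default_rng(0)
z = rng.standard_normal(T)          # one i.i.d. noise seed per iteration

# PGD: identity correlation -- each step injects fresh independent noise.
B_pgd = np.eye(T)

# Anti-PGD: inject new noise and subtract the previous step's noise.
B_anti = np.eye(T) - np.eye(T, k=-1)

# The noise accumulated in the iterate after t steps is the prefix sum
# of the injected noise (B z)_s for s <= t.
prefix_pgd = np.cumsum(B_pgd @ z)    # grows like a random walk
prefix_anti = np.cumsum(B_anti @ z)  # telescopes: only the latest seed survives

assert np.allclose(prefix_anti, z)   # accumulated Anti-PGD noise equals z_t
```

The telescoping is why Anti-PGD admits a much tighter bound than the Frobenius norm of $B$ alone suggests, which is the gap the rebuttal's restart-based analysis is designed to capture.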
Summary: This paper studies (stochastic) gradient descent with linearly correlated noise. The paper builds upon recent results on (MF-)DP-FTRL. It highlights the limitations of this method, proposes a restarting trick to improve it, and derives the corresponding analysis. Interestingly, the proposed analysis gives a unified analysis of PGD and Anti-PGD, which are difficult to analyze together otherwise. Theoretical results are confirmed by numerical experiments. Strengths: Overall, I think this paper is a nice paper that perfectly fits the conference. 1. The proposed method identifies the limitations of an (important) existing method and proposes a clever trick to overcome them. 2. Corresponding theory is derived in multiple settings (convex and non-convex), and is confirmed by numerical experiments. 3. Overall, the paper is very pleasant to read (even the appendix), with a clear introduction and clear motivations, and the math seems to be correct. Weaknesses: In my opinion, this paper does not have major weaknesses. I only have a few minor remarks. 1. The method is claimed to work for DP-SGD, but the theory is only derived with full-batch gradients. This is a bit unfortunate since DP-FTRL-like methods aim at providing stochastic DP algorithms that do not rely on amplification. 2. The proposed method seems a bit "handmade": it seems there is something deeper behind it. 3. Example 3.3 is a bit crude and could benefit from a little more detail (on the choice of the pseudo-inverse and the fact that B and C are not square). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The restarting trick essentially states that not all iterates should have correlated noise. Do you think this is a fundamental property of the method or simply a caveat of the analysis? 2. How does the restarting trick affect numerical performance in terms of execution time?
It seems that solving $OPT_F(\Lambda_\tau A)$ could be easier due to the structure of $\Lambda_\tau$: is it the case? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, authors pointed that their analysis does not encompass clipping, and only works for averaged iterate. Some other research directions/limitations have also been mentioned in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and their positive assessment of our paper. ## Weaknesses: 1. In this work we aimed to find a simple theoretical model for which it would be interesting to study the effect of linearly correlated noise on the optimization dynamics. Since the stochasticity in the gradients is not directly linked to the linearly correlated noise, for clarity we decided to omit it from the theoretical analysis. Our model serves as a proxy to the real experimental setup. We verified the effectiveness of our proposed method experimentally, justifying the simplifications made. 3. Thanks for your comment. We excluded the details due to space limitations. Upon your suggestion, we will add more explanations of Example 3.3. in the next version, in the appendix or in the main text if space permits. ## Questions: Thanks for your interesting questions! 1. We believe that there is something deeper going on. Our analysis was motivated by the intuition that not all the iterates should have correlated noise: intuitively, if the noise was added a large number $T_0$ of iterations ago, the iterate $x_t$ should move sufficiently far from $x_{t - T_0}$, making the old noise added at iteration $t - T_0$ effectively uncorrelated. However, this is only an intuitive explanation, and more investigation is needed to understand this question in detail. 2. Thanks for your question. Because of the repetitive pattern of the matrix $\Lambda_{\tau}$ (see Appendix B), we need to perform the optimization only for a $\tau \times \tau$ block instead of the $T \times T$ matrix $A$, repeating the found correlation pattern $B_{\tau \times \tau}$ $T / \tau$ times. See also our PDF attached for reviewer dQeP for an illustration of the repetitive pattern in $B$. We will include these details in the next version. --- Rebuttal Comment 1.1: Comment: Thank you for your answer and for the additional precisions. I don't have further comments.
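The repetitive block structure described in answer 2 can be sketched with a small hypothetical numpy example (the sizes and the lower-triangular stand-in for the $\tau \times \tau$ factor are made up; only the "repeat the block $T/\tau$ times" idea comes from the rebuttal):

```python
import numpy as np

tau, T = 4, 12                       # hypothetical sizes with tau dividing T
rng = np.random.default_rng(1)
B_tau = np.tril(rng.standard_normal((tau, tau)))  # stand-in for the tau x tau factor

# Repeat the small correlation pattern T/tau times along the diagonal,
# so noise is correlated only within blocks of tau consecutive iterations.
B_full = np.kron(np.eye(T // tau), B_tau)

assert B_full.shape == (T, T)
assert np.all(B_full[:tau, tau:] == 0)  # no coupling across different blocks
```

Only the small $\tau \times \tau$ factorization problem ever needs to be solved, which is the computational saving the rebuttal points to when $\tau < T$.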
Summary: This paper develops many techniques to analyze gradient descent convergence with linearly correlated noise, a setting motivated by many DP algorithms. The work derives tighter bounds, and uses the resulting insights to motivate DP-MF and DP-MF+. Strengths: - The paper is structured well, and is very notationally precise for the most part. - The theoretical results are very involved, and I particularly appreciate the authors including several noteworthy special cases. Weaknesses: Clarity: - This work is not very self-contained, and it seems like the authors expect the reader to be very familiar with Denisov et al. and Choquette-Choo et al. This makes the work a bit difficult to follow at times. I would advise the authors to include more details about these works in the supplementary or even the main paper if possible. - Related to the previous point, details about Example 3.3 in Section 3 are not provided at all. - This is a minor suggestion, but using $x$ to denote the weights is unconventional in the machine learning community, as it is typically used to denote the dataset. This may be a source of confusion to some readers. Practicality: - The problem setting seems impractical, as factorizing $\mathbf{A} \in \mathbb{R}^{T \times T}$ is intractable in realistic settings. In the experimental settings considered in this work, T is reasonable. However, in practice $T$ can be on the order of millions and even a one-time matrix factorization is not tractable. - Is Assumption 4.2 a realistic assumption for deep learning? The loss landscapes for neural networks are typically extremely sharp, rendering some of the convergence rate results vacuous. I understand that this is a common assumption in the optimization community. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How can this method be scaled to large $T$?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Clarity 1. Due to the page limitations, we unfortunately were unable to fit all the background information. Upon your suggestion, we will add more details in the appendix in the next version, or in the main text if space permits. In particular, we will include details of how to compute the sensitivity of C, and will elaborate on Example 3.3. ## Practicality 1. The reviewer is correct to point out that the class of methods we consider is not currently applicable to the T >= 1M setting. It is an interesting and important open problem to scale up the matrix factorization mechanism to these settings, but this is a problem that is orthogonal to what we study in this work. 2. The reviewer correctly points out that Assumption 4.2 is a simplifying assumption that is not applicable to deep learning loss functions. Nevertheless, it is an assumption that is commonly used in the optimization literature and allows us to get a theoretical handle on the problem. While our theory does not cover deep learning loss functions, our experiments on CIFAR-10 and StackOverflow do, and our method works well empirically in both of those settings.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for the time they spent reviewing our paper, their useful comments that will allow us to improve it, and their positive assessment of our work. Below we address comments from each of the reviewers separately. We will try to clarify most of these points in the next version of the paper as well. Upon the question of reviewer dQeP, in the attached PDF we illustrate noise patterns $B$ found by minimizing the previous factorization objective in Problem 2.2. $\text{OPT}(A)$ (Fig. a) and our proposed objective in Section 5 $\text{OPT}(\Lambda_{\tau} A)$ (Fig. b). We see that $B_{\text{MF}}$ gradually cancels the noise added at previous iterations throughout all of the iterations, while for $B_{\text{MF}^+}$ the noise is canceled only within the last $\tau$ iterations. Moreover, $B_{\text{MF}^+}$ consists of exactly the same blocks of size $\tau$, making it more computationally efficient to solve $\text{OPT}(\Lambda_\tau A)$ when $\tau < T$. Pdf: /pdf/a151413abbd1fb462602439bbda38a4bb2767045.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies variants of noisy gradient descent, focusing on when the noise added at different time steps may be dependent. The motivation of this is differentially private optimization: while the standard DP-(S)GD adds independent noise at each time step, the recent DP-FTRL mechanism [1] adds carefully correlated noise. Even more recent work [2,3] extends this approach via the matrix mechanism. This paper builds heavily on the latter two works. It begins by identifying how the prior analysis is pessimistic and settings where one might hope to improve upon it. The paper then analyzes in detail "perturbed gradient descent," PGD, i.e., gradient descent with independent Gaussian noise, and "anti-PGD," in which after each gradient step a new noise term is added and the noise term from the previous time step is *subtracted*. This latter method is closely related to randomized smoothing. The authors establish the main theorems (for general "noisy SGD-type" algorithms without clipping) via a novel analysis, analyzing virtual iterates that periodically revert to the actual sequence. The authors perform experiments, first on synthetic data without gradient clipping (to validate their theorems) and then with clipping on benchmark data sets. They show that, in some settings, their approach matches or exceeds DP-SGD. [1] Kairouz, P., McMahan, B., Song, S., Thakkar, O., Thakurta, A., & Xu, Z. (2021, July). Practical and private (deep) learning without sampling or shuffling. In International Conference on Machine Learning (pp. 5213-5225). PMLR. [2] Denisov, S., McMahan, H. B., Rush, J., Smith, A., & Guha Thakurta, A. (2022). Improved differential privacy for sgd via optimal private linear operators on adaptive streams. Advances in Neural Information Processing Systems, 35, 5910-5924. [3] Choquette-Choo, C. A., McMahan, H. B., Rush, K., & Thakurta, A. (2022). Multi-epoch matrix factorization mechanisms for private machine learning. arXiv preprint arXiv:2211.06530.
Strengths: The paper investigates a topic of clear theoretical and practical importance; there is substantial food for thought. In particular, the authors convince me that there is more to be done in this direction. They analyze simplified cases but lay out many directions for further analysis. I agree that the paper "highlights the wealth of stochastic optimization questions arising from recent advances in differentially private model training." In addition, the paper makes clear progress beyond prior work, bringing a new set of tools to bear. I thought the analysis for the main theorems was very cool. It seems plausible that this type of analysis might be extended or applied elsewhere. I feel the authors do a good job placing their work within the broader literature. Weaknesses: The paper has limited scope and there were fewer conclusions after the main theorems than I had hoped. For instance, in line 253 we read "Intuitively, since $\lVert \Lambda_{\tau} \mathbf{B}\rVert_{F}^{2}$ is a better proxy for learning performance than $\lVert \mathbf{B}\rVert_{F}^2$, minimizing this quantity... should lead to... better privacy-utility tradeoffs." The "intuition" just repeats the paper's setup, that of finding a better factorization. I would like a discussion of *why* this proxy is better, beyond the theorem statement. I feel that future work may supersede this approach without building on it. It is not clear how much this paper would contribute beyond pointing out that there is more work to be done. Indeed, perhaps algorithms from Choquette-Choo et al. already outperform this approach? While the contribution is clear, I feel there is a ceiling on its impact. I feel the paper spends too little time on background. This is on purpose, since the setup is similar to that of Denisov et al. and Choquette-Choo et al. However, I would like to see a more self-contained discussion. Two examples in particular stand out to me: first, the expression sens(C) is never defined.
Second, as far as I can tell there is no detail on how DP-SGD was run as an experimental baseline. Although the motivation and experiments deal with privacy, the main theoretical results are for nonprivate algorithms. I was confused by this at first, especially in the transition from Section 2 to 3: the paper says "we derive a tighter bound... to design better factorization mechanisms for differentially private optimization," and then almost immediately says "we omit gradient clipping from our analysis." Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What is the DP-SGD you're comparing against? How were the hyperparameters selected? Can you elaborate on what kinds of factorizations your approach favors? Do you have an intuition for how they differ from those of Denisov et al.? The main theorems specify a value of $\tau$, but the experimental performance seems to improve monotonically with $\tau$. Maybe I missed it, but do you have an idea for why that is? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I feel the authors are clear about the limitations of their work. As I see it, the main limitations are (i) the narrow scope and (ii) the experiments, which I feel are more "illustrative" rather than comparing to the state of the art. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed review and positive assessment of our paper! Below we address the raised questions: ## Weaknesses: 1. We write that $‖\Lambda_{\tau} B‖^2_F$ is a better proxy for learning performance because it captures the convergence of the algorithm more tightly than $‖B‖^2_F$. Indeed, it holds that $|| \Lambda_{\tau} B||^2_F \leq 4 ||B||^2_F$, making our rates in Theorem 4.7. strictly tighter than the one in (5). Moreover, our derived convergence rate in Theorems 4.6. and 4.7. is tight for some notable special cases such as PGD, Anti-PGD and Chess-PGD (discussed in Appendix A.2), while the prior convergence rate based on $||B||^2_F$ cannot be tight for these cases (see discussion in lines 130-137, as well as in Appendix A). Therefore, we believe that minimizing a tighter bound based on $||\Lambda_{\tau} B||^2_F$ (under the fixed privacy) should directly translate into better optimization properties than minimizing an objective based on the loose bound $||B||^2_F$. We will add this discussion to the next version. 2. Our work analyzes the dynamics of a general matrix mechanism applied to model training with gradient descent, and is therefore applicable to any of the matrix-based mechanisms recently designed for improved scalability. To the best of our knowledge, none of the related work has moved beyond the Frobenius-norm formulation of error, including (Choquette-Choo et al.), which we show fails to be predictive of optimization performance in a variety of cases. This is to say: our work inherits the applicability, algorithmic and scalability improvements pursued in the rest of the literature, but addresses a fundamentally orthogonal problem. For this reason, we expect the domain of application of the analysis presented here to continue to grow in the near future. In particular, algorithms from (Choquette-Choo et al.) provide an orthogonal contribution to our work. In (Choquette-Choo et al.)
the major contribution lies in extending the computation of the sensitivity of $C$ from having only one pass over the data (Denisov et al.) to multi-epoch algorithms, and proposing an efficient way to solve the resulting matrix factorization optimization problem. However, their algorithms are still based on minimizing the objective $||B||$ (see equation (4) in Choquette-Choo et al.). In our experiments in Section 6 we already combined multi-epoch matrix mechanisms from (Choquette-Choo et al.) with our newly proposed objective $|| \Lambda_{\tau} B||$, as we do multiple passes over the data in all the settings. 3. Limited by space, we decided to skip some of the details that are not crucial to understanding our main contributions. Upon your suggestion, we will add more details in the appendix in the next version, or in the main text if space permits. 4. By choosing a set of assumptions and relaxations in Section 3, we wanted to find the simplest model for which it would be meaningful to study the effect of the correlated noise. By choosing a simpler model we are able to tightly study the effect of the correlated noise structures. Moreover, if the function is Lipschitz, its gradients do not require clipping. In our experiments we illustrate that our approach can lead to practical improvements even with the simplifications made. ## Questions: 1. For MNIST experiments we fixed the clipping threshold at 1.0 and the learning rate at 0.5. For the CIFAR-10 experiments we fixed the clipping threshold at 1.0 and tuned the learning rate over a grid of {2^k for k=-4, -3, …, 3, 4} independently for each of the methods. 2. We plot the two factorizations (MF and MF+) in the attached PDF. Because of the repetitive pattern in $\Lambda_{\tau}$, our proposed factorization MF+ consists of exactly the same blocks of size $\tau$, with noise being correlated only within these small blocks, and not correlated otherwise. We will further polish this figure and add it to the next version. 3.
We believe that this is in part because of differences between the last-iterate and the averaged-iterate performances, as we illustrated in our experiments on a random quadratic function (Fig. 2). Specifically, by setting $\tau = T$, $\Lambda_{\tau}$ is a diagonal matrix where all entries are equal, except for the last one, which is much larger. This influences the objective by putting much more weight on the error of the last-iterate model. In our experiments (Section 6.1) we evaluate the final trained model at the last iterate, while the theory gives guarantees for the averaged-iterate performance. Studying the last-iterate behavior theoretically is an interesting direction for future work. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed comments. My only comment (to which you need not reply) is that I feel your response to the first weakness misunderstands my concern. From the submission, I understood that $\lVert \Lambda_{\tau} B\rVert_F^2$ yields stronger theoretical guarantees that are tight in special cases. I was looking for more intuition, such as an informal description of what your proxy captures that $\lVert B \rVert_F^2$ does not. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for their prompt response! Intuitively, our convergence rate accounts only for correlations in noise that are close in terms of round numbers (within the $\tau$ closest iterations) and does not consider noise correlations for round numbers that are far apart. We will clarify this in the main text.
VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Accept (poster)
Summary: The authors present a new adversarial attack framework, VLATTACK, to evaluate the robustness of large vision-language models (VLMs). The paper introduces a two-step adversarial attack approach. The first step involves attacking each modality (image and text) independently, and the second step employs a cross-search attack strategy to iteratively query the fine-tuned model for perturbation updates. The paper conducts experiments on multiple pre-trained models that are fine-tuned for downstream vision tasks. Strengths: 1. The paper is well-written with a clear presentation that demonstrates the proposed framework. 2. VLATTACK, a two-step adversarial attack framework, involves both a white-box gradient attack and a black-box query attack. This idea is interesting with practical implementation. 3. The evaluation takes into account multiple types of pre-trained models, and the BSA part of the attack outperforms SOTA image-space adversarial baselines. Weaknesses: 1. The black-box setting is questionable. Although the adversary only has black-box access to the downstream fine-tuned model, the first stage of the attack has white-box access to widely used foundation models like ViLT. There is a high chance that the victim model shares mutual information (e.g., has knowledge of the same vision-language dataset / has the same model architecture) with the white-box pre-trained model. It will be beneficial if the authors elaborate more on the source of transferability. 2. Section 4.2 (ICSA) claims to be under the black-box setting. However, Lines 228-229 describe the process to update the perturbation by optimizing the attack loss of Eq. (2). The authors should write clearly how the gradient is obtained, and what model is white-box during this step. 3. Section 4.1 Text-Attack part uses the clean image $\mathbf{I}$ instead of the generated adversary $\mathbf{I'}$ for text optimization. The authors claim that this is to avoid unnecessary modification.
However, using $\mathbf{I'}$ can possibly ease the difficulty of textual search and increase the success rate of adversarial textual optimization. Moreover, the authors should argue clearly what the unnecessary modification stands for. 4. The success of the attack will require a sufficient number of queries to generate effective adversaries. It will be helpful if authors evaluate the query cost (e.g., time of inference/API call), and discuss attack effectiveness by the number of queries. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Fig. 7 second column demonstrates that the model comprehension shifts from *A man in a purple shirt and **jeans*** to *A man in a purple shirt and **denim***. However, denim is the material pattern for the jeans shown in the image. The model prediction is still consistent with the image. It will be beneficial if the authors state clearly what this example stands for. 2. Given the setting of this paper that we have (1) publicly available pre-trained models and (2) the clean dataset for future perturbation, I wonder if it is possible to first fine-tune the model on the clean dataset for the downstream task and then conduct the transfer-based attack? It will be beneficial for this paper to argue the infeasibility of such approaches. 3. To summarize, with a fair paper presentation but several concerns stated in the weakness/question sections, I will rate it as a borderline accept at this stage. However, I look forward to the authors' response and I will consider revising the rating based on the soundness of the responses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have stated the potential limitations of the work in the supplementary materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable review. \ `>>> W1` ***Elaborate more on the source of transferability***\ `>>> A`: We agree that the pre-trained and fine-tuned models share almost the same architecture in the current setting. Such mutual information should help to improve the attack success rate. As suggested by the reviewer, we further design an experiment of __*transfer attacks on different structures*__. Based on the VQA task, we first generate adversarial samples on the pre-trained ViLT and use them to attack the fine-tuned ALBEF model. The two models have different architectures. The results are shown in the table. |Method| SSP | Co-Attack | VLAttack | |-| - | - |-| |ASR|12.78 | 13.04 |__15.28__ | Although VLAttack is not explicitly designed for this scenario, it achieves the best ASR. Also, the relatively low ASR suggests potential space to explore this setting in the future. `>>> W2` ***How is the gradient obtained?***\ `>>> A`: Thanks for the advice. We first feed the output from the previous step $\boldsymbol{I}_{k}^{'}$ to the pre-trained model $F$. Then we compute the BSA loss by replacing $(\boldsymbol{I}_k^{'},\boldsymbol{T}_k^{'})$ with $(\boldsymbol{I}^{'},\boldsymbol{T})$ in Eq.(2) of our paper. The gradients are then computed on $F$ and clipped to produce the new perturbed image of the $k+1$ step. So the pre-trained model $F$ is the only white-box model during this process. `>>> W3` ***Using $\boldsymbol{I}^{'}$ in the text-attack stage may increase ASR, and what does the ''unnecessary modification'' mean?***\ `>>> A`: According to our experiment, using $\boldsymbol{I}^{'}$ in the text-attack stage indeed increases the ASR, which rises from 78.05 to 78.54. But there is a trade-off between the ASR and the perturbation degree. Using the clean image $\boldsymbol{I}$ as the input of the text attack keeps the perturbation degree of the images at zero in this stage.
When using the perturbed image $\boldsymbol{I}^{'}$ as the input, if the text attack fails, then $\boldsymbol{I}^{'}$ will be fed into the multimodal attack, which does not affect the perturbation degree of images. However, if the text attack can succeed even with the clean image $\boldsymbol{I}$, then using $\boldsymbol{I}^{'}$ will increase the perturbation degree. Specifically, we compute the average $L_2$-distance between all output and original images in the 8-bit color space. The average $L_2$-distances of using $\boldsymbol{I}^{'}$ and $\boldsymbol{I}$ in the text-attack stage are 11.69 and 11.23, respectively. So the unnecessary modification means the increase of the perturbation degree of images. `>>> W4` ***The query cost***\ `>>> A`: Thanks for the advice. __*(1) Query Times*__ Among all baselines evaluated in Table 1 of our paper, only the text attack methods BERT-Attack (B\&A) and R\&R issue queries. So we compare VLAttack with BERT-Attack, as it achieves a better ASR. The average numbers of queries for BERT-Attack and VLAttack are 8.61 and 4.99 *times/sample*, respectively. Even though the ICSA stage involves extra queries, VLAttack has a lower query number than BERT-Attack. This is because some samples can be directly obtained in the image-attack stage using BSA; they query only once, thus avoiding queries in the text-attack and ICSA stages. __*(2) Computation Costs*__ We also analyze the computation cost on the largest pre-trained model, OFA. Specifically, we report the average speed in seconds for computing an adversarial sample on the VQAv2 dataset. The results are shown in the table below: |Method| SSP | BERT-Attack | Co-Attack | VLAttack | |-|-|-|-|-| |Speed (s)|7.02 | 6.07 |18.46 |9.73| We can find that attacking a single modality (i.e., SSP and BERT-Attack) is faster than attacking two modalities (i.e., Co-Attack and VLAttack). Also, the speed of VLAttack is noticeably faster than that of Co-Attack.
This is because Co-Attack requires adding both image and text perturbations for every sample, but VLAttack can achieve a successful attack by adding perturbations to a single modality only. In sum, together with the outstanding attack performance, the acceptable generation speed further demonstrates the effectiveness of our proposed VLAttack. `>>> Q1` ***Explanation for Fig.7 second column***\ `>>> A`: In the referring expression comprehension task, a bounding box prediction is considered correct only if the Intersection over Union (IoU) with the label is greater than 0.5. Based on this metric, the bounding box prediction in Fig.7 of our paper is incorrect after the perturbation. Indeed, the predicted bounding box doesn't encompass the person's head and unnecessarily includes the space above the head. Besides, substituting ''jeans'' with ''denim'' is reasonable, as it doesn't alter the original meaning of the sentence. In sum, this example supports the claim that VLAttack can alter the correctness of predictions by introducing minor perturbations to images and texts. `>>> Q2` ***Fine-tune the model on the perturbation dataset, and then conduct the transfer attack***\ `>>> A`: Thanks. We conduct the experiment on the ViLT model with the VQA dataset. We first fine-tune the pre-trained model on 5K samples (the size of the perturbation dataset) and then use the fine-tuned model to produce adversarial samples. We adopt the same fine-tuning process as in ViLT[1], and the fine-tuning loss is a categorical cross-entropy function. When attacking, we maximize the loss and optimize for 40 iterations, as in BSA. Results are shown in the table below. The new method is named Surrogate Attack (SA). |Method| SA | BSA | VLAttack | |-| - | - |-| |ASR|24.39 | 65.20 |__78.05__ | In the table, SA performs poorly compared to BSA and VLAttack. This is because large VL models struggle to train well on limited data samples.
Even when provided with clean samples, these surrogate models hardly make accurate predictions, leading to inferior ASR results. --- Rebuttal Comment 1.1: Comment: Thank the authors for providing a detailed rebuttal with experiments. Since the source of transferability remains the main challenge of this line of research, I will keep the rating as borderline accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer zqVz, Thanks for your valuable comments and feedback on our rebuttal stage! We really appreciate your time and effort. Sincerely, Authors
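As a side note on the W2 answer above (gradients computed on the pre-trained model $F$ and then clipped), a generic white-box step of this kind can be sketched as follows. This is an illustrative PGD-style sign update with an $L_\infty$ clip; the function name, step size, and budget are assumptions, not the paper's code.

```python
import torch

def pgd_step(model, attack_loss, x_adv, x_clean, step=2 / 255, eps=8 / 255):
    """One white-box sign-gradient update on a surrogate model.

    The gradient is taken w.r.t. the current adversarial image, and the
    resulting perturbation is clipped to an L-infinity budget `eps`.
    (Illustrative sketch; names, step size, and budget are assumptions.)
    """
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = attack_loss(model(x_adv))
    loss.backward()
    with torch.no_grad():
        x_next = x_adv + step * x_adv.grad.sign()          # ascend the loss
        delta = torch.clamp(x_next - x_clean, -eps, eps)   # clip perturbation
        return torch.clamp(x_clean + delta, 0.0, 1.0)      # valid pixel range

# Tiny usage example with a linear stand-in for the pre-trained model F:
model = torch.nn.Linear(4, 1)
x = torch.rand(1, 4)
x_new = pgd_step(model, lambda out: out.sum(), x, x)
```

Only the surrogate model appears in the gradient computation, which matches the rebuttal's point that the pre-trained model is the sole white-box component.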
Summary: In this paper, the authors explore the adversarial vulnerability of visual language models. Specifically, a block-wise similarity attack is proposed to generate adversarial image examples, and the BERT-Attack method is used for generating the adversarial text examples. The image and text pairs are perturbed by an iterative method with cross-search. Strengths: 1. This paper explores the black-box attack for vision-language models, which is novel. 2. The proposed method can generate perturbed inputs that effectively decrease the performance of the fine-tuned models on multi-modal and uni-modal datasets. Weaknesses: 1. One important criterion for a successful adversarial attack is that the perturbed image cannot be distinguished by humans, and the semantic meaning of the text adversarial examples should be the same as the original one. However, there needs to be more analysis of the semantic similarity between the generated adversarial examples and the original text. The perturbation of the images should be measured as well. Most importantly, the perturbation of the generated image-text pairs should be analyzed. 2. Because adversarial image and text pairs are generated iteratively, the attackers need to query the large vision-language model more times. The time costs and the average query numbers for a successful attack should be provided. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It is better to provide the analysis of the semantic similarity and the perturbation degree, as mentioned in Weaknesses 1. Compared to other methods, could you provide a thorough analysis of the costs and average query times of the proposed framework? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: I did not find a potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable review. Our responses are addressed below.\ `>>> Q1` ***It is better to provide the analysis of the semantic similarity and the perturbation degree, as mentioned in Weaknesses 1***\ `>>> A`: Thanks for your constructive suggestions. We compare the semantic similarity score for the text modality. Following previous work [20][25], the score is obtained by first using the universal sentence encoder [44] to embed the output and original sentences into vectors. Then we compute the cosine similarity between the two vectors. For the image modality, we compute the average $L_2$-distance between all output and original images in the 8-bit color space as the perturbation degree. The experiment is conducted on the ViLT model with the VQAv2 dataset, and the results are illustrated in the following table. |Modality| Metric | SSP (image-only)| Co-Attack | VLAttack| |----| ---- | ---- |---- |---- | |Text|Semantic Similarity | 1.00 |0.97 |0.99| |Image|$L_2$ Distance | 11.51 |11.38 |11.23| We can observe that SSP has the highest semantic similarity score since it only perturbs the image modality. Due to the limited length of the text, Co-Attack and the proposed VLAttack also achieve high semantic similarity. However, the proposed VLAttack performs better than the baseline Co-Attack. There are two reasons: (1) In our model design, we first perturb the image modality and then the text modality. If the image attack succeeds, we no longer need to perturb the text. (2) Even if the single-modality attack fails, during the multimodal attack, we will first use the text perturbation candidates with the highest semantic similarity to generate adversarial samples, which also helps VLAttack to maintain semantic similarity. Our method also shows advantages in terms of the $L_{2}$ distance on image perturbations, where VLAttack outperforms both baselines. This advantage arises because VLAttack adopts clean images as input during the text-attack stage.
Consequently, the adversarial samples produced at this stage do not perturb the image, leading to a reduced average $L_2$ distance. `>>> Q2` ***Compared to other methods, could you provide a thorough analysis of the costs and average query times of the proposed framework?***\ `>>> A`: Thanks for your comments. We detail the analysis of query times and computation costs as follows. __*(1) Query Times*__ The proposed VLAttack conducts queries in three phases: (1) During the image-attack phase, a query is made after optimizing with BSA for $N_{s}=20$ iterations (Algorithm 1, line 5). (2) During the text-attack phase, BERT-Attack is used for querying (Algorithm 1, line 11). (3) In the ICSA stage, queries are made at every $N_k$ step (Algorithm 1, line 19). Among all baselines evaluated in Table 1 of our paper, only the text attack methods BERT-Attack (B\&A) and R\&R need queries. Thus, we compare VLAttack with BERT-Attack, as it achieves better performance. The average numbers of queries for BERT-Attack and VLAttack are __8.61__ and __4.99__ *times/sample*, respectively. We can find that even though the ICSA stage involves additional queries, VLAttack has a lower average number of queries than BERT-Attack. This is attributed to the portion of samples that can be directly obtained in the image-attack stage using BSA; they query only once, thus avoiding queries in the text-attack stage and the ICSA stage. __*(2) Computation Costs*__ We also analyze the average computation costs on the largest pre-trained model, OFA. Specifically, we report the average speed in seconds for computing an adversarial sample on the VQAv2 dataset. The results are shown in the following table: |Method| SSP| BERT-Attack | Co-Attack | VLAttack | |----| ---- | ---- |---- |---- | |Speed (s)|7.02 | 6.07 |18.46 |9.73| We can observe that attacking a single modality (i.e., SSP for images and BERT-Attack for text) is faster than perturbing two modalities (i.e., Co-Attack and VLAttack), which is reasonable.
Besides, the speed of VLAttack is noticeably faster than that of Co-Attack. This is because Co-Attack requires adding both image and text perturbations for every sample, but our method can achieve a successful attack by adding perturbations to a single modality only. In sum, with the outstanding attack performance, the acceptable generation speed further demonstrates the effectiveness of our proposed VLAttack. --- Rebuttal Comment 1.1: Comment: Thanks for your reply! The clarification regarding semantic similarity and costs addressed my concerns. --- Reply to Comment 1.1.1: Comment: Dear Reviewer AgmN, We are delighted that our response addressed your concerns. Thank you for your insightful comments and valuable feedback! Sincerely, Authors
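The two metrics used in the Q1 answer above (cosine similarity of sentence embeddings, and average $L_2$-distance in the 8-bit color space) can be sketched as follows. The embeddings and images here are placeholder values, and the exact averaging convention is an assumption; in the rebuttal, the sentence vectors come from the universal sentence encoder.

```python
import numpy as np

def cosine_similarity(u, v):
    """Semantic similarity between two sentence-embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_l2_distance_8bit(img_a, img_b):
    """Average per-entry L2 distance in the 8-bit color space.
    (Illustrative definition; the exact averaging convention is assumed.)"""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.linalg.norm(diff) / np.sqrt(diff.size))

# Placeholder inputs standing in for sentence-encoder embeddings and images:
u = np.array([1.0, 2.0, 3.0])
sim = cosine_similarity(u, u)                  # identical sentences: similarity ~1

clean = np.zeros((2, 2, 3), dtype=np.uint8)
pert = np.full((2, 2, 3), 4, dtype=np.uint8)   # every channel shifted by 4
dist = avg_l2_distance_8bit(clean, pert)
```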
Summary: The paper presents VLAttack, which is a method for perturbing multimodal examples such that a multimodal model would get them wrong. VLAttack does not assume access to fine-tuned models but does assume access to foundation models that are used to create these downstream models. The authors argue that this level of access is reasonable because there are several real-world examples where the foundation model is available but downstream models are not. VLAttack makes models fail significantly more than similar adversarial attack algorithms across several datasets and tasks. Strengths: As far as I am aware, the algorithm is novel (especially the visual perturbation part). And the authors showcase that it outperforms similar algorithms in the sense that it does not assume access to the fine-tuned model (which could be unreasonable) and it makes models get more errors. Weaknesses: Major issues: It is great that VLAttack makes the models get higher error rates than other approaches, but crucially it is only a good result if VLAttack does not actually change the ground truth labels. It seems that you do not validate this with, e.g. crowd workers. Unless I missed it, the only validation that you do is presenting case studies of a few individual examples. Did you do a systematic validation? Less important (but still issues): More motivation would be nice. What scenarios are you envisioning for why VLAttack raises a safety concern? Self-driving car failures, etc.? Can you make these adversarial perturbations useful? For example, can you train a model on them to get better performance? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I tried to frame everything in the weaknesses section as questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations seem adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable review. Our responses are addressed below.\ `>>>Q1` ***Systematic validation***\ `>>>A`: Adding a human evaluation experiment significantly increases the reliability of the proposed attack method. We conducted the human evaluation experiments on the results output by the ViLT model on the VQAv2 dataset using Amazon Mechanical Turk (MTurk). The baseline that we choose is SSP since it outperforms the others in our experiments, as shown in Table 1 of our paper. Specifically, we randomly sampled 650 examples and stored the generated answers after the attack. To validate the performance of both approaches, we send the original image-question pairs and the corresponding generated answers to the MTurk system. We provide three choice candidates to workers -- ''Definitely Wrong'', ''Not Sure'', and ''Definitely Correct''. A successful attack means that the worker labels the pair ''Definitely Wrong''. To make the annotations more accurate and reliable, each pair is annotated by three workers, and we report the majority choice as the final human evaluation result. The statistics of the human evaluation results are shown in the following table. We can observe that the proposed VLAttack still significantly outperforms the strongest baseline, SSP. |Method| SSP|VLAttack| |----| ---- | ---- | |Definitely Correct $\downarrow$|413 | __307__ | |Not Sure $\uparrow$|__58__ | 56 | |Definitely Wrong $\uparrow$|179 | __287__ | `>>>Q2` ***More motivation would be nice. Can you make these adversarial perturbations useful? For example, can you train a model on them to get better performance?***\ `>>>A`: As mentioned in lines 21-27 of our paper, the ''pre-train $\&$ fine-tune'' paradigm has become a prevalent trend when training vision-language models. The pre-trained VL models can be efficiently deployed in different scenarios and tasks after fine-tuning, such as virtual assistants [a] and robotic control [b].
Against this background, a key question raised in our paper is whether these publicly available pre-trained models are robust, which is also the primary motivation of our work. As pioneering research, we believe it can further benefit various research fields, such as adversarial fine-tuning for more robust vision-language models. In fact, there has already been adversarial fine-tuning work on language models [c], but research on vision-language models remains unexplored. We believe that the proposed VLAttack can serve as significant guidance for future studies in this field. [a] Tu, T., et al. Learning better visual dialog agents with pretrained visual-linguistic representation. CVPR 2021\ [b] Ma, Y. J., et al. LIV: Language-Image Representations and Rewards for Robotic Control. ICML 2023\ [c] Dong, X., et al. How should pre-trained language models be fine-tuned towards adversarial robustness? NeurIPS 2021 --- Rebuttal Comment 1.1: Comment: Thanks for running that experiment. It is informative, and convinced me to change my rating above the "accept" threshold. --- Reply to Comment 1.1.1: Comment: Dear Reviewer NrLG, We are delighted that our response has addressed your concerns. Thank you for your constructive suggestions and positive feedback! Sincerely, Authors
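The majority-vote aggregation over the three worker annotations per pair, described in the Q1 answer above, can be sketched as follows (the helper name is hypothetical; label strings are the ones from the rebuttal):

```python
from collections import Counter

def majority_label(annotations):
    """Return the most common worker annotation for one image-question pair."""
    return Counter(annotations).most_common(1)[0][0]

# Three workers annotate one pair; the majority choice is reported.
votes = ["Definitely Wrong", "Definitely Wrong", "Not Sure"]
result = majority_label(votes)   # -> "Definitely Wrong"
```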
Summary: This paper focuses on adversarially attacking multimodal fine-tuned models without getting access to the fine-tuned weights. By utilizing the activations and the parameters of the openly accessible pre-trained model, this work proposes a method called VLATTACK for creating adversarial attack samples for downstream tasks. The method first attempts to learn image perturbations via a block-wise similarity attack (BSA) strategy. If this does not succeed, a text attack using BERT-Attack is performed. If the two single-modal attacks both fail, VLATTACK uses an iterative cross-search attack (ICSA) method to construct multimodal disrupted adversarial samples. Experimental results show the effectiveness of VLATTACK on various cross-modal (image-text) tasks and model structures. Some detailed analysis of the adversarial samples is also conducted. Strengths: 1. The experimental results are obtained on various downstream tasks and pre-trained models (with different architectures), which demonstrates the generalization ability of VLATTACK. 2. The chosen pre-trained models are open-sourced, making the results more reproducible. 3. The adversarial attack performance is good compared with other typical and previous attacking methods. 4. The case study and ablation study are both well-conducted, providing insights on how VLATTACK can achieve better attacking performance. Weaknesses: Generally speaking, I think this paper is a good work on the topic of adversarial attacking. If the following issues are addressed, I think it will be much better: 1. Experiments or analysis on adversarial multimodal datasets: Some adversarially constructed datasets were proposed in previous works, based on the datasets used in this work, like Adversarial VQA. Do the created adversarial samples share some similarity with these datasets? I think some discussion or analysis on this research question can be performed and will be very welcome. 2.
Evaluation soundness: For VQA and captioning, an extra human evaluation on whether the predictions under attack are actually wrong is highly recommended, considering that the automatic metrics may produce false positives. 3. Experiments on out-of-domain datasets: For OFA, we know the pre-training corpus includes the downstream datasets (VQA, RefCOCO and MSCOCO), which may make the adversarial samples easier to obtain using the pre-trained model. It would be very good if downstream datasets that do not appear in OFA's pre-training could be considered in the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How would the performance of VLATTACK be affected under different model sizes (like OFA-base vs OFA-large) and image resolutions ($224$ vs $480$)? I will be very happy if some insights can be provided. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed some limitations in the appendix, including the text attacking methods, downstream task scope and explored model architectures, which I think is reasonable and leaves space for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable review. Our responses are addressed below.\ `>>> W1` ***Experiments or analysis on adversarial multimodal dataset***\ `>>> A`: Some methods [a,b] focus on constructing adversarial datasets, but they significantly diverge from the nature of our research work. Specifically, these adversarial datasets usually consist of new samples generated through human annotation [b] (e.g., writing questions until models give wrong predictions) or machine synthesis [a] (e.g., removing objects from images). They do not have any perturbation budget constraints and are mainly used to test the robustness of multi-modal models. However, in our work, we aim to add noise to the clean data under strict perturbation constraints. The generated samples maintain similar appearances to the clean ones but can cause incorrect predictions by the downstream task model. This not only tests the robustness of the model but, more importantly, exposes potential security concerns about these pre-trained models. [a] Agarwal, V., et al. Towards causal VQA: Revealing and reducing spurious correlations by invariant and covariant semantic editing. CVPR 2020. [b] Li, L., et al. Adversarial VQA: A new benchmark for evaluating the robustness of VQA models. ICCV 2021. `>>> W2` ***Evaluation soundness***\ `>>> A`: Thanks for your valuable comments and constructive suggestion. Adding a human evaluation experiment significantly increases the reliability of the proposed attack method. We conducted the human evaluation experiments on the results output by the ViLT model on the VQAv2 dataset using Amazon Mechanical Turk (MTurk). The baseline that we choose is SSP since it outperforms the others in our experiments, as shown in Table 1 of our paper. Specifically, we randomly sampled 650 examples and stored the generated answers after the attack.
To validate the performance of both approaches, we send the original image-question pairs and the corresponding generated answers to the MTurk system. We provide three choice candidates to workers -- ''Definitely Correct'', ''Not Sure'', and ''Definitely Wrong''. A successful attack means that the worker labels the pair ''Definitely Wrong''. To make the annotations more accurate and reliable, each pair is annotated by three workers, and we report the majority choice as the final human evaluation result. The statistics of the human evaluation results are shown in the following table. We can observe that the proposed VLAttack still significantly outperforms the strongest baseline, SSP. |Method| SSP| VLAttack| |----| ---- | ---- | |Definitely Correct $\downarrow$|413 | __307__ | |Not Sure $\uparrow$|__58__| 56 | |Definitely Wrong $\uparrow$|179 | __287__ | `>>> W3` ***Experiments on out-of-domain datasets***\ `>>> A`: Thanks for the advice. We do have such experiments, which are shown in Table 1 of our original paper. We conduct experiments on the SNLI-VE dataset for the visual entailment (VE) task, which is not involved in the pre-training process. From the results on the SNLI-VE dataset, we can observe that VLAttack achieves the best performance. Thus, even on out-of-domain datasets, the proposed VLAttack is still effective. `>>> Q1` ***How would the performance of VLATTACK be affected under different model size (like OFA-base vs OFA-large) and image resolution (224 vs 480)?***\ `>>> A`: Thanks for the constructive suggestions. These suggested experiments are interesting and will significantly improve the quality of our paper. Due to the limited time of the rebuttal stage, we only test the influence of model size and will put the results regarding the image resolution experiment in the final version. We evaluate different methods on the OFA-large model on the VQA task. The results are shown in the following table.
We can observe that the proposed VLAttack achieves the best performance, but the results slightly decrease compared to those on OFA-base in Table 1 of our paper, which supports the claim that the model can be more robust with a larger parameter size. |Method| SSP|Co-Attack|VLAttack| |----| ---- | ---- | ---- | |ASR|33.82 | 39.41 |__75.44__| --- Rebuttal Comment 1.1: Comment: Thank the authors for providing the detailed response. I will keep the rating as accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 3TAC, Thanks for your positive feedback! We sincerely appreciate your effort and the acknowledgment of our paper! Sincerely, Authors
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a substitute black-box attack strategy called VLAttack to generate adversarial examples by perturbing both images and texts on pre-trained models and then transferring them to fine-tuned models. At the image-modal level, they introduce a block-wise similarity attack (BSA) strategy to disrupt universal representations in the pre-trained model. At the multimodal level, they design an iterative cross-search attack (ICSA) to update adversarial image-text pairs. Strengths: 1. This paper focuses on an important question concerning the prevalent pretraining and finetuning paradigm with respect to adversarial robustness. 2. The core idea of the proposed attack is clearly presented. 3. The proposed VLAttack could potentially serve as a valuable baseline for examining the robustness of multimodal models. Weaknesses: Issues that affect my rating. 1. Lack of technical contributions. The question asked by the authors is quite valuable. However, the proposed attack strategy, VLAttack, seems trivial. Regarding the text modality, VLAttack directly employs the existing BertAttack[20]. For the image attack, the classical PGD attack with iteration $N_{s}=20$ is adapted. The idea of the proposed BSA loss Eq.2 is straightforward and similar to that of BadEncoder [a] (see BadEncoder's Eq.2~5), except that BadEncoder focuses on the backdoor attack. The cross-modality attack should be the most important part, where the two modality attacks are expected to boost each other. However, the proposed "ICSA" seems to directly concatenate the image and text attacks. Did the authors attempt to jointly optimize these two attacks? [a] Jia, J., Liu, Y., & Gong, N. Z. (2022, May). BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning. In 2022 IEEE Symposium on Security and Privacy (SP) (pp. 2043-2059). IEEE. 2.
Insufficient evaluation on popular multimodal models (CLIP, BLIP): I recommend that the authors extend their evaluation to include more prevalent multimodal models, such as CLIP or BLIP, as done in Co-attack[15] and BadEncoder[a]. This would enable readers and subsequent works to make more meaningful comparisons. 3. Ambiguity in the fine-tuned model settings: Considering the significant performance difference between VLAttack and baselines in Table 1, it is unclear how the target models are fine-tuned. Are the pre-trained model parameters fixed while training a task-specific head, or are all parameters fine-tuned throughout the pre-trained model? Providing clarity on this aspect is crucial for evaluating the impact of the proposed attack. Additional details should be included to enhance understanding. 4. Absence of essential ablation studies on BSA and ICSA: The effectiveness of BSA remains unclear, and a comparison with an attack on the pre-trained model's original loss could provide valuable insights. To showcase the efficacy of ICSA, a straightforward baseline could involve setting BSA's attack iteration $N_{s}=40$, without incorporating BERT-attack. The authors should consider conducting these ablation studies to better demonstrate their contributions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
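The review notes that VLAttack adapts the classical PGD image attack with $N_{s}=20$ iterations. For context, here is a minimal sketch of PGD under an $L_\infty$ constraint; the function name, `grad_fn`, and the default budgets are illustrative and not the paper's exact settings:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=20):
    """Sketch of L-inf PGD: repeatedly step along the sign of the
    loss gradient, then project back into the eps-ball around x.
    grad_fn(x_adv) is assumed to return d(loss)/d(x_adv)."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)         # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv
```

With `alpha * steps > eps`, the iterate saturates at the boundary of the $\epsilon$-ball, which is the usual behavior of this attack.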
Rebuttal 1: Rebuttal: Thank you for the valuable review. Our responses are addressed below.\ `>>>Q1` ***Lack of technique contributions***\ `>>>A`: The multimodal attack is a new research topic, and only one work Co-Attack has been proposed recently. However, its performance is even worse than state-of-the-art image attack approaches. To solve these issues, we designed a new multimodal attack approach named VLAttack, which can jointly optimize adversaries for both images and text. It first seeks potential adversarial candidates at the single-modality level. If unsuccessful, the outputs from single-modality attacks will be treated as initialization at the multi-modal attack level. In this work, we have two technical contributions. One is the new loss for the image attack, and the other is a novel cross-modality search strategy for the multimodal attack. In the single-modality attack, we propose a new block-wise attack loss to disrupt the features in the image encoder and transformer encoder of the pre-trained model. We directly apply BERT-Attack for the text modality due to the limited length of input text, as described in lines 189-193. Finally, we propose a novel cross-search strategy on top of the outputs from single-modality attacks, which iteratively perturbs image and text modalities further. Thus, we argue that the proposed VLAttack is simple, effective, and novel for the multimodal attack topic. `>>>Q2` ***Insufficient evaluation on popular multimodal models (CLIP, BLIP)***\ `>>>A`: Thanks for the suggestion. We further extend our experiments on the CLIP model. The experiment is developed on the image classification task on the SVHN dataset [a]. Concretely, we use the image encoder of CLIP as the pre-trained model $F$ and then fine-tune $F$ on the SVHN dataset after adding a classification head. For the choices of image encoder of CLIP, we adopt ViT-B/16 and ResNet-50, denoted by CLIP-ViT-B/16 and CLIP-RN50. 
We test the attack performance using 1,000 correctly predicted samples. All results are illustrated in the following table. Since the task only accepts images as input, we compare our BSA with other baselines. As shown in the table, our proposed BSA still achieves the best ASR across different image encoder structures, clearly demonstrating its effectiveness. |Method| DR | SSP | FDA | BSA(VLAttack) | |----| ---- | ---- |---- |---- | |CLIP-ViT-B/16|7.90 | 4.60 |6.20 |__20.10__| |CLIP-RN50|70.70 | 76.20 |74.20 |__77.40__| [a] Goodfellow I J, et al. Multi-digit number recognition from street view imagery using deep convolutional neural networks[J]. arXiv preprint, 2013. `>>>Q3` ***Ambiguity in the fine-tuned model settings***\ `>>>A`: Thanks for your valuable comments. All task models are fully fine-tuned. In other words, all the model parameters are updated in the fine-tuning stage, but they are not accessible during the attacking stage. We treated them as black boxes. Thus, we do not have any knowledge about the fine-tuned model parameters during the attack process. We will add these details in the final version. `>>>Q4`: ***Absence of essential ablation studies on BSA and ICSA***\ `>>>A`: Thanks for your comments. __*(1) Ablation study of BSA*__\ To validate the effectiveness of BSA, we conduct an experiment to compare the original pre-trained loss with the proposed VLAttack on the ViLT model using the VQA task as an example. Since different pre-training tasks have their task-specific loss functions, in this experiment, we adopt the masked language modeling (MLM) loss as the attack objective for the VQA task. Given an image-text pair, the MLM loss randomly masks 15% of words in the text, and the training target is to recover these masked tokens. The attack objective is optimized through cross-entropy. In our implementation, we replace Eq. 
(2) in our paper with the MLM loss and reverse the target by adding perturbation on the image to make the pre-trained model incapable of recovering the masked tokens. When attacking, we optimize the above objective for 40 iterations like BSA. The results are illustrated in the following table. |Method|MLM | BSA | VLAttack | |----| ---- | ---- |---- | |ASR|25.40 | 65.20 |__78.05__| We can find that the performance using the original pre-trained MLM loss is far inferior to the proposed BSA and VLAttack methods. The reason is that in the attacking stage, the downstream tasks are fine-tuned with task-specific datasets, which also update the relations among tokens learned in the pre-training stage accordingly. However, if we still use the attack strategy learned in the pre-training stage, the change of token relations will make it fail to generate adversarial samples. Thus, the original MLM loss performs worse than our proposed VLAttack and even BSA. __*(2) Ablation study of ICSA*__\ Thanks for pointing this out. We did have such baselines that involve setting BSA's attack iteration $N_s=40$, without incorporating BERT-attack, as shown in Figure 5 of Section 5.4. The experimental results show that our method equipped with ICSA (VLAttack) significantly outperforms the BSA method under the same iterations. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. **Question 1: Insufficient technical contributions** - The authors' responses do not fully address my concerns. It would be beneficial if the authors could directly answer my question: Have attempts been made to jointly optimize the text and image attacks? If so, why? If not, why not? Additionally, the authors might consider discussing and positioning their proposed Block-wise Similarity Attack (BSA) approach in comparison to the recently proposed BadEncoder [a] (see BadEncoder's Eq.2~5). Clarifying the distinction between these two methods could help to strengthen the contribution of this work. 
[a] Jia, J., Liu, Y., & Gong, N. Z. (2022, May). Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning. In 2022 IEEE Symposium on Security and Privacy (SP) (pp. 2043-2059). IEEE. **Question 2: Insufficient evaluation on popular multimodal models (CLIP, BLIP)** I would still insist that it is important to augment the results in Table 1 with those from the CLIP and BLIP architectures, as done in Co-attack[15]. Omitting this step would make it challenging for readers to draw fair comparisons and accurately interpret the results. --- Reply to Comment 1.1.1: Comment: `>>> Q1-1`: Joint Optimization We have attempted to optimize both image and text perturbations jointly, but the performance of the joint optimization is worse than that of the proposed VLAttack. For the joint optimization, we modify the proposed BSA loss by adding a new loss term for the text encoder, and its form is similar to those for the image encoder and multimodal encoder, where we compare the block-wise similarity between the clean text input and the perturbed text input. At each attack step, we directly follow the current BSA design to generate image perturbations for the image modality. We then move to the text modality. Due to the discrete nature of text data, we follow existing work [21] using the word embedding space to generate the text perturbations. First, we use the modified loss to calculate the gradients on the text and then add them to the original word embeddings. The summation can be treated as the ideal word embeddings of the perturbed text. The next step is to map these continuous embeddings to discrete words. For each informative word in the original text, we use BERT to calculate its synonym set following [20]. We then use the word synonym substitution approach to replace the original words with their synonyms under the constraint listed in Eq. (1) in the original paper to guarantee the semantics of the new perturbation. 
The new image and text perturbations will be iteratively updated for $N$ steps with the above processes. The experimental results are shown in the following table: |Dataset| BSA| BSA-joint| VLAttack| |-|-|-|-| |VQAv2|65.20|66.48|__78.05__| BSA-joint denotes the joint optimization, which performs slightly better than the proposed BSA on the VQAv2 dataset but is still far inferior to VLAttack. This is because the input text $\mathbf{T}$ is relatively short, and frequently perturbing texts and strictly coupling the updates may lead to fluctuations in the gradient information. Therefore, joint optimization degrades the overall attack performance. `>>> Q1-2`: BSA v.s. BadEncoder We admit the overall idea of the proposed BSA and BadEncoder is similar, and both use similarity as the optimization target. However, the proposed BSA differs from and improves on BadEncoder. (1) BadEncoder (Eq.2--5) only utilizes the final output feature vectors from the whole encoder, ignoring the outputs from the intermediate layers/blocks. However, BSA calculates fine-grained similarity scores. As shown in Eq. (2) of our original paper, we distinguish the outputs from image and Transformer encoders. Such a design can modify the low-level vision features and the high-level cross-modal representations. In our setting, we attack fine-tuned models of a wide diversity of downstream tasks, including but not limited to image classification tasks like BadEncoder. The parameters of these task-specific models are fully fine-tuned on distinct datasets, and the output representations of the encoder significantly change accordingly. Thus, instead of only attacking the output feature from the last layer like BadEncoder, perturbing each intermediate feature representation from each encoder and each block can enhance the attack performance. This statement is also verified in Section 5.4, where the ASR score of BSA is higher than that of only attacking a single encoder. 
(2) The motivation for adopting the cosine distance is different. In BadEncoder, CLIP uses cosine similarity as a loss function to calculate distances for positive/negative image text pairs, which is motivated by the pre-training strategy of CLIP. However, we adopt cosine similarity because the fine-grained token representations attend to each other in the inner product space, which is inspired by the mechanism design of the Transformer structure. This motivation is also illustrated in Lines 180-181 of our original paper. We will add these discussions in the final version. `>>> Q2`: CLIP and BLIP Thanks for the constructive suggestions. We agree that adding more experiments on CLIP and BLIP is beneficial for validating the contributions of our work more comprehensively. For the CLIP model, we will add the experiments through the image classification task, which has been discussed in the previous response on the SVHN dataset. We will add results using more datasets. For the BLIP model, we experiment with the VQA task of the BLIP model using the VQAv2 dataset. The proposed models still achieve better performance, as shown below: |Dataset|DR|SSP|FDA|BSA|B&A|R&R|Co-Attack|VLAttack| |-|-|-|-|-|-|-|-|-| |VQAv2|7.04|11.84|7.12|26.36|15.30|2.94|20.62|__45.64__| We will add all experimental results in our revised version to enhance the validation of model effectiveness. We sincerely appreciate your valuable feedback, which elevates the significance of our model's design, highlights its distinctions from existing approaches, and fortifies our work's overall quality. We hope our responses can adequately address your concerns.
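The block-wise similarity objective discussed above — cosine similarities between clean and perturbed intermediate features, accumulated over the blocks of the encoders — can be sketched as follows. This is a simplified reconstruction, not the paper's code; `bsa_loss` and the flat lists of per-block features are our assumptions:

```python
import numpy as np

def cosine_sim(a, b):
    # Flatten the block features and compute their cosine similarity.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def bsa_loss(clean_blocks, adv_blocks):
    """Sum of cosine similarities between clean and perturbed
    intermediate features. The attacker minimizes this quantity,
    pushing the perturbed representations away from the clean
    ones block by block rather than only at the final output."""
    return sum(cosine_sim(c, a) for c, a in zip(clean_blocks, adv_blocks))
```

Minimizing over every block, instead of only the last layer as in BadEncoder, is precisely the fine-grained design the rebuttal argues for.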
Complex-valued Neurons Can Learn More but Slower than Real-valued Neurons via Gradient Descent
Accept (poster)
Summary: The authors present theoretical results on learning real-valued and complex-valued neurons with gradient descent. The key takeaways are that: * a complex-valued neuron learns real-valued neurons and complex-valued neurons with convergence rates $O(t^{-3})$ and $O(t^{-1})$ respectively * a two-layer real-valued neural network cannot learn a single complex-valued neuron * a complex-valued neuron learns more slowly than a real-valued neuron when learning real-valued neurons Strengths: * The paper is well-written and easy to understand even for non-experts (myself) * This appears to be a significant advance in the theory of complex-valued neuron learning, with previous work largely focusing on real-valued neuron learning. However, as someone outside this field, I defer to other reviewers for their assessment of significance. Weaknesses: * While the main results in this paper are theoretical proofs, the paper could be further strengthened with empirical experiments that validate the proofs. * Table 2 appears to contain a typo: the result for Theorem 2 should be $O(t^{-1})$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Do the authors expect the convergence rates to be further improved with other learning rate schedules/learning algorithms (other than projected gradient descent)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the comments and feedback. Q1: About empirical experiments to further strengthen the paper. A1: Thanks for your valuable suggestions. We provide toy experiments in the **global response**, where we verify our findings in more general settings. Q2: About the typo in Table 2. A2: Thanks for pointing out this problem. We will correct it in later versions. Q3: About further improving the convergence rate with other learning rate schedules and learning algorithms. A3: Absolutely yes. This paper focuses on traditional learning rate schedules and gradient descent methods since they are widely used in the analysis and early implementations of RVNNs. However, it remains an open problem whether they are the best choice for CVNNs. I prefer to believe they are not since learning CVNNs includes the learning of phase, which does not exist in learning RVNNs. It would be promising to find more suitable algorithms for CVNNs.
Summary: The goal is to explore the novel approach of complex valued neural networks using gradient descent. The paper investigates when and to what extent CVNNs outperform RVNNs using gradient descent for learning tasks. The researchers prove that a single complex-valued neuron can efficiently learn functions expressed by one real-valued neuron and those expressed by one complex-valued neuron, while a two-layer RVNN with finite width cannot learn a single complex-valued neuron. However, they also show that complex-valued neurons learn more slowly than real-valued neurons, with a lower bound of $\Omega(t^{-3})$ for learning functions expressed by one real-valued neuron using a complex-valued neuron. Theoretical comparisons between RVNNs and CVNNs provide insights into the success of CVNNs and the potential slow convergence of CVNNs learning. Strengths: The authors appear to have thoroughly analysed a complex valued neural network and developed what appears to be rigorous theoretical results calculating the convergence rates of these complex valued neural networks. The ideas appear to be sound Weaknesses: - While the notation appears to be self-consistent (in the main body - I did not read the appendix, which is very long and technical), the notation is very hard to parse through. If there were some tables to layout the important notation it could help lay the groundwork earlier on. - I am not familiar with these types of reformulations, but it is not clear to me why the authors want to try to use a two layer network to model a single neuron. The authors write "Although learning one neuron is the simplest case of learning neural networks, it is sufficient to embody the learning capability and efficiency of neural networks.", which I ask the authors to motivate more. Is it not obvious a two layer network with more parameters would be able to model a single neuron of just the weighted inputs? 
- I am not sure I fully understand this paper but it seems the authors only develop the proofs and Theorem 1 and Theorem 2 for 1d input such that they calculate the convergence rates for a two layer network to model a linear function wx where w is a vector of length n and x is just a scalar. If I understand this correctly, I am not sure how much we learn from this? Does this theorem apply for a real neuron where d is not one? - The ideas appear to be sound but they are not very palatable to a reader. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Motivation of application - I’m surprised the authors do not mention more applications to systems in the world of physics. Fluid dynamics, beam physics, particle physics, where phase information is ubiquitously relevant to quantum materials, dynamics, and - States that CVNNs have been shown to have some desirable properties (universal approximation, boundedness, stability) → why is this desirable? Could the authors elaborate on this? - It is not clear to me, in Theorem 1 and Theorem 2 when d=1, does this mean that x is now no longer a linear summation of inputs but truly just an input and the weight parameter? I did not work through the proof, but does the theorem generalize to inputs with more than one dimension? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: It often feels like one cannot truly understand what they are discussing in the paper without referencing the very dense and long appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback. Q1: About tables to layout the important notation. A1: Thanks for your constructive suggestions. The notations in this paper are self-consistent and are equipped with informal explanations when they occur in the context for the first time. Readers may forget the explanations when reading through the paper. Thus, we will consider adding some notation tables to remind readers of the definitions. Q2: About using a two-layer network to model a single neuron. A2: Thanks for your valuable feedback. Throughout this paper, we focus on learning a neuron with a neuron (not a two-layer network), which is a standard and widely studied setting [1,2,3]. But when we consider learning a complex-valued neuron with a real-valued neuron, this learning process is unfair since a complex-valued neuron has more parameters than a real-valued neuron. This motivates us to investigate learning a complex-valued neuron with a two-layer RVNN to pursue fairness. Thus, the formulation in Eq. (1) considers learning with a two-layer network to cover all settings in the paper. It seems that the formulation and explanations are too complicated for readers, and we will consider reformulating our problem in later versions. [1] Mahdi Soltanolkotabi. Learning ReLUs via Gradient Descent (NeurIPS 2017). [2] Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, and A. Salman Avestimehr. Fitting ReLUs via SGD and Quantized SGD (ISIT 2019). [3] Gilad Yehudai and Ohad Shamir. Learning a Single Neuron with Gradient Methods (COLT 2020). Q3: About $d > 1$. A3: This is an insightful question. In our paper, $d$ represents the dimension of the complex space. From the theoretical aspect, we introduce this condition for technical reasons. In traditional neuron learning, spherical symmetry is used to reduce the analysis to 2-dimensional real space. The condition $d=1$, i.e., 1-dimensional complex space, is used to inherit the analysis framework. 
From the experimental aspect, we find that our findings also hold when $d$ is larger than one (Fig. 2 in the **global response**). As far as we know, this paper is the first one to analyze neuron learning in the complex domain, where symmetry becomes more complicated, and we rely on this condition to simplify the analysis. As supported by the experimental results, we believe that our findings can be extended to more general settings and we leave them for future explorations. Q4: About applications to physics. A4: Thanks for the valuable suggestions. We will consider introducing some applications related to physics in later versions. Q5: About desirable properties. A5: Thanks for raising these questions. If I have not misunderstood the questions, I need to elaborate on the motivation of these desirable properties, including universal approximation, boundedness and stability. These properties, which hold for both RVNNs and CVNNs, have been investigated for several decades in the community of neural network theory. Universal approximation roughly states that neural networks can approximate any measurable functions [4,5]. This property affirms the expressive power of neural networks and explains the phenomenon that neural networks can fit random data to perfect accuracy [6]. (Complete) stability roughly indicates that neural networks with any initialization will converge when it is optimized by gradient descent (or gradient flow) [7,8]. This property guarantees convergence and implies that the optimization is meaningful. In our paper, boundedness occurs together with stability since boundedness can be used to derive stability under some settings [8]. Although these properties may not directly reflect the performance of neural networks (i.e., generalization), these properties are still desirable since they help us theoretically understand neural networks. [4] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. 
Multilayer feedforward networks are universal approximators (Neural Networks, 1989). [5] Felix Voigtlaender. The universal approximation theorem for complex-valued neural networks (Applied and Computational Harmonic Analysis, 2023). [6] Devansh Arpit et al. A closer look at memorization in deep networks (ICML 2017). [7] Michael A. Cohen and Stephen Grossberg. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks (IEEE Transactions on Systems, Man, and Cybernetics, 1983). [8] Bo Zhou and Qiankun Song. Boundedness and complete stability of complex-valued neural networks with time delay (TNNLS 2013). Q6: About fair presentation, very long and technical appendix, hard-to-parse-through notations, and "it often feels like one cannot truly understand what they are discussing in the paper without referencing the very dense and long appendix''. A6: It seems that the reviewer pays too much attention to technical details and is trapped by mathematical formulas. It should be emphasized that we elaborate on our settings and conclusions in the introduction part, which is sufficient to understand and estimate our contributions. Meanwhile, we provide detailed and intuitive explanations after key definitions and theorems to help readers understand complicated mathematical formulas. If one hopes to delve into details of our theoretical results, it is necessary to possess some basic knowledge related to neuron learning and may require much more effort.
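The toy experiments the rebuttal refers to can be approximated in a few lines. Below is a minimal sketch, under our own simplifying assumptions (a linear complex neuron, no zReLU activation, our own names and hyperparameters), of learning a real-valued target expressed through a complex weight vector by plain gradient descent:

```python
import numpy as np

def fit_complex_neuron(x, y, steps=2000, lr=0.2):
    """Fit y ≈ Re(x @ conj(w)) with a single complex weight vector w
    by gradient descent on the mean squared error. Writing w = u + iv,
    the real and imaginary parts of the packed complex gradient are
    exactly dL/du and dL/dv, so one complex update does both."""
    rng = np.random.default_rng(0)
    d = x.shape[1]
    w = rng.normal(size=d) + 1j * rng.normal(size=d)
    for _ in range(steps):
        err = (x @ np.conj(w)).real - y          # residual on the real output
        grad = (err[:, None] * x).mean(axis=0)   # dL/du + i * dL/dv
        w = w - lr * grad
    return w
```

Because the target here is realizable and the loss is quadratic in the stacked real coordinates, gradient descent recovers the true weights; the paper's slower $\Omega(t^{-3})$ regime arises from the zReLU nonlinearity, which this linear sketch deliberately omits.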
Summary: The authors contrast the learning and convergence properties of real-valued neural networks and complex-valued neural networks. Specifically, they study the problem of learning the function implemented by a single neuron using a 2-layer finite width network. Notably, they show that a complex valued neural network can learn functions expressed by a real-valued neuron as well as a complex-valued neuron. This result indicates the superior capacity of complex-valued neural networks as compared to their real-valued counterparts. Furthermore, they also study the convergence properties of learning a function expressed by a real-valued neuron with a complex-valued neuron and prove that it is slower compared to learning with a real-valued neural network. Taken together, complex-valued neural networks have a better learning capacity at the cost of slower learning properties compared to real-valued neural networks. Overall, I think this paper makes a strong theoretical contribution to understanding the efficiency and capacity of complex-valued neural networks in practice. Strengths: 1. The paper has strong theoretical foundations and presents arguments through rigorous proofs. 2. Despite the involved mathematical machinery, the authors present a simplified view of the underlying learning dynamics and present the existence of phases in the learning process. 3. Despite the assumption of the MSE loss, I feel the authors present a strong theoretical framework to analyze learning in complex-valued neural networks and contrast them with real-valued neural networks. Weaknesses: 1. The authors present strong theoretical results, and use specific assumptions to prove their results. However, it is unclear the extent to which these assumptions would hold in practice. 2. In addition to their (very impressive) theoretical results, I feel the current version of the paper would benefit from some empirical validation. 
Having some toy experiments would also be sufficient to significantly improve the readability of the paper, while also increasing the reader pool. 3. The manuscript is generally well-written, but there are certain typos or use of abbreviations/notations which make it confusing for the reader. E.g. it seems that there is a typo in Table 2, wherein the second row third column (complex-valued Neuron --> complex-valued neuron) should be $\mathcal{O}(t^{-1})$. Also, abbreviations CVNN and RVNN or the notation $t$ is not introduced in the abstract. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The caption of Fig. 2 is currently not very descriptive. Could you please update it such that the takeaway is clear to the reader without reading in detail the text? 2. Is it possible to add a toy regression example where you can empirically demonstrate the different phases during the learning process? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for these constructive suggestions. Q1: About the extent to which these assumptions would hold in practice. A1: The settings and assumptions used in this paper are common ones [1,2,3,4] to make the analysis tractable. Neuron learning aims to provide meaningful insights for theoretical guarantees and practical usage of neural networks, based on simple settings and assumptions. We believe that such a work is significant and necessary for the development of neural networks' theory. [1] Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis (ICML 2017). [2] Mahdi Soltanolkotabi. Learning ReLUs via Gradient Descent (NeurIPS 2017). [3] Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, and A. Salman Avestimehr. Fitting ReLUs via SGD and Quantized SGD (ISIT 2019). [4] Weihang Xu and Simon S. Du. Over-parameterization exponentially slows down gradient descent for learning a single neuron (COLT 2023). Q2: The paper would benefit from some empirical validation. Is it possible to empirically demonstrate the different phases during the learning process? A2: Thanks for this valuable feedback. We provide toy experiments in the **global response**, where we verify our findings in more general settings. For empirically demonstrating learning phases, it should be emphasized that these phases only work for proof but may not exist in practice. For example, the red line remains constant during stage II in Fig. 2(b), but this line is an upper bound and the practical line may be a smooth one through all stages. In part of our experiments, the loss curve has a constant segment, which implies that such phases may exist sometimes. Q3: About typos, use of abbreviations, the caption of Fig. 2, and $t$ in the abstract. A3: Thanks for pointing out these problems. We will update them in later versions. 
--- Rebuttal Comment 1.1: Title: Reply to the authors' rebuttal Comment: I would like to thank the authors for their response and effort during the rebuttal process. > The settings and assumptions used in this paper are common ones [1,2,3,4] to make the analysis tractable. Noted. I must admit that I am not very familiar with this line of research and therefore, wasn't sure how reliable these assumptions are. But I understand that the assumptions are fairly standard in the area. Thank you for pointing it out. > We provide toy experiments in the global response, where we verify our findings in more general settings. This is great! It would be great if you could add it to the paper, along with the details of the experiment and hyperparameters (in Appendix, if space constrained). > For empirically demonstrating learning phases, it should be emphasized that these phases only work for proof but may not exist in practice. It is however, striking to see the similarity between Fig. 2 and the empirical plot Fig 1 (that you added in the general response). I guess the correspondence is weaker when the dimension conditions are softened and bias is added (Fig. 2 of your general response). Overall, I am convinced that this is an interesting theoretical contribution and would be interesting to the NeurIPS community. I have therefore, increased my score. I wish the authors best with their endeavours and would like to congratulate them again on their excellent work. --- Reply to Comment 1.1.1: Title: Reply Comment: Thank Reviewer cJ2H for the detailed feedback and for raising the score. Have a nice day!
Summary: The paper develops theoretical results for the convergence rates of complex- and real-valued neural networks on complex-valued problems. It develops analytical models for the training regimes of complex-valued NNs. Strengths: The results would be useful for those attempting to apply complex-valued networks to problem settings where they are not normally used. An intriguing result is that complex-valued neurons have slower convergence rates for the same problems as real-valued neurons. The theorems are thorough, and the diagrams provide excellent illustration of the proofs. They also provide a proof that real-valued networks cannot learn complex-valued functions, which is not surprising, but nice to have proved. Weaknesses: A fundamental flaw of the paper is that it does not have empirical verification of the theoretical results. The analysis makes many assumptions about different training regimes, but it is not obvious if complex-valued neural networks actually behave like this. The complex-valued models in the theorems should be implemented and the convergence losses should be plotted alongside the theoretical predictions. Without supporting empirical evidence, the assumptions of the proof seem too strong. Fortunately, the problem settings should be simple to code up in any package, and the results would be easy to include in the authors' presentation, given Figures 1 and 3. Page 4, line 156: “whereas a complex-valued neuron may only activate a small part as controlled by the parameter psi” This line makes it sound like the authors conclude that the primary benefit is the extra parameter in zReLU, not the complex arithmetic. The use of activation functions with trainable parameters is also a major distinction between the CVNN and the RVNN. Maybe all of the identified differences can come from a network with real weights, but an activation function with a learnable parameter? 
zReLU can be encoded in real-valued weights, if you interpret two real weights and use a learnable ReLU. Two variations are required to ablate this confounder: 1. complex weights with a zReLU with a fixed psi (Would this be easier to study theoretically than the existing proofs? It would also be easy to demonstrate empirically.) 2. real weights with a pseudo-zReLU with a learnable psi (Possibly hard to prove, but easy to empirically study.) I think considering case (1) is necessary to support the claim made by the paper that complex weights are the key differentiator. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - In Section 4.3, could you clarify how the outputs of the RVNN are matched to the outputs of the complex-valued target function? - In Section 5, does the trainable zReLU make the network effectively deeper? Does the convergence rate change if psi is not trainable? - Define the schema of $L_{rr}, L_{cr},$ etc., and remind the reader every time you use them. By the time I got to Lemma 4 and Lemma 5, I did not know which result was which. - $t_4$ was never defined. - Define RVNN and CVNN in the abstract - Add equation numbers. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: A limitations section is missing from the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
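The two ablation variations the reviewer proposes can be sketched concretely. The exact parameterized zReLU used by the paper is not given in this thread, so the definition below, $\mathrm{zReLU}_\psi(z) = z\cdot\mathbf{1}[\arg(z)\in[0,\psi]]$, is an assumption, and all names and numbers are illustrative:

```python
import numpy as np

def zrelu(z, psi):
    """Hypothetical parameterized zReLU: pass z through only when its
    phase lies in [0, psi]. The paper's exact definition may differ."""
    theta = np.angle(z)
    mask = (theta >= 0) & (theta <= psi)
    return np.where(mask, z, 0)

# Variation 1 from the review: a complex weight with a fixed psi.
w = 0.6 + 0.8j                           # one complex weight
x = np.array([1 + 1j, 1 - 1j, -1 + 1j])  # toy complex inputs
out_fixed = zrelu(w * x, psi=np.pi / 2)

# Variation 2: real weights (a, b) interpreted as w = a + bi, with a
# real-valued "pseudo-zReLU" acting on (re, im) pairs.
def pseudo_zrelu(re, im, psi):
    theta = np.arctan2(im, re)
    mask = (theta >= 0) & (theta <= psi)
    return np.where(mask, re, 0.0), np.where(mask, im, 0.0)

a, b = 0.6, 0.8
re = a * x.real - b * x.imag  # real part of w * x
im = a * x.imag + b * x.real  # imaginary part of w * x
out_re, out_im = pseudo_zrelu(re, im, psi=np.pi / 2)

# The two graphs compute the same function on this input.
assert np.allclose(out_fixed.real, out_re)
assert np.allclose(out_fixed.imag, out_im)
```

This makes the reviewer's point explicit: the forward pass of variation 2 matches variation 1 exactly, so any behavioral difference must come from how the parameters are structured and trained, not from the function class alone.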
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback but would like to point out a few misunderstandings in the review. 1. About assumptions on different training regimes. In the proofs and proof sketches, training regimes come from the idea of divide and conquer, and we do not make any assumption about these regimes. We analyze the learning dynamics in each regime and then unify them to obtain the global dynamics. The introduction of training regimes implies complicated and challenging analysis rather than strong assumptions. 2. About a missing section of limitations. We discuss the limitations of this paper and promising future works in the second paragraph of Section 6. This is acknowledged by Reviewer qTS2: "The authors have adequately addressed the limitations of the work." Q1: About the fundamental flaw that we do not have empirical verification of the theoretical results. A1: It seems that the reviewer might not grasp the core contributions of this paper. As far as we know, this paper theoretically studies neuron learning in the complex domain for the first time, and the obtained theorems cast light on the difference between RVNNs and CVNNs. From the theoretical aspect, all theorems are clearly written and have detailed proofs in the appendix. Thus, our theoretical results are rigorous and sound, as acknowledged by all other reviewers. From the experimental aspect, we admit (as suggested by other reviewers) that empirical results can further strengthen our sound theories and verify our findings in more general settings. Thus, we provide experimental results in the **global response**, where we verify our findings in more general scenarios. Q2: Can the difference come from RVNNs using activations with a learnable parameter? A2: Thanks for this important question. In a nutshell, the difference comes from both the learnable parameter and the complex arithmetic. Firstly, the learnable parameter is related to phase learning. 
With a fixed $\psi$, the phase prior is determined, which may cause disappointing results when the prior is unsuitable. Thus, the learnable parameter is essential for phase learning. Secondly, complex arithmetic is the basis of phase and a natural way of modeling phase information. Complex arithmetic is the basis of phase since phase does not exist in the real domain. One may argue that $\mathbb{C}$ is isomorphic to $\mathbb{R}^2$ (as real vector spaces), and a complex number can be modeled by two real numbers. We admit this fact, but the modeling process is highly artificial. In the language of neural networks, the multiplication from $\mathbb{C}$ to $\mathbb{C}$ can be modeled by a complex-valued weight (2 parameters), or a $2\times 2$ real-valued weight matrix (4 parameters). Moreover, typical activation functions used in CVNNs can be succinctly expressed by complex numbers and their arguments, while having a more complicated formulation when expressed by real and imaginary parts. Finally, it seems that the reviewer misunderstands our conclusions. Our conclusions are about complex-valued neurons rather than complex-valued weights. We never say complex-valued weights are the key differentiator. It is important to consider both the complex-valued weights and complex activation functions. Q3: How are the outputs of the RVNN matched to those of the complex-valued target function in Section 4.3? A3: Thank you for raising this question. In the literature, it is common to take the real part as the output in the readout layer [1,2]. In this paper, we adopt this choice. The activation function $\sigma_{\psi}$ contains the operation of taking the real part, as defined in Section 3. Thus, the outputs of both RVNN and CVNN are real-valued, and there is no trouble in matching the outputs. We realize that readers may forget the definition and we will add more explanations in later versions. [1] Scott Wisdom et al. Full-capacity unitary recurrent neural networks (NIPS 2016). [2] Shao-Qun Zhang, Wei Gao, and Zhi-Hua Zhou. 
Towards understanding theoretical advantages of complex-reaction networks (Neural Networks, 2022). Q4: Does trainable zReLU make networks deeper? A4: It is hard to measure the relation between the trainable parameter and network depth in general since the trainable parameter may have different influences on different theoretical properties. From the approximation aspect, the approximation power roughly comes from depth, width, and activation complexity [3]. The role of the trainable parameter in zReLU belongs to activation complexity and cannot be compared with depth since the effects of depth and activation complexity are incomparable. From the learning dynamics aspect, which is considered in this paper, there is no proven conclusion about the effects of activation complexity and depth. We conjecture that activation complexity and depth are incomparable since a complex-valued neuron cannot represent a two-layer real-valued neural network (which can be proved by simply counting the number of linear regions). [3] Alexis Goujon, Arian Etemadi, and Michael Unser. The Role of Depth, Width, and Activation Complexity in the Number of Linear Regions of Neural Networks (2022). Q5: Does the convergence rate change if psi is not trainable? A5: The convergence rate changes if $\psi$ is not trainable since the redundant phase consideration slows down the convergence, as concluded in Section 6. The trainable parameter is important to learn phase information. With a fixed $\psi$, the phase prior is determined, and we cannot expect a complex-valued neuron to learn phase information. Q6: About notations, abbreviations, and equation numbers. A6: Thanks for your valuable suggestions. Definitions of $L_{rr}$ and $L_{cr}$ are provided in Eq. (2) and at the beginning of Section 5, respectively. We will recall the meanings of notations and update the abstract in later versions. 
We only add a number when the equation is cited elsewhere, which is a convention in the community of neural network theory. --- Rebuttal Comment 1.1: Comment: Q1: Thank you for adding the experimental tests. The results provide the necessary empirical evidence for the theory and its assumptions. As such, I have raised my score to a 5. Q2+Q4: Thank you for the discussion on these points. My point was that the model graph for the CVNN has many differences from the RVNN that we could ablate: complex values and arithmetic, complex weights, zReLU with a learnable psi parameter, etc. To quote your response, "In a nutshell, the difference comes from both the learnable parameter and the complex arithmetic." Indeed -- but how do both contribute independently!? It is possible to perform model ablation on each of these aspects. Your conclusion states "These conclusions suggest that complex-valued neurons learn more than real-valued neurons since *CVNNs benefit from the flexibility of the phase parameter*, which helps CVNNs learn phase information more efficiently." You are specifically hypothesizing that the activation function is the crucial piece. It does definitely make sense that the psi is important for phase learning, and I think you did show it was necessary. But are the complex values (and/or weights) necessary for phase learning, if you have the equivalent graph of zReLU with a psi? Figuring out which part of the CVNN is the key component would be very interesting, and would make for a much stronger paper with wider applications. For neural network analysis, the deconstruction into real-valued float multiplies and low-level activation functions is a more fundamental representation to study the model from. The construction/deconstruction sounds artificial only when viewed from the perspective of complex function analysis. At the lowest level, both the RVNN and CVNN are just graphs of parameter multiplies and ReLUs, and maybe the CVNNs have a sine/cosine/tangent in them. 
See, for example, Apicella et al., "A survey on modern trainable activation functions", 2021 for a systematic study of transforming complicated activation functions into simpler graphs. By breaking down the complex arithmetic and the trainable zReLU activation function into fundamental model graphs, the RVNN and CVNN can be directly compared in terms of depth and model complexity. If you trace out the model graph of the CVNN, I think you'll see that the psi parameter makes a directly comparable graph that is +1 layer deeper per activation layer, has a different connectivity (might have sin/cos/tan), and might be able to express more functions. Then, it will be trivially apparent that the real-valued low-level graph of the CVNN can express more real-valued functions than the RVNN. Then, it will be interesting to figure out which aspect of this real-valued lower-level graph causes the convergence behavior you demonstrate, which would have a broader impact. A potential null result is that you primarily demonstrated that this particular complicated activation function is better than just a ReLU one for the class of problems, and the complex parts have no impact. Not that I think that is the case, because I think the complex arithmetic is also an important aspect, but I don't think you rigorously ruled that out. I do, however, think that the trainable activation adds a lot more to model expressiveness than you are giving it credit for. A stronger analysis would decompose the "complex valued" networks into the fundamental model graphs of basic operations, and apply basic theoretical and empirical analysis of neural network approximation theory and convergence rates to these graphs. Then, you could break down which aspects of the CVNN make which changes to the graph, and apply those changes independently to determine which in particular are contributing to the change in convergence rate and model expressiveness. 
The hypothesis to test and falsify, as suggested by the comments in the paper, is that inserting the graph+parameters of the psi-activation function into the RVNN graph, by itself, would make the biggest difference in function expression and convergence. Q3: Makes sense. Thanks for clarifying. Q6: Equation numbers everywhere make it easier for readers and reviewers to cite equations :) --- Reply to Comment 1.1.1: Comment: Thank you very much for the detailed feedback and for raising the score. Q2+Q4: There might be an ambiguity in my sentence "In a nutshell, the difference comes from both the learnable parameter and the complex arithmetic". We want to say that the learnable parameter is necessary for our results, and complex arithmetic is a natural way to implement the learnable zReLU activation function. It is possible to discard complex arithmetic if one uses real arithmetic and shared weight parameters, which is artificial from the aspect of complex function analysis. It is interesting to consider the problem from the perspective of equivalent graphs. We admit that there exists a certain RVNN that is equivalent to a CVNN, but the RVNN is highly artificially designed: there are constant weights, different activation functions in one layer, and uncommon activation functions. We still think that when people talk about RVNNs, they usually mean feedforward neural networks with real-valued learnable weight parameters and one activation function (can be learnable or unlearnable) in all layers. If you would like to refer to the CVNN in this paper, we think it is more convenient to say CVNN with learnable zReLU than describing the equivalent graph. It is promising to decompose CVNN into fundamental model graphs of basic operations and analyze these graphs. We believe that there are many interesting problems in this direction and leave them to future work.
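The shared-weight correspondence discussed in this thread (one complex weight, 2 parameters, versus a constrained $2\times 2$ real matrix, 4 entries) can be made concrete. A minimal numpy sketch, not code from the paper:

```python
import numpy as np

# One complex weight w = a + bi acting on z = x + yi by multiplication.
a, b = 0.6, 0.8          # 2 real parameters
w = a + 1j * b
z = 1.0 - 2.0j

# The same linear map on R^2 requires a constrained matrix of the form
# [[a, -b], [b, a]]: 4 entries but only 2 degrees of freedom, i.e. the
# "shared weight parameters" mentioned in the reply above.
M = np.array([[a, -b],
              [b,  a]])
v = np.array([z.real, z.imag])

wz = w * z               # complex arithmetic
Mv = M @ v               # real arithmetic with weight sharing
assert np.allclose([wz.real, wz.imag], Mv)
```

An unconstrained $2\times 2$ real weight matrix can represent any linear map on $\mathbb{R}^2$, so the complex parameterization is a strict, tied-weight subset, which is exactly why the equivalent RVNN looks artificial from the real-valued side.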
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed feedback. We are thankful that our theoretical results are recognized by most reviewers, many important questions are raised, and many constructive suggestions are proposed. A general consensus is that additional experimental verifications can help our paper become more convincing and increase the reader pool. Thus, we provide here several experimental results to further strengthen our theoretical results and verify our findings in more general settings. Detailed responses to individual questions and suggestions are provided in separate responses. **Experimental settings.** A training set of size 7,000 and a test set of size 3,000 are generated by a randomly initialized target neuron (either a real-valued or a complex-valued neuron). After random initialization, a complex-valued neuron and a real-valued neuron are trained by gradient descent with the empirical mean square loss and a learning rate of 0.1 for 100 epochs (or 300 epochs when the loss does not converge). **Experimental results.** We investigate two different settings and report the test error curves in the attached pdf. In Fig. 1, we adopt the theoretical setting with $d=1$ and without a bias term. In Fig. 2, we soften the dimension condition and add the bias terms. In all settings, we find that real-valued neurons fail to learn complex-valued neurons but converge faster when learning real-valued neurons. These experimental results verify our theoretical findings in general settings. Pdf: /pdf/7f289864fb03d51f8b030ae936fec826d553df52.pdf
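The protocol described above (fit a randomly initialized target neuron with full-batch gradient descent on the empirical mean-square loss, learning rate 0.1, 100 epochs, $d=1$, no bias) can be sketched in the real-valued case. The ReLU activation and the fixed learner initialization below are assumptions made for a deterministic sketch; the actual experiment uses random initialization and its own activation:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda u: np.maximum(u, 0.0)

# Target: a randomly initialized real-valued neuron, d = 1, no bias
# (ReLU is an assumption here; the paper's activation may differ).
w_star = rng.normal()
x_train, x_test = rng.normal(size=7000), rng.normal(size=3000)
y_train, y_test = relu(w_star * x_train), relu(w_star * x_test)

# Learner: a real-valued neuron trained by full-batch gradient descent
# on the empirical mean-square loss, learning rate 0.1, 100 epochs.
w = 1.0   # fixed init for determinism of this sketch; the rebuttal uses random init
lr = 0.1
for _ in range(100):
    pred = relu(w * x_train)
    grad = np.mean(2.0 * (pred - y_train) * (pred > 0) * x_train)
    w -= lr * grad

test_err = np.mean((relu(w * x_test) - y_test) ** 2)
```

Swapping the target (and learner) for a complex-valued neuron in this template reproduces the real-learns-complex comparison reported in the attached pdf.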
Dataset source: NeurIPS_2023_submissions_huggingface. Conference year: 2023.
Summary: The theoretical paper compares the learnability of real-valued neurons and complex-valued neurons via gradient descent. In a nutshell, the paper proves that a complex-valued neuron can learn more than its real-valued counterpart, but learns exponentially slower. The theoretical discoveries help explain the success of CVNNs and point out the potential slow convergence of CVNN learning. Strengths: 1. The paper makes the first attempt ever at comparing the learnability of real-valued and complex-valued neurons on a theoretical level. 2. Even though the theoretical investigations are carried out on a single neuron, the findings still cast light on the success and failure of CVNNs. 3. The discoveries are presented as Theorems and proved with rigor in the appendix, in which I did not find any technical errors. 4. The paper is very well written. The implications of each Theorem are clearly explained and summarized in tables. Weaknesses: 1. The analysis is carried out on a simple type of complex neuron network, and it is hard to generalize the achieved findings to practical RVNNs and CVNNs. 2. The discoveries could be more convincing by comparing the learning curves of RVNN and CVNN on toy examples, e.g., to show that a real neuron cannot well learn a complex neuron. 3. The research questions should be reformulated. Currently they are quite ambiguous and do not directly reflect the research focus. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. It is argued in the paper that learning one neuron can embody the learning of neural networks. However, unlike RVNNs, multi-layered CVNNs may have intermediate layers with complex-valued inputs and outputs, a different setup from this study. Is it possible to employ a similar approach to this paper for studying these layers? 2. The bias term is removed for technical reasons. 
Could you please elaborate on how the presence of the bias term will influence the analysis, and whether similar findings can still be obtained? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. Q1: About generalizing to practical RVNNs and CVNNs. A1: The analysis of neural networks faces the tradeoff between simple assumptions and strong conclusions in the neural network literature. It is quite difficult to analyze practical neural networks without assumptions. Our work pursues theoretical results under simple settings and assumptions. It would be promising future work to generalize our findings to deep neural networks without complicated assumptions. Q2: About toy examples to make the discoveries more convincing. A2: Thanks for the constructive comment. We believe that our findings are well supported by our theoretical conclusions. To provide a more convincing and intuitive demonstration of our discoveries, we provide experimental results in the **global response**, where we also verify our findings in the presence of bias terms. Q3: About reformulation of the research questions. A3: Thanks for your valuable feedback. The current formulation focuses on generality and includes the setting of learning a neuron with a two-layer neural network, which is used in Theorem 3. However, this generality might not directly reflect the research focus. We will formulate the problem in a more appropriate way in later versions. Q4: About studying intermediate layers in CVNNs. A4: Neuron learning focuses on the learning dynamics of neurons, which helps us understand the learning process of neural networks since a neuron is the simplest form of a neural network. This motivation is widely adopted in real-valued neuron learning [1,2,3] and benign overfitting [4]. To analyze deep CVNNs, the complex-valued intermediate layer is not the central problem since it can be modeled by distributions related to the real and imaginary parts. 
The fundamental difficulty is the dependence between parameters in different layers (i.e., we have to analyze the compositions of several random variables when studying neural networks), which also occurs in RVNNs. In the literature, these compositions are analyzed under different strong assumptions or heavy over-parameterization [5,6,7]. [1] Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis (ICML 2017). [2] Gilad Yehudai and Shamir Ohad. Learning a Single Neuron with Gradient Methods (COLT 2020). [3] Weihang Xu and Simon S. Du. Over-parameterization exponentially slows down gradient descent for learning a single neuron (COLT 2023). [4] Spencer Frei, Niladri S. Chatterji, and Peter L. Bartlett. Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data (COLT 2022). [5] Alexandr Andoni et al. Learning Polynomials with Neural Networks (ICML 2014). [6] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization (ICML 2019). [7] Kwangjun Ahn, Jingzhao Zhang, and Suvrit Sra. Understanding the Unstable Convergence of Gradient Descent (ICML 2022). Q5: About the influence of the bias term. A5: This is an important question. From the theoretical aspect, although many efforts are devoted to simplifying assumptions, the bias term is still difficult to cope with and is set to 0 in the literature [1,2,3]. The bias term will change the analytical property of the loss function and make the closed-form expression unavailable, which makes our analysis infeasible. It is possible to analyze neuron learning without closed-form losses [2], but this method relies on "spread" input distributions (Assumption 4.1). If the bias term is considered, i.e., the original input concatenated with an additional "1", the additional "1" makes the input distribution discrete and not spread. 
Thus, the analysis of the bias term is still an open problem and is beyond the scope of this paper. From the experimental aspect, we verify our findings in the presence of bias terms (Fig. 2 in the **global response**). --- Rebuttal Comment 1.1: Comment: Thanks for your response. I hold my opinion that the article should be accepted due to its theoretical strength and top-quality presentations.
Trust Region-Based Safe Distributional Reinforcement Learning for Multiple Constraints
Accept (poster)
Summary: The paper proposed a trust-region method for handling multiple constraints in safe RL for CMDPs. When the TRPO-like gradient calculation with multiple constraints is infeasible, the authors proposed a gradient integration method to calculate feasible gradients. The authors also proposed a TD-$\lambda$ method to calculate the target distribution of the distributional critic. Experimental results have shown that the proposed methods seem to outperform baselines. Strengths: ## Originality The gradient integration is novel and looks useful for the infeasible initial stage when solving CMDPs. Computing the TD-$\lambda$ target for the distributional critic is also novel to me. ## Quality The paper is well-written and includes many technical details, which is good for understanding and reproducing the work. ## Clarity The paper is very clear. Especially, I like the two figures which explain the ideas very well and I get the idea immediately. The notation is a little bit messy but clear enough to read. ## Significance The TD-$\lambda$ method for the distributional critic might have a broad effect on other distributional RL algorithms, but I am not sure if similar approaches have been proposed to solve the problem. Weaknesses: 1. The paper has novelty but the contributions are not clearly demonstrated. From the results in Figures 3 and 4, I did not see too much performance improvement, solid constraint satisfaction, or faster convergence to feasible policies compared to baselines. Given this paper focuses on practical contributions, the performance results are not convincing. 2. Two contributions of this paper are not related. Gradient integration and TD-$\lambda$ seem to stand alone on their own. The authors should explain why you put these into one paper and how they contribute together to improve the safe RL. 3. The gradient integration problem is fragile and might have some problems in the finite-convergence theorem. I will put more discussion in the questions section. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I got confused about the figures of the first column in Figure 4. The curves and shades are not continuous. Please explain why. *Updates after rebuttal: Clarified in the reply. They are random samples.* 2. For Figure 1, what if the $g_1$ and $g_2$ are parallel to each other (no feasible region), or have an arbitrarily small relative angle? Then you integrate them to near zero and get stuck. Then I doubt the conclusion about finite-time convergence to the feasible region. You might need more assumptions, like assuming that a feasible region does exist. Do you want to comment on these situations? 3. Following the problems in 2, it seems that no guarantee can be made that the proposed gradient integration is faster than naive methods. Then how exactly can safe RL methods benefit from the gradient integration? 4. Practically, solving TRPO is already computationally heavy. Local linearization of multiple constraints and objectives will cause more approximation and further harm the performance. If you only would like to handle multiple constraints, many straightforward approaches could be used, like first optimizing a weighted sum of multiple constraint critics. How does the proposed algorithm outperform these intuitions then? Please carefully explain these questions. I will consider increasing the score if the authors give me a reasonable response. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors addressed the limitations well. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
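The TD-$\lambda$ targets discussed in this review can be illustrated in the standard scalar-critic case; the paper's contribution is a distributional generalization of this idea, and the numbers below are purely illustrative:

```python
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Backward-recursive TD(lambda) targets for a scalar critic:
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})."""
    T = len(rewards)
    G = np.empty(T)
    next_g = values[-1]  # bootstrap from the final state value
    for t in reversed(range(T)):
        next_g = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * next_g)
        G[t] = next_g
    return G

rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.6, 0.2])  # V(s_0), ..., V(s_3)
targets = lambda_returns(rewards, values)
```

With `lam=0` this collapses to one-step TD targets and with `lam=1` to bootstrapped Monte Carlo returns; the distributional version in the paper replaces the scalar `values` with return distributions, mixed in the same $(1-\lambda)/\lambda$ proportions.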
Rebuttal 1: Rebuttal: We thank reviewer Cyuw for the feedback and thorough review of our work. We respond to reviewer Cyuw's comments and questions below. # Weakness: The performance results are not convincing **Performance improvement** The most comparable method to SDAC in the safety gym is WCSAC. However, as mentioned in Q1 of the general response, it can be said that the proposed method is better than WCSAC regarding the risk-averse property and the multi-constraint setting. Please see Q1 of the general response. Also, in the locomotion tasks, SDAC shows the highest reward sum and meets constraints the fastest in all tasks. Thus, it can be seen that the performance is improved in the risk-averse and multi-constraint settings. **Constraint satisfaction** In the safety gym tasks, since the optimal policy exists on the boundary of the constraint, the training results exist at the boundary rather than rigidly satisfying the constraints. Nevertheless, a risk-averse constraint can be used by setting $\alpha=0.25$ to meet the constraint solidly. Figure 3.a shows that SDAC with $\alpha=0.25$ has the lowest cost (showing solid constraint satisfaction) and is in the middle or upper ranks of the reward sum. **Convergence to feasible policies** The convergence speed can be measured by the shortest step satisfying all constraints. The proposed method satisfied constraints at 1.5e6 steps in Cassie, 0.25e6 steps in Laikago, and 0.2e6 steps in Cheetah, but the second-best method satisfied constraints at 3.5e6 steps, 0.7e6 steps, and 0.25e6 steps in each task. Thus, the proposed method converged to the feasible policies the fastest. # Weakness: Two contributions are not related As mentioned in the introduction, if the estimation bias of critics becomes large, the policy can be trained to be overly conservative or risky. If the number of constraints increases, the possibility of updating the policy in a wrong direction rises exponentially. 
Thus, the importance of reducing the bias in the multiple constraint setting increases. To this end, we contribute to reducing the estimation bias of critics while handling multiple constraints. # Q1 In the implementation, the policy takes random actions during the initial 10000 steps, like other SAC-based algorithms. As a result, low rewards were received, making the learning graphs discontinuous. We have attached an enlarged graph for the initial steps to the pdf of the general response. Please refer to the pdf. # Q2 Lemma A.2 in the Appendix shows that the solution of equation (9) always exists under the assumptions that the constraints are convex and the feasible policy set is not empty (see the assumptions introduced in Theorem 3.1). Therefore, there is no conflict between constraints due to the existence of the solution. # Q3 In Q2 of the general response, the worst-case time to meet all constraints of our method is expressed as: $$\frac{D_\mathrm{KL}}{\beta(1-\gamma)(\zeta-\frac{2\beta C_\mathrm{max}}{(1-\gamma)^2})\mathbb{E}\_t[\sum_i\lambda^t_i/W_t]}.$$ The worst-case time of the naive method is the same as the above equation, except for the $\lambda$ part. In the naive method, $\lambda^t$ is a one-hot vector whose $i$-th entry, $\lambda_i^t$, is one only for the randomly selected constraint. In other words, only $\mathbb{E}\_t[\sum_i\lambda_i^t/W_t]=\mathbb{E}\_t[\sum_i\lambda_i^t/||\sum_i\lambda_i^tA\_{C_k}^t||\_2]$ is different. Let us assume that the advantage vector follows a normal distribution. Then, the variance of $\sum_i\lambda_i^tA_{C_k}^t$ is smaller for lambdas with distributed values than for one-hot values. Then, the reciprocal of the 2-norm becomes larger, resulting in a decrease in the worst-case time. From this, the gradient integration has a benefit over the naive method as it reduces the variance of the advantage vector. 
Of course, we cannot rigorously say that the worst-case time of the proposed method is smaller than that of the naive method because the advantage vector does not follow the normal distribution. Still, we can deliver our insight on the benefit of gradient integration. Also, the proposed method shows better results than the naive method in the multi-constrained experiments of Section 5.2. In Figure 4.d, the proposed method takes 1.5e6 steps to meet all the constraints, but the naive method takes about 2.5e6 steps. Also, the naive method shows unstable training as some constraints are violated even after 2.5e6 steps. # Q4 As the reviewer commented, TRPO is computationally heavy, but in safe RL, it can be said that TRPO-based policy updates are more effective than first-order (or gradient descent) updates. If a policy is updated by the first-order method, the policy can change rapidly, resulting in the constraints being violated suddenly, but the TRPO-based update allows more stable learning because the values of the constraints do not change significantly. For this reason, many safe RL algorithms are based on TRPO [R1, R2]. Second, as the reviewer mentioned, the policy can be updated using a weighted sum of the cost critics to deal with multiple constraints. This method exactly matches the Lagrangian method of updating the policy to minimize the weighted sum of the cost critics with the Lagrange multipliers as weights. However, since these methods usually update the policy and weights simultaneously, the policy update direction can easily fluctuate. In contrast, due to the strong duality of the QP problem, the solution of equation (9) can be expressed as $\sum_k \lambda_k H^{-1}g_k$, so the gradient integration we propose can also be seen as a weighted sum method. Unlike the Lagrangian methods, it automatically calculates the appropriate weight by solving the QP problem at each gradient step, so it has the advantage of being able to find the policy update direction more stably. 
# **References** [R1] Achiam et al. "Constrained policy optimization." ICML, 2017. [R2] Xu et al. "CRPO: A new approach for safe reinforcement learning with convergence guarantee." ICML, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. The reply fully addressed the problems in my comments. I decided to increase my score to 6. The things preventing me from further improving my score are: (1) The limitations of trust-region-based second-order algorithms also apply to this paper. Especially for multiple-constraint algorithms, which are highly likely to lack local convexity and feasibility, the assumptions might be too harsh. (2) TRPO, TD(lambda), and CVaR are all existing techniques. --- Reply to Comment 1.1.1: Title: Response to Reviewer Cyuw Comment: Thanks for your reply. We are glad that our answers could help with the questions in the comments.
Summary: This study focuses on the multiple constraints setting in safe reinforcement learning, and an interesting method is proposed by leveraging gradient integration. Moreover, the feasibility of multi-constraint problems is addressed, and a TD distribution method is introduced to decrease the estimation bias. Strengths: 1. Multiple constraints are considered. 2. The code is provided. 3. The writing quality is good. Weaknesses: 1. Some papers are not investigated, e.g., [1] and [2]. [1] Gu, S., Yang, L., Du, Y., Chen, G., Walter, F., Wang, J., ... & Knoll, A. (2022). A review of safe reinforcement learning: Methods, theory and applications. arXiv preprint arXiv:2205.10330. [2] García, J., & Fernández, F. (2015). A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1), 1437-1480. 2. The experimental results do not convince me; as shown in Figure 3, other baselines also present better performance than this study, e.g., the WCSAC method. 3. Could you compare the method with PPO Lagrangian? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Can we handle multiple constraints by averaging the multiple constraints into one constraint? 2. The essential contribution is to design the distributional critics with low biases. How about distributional actors for estimating different types of constraints? 3. What is the difference between PCPO and this method? In PCPO, they also make projections to optimize reward and cost. 4. Why does $J(\pi)$ take the form shown on page 3, line 93? 5. Why do we need a Shannon entropy term? 6. How does the study arrive at Equations (4) and (5)? Could the study provide analysis to prove them? 7. Since the method claims the algorithm is efficient and its computation efficiency is better than other baselines, could the study provide the sample complexity? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: 1. The cost is assumed as convex. However, in most cases, the constraints may be nonconvex. 2. The computation complexity and sample complexity should be provided to prove the effectiveness of this study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer HJro for the feedback and thorough review of our work. We respond to reviewer HJro's comments and questions below. # Weakness: Some papers are not investigated. We will add the two papers mentioned by the reviewer to the safe RL part of the related work section. # Weakness: Experimental results in Figure 3. As mentioned in Q1 of the general response, it can be observed that SDAC is better than WCSAC in terms of the risk-averse property and multi-constraint handling. Please see the general response. # Weakness: Comparison with PPO-Lagrangian. We completed the experiments of PPO-Lagrangian (PPO-L) in the Safety Gym and Cassie tasks, and the experimental results can be found in the attached pdf of the global response. Since PPO-L is an on-policy algorithm, it can be confirmed that its convergence speed is slow compared to other algorithms. Also, like other Lagrangian methods, PPO-L has the disadvantage of unstable training because multipliers and policies are updated simultaneously. # Q1: Merging multiple constraints into a single constraint. It is difficult to handle multiple constraints as a single constraint by averaging because the feasible regions of the multiple constraints, $F_k(\pi) \leq d_k$ $\forall k$, and the averaged constraint, $\sum_kF_k(\pi)/K \leq \sum_k d_k/K$, are quite different. Matching the feasible region using a single constraint requires designing a new integrated cost function rather than taking the average. Still, as mentioned in the introduction as a drawback of RL, this process is time-consuming. # Q2: How about distributional actors for estimating different types of constraints? SDAC can be extended to use other types of constraints using the distributional critic, but it is essential to find the upper bound of the constraint to apply the trust region method. 
No study has yet derived the upper bound of risk-based constraints except for the mean-std constraint, so we leave this extension as future work. # Q3: Difference from PCPO. PCPO only deals with a single constraint but can be extended to multiple constraint settings. Nevertheless, the most significant difference from our method is that PCPO does not guarantee that the policy will converge to the feasible region in the infeasible starting case, whereas our method does. However, we did not cite the PCPO paper, so we will add this citation. # Q4: Why does $J(\pi)$ take this form on page 3? If we replace $Z_R^\pi$ with its definition in $J(\pi)$, it becomes $\sum_t \gamma^t(R(s_t, a_t, s_{t+1}) + H(\pi(\cdot|s_t)))$, which is equivalent to the entropy-regularized RL problem [R1]. We will explain why the entropy term is added in the answer to the next question. # Q5: Why do we need a Shannon entropy term? The reason for adding the entropy term is to improve exploration, as in many existing RL papers, including Soft Actor-Critic (SAC) [R1], and it is not a required term. We additionally discussed in Appendix A.7 that the trust region method can still be used even if an entropy term is added. # Q6: How does the study arrive at Equations (4) and (5)? Since $\mathrm{Std}[Z_{C_k}^\pi]=\sqrt{\mathbb{E}[(Z_{C_k}^\pi)^2] - \mathbb{E}[Z_{C_k}^\pi]^2} = \sqrt{J_{S_k}(\pi) - J_{C_k}(\pi)^2}$, we can obtain equation (4) by substituting the Std term in equation (2). Equation (5) defines the surrogate functions for the objective and mean-std constraints. The condition for the surrogate function in the trust region method is that the difference between the original function and the surrogate should be bounded by the trust region size. For the objective's surrogate, we derive the bound in Appendix A.7, and the bound for the mean-std constraint's surrogate is derived in [R2]. # Q7: Sample complexity. We analyzed the sample complexity of the proposed method in Q2 of the general response. 
Please refer to the general response. # References [R1] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018. [R2] Kim, Dohyeong, and Songhwai Oh. "Efficient off-policy safe reinforcement learning using trust region conditional value at risk." IEEE Robotics and Automation Letters 7.3 (2022): 7644-7651. --- Rebuttal Comment 1.1: Comment: Thanks for your response, I plan to upgrade the score. --- Reply to Comment 1.1.1: Title: Response to Reviewer HJro Comment: We appreciate your thoughtful feedback. Thank you for raising your score.
Summary: The paper tries to address the problem of safe RL with multiple constraints with a safe distributional actor-critic (SDAC) approach. The approach includes a gradient integration method to manage the infeasibility issues in multi-constrained problems and a TD$(\lambda)$ target distribution to estimate risk-averse constraints with low biases. Experimental results show that the proposed approach outperforms the baselines in both single- and multi-constrained tasks. Strengths: Safe RL with multiple constraints is an important problem. The presented approach uses a gradient integration method to address the infeasibility issues of trust-region-based algorithms with theoretical guarantees and proposes a TD$(\lambda)$ loss to reduce the estimation bias of the critics. Sufficient experimental results and ablation studies are provided to support the claims of the paper. Weaknesses: 1. The writing of the paper can be improved. There is little background information or intuition introduced, which makes the paper hard to read. For example, in Section 2, when introducing the trust-region method with a mean-std constraint, it would be better if the authors could introduce the intuition behind the equations instead of just putting the equations there, because [Kim and Oh, 2022a] is not a very well-known paper. Also, in Section 3.1, little information is provided about the intuition of equation (9). 2. See “Questions”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Since the performance of the algorithm seems to depend a lot on $\alpha$ and $\lambda$, how can we choose them in the experiments? 2. Since the algorithm is designed to deal with multiple constraints, could the proposed algorithm deal with constraints with different priorities? For example, collisions with humans should be avoided prior to collisions with obstacles. 3. 
In Figure 3(a), WCSAC with both $\alpha=0.25$ and $\alpha=1.0$ satisfies the cost threshold in all environments while in some environments it also has higher rewards compared with SDAC. Could the authors provide some intuition about these observations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discuss the limitations well in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer dXag for the feedback and thorough review of our work. We respond to reviewer dXag's suggestions and comments below. # Weakness: The writing of the paper can be improved. As the reviewer commented, in constructing the subproblem in equation (6), many parts are enumerated without formal meaning, making it difficult to read. We will supplement the explanation of why the subproblem is established and what is necessary to establish it (the square value and surrogate functions). In addition, we attached Figure 1 to improve the understanding of equation (9), but the intuition about why the constraints should be truncated to the trust region was missing. The reason for truncating is to make the gradient integration method invariant to the gradient scale. Otherwise, a dominant policy gradient may form for constraints with large gradient scales. We will also supplement this explanation. # Q1: How can we choose $\lambda$ and $\alpha$ in the experiments? $\lambda$ adjusts the balance between the bias and variance of the critics. We have experimented with various values of $\lambda$ in Section 5.3 and Appendix A.5 and found that a value between 0.9 and 1.0 is good, as in other TD($\lambda$)-based papers [R1]. Thus, we recommend setting $\lambda$ between 0.9 and 1.0. To give an intuition for setting $\alpha$, we can assume that the returns follow a Gaussian distribution. Then, the mean-std constraint amounts to setting the probability that the return will be lower than the threshold. Thus, if you want the cost return to be smaller than the threshold with a probability of $p$, you can find $\alpha$ from $p = \Phi(\phi(\Phi^{-1}(\alpha))/\alpha)$, where $\Phi$ and $\phi$ are the cdf and pdf of the standard normal distribution. Also, we wrote tips for hyperparameter tuning in Appendix B, so please refer to it for the details. # Q2: Dealing with constraints with different priorities. 
Priority can be set indirectly through the thresholds of constraints. For the reviewer's example case, it can be implemented by setting the constraint for collision with people and the constraint for collision with obstacles separately and setting the threshold of the people constraint lower than the obstacle constraint. By giving different thresholds, we can generate a policy in which the probability of collision with a person is lower than the probability of collision with an obstacle. However, in decision-making situations where you have to choose between a collision with a person and an obstacle, it is challenging to always prioritize the choice of collision with an obstacle over a person. # Q3: Comparison of SDAC and WCSAC results in Figure 3. We presume that since WCSAC based on SAC [R2] takes a more significant number of gradient steps than SDAC based on TRPO [R3] (about 1000 times more), WCSAC performed better than SDAC on easy tasks. (All algorithms have high scores in the car-goal and point-button tasks, indicating that the tasks are easy.) However, as mentioned in Q1 of the global response, WCSAC does not significantly change the number of constraint violations even when the $\alpha$ value is adjusted, so it can be seen that SDAC is better in terms of the risk-averse property. # References [R1] Schulman, John, et al. "High-dimensional continuous control using generalized advantage estimation." arXiv preprint arXiv:1506.02438 (2015). [R2] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018. [R3] Schulman, John, et al. "Trust region policy optimization." International conference on machine learning. PMLR, 2015. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their reply. I will keep my original score. --- Reply to Comment 1.1.1: Title: Response to Reviewer dXag Comment: Thanks for your response. 
We appreciate the positive review.
Summary: The paper presents a safe reinforcement learning (RL) algorithm called SDAC for handling multiple constraints in safety-critical robotic tasks. SDAC incorporates risk-averse constraints and makes two key contributions: a gradient integration method for handling infeasibility issues and a TD($\lambda$) target distribution for estimating risk-averse constraints. Experimental results show that SDAC outperforms safe RL baselines, achieving fewer constraint violations and faster constraint satisfaction. Strengths: In general, the work is solid and contributes quite some novel ideas for safe RL. - The formulation of safe RL problem with multiple constraints is very important for real applications. - The infeasibility issues in constrained optimization are explicitly considered. - The risk-averse safety measures are practical to safety-critical problems. Weaknesses: - While the measures have low bias, it is still important to consider the bias when making decisions. - The presentation of the technical details, particularly the quantile regression part, is challenging to comprehend. It might be beneficial to refer to Dabney's paper, which provides a clearer explanation of the related content. - From the empirical analysis, the proposed method cannot ensure safety during the early stage of training, despite the theoretical guarantees. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the purpose of adding a policy entropy term in line 91? Is it necessary, given the use of trust-region methods, and how is it maximized? - Is there an implicit assumption that the distribution is Gaussian considering the risk measure in Equation 2? - What is the reason for balancing bias and variance? Even though the bias is low, are there still some negative effects? How can we mitigate them, especially in the context of risk? - How will the algorithm work if the constraints completely conflict with each other? 
- When the cost signal is binary, multiple constraints can be taken as one constraint. Then, a method like WCSAC or CPO can also be used. In this case, what are the main advantages of SDAC? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the limitations are clearly stated at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer A8gA for the feedback and thorough review of our work. We appreciate that the reviewer commented that the proposed method is novel and solid. We respond to the reviewer's comments and questions below. # **Weakness: Presentation of the quantile regression can be improved.** As the reviewer commented, we found that the explanation of the quantile regression part can be difficult to follow due to the omitted intermediate steps in equations (11) and (12). We will include the derivation of each equation in detail and explain the intuition of each equation. # **Weakness: Safety violation during early stage of training.** As we mentioned in the introduction, infeasible starting cases are typical if there is no prior knowledge. In our experiments, we randomly initialize the policy networks, resulting in the infeasible starting case. We can use the gradient integration method to recover the policy to the feasible region to handle the infeasible starting case, which is one of the main contributions of our paper. # Q1 The reason for adding the entropy term is to improve exploration, as in many existing RL papers, including Soft Actor-Critic (SAC) [R1], and it is not a required term. We additionally discussed in Appendix A.7 that the trust region method can still be used even if an entropy term is added. To maximize the entropy term, we can construct the objective value as $Q(s, a) - \beta\log{\pi(a|s)}$ and update the policy using backpropagation and the reparameterization trick (please see the SAC paper [R1]). # Q2 There is no assumption of a Gaussian distribution in equation (2). Nevertheless, if we assume the return follows a Gaussian distribution, equation (2) becomes equivalent to conditional value-at-risk (CVaR), a well-known risk measure in finance. # Q3 In safe RL, since the safety of the policy is evaluated by the cost critics, a policy with the desired safety level can be generated only by training the critics with low biases. 
However, since the variance increases as the bias decreases in the TD($\lambda$) approach, the reward performance may decrease at $\lambda=1$, as in the ablation study (Figure 3.b). For this reason, balancing the variance and bias is essential. To manage this balance, setting $\lambda$ appropriately is important, and according to our ablation results, a value between 0.9 and 1.0 works well. In the context of risk, we can also increase the training batch size since risk measures are more sensitive to sample noise. # Q4 If the constraints conflict with each other, the solution of equation (9) does not exist. Then the policy is not updated, so the conflict will make the policy get stuck at one point. However, according to Lemma A.2 in the Appendix of our paper, the solution of equation (9) always exists under our assumption that the constraints are convex and the policy set that satisfies the constraints is not empty. Also, in our experiments, no conflict occurred because the gradients of the constraints were calculated stochastically. (Since the dimension of the gradient is much larger than the number of constraints, there is little chance of conflicts in the case of stochastic gradient calculation.) # Q5 As the reviewer commented, we can merge multiple constraints into a single constraint by the "and" operator in the case where each cost function is binary. However, if the cost function is integrated in this way, it is difficult to establish a unified safety level (or threshold) that corresponds to the original constraint settings. We can give an example from the locomotion tasks, which have three constraints regarding body balance, CoM height, and foot timing. Since the body balance and the CoM height significantly affect the robot's stability, it is necessary to increase the safety level of the corresponding constraints. However, the constraint on the foot timing is to prevent the robot from standing still and has little relation to safety. 
Thus, if the level of this constraint is increased, the robot control may become unstable. Even though the integrated cost function can be defined as the result of the "and" operation of all cost functions, finding a proper safety level (or threshold) is tricky. For this reason, SDAC with multiple constraints has the advantage of being able to set the safety level of each constraint independently. # References [R1] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my questions. I am keeping a positive recommendation for the paper and have no further questions. --- Reply to Comment 1.1.1: Title: Response to Reviewer A8gA Comment: Thank you for your response. We appreciate the positive feedback on our paper.
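For reference on the $\lambda$ trade-off raised in Q3, a minimal (non-distributional) TD($\lambda$) target recursion can be sketched as follows. The function name and toy numbers are our own, not the paper's implementation; $\lambda=0$ gives the one-step TD target (low variance, high bias) and $\lambda=1$ the Monte Carlo return (high variance, low bias).

```python
import numpy as np

def td_lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Backward recursion for TD(lambda) targets:
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})."""
    T = len(rewards)
    G = np.zeros(T)
    next_g = values[T]  # bootstrap from V(s_T)
    for t in reversed(range(T)):
        next_g = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * next_g)
        G[t] = next_g
    return G

# lam=1 recovers the discounted Monte Carlo return plus the bootstrapped tail
r = np.array([1.0, 0.0, 2.0])
v = np.array([0.5, 0.4, 0.3, 0.2])
mc = r[0] + 0.99 * r[1] + 0.99**2 * r[2] + 0.99**3 * v[3]
assert np.isclose(td_lambda_returns(r, v, lam=1.0)[0], mc)
```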
Rebuttal 1: Rebuttal: # General response We thank all reviewers for their valuable comments and suggestions. In the following, we respond to the comments on the comparison with WCSAC and the sample complexity. ### **Q1. Comparison with WCSAC.** We first examine the experiments in Section 5.1. For $\alpha=1.0$ (risk-neutral constraint setting), SDAC achieves higher reward sums in the point-goal and car-button tasks, whereas WCSAC achieves higher reward sums in the other two tasks. The same results are observed at $\alpha=0.25$ (risk-averse setting). However, risk-averse properties should also be taken into account in interpreting the results. WCSAC uses risk-averse constraints like SDAC, so it should be able to significantly reduce the number of constraint violations at $\alpha=0.25$, but it failed to do so. (For an analysis of this reason, please refer to Section 5.1 of the paper.) From this observation, SDAC has better risk-averse properties than WCSAC. Moreover, in the multiple constraints setting (Section 5.2), whether in terms of reward or constraint satisfaction, SDAC outperforms WCSAC. Hence, it can be concluded that SDAC is better than WCSAC in both the risk-averse and multi-constraint settings. ### **Q2. Sample complexity.** To analyze the sample complexity, we consider tabular MDPs and use softmax policy parameterization (see [R1]). It is challenging to analyze the sample complexity of TRPO-based methods since their stepsize is not fixed, so using natural policy gradient (NPG) methods is common, as done in [R1]. Thus, we introduce the NPG version of our method as follows. 1. (Recovery) If the constraints are not satisfied at step $t$, we take a recovery step using the gradient integration as: $g^*=\mathrm{argmin}_g\frac{1}{2}g^THg$ s.t. $g_k^Tg+c_k\leq 0$ $\forall k\in$ {$k|F_k(\pi_t;\alpha)>d_k$}, $\psi\_{t+1}=\psi_t+\beta g^*/||g^*||\_2$, where $\beta$ is a stepsize and other notations are the same as in the paper. 2. 
(Normal) If the constraints are satisfied at step $t$, we store $t$ in a set $\mathcal{N}$ and maximize the objective as: $\psi_{t+1} = \psi_{t} + \beta A_R^{\pi_t}/||A_R^{\pi_t}||_2,$ where $A_R^{\pi_t}$ is the vector expression of the advantage function $\in \mathbb{R}^{|S| |A|}$. The derivation of the gradient can be found in [R1]. According to Lemma A.2 in our paper, $g^*$ in the recovery step always exists and can be expressed as: $g^* = \sum_k \lambda_k H^{-1}g_k = \sum_k \lambda_k A_{C_k}^\pi$, where $\lambda_k \geq 0$. #### **Analysis of worst-case time to satisfy all constraints** During the recovery step, the policy can be expressed as: $$\psi\_{t+1}=\psi_t-\beta\sum_k\lambda^t_kA\_{C_k}^t/W_t,\ W_t:=||\sum_k\lambda^t_kA\_{C_k}^t||\_2,\ \pi\_{t+1}(a|s)=\pi_t(a|s)\exp{\left(-\frac{\beta}{W_t}\sum_k\lambda^t_kA\_{C_k}^t(s, a)\right)}/Z_t(s),$$ where $Z_t(s)$ is a normalization factor. We can obtain the following by using Lemma 6 in [R1]: $$\sum_k\lambda^t_k(F_k(\pi\_{t+1})-F_k(\pi_t))/W_t=\frac{1}{1 - \gamma}\mathbb{E}\_{s\sim\nu_\rho}\left[\sum_a\pi\_{t+1}(a|s)\sum_k\lambda^t_kA\_{C_k}^t(s, a)/W_t\right]\leq-\frac{1}{\beta}\mathbb{E}\_{s\sim\rho}\left[\log{Z_t(s)}\right],$$ where we abbreviate $F_k(\pi_t;\alpha)$ as $F_k(\pi_t)$. We can also obtain the following by using Lemma 7 in [R1]: $$\sum_k\lambda^t_k(F_k(\pi_*)-F_k(\pi_t))/W_t\geq-\frac{1}{\beta(1-\gamma)}\mathbb{E}\_{s\sim\nu_*}\left[D_\mathrm{KL}(\pi_*||\pi_t)-D_\mathrm{KL}(\pi_*||\pi\_{t+1})\right]-\sum_k\lambda^t_k\frac{2\beta C_\mathrm{max}}{(1-\gamma)^2W_t},$$ where $\pi_*$ is an optimal policy and $C_\mathrm{max}$ is the maximum value of the reward and costs. If $\lambda^t_k > 0$, then $F_k(\pi_{t}) - F_k(\pi_*) > \zeta$. Thus, $\sum_k \lambda^t_k(F_k(\pi_*) - F_k(\pi_{t})) \leq -\zeta\sum_k \lambda^t_k$. 
If the policy does not satisfy the constraints until step $T$, the following inequality holds by summing the above inequalities from $t=0$ to $T$: $$\beta(1-\gamma)(\zeta-\frac{2\beta C_\mathrm{max}}{(1-\gamma)^2})\sum_{t=0}^T\sum_i\lambda^t_i/W_t\leq\mathbb{E}\_{s\sim\nu_*}\left[D_\mathrm{KL}(\pi_*||\pi_0)\right].$$ Let us denote $\frac{1}{T}\sum_{t=0}^T \sum_i \lambda^t_i / W_t$ by $\mathbb{E}\_t[\sum_i\lambda^t_i/W_t]$; we can also get $W_t=||\sum_k \lambda^t_kA\_{C_k}^t||\_2\leq\sum_k\lambda^t_k2|S||A|C_\mathrm{max}/(1-\gamma)$. Then, the maximum $T$ can be expressed as: $$ T\leq\frac{D_\mathrm{KL}}{\beta(1-\gamma)(\zeta-\frac{2\beta C_\mathrm{max}}{(1-\gamma)^2})\mathbb{E}\_t[\sum_i\lambda^t_i/W_t]}\leq\frac{2|S||A|C_\mathrm{max}D_\mathrm{KL}}{\beta (1-\gamma)^2\zeta-2\beta^2C_\mathrm{max}}=:T_\mathrm{max}, $$ where we abbreviate $\mathbb{E}\_{s\sim\nu_*}\left[D_\mathrm{KL}(\pi_*||\pi_0)\right]$ as $D_\mathrm{KL}$. Finally, the policy can reach the feasible region within $T_\mathrm{max}$ steps. #### **Convergence rate** For the normal step, we can also obtain the following inequality, as done in the recovery step: $$ \frac{(1-\gamma)^2\beta}{2C_\mathrm{max}}(J(\pi_*)-J(\pi_t))-\beta^2\leq|S||A|\mathbb{E}\_{s\sim\nu_*}\left[D_\mathrm{KL}(\pi_*||\pi_t)-D_\mathrm{KL}(\pi_*||\pi\_{t+1})\right]. $$ If we sum up the above inequalities from $t=0$ to $T$, we can get the following inequality: $$ \frac{(1-\gamma)^2\beta}{2C_\mathrm{max}}(\sum_{t\in\mathcal{N}} (J(\pi_*) - J(\pi_t)) + \zeta (T - |\mathcal{N}|)) \leq |S||A|D_\mathrm{KL} + \beta^2T, $$ and according to Lemma 9 in [R1], the set $\mathcal{N}$ is not empty if $T$ is large enough. Then, if we schedule $\beta = 2C_\mathrm{max}\sqrt{|S||A|/T}$ and $\zeta=2(D_\mathrm{KL} + 4C_\mathrm{max}^2)\sqrt{|S||A|/T}$, we obtain the convergence rate: $J(\pi_*)-\mathbb{E}\_{t\in\mathcal{N}}[J(\pi_t)] = 2\sqrt{|S||A|/T}(D_\mathrm{KL} + 4C_\mathrm{max}^2)/(1 - \gamma)^2 = \mathcal{O}(1/\sqrt{T})$. 
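As a sanity check of the multiplicative policy update used in the recovery step, $\pi_{t+1}(a|s)\propto\pi_t(a|s)\exp(-\frac{\beta}{W_t}\sum_k\lambda^t_kA_{C_k}^t(s, a))$, the toy snippet below (our own, with made-up advantage values for a single state) confirms that probability mass shifts toward actions with a lower weighted cost advantage.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4
pi = np.full(n_actions, 1.0 / n_actions)   # pi_t(.|s): uniform over 4 actions
adv = rng.standard_normal(n_actions)       # stand-in for sum_k lam_k * A_Ck(s, .)
beta, W = 0.1, np.linalg.norm(adv)

logits = np.log(pi) - (beta / W) * adv     # unnormalized log pi_{t+1}(.|s)
pi_next = np.exp(logits) / np.exp(logits).sum()

# the action with the lowest weighted cost advantage gains probability
assert pi_next[np.argmin(adv)] > pi[np.argmin(adv)]
assert np.isclose(pi_next.sum(), 1.0)
```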
### **References** [R1] Xu, Tengyu, Yingbin Liang, and Guanghui Lan. "CRPO: A new approach for safe reinforcement learning with convergence guarantee." International Conference on Machine Learning. PMLR, 2021. Pdf: /pdf/b9b72b54c2a6c6437433288c75324b12a7c1a368.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Deep Contract Design via Discontinuous Networks
Accept (poster)
Summary: This paper studies the problem of contract design, focusing on a scenario where a single principal aims to design the reward structure for different outcomes, followed by a self-interested agent. The principal's utility function, which depends on the agent's best-response strategy, is a piecewise affine function with points of discontinuity at the boundaries. Based on that, the authors introduce a novel approach by using a discontinuous ReLU (DeLU) network to model the principal's utility function. To infer the optimal contract using a trained DeLU network, the authors propose two inference methods: a Linear Programming (LP)-based approach and a gradient-based approach. The authors provide empirical evidence through various experiments to showcase the optimality, sample complexity, and time efficiency of their DeLU network in comparison to the conventional ReLU network, as well as the effectiveness of their proposed inference techniques. Strengths: 1. The idea of representing the principal's utility function using a neural network is novel and insightful. It paves the way for a new research direction in contract design that harnesses the power of neural networks. 2. The motivation behind the DeLU network is well-founded. The authors thoroughly analyze the piecewise affine nature of the principal's utility function, including its discontinuity at boundary points. Subsequently, they construct the DeLU network, which aligns with and preserves these crucial properties. 3. The paper exhibits excellent organization throughout, seamlessly integrating theoretical analysis, methodology descriptions, and the experimental section. The logical flow makes it easy to follow and comprehend the content. Weaknesses: From my view, one significant weakness of the paper is the lack of motivation for utilizing deep learning in the context of optimal contract design. Why is deep learning a relevant and beneficial approach in this domain? 
To enhance the paper, it would be essential to address this gap and explain the motivation behind incorporating deep learning techniques for contract design. For instance, researchers have explored deep learning for optimal auction design because the theoretical research on optimal auctions has encountered bottlenecks. Similarly, it is crucial to identify the motivation for leveraging deep learning in contract design. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As I mentioned in 'Weakness', what is the significance of using deep learning in optimal contract design? why do we care about this? 2. Why model the utility function and infer the optimal contract instead of directly modeling the contract and optimizing it based on the utility function? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (Q1) _`What is the significance of using deep learning in optimal contract design? Why do we care about this?`_ The reason, similar to the case of auction design, is that previous theoretical and empirical work has encountered bottlenecks. A particularly interesting angle is the computational one, given that many contracting settings exhibit combinatorial structure, in which case vanilla LP-based approaches fail to be efficient. Indeed, a recent literature in TCS has embarked on designing worst-case poly-time (approximation) algorithms for different combinatorial domains (e.g., [1, 2, 3, 4]). Our approach provides a scalable, general-purpose, and beyond-worst-case approach for computing (near-)optimal contracts in such settings. Considering learning theory, whereas there are exponential worst-case sample complexity bounds coming from recent work [5] (an exponential number of samples may be needed to learn an approximately optimal contract, causing severe difficulty when the number of outcomes is large), we show empirically that we can get good results with a relatively small number of samples. We see this as a strength of the proposed framework, which is simple to implement, versatile, and general purpose. Getting a better theoretical understanding beyond the worst case, and in the offline setting of the present paper, is an open problem. In summary, _no previous method can scale well to the general case of contract design_, motivating us to explore deep learning methods. Moreover, _we expect deep contract design to become more important in the near future_, considering both the digital economy and the likely near-term application of AI for automating economic decision making (e.g., contracting with an LLM to plan a vacation, with contracts on outcomes, and the application of smart contracts in DeFi, for example for trading and loans). 
It seems likely that technological approaches can find application to the design of contracts in these kinds of settings [6, 7]. [1] M. Babaioff, M. Feldman, N. Nisan. Combinatorial Agency. EC'06 [2] P. Duetting, T. Roughgarden, I. Talgam-Cohen. The Complexity of Contracts. SODA'20 [3] P. Duetting, T. Ezra, T. Kesselheim, M. Feldman. Combinatorial Contracts. FOCS'21 [4] P. Duetting, T. Ezra, T. Kesselheim, M. Feldman. Multi-Agent Contracts. STOC'23 [5] Zhu, B., Bates, S., Yang, Z., Wang, Y., Jiao, J. and Jordan, M.I., 2023. The sample complexity of online contract design. ACM EC 2023 [6] Horton, J.J., 2023. Large language models as simulated economic agents: What can we learn from homo silicus? (No. w31122). National Bureau of Economic Research. [7] Park, J.S., O'Brien, J.C., Cai, C.J., Morris, M.R., Liang, P. and Bernstein, M.S., 2023. Generative agents: Interactive simulacra of human behavior. arXiv:2304.03442 &nbsp; > (Q2) _`Why model the utility function and infer the optimal contract instead of directly modeling the contract and optimizing it based on the utility function?`_ (1) A first reason is to handle a black-box model of an agent, rather than explicitly modeling its actions. A direct optimization approach would need to be able to "push gradients through" the agent's decision function, along with the effect of this decision (i.e., action) on the world (i.e., outcomes). For example, if using an approach similar to RochetNet/MenuNet, then the network would model a set of choices for an agent, with each choice corresponding to an action. As a result, the agent's actions would need to be explicitly modeled in the network architecture, which becomes infeasible, for example, with an exponential number of actions. In our framework, the agent's actions are implicit, and it is only the "contract to utility" function that is modeled by the network. (2) Direct optimization has been used for auction design.
However, the difference there is that auction design is done against a _distribution_ of agent types. _This makes the revenue function in auction design continuous_, with a continuous measure of the type distribution "peeling off" and choosing a different menu choice as the specification of the choices changes. Here, we optimize against a single agent type, which makes the principal's utility function discontinuous. For this reason, conventional gradient-based optimization would not be expected to converge to even locally-optimal contracts because the gradients at the boundary are not well defined.
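To make the discontinuity argument above concrete, here is a toy two-action contract instance in Python (all numbers are ours and purely illustrative, not from the paper): the principal pays $t$ on the good outcome, the agent best-responds, and the principal's utility jumps upward exactly where the best response switches, while the slope on each linear piece is negative, so plain gradient ascent moves away from the optimum.

```python
def principal_utility(t):
    """Toy 2-action instance: the contract pays t on the good outcome.
    Action 1: cost 0.0, P(good) = 0.2; Action 2: cost 0.5, P(good) = 0.9."""
    u_a1 = 0.2 * t            # agent's expected utility under action 1
    u_a2 = 0.9 * t - 0.5      # agent's expected utility under action 2
    p = 0.9 if u_a2 >= u_a1 else 0.2   # agent best-responds (ties favor the principal)
    return p * (1.0 - t)      # principal earns 1 on the good outcome and pays t

# The best response switches at t = 5/7: the utility jumps up there,
# yet both linear pieces slope downward, so gradient steps never reach it.
switch = 5.0 / 7.0
```

The maximum sits exactly at the jump, which is why boundary points of the pieces (in the spirit of Lemma 3), rather than interior gradients, determine the optimal contract.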
Summary: The authors focus on the problem of contract design. In this problem, there is an agent which can take some costly action, each action resulting in a distribution over possible outcomes. A principal gets utility based on outcomes, and incentivizes the agent to take actions which will benefit them by transferring them a payment that depends on the outcome. The agent chooses the action that maximizes their expected received payment minus cost; the principal gets a utility equal to the utility of the outcome minus the required payment. In mechanism design for selling goods (e.g., auctions), there is a recent thread of work on “differentiable economics” for learning good mechanisms. The authors to some extent work within this broad area, although the techniques they present depart from prior work in interesting ways. In particular, rather than using a differentiable neural network to represent the mechanism or the agent’s utility as a function of agent types, they instead represent the principal’s utility as a function of the mechanism (contract) itself. The authors present a new network architecture (DeLU) well-suited to learning to represent principal utilities (which are always piecewise affine, though unlike ReLU networks, not necessarily continuous). By training these networks in a supervised manner, the authors approximate the utility of a given contract. They then present various techniques to optimize over network inputs to find a high-utility contract, and show that these approaches do in fact give good-performing contracts. Strengths: The problem of contract design is relevant and interesting, and the authors tackle it in a novel way. The new network architecture and new techniques for optimizing over its input are interesting technical contributions. The experiments are carefully done and support the empirical claims made.
Weaknesses: A major concern I have is that essentially, the authors take many examples of contracts, use them to supervised-learn an approximate utility function, and then optimize on that utility function. Does this reliably work much better than just taking the best contract observed from the training data, skipping this intermediate step? This general problem also shows up in other mechanism design problems where utility functions are learned via supervised learning -- does learning the utility function and using it significantly improve downstream performance compared to just using the fixed training dataset as a "pointwise" utility function? I don't think this is a trivial issue, and if unaddressed it is hard to tell whether the paper's contribution serves a purpose or not. This is the main reason for my low score, and if it can be addressed and no other problems emerge during the discussion phase, I would significantly raise my score. After author response: this issue is satisfactorily addressed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Figure 1, although a valuable and informative plot, is unpleasant looking. Now that you are not up against the NeurIPS deadline, it might make sense to take the time to make it better looking (e.g., change the strange camera perspective, make sure axes are not cut off, etc.). How would this approach differ from directly doing gradient-based optimization in the space of contracts, similar to RochetNet/MenuNet for auctions? It is possible to embed ReLU networks in a mixed-integer program and globally optimize over their inputs. This approach, along with some preprocessing and pruning, has been shown to work quite well even at relatively large scales (e.g. Tjeng et al., https://arxiv.org/abs/1711.07356), especially since good MIP solvers are so fast these days. Could this technique be used effectively to globally optimize over DeLU networks too?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately discuss limitations of their work, except for the issue mentioned in "Weaknesses". I think better discussion of the size of problem instances in which their approach can be used would be warranted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (Major concern) _`Does learning the utility function and using it significantly improve downstream performance compared to just using the fixed training dataset as a "pointwise" utility function?`_ Thanks for this question. We present the results from a new set of experiments to demonstrate that learning and then maximizing utility functions finds significantly better contracts than those in the training samples, especially when the number of outcomes is large. __Experiments.__ We first compare the utilities of the best contracts in the training datasets and the contracts found by DeLU. In Table 2, we fix the number of actions $n$ to 50 and increase the number of outcomes $m$ from $2^5$ to $2^8$. As in other experiments, for each problem size, 12 combinations of $\alpha_p$ and $\beta_p$ are tested, and we report the median optimality (normalized principal utility) and standard deviation. __Table 2. Optimality of the best training sample and DeLU contract.__ The number of actions $n$ is fixed to 50. |# Outcomes|32|64|128|256| |-|-|-|-|-| |Best Training Samples|85.83±2.05%|78.72±2.54%|72.04±2.24%|65.41±2.07%| |DeLU|92.62±3.81%|97.22±3.71%|88.64±6.93%|94.14±6.27%| When the number of outcomes is 256, the best training sample has an optimality of 65.41%, significantly lower than the 94.14% achieved by DeLU. Moreover, the optimality of the best training sample decreases as the number of outcomes increases, while our method scales well with the number of outcomes. DeLU also performs substantially better than the best training sample when we increase the number of actions (Table 3). In Figure 1 of the PDF attached in the general response, we observe similar results for these two methods in more problem sizes. __Table 3. Optimality of the best training sample and DeLU contract.__ The number of outcomes $m$ is fixed to 50.
|# Actions|32|64|128|256| |-|-|-|-|-| |Best Training Samples|73.34±3.41%|77.37±3.20%|73.16±3.19%|70.68±2.29%| |DeLU|96.44±5.68%|89.84±9.07%|95.03±5.31%|94.30±3.07%| The advantage of supervised learning is more significant in very large-scale problems. In Table 4, we can see that in problems with 1M outcomes, the performance of DeLU is about 2x the performance of the best training sample. __Table 4. Utility of DeLU contracts / utility of the best contract in the training set in large-scale problems.__ |(# Outcomes, # Actions)|(1K,1K)|(10K,10K)|(50K,2K)|(100K,1K)|(1M,100)| |-|-|-|-|-|-| |DeLU Result / Best Training Sample|166.49%|190.54%|194.72%|197.64%|199.18%| __Why?__ As Lemma 3 shows, the optimal contract is on the boundary of a linear piece (at an optimal contract, the agent is indifferent between the actions corresponding to the adjacent linear pieces). The sub-space in which these optimal contracts reside is of a lower dimension. As the dimension of the contract space grows, the probability of obtaining a random sample in this lower-dimensional sub-space gets closer to 0. &nbsp; > (Q2) _`How would this approach differ from directly doing gradient-based optimization in the space of contracts, similar to RochetNet/MenuNet for auctions?`_ A first difference is that auction design is done against a _distribution_ of agent types, which makes the revenue function in auction design continuous (with a continuous measure of the type distribution "peeling off" and choosing a different menu choice as the specification of the choices changes). Here, we optimize against a single agent type, which makes the principal's utility function discontinuous. For this reason, direct gradient-based optimization would not be expected to converge to even locally-optimal contracts because the gradients at the boundary are not well defined.
A second difference in applying RochetNet/MenuNet is that the choice in the menu would correspond to an action of the agent (it is not choosing from a menu of contracts, but choosing from a menu of actions given a single contract). For this, the actions would need to be explicitly modeled in the network architecture, and this becomes infeasible, for example, when the number of actions increases exponentially in some natural domain parameters. In our framework, the agent's actions are implicit, and it is only the "contract to utility" function that is modeled by the network. &nbsp; > (Q3) _`Could this technique (embed ReLU networks in a mixed-integer program and globally optimize over their inputs) be used effectively to globally optimize over DeLU networks too?`_ Thanks for this inspiring question. Globally optimizing a DeLU network involves the bias term for the last layer. This piecewise bias is generated by a Tanh-activated multi-layer network, which is challenging to embed in a mixed-integer program (MIP). But if we use ReLU as the activation function in this bias network, then it would in principle be possible to test a global MIP method for inference (our intuition is that this would scale less well than the current, piece-oriented inference method). &nbsp; > (Limitations) _`I think better discussion of the size of problem instances in which their approach can be used would be warranted.`_ In Table 4, we test our method empirically on problems with up to 1M outcomes. As the standard LP algorithm becomes intractable, we compare DeLU results to the best training samples, and DeLU performs substantially better. Given sufficient RAM and GPU RAM (we now use 80G RAM and an A100 GPU with 40G GPU RAM, allowing at most 10K training samples for 1M outcome problems), we anticipate applicability of our method to even larger problems.
This is because the problem size depends on the numbers of actions and outcomes, both of which DeLU handles well: (1) the number of actions determines the number of linear pieces, and DeLU can represent an exponentially large number of linear pieces (e.g., $2^{128}$ pieces by a single ReLU layer with 128 units); and (2) the number of outcomes determines the input dimension, and neural networks are known to be good at handling high-dimensional inputs. --- Rebuttal Comment 1.1: Title: Thanks for response Comment: Your new experiments have successfully addressed my major concern. I don't think the full description of the experiments needs to take up space in the main paper, but some mention in the text + full description in the appendix seems important to me. Thanks for your thoughtful responses to the other issues. I will raise my score to "7" as soon as I can figure out where the edit button is on OpenReview. --- Reply to Comment 1.1.1: Title: Thanks for your response! We will incorporate the new experiments into the main paper. Comment: Thank you for your prompt reply! We are pleased that our response addressed your concern. We concur that it is very important to include the new experiments in the main paper. We intend to: - Introduce _`Best Training Sample`_ as a new baseline (around Line 288). - Update Fig. 4 of the main paper to include plots of this baseline. - Provide a (concise) description and analysis of the results (around Line 305). - Describe the setup and results in detail in Appendix C. We hope these revisions can assist readers in better understanding the contribution of our work.
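As context for the MIP question (Q3) in this thread, here is a minimal sketch of the standard big-M encoding of a single ReLU unit used in approaches like Tjeng et al.; the function names are ours and purely illustrative, and we check the encoding by enumerating the binary indicator rather than calling a real solver. With a valid bound $M$ on $|x|$, the constraint set admits exactly $y = \max(x, 0)$.

```python
def feasible_y(x, z, M):
    """Feasible interval for y under the big-M ReLU constraints
    y >= x, y >= 0, y <= x + M*(1 - z), y <= M*z, with binary z."""
    lo = max(x, 0.0)
    hi = min(x + M * (1 - z), M * z)
    return (lo, hi) if lo <= hi + 1e-12 else None

def bigM_relu_solutions(x, M=10.0):
    """Enumerate the binary indicator z and collect all feasible intervals for y."""
    sols = []
    for z in (0, 1):
        iv = feasible_y(x, z, M)
        if iv is not None:
            sols.append(iv)
    return sols
```

For any $|x| < M$, every feasible interval collapses to the single point $\max(x, 0)$, which is what makes global MIP optimization over ReLU networks exact; a DeLU network would additionally need its bias network handled, which is why the rebuttal suggests a ReLU bias network for this route.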
Summary: This paper considers an offline problem of learning an optimal contract through neural networks. The authors propose a novel neural network architecture, called the Discontinuous ReLU (DeLU) network, which models a piecewise affine function with discontinuous boundaries --- a representation that captures the principal (contract designer)'s utility function with respect to different contracts. With the neural network modeling the principal's utility, the paper showcases two methods to determine the optimal contract: linear programming or the gradient-based interior-point method. The gradient-based method is shown to be more efficient in the experiments. Strengths: 1. The paper is well written and easy to follow. The authors provide a good introduction to the contract design problem and the related literature. The paper is also well organized, with a clear description of the proposed method and the experiments. The figures in this paper are very well designed for readers to intuitively understand the idea. 2. The paper proposes a novel neural network architecture for training a piecewise affine function with discontinuous boundaries, as well as gradient-based optimization for inference. This architecture is very interesting and useful for many other applications in multi-agent learning. The authors also provide a good explanation of the architecture and the intuition behind it. 3. The paper provides extensive empirical experiments on simulated data with comparisons of different network architectures and inference methods. Weaknesses: 1. One major concern about the proposed method of this paper is the fact that it attempts to approximate the optimal contract simply as the argmax of the approximated principal’s utility. This does not seem to be a reasonable choice, because there may be a constant gap between the expected agent response and the actual agent response.
In particular, the argmax contract at some boundary of the piecewise affine function in the approximated principal’s utility (as pointed out in Lemma 3) is not robust to small inaccuracies at the boundary --- at least I do not see how the proposed method can ensure that an accurate boundary is learnt. And this necessarily leads to a constant drop in the testing against the actual agent in a large class of problem instances. If this understanding is indeed correct, it is actually surprising that the proposed method can still achieve a good performance in the experiments. The authors should provide some explanation of this issue; I suspect this is due to the special structure in the synthesized data. 2. The paper in general lacks theoretical analysis. It would be interesting to see some theoretical analysis on the generalization error/sample complexity of the proposed network architecture. The authors could also provide some theoretical analysis on the convergence of the gradient-based optimization method, as well as the convergence of the linear programming method. 3. I also expect the paper to include some discussion on the novelty of the neural network architecture. I wonder if similar attempts have been made to design neural architectures for special function structures in the hypothesis class. > Both the first and second concern are resolved given the additional experiments and details provided by the authors in the rebuttal. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please answer my concern in the first point of the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >(Major concern) `One major concern... is that it attempts to approximate the optimal contract as the argmax to the approximated principal’s utility. In particular, the argmax contract at some boundary... is not robustified from the inaccuracies at the boundary.` Thanks for this point, which motivates us to consider additional analyses and experiments to support the following three arguments, which we hope can address your concern. ___The current alignment degree between DeLU and real boundaries can support the good performance of DeLU.___ Although we didn't explicitly discuss this in the paper, we believe the reason for this is that the MSE loss is sensitive to misalignment between real and DeLU boundaries. In particular, the jump of the utility function at boundary points can be arbitrarily large, and thus a slight misalignment between DeLU and real boundaries can lead to a large increase in the MSE loss. We conduct additional experiments to test this viewpoint. For each contract design problem, we randomly sample a large number (50K) of contracts, and check whether they are simultaneously on the real and the DeLU boundary. Specifically, we randomly sample 10K directions for each contract and assess linearity of the real and DeLU utility functions in each direction. If the function is non-linear in some (>20%) directions, we mark the contract as on a boundary. In Table 1, we report the percentage of overlapping boundary points (# samples on both real and DeLU boundaries / # samples on real boundaries). Here we fix the number of outcomes to 5, and increase the number of actions from $2^2$ to $2^8$. For each problem size, we present the median and s.d. of 12 different $(\alpha_p,\beta_p)$ combinations. We observe that DeLU achieves a good degree of boundary alignment. Table 1. DeLU learns boundaries reasonably aligned with real boundaries, and the performance of DeLU is related to the boundary alignment degree.
|# Actions|4|8|16|32|64|128|256| |-|-|-|-|-|-|-|-| |Overlapped boundary points (%)|99.71±12.40|83.09±17.32|74.54±15.20|91.36±10.23|76.44±17.36|98.86±4.41|80.57±12.65| |DeLU optimality (%)|95.81|88.54|91.78|91.13|89.30|93.02|88.87| Shown in the second row is the optimality (normalized principal utility) of DeLU contracts. It can be observed that the boundary alignment degree is related to DeLU (argmax) performance. ___Our method can be extended to support "sub-"argmax, which slightly improves its performance.___ Following the reviewer's comments, going beyond argmax is also possible with our gradient-based inference. When annealing the coefficient of the barrier function $1/t^{(k)}$, we can check whether the actual principal utility increases for each $t$ value. If the utility decreases, we know that we encounter inaccurate boundaries and can stop the inference to seek more robustness. Tested on 8 different problem sizes, we find that "early stop" can increase the optimality by 3.96±0.20% (avg±var). Thanks to the reviewer for inspiring this mechanism, which will be included in our codebase. But we also note that this performance improvement is not very large, in line with DeLU boundaries aligning reasonably with real boundaries. ___DeLU recovers optimal contracts on a range of problems.___ The reviewer asks whether the good performance is due to the special structure in synthesized data. We take DeLU to a nonlinear contract design problem with some real-world economic intuition, presented in STOC 2022 and EC 2019 tutorial ([1], Page 47). DeLU achieves an optimality of 97.83% (2.89/2.95). Furthermore, we consider a wide range of environments in our experiments including correlations of different kinds. The results in Figs 4-6 are presented for 12 different correlation structures. Table 2 (Appendix) gives a breakdown for these structures and shows robust performance. [1] Duetting, P. 
and Talgam-Cohen, I., Contract Theory: A New Frontier for AGT &nbsp; > `The novelty of neural network architecture.` A longstanding challenge in the deep learning community has been approximating discontinuous functions using neural networks. While the Universal Approximation Theorem only guarantees the approximation of continuous functions, many crucial problems involve discontinuity, such as astronomy (e.g., solar flare imaging [2]) and mathematics (e.g., uncertainty quantification [3]). However, establishing a discontinuous network is not an easy task. Discontinuities were considered as early as the 1950s, when neural networks were first proposed with step activation functions [4]. Following this work, most models introduced discontinuity through the use of different discontinuous activation functions. However, optimizing these models is typically more challenging than with continuous activation functions [3], hindering the application of discontinuous networks. To the best of our knowledge, this paper presents the first discontinuous network architecture with continuous activation functions and stable optimization performance. For economics, we expect that our exploration of discontinuous networks can draw attention to problems involving discontinuity, especially in utility functions. For example, this kind of discontinuity also arises in mechanism design when agents, in effect, "choose" from a menu of options. For this reason, we are hopeful that further research on discontinuous network architectures and optimization methods can contribute to advancing AI progress in computational economics. [2] Massa, P., Garbarino, S. and Benvenuto, F., 2022. Approximation of discontinuous inverse operators with neural networks. Inverse Problems, 38(10), p.105001 [3] Della Santa, F. and Pieraccini, S., 2023. Discontinuous neural networks and discontinuity learning. Journal of Computational and Applied Mathematics, 419, p.114678 [4] Rosenblatt, F., 1958.
The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6), p.386 --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The additional experiments on the alignment of the decision boundary boost my confidence in the techniques proposed by this work; to some degree, I find it magical and definitely worth follow-up studies from both theoretical and empirical sides. However, I still have concerns about these new experiment results. Therefore, I am only willing to increase my score if the authors could provide some important details on the new experiments: (1) How many samples are used to train the DeLU networks in each instance? (2) Why is this approach reasonable, "If the function is non-linear in some (>20%) directions, we mark the contract as on a boundary."? Can you formally describe your algorithm to check whether a sampled contract is on the boundary? Below are some of my additional comments: - The high variance of "Overlapped boundary points (%)" seems to suggest that there are instances where the decision boundary alignment is bad, and I still suspect there might be adversarial instances where boundary alignment is arbitrarily bad (e.g., sharp drops at every boundary). I also wonder whether the instances where alignment is bad coincide with the instances where "sub-"argmax substantially improves the performance. These findings should help us understand the exact mechanisms at work when DeLU approximates the agent's decision function. - I suggest the authors include (and highlight) these new studies on boundary alignment in the next version of the paper. The current lemmas of Section 3 are well-known in contract design theory and can be significantly enhanced to motivate the study of decision boundary alignment.
I think the boundary alignment problem is the key learning challenge in strategic settings, and the paper would deserve a higher score if the narrative were centered around the "surprising effectiveness of boundary alignment by DeLU networks". - Based on my understanding, the proposed techniques seem to be generalizable to problem setups beyond contract design (e.g., Stackelberg games, security games). It is unclear why the authors choose to focus on the contract design problem (or perhaps it is important that the contract space is unbounded?). I wonder if the authors have any comment on this. --- Reply to Comment 1.1.1: Title: Thanks for your valuable questions and comments! (Part 1) Comment: > (Detail 1) `# Training samples.` All the instances here are trained with 50K samples. > (Detail 2) _`Why is this approach reasonable: If the function is non-linear in some (>20%) directions, we mark the contract as on a boundary.`_ Since the principal's utility function is piecewise linear, the function exhibits linearity in the proximity of an interior point. Conversely, when a point lies on a boundary, there is a jump in utilities within its proximity, rendering it unable to pass a linearity test in some directions. Please note that we use exactly the same approach to check for both true and DeLU boundaries. &emsp; > (Detail 3) `The algorithm to check whether a contract is on a true/DeLU boundary` is given below.
__Algorithm__ Boundary alignment degree calculation __Input:__ True principal utility function $u$ and its DeLU approximation $\tilde{u}$ * $n_{true}=0$, $n_{ovlp}=0$&emsp;`Count # true and overlapped boundary points` * __for__ $k=1,\cdots,K$ __do__:&emsp;`For K random contracts` * $\mathbf{f}^k\leftarrow$ a uniform random contract * $n_{\text{true-nl}}=0$, $n_{\text{DeLU-nl}}=0$&emsp;`Count # directions in which the true and DeLU function is non-linear` * __for__ $n=1,\cdots,N$ __do__:&emsp;`for N random directions` * $\mathbf{d}^n\leftarrow$ a uniform random sample in $\mathbb{R}^{m}$ * $\mathbf{d}^n\leftarrow\delta\frac{\mathbf{d}^n}{||\mathbf{d}^n||}$&emsp;`Normalize` * __if__ $u(\mathbf{f}^k)+u(\mathbf{f}^k+2\mathbf{d}^n)\ne 2u(\mathbf{f}^k+\mathbf{d}^n)$:&emsp;`If the true utility function is non-linear` * $n_{\text{true-nl}}$+=1 * __if__ $\tilde u(\mathbf{f}^k)+\tilde u(\mathbf{f}^k+2\mathbf{d}^n)\ne 2\tilde u(\mathbf{f}^k+\mathbf{d}^n)$:&emsp;`If the DeLU utility function is non-linear` * $n_{\text{DeLU-nl}}$+=1 * __if__ $n_{\text{true-nl}}/N>\tau$:&emsp;`If there are many non-linear directions` * $n_{true}$+=1 * __if__ $n_{\text{DeLU-nl}}/N>\tau$:&emsp;`Check whether it is also on the DeLU boundary` * $n_{ovlp}$+=1 * return $n_{ovlp}/n_{true}$ &emsp; > (Comment 1) _`Whether the instances where alignment is bad coincides with the instances where "sub-"argmax substantially improves the performance.`_ Thanks for asking about this. In fact, we do generally observe that the performance improvement from the "sub-"argmax method is related to the boundary alignment degree (Table 5). __Table 5__. Performance improvement provided by "sub-"argmax, in decreasing order of boundary alignment, on instances with __(a)__ # actions=5, $\beta_p=0.9$, $\alpha_p=0.5$. 
|# outcomes|256|8|128|32|4|16|64| |-|-|-|-|-|-|-|-| |Boundary alignment degree (%)|93.89|85.99|84.49|84.16|73.80|46.42|33.45| |Performance improvement (%)|-0.80|+2.08|+0.46|+1.31|+6.27|+2.40|+12.36| __(b)__ # actions=5, $\beta_p=0.3$, $\alpha_p=0.9$. |# outcomes|4|128|32|8|64|256|16| |-|-|-|-|-|-|-|-| |Boundary alignment degree (%)|98.14|97.88|93.97|92.90|86.90|84.74|66.59| |Performance improvement (%)|+0.63|+0.03|+0.17|+0.77|+22.56|+3.63|+3.10| &emsp; > (Comment 2) _`I suggest the authors to include (and highlight) these new studies on boundary alignment in the next version of the paper.`_ We have found this exchange very useful, and agree that studying boundary alignment is informative. We plan to make the following changes: - Motivate and discuss the boundary alignment question after the lemmas in Sec. 3. - Analyze the influence of MSE loss on boundary alignment in Sec. 4.2. - Introduce the "sub-"argmax approach in Sec. 4.2.2. - Add a new experimental subsection studying boundary alignment. Incorporate a detailed description of the boundary check algorithm and experiments on the relationship between alignment degrees, DeLU performance, and improvement provided by "sub"-argmax.
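For concreteness, the midpoint linearity test at the heart of the boundary-check algorithm above can be sketched in Python; the 1-D piecewise-linear toy utility and all names here are ours, purely for illustration, and not the paper's implementation.

```python
import random

def is_on_boundary(u, f, n_dirs=200, delta=1e-3, tau=0.2, tol=1e-9, seed=0):
    """Flag f as (near) a boundary of u if the midpoint linearity test
    u(f) + u(f + 2d) == 2 * u(f + d) fails in more than a fraction tau
    of random directions d of length delta."""
    rng = random.Random(seed)
    nonlinear = 0
    for _ in range(n_dirs):
        d = [rng.gauss(0.0, 1.0) for _ in f]
        norm = sum(x * x for x in d) ** 0.5 or 1.0
        d = [delta * x / norm for x in d]
        a = u(f)
        m = u([fi + di for fi, di in zip(f, d)])
        b = u([fi + 2.0 * di for fi, di in zip(f, d)])
        if abs(a + b - 2.0 * m) > tol:
            nonlinear += 1
    return nonlinear / n_dirs > tau

def toy_u(f):
    # Toy piecewise-linear "utility" with a kink along f[0] = 1.
    return f[0] if f[0] < 1.0 else 2.0 * f[0] - 1.0
```

A point within about 2·delta of the kink fails the linearity test in a large fraction of directions, while interior points pass in all of them; substituting the true and DeLU utilities for `toy_u` gives the two boundary indicators compared in the overlap statistic of Table 1.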
Summary: This paper proposes a deep-learning approach for contract design. The rationale is: - the principal's utility function should be learned from the data - the problem of accurately approximating the utility function is non-trivial, and the utility function may be discontinuous --- suggesting the development of better models such as those proposed in the paper In addition to the model approximation problem, the authors improve the available algorithms for training and inference. Finally, the authors experimentally evaluate their paradigm along several dimensions. Strengths: The authors clearly posed the problem and the presentation is clear. Weaknesses: At the current stage, I have some doubts about the significance of the contributions, and I need the authors to help me better understand that issue. My feeling is that the machine-learning contribution is not very strong and that the work is primarily a slight variation of deep-learning tools applied to contract design. That is, from a machine-learning perspective, this paper does not provide substantial advancements. This is not necessarily a critical issue for the acceptance of the paper. Many papers just apply machine learning tools to design important applications. More importantly for me, the advancement from the contract design perspective is not really strong. Yes, the authors are providing better approximations, but the applicability of the results is not clear to me, e.g., due to the need for thousands of samples for training, and it is unlikely that such data are available. I agree with the authors that the main works done so far focus on online/bandit approaches and that the authors are providing a completely different perspective. While online/bandit approaches are directed at a small number of samples, deep learning does the reverse and, in my opinion, could require too large a number of samples for a good approximation.
Personally, I believe that something in the middle would be useful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: *Technicalities* - Are the techniques for training and inference provided by the authors important advancements (i.e., non-trivial, non-direct) over existing deep learning techniques? If yes, why and how? *Applicability* - Is my understanding correct that several thousand samples are necessary for training? If yes, do you believe that making thousands of queries to the principal is reasonable? (Using $10^6$ in the plot does not help, and I would suggest the authors use semi-log plots to show what happens with a small number of samples.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (Q1: Technicalities) _`Are the techniques for training and inference provided by the authors important advancements of the deep learning techniques?`_ A longstanding challenge in the deep learning community has been approximating discontinuous functions. While the Universal Approximation Theorem only guarantees the approximation of continuous functions, many crucial problems involve discontinuity, such as astronomy (e.g., solar flare imaging [1]) and mathematics (e.g., uncertainty quantification [2]). However, establishing a discontinuous network is not an easy task. Discontinuities were considered as early as the 1950s, when neural networks were first proposed with step activation functions [3]. Following this work, most models introduced discontinuity through the use of different discontinuous activation functions. However, optimizing these models is typically more challenging than with continuous activation functions [2], hindering the application of discontinuous networks. To the best of our knowledge, this paper presents the first discontinuous network architecture with continuous activation functions and stable optimization performance. As other reviewers stated, _"the proposed DeLU network and its concave counterpart are novel and worthy of studying further,"_ and _"the paper proposes a novel neural network architecture ... is very interesting and useful for many other applications in multi-agent learning."_ The proposed inference algorithm also provides a new approach to ReLU maximization. Previous work typically relies on mixed integer linear programming [4], while our method is gradient-based, scales well with input dimension and network size, and maps well onto GPU/TPU architectures. For economics, we expect that our exploration of discontinuous networks can draw attention to problems involving discontinuity.
For example, discontinuity also arises in mechanism design when agents, in effect, "choose" from a menu of options. We hope that research on discontinuous network architectures and optimization will contribute to advancing AI progress in computational economics. [1] Massa, P., Garbarino, S. and Benvenuto, F., 2022. Approximation of discontinuous inverse operators with neural networks. Inverse Problems, 38(10), p.105001 [2] Della Santa, F. and Pieraccini, S., 2023. Discontinuous neural networks and discontinuity learning. Journal of Computational and Applied Mathematics, 419, p.114678 [3] Rosenblatt, F., 1958. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6), p.386 [4] Tjeng, V., Xiao, K.Y. and Tedrake, R., Evaluating Robustness of Neural Networks with Mixed Integer Programming. ICLR 2018 &nbsp; > (Q2.1: Applicability) _`Is my understanding correct that several thousand samples are necessary for training? (... I would suggest the authors use semi-log plots to show what happens with a small number of samples.)`_ The number of training samples depends on the problem size. On relatively small problems (16 outcomes, 5 actions), using 100 samples is enough to achieve an optimality of 92.99%. For large problem sizes (50 actions, 32 outcomes), we need 5K samples to get a satisfactory optimality (94.67%). Fig. 2 in the PDF of the general response presents semi-log plots to show in detail how DeLU performance changes with small numbers of training samples. From a learning-theory perspective, recent work [5] gives exponential worst-case sample complexity bounds for learning an approximately optimal contract (implying severe difficulty when the number of outcomes is large); nevertheless, we show empirically that we can get good results with a relatively small number of samples.
We see this as a strength of the proposed framework, which is simple to implement, versatile, and general purpose. Getting a better theoretical understanding, beyond the worst case and in the offline setting of the present paper, is an open problem. [5] Zhu, B., Bates, S., Yang, Z., Wang, Y., Jiao, J. and Jordan, M.I., 2023. The sample complexity of online contract design. ACM EC 2023 &nbsp; > (Q2.2: Applicability) _If yes, do you believe that making thousands of queries to the principal is reasonable?_ We discuss this for the different applications to which our method can be applied. (1) The first application is as a tool for _theoretical economists_ to study contract design problems. In economic theory, one assumes a model of a world environment (e.g., actions, technology, costs, rewards, outcomes), and looks to understand the optimal designs. In this setting, thousands of queries is reasonable because we have a model of the world. (2) A second application comes from the _digital economy_, where we can expect to gain access to large training sets as we see increasing automation of economic processes (e.g., it doesn't seem too unlikely that we will soon have contracts for LLM-style actors, for example working to plan vacation details for a user with the possibility of contracting on outcomes) [6]. (3) A third application comes from settings where we can expect to have access to a simulator of agent behavior. In particular, a black-box simulator fits perfectly with our set-up, where we only need access to the induced utility to the principal for different contract designs. These simulations may be for automated agents in the digital economy. Another interesting development is the growing attention being paid to _generative models as simulators of human decision making and behavior_ (and, in the future, likely firm behavior) [7]. [6] Horton, J.J., 2023. Large language models as simulated economic agents: What can we learn from homo silicus? (No. w31122).
National Bureau of Economic Research [7] Park, J.S., O'Brien, J.C., Cai, C.J., Morris, M.R., Liang, P. and Bernstein, M.S., 2023. Generative agents: Interactive simulacra of human behavior. arXiv:2304.03442 --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I understand the peculiarity of the problem (learning non-continuous functions) and the technical advancement. I also appreciate the authors' reply about the training data. I agree that similar concerns can be found in other settings in which works on deep learning can commonly be found. However, I remain skeptical about the actual applicability in real-world applications. I raise my scores accordingly. --- Reply to Comment 1.1.1: Title: Thanks for your response! Comment: Thanks for your feedback! We are delighted that our response could address some of your concerns. We have found the review very helpful, and plan to make the following changes to our paper:

- Motivate and discuss the technical novelties from the perspective of deep learning (around Line 71).
- Update Figure 6 on Page 9, using semi-log plots to show DeLU performance with a small number of training samples. Also update the accompanying discussion in Sec. 5.
- Provide a more extensive discussion about real-world applications of the proposed method in a future work section.
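As a toy illustration of the Q1 discussion above (a network that is discontinuous even though every activation function is continuous), here is a hypothetical minimal sketch in NumPy. The tiny hand-picked weights and the single-hidden-layer shape are invented for illustration and are not the paper's actual DeLU implementation: the last-layer bias is produced by a second "mother" network that reads the binary activation pattern, so the output can jump between linear pieces.

```python
import numpy as np

# Hypothetical toy weights (invented for illustration, not the paper's).
W1 = np.array([[1.0], [-1.0]]); b1 = np.array([0.0, 1.0])  # main hidden layer (input dim 1)
w2 = np.array([1.0, 0.0])                                  # main output weights
Wb = np.array([[1.0, 1.0]]);  bb = np.array([-1.0])        # "mother" hidden layer
wb = np.array([3.0])                                       # "mother" output weights

def delu(f: float) -> float:
    h = np.maximum(W1 @ np.array([f]) + b1, 0.0)   # continuous ReLU features
    r = (h > 0).astype(float)                      # binary activation pattern r(f)
    bias = wb @ np.maximum(Wb @ r + bb, 0.0)       # bias depends (non-linearly) on r
    return float(w2 @ h + bias)
```

Within each region of constant activation pattern the function is affine; at f = 0 the first unit's pattern bit flips, the mother network changes the bias from 0 to 3, and the output has a jump discontinuity, despite every activation being a continuous ReLU.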
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewers for providing exceptionally high-quality and insightful reviews. Your thoughtful insights and valuable suggestions have significantly enriched our work. Thank you for your time, effort, and commitment, and we look forward to addressing your further comments during the discussion period. Here in the general response, we show the following figures in the attached PDF:

- Figure 1: Optimality (normalized principal utility) of (1) _the best sample in the training dataset_, (2) DeLU, (3) ReLU, and (4) a direct LP solver (“Oracle LP”), for contract-design problems with increasing sizes.
- Figure 2: The performance of the DeLU network trained with _different numbers of training samples (log scale)_ on three sizes of problems (# actions, # outcomes). The median performance as well as the first and third quartile (shaded area) of 5 combinations of $(\alpha_p, \beta_p)$ are shown.

Pdf: /pdf/f40bbc24ec6ffdda9ae5c68add3732e94ac20253.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces an automated optimal contract design method from offline data using deep learning. In this setting, a principal designs a contract establishing an agreement on the payments the principal will make for the outcomes arising from the actions of an agent. Given a contract, the agent privately selects an action to maximize its expected utility under an action-outcome transition kernel. The principal, unaware of the agent's actions and the transition kernel, aims to maximize its own expected utility. The authors propose the use of neural network function approximation to learn the principal's utility function from offline samples. Due to the piecewise-discontinuous nature of the principal's utility function and the continuous nature of existing neural networks (e.g., ReLU), the authors propose the Discontinuous ReLU (DeLU) network to model the principal's utility function as a discontinuous piecewise affine function, where each piece represents a particular action taken by the agent. Moreover, the authors present a computationally efficient inference method for contract design based on an interior-point method. Finally, the paper empirically evaluates the validity and the performance of the methods presented here on synthetic data. Strengths: 1) Originality: Noting that I am not quite familiar with this field, this is, based on the authors' claims, the first use of neural-network learning of utility functions for contract design. The authors not only combine two existing ideas, but also innovate a new neural network architecture, as the existing models are not capable of capturing the critical discontinuities in the utility function. Moreover, they propose an efficient implementation of the inference computations by utilizing highly parallelized existing deep learning frameworks. They also propose an alternative concave neural network architecture. 2) Quality: The claims made here are supported by sound theoretical and empirical evidence.
3) Clarity: Even though I am out of the field, it was very easy to follow the paper. The problem and its motivation are explained very clearly. The theory and the experimental results are stated neatly and discussed properly. It was a pleasure to read this paper. 4) Significance: Again, since this is outside my expertise, it is less obvious to me why this problem should be studied. The examples provided by the authors as well as a quick Google search were helpful to clarify this. Aside from the real-world significance, the proposed DeLU network and its concave counterpart are novel and worthy of studying further. Weaknesses: I don't have much to say here. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have some questions for the authors out of curiosity. 1) How easy is it to extend your method to contracts with a mixed payment-penalty structure (i.e., negative values for f)? 2) Is there a sequential version of the contract design problem with cumulative utility maximization? I am thinking of a case where the principal is also deemed responsible for its actions in the contract design and there is a dynamic interaction between the agent and the principal. 3) Is there a specific reason to pick a mother neural network to model the bias term? It'd be more straightforward to write the bias term as a linear function of the activation pattern $ b^{L+1}(\mathbf{f}) = \sum_{l=1}^{L} \mathbf{r}_l(\mathbf{f})^T \mathbf{b}_l $. 4) What are some real-world applications where offline learning with previously collected data could be feasible and reasonable? Are these types of data readily available in practice? 5) Does it require a significantly new treatment to extend this to the multi-agent (single-principal) case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations, societal considerations and future works are all discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thought-provoking review, which prompts us to think more deeply about the applicability and extension of our work. > (Q1) _`How easy is it to extend your method to contracts with mixed payment-penalty structure?`_ For this, we can consider the case $\mathbf{f}\ge-c$, for a positive constant $c$ (with no limitation on how much the agent can pay the principal, there is a trivial optimal solution: charging the agent the entire expected social welfare). Everything then goes through immediately, with contracts sampled subject to $\mathbf{f}\ge-c$, and the non-negativity constraints in the inference replaced with $\mathbf{f}\ge-c$. &nbsp; > (Q2) _`Is there a sequential version of the contract design problem with cumulative utility maximization?`_ This is an interesting question, and sequential versions of the contract design problem are a classic topic in the economics literature; e.g., the seminal paper [1], see also [2]. Studying these models computationally is an interesting direction for future work. In the future, if (deep) (multi-agent) reinforcement learning is explored to study more comprehensive or larger-scale sequential problems, we believe the proposed DeLU architecture can contribute by approximating the Q function to estimate the long-term value the principal can expect with a contract. Such a Q function is discontinuous, with the agent's response a sequence of discrete actions. [1] Holmstrom, B. and Milgrom, P., 1987. Aggregation and linearity in the provision of intertemporal incentives. Econometrica, pp.303-328. [2] Zhang, H. and Zenios, S., 2008. A dynamic principal-agent model with hidden information: Sequential optimality through truthful state revelation. Operations Research, 56(3), pp.681-696.
&nbsp; > (Q3) `Is there a specific reason to pick a mother neural network, instead of a linear function, to model the bias term?` The reason for using a second network is that the bias does not always depend linearly on the activation pattern. Here is an example to illustrate this. There are two outcomes with values $\mathbf{v}=[20,1]$, four actions with costs $\mathbf{c}=[1.0,2.1,2.3,4.7]$, and the action-outcome transition kernel is$$P=\begin{bmatrix}0.211&0.789\\\\0.398&0.602\\\\0.430&0.570\\\\0.684&0.316\end{bmatrix}.$$ Suppose we consider linear contracts, where $\mathbf{f}=\alpha\mathbf{v},\alpha>0$. Then the principal's utility function for different contracts is$$u^p(\alpha)=\begin{cases}-5\alpha+5&0.2<\alpha<0.3\\\\-8.57\alpha+8.57&0.3<\alpha<0.4\\\\-9.17\alpha+9.17&0.4<\alpha<0.5\\\\-14\alpha+14&\alpha>0.5\\\\ \end{cases}.$$Suppose that we have a 2-dimensional activation pattern, and the linear function converting activation patterns to the bias has parameters $[b_1,b_2]$. Then the bias for each of the four pieces would be $0$, $b_1$, $b_2$, and $b_1+b_2$, respectively. The difference between each piece's bias needs to model the discontinuity at contract parameter $\alpha=0.3,0.4,0.5$, but this is impossible with this linear model. To see this, we first assume that the piece $0.2<\alpha<0.3$ has bias 0. Then the differences of biases of the other 3 pieces would need to be 2.5, 2.86, and 5.28, which cannot be achieved with $b_1$, $b_2$, and $b_1+b_2$. It can be easily verified that the cases where other pieces have a bias of 0 are similar, demonstrating that a linear bias function cannot express the discontinuity. By contrast, appealing to a second network allows for non-linear dependency on activation, and can handle this problem. &nbsp; > (Q4) _`What are some real world applications where offline learning with previously collected data could be feasible and reasonable? 
Are these types of data readily available in practice?`_ First of all, an application that motivates us is that of developing a tool for theoretical economists to study contract design. In economic theory, one typically assumes a model of a world environment (e.g., actions, technology, costs, rewards, outcomes), and looks to understand the optimal designs. In this setting, offline data is reasonable because we have a model of the world. In regard to real-world, practical applications: (1) One application comes from the _digital economy_, where we can expect to gain access to training sets as we see increasing automation of economic processes (e.g., it doesn't seem too unlikely that we will soon have contracts for LLM-style actors, for example working to plan the details of a vacation for a user with the possibility of contracting on outcomes) [3]. (2) Another application comes from settings where we can expect to have access to a simulator of the behavior of agents. In particular, a black-box simulator fits perfectly with our set-up, where we only need access to the induced utility to the principal for different contract designs. These simulations may be for automated agents in the digital economy. Another interesting development is the growing attention being paid to generative models as simulators of _human decision making and behavior_ (and, in the future, likely firm behavior) [4]. As these models are developed, methods to optimize on top of them will become important, and we expect our method to be useful in regard to contract design. [3] Horton, J.J., 2023. Large language models as simulated economic agents: What can we learn from homo silicus? (No. w31122). National Bureau of Economic Research [4] Park, J.S., O'Brien, J.C., Cai, C.J., Morris, M.R., Liang, P. and Bernstein, M.S., 2023. Generative agents: Interactive simulacra of human behavior.
arXiv:2304.03442 &nbsp; > (Q5) _`Does it require a significantly new treatment to extend this to multi-agent (single principal) case?`_ Not a significantly new treatment, as long as we have access to the behavior model of the system of agents. This would need to resolve, for example, questions regarding computing or observing equilibrium behavior, perhaps also involving tie-breaking across multiple possible equilibria. --- Rebuttal Comment 1.1: Comment: Thank you very much for addressing my questions as well as the other reviewers' concerns. Based on all the other reviews and the authors' responses, I'm in favor of maintaining my score. --- Reply to Comment 1.1.1: Title: Thanks a lot for your response! Comment: We would like to express our gratitude for the reviewer's valuable comments and inputs. We find the questions really helpful in improving the quality of our work. Specifically, we plan to make the following changes to our paper:

- Discuss why we use another network to generate the bias of the last layer, instead of a linear transformation (around Line 204).
- Provide an extensive discussion about the real-world applications of the proposed method, and whether previously collected data is feasible in these applications (in a future work section).
- Discuss possible extensions of our work to multi-agent scenarios, sequential contract design, and mixed payment-penalty settings in the Limitation section.
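The Q3 counterexample above can also be checked numerically. Below is a minimal sketch using the values, costs, and transition kernel from that example; the agent best-responds to a linear contract $\mathbf{f}=\alpha\mathbf{v}$, and the principal keeps the $(1-\alpha)$ share of the realized value.

```python
import numpy as np

# Values, costs, and action-outcome kernel from the Q3 example above.
v = np.array([20.0, 1.0])
c = np.array([1.0, 2.1, 2.3, 4.7])
P = np.array([[0.211, 0.789],
              [0.398, 0.602],
              [0.430, 0.570],
              [0.684, 0.316]])

def principal_utility(alpha: float) -> float:
    R = P @ v                            # expected value of each action
    a = int(np.argmax(alpha * R - c))    # agent's best response to f = alpha * v
    return (1.0 - alpha) * R[a]          # principal keeps the (1 - alpha) share
```

Sweeping alpha reproduces the piecewise-affine, discontinuous utility from the example: near alpha = 0.3 the agent switches from the first action to the second, and the principal's utility jumps upward across that boundary.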
Learning from Active Human Involvement through Proxy Value Propagation
Accept (spotlight)
Summary: The work introduces Proxy Value Propagation (PVP), an approach to reinforcement-learning without ground-truth rewards by instead using human intervention as a signal of quality. The value of a state-action pair is determined by a human intervention, where interventions signal that the agent's behavior was bad (and we assume that the human's action was good). Compared to naive RL approaches or other human-intervention methods, such as HG-DAgger, PVP learns self-driving policies much faster and with fewer crashes than all other approaches. The work is also tested with a user-study, gathering demonstration data from real humans. Strengths: Technical contribution: * The proposed method works very well empirically, outperforming baselines with and without human intervention. * The relative contribution of different components of the PVP approach are tested with an ablation study in the CARLA domain. * Experiments cover several domains including continuous/discrete actions, different input mechanisms, and different state observations. * The proposed approach specifically considers balance between human/agent labels, making sure to consider human preferences even when there are relatively few demonstrations/corrections. Experiments: * The experiments are clearly described and appropriately test the proposed PVP method against baselines. * Human users are recruited and the human-intervention method is tested with real users, not synthetic/heuristic labeling. Novelty: * The paper compares well against prior work and the related works section covers most important previous approaches to the problem. * The experiments compare against several baselines in prior work. Clarity: * The method is intuitive and easy to follow. Figures and tables clearly communicate the results. * The supplementary material provides additional detail where useful (hyper-parameters, control schemes, etc.). Reproducibility: * Code is provided to reproduce all experiments. 
Weaknesses: Technical contribution: * Prior work in human-labeling and intervention has found that human labelers are slow to react, and this time-delay must be accounted for (e.g., in [1], the authors needed to use eligibility traces to account for _which_ agent action was incorrect). It is strange that this did not surface as a problem in these studies. * The proposed treatment of agent vs. human Q-values could result in serious policy degradation if the human labeler makes mistakes or if the demonstrator is sub-optimal. * Fusion of PVP with environment reward resulted in _worse_ performance. * Removing epsilon-greedy exploration from DQN effectively forces humans to do the exploration for PVP, which seems to work well in the driving simulator, but could be an issue in other domains. User study: * Information on the study is lacking. How many participants? What about background in computer science or driving? Mean age? What sorts of instructions were provided? Were participants allowed to practice first? These details can be in the supplement if they are not critical to the analysis of the algorithm, but they should be included for a user study. Novelty: * The approach seems very similar to [1], which treats human labels as indications of advantage instead of simply as reward (as in DeepTAMER, which is cited in the paper). While comparison to [1] is perhaps not necessary (as the paper compares thoroughly to several state-of-the-art approaches), [1] should at least be mentioned. * The authors should be aware of other approaches to human-intervention labeling (e.g., [2], which also considered driving feedback, or [3], which performs BC and then RL). [1] James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, and Michael L. Littman. 2017. Interactive learning from policy-dependent human feedback. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (ICML'17). JMLR.org, 2285–2294. [2] Mariah L.
Schrum, Erin Hedlund-Botti, Nina Moorman, and Matthew C. Gombolay. 2022. MIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI '22). IEEE Press, 157–165. [3] Cheng, Ching-An, Xinyan Yan, Nolan Wagener and Byron Boots. “Fast Policy Learning through Imitation and Reinforcement.” Conference on Uncertainty in Artificial Intelligence (2018). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Is there any drawback to the proposed approach to action labeling when compared to something like COACH [1]? Should PVP be considering the time-delay of human feedback? 2. Line 187-188 states that the novice-policy (agent behavior) contains information of the forward dynamics. What does this mean? 3. The addition to the CQL loss is likened to an L2 regularization on the Q-values from the agent, but later experiments adding this L2 penalty to CQL seem to perform much worse than PVP. Why is there a discrepancy between the two? Shouldn't CQL + L2 == PVP, as in Equation 6? 4. It seems that PVP relies heavily on humans doing the initial exploration. Would something like LOKI [3] perform comparably, or is the value propagation part of the key to success? 5. Please address the questions listed above concerning the user study. Additional detail should be provided. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors list and acknowledge several limitations. Mitigations to these are left to be the subject of future work. Societal impact of the work is not explicitly addressed, though an ethics statement does appropriately address IRB for the study. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! --- > **W1:** *Prior work in human-labeling and intervention has found that human labelers are slow to react, and this time-delay must be accounted for. Strange that this did not surface as a problem in these studies.* **Re:** Indeed, we do not consider the time-delay in the current method. The problem is mitigated because PVP learns from active human involvement: if the agent learns a time-delayed policy, human subjects will correct such behaviors in later training. The eligibility traces in [1] allow feedback to affect earlier decisions. The TD loss in our system serves a similar role in that it can propagate proxy values (considering them an implicit form of human feedback, as they are not hard-coded to $\pm 1$ for each action) to previous steps. --- > **W2:** *The proposed treatment of agent vs. human Q-values could result in serious policy degradation if the human labeler makes mistakes or if the demonstrator is sub-optimal.* **Re:** In our empirical studies, we found that mistakes made by the human labeler can indeed impact the policy during the early stages of training. However, this degradation is usually smoothed out as training progresses, with correct human demonstrations mitigating the impact of early errors. Due to resource limits, we did not perform a quantitative analysis of how the level of noise (or the quality of demonstration) affects policy learning. As we show in the analysis in Appendix D, Theorem D.3, the training-time performance is bounded by the human's performance in providing actions and interventions. The quality of demonstration and intervention does affect the performance of PVP. --- > **W3:** *Fusion of PVP with environment reward resulted in worse performance.* **Re:** As presented in the one-page PDF, the reward function we used for driving tasks has 4 terms: the displacement reward, the speed reward, the collision penalty, and the termination reward.
However, as we discussed in the experiment section, the environmental reward might not necessarily align with human preference. Human subjects usually demonstrated more fine-grained behaviors, such as decelerating before a crossroad or maintaining a safe distance from the vehicles in front. Those behaviors might contradict the speed reward, as it encourages high speed. Such misalignment between reward functions and human preference leads to worse performance. --- > **W4:** *Information on the human subject study is lacking.* **Re:** Thank you for the advice! Section 1 in the one-page PDF discusses the requirements for human subjects, the onboarding period, and the information / advice we provide to the human subjects. We summarize the answers to your concerns:

* We recruit 5 human subjects.
* All of them are college students, with an average age of 21.4 years.
* They go through an onboarding procedure before the main experiments, during which they can practice the tasks with different control devices.
* The participants are informed of the objectives: driving safely to the destination and ensuring the behavior aligns with traffic regulations and human preferences.
* The participants are encouraged to intervene whenever they perceive the vehicle might be in a dangerous situation, in violation of traffic rules, or in any scenario where they feel they would not behave the way the novice policy does.

--- > **W5:** *The approach seems very similar to [1]. [1] should at least be mentioned. Should be aware of other approaches to human-intervention labeling.* **Re:** Thank you for bringing up the insightful works [1, 2, 3]. We will discuss these papers in the revision. Compared to COACH [1], PVP accepts not only the feedback (the intervention signal) but also the human demonstration. Besides, PVP uses a TD loss to build a proxy value function instead of using the feedback directly to train the policy.
However, PVP does not consider the time-delay of human subjects explicitly, as COACH does. --- > **Q1:** *Is there any drawback when compared to something like COACH [1]? Should PVP be considering the time-delay of human feedback?* **Re:** We do not consider the time-delay explicitly in the current method, and this is indeed a limitation. However, PVP has two features that can mitigate this issue. The eligibility traces in COACH allow feedback to affect earlier decisions. The TD loss in our system serves a similar role in that it can propagate proxy values (considering them an implicit form of human feedback, as they are not hard-coded to $\pm 1$ for each action) to previous steps. Meanwhile, PVP trains a deterministic novice policy, so its behavior is consistent over a short period of time. Human subjects can thus more easily determine the intention of the novice policy and intervene accordingly, compared to DAgger or HACO, where there is stochasticity in the novice policy. Therefore, the time-delay of human feedback might not be a large issue. Last but not least, as you can see in the human subject research protocol, the objective of the driving task, as we told the human subjects, is to arrive at the destination safely while behaving in a human-like way. We don't require the human subjects to drive as fast as possible. Thus, the requirement of real-time response is relaxed. --- > **Q2:** *The addition to the CQL loss is likened to an L2 regularization on the Q-values from the agent, but experiments adding the L2 penalty to CQL seem to perform much worse than PVP. Why is there a discrepancy between the two? Shouldn't CQL + L2 == PVP, as in Equation 6?* **Re:** There might be a misunderstanding. In the ablation study section (Line 339 and Table 4), we conduct experiments to evaluate the performance of the pure CQL objective (without regularization) while the other PVP designs remain the same.
This experiment shows that pure CQL is not applicable to this active human involvement setting. We will modify the wording to address the misunderstanding. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their time and their thorough response. The TD loss could serve as a way of "backing-up" value estimates given that the agent uses a deterministic driving policy, but this assumes that the deterministic policy is consistent through time and not sensitive to subtle variations in the input state (which is not guaranteed, particularly as the agent moves to new maps). Further, it is likely that interventions will occur in dynamic/unpredictable states (e.g., during a lane-merge, a turn, or a changing traffic light), and a naive back-up of value might lead to undesirable behavior in these states (e.g., assigning higher value to acceleration while waiting at a red light). --- Rebuttal Comment 1.2: Title: Acknowledgement of rebuttal Comment: Thanks for the detailed response and clarifying any misunderstandings I may have had. I also read through the responses to other reviews. I still think this is a very strong paper and will keep my score at Strong Accept.
Summary: The paper proposes an approach for learning from human interventions based on offline RL with pseudo-rewards. The paper assumes a shared autonomy setting where a human monitors an agent's rollouts, can intervene whenever deemed necessary, and can provide corrective demonstrations. The algorithm relabels those interventions with positive Q-value targets and the faulty policy actions that led to intervention with negative Q-targets. It then trains the policy on these pseudo-rewards with offline RL. In extensive experiments on multiple driving simulators with differing human input devices, the approach leads to faster and more successful learning of desired behaviors than prior RL and learning-from-human-intervention approaches. Strengths: - The paper's approach is elegant and simple; it leverages common offline RL techniques in combination with pseudo-reward labels and, due to its simplicity, seems easily applicable to other domains. - The empirical evaluation of the approach is comprehensive, with numerous baselines spanning the relevant related works, multiple environments, and many meaningful ablations. - The paper is well-written and easy to understand. It has intuitive visualizations for illustrating the approach, evaluation environments, and qualitative results. Weaknesses: (A) **Concerns about training objective**: the training objective sets the **Q-value targets** of the human intervention trajectories to +1 and the Q-value targets of the policy actions in those intervention trajectories to -1. I have a few concerns about this objective: (1) by setting the Q-value target instead of setting the reward, the objective does not allow reward propagation through these updates, e.g. from post-intervention states. At the same time the algorithm runs "regular" Q-updates with full propagation and adds both objectives. Why not set **rewards** instead of Q-targets to indicate good and bad behavior and only run the regular Q-update on all data samples?
That seems more elegant than the current two-part objective. (2) The current objective assigns low Q-value to all policy actions during human intervention. However, the human intervention only provides a signal that the sequence of policy actions **before** the intervention was bad. It would for example be possible that the human "over-intervenes" and keeps providing demonstrations in states in which the policy's actions are already good again. In these cases the current objective would penalize good policy actions. (B) **Only navigation environments**: while the experimental evaluations are comprehensive, they are limited to navigation environments (driving and grid-world navigation). It would strengthen the paper to show the same algorithm can work for teaching robot manipulation tasks, e.g. by performing teleoperation in a simulated robotic environment. (C) **Unclear human user experiments**: all experimental evaluations are performed with real humans in the pipeline. While this is generally positive, it means special care needs to be taken to ensure fair comparability between all methods, since different users may interact with systems differently, they learn and adapt to systems over time, etc. Thus, human user studies require great care in execution and documentation, e.g. ensuring that the same users interact with all methods and baselines (to account for differences between individual users), that the order of methods is randomized, and that all users are given some time to practice with the system (to account for adaptation effects). The paper does not provide any details in this regard for the quantitative evaluations in Tables 1, 2 and 4. It would be important to provide more details on the design of all experiments involving human users to ensure fair comparison. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: (Q1): How could the proposed method be combined with pre-trained policies or ground truth rewards?
--> the current method requires lots of human teaching, it would be beneficial if human teaching was only used to *supplement* autonomous agent learning from some external source of supervision. The one experiment in the paper that does combine it with external rewards seems unsuccessful -- how can this be fixed? (Q2): How can the proposed method be extended to situations where humans cannot directly provide demonstrations? --> e.g. for an ant robot it is impossible for humans to provide demonstrations (the authors mention this shortcoming of the current approach in the limitations section). I am wondering whether you have any ideas how this could be extended to situations like the ant robot? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - add more details about human experiment design - add a (simulated) robotics manipulation environment - clarify concerns about the proposed objective ## Summary of Review Overall, I think this is a strong paper. It combines a novel, yet simple and elegant approach with an extensive evaluation and comparison to numerous prior works. While I would like the authors to address the points raised in my review, I am happy to accept the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Please find our responses as follows. --- ### Weaknesses > **(A1)** *Why not set rewards instead of Q-targets?* Please refer to the response to W1 of Reviewer pvGG. > **(A2)** *The current objective assigns low Q-value to all policy actions during human intervention. However, human intervention only provides a signal that the sequence of policy actions before the intervention were bad. It would be possible that the human "over-intervenes" and keeps providing demonstrations in states the policy's actions are already good again. In these cases the current objective would penalize good policy actions.* **Response:** Let's suppose the "over-intervention" happens, which means the human's action is identical to the agent's action, $a_n = a_h$. At this step, PVP will try to assign a high proxy value to $Q(s, a_h)$ (regressing to $+1$) and a low proxy value to $Q(s, a_n)$ (regressing to $-1$). However, because $a_h = a_n$, the two objectives cancel each other up to a constant: $ \min\, (Q(s, a_h) - 1)^2 + (Q(s, a_n) + 1)^2 + \text{TD Loss} \equiv \min\, 2Q^2(s, a) + 2 + \text{TD Loss} $ Under this circumstance, the proxy value function still prefers this action: the agent actions that actually triggered intervention have much lower values (near $-1$), while such over-intervened good actions have values close to $0$. --- > *(B) Only navigation environments: while the experimental evaluations are comprehensive, they are limited to navigation environments (driving and grid world navigation). It would strengthen the paper to show the same algorithm can work for teaching robot manipulation tasks, e.g. by performing teleoperation in a simulated robotic environment.* **Response:** We totally agree! It is a promising future direction to apply PVP to robot manipulation tasks.
Robot manipulation poses a huge challenge to existing RL algorithms, since fine manipulation tasks such as threading a zip tie or juggling a ping-pong ball are very hard to characterize with a reward function without substantial engineering effort. The proposed PVP method can learn from human demonstrations and corrective feedback in an online manner. Experimenting on robot manipulation tasks will be our immediate future extension. We plan to utilize the teleoperation hardware developed in a recent work [A] to conduct the real-world experiment. [A] Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware. RSS’23 --- > *(C) Unclear human user experiments: all experimental evaluations are performed with real humans in the pipeline. While this is generally positive, it means special care needs to be taken to ensure fair comparability between all methods. Thus, human user studies require great care in execution and documentation, eg ensuring that the same users interact with all methods and baselines (to account for differences between individual users), the order of methods is randomized and all users are given some time to practice with the system (to account for adaptation effects) etc. It would be important to provide more details on the design of all experiments involving human users to ensure fair comparison.* **Response:** Thank you for your advice! We provide a one-page PDF on the human subject research protocol in the common rebuttal, where we discuss the requirements for human subjects, the onboarding period, and the information / advice we provide to the human subjects. Regarding your concern about proficiency biases, we do randomize the order of experiments with different control devices, tasks, and training algorithms for each participant. Therefore we can average out the "adaptation effects". --- ### Questions > *(Q1): How could the proposed method be combined with pre-trained policies or ground truth rewards?
--> the current method requires lots of human teaching, it would be beneficial if human teaching was only used to supplement autonomous agent learning from some external source of supervision. The one experiment in the paper that does combine it with external rewards seems unsuccessful -- how can this be fixed?* **Response:** With the recent successes of large decision models such as Robotic Transformer 2 (RT-2), pretraining the policy on large datasets is a promising approach to further improve the proposed method. We haven't conducted an experiment on this yet. We do have an ablation study that uses both the environmental reward and human demonstration at the same time. Because the two supervision signals are provided simultaneously, they might not be fully aligned, which confuses the model. For example, humans might want to slow down before a crossroad, while the speed reward in our reward function encourages the agent to move as fast as possible. In future work, we can experiment with a natural idea: a two-stage training scheme. We first pretrain the policy with the ground-truth reward or imitation learning, and in the second stage we incorporate the proposed PVP method to finetune the policy to improve its preference alignment, such as improving the safety-critical performance. --- > *(Q2): How can the proposed method be extended to situations where humans cannot directly provide demonstrations? --> e.g. for an ant robot it is impossible for humans to provide demonstrations (the authors mention this shortcoming of the current approach in the limitations section). I am wondering whether you have any ideas how this could be extended to situations like the ant robot?* **Response:** We can learn a goal-conditioned policy first, which also connects to your first idea of using pretraining. We will then conduct the human-AI shared control in the second stage: we can invite human subjects to give demonstrations in the goal space, instead of the low-level action space.
PVP can thus learn a high-level control policy from human demonstration and intervention that produces high-level goals for the low-level controller. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal! Comment: Thank you for your response. I am generally happy with how it addresses my comments. I appreciate the explanations for my concern (A2) and the inclusion of details regarding the human user studies! I am still a bit unsure about question (A1), i.e. reward vs. value targets. How is this different from off-policy RL, where we need to learn from transitions collected with a different policy? Additionally, the failure case mentioned seems like it could be avoided by giving some strong penalty for human involvement that outweighs the benefits of the following good actions? This would encourage the policy to avoid interventions but still follow the good actions after? I would appreciate the authors' insight on this question, but am in general happy to recommend acceptance for the submission.
Summary: This paper focuses on human-in-the-loop, reward-free policy learning, wherein a human has the option to override the policy by taking over control when the agent attempts risky behaviors. To solve this problem, this work presents Proxy Value Propagation (PVP), which directly assigns positive values (+1) to the actions induced by the human interruption and negative values (-1) to the actions the agent takes in specific states. By manipulating the values, PVP encourages the agent to approximate human behaviors while avoiding the actions intervened by the human. Strengths: - The problem is well-motivated and the setting is realistic. - The empirical results suggest PVP outperforms a set of baselines involving imitation learning and other human-in-the-loop methods in terms of multiple measurements. Furthermore, the experiments with human subjects are convincing and extensive. - The proposed method is simple and efficient. Weaknesses: One major concern is that the technique of directly manipulating the value seems to be not well-motivated. As claimed in lines 178-180, the optimal policy should approximate the behaviors of human subjects while avoiding performing the intervened actions. Starting from this motivation, two natural ways are 1) assigning high rewards to human actions and low rewards to intervened policy actions, and 2) constraining the policy by manipulating the likelihood of the specific actions. Since the value function induced by TD learning measures the long-term performance starting from specific state-action pairs, directly assigning +1 or -1 to the value function seems to tell the agent that it tends to succeed or fail starting from those s-a pairs. However, the human might only take control at some discrete time steps, and the subsequent behaviors after the interruption are unpredictable. Thus, the natural design is to assign manufactured rewards (+1/-1) to the specific transitions from my perspective.
Though I believe that directly manipulating the value and propagating through TD-learning might be a better choice, it is worth comparing with the naive baseline mentioned above and further discussing the advantages of PVP. Though learning with online human interruptions is quite an interesting setting, it is pretty idealized to me, as online learning is risky and expensive even with a human guardian in several domains (e.g., robotics). In terms of the driving case investigated in the paper, I believe that offline learning with human interruption datasets might be more realistic and interesting given numerous in-the-wild datasets. However, I would agree the investigation is out of scope concerning the current paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - If the reward function is set to zero, why can $Q=\sum r_t$ be set to $+1$ or $-1$? - For two adjacent time steps in an episode, if both $Q(s_1,a_1)$ and $Q(s_2,a_2)$ are set to 1, how can this satisfy the Bellman equation $Q(s_1,a_1) = \gamma \max_a Q(s_2,a)$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Please find our responses as follows. --- ### Weaknesses > **W1:** *One major concern is that the technique of directly manipulating the value seems to be not well-motivated. As claimed in lines 178-180, the optimal policy should approximate the behaviors of human subjects while avoiding performing the intervened actions. Starting from this motivation, two natural ways are 1) assigning high rewards to human actions and low rewards to intervened policy actions, and 2) constraining the policy by manipulating the likelihood of the specific actions. Since the value function induced by TD learning measures the long-term performances starting from specific state-action pairs, directly assigning +1 or -1 to the value function seems to tell the agent that it tends to succeed or fail starting from the s-a pairs. However, humans might only take control at some discrete time steps, and the subsequent behaviors after the interruption are unpredictable. Thus, the natural design is to assign manufactured rewards (+1/-1) to the specific transitions from my perspective. Though I believe that directly manipulating the value and propagating through TD-learning might be a better choice, it is worth comparing with the naive baseline mentioned above and further discussing the advantages of PVP.* **Response:** Though the idea of assigning reward +1 to the human's actions and -1 to the intervened agent actions is straightforward, it doesn't work in practice. This is because, in the framework of value-based RL, the reward participates in the learning via the TD loss: $ J^{\text{TD}}(\theta) = \mathbb{E}_{(s, a, s')} \big| Q_\theta(s, a) - (r(s, a) + \gamma \max_{a'} Q_{\hat{\theta}}(s', a')) \big|^2 $ The Bellman backup is conducted on the transition triplet $(s, a, s')$, where the current reward plus the discounted future value serves as an estimator of the current value.
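The TD loss above can be sketched in a few lines. This is an illustrative toy (tabular random Q-values, invented shapes and transition, not the paper's implementation); `Q_frozen` stands in for the target network $Q_{\hat\theta}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.99
Q = rng.normal(size=(n_states, n_actions))   # online Q-network (tabular toy)
Q_frozen = Q.copy()                          # frozen target network Q_theta_hat

# A stored transition (s, a, r, s'): s' was produced by executing a,
# so the reward r must belong to that same behavior action.
s, a, r, s_next = 0, 1, 1.0, 2

td_target = r + gamma * Q_frozen[s_next].max()   # r(s,a) + gamma * max_a' Q(s',a')
td_loss = (Q[s, a] - td_target) ** 2
print(round(float(td_loss), 4))
```

The point of the sketch is that the TD target can only be formed for the transition that was actually stored: the next state `s_next` belongs to the behavior action `a`, which motivates the argument that follows.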
Therefore, the reward must correspond to the action from the behavior policy, i.e. the action $a$ that causes the transition from $s$ to $s'$. In our context, during human involvement, the action $a = a_h$ must come from the human policy, as the human subject is taking control: $s' \sim \mathcal P(s, a_h)$. Thus the reward will be $+1$ for those human-involved transitions. We therefore have no way to "assign low rewards to intervened policy actions $a_n$", because we would need to know the future state $s'' \sim \mathcal P(s, a_n)$ caused by this action in order to compute the TD target. However, we only have the future state caused by the human action. It is also not practical to query the environment for the next state $s''$, as $a_n$ is potentially a dangerous / undesired action and there is no way to replay it in the environment. In our preliminary experiment, we tried this idea and the policy failed to learn anything, no matter how much human involvement was provided. The reason is that the reward is always $+1$ for the transitions that occur during human involvement, and the learning agent found a pitfall to maximize its reward: it always demonstrated undesired behaviors, e.g. making a sharp left turn toward the road edge, so that the human would always take control, which yields +1 reward. --- > **W2:** *Though learning with online human interruptions is quite an interesting setting, it is pretty ideal to me as online learning is risky and expensive even with a human guardian in several domains (e.g., robotics). In terms of the driving case investigated in the paper, I believe that offline learning with human interruption datasets might be more realistic and interesting given numerous in-the-wild datasets. However, I would agree the investigation is out of scope concerning the current paper.* **Response:** We agree that offline learning is a very important setting that is worth a lot of effort.
The active human involvement tackled in this submission is an orthogonal research setting and has its unique value in safety-critical systems. In the development and deployment of learning-based systems, such as self-driving cars, human-AI shared control is a common scenario, as there is usually a human driver sitting behind the wheel prepared to take over control. Our proposed method can effectively leverage the human data collected during such shared control. Offline learning can help augment PVP, especially when we want to scale up the proposed method. We can introduce a pretraining stage before the human-AI shared control in PVP. By doing so, the pretrained base policy learns basic skills from in-the-wild datasets, reducing the human cost while still achieving human alignment in the later stage of training with PVP. --- ### Questions > **Q1:** *If the reward function is set to zero, why $Q = \sum r_t$ can be set to +1 or -1? For two adjacent time steps in an episode, if both $Q(s_1, a_1)$ and $Q(s_2, a_2)$ are set to 1, how can it satisfy Bellman equation as $Q(s_1, a_1) = \gamma \max_a Q(s_2, a)$?* **Response:** The proxy value assignment is an auxiliary learning objective, not a hard constraint on the proxy value function. This is because, apart from the PVP loss, we also have the TD loss, and thus the proxy value might not regress exactly to $\pm 1$. In the context of this work, the proxy value function is not an estimator of the discounted return, as we don't have a reward function at all. Instead, the proxy value function measures human preference and can be used to select human-preferable actions by applying the argmax rule, $a = \arg\max_a Q(s, a)$. That is to say, the proxy value function can induce the desired behavior, while it may not necessarily hold a mathematical interpretation such as being an estimator of the discounted return.
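How the $\pm 1$ proxy targets and the argmax rule interact can be shown with a toy tabular example. This is a hedged sketch only (one state, three hypothetical actions, plain gradient descent; the TD loss is omitted and all numbers are invented):

```python
import numpy as np

n_actions = 3
q = np.zeros(n_actions)   # proxy Q-values for one state
lr, steps = 0.1, 200

# Suppose the human demonstrated action 1 (target +1) while the agent's
# intervened action was action 2 (target -1). Action 0 was never labeled,
# so only the (omitted) TD loss would move it; here it stays at 0.
for _ in range(steps):
    q[1] -= lr * 2 * (q[1] - 1.0)   # gradient of (Q - 1)^2
    q[2] -= lr * 2 * (q[2] + 1.0)   # gradient of (Q + 1)^2

print(q.round(2))      # -> [ 0.  1. -1.]
print(int(q.argmax())) # -> 1: the argmax rule picks the human-preferred action
```

Even though these values never pass through a Bellman backup, greedy action selection over them recovers the human-preferred action, which is the sense in which the proxy value "induces the desired behavior" without being a discounted-return estimator.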
--- Rebuttal Comment 1.1: Title: Response Comment: Given the response of the authors, I do think this paper is promising and reasonable. Nevertheless, I also think the mathematical interpretation of this method should be improved in future work.
Summary: This paper introduces a reward-free approach termed Proxy Value Propagation (PVP) to facilitate safe and human-aligned reinforcement learning (RL). PVP assigns high Q values to human actions (the state-action pairs in the human demonstration) and low Q values to agent actions that necessitate human intervention. The authors conduct experiments across diverse environments, offering a comparison with existing RL and human-in-the-loop (HiL) techniques, as well as the results of user studies. Strengths: 1. This paper is well-written and readable, and provides detailed appendices and code to supplement the paper's algorithms, experiments, and implementation details. 2. This paper presents a reward-free approach to improve learning from human active involvement, effectively utilizing all data generated during the human intervention, including both agent- and human-generated data. 3. The proposed method can be extended to various value-based RL methods. Weaknesses: The training process of PVP seems to rely heavily on human involvement, whether through observation or intervention, which can be quite costly, especially when scaling up to more complex scenarios. However, this may not be an issue if the cost is lower than designing safe rewards and the performance is superior to basic RL methods. Nonetheless, the experimental results and settings in the paper leave me puzzled, and I am unable to determine if the results sufficiently support the conclusions. 1. I am unsure if the experimental setup for Base RL Methods is fair. Why is the Episodic Return so high, but the Success Rate is lower than that of humans? For instance, GT Sophy [1], a race car AI trained using Deep RL by Wurman, Peter R., et al., has already surpassed human champions. Additionally, Imamura, Ryuji, et al. [2] mention that "all agents learn the policy in approximately 400 epochs" and "its score places it among the top 10% approximately 28,000 human players."
Can the authors provide more detailed descriptions of the rewards for Base RL Methods? Although the reward settings are briefly described in the appendix, the calculation methods and weights for each reward are not listed. Unreasonable rewards may affect the performance of Base RL Methods and the fairness of comparisons. 2. PVP can be considered as providing dense training signals to the agent, as the agent's training relies on human judgment for every (s, a) pair. Even if the action is not intervened, it still depends on human judgment, thus providing a positive and accurate training signal, leading to faster convergence. For Base RL Methods, can designing more reasonable dense rewards and properly adjusting the weight of crash penalties achieve the same effect? Based on the evaluation metrics, human preference seems to be focused on reducing crashes. Can manually designed rewards also express human intentions and preferences? The authors should compare the cost of manually crafting the reward with the cost of human intervention, which could highlight the advantages of PVP. 3. I also have doubts about the experimental results for BC and GAIL. What was their training data? Why do human demos have a 97% success rate, while BC and GAIL have success rates of less than 1%? Are there any errors in the experiments? 4. I am also puzzled about the user study results in Table 3. According to the appendix, the scores can only be 1, 2, 3, 4, or 5, with an upper limit of 5 points, and PVP has an average score of 4.8. Why is there a standard deviation of 0.5 when the scores are so high? Are there cases with very low scores, and did the authors analyze these low-score cases in detail? Bad cases can be potential safety hazards. 5. The statement about PVP with reward results is unevidenced: "which might be caused by the fact that the native reward function might not be aligned with human preference."
Success rate is a metric of agent capability, so why can't an agent's capability be improved if it is not aligned with human preferences? Is there evidence to prove that RL cannot surpass humans in this task [1, 2]? 6. Additionally, the description of human subjects is insufficient. What is the proficiency of the human subjects in the environments? Were they all novices or professionals? Was a standard test specification or guide provided before the test to ensure consistency in test objectives? 7. Have the authors compared the repetition rates of $(s, a)$ and $(s, a_n)$ in the Novice Buffer and Human Buffer? This might be closely related to the reliability of human subjects, and learning may not be meaningful for unreliable humans. 8. How much variation is there in $(s, a_n)$ among different human subjects? When participants have different styles, how should this be handled? For example, if some participants think the current $(s, a)$ is reasonable, but others think it needs intervention, how should this sample be treated? [1] Wurman P R, Barrett S, Kawamoto K, et al. Outracing champion Gran Turismo drivers with deep reinforcement learning. Nature, 2022 [2] Imamura R, Seno T, Kawamoto K, et al. Expert Human-Level Driving in Gran Turismo Sport Using Deep Reinforcement Learning with Image-based Representation, 2021 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the revision recommendations in the "Weaknesses" section. The authors are encouraged to address the aforementioned concerns, provide more detailed experimental information, and incorporate these insights into subsequent revisions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This paper partially discussed the limitations, but there may still be some ethical concerns. It is worth considering whether the proposed method could be potentially abused by malicious human subjects, leading the AI to learn in an incorrect manner. Can malicious human preferences be detected in a timely manner, and the data be promptly removed? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! --- ### Weaknesses > **W1:** *I am unsure if the experimental setup for Base RL Methods is fair. Why is the Return so high, but the Success Rate is lower than humans? For instance, GT Sophy has already surpassed human champions. Can the authors provide more detailed descriptions of the rewards?* **Re:** First of all, we should emphasize that the essence of this work is to use online human interaction to replace heavy reward engineering in achieving human-AI alignment. It is true that through sophisticated reward engineering for a specific task, like what [A] did, the resulting agent can achieve outstanding performance, but it requires extensive domain knowledge and an enormous amount of experimentation. Even with a well-defined reward function, the training safety and efficiency of RL remain to be solved. We demonstrate the success of our method across different tasks with high sample efficiency and safety under human guidance, without even access to the environmental reward at training time. **The detailed descriptions of the rewards.** Please refer to the one-page PDF for the detailed formulations of the rewards, which contain four parts: the displacement reward $R_{disp}$, a dense reward encouraging the agent to move toward the destination; the speed reward $R_{speed}$; the collision reward $R_{collision}$; and the terminal reward $R_{term}$. This reward function is dense, includes a collision penalty, and encourages reaching the destination. **Why do human subjects have a higher success rate but lower return?** As can be seen from our reward function, a high return is possible when the RL agents drive fast but fail before reaching the destination. On the contrary, human subjects might not adopt such high velocity, in order to retain more control and make sure the destination is reached.
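The additive composition of the four reward terms named above can be illustrated with a sketch. The actual formulas and weights are given in the rebuttal PDF; every coefficient below is an invented placeholder, not the paper's configuration:

```python
def driving_reward(displacement, speed, crashed, reached_goal,
                   w_disp=1.0, w_speed=0.1, crash_penalty=5.0,
                   terminal_bonus=10.0):
    """Hypothetical composition of R_disp + R_speed + R_collision + R_term;
    all weights are illustrative placeholders."""
    r = w_disp * displacement                    # dense progress toward destination
    r += w_speed * speed                         # encourages higher velocity
    r -= crash_penalty if crashed else 0.0       # collision term
    r += terminal_bonus if reached_goal else 0.0 # terminal term
    return r

print(round(driving_reward(0.5, 3.0, crashed=False, reached_goal=False), 2))  # -> 0.8
```

The sketch also illustrates the return/success-rate mismatch discussed above: a fast agent accumulates large speed and displacement terms even in episodes that end in a crash before reaching the goal.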
**Why can't RL agents surpass human performance in our experiments?** We do not have a heavily engineered reward function, and we do not fine-tune the weights of different reward terms for different tasks like [A] did, which is expensive. On the other hand, even if we had a well-engineered reward function for each task that could potentially lead to a super-human policy, the challenges of training-time safety and training efficiency would still need to be addressed. Our proposed work addresses all of these challenges. --- > **W2:** *For RL Methods, can designing more reasonable dense rewards and adjusting the weight of crash penalties achieve the human-like behaviors?* **Re:** Even though the reward function we use is much simpler than that of [A], the $R_{disp}$ is a dense reward for task completion and $R_{collision}$ accounts for crash penalties, so together they form a reasonable reward function. It is possible that grid-searching over all possible reward weights could lead to better agents. However, such sophisticated reward engineering would need to be performed for each new task and could cost huge computational resources. Meanwhile, the training-time safety and training efficiency of RL algorithms remain to be solved. This paper focuses on achieving training-time safety, training efficiency, and the important AI alignment problem, not only on obtaining a high-performing policy through reward engineering. --- > **W3:** *The statement about PVP with reward results is unevidenced. Is there evidence to prove that RL cannot surpass humans in this task [A, B]?* **Re:** The reward function in [A] consists of 8 terms, and the weights of these 8 terms are configured differently for different driving courses (Extended Data Table 1 in [A]). The well-defined reward function and the careful adjustment of reward weights, together with other techniques in distributed training, create the huge success of this Nature paper.
Compared to [A], the reward function we used is much simpler and might not be capable of fully capturing human preferences. For example, human subjects usually exhibit more fine-grained behaviors, such as decelerating before a crossroads or maintaining a safe distance from the vehicle in front. We believe that adopting a task-specific reward function with sufficient weight tuning could let the RL agent surpass humans in our tasks. We want to point out that, even though we believe reward engineering can eventually solve the tasks used in this work and might outperform human players as in [A, B], the training inefficiency and the possible risk during training remain concerning. In this paper we want to demonstrate the idea that active human involvement can better communicate human preferences to the learning system while ensuring learning efficiency and safety. --- > **W4:** *Doubts about the experimental results for BC and GAIL. Are there any errors in the experiments?* **Re:** Dataset size is a critical factor for imitation learning baselines. Table 1 in the submission presents the test performance of the BC and GAIL agents trained on 30K steps collected by human subjects. To further address your question, we ran a CQL baseline on the same 30K-step human dataset. Moreover, we used a trained RL agent as the expert to generate 250K transitions and trained the baselines on this 250K-step dataset. As shown in the table below, these baselines can achieve decent performance with sufficiently large datasets. Therefore, there are no errors in the implementation behind Table 1; it is the insufficient training data that causes the failure of BC and GAIL. 
| | Episodic Return | Episodic Cost | Success Rate | |---|---|---|---| | BC (30K) | 113.3 | 2.17 | 0.07 | | GAIL (30K) | 81.5 | 1.30 | 0.0 | | CQL (30K) | 156.4 | 1.71 | 0.21 | | BC (250K) | 362.1 | 0.13 | 0.57 | | GAIL (250K) | 309.6 | 0.68 | 0.60 | | CQL (250K) | 373.9 | 0.24 | 0.72 | --- [A] Outracing champion Gran Turismo drivers with deep reinforcement learning [B] Expert Human-Level Driving in Gran Turismo Sport Using Deep Reinforcement Learning with Image-based Representation --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, some of my main concerns still remain unresolved. Q4: **The reliability of the user study results** in Table 3. According to the appendix, the scores can only be 1, 2, 3, 4, or 5, with an upper limit of 5 points, and PVP has an average score of 4.8. Why is there a standard deviation of 0.5 when the scores are so high? I think the results are lacking in plausibility. Could the authors provide more detailed experimental results and a reasonable explanation? Q7&Q8: I have not seen relevant responses yet. --- Reply to Comment 1.1.1: Title: Response to the follow-up questions Comment: Thank you for your follow-up response. --- > **Q4:** *The reliability of the user study results in Table 3. According to the appendix, the scores can only be 1, 2, 3, 4, or 5, with an upper limit of 5 points, and PVP has an average score of 4.8. Why is there a standard deviation of 0.5 when the scores are so high? I think the results are lacking in plausibility. Could the authors provide more detailed experimental results and a reasonable explanation?* **Response:** For the item in the user study that has the mean 4.8 and the standard deviation 0.5 (Compliance, and Performance of PVP), the raw scores are typically 4, 5, 5, 5, 5. Given these scores, the average becomes $\frac{4+5+5+5+5}{5} = 4.8$ and standard deviation becomes $\sqrt{ \frac{0.8 ^2 + 0.2 ^ 2 + 0.2^2 + 0.2 ^2 + 0.2^2}{5 - 1} } = 0.447$. 
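As a sanity check, the mean/std arithmetic above can be reproduced with Python's standard `statistics` module (a small illustrative snippet, using the raw scores stated above):

```python
from statistics import mean, stdev

# Raw Likert scores behind the 4.8 / 0.5 entry discussed above
scores = [4, 5, 5, 5, 5]

avg = mean(scores)  # (4 + 5 + 5 + 5 + 5) / 5 = 4.8
sd = stdev(scores)  # sample standard deviation, with n - 1 = 4 in the denominator

print(avg, round(sd, 3))  # 4.8 0.447
```

Note that `stdev` uses the sample (n - 1) denominator, matching the formula above; the result rounds to 0.4 at one decimal place.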
We will update the STD in Table 3 from 0.5 to 0.4, reflecting the correct rounding. --- > **Q7:** *Have the authors compared the repetition rates of (s, a) and (s, an) in the Novice Buffer and Human Buffer? This might be closely related to the reliability of human subjects, and learning may not be meaningful for unreliable humans.* **Response:** We haven't compared the repetition rates of $(s, a_n)$ and $(s, a_h)$ in the novice buffer and the human buffer, since for continuous state and action spaces it is hard to evaluate "repetition". As discussed in the human subject protocol, during the onboarding procedure the human subjects become familiar with the environments and tasks, and they can complete several episodes independently before stepping into the main experiments. Therefore we assume the human subjects are not "unreliable". --- > **Q8:** *How much variation is there in (s, an) among different human subjects? When participants have different styles, how should this be handled? For example, if some participants think the current (s, a) is reasonable, but others think it needs intervention, how should this sample be treated?* **Response:** For each experiment, there is only one human participant interacting with the system. Therefore there is no cross-participant data stored in the replay buffers when training a novice policy. Policy learning from different human subjects is an important topic for scaling up the proposed system and is not yet considered in this work.
Rebuttal 1: Rebuttal: In this common rebuttal, we provide a one-page PDF containing two sections: 1. Human subject research protocol: including information on human subject recruitment, the onboarding procedure, the information we provide to subjects, the main experiment, and the questionnaire. We will update Appendix B to include the newly added content. 2. Reward function for driving tasks: describing the detailed reward function we used. It contains four terms: a dense displacement reward that measures progress toward the destination, a speed reward, a collision penalty, and a termination reward. Pdf: /pdf/48cc271bc692c2574c84067184a42fce555e6c75.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present Proxy Value Propagation for policy optimization. They model a proxy value function so that human intents receive high value (i.e. actions the human has inputted as corrective actions over a policy) while on-policy agents have low values for actions that caused an intervention. When optimized via TD-learning, they are able to stitch together labeled values of demonstrated state-action pairs with unlabeled data from agent exploration (claimed). They execute the proposed algorithm on a number of environments (e.g. MiniGrid, MetaDrive, CARLA and the Grand Theft Auto 5 driving task) and show significant improvements over baselines. Strengths: - Human-in-the-loop (with active human involvement) is one of the few tried-and-tested methods to fine-tune imitation learning / RL methods on complex real-world domains. Improvements in this field are actually quite impactful to practitioners deploying these systems in production (unlike some other methods that rarely see use outside of the academic setting). To this end the problem is well posed and of interest to the community. - PVP is a conceptually simple algorithm. I'm pretty sure this paper is self-contained and I have everything I need to re-implement it. - The workflow of the algorithm is also practical. I can imagine using this algorithm in a real system with the scale properties such a system would require. Weaknesses: - nit: "It is also compatible with different forms of human control devices, including gamepad, driving wheel, and keyboard" - True of baselines (and many other methods) as well? Or alternatively, this seems like an obvious point. - nit: line 128: "It is unrealistic to invite a real human subject to involve in such training." There are many real-world uses of Dagger. Be careful with overgeneralization. - I'm not sure the claim that unlabeled (s, a) pairs are stitched together with human expert data is all that well experimentally justified. 
Or put another way, if the formulation has some equivalency to CQL (with an additional L2 regularization term), in the limit that these regularizers are extremely strong, the method will just look like behavioral cloning. To claim that there is value propagation to unlabeled states I would want to see experiments to confirm this (because in my experience CQL does not actually stitch together sub-optimal trajectories as is often claimed). - Figure 2 results are quite compelling. Although you're comparing a random-exploration based RL method (and a pretty vanilla one at that) against a human-in-the-loop method. This is apples to oranges. I'd also be curious to see what performance looks like against state-of-the-art offline RL or supervised imitation learning in these domains (there's some of this in Table 1, but not in this earlier teaser image). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - A collection of human demonstrations is assumed (line 143), however it's still not overly clear to me how this is used. Maybe this is already clear in the text but which Q function is actually pre-trained with this data? Presumably there needs to be some overlap between the agent and human preferences initially, otherwise every state would induce a human intervention? Maybe spell out this warm-up procedure in more detail. - It's also not entirely clear under what conditions the human is told to intervene? In continuous and high-rate action and state spaces (particularly in driving applications), there is not one specific action that causes failure. Are humans instructed to let the agent diverge from an optimal trajectory by a specific metric amount (e.g. half a lane in the case of self driving)? Or is it up to the human operator under what conditions they would intervene? Is the method sensitive to such human ambiguity? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: First NeurIPS paper where I can comfortably say the limitations section is well done :-) I have nothing else to add. A societal impact statement doesn't seem necessary for this application / paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Please find our responses as follows. --- ### Weaknesses > **W1:** *nit: "It (the proposed PVP method) is also compatible with different forms of human control devices, including gamepad, driving wheel, and keyboard" - True of baselines (and many other methods) as well? Or alternatively, this seems like an obvious point.* **Response:** Different control devices correspond to different action spaces and different human interaction behaviors. For instance, the steering wheel enables a smoother continuous action space compared to the keyboard’s discrete action space by pressing certain keys. These variations pose a generalizability issue for many baseline algorithms. For example, in Appendix F.1, we show that HACO struggles with the gamepad compared to the steering wheel. On the contrary, PVP demonstrates consistent performance across all tested control interfaces. This highlights PVP's potential for real-world applications where control methods can vary significantly. --- > **W2:** *nit: line 128: "It is unrealistic to invite a real human subject to be involved in such training." There are many real-world uses of Dagger. Be careful with overgeneralization.* **Response:** Thanks for pointing this out. Real human experiments are expensive, but are not unrealistic. For example, HG-Dagger has been used to train policies in a real car. We will change the wording accordingly, from ‘unrealistic’ to ‘expensive’. --- > **W3:** *I'm not sure the claim that unlabeled (s, a) pairs are stitched together with human expert data is all that well experimentally justified. Or put another way, if the formulation has some equivalency to CQL (with an additional L2 regularization term), in the limit that these regularizations are extremely strong, the method will just look like behavioral cloning. 
To claim that there is value propagation to unlabeled states I would want to see experiments to confirm this (because in my experience CQL does not actually stitch together sub-optimal trajectories as is often claimed).* **Response:** Section 5.4 (Table 1 "PVP w/o TD") presents an ablation study showing that removing the TD loss greatly damages learning performance. This gives evidence that "there is value propagation to unlabeled states", since if we only use the PVP loss, there will be no value propagation to unlabeled states. --- ### Questions > **Q1:** *A collection of human demonstrations is assumed (line 143), however it's still not overly clear to me how this is used. Maybe this is already clear in the text but which Q function is actually pre-trained with this data? Presumably there needs to be some overlap between the agent and human preferences initially otherwise every state would induce a human intervention? Maybe spell out this warm-up procedure in more detail.* **Response:** We do not assume that a collection of offline human demonstrations exists beforehand. Therefore, we do not conduct pretraining or warm-up based on held-out human demonstrations. As clarified in the human subject research protocol (please see the attachment in the common rebuttal), human subjects will provide several episodes of demonstration with full control at the beginning. PVP is efficient enough to learn a rudimentary policy for humans to interact with after a few minutes of training. This can be seen in the supplementary video. --- > **Q2:** *It's also not entirely clear under what conditions the human is told to intervene? In continuous and high rate action and state spaces (particularly in driving applications), there is not one specific action that causes failure. Are humans instructed to let the agent diverge from an optimal trajectory by a specific metric amount (e.g. half a lane in the case of self driving)? 
Or is it up to the human operator under what conditions would they intervene?* **Response:** It is up to the human operator under what conditions they would intervene. We provide a one-page PDF on the human subject research protocol in the common rebuttal for more details. We will update Appendix B to include the newly added content. During the onboarding period, the participants will be briefed about the objectives of all driving tasks: to safely maneuver the vehicle to its designated destination while ensuring the vehicle's operation aligns with traffic regulations and human preferences. The participants will practice each task with different control devices. The human subjects will intervene whenever they perceive the vehicle might be in a dangerous situation, in violation of traffic rules, or in any scenario where they feel they would not behave the way the novice policy does. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for taking the time to respond to the comments. Dear Reviewer e73G, After reading the authors' response, do you have any additional thoughts? Best, AC
null
null
null
null
null
null
Triple Eagle: Simple, Fast and Practical Budget-Feasible Mechanisms
Accept (poster)
Summary: The paper studies the design of budget-feasible mechanisms: a buyer wants to purchase from a set of potential sellers with different production costs which are private information, and the goal is to maximize the total value subject to a budget constraint. The main result is a new framework that achieves (1) good approximation to the first-best and (2) low pricing complexity, roughly measured by the number of questions each seller has to answer. The authors conduct experiments that show the new methods outperform existing methods on influence maximization tasks. Strengths: The problem is very well established and practically relevant. The paper is well written and technical ideas are clearly explained. The paper makes solid progress for the problem by proposing a new design framework, which might trigger further progress. The technical contribution appears to be nontrivial. Weaknesses: While I like the paper overall, one minor complaint is it doesn't say much about lower bounds. Of course this is largely due to the intrinsic complexity of the BFM problem. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: (also including detailed comments here) Line 117, "... as simple as a posted-pricing mechanism ...": maybe this will become clear later, but do you mean you offer a single price to each seller, and the seller can possibly be selected only if they accept the price? It might help to be explicit about the connection to and difference from posted pricing early on. Line 150, "... because otherwise we can ...": I guess this would add another pricing query to each seller? Is this why you keep writing O(1) rather than 1? Last paragraph in Section 6: it would be nice to explicitly comment on the tradeoff between queries and quality of solution for all 3 (families) of methods. The impression I got is RTM makes much fewer queries at the cost of worse quality of solution, while IP returns decent solutions at the cost of much more queries. 
(And then of course TER / TED has the best performance in terms of both.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment: While I like the paper overall, one minor complaint is it doesn't say much about lower bounds. Of course this is largely due to the intrinsic complexity of the BFM problem.** Response: Thanks a lot for your comments! For BFMs with additive valuation functions, Singer \[35] has proposed a lower bound of 2, and Chen et al. \[12] have proposed two improved lower bounds of 2 (for randomized BFMs) and $1+\sqrt{2}$ (for deterministic BFMs). Since submodular functions generalize additive functions, these lower bounds also apply to BFMs with submodular objectives. In Lines 34-36 of Section 1.1, we have listed the lower bounds of Chen et al. \[12]. To the best of our knowledge, since the work of Chen et al. \[12] in SODA’11, no other papers have provided any further lower bounds for BFMs with submodular objectives, which remains an interesting open problem that we will study in future work. We will add more discussions about this. **Question: Line 117, "... as simple as a posted-pricing mechanism ...": maybe this will become clear later, but do you mean you offer a single price to each seller, and the seller can possibly be selected only if they accept the price? It might help to be explicit about the connection to and difference from posted pricing early on.** Response: Nice question! You are correct. We have tried to briefly explain the differences between posted pricing and clock auctions in Lines 100-102 due to space constraints, and we will follow your suggestion to revise these lines to make the differences clearer. Here is a detailed explanation: Suppose that each user is offered one price in a clock auction. 
Then the only difference between this clock auction and any posted-pricing mechanism is how they treat a seller u who accepts the offered price: it is mandatory to select u as a winner in the posted-pricing mechanism, while u can be either a winner or a loser in the clock auction (this implies that the clock auction is allowed to use the observation of all the sellers’ behaviors at the end of the auction to decide whether u should be selected as a winner). However, from the perspective of pricing complexity, these two mechanisms are the same. That’s why we say “…virtually as simple as…”. **Question: Line 150, "... because otherwise we can ...": I guess this would add another pricing query to each seller? Is this why you keep writing O(1) rather than 1?** Response: Thanks a lot! You are right. As claimed in Lines 149-151, we have adopted the assumption made in \[9, 12] that no user’s cost is larger than B (our mechanism offers only one price to each user under this assumption). As you see, Lines 149-151 also suggest a simple method to remove this assumption by using an additional query to each seller. So we write O(1) instead of 1 for rigorousness. Actually, there is another simple method to remove the assumption in Lines 149-151 that uses at most 1 additional query in total. Specifically, since each user is guaranteed to be offered a price no more than B in our main algorithm (which rules out every seller with a cost larger than B), we only need to identify a valid $v^*$ before running our algorithm. This can be done by first sorting all the sellers according to the non-increasing order of their values, and then offering B to them one by one until the first seller accepts B; that seller is clearly a valid $v^*$. After that, we delete all the sellers rejecting B and run our algorithm. Clearly, using such a method, $v^*$ is eventually queried at most twice and any other seller is queried once. 
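A minimal sketch of this preprocessing step (illustrative only; the helper `accepts(u, p)`, standing in for seller u's response to posted price p, and all other names are our own assumptions, not the paper's pseudocode):

```python
def find_v_star(sellers, f, B, accepts):
    """Identify a valid v* with at most one price query per seller.

    sellers: iterable of seller ids; f(u): publicly known value of seller u;
    B: the budget; accepts(u, p): True iff seller u accepts posted price p.
    """
    v_star, remaining = None, []
    for u in sorted(sellers, key=f, reverse=True):  # non-increasing value
        if v_star is None:
            if accepts(u, B):        # first seller to accept the budget B
                v_star = u
                remaining.append(u)  # v* may be queried again later (<= 2 queries)
            # sellers rejecting B are simply deleted
        else:
            remaining.append(u)      # not yet queried; main algorithm queries once
    return v_star, remaining
```

The main mechanism would then be run on `remaining`, so every seller other than `v_star` still receives at most one price query in total.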
We have omitted this method for conciseness and would add a discussion on it if needed. **Question: Last paragraph in Section 6: it would be nice to explicitly comment on the tradeoff between queries and quality of solution for all 3 (families) of methods. The impression I got is RTM makes much fewer queries at the cost of worse quality of solution, while IP returns decent solutions at the cost of much more queries. (And then of course TER /TED has the best performance in terms of both.)** Response: Thanks a lot! You are correct. We will fully follow your suggestions to add these comments. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I have no further questions. --- Reply to Comment 1.1.1: Comment: You are very welcome!
Summary: This paper proposes a novel technique for designing budget-feasible mechanisms (BFMs), which improves the state-of-the-art results on approximation guarantees and query complexity simultaneously. In particular: * With monotone submodular functions, the newly proposed mechanisms improve the approximation from 4.75 to $\frac{\sqrt{13}+5}2$ (randomized BFM) and $2+\sqrt{6}$ (deterministic BFM) and improve the query complexity from $O(n^2 \log n)$ to $O(n)$; * With non-monotone submodular functions, the newly proposed mechanisms improve the approximation from $14+6\sqrt{5}$ to 12 and improve the query complexity from $O(n \log n)$ to $O(n)$. The key innovation of the technique is to set query prices as a decreasing function of the value of the set of sellers who have accepted the offered prices so far. $$\mathsf{price}(u) = \mathsf{budget} \cdot \frac{\mathsf{marginal ~ value}(u | \mathsf{accepted ~ sellers})}{\mathsf{value}(\mathsf{accepted ~ sellers}) + \alpha \cdot \max_{s}\mathsf{value}(s)}.$$ With such adaptive price queries, each seller is queried with a price at most once. The final winners are then a subset of the sellers who accept the offers. One key advantage of this approach is that the query complexity becomes $O(n)$, while at the same time the approximation guarantees are also improved with careful analysis and proper optimization of the parameter $\alpha$. Finally, the empirical evaluation shows that the actual performance of the newly proposed mechanisms indeed dominates the benchmarks on both optimality and the number of queries. Strengths: * A significant improvement with a novel technique on a widely studied problem. * Simultaneous improvements on both approximation guarantee and query complexity. * Solid analysis proving the approximation ratios. Weaknesses: * No empirical evaluation for the non-monotone submodular function case. (Addressed after author response) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. 
Is the current pricing formulation optimized? 2. Will querying the sellers in the order of their values be helpful? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No specific limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
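To make the adaptive pricing rule quoted in the review summary concrete, here is a hedged sketch of one pricing pass (the value oracle `f`, the rational-seller cost model, and all names are illustrative assumptions; the winner-selection step that follows in the actual mechanisms is omitted):

```python
def adaptive_price_round(sellers, f, costs, budget, alpha):
    """Offer each seller one adaptively computed price: O(n) pricing complexity.

    f: frozenset -> value, assumed monotone submodular; `costs` is used here
    only to simulate each (rational) seller's accept/reject decision.
    """
    best_single = max(f(frozenset([u])) for u in sellers)
    accepted, offers = set(), {}
    for u in sellers:  # exactly one price query per seller
        A = frozenset(accepted)
        marginal = f(A | {u}) - f(A)
        price = budget * marginal / (f(A) + alpha * best_single)
        offers[u] = price
        if costs[u] <= price:  # a rational seller accepts any price >= her cost
            accepted.add(u)
    return accepted, offers
```

Because the denominator grows as more value is accepted, later sellers are offered proportionally smaller prices, which is the decreasing-price behavior the summary describes.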
Rebuttal 1: Rebuttal: **Comment: No empirical evaluation for the non-monotone submodular function case.** Response: Thanks a lot for your comment! Due to the page limits, the empirical evaluation for non-monotone submodular objectives is provided in the supplemental file (Appendix F), and we have mentioned this in Lines 312-315 in Section 6 of the main file. **Question: Is the current pricing formulation optimized?** Response: Thanks a lot for your question! We have carefully optimized the performance of our pricing method by adjusting the parameter $\alpha$ (e.g., see Lemma 3, Lemma 6, and Theorems 1-3) to get better ratios than the existing studies. We have also listed in Lines 34-36 the currently best-known lower bounds for BFMs with submodular objectives (which were proposed by Chen et al. \[12]), i.e., 2 for randomized BFMs and $1+\sqrt{2}$ for deterministic ones. To the best of our knowledge, since the work of Chen et al. \[12] in SODA’11, no other papers have provided any further lower bounds for BFMs with submodular objectives. It is interesting to make further optimizations for our algorithms to narrow the gaps between our performance ratios and the lower bounds in \[12] (or provide tighter lower bounds than \[12]), which will be our future work. We will follow your comments to provide more discussions about this. **Question: Will querying the sellers in the order of their values be helpful?** Response: Great question! Choosing an order sounds like a good idea. In fact, we had carefully thought about it before our submission, but it seemed that this idea could hardly help due to the following observations. In the traditional problem of submodular maximization with a knapsack constraint, good performance relies on selecting the elements strictly according to the non-increasing order of the ratios of marginal value to cost (e.g., see \[Maxim Sviridenko 2004] “A note on maximizing a submodular set function subject to a knapsack constraint”). 
However, in our setting the costs of the sellers are unknown and can even be falsely reported due to the sellers’ strategic behaviors, so the strict order mentioned above cannot be ensured no matter how we order the elements based on their values. Even if we use a sealed-bid auction and solicit the cost information from the sellers to achieve the strict order mentioned above (as done in \[12,21,22,35]), we have to trade off the approximation ratio for ensuring that the sellers report their true costs, and that’s why the sealed-bid auction approaches in \[12,21,22,35] all have worse approximation ratios than ours. Moreover, choosing an order results in super-linear complexity, which could be unsuitable for large markets. Due to the above considerations, we abandoned the ordering idea to ensure linear-time complexity while still achieving better performance ratios than SOTA. --- Rebuttal Comment 1.1: Comment: Thank you for your response! --- Reply to Comment 1.1.1: Comment: You are very welcome!
Summary: The paper considers the problem of designing budget-feasible mechanisms, in which the designer has a budget B, agents have private costs, and to each subset S of agents is associated a reward. The goal is to design a mechanism that incentivizes agents to truthfully reveal their private costs and allows the designer to choose a subset S of agents whose total cost is at most B and that maximizes the designer's reward. The paper provides both deterministic and randomized mechanisms for this setting, both for monotone submodular reward functions and for non-monotone submodular reward functions. All these mechanisms improve the approximation of the chosen subset S with respect to the optimal choice. Moreover, they achieve these results with a faster and "simpler to understand" algorithm, in which agents are posted a price, they may accept or refuse the price, but the actual set of selected agents will be computed only at the end among the agents that did not refuse a price. Strengths: The problem is interesting and received a lot of attention recently in our community. The proposed algorithms improve over the previous ones. Weaknesses: None Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Can you stress the differences and similarities between your mechanisms and sequential posted-price mechanisms by Chawla et al., 2010? Do you believe that a specific order of processing in algorithm 3 may help the performance of the mechanism? E.g., since f(u) are known (from the computation of v*), you may use this information to discard the "worse" agents. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question: Can you stress the differences and similarities between your mechanisms and sequential posted-price mechanisms by Chawla et al., 2010?** Response: Thanks a lot. Nice question! We guess that \[Chawla et al., 2010] refers to the STOC’10 paper “Multi-parameter Mechanism Design and Sequential Posted Pricing”. We think that this is an excellent paper, but it has studied a totally different problem. First, \[Chawla et al., 2010] considers a “forward auction” problem where a seller (without a budget) needs to maximize the revenue by selling a product to a group of buyers, while we consider a “reverse auction” problem where a buyer (with a budget) needs to buy some products from a group of sellers. Please note that forward auctions and reverse auctions are two separate lines of research in auction theory, and that is why our problem was raised in FOCS’10 \[35] after \[Chawla et al., 2010]. Second, a major difference is that \[Chawla et al., 2010] considers a “Bayesian setting” where the distributions of the preferences of the players are known, while we consider a “prior-free” setting where there is no prior knowledge about the players. The only similarity between \[Chawla et al., 2010] and our work is that some prices are offered to players, but \[Chawla et al., 2010] use the “posted-pricing” scheme where each seller who accepts the offered price must be immediately selected as a winner, while we consider “clock auctions” where a seller who accepts the offered price is not necessarily selected as a winner. Such a difference in pricing rules would lead to very different algorithm design and performance analysis even under the same problem. In summary, although \[Chawla et al., 2010] is a great paper, it belongs to a different line of research and their techniques cannot be applied to our problem. We will cite \[Chawla et al., 2010] and discuss it. 
**Question: Do you believe that a specific order of processing in Algorithm 3 may help the performance of the mechanism? E.g., since f(u) are known (from the computation of v\*), you may use this information to discard the “worse” agents.** Response: Great question! Choosing an order sounds like a good idea. In fact, we had carefully thought about it before our submission, but it seemed that this idea could hardly help, due to the following observations. In the traditional problem of submodular maximization with a knapsack constraint, good performance relies on selecting the elements strictly according to the non-increasing order of the ratios of marginal value to cost (e.g., see \[Maxim Sviridenko 2004] “A note on maximizing a submodular set function subject to a knapsack constraint”). However, in our setting the costs of the sellers are unknown and can even be falsely reported due to the sellers’ strategic behaviors, so the strict order mentioned above cannot be ensured no matter how we order the elements based on their values. Even if we used a sealed-bid auction and solicited the cost information from the sellers to achieve the strict order mentioned above (as done in \[12,21,22,35]), we would have to trade off the approximation ratio to ensure that the sellers report their true costs, which is why the sealed-bid auction approaches in \[12,21,22,35] all have worse approximation ratios than ours. Moreover, choosing an order results in superlinear complexity, which could be unsuitable for large markets. Due to the above considerations, we abandoned the ordering idea to ensure linear time complexity while still achieving better performance ratios than SOTA. --- Rebuttal Comment 1.1: Comment: Thank you very much for the clarifying answers. --- Reply to Comment 1.1.1: Comment: You are welcome! That's our pleasure.
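For background, the ratio-based greedy order referenced in the response above can be sketched in the simpler modular-value case (an illustrative sketch, not the paper's mechanism; the function name is ours, and for a general submodular $f$ the greedy would recompute marginal values after each selection):

```python
def greedy_knapsack(values, costs, budget):
    """Illustrative greedy for modular values under a knapsack constraint:
    consider items in non-increasing order of value-to-cost ratio and take
    each item that still fits in the budget."""
    order = sorted(range(len(values)), key=lambda i: values[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen
```

Note that this order presupposes knowing the true costs; with strategic sellers the reported costs may be false, which is exactly why the strict order cannot be ensured in the setting discussed above.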
Summary: This paper studies budget feasible mechanism design, where a buyer needs to select a subset from strategic sellers with private costs, and pays them under a budget $B$. The goal is to maximize the valuation of the subset $S$, defined as $f(S)$. This paper gives both deterministic and randomized solutions for this problem, with improved approximation ratio and pricing complexity (defined as the maximum number of prices offered in the clock auction in the worst case), for monotone and non-monotone submodular functions, respectively. Finally, the paper provides an experimental evaluation of their methods against SOTA, and shows that the utility and the number of queries are better than those of the iterative-pruning (IP) and Random-TM (RTM) algorithms. Strengths: - The studied problem is well-motivated, and the theoretical results are solid. - The paper has experimental results to show empirically its advantage over existing methods. - The literature review is informative and points out some drawbacks of the existing works mentioned. Weaknesses: - There is no formal problem setting, so it might be hard for reviewers from outside the AGT community to understand this problem at first sight. Some minor comments for this paper: - The reviewer strongly suggests replacing $f(O)$ by $\text{OPT}$. - Line 138, $2^{\mathcal{N}}$ should be $2^{|\mathcal{N}|}$. - The description and the presentation of this algorithm are hard to understand. The reviewer suggests adding an input line with descriptions, e.g., the candidate set $C$, the accepted set $A$. - $v^*$ can be replaced by $u^*$. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is there any concrete example of practical applications of this method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The reviewer personally believes that the studied problem might be a little inappropriate for machine learning conferences. Theory conferences might be a better fit. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment: There is no formal problem setting, so it might be hard for reviewers from outside the AGT community to understand this problem at first sight.** Response: Thanks a lot for your comment! We have provided the formal problem setting in Section 2, including the formal problem formulations, the definitions of clock auctions and submodular functions, as well as the assumptions and notation descriptions. For a reader outside of the AGT community, our problem can also be understood as a simple pricing problem under incomplete information: we need to offer a proper price to each user with an unknown cost and finally select a group of users who accept the prices under the budget, such that the revenue of the selected users (cast by a submodular function) is maximized. Clearly, each user must behave truthfully on whether to accept the offered price for her/his own benefit (i.e., truthfulness is guaranteed). By addressing this simple pricing problem, we have avoided using other, more complex forms of auctions (e.g., the sealed-bid auctions in prior studies \[4, 10, 12, 21, 22, 35]) that are harder for a reader outside the AGT community to understand. We will follow your comments and make further efforts to help readers outside the AGT community understand our problem. **Question: Is there any concrete example of practical applications of this method?** Response: Thanks a lot for your question! We have provided two concrete examples of practical applications, namely crowdsourcing and influence maximization, in the first paragraph of Section 1, Section 6 and Appendix F of the supplementary file. These applications have also been used as representative applications of budget-feasible mechanisms in prior studies (e.g., \[4, 21, 22, 36, 37]). 
Specifically: (1) In the influence maximization application shown in Section 1 and Section 6, there is a set of users (i.e., sellers) in a social network and each of them has a cost for being selected as a "seed user". After some users are selected as "seed users" by an advertiser with a budget for compensating the seed users, they will advertise the product to their friends and the influence will then propagate in the social network through the "word of mouth" effect. The goal of the advertiser is to select a proper set of seed users under the budget to maximize the "influence spread", i.e., the expectation of the number of users who are activated in the influence propagation, which is cast by a submodular function. This influence maximization problem was originally proposed by the celebrated paper \[25] (with 9900+ citations) and has aroused great attention during the past two decades (e.g., see \[5,15,31,32]). However, in practice the users may not report their true costs, so the advertiser faces the additional challenge of ensuring the truthfulness of the users, which turns the problem into an instance of the BFM problem studied in our paper. (2) The crowdsourcing application is similar to the influence maximization application explained above, with the only difference that the "advertiser" and "seed users" in the influence maximization application are replaced by the "crowdsourcing task owner" and "workers" in the crowdsourcing application, respectively, and that the target function is re-defined. In Appendix F of the supplementary file, we have elaborated on a concrete crowdsourcing example similar to those in \[4, 19, 21, 22, 37]. 
Please note that our problem model is exactly the same as that proposed in the seminal paper \[35] and is very general (one can regard it as essentially a submodular knapsack problem plus the truthfulness requirement), so there also exist many other applications in various fields such as mobile crowdsensing, federated learning, experimental design, social advertising, vehicle sharing, data pricing, team formation, cellular traffic planning, and so on. This can be verified by checking the 300+ citations of the seminal paper \[35] published in FOCS’10. If given a chance, we will follow your comments to further clarify the applications of our method. **Comment: The reviewer strongly suggests replacing f(O) by OPT.** Response: Thanks. We will do as you suggested. **Comment: Line 138, $2^{\mathcal{N}}$ should be $2^{|\mathcal{N}|}$.** Response: Thanks. We use $2^{\mathcal{N}}$ to denote the power set of $\mathcal{N}$. Such a notation for the power set has also been used in many other proposals (e.g., \[2]\[3]\[4]\[8]\[9]\[16]\[19]\[20]\[35]). We will revise to make it clearer. **Comment: The description and the presentation of this algorithm are hard to understand. The reviewer suggests adding an input line with descriptions, e.g., the candidate set C, the accepted set A. $v^\star$ can be replaced by $u^\star$.** Response: Thanks. We will do as you suggested. **Comment: The reviewer personally believes that the studied problem might be a little inappropriate for machine learning conferences. Theory conferences might be a better fit.** Response: Thanks a lot for your comment! We agree that our paper may also fit some theory conferences. However, the scope listed in the CFP of NeurIPS’23 includes “algorithmic game theory”, which perfectly fits our paper. 
Moreover, since the representative applications of our paper include two social computing applications, i.e., influence maximization and crowdsourcing, which both have aroused great interest in machine learning conferences/journals such as NeurIPS/JMLR/JAIR (e.g., \[15, 23, 31, 32, 38]), we think that our paper also fits another area listed in the CFP of NeurIPS’23, i.e., “Social and economic aspects of machine learning”. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, I've adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much!
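The simple pricing view described in the response to the first comment above can be sketched as follows (a hypothetical illustration; the function name and inputs are our own, sellers are processed in a fixed order, and each seller truthfully accepts iff the offered price covers her private cost):

```python
def offer_prices(prices, costs, budget):
    """Hypothetical posted-pricing illustration: each seller truthfully accepts
    an offered price iff it covers her private cost; accepted sellers are
    selected and paid the offered price while the budget allows."""
    winners, spent = [], 0.0
    for i, (price, cost) in enumerate(zip(prices, costs)):
        if price >= cost and spent + price <= budget:
            winners.append(i)
            spent += price
    return winners, spent
```

Under posted pricing, every accepting seller who fits the budget must be selected; the clock auctions studied in the paper relax exactly this requirement, which is the key difference from \[Chawla et al., 2010].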
NeurIPS_2023_submissions_huggingface
2023
Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting
Accept (poster)
Summary: The paper focuses on generative diffusion modeling when some of the samples are known, which is advocated as the right setup for regression. Indeed, in a time-series context, those known samples may lie in the past (prediction), or provide boundary conditions (interpolation). In this context, the classical diffusion equations can be worked out to yield a generation that is compatible with the known samples to some controllable extent. This is the key contribution of this paper, and I must say that framing regression as a constrained diffusion is an interesting idea. From this starting point, the authors derive two variants: one where the model estimate is used as a prior, and one where some arbitrary other estimate is used. A set of forecasting experiments concludes the paper. Strengths: - The paper is very readable and provides some background and references. It is quite clear. - Framing regression as a constrained diffusion is definitely interesting and has some merits. - I would also say that I'm sure that the proposed method yields interesting results, compared to some other methods. Weaknesses: Notwithstanding its qualities, I have several concerns with the paper that prevent me from giving it a high score. - I would say that the attempt at formulating the method in a Bayesian way is actually a bit awkward in my opinion, because it does not bring much more food for thought than just plainly presenting the guidance as a regularization over the (energy-based) generating process. Indeed, the prior distribution that is picked in equation (5) just reads like picking some L2 regularization to me, and the quantile regularization that is advocated as working very well in practice just doesn't lend itself very well to a convenient Bayesian treatment. Actually, section 3.2 precisely drops this Bayesian vision to adopt an optimization-based point of view, which seems more natural to me. - I feel that there is actually a lot of redundancy between 3.1 and 3.2. 
As I see it, the two "variants" are actually the same, with the only difference that for self-guidance, the estimate is derived from the model while it is assumed arbitrary in 3.2. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - L85: consists of decomposing - L119: "see Eq 1": It is not clear to me why you provide this reference to eq (1) - eq (5), again, the choice of a unit variance for the likelihood is weird to me. I would say this is only picked this way to motivate some L2 regularization term in a "reverse-engineering" sense - I don't understand how you get \epsilon in equation (10) and further during inference time. - L249-250: when you write that the model does not require task-specific training, could you actually explicitly mention *what* the model is trained on? I thought that there would be a separate training for each task? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - will you provide some implementation? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and positive comments on the writing and the merits of our ideas. In the following, we respond to specific questions and concerns raised by the reviewer. **Comment:** On the Bayesian view of self-guidance and Eq. 5. **Response:** We introduced self-guidance using the Bayes' rule to retain the probabilistic perspective of the guidance term, and to keep the story and notation consistent with the original work on classifier guidance [1]. In our opinion, the Bayesian view is a cleaner and less ad-hoc way to set up the method. We later clarify in the paper that the selected distributions result in the MSE and quantile losses (see L138 and L148). We dropped the Bayesian view in section 3.2 because the _data-space_ refinement scheme makes it confusing to set it up. **Comment:** Redundancy between 3.1 (self-guidance) and 3.2 (refinement). "... the two "variants" are actually the same, with the only difference that for self-guidance, the estimate is derived from the model while it is assumed arbitrary in 3.2." **Response:** We believe that this is not an accurate characterization of the differences between our self-guidance and refinement schemes. The diffusion model estimate is used in *both* schemes. What differs is the interpretation and the sampling procedure. - **Self-guidance** modifies the reverse diffusion process with a self-conditioning score function. Ancestral sampling is performed starting from noise at step $T$ all the way to the observation space at step $0$ by iteratively denoising and guiding the sample. - **Refinement** interprets the implicit density of the diffusion model as a regularized energy landscape. Rather than starting from noise, it starts from the prediction of a base model and uses Langevin dynamics to sample _directly in the data space_. The self-guidance and refinement schemes can also be viewed as the predictor and corrector schemes used in some score-based diffusion models [2]. 
**Comment:** Why refer to Eq. 1 in L119? **Response:** The neural network $\epsilon_\theta$ is trained on noisy time series $\mathbf{x}^t\in\mathbb{R}^{L\times C}$. These noisy time series are obtained using Eq. 1, hence the reference to it. **Comment:** How to get $\epsilon$ in Eq. 10? **Response:** $\epsilon$ is drawn from the standard Gaussian distribution, whereas $\epsilon_\theta$ is the denoising neural network (see Fig. 2). $\mathbf{x}^t$ is the noisy transformation of $\mathbf{y}$ obtained using Eq. 1. As mentioned in L187, Eq. 10 is a simplification of the ELBO and serves as an approximation of $\log p_\theta(\mathbf{y})$. **Comment:** "What is the model trained on? I thought that there would be a separate training for each task?" **Response:** The model is trained as an unconditional generative model for each dataset. This means that the model is trained to denoise the complete sequence, $\mathbf{x}^t$. In the *forecasting with missing values* experiment, we trained one model per dataset and used the same model for inference on the different missing value tasks (e.g., random missing, blackout missing). This is in contrast to the conditional models that are trained for specific missing value tasks. Please note that the term "task" in our context does not refer to the one used in meta-learning and foundation models literature, as clarified in footnote 1. **Comment:** Will the implementation be available? **Response:** The implementation has been provided as supplementary material and will be made publicly available at a later stage. We thank the reviewer again for their feedback and hope that we have satisfactorily addressed their concerns regarding the Bayesian view and the difference between self-guidance and refinement. If so, _we hope that the reviewer would consider raising their score_. [1] **Dhariwal, Prafulla, and Alexander Nichol**. "Diffusion models beat gans on image synthesis." 
Advances in neural information processing systems 34 (2021): 8780-8794. [2] **Song, Yang, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole**. "Score-based generative modeling through stochastic differential equations." arXiv preprint arXiv:2011.13456 (2020).
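The forward noising referenced in the responses above (Eq. 1 of the paper, from which $\epsilon$ is drawn) can be sketched as follows (an illustrative sketch; the function name and array shapes are our own assumptions):

```python
import numpy as np

def diffuse(y, alpha_bar_t, rng=None):
    """Forward noising in the style of Eq. 1:
    x^t = sqrt(abar_t) * y + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(y.shape)
    x_t = np.sqrt(alpha_bar_t) * y + np.sqrt(1.0 - alpha_bar_t) * eps
    return x_t, eps
```

At $\bar{\alpha}_t = 1$ the signal passes through unchanged, and as $\bar{\alpha}_t \to 0$ the output approaches pure Gaussian noise, matching the usual DDPM forward process.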
Summary: This paper proposes TSDiff, which is an unconditional diffusion model for time series generation. In addition, the authors propose self-guidance and prediction refinement. The empirical results showcase the superiority of TSDiff over existing baselines on forecasting, refinement, and generating synthetic samples. Strengths: 1. The paper is joyful to read and the methodology is clearly described. 2. The empirical results are more than sufficient. The authors conduct extensive experiments on a variety of tasks and datasets, making the proposed methodology very convincing. 3. It is good to see that the authors proposed a new metric to make a more reasonable comparison with other baselines in Section 5.3. Weaknesses: 1. If y_obs is a subset of timesteps representing the observed timesteps, then is the conditional generation similar to an inpainting problem in 2D images? Then important baselines in solving inverse problems with pre-trained diffusion models, e.g., DDRM [1], should be included, or at least be discussed thoroughly. 2. There are several equations that are not labeled with numbers. It would be best if the authors can label them. I have several questions regarding the equation after Eq. (10): [1] Is tau hard to solve? It looks like solving tau is computationally expensive. It would help if the authors can provide some empirical results. Actually I did not quite understand "tau can be computed efficiently by tracking the running mean for the losses at all diffusion steps during training". [2] How does the optimum, tau, vary with different data points? In this equation, the expectation does not take x_t (or y) into account; does this mean we have to solve for tau for every y? Or do we jointly solve for a single tau using all the data points in the training set? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are well discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and positive comments on our method and writeup. In the following, we respond to specific concerns and questions raised. **Comment:** "...is the conditional generation similar to an inpainting..." and discussion of image inpainting models. **Response:** At a high level, there are indeed similarities between the time series forecasting/imputation problems we address and the inpainting problem in 2D images. Both involve generating missing or unobserved data conditioned on the observed data. We acknowledge the similarities and cite prior work on inpainting in our related work discussion (see L198). However, there are also key differences. In time series problems, the temporal dynamics and the sequential nature of the data introduce complexities and challenges that are not present in the static 2D inpainting problem. While DDRM provides a powerful framework for linear inverse problems, directly applying it to our problem might not be straightforward due to the aforementioned complexities. That said, we agree that it is an important related work and we will include a discussion on DDRM in the revised manuscript. **Comment:** Missing labels for equations. **Response:** We only added labels for equations that we refer to in the text. We will add labels to all equations in the revision. **Comment:** Computational complexity of the representative diffusion step, $\tau$. **Response:** $\tau$ is not computationally expensive to solve. There are two ways to obtain $\tau$: - **After training**: Compute the average loss over random datapoints for each diffusion step $t$. Then, select the $t$ closest to the average loss over all diffusion steps as the representative step, $\tau$. This can be done by the following line of code, where `losses` is an array of the loss for each diffusion step:

```python
tau = ((losses - losses.mean()) ** 2).argmin()
```

As a result, we end up with one $\tau$ for the entire dataset. 
In our experiments, we computed the loss per diffusion step with a randomly sampled batch of 1024 datapoints. The computation of $\tau$ took around 13 seconds per dataset on a single Tesla T4 GPU. We will add these details to the revision. - **During training**: Rather than computing the losses for each $t$ after the training, we can keep a running average loss for each $t$ and compute the representative timestep using these losses, as above. This is possible because the loss we use to obtain $\tau$ is the same as the training loss. We will clarify the text in the revision. **Comment:** "...the expectation does not take x_t (or y) into account..." How does the optimum ($\tau$) vary with different data points? **Response:** Thank you for spotting this typographical error. There should be an expectation over the datapoints as well. We will correct this in the revision. As mentioned above, a single diffusion step $\tau$ is computed per dataset and it does not vary with data points. Fig. 2 in the additional rebuttal PDF shows the loss per diffusion step, the average loss and the resulting representative timestep for the 8 datasets used in our experiments. We thank the reviewer again for their review and suggestions. We hope that we have addressed the reviewer's concerns regarding the computational complexity of the representative diffusion step. If so, _we kindly ask the reviewer to consider raising their score_. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the additional results and clarifications. I will keep my rating and I tend to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your response. If you have any further questions, we will be happy to answer them.
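The during-training variant described in the response above can be sketched as follows (an illustrative sketch; the class name and per-step bookkeeping are our own, and we assume every diffusion step is visited at least once before $\tau$ is read off):

```python
import numpy as np

class TauTracker:
    """Track a running average loss per diffusion step and pick the
    representative step tau, i.e. the step whose average loss is closest
    to the overall average loss across all steps."""

    def __init__(self, num_steps):
        self.sums = np.zeros(num_steps)
        self.counts = np.zeros(num_steps)

    def update(self, t, loss):
        # accumulate the training loss observed at diffusion step t
        self.sums[t] += loss
        self.counts[t] += 1

    def tau(self):
        means = self.sums / np.maximum(self.counts, 1)
        return int(((means - means.mean()) ** 2).argmin())
```

This reproduces the after-training one-liner without a separate pass over the data, since the loss used to select $\tau$ is the same as the training loss.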
Summary: This paper describes a diffusion model for time series problems. Contrary to the popular approach of using a conditional diffusion model, the authors propose to use an unconditional diffusion model, supplemented by a self-guidance mechanism. The authors also propose a prediction refinement algorithm to improve the prediction of any time series model by using the trained diffusion model. Strengths: Training a conditional diffusion model is challenging due to the high dimensionality of the conditional problem. The authors propose an interesting method for bypassing training a conditional diffusion model. The proposed methodology is simple and sound. Weaknesses: The paper does not elaborate on the details of the model, which makes it difficult to fully understand the methodology. For example, it does not clearly show the dimensions of the input and output variables, how missing inputs are treated, and so on. There are also a few comments in the Questions section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In equation (6), as $t$ gets larger, the denominator $\sqrt{\overline{\alpha}_t} \rightarrow 0$, which is a natural consequence of the diffusion process. How do you handle this singularity issue? In practice, it will not be singular, but it may potentially cause numerical instability. 2. The target distribution is $p(y_{ta}|y_{obs})$, but it actually should be $p(y_{ta}|y_{obs}) = p(y_{unobs}|y_{obs})p(y_{obs}|y_{obs})$ with $p(y_{obs}|y_{obs}) = \delta(y-y_{obs})$. So, upon the guided denoising, we expect the observed $y$ to converge to $y_{obs}$. I wonder if the model guarantees this. 3. Provide a proof that equation (9) converges to the maximum likelihood solution when $\gamma = 0$. Also, please provide a proof that (9) converges to $p(y)$. 4. Carefully reading it, I can understand how the guided diffusion, eqn (5), is computed for missing data and prediction. 
But, it will be useful if the authors can elaborate on it to improve readability. 5. It is claimed that the model can be used for imputation. Is the diffusion model trained with missing data? Or does it require full data? *I am willing to upgrade my score once these questions are addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: One of the questions about time series diffusion models is whether they memorize the patterns or learn the dynamics. Based on the learning objectives, the model is more likely to learn the patterns, which may limit the capability of diffusion-based models for time series problems with chaotic dynamics or stochastic forcing. Also, as the authors suggested, the computational cost is certainly a concern for scalability. But these are beyond the scope of the study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their comprehensive review and appreciation of our work. Our response to specific questions raised by the reviewer follows. **Comment:** On model details (input and output dimensions, how missing values are treated, etc.). **Response:** - The denoising network takes an $L \times C$ dimensional input, as mentioned in L119, where $L$ is the sequence length and $C-1$ is the number of lags. As is typical in unconditional diffusion models, the output dimension is the same as the input dimension. - During inference, the missing dimensions are masked when the self-guidance term is computed. Details on the missing value experiments can be found in Appendix B3. We will further elaborate on the model details in the revision. Specific implementation details can also be found in our code, which has been released as part of the supplementary material. **Comment:** How is the singularity of $\sqrt{\bar{\alpha}_t}$ handled in Eq. 6? **Response:** We used a linear beta scheduler with $\beta_1=0.0001$ and $\beta_T=0.1$ with $T=100$ in our experiments. This results in $\sqrt{\bar{\alpha}_T}=0.075$, which did not lead to numerical instabilities. However, in our earlier experiments using the cosine scheduler [1], we observed unstable behavior with $\sqrt{\bar{\alpha}_T}=0.0002$. Based on this experience, numerical instability can be avoided by appropriately choosing $\beta_1$, $\beta_T$, $T$ and the beta scheduler. **Comment:** Does the observed section of $y$ converge to $y_\textrm{obs}$? **Response:** Self-guidance enforces a _soft-constraint_ on the observed timesteps. Therefore, the diffused values are not guaranteed to be exactly equal to the observations. The alignment between the predictions and observations for the observed timesteps can be controlled by the scale parameter $s$ (see Eq. 3). In practice, the diffused values for the observed timesteps are close to the observations, as shown in Fig. 1 in the rebuttal PDF. 
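The schedule value quoted in the singularity response above can be checked numerically (a quick sketch using the stated linear schedule with $\beta_1=0.0001$, $\beta_T=0.1$, $T=100$):

```python
import numpy as np

T = 100
betas = np.linspace(0.0001, 0.1, T)   # linear beta schedule
alpha_bar = np.cumprod(1.0 - betas)   # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)
print(np.sqrt(alpha_bar[-1]))         # ~0.075, safely away from zero
```

With these settings the terminal $\sqrt{\bar{\alpha}_T}$ stays around 0.075 rather than collapsing toward zero, consistent with the stability reported above.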
**Comment:** Proof of convergence of maximum likelihood and Langevin Monte Carlo. **Response:** - Note that Eq. 9 with $\gamma=0$ corresponds to gradient descent, which is not guaranteed to converge to the global optimum for general (non-convex) problems. It may converge to a different local optimum, depending on the initialization. This behavior is apparent in our results in Table 3 where the scores vary for different base forecasters. We will clarify this in the revision to avoid confusion. - For an SDE of the form $dX_t=-\nabla E(X_t)dt + \sqrt{2\gamma}dB_t$, the invariant distribution is the Gibbs-Boltzmann distribution $p(x) \propto \exp\left(-\frac{E(x)}{\gamma}\right)$, where $E$ is the energy function (equal to $-\log p_\theta(\mathbf{y}) + \lambda\mathcal{R}(\mathbf{y},\tilde{\mathbf{y}})$ in our case) and $B_t$ is a Brownian motion. This result can be derived using the Fokker-Planck equation associated with the SDE (see Ch. 4 in [2]). The discretization of this SDE used in Eq. 9, also known as the _unadjusted Langevin algorithm (ULA)_, converges to the invariant distribution given certain regularity conditions (e.g., differentiability and Lipschitz gradients) on $E$ (see [3] for the analysis of ULA). Note that the invariant distribution in our case is not the true underlying distribution $p(\mathbf{y})$, which is unknown. Instead, we designed the energy function such that low energy is assigned to samples that are likely under the diffusion model $p_\theta(\mathbf{y})$ and also close to $\tilde{\mathbf{y}} = \mathrm{combine}(\mathbf{y}\_{\mathrm{obs}},g(\mathbf{y}\_{\mathrm{obs}}))$, ensured by the first and the second terms in the energy function, respectively. We will include this discussion in the revision. **Comment:** Is the model trained on missing or complete sequences? **Response:** The model was trained on complete sequences for the missing values experiments. 
The goal of this experiment was to evaluate the model's ability to handle/impute missing values during inference without any knowledge of the missing value patterns during training. We will clarify this in the revision. Further details on the missing values experiments (e.g., data splitting) can be found in Appendix B3. Note that while the model has been trained on complete sequences in our experiments, it is not a requirement for training TSDiff. Missing values during training can be easily handled by the S4 layers [4] used in our model by appropriately masking the missing timesteps. **Comment:** Does the model learn the dynamics or the patterns? **Response**: Additional investigation is needed to determine if the model learns the dynamics or patterns. Nevertheless, our results indicate that the model can make reliable predictions for typical time series forecasting datasets and tasks. Future research could focus on evaluating diffusion models' performance with chaotic/stochastically forced time series data, which could yield valuable insights. We thank the reviewer again for their insightful comments and valuable suggestions on improving the readability of our work. Based on our responses to the reviewer's questions on the specifics of our model, _we hope that the reviewer would consider raising their score_. [1] **Nichol, Alexander Quinn, and Prafulla Dhariwal**. "Improved denoising diffusion probabilistic models." In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021. [2] **Pavliotis, Grigorios A**. "Stochastic processes and applications." Springer-Verlag New York, 2016. [3] **Durmus, Alain, and Eric Moulines**. "Nonasymptotic convergence analysis for the unadjusted Langevin algorithm." (2017): 1551-1587. [4] **Gu, Albert, Karan Goel, and Christopher Ré**. "Efficiently modeling long sequences with structured state spaces." arXiv preprint arXiv:2111.00396 (2021). 
--- Rebuttal Comment 1.1: Comment: While I believe that there is still room for improvement, the manuscript is interesting enough to warrant acceptance. I changed my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing the score. We will be happy to hear any other suggestions that you may have regarding our manuscript.
Summary: In this paper, the authors proposed an unconditional diffusion model and a self-guidance mechanism for time series data that can be used for conditioning the diffusion model for downstream tasks, e.g. time series forecasting and imputation. The effectiveness of the proposed model is evaluated from three aspects: prediction, refinement of predictions from other base forecasters, and time series generation. The results show that unconditional diffusion models can achieve comparable forecasting accuracy to conditional ones. The proposed model can be potentially useful for downstream time series analysis tasks beyond the ones studied. Strengths: 1. The authors proposed a new self-guidance mechanism to approximate the conditional probability p(y_obs|x^t), which enables time series forecasting with an unconditional diffusion model. 2. Extensive experiments on multiple benchmarks demonstrate the effectiveness of the proposed unconditional diffusion model compared to conditional ones. 3. The paper is well written and easy to follow. Weaknesses: 1. Refining the prediction of a weak forecaster is a good way to evaluate the model, but it has no real application scenario. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors described the details of how to do self-guidance without explaining why this approximation can work in practice. It would be helpful to explain the high-level intuition behind it. 2. The idea of self-guidance seems to be general; can it be used in other domains, e.g. images? 3. Refining predictions is a good way to evaluate the model, but it is not a practical scenario. If one has the diffusion model, why not just use it for prediction? A lot of space is used for this in Section 4; it would be better to discuss other, more interesting aspects. 4. The authors mentioned that the computational cost of the proposed model is high.
It would be better to have experiments comparing the forecasting speed with conditional diffusion models. 5. The authors evaluate the time series synthesis using a train on synthetic, test on real setup. It would also be interesting to see a train on synthetic and real, test on real setup. Can synthetic time series be helpful for forecasting? 6. The experiments are conducted on the univariate case; can the method be applied to multivariate time series? 7. Can other time series analysis tasks benefit from the proposed models other than forecasting and imputation? For instance, time series classification, clustering, similarity search. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors stated the limitation of the paper, which is the computational cost. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and constructive feedback. In the following, we respond to specific questions raised by the reviewer. **Comment:** Refinement has no real application scenario / Too much space dedicated to refinement. **Response:** The primary goal of the refinement experiments was to offer an alternative view on the diffusion model. Self-guidance modifies the reverse diffusion process, whereas refinement utilizes the implicit probability density of the model. However, we respectfully disagree with the reviewer's comment that refinement lacks practical application scenarios. Refinement offers a complementary approach that caters to different computational constraints. While self-guidance generally yields superior results, it involves iterating through all $T$ diffusion steps, which can be computationally expensive. In contrast, refinement can be performed for fewer optimization steps, suited to one's compute budget. Particularly, in the case of simple base forecasters, such as Seasonal Naive and Linear, even a single refinement step yields considerable improvements (see Fig. 5 in the Appendix). This highlights a valuable trade-off between runtime and prediction performance when compared to self-guidance. In industrial forecasting applications, it is not uncommon to have access to a complex, black-box production forecasting system. Refinement provides a computationally efficient yet theoretically sound way to improve forecasts as a post-processor. **Comment:** What's the high-level intuition behind self-guidance? Why does the approximation work? **Response:** Guidance controls the reverse diffusion process via a conditioning term. Forecasting and imputation involve conditioning on the observed section of the time series. The main intuition behind "self"-guidance is that a model designed for complete sequences should reasonably approximate partial sequences. The one-step denoising used in Eq.
6 serves as a cost-effective approximation of the model for the observed time series, providing the requisite conditioning term for guidance in the form of the score function. **Comment:** Can self-guidance be used in other domains? **Response:** Yes, the idea of self-guidance is general and could potentially be used for other domains and tasks, e.g., images and videos. We are optimistic that future research will adopt our self-guidance mechanism across diverse domains. **Comment:** Computational cost of inference compared to conditional diffusion models. **Response:** Thank you for your suggestion. We will add a table in the appendix comparing the inference costs of conditional vs. unconditional diffusion models on different datasets. The following table shows the inference time for the conditional and unconditional models on the **Exchange** dataset while controlling for everything (e.g., batch size, number of samples, GPU).

| Model | Inference Time (seconds) |
| -------- | -------- |
| TSDiff-Cond | 162.97 |
| TSDiff-MS | 201.08 |
| TSDiff-Q | 201.67 |

The additional overhead for the unconditional model (i.e., TSDiff-MS and TSDiff-Q) comes from the computation of the gradient of a neural network during self-guidance (see the equation after L128). **Comment:** Can a _train on real and synthetic_ scenario improve forecasting performance? **Response:** Based on our _train on synthetic-test on real_ results, we are hopeful that data augmentation with synthetic samples would improve downstream forecasting performance. Given other aspects of the current work (self-guidance and refinement), the scope of our generative evaluation was limited to the _train on synthetic-test on real_ scenario. This could be explored in future work. **Comment:** Can the method be applied to multivariate time series? **Response:** Yes, the ideas presented in this work are not limited to univariate time series.
We decided to focus on the univariate case as univariate time series constitute a significant portion of real-world problems. The following are two simple ways of extending the model to handle multivariate time series: - Train the model with the channel-independence assumption (popularized by PatchTST [1]), i.e., all feature dimensions of the multivariate time series are treated as independent univariate time series. - Modify the backbone denoising network to incorporate a feature embedder for the multivariate time series. We will add this discussion to the revision. **Comment:** Can the proposed models benefit other time series tasks such as classification, clustering and similarity search? **Response:** Yes, we expect the proposed models to benefit other tasks. For example, by augmenting a classifier with synthetic samples or utilizing the imputation capabilities to clean datasets. Furthermore, future work could investigate the possibility of anomaly detection using the implicit density learned by the model. We thank the reviewer again for their comments and suggestions. We hope that we have satisfactorily answered the reviewer's questions and sufficiently clarified the reviewer's main concern regarding the utility of refinement. If so, _we request the reviewer to consider raising their score_. [1] **Nie, Yuqi, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam**. "A time series is worth 64 words: Long-term forecasting with transformers." arXiv preprint arXiv:2211.14730 (2022). --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their feedback. Still, I am not fully convinced that the refinement of a baseline forecaster like seasonal naive has practical impact under computational constraints. The proposed method TSDiff could also be used with limited iterations and output a less accurate prediction, right? Then the authors would need to answer how this one compares to the refinement of baseline forecasters at similar cost.
Otherwise, maybe the authors could consider selling the refinement point as a speed-up, or a trade-off of accuracy and speed, for the proposed TSDiff, which would be more convincing for me. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We agree that introducing new forecasting models into real-world systems brings a need to study the trade-off between resource expenditure (in terms of computation) and performance. Specifically, this involves comparing the use of TSDiff as a standalone forecaster (with a limited number of iterations) against refining an existing base forecaster using TSDiff. However, in many industrial forecasting scenarios, factors such as resource limitations (pertaining to human capital), the presence of legacy systems and the downstream decision-making systems that rely on them, and stringent testing requirements often hinder or delay the replacement of existing production forecasting models. In these instances, refinement presents a cost-effective solution that enhances forecast accuracy post hoc without modifying the core forecasting process, a change that could potentially be a lengthy procedure. Importantly, refinement is versatile; it is independent of the base forecaster’s type and requires access only to forecast samples. > The proposed method TSDiff could also be used with limited iterations and output a less accurate prediction, right? Then the authors would need to answer how this one compares to the refinement of baseline forecasters at similar cost. Comparing the performance of refinement and guidance might not be fair due to their distinct motivations and the reliance of refinement’s performance on the quality of underlying base forecasts. Nonetheless, we agree with the reviewer that this is an interesting experiment. We conducted an initial investigation using the first two datasets from our paper (Electricity and Solar).
The table below presents a comparison of refinement from a seasonal naive predictor (LMC-Q, ML-Q) and diffusion guidance (TSDiff-Q) under different computational budgets (1, 2, 5 and 10 iterations).

| Iterations | Model | Electricity | Solar |
| ---------: | :--------------- | ----------: | -----------: |
| | Seasonal Naive | 0.069 | 0.512 |
| 1 | LMC-Q | **0.054** | 0.505 |
| 1 | ML-Q | **0.054** | **0.504** |
| 1 | TSDiff-Q | 0.759 | 1.038 |
| 2 | LMC-Q | **0.054** | 0.501 |
| 2 | ML-Q | **0.054** | **0.499** |
| 2 | TSDiff-Q | 0.816 | 1.013 |
| 5 | LMC-Q | 0.054 | 0.494 |
| 5 | ML-Q | **0.053** | 0.493 |
| 5 | TSDiff-Q | 0.088 | **0.483** |
| 10 | LMC-Q | 0.054 | 0.486 |
| 10 | ML-Q | **0.052** | 0.485 |
| 10 | TSDiff-Q | 0.073 | **0.419** |

We observe that TSDiff-Q’s performance is poor with a low number of diffusion steps (1, 2), but improves when using 5 or more steps. In contrast, refinement demonstrates significant early-stage enhancements over the base model, which then plateau as more iterations are performed. We utilized the uniform skipping method proposed in [1] for diffusion guidance with reduced diffusion steps. The performance trends indicate that TSDiff-Q’s effectiveness increases with more iterations, while refinement provides considerable benefits during the initial iterations before plateauing. We hope that our response has convinced the reviewer of the practical utility of refinement. We will be happy to answer any further questions. [1] **Song, Jiaming, Chenlin Meng, and Stefano Ermon**. “Denoising diffusion implicit models.” arXiv preprint arXiv:2010.02502 (2020).
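To make the one-step-denoising guidance discussed in this thread more concrete, here is a hedged NumPy sketch: a Tweedie-style one-step estimate of the clean series, and a guidance gradient from a Gaussian likelihood on the observed entries. The finite-difference gradient, the quadratic likelihood, and all names are illustrative stand-ins for the paper's Eq. 6, not its exact implementation (which would use a neural denoiser and autodiff).

```python
import numpy as np

def one_step_denoise(x_t, t, eps_model, alpha_bar):
    """Standard DDPM identity: estimate x_0 from x_t in a single step."""
    return (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_model(x_t, t)) / np.sqrt(alpha_bar[t])

def self_guidance_grad(x_t, t, y_obs, obs_mask, eps_model, alpha_bar, h=1e-4):
    """Approximate grad_x log p(y_obs | x_t) with a Gaussian likelihood around
    the one-step denoised estimate, via central finite differences."""
    def log_lik(x):
        x0_hat = one_step_denoise(x, t, eps_model, alpha_bar)
        r = (y_obs - x0_hat) * obs_mask  # penalize mismatch on observed entries only
        return -float(np.sum(r * r))
    g = np.zeros_like(x_t)
    for i in range(x_t.size):
        d = np.zeros_like(x_t)
        d[i] = h
        g[i] = (log_lik(x_t + d) - log_lik(x_t - d)) / (2.0 * h)
    return g
```

In a guided reverse diffusion step, this gradient would be added, suitably scaled, to the unconditional score before sampling the next latent.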
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful reviews and their constructive feedback to improve the quality of our paper. We are pleased to note that the reviewers appreciate: - the **technical significance of our self-guidance approach** ("an interesting method of bypassing training a conditional diffusion model", "constrained diffusion is definitely interesting and has some merits", rated *good* on **Soundness** and **Contribution** by 3/4 reviewers); - the **clarity of our manuscript** ("paper is joyful to read and the methodology is clearly described", "the paper is very readable and provides some background and references. It is quite clear", "paper is well written and easy to follow"); - and the **thoroughness of our empirical evaluation** ("Extensive experiments on multiple benchmarks demonstrate the effectiveness of the proposed unconditional diffusion model...", "The empirical results are more than enough ... making the proposed methodology very convincing", "I'm sure that the proposed method yields interesting results, compared to some other methods"). Detailed responses to individual reviewers are available under each review. In this general response, we seek to reemphasize the key contributions of our work. - We present a fresh perspective on time series forecasting and imputation via *unconditional* diffusion modeling, in contrast to prior work that focuses on conditional models. To this end, we propose two novel inference schemes to utilize unconditional diffusion models for conditional tasks during inference. - **Observation self-guidance** conditions reverse diffusion for arbitrary forecasting tasks via a guidance term derived from the model's own estimate of the observed time series. The idea behind self-guidance is general and may be applicable to other domains. - **Refinement** uses the implicit density learned by the diffusion model to improve forecasts from base forecasters by sampling from an energy-based model.
- We show, through extensive benchmarks, that the proposed self-guidance approach is competitive against task-specific conditional models across datasets and forecasting scenarios. Furthermore, our refinement scheme is able to improve forecasts from base forecasters (especially simple ones such as Linear and Seasonal Naive). - We demonstrate empirically that downstream models trained solely on the synthetic samples from the diffusion models generate forecasts of high quality, outperforming existing time series generative models based on VAE and GAN frameworks. This opens up potential research directions for analyzing the utility of the synthetic samples for downstream tasks either theoretically or empirically. - We propose a metric to evaluate the quality of synthetic samples based on a simple linear regression model, which is not sensitive to architecture choices and random initializations, unlike existing predictive metrics. Pdf: /pdf/a18dfe82f96aa09c592ab4a32dd3d16fb2b1d0db.pdf
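As an illustration of the linear-regression-based predictive metric proposed above, a minimal sketch might look as follows. The window size, the ordinary-least-squares fit, and the MAE readout are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def linear_predictive_score(train_series, test_series, context=24):
    """Fit OLS from a context window to the next value on one set of series
    (e.g. synthetic samples), then report one-step-ahead MAE on another set
    (e.g. real data). Lower is better."""
    def windows(series_list):
        X, y = [], []
        for s in series_list:
            for i in range(len(s) - context):
                X.append(s[i:i + context])
                y.append(s[i + context])
        return np.asarray(X), np.asarray(y)
    Xtr, ytr = windows(train_series)
    Xte, yte = windows(test_series)
    # append a bias column; lstsq returns the minimum-norm solution
    w, *_ = np.linalg.lstsq(np.c_[Xtr, np.ones(len(Xtr))], ytr, rcond=None)
    pred = np.c_[Xte, np.ones(len(Xte))] @ w
    return float(np.mean(np.abs(pred - yte)))
```

Because the predictor is a fixed linear model, the score depends only on the data, avoiding the sensitivity to architecture choices and random initializations that the rebuttal attributes to existing predictive metrics.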
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models
Accept (poster)
Summary: This paper proposes a new method for performing the task of "Foley", which entails the creation of sounds that match a video, for example, making an audio clip in a studio that matches a character's footsteps in a film, as a post-production task. The authors proposed to do this automatically via deep learning, which has been proposed in other works, notably Im2Wav from last year. They propose 4 main contributions: audio-visual contrastive pre-training, latent diffusion modelling, an augmentation technique based on cropping different clips together, and the combination of classifier guidance and classifier-free guidance. Through a set of ablations, the authors show the importance of these four contributions. They also show superior results compared to 2 previous works on 2/4 metrics, and on inference time. Many qualitative results are shown and the appendix includes a large amount of discussion around many of the central topics. Strengths: Fundamentally, the paper has enough contributions, in my opinion. Out of the 4, the first is very substantial, and the other 3 are reasonably novel when applied to this field. The writing is quite consistent with little to no mistakes, and in general the clarity of the discussions is remarkable. The paper does well in introducing readers to the task of video foley, which is not particularly well known amongst most readers, and also does a good job at presenting a lot of the modeling techniques used. The ablations satisfy the vast majority of questions the reader may have, and the more fine-grained ones are a nice addition. Quantitative results are explicit and well-presented. The lengthy appendix is welcome with some fruitful discussions. The demos are really impressive when compared to other works. Weaknesses: From the demos and quantitative results, the previous approaches seem to feature quite weak reproductions of the original sound. 
The authors only compare with two methods, which leaves me wondering if there could be comparisons with more works. I have found, for example, that FoleyGAN (2021) seems to perform the same task, but no comparison (or citation) is present. I believe the results would be a lot more convincing with more comparisons. Diff-Foley is also outperformed on 2/4 metrics, which is not very reassuring, although these are FID and KL, which are known to be somewhat unreliable. While the authors clearly show that adding classifier guidance helps compared to CFG alone, I would like to see an ablation where only classifier guidance is used as well, as a matter of completeness, to show that both components of the double guidance are truly important. Figure 2 is good for the most part, but the latent diffusion section is, in my opinion, too vague - instead of just having a big box labelled "latent diffusion", it would be nice to have more detail to explain exactly what is happening here, particularly for readers who are not already familiar with LDMs. The third contribution "Temporal Split & Merge Augmentation" is similar to augmentations that have long been used in other domains, namely cropping parts of different data points together to form a new one. For example, "MixUp" does exactly this by mixing two different images and labels together. Some discussion around previous approaches similar to this contribution, with adequate referencing, is necessary. In Table 1, if inference time is mentioned, then the number of parameters and training time should, if possible, also be presented for a fairer comparison. In the related work, there is a section on "Video-to-Audio Generation", but existing works on a specific subset of video-to-audio that has received a lot of attention, video-to-speech, are not mentioned at all.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: As mentioned above, are there any other methods that this work could be compared with in the results section? Will the code be released upon acceptance? If so, will this include training code, pre-trained models, etc.? These contributions to the open-source community would be very welcome. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Although I realize there are space limitations, the broader impact is really insufficient. Please put a bit more substance into this discussion. For example, you can talk about how this technology could be used to create fake videos more accurately, which could help spread false information. Or perhaps you can talk about the consequences of producing an inaccurate reproduction of the audio, which may change the meaning of the video and mislead viewers. This may also be used to improve video surveillance, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your positive feedback. We respond to the weaknesses and questions below. ### 1. About more baselines. > *1.1: FoleyGAN (2021) seems to perform the same task, but no comparison (or citation) is present.* Thanks for pointing this out; we will gladly cite this reference in the final paper. Regarding FoleyGAN (2021), we could not reproduce their experiments as their code is not publicly available. Additionally, FoleyGAN has only been tested on a small dataset of 28K samples, and its efficacy on large-scale audio-video datasets like VGGSound remains unverified. ### 2. Discussion of metrics. > 2.1: Diff-Foley is outperformed on 2/4 metrics, which is also not very reassuring. **(1) Update on Human Eval:** As suggested by reviewers k7sd and tzzm, we have conducted a human evaluation, which further demonstrates the superiority of our method in generating highly synchronized audio with strong audio-visual relevance. Please refer to **Global Response 1**. **(2) About FID and KL:** We agree with your point that FID and KL are known to be somewhat unreliable. For a detailed discussion, please refer to **Global Response 2**. ### 3. Ablation on using only classifier guidance. Sure. Results are provided in the following table for completeness. We see that adopting only classifier guidance leads to inferior results. Using only the gradient of the classifier for guidance is unstable during inference, but combining CFG and CG results in better performance (as discussed in Supplementary C.3-4). This further verifies the effectiveness of our proposed double guidance technique.
| Model | Stage1 CAVP Dataset | CFG | CG | IS $\uparrow$ | FID $\downarrow$ | KL $\downarrow$ | Align Acc (%) $\uparrow$ |
| :---------------- | :---------------------: | :-: | :-: | :---: | :---: | :---: | :---: |
| | VGGSound | &cross; | &cross; | 19.86 | 18.45 | 6.41 | 67.59 |
| | VGGSound | &cross; | &check; | 16.58 | 20.20 | 6.81 | 62.24 |
| | VGGSound | &check; | &cross; | 51.42 | 11.48 | 6.48 | 85.88 |
| Diff-Foley (Ours) | VGGSound | &check; | &check; | 53.45 | **10.67** | 6.54 | 89.08 |
| | VGGSound + AudioSet-V2A | &cross; | &cross; | 22.07 | 18.20 | 6.52 | 69.41 |
| | VGGSound + AudioSet-V2A | &cross; | &check; | 17.57 | 20.87 | 6.69 | 66.05 |
| | VGGSound + AudioSet-V2A | &check; | &cross; | 52.07 | 11.61 | **6.33** | 92.35 |
| | VGGSound + AudioSet-V2A | &check; | &check; | **60.39** | 10.73 | 6.42 | **94.78** |

We will add these additional results in the final version of our paper for completeness; thanks again for the suggestion. ### 4. About paper revision. > *4.1: Figure 2, latent diffusion module too vague.* Thanks for pointing this out; we will add more detail to the latent diffusion module in Figure 2 in the final version of our paper. > *4.2: Reference for Temporal Split \& Merge Aug.* Thanks, we will add more discussion and cite relevant papers like MixUp in the final version of our paper. > *4.3: Show training time and number of params.* Thanks for mentioning it. In the realm of generative models, our focus lies primarily on inference time, which has a larger impact on practical use. Diffusion models, the current state-of-the-art generative models, are known to require a larger number of parameters and longer training times to achieve remarkable generation performance, as exemplified by Stable Diffusion. Detailed training time and the number of parameters are provided in the supplementary material. > *4.4: Video-to-speech works are not mentioned.* Thanks for pointing this out.
We will add more discussions and references for video-to-speech in the final version of our paper. > *4.5: The broader impact is really insufficient.* Thanks for your advice; we will add more substance to the discussion of broader impact in the final version. ### 5. Other Questions. > *5.1: Are there any other methods that this work could be compared with in the results section?* Thank you for raising this point. We have compared our work with SpecVQGAN and Im2wav, the latest two models that have been applied to the VGGSound large-scale dataset for video-to-audio tasks. Other related works are either outdated or have not been validated on large-scale datasets, which limits their relevance for comparison with our method. We believe that our comparison with the baseline methods **provides a comprehensive evaluation and stands as relatively complete.** > *5.2: Will the code be released upon acceptance? If so, will this include training code, pre-trained models, etc.?* Yes, we recognize the value of open source. We will release the code and the pre-trained model once the paper has been accepted. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for answering my questions. I appreciate the new ablation and human evaluation. Also happy to hear you'll be releasing code, acknowledging previous works in video-to-speech, adjusting Figure 2, and have added num. params and training time. Happy to raise my score slightly. --- Reply to Comment 1.1.1: Title: Response to Reviewer MA6i Comment: Thanks for your response. We express our gratitude to you for the valuable discussion and positive evaluation. We are glad to hear that our discussion has cleared your concerns.
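The double guidance combination discussed in this thread can be sketched as follows, assuming the common practice of applying CFG and CG on the predicted noise; the weights and the $\sqrt{1-\bar\alpha_t}$ score-to-noise conversion are standard guided-diffusion conventions, not necessarily Diff-Foley's exact formulation.

```python
import numpy as np

def double_guidance_eps(eps_cond, eps_uncond, grad_log_p_sync, alpha_bar_t,
                        w_cfg=4.5, w_cg=1.0):
    """Combine classifier-free guidance (extrapolating conditional vs.
    unconditional noise predictions) with classifier guidance (gradient of a
    synchronization classifier's log-probability w.r.t. the noisy latent x_t)."""
    eps = eps_uncond + w_cfg * (eps_cond - eps_uncond)       # CFG term
    return eps - w_cg * np.sqrt(1.0 - alpha_bar_t) * grad_log_p_sync  # CG term
```

Setting `w_cg = 0` recovers plain CFG, and `w_cfg = 1` with `w_cg > 0` recovers plain classifier guidance, mirroring the four CFG/CG rows of the ablation in the thread above.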
Summary: The paper presents an approach to generate and align audio with an existing video track, using a diffusion process. This is useful in video post-processing, where frequently sound effects have to be aligned with existing video footage (and not just match semantically). The authors demonstrate the quality of their approach, which relies on a "double guidance" technique to improve the quality of the reverse process in the LDM, and improve the alignment. The authors perform fine-tuning and ablation studies, to further understand the properties of the proposed approach. Strengths: Originality The paper presents a novel approach to learning audio-visual alignment, using a CLIP-style loss, taking within-clip and across-clip contrastive pairs from audio and video. This CAVP is a novel contribution, and at the core of the proposed work. LDMs are still a non-standard use in audio, although they are finding applications. Quality, Significance Given the paper's improvements (mostly given through the CAVP formulation and its application to the LDM), I think the paper makes a significant contribution, although the topic is somewhat "niche". Clarity The paper is mostly well written; covering a reasonable set of experiments, it describes the main argument. Weaknesses: I don't understand the effect of temporal split & merge, section 3.3 (also see the question below) -- IIUC, the model could be trained on any audio-video pair, so if AudioSet+VGGSound is not enough, why not simply crawl more data from the Internet? It is clear that temporal augmentation improves results, specifically around the alignment metric, but Table 5 does not seem to indicate that more data always improves results? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Section 3.3, you say "We validate the effectiveness of this augmentation method in Sec 3.3." - please explain?
- In the abstract (and in other places): "we demonstrate DIFF-FOLEY practical applicability and generalization capabilities via downstream finetuning." -- how does fine-tuning, which is kind of a specialization of the model, demonstrate generalization capability? - Figure 5 - I am not sure I understood what is presented. The model makes mistakes for the first example - are these being resolved through fine-tuning the diffusion model, or are they still present? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
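The CLIP-style contrastive objective described in this review can be sketched as a symmetric InfoNCE loss over paired audio/video embeddings. This is a generic formulation with the temperature and normalization as assumptions; it is not CAVP's exact loss, which additionally uses temporal within-clip contrastive pairs.

```python
import numpy as np

def clip_style_loss(audio_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE: matched audio/video pairs sit on the diagonal of the
    similarity matrix; all other batch entries act as negatives."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the audio-to-video and video-to-audio directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

The loss is near zero when each audio embedding is closest to its own video embedding, and large when pairs are shuffled, which is the alignment pressure the review attributes to the pretraining stage.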
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive feedback. We address your concerns below. ### 1. About Temporal Split & Merge Augmentation. > *1.1: Explanation for the sentence: "We validate the effectiveness of this augmentation method in Sec 3.3."* Our apologies for the confusion. The correct sentence should read: "We validate the effectiveness of this augmentation method in Sec 4.1.2." The ablation study in Sec 4.1.2 verifies this temporal augmentation's effectiveness. This error will be corrected in the final version of our paper. > *1.2: The model can be trained on any audio-video pair? If AudioSet+VGGSound is not enough, why not simply crawl more data from the Internet?* Yes, you're correct that our model can be trained on any audio-video pair, and more data might lessen the need for Temporal Split and Merge Aug. However, Diff-Foley demands high-quality, natural audio-video pairs with strong audio-visual correlation. Simply crawling more data from the Internet often yields low-quality pairs: most online videos contain elements like human speech, unrelated audio, and noise, which can seriously degrade performance. **Cleaning and filtering this data require considerable time and effort**. This augmentation method offers an elegant solution to this problem. By incorporating the temporal alignment prior into the training process, it enables the model to be trained on relatively smaller but carefully filtered, high-quality datasets like VGGSound. > *1.3: Table 5 does not seem to indicate that more data always improves results?* In Table 5, all the metrics have improved except for FID and KL when using more data for Stage1 CAVP. As reviewer MA6i mentioned, FID and KL are known to be somewhat unreliable; however, we included them for the sake of completeness. In general, IS and Align Acc are more convincing metrics that truly reflect audio quality.
The results in Table 5 illustrate the potential for enhanced performance by scaling up the Stage1 CAVP dataset to a very large scale, akin to the CLIP model. ### 2. About downstream fine-tuning. > *2.1: How does fine-tuning, which is kind of a specialization of the model, demonstrate generalization capability?* Thanks for pointing out the ambiguity. What we originally meant is that by fine-tuning the pre-trained Diff-Foley model, we can skillfully adapt it to specific sound synthesis tasks, much like fine-tuning Stable Diffusion models for personalized images. This adaptation not only promotes broader applications but also demonstrates the strong generative capabilities of the original pre-trained model, showing a certain extent of generalization capability. Thank you for your valuable feedback; we'll place greater emphasis on the broad applicability and specialization of downstream fine-tuning to clarify this aspect and make it less ambiguous in the final paper. ### 3. Illustration of Figure 5. > *3.1: The model makes mistakes for the first example.* Thanks for asking. Figure 5 shows the generative results after fine-tuning on the Kitchen dataset. It's important to note that our audio generative model isn't aimed at perfectly reconstructing audio from video content, a task that is indeed impossible. Rather, we seek to **generate varied audio that aligns with human perception, even if it differs from the ground-truth audio**. In Figure 5, Diff-Foley capably generates the corresponding cutting sounds, an outcome that aligns with human perception, despite noise and other variations in the ground-truth audio, which lead to spectrogram differences. Feel free to revisit this sample on our website; you'll find the results are reasonable. Despite the subtlety of the cutting movement, which is extremely challenging, Diff-Foley manages to create synchronized audio that aligns convincingly with human perception. ### 4. Update on human evaluation.
For your information, as suggested by reviewers k7sd and tzzm, we've conducted a human evaluation; please refer to **Global Response 1**. This further demonstrates the superiority of our method in generating highly synchronized audio with strong audio-visual relevance. --- Rebuttal Comment 1.1: Title: Awaiting Your Valuable Feedback Comment: We appreciate the time and effort the reviewer has dedicated to providing us with thorough and constructive feedback. Please inform us if our response addresses all concerns and let us know if more information is needed. We are committed to providing any necessary clarifications. --- Rebuttal Comment 1.2: Comment: Thank you for providing a detailed rebuttal and addressing my questions. The addition of the subjective evaluation is a good further validation of the usefulness of the approach. The paper will benefit from including the information currently conveyed in the author responses, and under the assumption that the authors will be able to include it, I am happy to raise my score from 5 to 6. Also looking forward to seeing the code being released as open source. --- Reply to Comment 1.2.1: Title: Response to Reviewer 55K5 Comment: Thanks for your response. We express our gratitude to you for the valuable discussion and positive evaluation. We will refine our paper, taking into account the valuable suggestions from the reviewers and further clarifying some points to fully address several concerns.
Summary: This paper presents DIFF-FOLEY, a synchronized video-to-audio synthesis method with a latent diffusion model (LDM) that generates audio with improved synchronization and audio-visual relevance. The method adopts contrastive audio-visual pretraining (CAVP) to learn more temporally and semantically aligned features, then trains an LDM with CAVP-aligned visual features on the spectrogram latent space. During inference, the method adopts a combination of classifier-free guidance and classifier guidance based on a synchronization classifier. The proposed model outperforms existing methods in audio-visual synchronicity and inception score. Strengths: The proposed model achieves greater audio-visual synchronicity compared to several baselines. Through analysis, the audio-visual pre-training module indeed brings significant gains to the synchronicity. The paper is overall well-written and the authors have conducted a thorough analysis of the effectiveness of different modules, including the effect of the two guidances, the features used, and the effect of pre-training. Weaknesses: Despite its advantages in audio-visual synchronicity and inception score, the proposed method falls below the baselines in other metrics such as FID and KL divergence. There are no human evaluation metrics in the comparison among the methods. Thus it is unclear overall how realistic the generated audios are compared to other methods. Regarding the synchronicity classifier, the authors have not thoroughly analyzed it, even though it is the key model used to measure the main metric in the paper. The paper mentions it achieves 90% accuracy on the test set. However, it is unclear how the test set is constructed. For example, how is the negative-sample set constructed? What are the precision and recall, respectively? How well does the accuracy match human perception of audio-visual synchronicity?
In terms of modeling, the generative model is a typical LDM model used in audio generation and lacks general novelty. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * What is the audio-visual synchronicity classifier used in the model? Is it the same as the classifier used for accuracy measurement? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive feedback. We address your concerns below. ### 1. Discussion on metrics. > *1.1: The proposed method falls below the baselines in other metrics such as FID and KL divergence.* Thanks, please refer to **Global Response 2**. ### 2. Human evaluation. > *2.1: There is no human evaluation metrics in the comparison among the methods.* Thanks for your valuable advice; please refer to **Global Response 1**. ### 3. About classifier for accuracy measurement. > *3.1: Unclear how the test set is constructed? Recall and Precision? Human perception of audio-visual synchronicity?* Thanks for asking about the details of the classifier for accuracy measurement. While we have a detailed discussion in supplementary Section A.1, we're adding further discussion here to fully address your concern. **(1). Test set construction:** As mentioned in Lines 188-191, we train the classifier with three types of pairs: true (label 1), temporal shift (label 0), and wrong (label 0). The test set is constructed similarly. Using a fixed random seed, each original audio-video pair in the VGGSound test set is either left unchanged with prob. 50% (true pair, label 1), temporally shifted with prob. 25% (temporal shift pair, label 0), or mismatched with another video's audio with prob. 25% (wrong pair, label 0). The total test set is around 14K samples, with 50% true pairs **(positive sample set)** and 25% temporal shift pairs + 25% wrong pairs **(negative sample set)**. **(2). Precision and Recall:** The detailed classifier metrics are provided here. **Recall: 84.92%**, **Precision: 91.32%**, and **Accuracy: 88.31%**. **(3). Human Perception Alignment:** As discussed in Q2, human evaluation results on content relevance and synchronization are provided. We observe that **content relevance and synchronization assessments from our classifier closely align with those from human evaluators**, confirming the effectiveness of our sync classifier.
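The test-set construction in (1) can be sketched in a few lines of Python. This is a minimal illustration with placeholder shift/mismatch operations (`("shifted", audio)` and a swapped-in audio stand in for the actual temporal-shift and mismatch operations described in supplementary Sec A.1), not the paper's implementation:

```python
import random

def build_sync_test_set(pairs, seed=0):
    """Assign each audio-video pair a synchronization label.

    With prob. 0.50 keep the true pair (label 1); with prob. 0.25
    temporally shift the audio (label 0, placeholder op here); with
    prob. 0.25 replace the audio with another video's audio (label 0).
    A fixed seed makes the test set reproducible.
    """
    rng = random.Random(seed)
    test_set = []
    for i, (video, audio) in enumerate(pairs):
        r = rng.random()
        if r < 0.50:                      # true pair
            test_set.append((video, audio, 1))
        elif r < 0.75:                    # temporal-shift pair (placeholder)
            test_set.append((video, ("shifted", audio), 0))
        else:                             # wrong pair: another video's audio
            j = rng.randrange(len(pairs))
            while j == i:
                j = rng.randrange(len(pairs))
            test_set.append((video, pairs[j][1], 0))
    return test_set
```

On the ~14K VGGSound test pairs this yields roughly 50% positive and 50% negative samples, and rerunning with the same seed reproduces the identical split.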
We will add the above details to the supplementary. Thanks again for your advice. > *3.2: What is the audio-visual synchronicity classifier used in the model? Is it the same as the classifier for accuracy measurement?* First of all, it's important to note that these are two distinct classifiers trained in different ways. When utilizing the double guidance technique, we did not access the alignment classifier used for accuracy measurement. Specifically, the classifier used in double guidance is a noisy classifier, named $F_\theta^{DG}$; it takes the noisy latent $z_t$, time embedding $t$, and aligned visual features $E_v$ as input, and the predicted alignment label $\hat{y}$ is computed as $\hat{y}=F_\theta^{DG}(z_t, t, E_v)$. The sync classifier for accuracy measurement, named $F_\phi^{sync}$, takes only two inputs, the latent $z_0$ and aligned visual features $E_v$, to predict the synchronization label with $\hat{y}=F_\phi^{sync}(z_0, E_v)$. ### 4. About model novelty > *4.1: The generative model is a typical LDM model used in audio generation and lacks general novelty.* Thank you for pointing this out. The LDM is recognized as the current state-of-the-art generative model, having achieved significant progress in fields such as image generation (e.g., Stable Diffusion). Its effectiveness makes it a natural choice for our work, aligning with our focus on **exploring its application rather than designing an entirely new generative model**. However, we emphasize that our work isn't just a straightforward application of the LDM. We've introduced significant innovations for audio generation using LDMs, including novel techniques like CAVP features, double guidance, and temporal split and merge augmentation. These enhancements were rigorously verified to confirm their effectiveness. In this context, the contributions of our article go beyond the mere application of a known model.
**We firmly believe that our models present significant contributions and novelty to the field**. --- Rebuttal Comment 1.1: Title: Awaiting Your Valuable Feedback Comment: We appreciate the time and effort the reviewer has dedicated to providing us with thorough and constructive feedback. Please inform us if our response addresses all concerns and let us know if more information is needed. We are committed to providing any necessary clarifications.
Summary: The focus of the paper is audio synthesis. More specifically, it focuses on video-to-audio synthesis and in particular on synchronized synthesis of audio. It relies on latent diffusion models for the synthesis and proposes an aligned audio-visual representation learning approach to improve synchronization of synthesized audio and input video. CAVP – Contrastive Audio-Visual Pre-training – tries to learn these features through contrastive learning. Experiments are conducted on the VGGSound dataset and both quantitative and qualitative results are shown on the dataset. In quantitative terms, the proposed Diff-Foley method outperforms prior works by a considerable margin with respect to some of the metrics. Strengths: -- The paper addresses a key problem in audio/video synthesis. Generating synchronized audio for videos is challenging and something most current methods struggle with. -- Learning good audio-visual representations is essential to solving this problem. The approach taken by this paper makes sense. -- The gain in the IS metric and inference time using the proposed method is large. Moreover, the demo and the qualitative results do show superior and more synchronized results. Weaknesses: -- In Eq 1, aren't the two terms the same? Why have they been separated? The same is true for Eq 2. -- Analysis and discussion of duration as a factor is completely missing. It appears that all empirical results are given for fixed 8-second audio. Some results for variable-length video inputs would be helpful to understand the duration factor. More importantly, producing synchronized audio for longer-form audio/video would be more challenging than for relatively short audio/video. How does the method work out in those cases? -- While the Diff-Foley method does well (compared to others in Table 1) in terms of IS and synchro. acc., it does not do well in terms of FID and KL. Some discussion on why that might be happening is desirable.
Diff-Foley is significantly inferior to other methods on these metrics. In Table 2, on the impact of visual features, CAVP ends up doing better than others in terms of FID but not IS and KL. Overall, these results and the lack of discussion of what might be happening lead to an unclear understanding of the performance. -- Audio/speech synthesis works, in my opinion, should include some form of subjective tests. Objective metrics (especially when some of them are basically imperfect pre-trained models) do not paint a clear picture. This is also necessary because the generated sounds themselves often do not correspond to the visual objects. The frequency content of the generated sound in Fig 3 and Fig 4 is clearly far from the ground truth. From the demo also, in many cases the generated sound does not correspond to the visual object. --- updated score after rebuttal --- Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please respond to the points outlined in the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss limitations of the method in terms of scalability. A bit more discussion on how generative AI for audio can have societal impact might be good. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your positive feedback. We respond to the weaknesses and questions below. ### 1. Formula Clarification. > *1.1: In Eq 1, aren't the two terms the same? Why have they been separated? The same is true for Eq 2.* In Eq 1, the two terms are distinguished by the normalizing term in the denominator. The first denominator term, $\sum_{k=1}^{N_S}\exp{(sim(\bar{E}_a^i, \bar{E}_v^k)/\tau)}$, fixes the i-th audio embedding $\bar{E}_a^i$ and sums the exponential similarity scores over the different video embeddings $\bar{E}_v^k$ in a batch. The second denominator term, $\sum_{k=1}^{N_S}\exp{(sim(\bar{E}_a^k, \bar{E}_v^j)/\tau)}$, does the opposite, fixing the j-th video embedding $\bar{E}_v^j$ and summing the exponential similarity scores over the different audio embeddings $\bar{E}_a^k$ in a batch. This loss function is almost the same as the one used in the CLIP model. A similar explanation applies to Eq 2. ### 2. About variable length. > *2.1: Analysis and discussion on duration as a factor is completely missing.* Thanks for pointing this out. While our experiments have thoroughly verified the effectiveness of the different modules, we haven't specifically addressed the duration factor, as this wasn't discussed or solved in previous works like SpecVQGAN and Im2Wav. Diff-Foley is currently designed to support audio generation up to 8 seconds in duration. For shorter videos, we extend them to 8 seconds using zero-padding, generating the full audio and trimming as needed. For longer videos, we segment them into 8-second chunks and concatenate the resulting audio, although this may lead to discontinuities. This maximum duration constraint is a common challenge, shared with previous works such as SpecVQGAN and Im2Wav, and presents an area for future exploration and improvement. > *2.2: How does the method work out in longer form audio/video cases?* We recognize the importance of handling longer form audio/video cases.
Currently, Diff-Foley, like SpecVQGAN and Im2Wav, has a maximum duration constraint. To handle longer content, we divide the video into 8-second segments, process them individually, and concatenate the resulting audio. This can lead to inconsistencies between segments. We've considered potential solutions such as: **(1). Transforming Diff-Foley into the form of auto-regressive generation**. **(2). Training Diff-Foley on longer audio-video pairs**. Still, we believe our current contributions are substantial. The validation and implementation of these potential solutions present exciting avenues for future work. ### 3. Discussion on Metrics. > *3.1: Diff-Foley does not do well in terms of FID and KL. Some discussion on why is desirable.* Thanks for pointing this out. Please refer to **Global Response 2**. > *3.2: In the ablation study on visual features, CAVP ends up doing better on FID but not IS and KL.* Our ablation study shows that CAVP features significantly enhance audio-visual relevance and synchronization, as evidenced by the substantial improvement in the Align Acc metric. Also, as discussed in Q3.1, the rich semantics of CLIP features (trained on billions of text-image pairs) indeed contribute to better KL metrics. We expect such a gap can be bridged by expanding CAVP datasets to a similar scale as CLIP's. ### 4. Subjective tests and clarification for demos. > *4.1: Audio synthesis should include subjective tests.* Thanks for your valuable advice. Please refer to **Global Response 1**. > *4.2: The frequency content of the generated sound in Figs 3 and 4 is far from the ground truth.* Thanks; we would like to emphasize that the goal of an audio generative model is not to perfectly reconstruct the audio based on video content, a task that is, in fact, impossible and also unnecessary. Our aim is to **generate audios that align well with human perception**, even if they differ from the ground-truth audio.
For instance, in Figure 3, the audio generated by Diff-Foley remains nearly silent until the golf ball is struck, a result in tune with human perception, regardless of variations in the ground-truth audio, such as wind noise, which result in spectrogram differences. Diff-Foley precisely generates the sound at the moment the golf ball is hit, showcasing its superiority in creating synchronized audio and significantly outperforming other methods. The audios in Figure 3 and Figure 4.c might appear different from the ground-truth spectrograms, yet they're sensible and well-aligned with human perception. Feel free to revisit these examples on our website. We believe you'll find they are reasonable and align well with human perception. > *4.3: Some generated sounds in the demos do not correspond to the visual object.* Our human evaluation results on content relevance metrics in **Global Response 1** and other evaluation metrics in Table 1 effectively support our point that Diff-Foley shows superiority in generating highly synchronized audio with strong audio-visual relevance compared with other methods. We acknowledge that some extremely challenging cases in the demos may yield unsatisfactory results across all methods. ### 5. Limitations. > *5.1: A bit more discussion on how generative AI for audio can have societal impact might be good.* Thanks for pointing this out. We will incorporate further discussion on the societal impact of generative AI for audio in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal and for addressing all the points. The addition of the subjective evaluation is great. It would be good to add it to the final paper and also clearly describe the subjective test procedure. While the authors provided a good rebuttal and addressed concerns from the reviewers, the paper may need a good overhaul to clarify several concerns. I hope the authors do it. I have made my overall rating more positive.
--- Reply to Comment 1.1.1: Title: Response to Reviewer k7sd Comment: Thanks for your response. We are glad to hear that our discussion has addressed your concerns. We will add the subjective evaluation results and the detailed testing procedure in the final version of our paper. We will refine our paper, taking into account the valuable suggestions from the reviewers and further clarifying some points to fully address several concerns.
Rebuttal 1: Rebuttal: ## Global Response 1: Human Evaluation Results As suggested by reviewers k7sd and tzzm, we've conducted a human evaluation by randomly selecting 60 videos from the VGGSound test set and having the different models generate corresponding audio samples. The output and ground-truth audios were anonymized and rated by 30 people unfamiliar with the project. Each sample received scores from at least 5 raters, ranging from 1 (bad) to 5 (excellent), for content relevance and synchronization. The scores were then scaled by a factor of 20. The human evaluation results, shown below, effectively demonstrate the superiority of Diff-Foley in generating audio with strong audio-visual relevance and synchronization. The human evaluation results and details will be added to the main paper and supplementary, respectively. | Model | Guidance | Content Relevance | Synchronization | | :------------------- | :------: | :---------------: | :-------------: | | SpecVQGAN (ResNet50) | - | 46.20 | 45.20 | | Im2wav | CFG | 62.13 | 57.73 | | Diff-Foley (Ours) | CFG | 71.73 | 71.00 | | Diff-Foley (Ours) | Double | **74.53** | **74.93** | | Groundtruth | - | 84.80 | 84.27 | ## Global Response 2: Discussion on inferior FID and KL metrics. Thanks for mentioning it. As acknowledged by reviewer MA6i, FID and KL measurements are known to be somewhat unreliable, but we included them to provide a comprehensive analysis. FID and KL may not consistently reflect human subjective perception. In our study, they seem less representative of audio quality than metrics such as IS and Align Acc, which have shown a stronger correlation with the perceived quality of audio in the human evaluation results and experiments. Still, we provide some discussion on why our FID and KL are relatively inferior to those of other methods. (1). KL: Diff-Foley ranks second to Im2Wav [2], possibly due to Im2Wav's use of the CLIP feature.
We found that incorporating CLIP features in Diff-Foley also improves the KL results (see Table 2). The richer video semantic features in CLIP may contribute to this improvement, and this gap is expected to be bridged by scaling the CAVP pretraining to a scale of billions of pairs, akin to CLIP. (2). FID: Diff-Foley outperforms Im2Wav [2] but falls short of SpecVQGAN [1]. This might be related to our use of the frozen Stable Diffusion latent encoder and decoder. FID seems very sensitive to reconstruction quality. As shown in supplementary Table 1, the ground-truth spectrogram reconstruction FID of $9.20$ represents Diff-Foley's FID lower bound. Improving the reconstruction quality of the frozen latent encoder and decoder is left for future work. (3). To enhance perceptual quality, we adjusted the CFG scale to 4.5, further sacrificing Diff-Foley's FID and KL performance. Figure 6 illustrates a U-shaped curve for FID and KL, indicating optimal results around a CFG scale of $2.5\sim3$. ### Reference: [1] Vladimir Iashin and Esa Rahtu. Taming visually guided sound generation. arXiv preprint arXiv:2110.08791, 2021. [2] Roy Sheffer and Yossi Adi. I hear your true colors: Image guided audio generation. arXiv preprint arXiv:2211.03089, 2022.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Accept (poster)
Summary: This paper introduces VerT, a method to distill *verifiable* models from black-box models. Concretely, these verifiable models are distilled by fitting a model $f_v$ that reproduces the predictions of the black-box model $f_b$ on a training set. Unlike the black-box model, the model $f_v$ is made verifiable by making its predictions self-consistent when a mask isolating the signal is applied to the input data. This mask is fitted by adding an extra objective to the model distillation objective, which guarantees that the mask is sparse and that the predictions of $f_v$ are similar for both the masked and the unmasked input. The method is evaluated on 3 image datasets, including 2 where ground-truth feature importance is known. This analysis demonstrates that VerT outperforms standard gradient-based feature importance methods. Strengths: **Good writing.** The paper was really easy to read and follow. The notations are on-point and intuitive. **Solid empirical validation.** The experiments provided in Section 5 are convincing and thoroughly verify important claims: the features identified as salient by VerT have the strongest impact when masked (which is not surprising given the optimization objective underlying VerT), these features have a reasonable overlap with the ground-truth salient features in a setting where the latter are known, and VerT shows improved robustness. The authors also show that VerT improves the robustness to adversarial attacks on the explanations with respect to gradient-based feature importance methods (which is unsurprising as the attacks are designed to fool gradient-based methods specifically). Insightful illustrative examples to understand the gains of VerT are provided in Figure 2. Weaknesses: **Restrictive assumption on replacement distribution.** In Lines 125-137, the authors discuss the importance of the choice of the replacement distribution $\mathcal{Q}$ in order to avoid creating OOD examples by masking.
It should be mentioned that some important masking strategies, such as [Gaussian blurs](https://arxiv.org/abs/1704.03296), are omitted from this discussion, as these are conditioned on the input image $\mathbf{x}$. I am wondering if it is even possible to define masking strategies with replacement distributions *independent* of the input $\mathbf{x}$ that do not create OOD examples. Intuitively, it is legitimate to expect the replacement input $\mathbf{q}$ to have (at least) some information about $\mathbf{x}$ to avoid replacing the features of $\mathbf{x}$ with OOD values. I would recommend the authors discuss this point thoroughly and (possibly) acknowledge this as a limitation of the work. **Unrealistic signal-distractor decomposition.** The signal-distractor decomposition defined in Definition 3 is key for Theorem 1. It appears to me that this decomposition is unrealistic for a simple reason: in the underlying DGP, the mask $\mathbf{m}$ and the signal $\mathbf{s}$ are independent. To make this point more clear, I would like to consider the example discussed in the paper. If we consider a cow detection task, the signal $\mathbf{s}$ would typically correspond to the cow part of the image. If that is the case, the mask $\mathbf{m}$ should depend on the position of the cow in the image (e.g., performing a translation on the signal $\mathbf{s}$ should result in a similar translation of the mask $\mathbf{m}$ in principle). Again, I would recommend the authors comment on the realism of their assumptions. ### Minor Weaknesses - Algorithm 1: what is $M(\mathbf{x})$? Is this the same as $\mathbf{m}(\mathbf{x})$? Also, should there be a minus sign in front of the gradients if the objectives are minimized? - In the appendices, Theorem 2 corresponds to Theorem 1 in the main paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: All my questions and recommendations are contained in the weakness section. I will not repeat them here to avoid redundancies.
I will consider improving my recommendation if the above weaknesses are addressed by the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations of the work are discussed in Section 6 of the main paper. I do not believe that negative societal impacts are a real concern for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review! *"Restrictive assumption on replacement distribution. In Lines 125-137, the authors discuss the importance of the choice of the replacement distribution Q in order to avoid creating OOD examples by masking. It should be mentioned that some important masking strategies, such as Gaussian blurs, are omitted from this discussion as these are conditioned on the input image x. I am wondering if it is even possible to define masking strategies with replacement distributions independent of the input x that do not create OOD examples. Intuitively, it is legitimate to expect the replacement input q to have (at least) some information about x to avoid replacing features of x by OOD values. I would recommend the author to discuss this point thoroughly and (possibly) acknowledge this as a limitation of the work."* We apologize for the confusion. Please take a look at our comment (G1) regarding this issue. While it is true that any such masking creates OOD examples, the critical point here is that we can create models that are robust to such replacement of the unimportant features. In other words, we would like to modify models (via VerT) so that they are not sensitive to such OOD examples, where pixels in the non-important features are replaced with samples from $Q$. By predefining our feature replacement distribution $Q$, we can modify our models to be robust to these perturbed OOD examples while still maintaining that the replacement distribution is independent of the input $x$. We will clarify this in the paper. We do not discuss Gaussian blur because in that case the "replacement distribution" $Q$ is dependent on $x$; however, our framework requires them to be independent.
For replacement distributions that are not independent of $x$, we are unable to disentangle the effect that the replacement distribution has on the model from the true signal in $x$; i.e., blurring does not completely remove the information in the image, but masking out the pixels does. --- *"Unrealistic signal-distractor decomposition. The signal-distractor decomposition defined in Definition 3 is key for Theorem 1. It appears to me that this decomposition is unrealistic for a simple reason: in the underlying DGP, the mask m and the signal s are independent. To make this point more clear, I would like to consider the example discussed in the paper. If we consider a cow detection task, the signal s would typically correspond to the cow part of the image. If that is the case, the mask m should depend on the position of the cow in the image (e.g., performing a translation on the signal s should result in a similar translation of the mask m in principle). Again, I would recommend the authors to comment on the realism of their assumptions."* Great observation! This is a very subtle point about the signal-distractor decomposition. Note that Definition 3 has redundant information: the mask being independently sampled from a distribution $m \sim \mathcal{M}$ implies $p(y | x) = p(y | s) = p(y | s \odot m)$ (because the distractor component $d \odot (1 - m)$ is necessarily independent of the label), which is listed as an additional condition. Hence we can eliminate the $m \sim \mathcal{M}$ statement and instead *define* the mask $m$ to be a binary mask such that condition (1), $p(y | x) = p(y | s \odot m)$, and condition (2), that the mask $m$ is minimal, hold. From your example, this ensures that the mask $m$ now depends on the signal $s$, and a translation of the cow in the image results in a corresponding translation of the mask to satisfy conditions (1) and (2).
This amendment to our definition does not impact Theorem 1 or the rest of our theory, and we hope this makes the resulting graphical model more realistic. We again thank the reviewer for noticing such a subtle issue. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough and clear rebuttal. I think that the explanation they provided about the signal-distractor decomposition is perfectly valid. I am still convinced of this work's quality in spite of the other reviews, and will therefore increase my score.
Summary: The present work introduces a theoretical framework for the verifiability of feature attributions, based on the sparsest (binary) feature attribution mask that only barely changes the model's output. The authors theoretically (and empirically) show that for signal-distractor decomposable datasets, off-the-shelf black-box models cannot be verified according to their definition of verifiability, the main reason being that off-the-shelf models cannot handle the OOD samples created by the masking intervention ("feature replacement"). To overcome this, they propose a finetuning scheme in which they make off-the-shelf black-box models robust to such feature interventions. Specifically, they alternately optimize for the sparsest mask (that only barely changes the model's output) and apply a distillation loss. While the former makes the model robust to the feature interventions, the latter ensures similarity to the original model. Experimentally, they show that VerT improves interpretability over gradient-based methods. VerT robustly identifies the signals (most salient features) of the inputs, while retaining the performance of the original model. Strengths: - The theoretical verifiability framework is simple yet sound. - The theoretical analysis (Sec 3.2) is interesting and supported both theoretically as well as empirically. - The finetuning scheme is simple yet effective. It does not change the model's output while simultaneously enhancing its interpretability by making it robust to input feature removal. - The paper is clearly written (except for some small issues; see questions & suggestions below) and easy to follow. - Code is provided.
Weaknesses: - The major weakness of the present work is that it does not compare to any removal-based feature attribution methods (e.g., [SHAP](https://proceedings.neurips.cc/paper_files/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html), [FastShap](https://openreview.net/forum?id=Zq2G_VTV53T), etc.), models also trained on feature attributions (e.g., [right for the right reasons](https://www.ijcai.org/proceedings/2017/371)), nor inherently interpretable models (e.g., [JAM](https://arxiv.org/abs/2103.01890) or [B-Cos](https://openaccess.thecvf.com/content/CVPR2022/html/Bohle_B-Cos_Networks_Alignment_Is_All_We_Need_for_Interpretability_CVPR_2022_paper.html)); only comparisons to gradient-based methods are provided. - The experiments always include manually defined spurious correlations (to contain clear signals and distractors). While this provides empirical evidence for their theoretical framework, it would be meaningful to compare the method on “untouched”, more challenging datasets besides CelebA, e.g., ImageNet. - The finetuning scheme may change the model’s behavior (but not its output, due to Eq. 2). While Tab. 2 shows that the prediction stays similar, it may change what type of signals in the inputs the classifier considers for its prediction (and its inner workings). Consequently, the finetuned model may not be faithful to which signal it uses (or how it processes it) for its prediction. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - In Fig. 2, do all methods use the same model (finetuned or not finetuned version)? - Is there any explanation for the large difference for the non-finetuned and finetuned models using VerT in Fig. 4? Is the adversarial training objective also used during finetuning? More generally, would finetuning without model distillation (Eq. 2) and data distillation in Eq. 1 lead to more robust (and interpretable) models that only focus on the important parts of the image?
- How did the authors set $\lambda_1, \lambda_2$? **Suggestions** - [B-Cos networks](https://openaccess.thecvf.com/content/CVPR2022/html/Bohle_B-Cos_Networks_Alignment_Is_All_We_Need_for_Interpretability_CVPR_2022_paper.html) or [BagNets](https://openreview.net/forum?id=SkfMWhAqYQ) could also be mentioned as inherently interpretable models. - The abbreviations in L68 could be introduced. - There are several recent advancements in concept-based models that close the performance gap, e.g., [post-hoc CBMs](https://openreview.net/forum?id=HAMeOIRD_g9), that could be mentioned in the related work. - Theorem 2 in the supplement should have the same number and formulation as Theorem 1 in the main text, to make it easier for readers to find. - In the experiments section there are several missing “Table/Figure {num}” references. - The hyperparameters are described neither in the main text nor in the supplement. - To which Figure/Table are L288-299 referring? - There are several typos throughout the present submission. - Results for CelebA in Table 3 in the supplement are missing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
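The alternating scheme this review summarizes (sparsest-mask optimization plus model/data distillation; the authors' rebuttal below states that $\lambda_1 = \lambda_2 = 1$ in all experiments) can be sketched as a single loss evaluation. The following is an illustrative NumPy sketch under assumed shapes; `vert_loss`, `f_v`, `f_b`, and `q_sample` are placeholder names, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vert_loss(f_v, f_b, x, mask_logits, q_sample, lam1=1.0, lam2=1.0):
    """Illustrative VerT-style objective (not the authors' code):
    a data-distillation term on the masked input plus a mask-sparsity
    penalty (roughly Eq. 1), and a model-distillation term keeping the
    finetuned model f_v close to the original black-box f_b (Eq. 2)."""
    mask = sigmoid(mask_logits)                    # soft mask in [0, 1]
    x_masked = mask * x + (1.0 - mask) * q_sample  # masked pixels drawn from Q
    loss_data = np.mean((f_v(x_masked) - f_b(x)) ** 2) + mask.mean()
    loss_model = np.mean((f_v(x) - f_b(x)) ** 2)
    return lam1 * loss_data + lam2 * loss_model
```

In the paper's scheme the mask and the model parameters would be updated in alternation; only one loss evaluation is shown here.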
Rebuttal 1: Rebuttal: Thank you for your constructive review! *“The major weakness of the present work is that it does not compare to any removal-based feature attribution methods (e.g., SHAP, FastShap, etc.), models also trained on feature attributions (e.g., right for the right reasons), nor inherently interpretable models (e.g., JAM or B-Cos); only comparisons to gradient-based methods are provided.”* Please refer to our comment (G2) for discussion on differences with classic feature attributions (like SHAP). Nonetheless, we present comparisons with SHAP in comment (G5). Our paper differs from “Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations” (Ross et al., AAAI 2017) in that their paper assumes (partial) knowledge of the ground-truth feature attributions, whereas our setup does not assume any such knowledge, making the two settings incompatible. We have already presented a comparison with JAM in Table 3 in the Supplementary material, where JAMs are “f_b + input (pixel) dropout”, which is precisely the method proposed by JAMs to train interpretable models. We shall change that column to read “JAMs” instead. We have added B-CosNetsv2 (Bohle et al., 2023, "B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers") as a baseline, with results shown in (G5). We find that they perform comparably for the IOU tests and qualitatively; however, VerT still significantly outperforms for pixel perturbation and has the advantage of being applicable post-hoc. Thus, VerT models are verifiable, meaning that any attribution map has a precise meaning in terms of concrete perturbations made to the inputs via Definition 1. On the other hand, B-CosNet models are not verifiable, meaning that perturbing non-important pixels leads to output changes, thus making the attribution misleading wrt model behaviour, which is precisely what pixel perturbation shows.
--- *“The experiments always include manually-defined spurious correlations (to contain clear signals and distractors). While this provides empirical evidence for their theoretical framework, it would be meaningful to compare the method on “untouched”, more challenging datasets besides Celeb, e.g., ImageNet.”* For natural datasets such as ImageNet, a popular evaluation for explanations is the pixel perturbation test, which we already perform for our existing datasets. Note that this evaluation does not require the use of a ground truth feature attribution. ImageNet experiments also require considerably more computational resources and engineering to train models and store masks for every single training data point, and hence we are unable to do these during the rebuttal. However, we shall prioritize this experiment for a future version of this draft. --- *“The finetuning scheme may change the model’s behavior (but not its output due to Eq. 2). While Tab. 2 shows that the prediction stays similar, it may change what type of signals in the inputs the classifier considers for its prediction (and its inner workings). Consequently, the finetuned model may not be faithful to which signal it uses (or how it processes it) for its prediction.”* While this is an intriguing hypothesis, note that we are unable to either confirm or deny it, as it is fundamentally unclear what features are used by black-box models! This motivates our framework, which emphasizes verifiability. Further, we also think any such change is irrelevant: since the Q-verifiable models are equivalent to the original, why not use the verifiable models instead, since they come with the additional benefit of interpretability? --- *“In Fig. 2, do all methods use the same model (finetuned or not finetuned version)?”* In Fig. 2, row 3 (VerT, ours) uses the VerT model $f_v$, finetuned via our method to explain $f_b$. All other rows use the model $f_b$, which is not finetuned via our objective.
Note that $f_v$ and $f_b$ are functionally equivalent. --- *“Is there any explanation for the large difference for the non-finetuned and finetuned models using VerT in Fig. 4? Is the adversarial training objective also used during finetuning? More generally, would finetuning without model distillation (Eq. 2) and data distillation in Eq. 1 lead to more robust (and interpretable) models that only focus on the important parts of the image?”* Please refer to global comment (G1) for a clarification about VerT and its connection to robustness. We do not use adversarial training for VerT. The objective of VerT training (model + data distillation) is precisely to make the models more mask-robust and hence interpretable. We are unsure whether this fully answers your question; please let us know if further clarification would help. --- *"How did the authors set lambda_1, lambda_2?"* For all experiments, we set both $\lambda_1$ and $\lambda_2$ to 1. We will be sure to add this to our implementation details. Thank you for your suggestions regarding the writing! We will be sure to address these in the final draft. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their detailed response! Specifically, I appreciate the efforts to extend the experimental design (inclusion of SHAP & B-Cos). Nevertheless, I’d like to re-raise some concerns that, in my opinion, were not sufficiently answered. > “B-CosNet models are not verifiable, meaning that perturbing non-important pixels leads to output change, thus making the attribution misleading wrt model behaviour, which is precisely what pixel perturbation shows.” I have severe doubts that this claim is true, since B-Cos networks are inherently interpretable and, consequently, automatically faithful & verifiable in a general sense, beyond the limited verifiability definition of Def. 1. Since they are inherently interpretable, the final statement (“thus making the attribution misleading wrt model behaviour”) is incorrect.
This also seems related to my original concern that VerT may change the underlying model, while for B-Cos this is not the case. Thus, VerT may be over-optimized for metrics like pixel perturbation (as suggested by reviewer 2ozo), which is acknowledged by the authors (“But your observation that our method performs the best because it is robust to masking is correct”). On the other hand, B-Cos networks are not optimized for such metrics and thereby may be susceptible to non-important pixels (due to limitations of the training procedure and data), which actually may be true, e.g., in the presence of spurious correlations (as mentioned by reviewer QRQ5) or similar. Thus, the attributions of such (obviously undesirable) non-important pixels faithfully contribute to the model’s output. > “however VerT still significantly outperforms for pixel perturbation and has the advantage of being applicable post-hoc.” The last statement (“[VerT] has the advantage of being applicable post-hoc”) feels like a contradiction, since we need to finetune VerT first and cannot just apply it post-hoc. This is also acknowledged by the authors in the general response (“our method (VerT + QFA) involves adapting the model to the attribution method”). > “ImageNet experiments also require considerably more computational resources and engineering to train models and store masks for every single training data point, and hence we are unable to do these during the rebuttal.” While I acknowledge the honesty of the authors, this also highlights a major practical limitation for models trained on large-scale data. > “Another fundamental contribution of this work lies in its conceptual framework. Classical feature attribution literature does not have a precise notion of “ground truth”, making it fundamentally unclear what quantities these are estimating in the first place.
In this paper, we introduce one notion of ground truth, the signal-distractor decomposition, that is a property of the dataset itself (and NOT the model!).” I kindly refer the authors to previous works (e.g., [1]) that also used a signal-distractor framework. > “In particular, Q-verifiability can be thought of as a very specific form of robustness, where models are ONLY robust wrt the distractor in the input and NOT the signal, thus distinguishing it from classical robustness” Did the authors try to evaluate robustness on standard pipelines? It would be interesting to see if their proposed finetuning scheme can go beyond mere input feature interpretability (my original comment had a typo…). --- [1] Kindermans, Pieter-Jan, et al. "Learning how to explain neural networks: PatternNet and PatternAttribution." ICLR 2018. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We hope to clarify some points below. --- *I have severe doubts that this claim is true since B-Cosnets are inherently interpretable...* Our claim is that, in our framework, if a part of the image is deemed not important, then our method ensures that perturbing those pixels does not change model outputs. This is critical to our notion of verifiability. Our assertion is that this doesn’t hold for B-CosNets, and the pixel perturbation experiments show this precisely. Overall, we believe that it is misleading for a pixel to be considered unimportant by an attribution method for a model while random perturbations of that pixel lead to large changes in model outputs. This is precisely the scenario we aim to avoid with our method. Note that B-CosNets have a different notion of “inherent interpretability” that does not involve perturbations, and thus this is consistent with our earlier comment.
We emphasize that while the attribution masks produced by BCosNets are nearly equivalent to ours in terms of the IOU with the ground truth, these attributions do not have a precise meaning in terms of model behavior on perturbations in their formalism. --- *VerT may change the underlying model, while for B-Cos this is not the case.* There is no “underlying black-box model” in B-CosNets, as these are trained from scratch. Furthermore, our method creates an “inherently interpretable” (verifiable) model that is trained to match the behavior of the original black-box model. --- *Thus, VerT may be over optimized for metrics like pixel-perturbation... On the other hand, B-Cosnets are not optimized and may be susceptible to non-important pixels, which actually may be true, e.g., in the presence of spurious correlations...* Please refer to our earlier response to your question about B-CosNets and inherent interpretability (the first response in this comment). Regarding spurious correlations, please also see our response to QRQ5 about the signal-distractor decomposition being in line with the spurious correlations. Also, the reviewer’s comment that “*thereby may be susceptible to non-important pixels (due to limitations of the training procedure and data), which actually may be true, e.g., in the presence of spurious correlations*” is incorrect and conflates pixel importance and spurious correlations. If a pixel is not important for a model and an input, then by definition it means that perturbing that pixel does not affect the model output. However, spurious correlations are a property of the dataset – if a spurious correlation is leveraged by a model (which we can verify experimentally), then pixels corresponding to these spurious correlations are “important” to the classifier! --- *The last statement (“[VerT] has the advantage of being applicable post-hoc”) feels like a contradiction...* We apologize for the confusion. 
By “post hoc” in this case, we mean that VerT finetunes a black-box model, resulting in another model that closely approximates its behavior. B-CosNets, however, train models from scratch, thus bearing no resemblance to any underlying black-box model. We shall avoid using this term for this (important) distinction to prevent further confusion. --- *I kindly refer the authors to previous works...* Thanks for pointing us to that work! The provided reference formulates a signal-distractor decomposition only for linear models, whereas our formulation is more general and holds for non-linear models as well. We will cite this in our work, and also amend our claims to state that we are the first to formulate a general signal-distractor framework that applies to non-linear datasets and models.
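The operational claim in this thread is that perturbing pixels deemed unimportant must not change the model's output. A small check of this property can be sketched as follows; the shapes, the sampler for Q, and the name `verifiability_gap` are illustrative assumptions, not the authors' evaluation code:

```python
import numpy as np

def verifiability_gap(model, x, mask, q_sampler, n=32, rng=None):
    """Estimate how much the output changes when features the
    attribution deems unimportant (mask == 0) are replaced by draws
    from Q. For a Q-verifiable model this gap should be (near) zero.
    Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)
    base = model(x)
    gaps = []
    for _ in range(n):
        q = q_sampler(rng, x.shape)
        x_pert = mask * x + (1 - mask) * q  # keep important features, resample the rest
        gaps.append(abs(model(x_pert) - base))
    return float(max(gaps))
```

A model that only depends on the retained (mask == 1) features yields a gap of exactly zero under any sampler, which is the behavior the rebuttal argues pixel perturbation measures.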
Summary: This paper proposes a method called Verifiability Tuning (VerT), which transforms black-box models into models that naturally yield faithful and verifiable feature attributions. The authors further conduct experiments on semi-synthetic and real-world datasets to verify the effectiveness of the proposed VerT method. Strengths: 1. The motivation of this paper is clear. 2. This paper focuses on the faithfulness of post hoc explanation methods, which is a very important topic in XAI. Weaknesses: 1. The equation in Definition 1 is problematic. For example, let three positive input variables $a=b=c>0$ feed a MAX operation, $output = \max \{a,b,c,0\}$. Then, masking any two input variables will produce different explanations. Specifically, we can mask $a$ and $b$, and keep $c$ unchanged; we can also mask $c$ and $b$, and keep $a$ unchanged. These two masking choices will result in different explanation results, but the actual importance of $a$, $b$, $c$ to the inference is the same. Hence, the equation in Definition 1 is problematic. 2. I disagree with your claim on the optimal Q. Specifically, the optimal Q can be found. Theoretically, the optimal Q should be the same as the distribution of the input image, i.e., setting q=x (in this case $\epsilon=0$ in Definition 1). Although this setting q=x conflicts with the motivation of masking the input, it is indeed the optimal solution for q in mathematics. 3. The ground truth of attribution methods constructed in the experiments (in Figure 2) is too simple. (1) If a classifier is powerful enough to just use five pixels for inference, instead of using the entire foreground of “4,” then is this classifier better or worse? The proposed evaluation does not consider this a good explanation if the entire foreground is annotated as the ground truth. (2) Defining “4” as the foreground also seems problematic. The edge features of “4” contain both pixels in the foreground and pixels in the background.
Hence, information encoded in the background can theoretically be used for inference. In this way, pixels outside of the “4” should also be considered part of the ground truth. However, there then exists another problem: how many pixels outside of “4” should be included in the ground truth? (3) The correlation between dark hair and glasses in the CelebA dataset is just assumed. Specifically, this correlation is not a necessary condition for hair color classification, because the DNN can exclusively use glasses for classification, exclusively use hair, or use both hair and glasses. The correlation is assumed and used for evaluation, but it is not an established truth about the DNN. This is a typical case where the input information for inference is redundant. When a small part of the foreground object is already enough for classification, it is difficult to annotate the ground-truth attention of a DNN. 4. Pixel perturbation tests are circular arguments, because the proposed method is learned by minimizing a loss that is designed to mask unimportant pixels for inference. 5. Accuracy alone is not enough to evaluate faithfulness. Please compare the advantages of the proposed method with the advantages of the Shapley value. We cannot assume that the direct change of the output caused by masking an input variable is the exact importance of that input variable. Please apply more sophisticated evaluation metrics for attributions proposed in recent years. 6. The authors only compare the proposed method with gradient-based explanation methods, which are quite weak competing methods. The authors are suggested to compare the proposed method with more sophisticated baselines, such as Shapley values, DeepLIFT, IG, etc. 7. Why can “QFA applied to a model from a Q-verifiable model class” be considered “a verifiable feature attribution?” The authors do not provide proofs to support this claim.
Moreover, what is the definition of “a verifiable feature attribution?” Please clarify how a feature attribution can be considered as “verifiable.” Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No, the authors do not discuss limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
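The reviewer's MAX counterexample in Weakness 1 can be checked by brute force: enumerating all binary masks (with a zero baseline for masked variables) shows that several equally sparse masks preserve the output when features are duplicated. This is a small illustrative script, not tied to the paper's code:

```python
import itertools

def model(x):
    # Reviewer's example: f(a, b, c) = max(a, b, c, 0)
    return max(*x, 0.0)

x = (1.0, 1.0, 1.0)  # a = b = c > 0
eps = 1e-9
orig = model(x)

# Keep a feature where mask == 1; replace it with the 0 baseline otherwise.
valid = []
for mask in itertools.product([0, 1], repeat=3):
    masked = tuple(v if keep else 0.0 for v, keep in zip(x, mask))
    if abs(model(masked) - orig) <= eps:
        valid.append(mask)

min_size = min(sum(m) for m in valid)
sparsest = [m for m in valid if sum(m) == min_size]
# Three distinct 1-sparse masks survive: keep only a, only b, or only c.
```

This matches the rebuttal's reading of the example: under duplicated features the QFA optimization problem has multiple optima rather than a single "minimal" one.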
Rebuttal 1: Rebuttal: Thank you for your constructive review! *“The equation in Definition 1 is problematic. For example, let three positive input variables $a=b=c>0$ have a MAX operation, $output = \max\{a,b,c,0\}$ ...actual importance of a, b, c to the inference is the same...”* Great example! This shows that with duplicate features, the QFA optimization problem does not have a unique solution. This is expected, as there is no single “minimal” component of the input that drives the output, and this multiplicity highlights a critical property of the model. We’d also like to point out that similar notions of feature importance have been used in the literature; see references [8,9,10] in the main paper. We further note that defining the “actual importance” of variables is an open problem, and there is no agreement in the field regarding what it constitutes (see reference [1] in our paper). Overall, we don’t see why multiplicity makes Definition 1 problematic. Can you please elaborate in case we missed something? --- *“I disagree with your claim on the optimal Q. Theoretically, the optimal Q should be the same as the distributions of the input image, i.e., setting q=x (in this case $\epsilon=0$ in Definition 1)...”* This is incorrect. Please note that the distribution Q is independent of the instance x: samples from the same “Q” must be used for all inputs “x”. Thus it cannot be the case that q = x, where q $\sim$ Q. --- *“The ground truth of attribution methods constructed in experiments (in Figure 2) is too simple.” “(1) If a classifier is powerful enough...” “(2) Defining “4” as the foreground also seems problematic...” “This is a typical case when the input information for inference is redundant. When a small part of foreground objects are already enough for classification, it is difficult to annotate the ground-truth attention of a DNN.”* Good observation!
For real-world datasets, a precise ground truth signal is fundamentally unknown, which is why we rely on semi-synthetic datasets where at least approximate information about the ground truth signal can be known. For hard-MNIST, while we also expected the classifier not to use all pixels of the digit “4” as the signal, VerT in Figure 2 seems to extract something very close, indicating that most of the pixels of “4” are indeed useful for classifying the digit. The purpose of our IOU experiments is to ensure that unimportant pixels (i.e. background pixels that are definitely not informative to the task) are not given more importance than pixels that may or may not be important (i.e. pixels that lie within the signal). All other baseline methods heavily attribute background pixels, whereas ours does not. We hope this addresses your question, and we are happy to clarify otherwise. --- *“(3) The correlation between dark hair and glass in CelebA dataset is just assumed..."* This is incorrect: we do not assume the correlation, rather we explicitly designed our dataset such that dark hair and glasses were correlated. All images of people with dark hair were chosen to have glasses, and all images with blonde or white hair do not have glasses. We further test that the model relies on this correlation by testing performance on dark-haired people without glasses and blonde- and white-haired people with glasses, noting that accuracy drops from 97% to 38% (near chance). We will clarify this in the paper. --- *“Pixel perturbation tests are circular arguments...”* We don’t completely agree: the Q we use to train the model is different from that used to perform the pixel perturbation method. But your observation that our method performs the best because it is robust to masking is correct, and it is precisely the point of our approach! Please see global comments (G1-G4).
We think that this is an advantage of our method, as pixel perturbation is a commonly used test and aligns well with our intuitions for what feature attributions must satisfy. We present this evaluation as a sanity check with respect to known metrics for feature attribution. --- *“Only the accuracy is not enough to evaluate the faithfulness...”* Our core argument (G1) is that the feature perturbation strategy is critical. Typical implementations of Shapley values provide attributions of black-box models using an arbitrary feature perturbation strategy, whereas we propose to align the feature perturbation method with the model’s robustness to that perturbation method. We present a comparison with SHAP in (G5). We also present an extensive set of evaluations: pixel perturbation (Figure 3), IOU with an approximate ground truth (Table 1), robustness to explanation manipulation (Figure 4) and sensitivity to hyper-parameters (Figure 5). Is there a specific evaluation metric you had in mind that is conceptually different from these? If so, we would be happy to consider it. --- *“Authors just compare the proposed method with gradient-based explanation method...”* Please refer to (G2) for an explanation of our method and how it differs from classical feature attribution methods such as Shapley values, DeepLIFT, etc. Essentially, our method is orthogonal to these works. While most feature attribution methods attempt to perform feature attribution of black-box models, VerT aligns the model to the explanation method (via Q) used. We nonetheless present a comparison to SHAP in (G5) as requested. --- *“Why “QFA applied to a model from a Q-verifiable model class” can be considered as “a verifiable feature attribution?” Authors do not provide proofs to support this claim. Moreover, what is the definition of “a verifiable feature attribution?” Please clarify how a feature attribution can be considered as “verifiable.””* Please refer to (G1-G4) for clarification. 
We apologize for any confusion! That statement defines “verifiable feature attribution” in our paper, i.e., QFA applied to a model from a Q-verifiable model class. If you think this phrase is confusing, we are happy to consider renaming it. --- Rebuttal Comment 1.1: Title: Responses to authors Comment: I would like to thank the authors for the detailed rebuttal. Some of my concerns are addressed (e.g., the lack of comparison between the proposed method and advanced explanation methods, such as SHAP). However, the overall quality of the paper still does not meet the standard for publication, so I would keep my original rating. Nevertheless, it is good to hear that the authors are going to further polish the paper. I hope it can be presented in a clearer way and get accepted to a future conference. --- Reply to Comment 1.1.1: Comment: Thank you for your response! Can you please point to any specific concerns that remain unaddressed?
Summary: The paper proposes a way to verifiably get feature attributions of the ground truth signal when the input can be decomposed into independent signal and distractor features, assuming there is a counterfactual generator Q that can provide sparse attributions. The process consists of first deriving $(\epsilon, Q)$ feature attributions for each point to get optimal masks, and second using distillation to train a new model that matches the outputs of the QFA with the original prediction while staying close to the original model. The first and second steps are alternated to minimize the overall loss. A rounding scheme is used to stabilize training with hard masks, and superpixels are used. The approach is tested on MNIST, Chest X-ray, and CelebA. Update: After the discussion, I have updated my score accordingly. Since the authors believe scenarios that satisfy the signal-distractor decomposition are likely to exist, it would be great to incorporate such a concrete example somewhere in the text. Strengths: The paper proposes an approach that, in principle, could potentially extract the ground truth signal (if all assumptions are met). The approach, even though it trains a new model, replicates the original fairly well. The experiments are coupled with two ablations on adversarial robustness and sensitivity to hyperparameters. Weaknesses: The signal decomposition assumes that the signal and distractors are generated independently, and that therefore the correct feature attribution is the sparsest one. This is not always going to be the case in real settings. The method requires a strong counterfactual generator Q to ensure correct recovery of sparse attributions. However, in practice the authors just use a Dirac delta at the dataset mean, which is not really a counterfactual generator. The evaluation is specific to vision, and the two non-MNIST datasets have spurious correlations injected into the dataset.
It'd be nice to have some kind of breadth here beyond vision, or to have some results on more natural data that did not need to be artificially correlated. The claim that one can change any black-box model into a verifiably interpretable model might be a bit strong; it seems to also depend on a number of assumptions (ground truth decomposability) and on having a powerful generator in order for the "verified" part to hold true. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Theorem 1 states that QFA applied to optimal predictors from the function class recovers the signal distribution. The corollary then states that QFA does not recover the signal when applied outside of the function class. Neither of these addresses non-optimal predictors from the function class; what happens there? The last sentence of the section seems to state that feature attributions from the function class are in fact able to recover the signal, but this does not seem to strictly follow from the theorem and corollary. What superpixels are being used? How do we know that we've reached the optimal f^*? It seems important to know this due to Theorem 1 requiring an optimal predictor. The practical details state at the end that replacing masked pixels with a counterfactual distribution Q somehow ensures that Q is an optimal counterfactual distribution. It is then asserted that this means the f comes from the Q-verifiable model class. I am not following either of these logical jumps; can you explain? There is nothing about Q in the pseudocode either. This seems to be critical, since we need to find an optimal f from Q for the signal to be recovered. Is there an ablation you can use to show how much of an effect Q has experimentally? It is not entirely clear what the simplified dataset is, which appears to be used without introduction in Section 5. Why is the evaluation focused on gradient-based feature attributions?
Why not other classic local surrogates like LIME/SHAP which are also widely used? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations do in fact mention that a decomposition must exist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive review! Overall, we feel you may have misunderstood some parts of the paper, and we’d like to clarify these below. *“The signal decomposition assumes that the signal and distractors are generated independently, and that therefore the correct feature attribution is the sparsest one. This is not going to always be the case in real settings.”* Please refer to global comments (G2, G4). Can you please provide examples of settings where you believe these may not hold? --- *“The method requires a strong counterfactual generator Q to ensure correct recovery of sparse attributions. However, in practice the authors just use a dirac delta of the dataset mean which is not really a counterfactual generator.” “The claim that one can change any black box model into a verifiably interpretable model might be a bit strong”* Please refer to (G1, G3) for clarification. Theorem 1 states that **any** Q-verifiable model used with a corresponding QFA can recover the optimal signal-distractor distribution. Note that the Q here is unrelated to the distractor distribution ($\mathcal{X}_{distractor}$). The important thing here is that QFA is only used on Q-verifiable models, with the same “Q” used for both; this is exactly what leads to verifiable explanations. Note also that for our method we use a Normal distribution for Q when training VerT models. We use the Dirac delta at the dataset mean **only** for pixel perturbation evaluations, to remain consistent with prior works and the evaluation of baselines. We have also provided an ablation on the choice of Q in (G5). --- *“Theorem 1 states that QFA applied to optimal predictors from the function class recovers the signal distribution. The corollary then states that QFA does not recover the signal when applied outside of the function class. Neither of these address non-optimal predictors from the function class---what happens there?
The last sentence of the section seems to state that feature attributions from the function class are in fact able to recover the signal, but this does not seem to strictly follow from the theorem and corollary. ” “How do we know that we've reached the optimal f^\*? It seems important to know this due to Theorem 1 requiring an optimal predictor.”* Please refer to (G3, G4). The main point is that QFA must be used with a Q-verifiable model class (with the same Q used for both) for recovery. When the predictor is non-optimal, intuitively the model may not learn to “look” at the right portions of the input, and thus may not recover the signal, which is a dataset-dependent quantity. Thanks for bringing the final line in the section to our notice; we will change it to “Finally, we find that feature attributions derived from **optimal models** in model class Fv (Q) are able to recover the signal-distractor decomposition of datasets.” --- *“What superpixels are being used?”* We do not use superpixels in this work. We instead use a low-resolution binary mask (say, 8x8) which we upscale (to, say, 224x224) using bilinear interpolation. Please refer to the paragraph on “Mask Scale” in line 222 of the paper. We use the word “superpixel” in that section loosely to refer to a group of pixels obtained from this procedure. We shall remove the usage of that word for clarity. --- *“The practical details states at the end that replacing masked pixels with a counterfactual distribution Q somehow ensures that Q is an optimal counterfactual distribution. It is then asserted that this means the f comes from the Q-verifiable model class. I am not following either of these logical jumps, can you explain? There is nothing about Q in the pseudocode either. This seems to be critical, since we need to find an optimal f from Q for the signal to be recovered. Is there an ablation you can use to show how much of an effect Q has experimentally? 
”* We’re sorry for the confusion, please refer to (G1-G4). Our algorithm “verifiability tuning” converts a black-box model (which has an unknown Q) into a Q-verifiable model (for some Q), which results in the sparsest possible masks when used with QFA (with the same choice of Q). The results of our ablation are given in (G5), where we see that for different parameterizations of Q, the results are still relatively consistent. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I understand the claim about working for any Q and appreciate the additional results on SHAP. On decomposition: In the contrary, I would think that the signal-distractor decomposition is a very strong assumption that is rarely fulfilled in practice. Spurious correlations are a well-known problem in data---e.g. cats/dogs and indoor/outdoor backgrounds, water environments in the waterbirds dataset, age and gender in CelebA, etc. Even benign correlations can fail to satisfy the assumption: any photograph is affected by lighting, lens quality, corruptions, etc., which affect and correlate all pixels, and cannot be decomposed into two independent feature sets. As far as I'm aware, it is much rarer to have data that can always be split into two disjoint sets of features that are independently generated. After all, the examples in the submitted paper are either (a) synthetically created to satisfy the assumption or (b) do not satisfy the assumption (CelebA). On optimal predictors: the response did not answer how we know we are using an optimal predictor, nor did it explain the logical jumps/pseudocode. I have looked at G1-G4 as the authors referred to but these did not answer the technical question. The former (how we know if the predictor is optimal) still appears to be a critical requirement. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We hope to answer your questions regarding decomposition and optimal predictors. 
--- *On decomposition: In the contrary, I would think that the signal-distractor decomposition is a very strong assumption...* We stress that the notion of signal-distractor decomposition is perfectly in line with the existence of spurious correlations in the data! In fact, this was our motivation to begin with – given a dataset, we’d like to find the signal-distractor decomposition, which is a property of the dataset that can be used to detect the presence of spurious correlations in the data. In other words, if the dataset has spurious correlations, then this is well-reflected in the signal portion of the signal-distractor decomposition. As an example, consider the waterbirds dataset, which has a spurious correlation between the label and the background. In this case, the signal also consists of the background (as this contains information about the label) in addition to the bird. Thus, by inspecting the recovered signal distribution from a model, we are able to ascertain whether the dataset encodes spurious correlations. We agree with the reviewer that feature attribution methods (like ours) are unable to detect all sources of spurious correlations, and can in fact only detect those that can be localized in terms of individual pixels, and perhaps not those like “lighting, lens quality, corruptions, etc.”, which may be non-localized, as you mention. This has also been explored in related work (Adebayo et al., “Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation”, ICLR 2022), which establishes that this failure case applies to ALL feature attribution methods. Our framework helps formalize these notions and precisely identify the technical limitations of using feature attribution. 
In other words, all datasets have a signal-distractor decomposition (i.e., some parts of the image may be completely uninformative for predicting the label), but this signal-distractor decomposition may not always help with diagnosing spurious correlations, especially when the spurious signal is non-localized. As a correction to the reviewer’s comment that “..do not satisfy the assumption (CelebA)” (with the assumption being a signal-distractor decomposition), we note that even with CelebA, we selected a subset that contained a spurious correlation, such that “dark hair” and “glasses” were spuriously correlated, and we used this information to evaluate qualitatively whether feature attribution methods (including ours) are able to recover this spurious correlation. Note that this subset of CelebA does have a signal-distractor decomposition (like all datasets do); it is just that in this case we do not know it a priori. --- *On optimal predictors: the response did not answer how we know we are using an optimal predictor, nor did it explain the logical jumps/pseudocode. I have looked at G1-G4 as the authors referred to but these did not answer the technical question. The former (how we know if the predictor is optimal) still appears to be a critical requirement.* We apologize for the confusion. Regarding the jumps, are you referring to lines 219-221 in the paper? We mean the following: our procedure (VerT) aims to train models such that their QFA masks are as sparse as possible for a given choice of Q. This aligns with the definition of Q-verifiable models, as these are models with the sparsest mask for that choice of Q. In practice, we use a Q that is a 3-dimensional Gaussian distribution, with mean equal to the dataset mean (which is a 3d RGB value for images), and a standard deviation of approximately 0.2 (which is the standard deviation of the dataset images). And this choice of Q is used for QFA in the code. 
However, this choice of Q is not critical, as we show in our experiments in the 1-page PDF. Please let us know if it is still unclear; we are happy to clarify further. Regarding optimal predictors, it is true that in practice we do not always know whether our predictors are optimal – Bayes-optimal classifiers are a theoretical construct that exists only when the complete data distribution is known in advance, i.e., $p(y \mid x) = p(x \mid y) p(y) / p(x)$. However, another way to check for optimality is to consider label noise. For instance, if the underlying dataset is clean and has no label noise, then a classifier that obtains 100% test accuracy is Bayes optimal. Real datasets always have small amounts of label noise, which upper bound their test accuracy (say ~99% for MNIST, ~96% for CIFAR, etc.). In practice, if we have close-to-optimal models (i.e., models that perform well in terms of test accuracy), then in accordance with our theory, these must also approximately recover the signal-distractor decomposition, which we verify with our experiments.
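To make the masking operation above concrete, here is a minimal sketch (our own illustration, not the paper's code) of building the counterfactual input used during QFA: masked-out pixels are replaced by draws from a per-channel Gaussian Q whose mean is the dataset mean and whose standard deviation is roughly 0.2, as described in the reply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" batch: 2 RGB images of size 8x8, values in [0, 1].
x = rng.random((2, 3, 8, 8))

# Q: per-channel Gaussian with mean equal to the dataset mean and std 0.2,
# matching the description in the rebuttal (one 3-d RGB value per pixel).
dataset_mean = x.mean(axis=(0, 2, 3))                                 # shape (3,)
q = dataset_mean[None, :, None, None] + 0.2 * rng.standard_normal(x.shape)

# Binary mask m: 1 = keep the original pixel (signal), 0 = replace with Q.
m = (rng.random((2, 1, 8, 8)) > 0.5).astype(x.dtype)

# Counterfactual input fed to the model during QFA / verifiability tuning.
x_hat = m * x + (1.0 - m) * q

assert x_hat.shape == x.shape
# Kept pixels are untouched; only masked-out pixels come from Q.
assert np.allclose((x_hat - x)[np.broadcast_to(m, x.shape) == 1], 0.0)
```

A verifiability-tuned model should produce (nearly) the same output for `x_hat` as for `x` whenever the mask covers the signal.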
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their constructive feedback. We are glad that reviewers found our theory “simple yet sound” (reviewer qGoL) and that our paper had “solid empirical evaluation” (reviewer dxrg). However, we feel that there were also some misunderstandings with respect to our method, and we aim to clarify these here. **(G1) The core idea of the paper:** We’d like to perform feature attribution of a black-box model f using a **masking-based procedure** (i.e., QFA in definition 1, with some input-independent choice of Q). However, this is problematic, as the **underlying black-box model may not be robust to perturbations** introduced by our procedure, leading to large changes in model outputs upon masking. Ideally, one needs to find a distribution Q that the model is maximally robust to (if that even exists) and use that choice of Q for QFA to perform attribution. Another solution, which we propose, is to assume a choice of Q, and **fine-tune the black-box model to be robust to masking made with this Q** (also called the Q-verifiable model in definition 2) in the non-important features. In particular, Q-verifiability can be thought of as a very specific form of robustness, where models are ONLY robust wrt the distractor in the input and NOT the signal, thus distinguishing it from classical robustness. Further, verifiability requires robustness wrt masking, as opposed to additive noise, as is usual in the robustness literature. However, note that we do not know the signal and distractor a priori, which is why we use alternating minimization in VerT to alternately estimate the signal-distractor masks and the resulting Q-verifiable models. We shall emphasize this point of view of verifiability as “robustness to masking on the distractors” in the main paper. 
**(G2) How is this paper different from prior works on feature attribution?** Classical feature attribution methods (LIME, SHAP, SmoothGrad, etc.) work by computing attributions directly for black-box models. As we mention above, these may lead to erroneous results when the model is non-robust to the perturbations introduced by these methods. On the other hand, our method (VerT + QFA) involves adapting the model to the attribution method, making this fundamentally distinct from classic feature attribution methods. Thus our framework contains aspects of both inherently interpretable models and post-hoc explanations. Another fundamental contribution of this work lies in its conceptual framework. The classical feature attribution literature does not have a precise notion of “ground truth”, making it fundamentally unclear what quantities these methods are estimating in the first place. In this paper, **we introduce one notion of ground truth – the signal-distractor decomposition** – that is a property of the dataset itself (and NOT the model!). Using this notion, we are able to verify that our procedures work as intended, at least in settings where the ground truth is known and the model correctly identifies the entire signal component (i.e., for well-performing models). **(G3) Why is theorem 1 interesting?** Theorem 1 states that QFA (with some choice of Q) used with an appropriate Q-verifiable model (using the same Q distribution) is able to recover the underlying signal-distractor decomposition for optimal models on a dataset. This is interesting because it works with any choice of Q! This should intuitively make sense: if a model is robust to masking with some distribution Q, then we can use this fact to do feature attribution, for any chosen Q. **(G4) Does VerT only work in limited settings?** Not at all! Theorem 1 shows that when we are able to define the ground truth, i.e., the signal-distractor decomposition, this is recovered by VerT. 
This does not imply that VerT does not apply in other settings. In settings where the signal-distractor decomposition is unknown, we simply cannot evaluate the correctness of the QFA mask w.r.t. the ground truth, but Theorem 1 still guarantees correctness for optimal models. Non-optimal models may not identify the correct signal component (hence their non-optimality), making it such that the signal cannot be theoretically extracted from such models. However, in practice, if a model is close-to-optimal, then it is reasonable to assume that it identifies close-to-correct signal components, and this is exactly what our experiments demonstrate. VerT models recover masks that are close to the ground truth, and our method (QFA) is able to identify this. We aim to add discussions in the paper clarifying points (G1-G4). **(G5) Additional Experiments:** Please refer to the attached pdf. As requested by reviewers QRQ5, 2ozo, and qGoL, we add SHAP as a baseline. We note that in general SHAP does not often perform as well as most gradient-based methods for image data. We find that it performs similarly to random attributions. We also add B-CosNets as an inherently-interpretable model baseline, as suggested by reviewer qGoL. We find that B-CosNets perform comparably to our method on the IOU test on Hard MNIST. We were unable to train them to convergence on the Chest X-ray dataset. We also find that visualizations created by B-CosNets align with our expectations and appear visually interpretable. However, our method still significantly outperforms B-CosNets on the pixel perturbation test, showing that B-CosNets are not robust to perturbations of the distractor (i.e., are not verifiable). We also note that our method can be applied to a trained black-box model of any architecture and training procedure, whereas B-CosNets, like all inherently interpretable models, cannot. We finally perform the ablation requested by reviewer QRQ5 to test the effect that the choice of Q has empirically. 
We consider various parameterizations of the normal distribution used for Q, and find that results are relatively consistent across the different choices of Q. Pdf: /pdf/665238cfe4430b729cfc27ec78111826a64f8415.pdf
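The logic of (G1)-(G3) can be checked numerically on a toy linear model (entirely our own construction, not the paper's code): a model whose weights are zero outside the signal coordinates is, by construction, robust to replacing the remaining coordinates with draws from Q, so the sparsest mask that preserves its output is exactly the support of the weights.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 6
w = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])   # model "looks" only at coords 0 and 1

def f(x):
    return x @ w                                  # toy Q-verifiable model

x = rng.standard_normal(d)
Q = rng.standard_normal((2000, d))                # counterfactual distribution Q = N(0, I)

def masked_deviation(mask):
    """Mean |f(x_hat) - f(x)| when unmasked coordinates are resampled from Q."""
    x_hat = mask * x + (1 - mask) * Q             # broadcasts over the Q samples
    return np.abs(f(x_hat) - f(x)).mean()

signal_mask = np.array([1, 1, 0, 0, 0, 0])        # sparsest mask: support of w
too_sparse  = np.array([1, 0, 0, 0, 0, 0])        # drops part of the signal

assert masked_deviation(signal_mask) < 1e-12      # robust: output exactly unchanged
assert masked_deviation(too_sparse) > 0.1         # masking signal changes the output
```

Any Q would work here, mirroring the claim in (G3) that recovery holds for any choice of Q, as long as the same Q is used for both verifiability and attribution.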
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks
Accept (poster)
Summary: The authors train 1) an architecture encoder using a GNN with a variational graph autoencoder loss to learn a good representation for model architectures, and 2) a model approximator taking the GNN representation and a sample x to predict the output of the model given x. The goal is to learn a probability distribution over the samples of the dataset to obtain a subset of the data with minimum loss, with an entropic regularizer encouraging maximum diversity. Given a chosen architecture, Transductive-SUBSELNET directly optimizes all probabilities of the samples. Meanwhile, Inductive-SUBSELNET learns a neural network taking the dataset (and both the GNN representation and the model approximator output) to generate a probability for each sample. They also do a hybrid approach for better efficiency. Although I understand the algorithm, it is quite complicated, and I cannot claim to understand all the specifics of every moving part (for example, how the GNN forms a representation for the architecture and learns to be insensitive to the model parameters, and why we need BFS; mainly I am not very familiar with GNNs). Strengths: The results are very in-depth, showing a lot of interesting ways to apply this general method and some useful applications, like making NAS and hyperparameter tuning nearly as good but much faster by reducing the dataset size. They present comparisons to other works, showing significantly improved performance in terms of RAR and memory. They train on many datasets of various image sizes, but only in the image domain. I think that this is great work. Weaknesses: Limited to CNNs for now (but that's okay). The biggest thing that could be improved is the notation and formatting of the paper to make the approach clearer; a lot of the details, like those of the GNNs, could be left to the appendix. I feel like the paper overcomplicates things. 
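To paraphrase the selection objective described in the summary in code, the "transductive" variant amounts to something like the following caricature (made-up losses and our own parameterization, not the paper's implementation): minimize the expected loss of the selected samples while an entropy term keeps the selection distribution spread out.

```python
import numpy as np

def optimize_pi(losses, lam=0.5, lr=0.5, steps=3000):
    """Minimize  sum_i pi_i * loss_i  -  lam * H(pi)  with pi = softmax(theta).
    The first term prefers low-loss samples; the entropy H(pi) spreads the
    probability mass to keep the selection distribution diverse."""
    losses = np.asarray(losses, dtype=float)
    theta = np.zeros_like(losses)
    for _ in range(steps):
        pi = np.exp(theta - theta.max())
        pi /= pi.sum()
        g = losses + lam * (np.log(pi) + 1.0)   # d(objective)/d(pi)
        theta -= lr * pi * (g - pi @ g)         # chain rule through the softmax
    return pi

pi = optimize_pi([0.1, 0.2, 0.9, 1.0], lam=0.5)
# The optimum is pi_i proportional to exp(-loss_i / lam): low-loss samples get
# more mass, but the entropy term keeps every probability bounded away from 0.
assert abs(pi.sum() - 1.0) < 1e-8
assert pi[0] > pi[1] > pi[2] > pi[3]
assert pi.min() > 0.05
```

The actual eq. 1 additionally uses the model approximator's predictions to define the losses, and the "inductive" variant amortizes this optimization with a network that outputs the probabilities directly.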
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have you considered using pre-trained graph models for the architecture representation? It seems like there should exist some, and they may be trained with more than 250 architectures. If there are, it would be a worthwhile comparison and remove one big step from the algorithm. I know that there are GNNs for weights such as GHN-3, but I am not familiar with those for architecture. It would be great to see b in Figure 3, maybe in the top x-axis, because right now, it's difficult to infer which size of the data is used. You mention taking the images and then taking the 2048 Resnet-50 embedding as image representation; it seems to have come out of nowhere. Could you add more context to this? Because I assume that the actual images are used for real architectures and possibly in the Inductive-SUBSELNET neural network. The notation can sometimes be weird, especially with the big dot in Pr_pi(dot); shouldn't x_i replace the dot? I think that instead of separating algorithms 1 and 2 as training vs inference, it would be easier to understand by separating them into two algorithms with one Transductive and one Inductive, and each algo could be separated into top and bottom where the top is training and the bottom is inference. TRAINPIPELINE could be separate. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the authors properly address limitations and broader impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review of the paper! > *Have you considered using pre-trained graph models for the architecture representation? It seems like there should exist some, and they may be trained with more than 250 architectures. If there are, it would be a worthwhile comparison and remove one big step from the algorithm. I know that there are GNNs for weights such as GHN-3, but I am not familiar with those for architecture.* GNNs embed the model architecture into representations independent of the underlying dataset and the model parameters (Appendix F page 9, L789). Thus we train the GNN once and for all, and the same GNN embeddings are used for all datasets. Indeed, this training was done at the very initial stage of the experiments and the resulting embeddings are being used everywhere. We did not find any pre-trained graph models for architecture representation. There are pre-trained graph embeddings for social-network-type graphs, e.g., the Amazon recommendation network and the open academic graph [1], and for molecular graphs [2]. However, we are not aware of similar pre-trained weights for architectures. If our work gets accepted, we will release the pre-trained graph embeddings, so that they can be directly used for subsequent work without further training from scratch. [1] Hu et al. GPT-GNN: Generative Pre-Training of Graph Neural Networks, KDD 2020. [2] Xia et al. Mole-BERT: Rethinking Pre-training Graph Neural Networks for Molecules, ICLR 2023. > *It would be great to see b in Figure 3, maybe in the top x-axis, because right now, it's difficult to infer which size of the data is used.* Following the suggestion, we have added the budget to the computational efficiency plots in the top x-axis in Figure 1, and as a separate plot in Figure 2 in the global-pdf. > *You mention taking the images and then taking the 2048 Resnet-50 embedding as image representation; it seems to have come out of nowhere. Could you add more context to this? 
Because I assume that the actual images are used for real architectures and possibly in the Inductive-SUBSELNET neural network.* We sincerely apologize for the misunderstanding which arose in Section 5.1 about the usage of the ResNet-based embedding for images. To calculate the similarity matrix for facility location, we utilize the penultimate layer of the ResNet-based feature extractor as the image representation, which is common in the literature, especially for calculating similarity between images. Hence, the feature extractor information on L276 is used **only during the subset selection stage** for the facility location formulation in L281. The input to the model architectures for all the methods and the $\texttt{SubSelNet}$ neural network are the actual images. > *The notation can sometimes be weird, especially with the big dot in Pr _pi(dot); shouldn't x _i replace the dot?* We use $\Pr _\pi(\bullet)$ as the distribution itself, which is over the subset $S$. Here, $S$ can replace the dot. > *I think that instead of separating algorithms 1 and 2 as training vs inference, it would be easier to understand by separating them into two algorithms with one Transductive and one Inductive, and each algo could be separated into top and bottom where the top is training and the bottom is inference. TRAINPIPELINE could be separate.* We thank the reviewer for this suggestion - we will update the presentation of the algorithm and split it into separate Transductive and Inductive versions. --- Rebuttal Comment 1.1: Comment: I'm very satisfied with the author response, I am not changing my score of 9/10.
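The facility-location step mentioned above (with similarities computed from the penultimate-layer embeddings) is typically maximized greedily; the sketch below illustrates the standard procedure, with random unit vectors standing in for the actual ResNet-50 embeddings (function and variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for penultimate-layer embeddings (n points, d dims), L2-normalized.
emb = rng.standard_normal((50, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
S = emb @ emb.T                                   # cosine similarity matrix

def facility_location_greedy(sim, k):
    """Greedily pick k points maximizing F(S) = sum_i max_{j in S} sim(i, j)."""
    n = sim.shape[0]
    best = np.full(n, -1.0)                       # cosine similarity lower bound
    chosen = []
    for _ in range(k):
        # Coverage of all n points if candidate j were added to the subset.
        coverage = np.maximum(best[:, None], sim).sum(axis=0)
        coverage[chosen] = -np.inf                # never re-pick a chosen point
        j = int(np.argmax(coverage))
        chosen.append(j)
        best = np.maximum(best, sim[:, j])        # update per-point coverage
    return chosen

subset = facility_location_greedy(S, k=5)
assert len(set(subset)) == 5
```

For monotone submodular objectives like facility location, this greedy scheme carries the usual (1 - 1/e) approximation guarantee.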
Summary: Current subset selection methods are architecture specific and require solving an optimization problem for each architecture individually. The subset selected by solving an optimization for one architecture does not generalize to another model. This paper addresses this problem and introduces an end-to-end training method using multiple architectures (by embedding them via a GNN) to select a generalizable subset. The paper addresses an important problem, is written thoroughly, and contains sufficient empirical experiments to verify the idea. Strengths: - Addresses a relevant practical problem in selecting subsets which can generalize across different architectures. - The paper is generally very well written and each component of the method pipeline is very well explained. - Good speedup with the proposed design (which is very relevant in practical scenarios) and also lower RAR compared to other baselines. Weaknesses: - While the method is a generalization of the subset selection algorithm, it is inherently complex, which can lead to less adoption by the community. - The paper will be stronger if more complex datasets are added. Currently only smaller datasets are used, which can obfuscate the real-world capabilities of this method. While Tiny-ImageNet is used, a few experiments at ImageNet scale will help verify the efficacy of the method more strongly. - I would expect a little bit more analysis of the characteristics of the data subsets selected using their method vs. other baselines. Are there differences in the properties of the data selected by SubSelNet? What is the overlap factor with other methods? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Overall, I believe that the paper is well-written with good empirical results. Given that the method (although not simple for adoption) offers a significant speedup for new architectures, I would vote for a weak acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed reading of the paper and their positive feedback. > *The paper will be stronger if more complex datasets are added. Currently only smaller datasets are used, which can obfuscate the real-world capabilities of this method. While Tiny-ImageNet is used, a few experiments at ImageNet scale will help verify the efficacy of the method more strongly.* We want to note that Tiny-ImageNet (a 100,000-point subset of ImageNet with 200 classes) and CalTech-256 (a 30,607-point dataset with 257 classes) are challenging real-world datasets, especially for subset selection. We further conducted experiments on ImageNet and presented the results in Figure 2 of the global-pdf, and also here in the following table, which shows the compute time and memory to reach 10% and 20% RAR values. We observe that we perform better than other methods (“-” means that we couldn’t achieve that RAR with the method).

| | Speedup | | Memory (Gb-min) | |
|---|---|---|---|---|
| **RAR** $\to$ | 10\% | 20\% | 10\% | 20\% |
| Pruning | 0.99 | 1.03 | 7.31e5 | 6.94e5 |
| Random | 1.06 | 1.21 | 6.90e5 | 5.69e5 |
| Proxy | 0.91 | 0.99 | 8.24e5 | 7.19e5 |
| Our | 1.25 | 1.47 | 5.77e5 | 4.87e5 |

> *I would expect a little bit more analysis on the characteristics of the data subset selected using their method vs. other baselines. Are there difference in the properties of data which are selected by SubSelNet? What is the overlap factor with other methods?* Note that the key to our algorithm is to select different subsets that are optimal for different architectures. Accordingly, we found that similarity between architectures correlates positively with the overlap between their selected subsets. We compute the similarity between graphs $s(G _i,G _j)$ and the subset overlap $|S _i\cap S _j|$ for each pair of architectures $i,j$. 
Then we compute Kendall’s tau between the list of all possible pairs of $s(G _i,G _j)$ and $|S _i\cap S _j|$, obtaining 0.42 for CIFAR10 and 0.55 for CIFAR100. This shows that there is a positive correlation between the model structure and the subset chosen. We also examined the overlap between subsets returned by other methods and ours; indeed, there is a good amount of overlap. For example, we observe that for CIFAR100, the subset chosen by us has a 31% overlap with the subset chosen by GRAD-MATCH. However, note that our method selects the subset significantly faster than others, as shown in Table 4 of the global-pdf. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their detailed response; I will maintain my rating!
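The Kendall's tau statistic reported above can be computed with a direct pairwise implementation (tau-a, ignoring ties; `scipy.stats.kendalltau` provides a tie-corrected variant):

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall rank correlation: (concordant - discordant) pairs, normalized
    by the total number of pairs n*(n-1)/2."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n = len(a)
    num = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            num += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return num / (n * (n - 1) / 2)

# Sanity checks: perfectly concordant / discordant rankings.
assert kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0
assert kendall_tau([1, 2, 3, 4], [40, 30, 20, 10]) == -1.0
# A positive tau (0.42 on CIFAR10, 0.55 on CIFAR100 in the rebuttal) indicates
# that more similar architectures tend to share more of their selected subsets.
assert kendall_tau([0.1, 0.5, 0.9], [3, 10, 12]) > 0
```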
Summary: This work proposes a new method to select subsets of valuable training examples, with an emphasis on specializing the selections to new model architectures. The motivation is that selections made for one architecture may not work well when used with a different architecture (a claim that the authors did not show evidence for here). At inference time, the method takes a new network architecture, converts it to a learned embedding via a GNN, feeds it along with new inputs $x$ to a model approximator that estimates predictions from a trained model, and then optimizes over a probability vector from which we can sample a valuable subset. Rather than training the different components all at once, the authors propose separate objectives for each learned module. And intuitively, the final optimization over the probability vector (shown in eq. 1) encourages low loss among the selections, as well as high diversity. Strengths: This is the first method I've seen that focuses on specializing data subset selection to specific architectures, so that angle is novel. It's also challenging, but the authors manage this complexity by setting up a GNN and model approximator to efficiently estimate predictions from trained models with arbitrary architectures. The full pipeline involves multiple steps, but the authors show how to train each module independently, and the results show that it works well in practice. Weaknesses: - One of the main motivators for this method is the idea that data subset selection cannot generalize across architectures. I didn't see any evidence for this claim in the paper, and in fact there are other works that show the opposite conclusion: for example, the selection-via-proxy approach from Coleman et al. (2020) relies on the generalization of valuable data from smaller to larger models. Can the authors explain their focus on this issue? 
And if they believe it is a serious issue, can they provide evidence of lack of transferability across models, and perhaps of very different selections in their method depending on the architecture used at inference time? - The authors provide no information about the computational cost of the pre-processing step in this method. If the goal is to perform efficient neural architecture search or hyperparameter tuning, this cost should be accounted for, and it seems to require training at least 250 models with different architectures, plus the GNN and model approximator. Ultimately, it's hard to say if this actually reduces the computational cost of training. - The terminology for "transductive" and "inductive" solutions for $\pi$ didn't make sense to me. I wonder if there's a simpler way to refer to these methods - for example, the "transductive" method solves a per-instance optimization problem, and the "inductive" method can be viewed as an amortized optimization solution (see "Tutorial on Amortized Optimization" by Amos). - The main objective shown in eq. 1 didn't quite make sense to me. The first part of the objective, which focuses on the loss for the selected examples, seems like it wouldn't capture anything meaningful - many samples will be perfectly predicted and have near-zero loss, and it doesn't seem especially helpful to focus on the easiest examples that are correctly predicted. Am I missing something, or can the authors explain this potential issue? - It would seem like the diversity term in the eq. 1 objective is intended to ensure diversity in some representation space and minimize redundancy for very similar training examples. Indeed, that's what prior methods aim to accomplish with submodular set functions like facility location. So the choice to instead use the entropy over $P_\pi$ is strange to me, because it cannot be expected to encourage this diversity. 
In fact, I'm not sure what it accomplishes because we're already performing sampling without replacement. Can the authors explain this choice, and explain whether it would be possible to test their method with a submodular set function here instead of the entropy term? - I'm not sure why the authors included eq. 3 given that they don't suggest training all the models end-to-end like this. It's not clear that this end-to-end objective would even result in good models for the GNN and model approximator. I wonder if the authors could skip this somewhat confusing overview objective and proceed directly to the training steps for each module. - In section 4.1 about the GNN, could the authors confirm which architectural information they encode in the input representation? It sounded like they only indicate the operation performed at each layer, but perhaps not the size of each layer. If so, that seems somewhat limiting for the GNN's ability to distinguish between different architectures. - The main text doesn't seem to explain how the authors optimize eq. 9 and eq. 10. We are optimizing over $\pi$ here (or a network that outputs $\pi$), and this would seem to require backpropagating gradients to a large number of logits through a non-differentiable sampling process; in particular, the "inductive" solution seems like it could require a forward pass through all the training examples for each gradient step. Algorithms 1 and 2 provide no information, and I didn't see a pointer to an appendix section that explains this point. - The notation for the "inductive" model on line 235 is confusing because it doesn't show the inputs referenced in the same sentence. This is only corrected below on line 241. - The RAR metric is sensible, but it also leads to a y-axis in Figure 3 that's difficult to interpret. In particular, it's hard to tell at what point the trained models have unacceptably low accuracy due to too much speedup. 
Would the authors consider instead showing raw accuracy, or absolute accuracy reduction? - I wonder if the authors could include strong but simple baselines to compare to the existing ones. For example, could they include training on the full dataset for different numbers of epochs, training on a fixed random subset, or training on newly sampled subsets at each epoch? Given the focus on speedup on the x-axis rather than the number of epochs, I could see these baselines outperforming some of the existing ones, which are in some cases quite slow. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Several questions are mentioned in the issues above. One of the baselines used here, EL2N, was recently shown to have a serious bug in its implementation that invalidated some of the findings. Can the authors explain the relevance of that issue to their experiments and say whether they're using a corrected implementation here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not include much discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed feedback, which will help us improve our paper. > *evidence of lack of transferability* We provided a comparison with Selection via Proxy in Fig 7 of App E.1 (also Fig 3, global-pdf), where we outperform it. Also, we select 5% subsets of CIFAR10 for 4 architectures with the lowest no. of parameters and use them to train another architecture. We observe that this simple baseline gives RAR of (0.36, 0.39, 0.37, 0.37) whereas our method gives RAR=0.31. Thus, the subset obtained from one architecture may not be the best for other architectures. > *cost of pre-training* The GNN is trained only once, irrespective of the dataset, since it only captures the structure of the architecture. The model approximator indeed involves an overhead, but it is one-time and is amortized over the no. of unseen architectures during inference-time training. To give an analogy, premier search engines invest a lot of resources in making fast inferences rather than training (e.g., LLMs). They build complex models that are difficult and computationally intensive to train, but their inference is fast for many queries once trained. We have discussed this in detail in Limitations, App. A. We have provided this amortization analysis in Table 6 (L387) of our main paper. Here, we show that even when we include training times in the total cost, we still gain a better speedup (CIFAR10).

|# of archs for training approximator|100 (Speedup)||100 (Memory, Gb-min)||250 (Speedup)||250 (Memory, Gb-min)||
|-|-|-|-|-|-|-|-|-|
|RAR $\to$|10%|20%|10%|20%|10%|20%|10%|20%|
|GLISTER|1.52|2.12|515.96|365.05|1.52|2.12|515.96|365.05|
|GradMatch|1.69|2.20|457.67|362.47|1.69|2.20|457.67|362.47|
|Inductive|3.18|5.23|253.40|171.10|2.94|4.41|274.08|202.91|
|Transductive|4.25|7.21|188.03|122.97|3.65|4.97|218.94|178.39|

Note that the model approximator and pre-trained models can be trained only once for a large dataset and be fine-tuned for smaller datasets. 
This allows us to significantly reduce the total computation cost by at least a factor of 5, with nearly the same performance. > *New metric: absolute accuracy reduction; simple baselines: fixed random subset, newly sampled subsets at each epoch* We added new plots in Fig 1 of the global-pdf and show that SubSelNet performs better. > *main objective in eq. 1* Note that the diversity term allows it to select examples from different areas of the instance space, by adding several hard examples. At the outset, one can think of the problem as $\min\sum _{i \in S}\ell(m _{\theta}(x _i),y _i)$ s.t. $\text{Diversity}(S)\ge a$. Thus, we aim to choose a representative subset $S^*$ among many subsets $S$, which gives minimum training error. As $S^*$ is a representative set, most test examples are likely to be in the regime of $S^*$. The trained model $\hat{\theta} (S^*)$ gives the lowest error across all other models trained on different representative subsets. Therefore, it gives the lowest loss on most test examples, since they fall in the regime of $S^*$. Indeed, there are some hard examples giving bad accuracy, which, however, are significantly compensated for by the very high accuracy on most test examples. Explicit training over hard examples leads the model to learn unnecessarily from outliers, resulting in an overall accuracy drop (26% for CIFAR10). $a$ ($\lambda$ in our original objective) controls this regularization. Generalization of subset selection is already very challenging. As the model is trained on diverse and easy examples, the neural pipeline finds it easy to generalize this task across architectures. In contrast, selecting hard examples is already quite difficult for one architecture. This hardness gets amplified when we try to generalize across different architectures. > *$P _\pi$ is strange* We tried to approximate Facility Location (FL) using a neural network. 
For a non-sequential $Pr _\pi$, we have: $\sum _{j\in D} \max _{i\in S} sim _{ij} \approx \sum _{j\in D} \log\left(\sum _{i\in D} e^{sim _{ij}} Pr _\pi (i) \right)$. We faced several difficulties here. (1) We need to pass all examples twice, so it can be computationally expensive, $O(|D|^2)$. As $|S|>1$, we have $\sum _{i} Pr _{\pi} (i) > 1$, so we cannot relax it with Jensen's inequality. (2) The sequential extension, where sampling one instance affects subsequent sampling, is very non-trivial to incorporate here. (3) $sim _{ij}$ is computed in some latent feature space that requires architecture embeddings, the model approximator, etc., and thus it demands another neural network on top of all the current networks, which would further complicate training. Hence, we choose to increase the entropy of $P _{\pi}$ at each selection step. This leads the softmax probability distribution (Eq 8) across candidate instances to be close to uniform at each step. This allows us to choose samples uniformly from different regimes, and hence a representative subset, although we agree that it does not explicitly encourage minimum redundancy. We saw that in practice this works very well, and it also allows the log-derivative trick for efficient sampling. > *What does the GNN encode* The entire neural architecture is fed to the GNN, including all layer information (thus including model size) along with structural and operation information. > *optimize eq. 9 and eq. 10 over $\pi$* We use the log-derivative trick to compute $\nabla _{\psi} \mathbb{E} _{S\sim P _{\pi _\psi}}[ \Lambda (S,\psi) ] = \mathbb{E} _{S\sim P _{\pi _\psi}}[ \nabla _{\psi}\log P _{\pi _\psi} (S)\Lambda(S,\psi)+ \nabla _ {\psi} \Lambda(S,\psi) ]$. This allows us to (1) compute the gradient for backpropagation and (2) distribute the product of softmax probabilities (Eq 8 in the paper) into a sum of log probabilities, which allows us to compute the outer expectation easily. 
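The log-derivative (score-function) trick above can be illustrated on a toy categorical distribution. The sketch below is an assumption-level illustration, not the paper's implementation: it estimates $\nabla_\theta \mathbb{E}_{i\sim \text{softmax}(\theta)}[f(i)]$ as $\mathbb{E}[f(i)\,\nabla_\theta \log p_i]$, using the analytic score $\nabla_{\theta_j}\log p_i = \mathbb{1}[i=j]-p_j$. The names `reinforce_grad` and the toy reward `f` are hypothetical.

```python
import math
import random

def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_grad(theta, f, n_samples, rng):
    """Score-function (log-derivative) estimate of
    d/d theta of E_{i ~ softmax(theta)}[f(i)] = E[f(i) * grad log p_i],
    using the analytic score grad_{theta_j} log p_i = 1[i=j] - p_j."""
    p = softmax(theta)
    idx = list(range(len(theta)))
    grad = [0.0] * len(theta)
    for _ in range(n_samples):
        i = rng.choices(idx, weights=p)[0]   # sample from the current policy
        for j in idx:
            grad[j] += f(i) * ((1.0 if i == j else 0.0) - p[j])
    return [g / n_samples for g in grad]
```

For a categorical distribution the exact gradient is $p_j(f(j) - \mathbb{E}[f])$, so the estimate can be checked against it. In the paper's setting, $\Lambda(S,\psi)$ itself also depends on $\psi$, which contributes the second term $\nabla_\psi \Lambda(S,\psi)$ in the formula above.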
> *EL2N bug* We believe we are using the correct (updated) version of the code, where the bug in flax regarding the loading of checkpoints was fixed in April 2021. > *lack of limitations* We discussed limitations in detail in App A. --- Rebuttal Comment 1.1: Title: Thanks for response Comment: Thanks to the authors for their response. I remain skeptical of several design choices in this work (the need for an architecture encoder, the wisdom of the main objective, the entropy penalty which is apparently chosen for convenience), and like some other reviewers I find it overly complicated and highly burdensome for users. Regarding the lack of transferability across architectures, the response offered by the authors is not exactly convincing: their results are slightly better than SVP, but it's not as if SVP doesn't work at all due to the smaller architecture, which seems implied by the paper's narrative. The redeeming factor here is that the results are in fact better. I'm not enough of an expert in these methods to have a strong view on why, but the authors deserve credit for their results. Given my concerns, I plan to keep my score as "borderline accept." I would be interested to chat with the other reviewers at some point. --- Reply to Comment 1.1.1: Title: Clarifying some of the concerns Comment: Many thanks for your quick response. We would like to address your concerns, especially the justification of our neural pipeline and the comparison with SVP, once more. We would earnestly request you to have a look at it. Given the character limit, we could not delve into the details of our design choices, especially the architecture encoder, in the original rebuttal. However, we describe them in detail here. **SVP**: We would like to clarify that the improvement of our method over SVP is indeed significant. We apologize for the miscommunication. 
The numbers (0.36, 0.39, 0.37, 0.37) we showed in the statement "We observe that this simple baseline gives RAR of (0.36, 0.39, 0.37, 0.37) whereas our method gives RAR=0.31. Thus, the subset obtained from one architecture may not be the best for other architectures." are NOT the results for Selection via Proxy. Here, we use *our method* to compute the optimal subset and use it to train *only one new architecture*. They are neither results from SVP, nor are they averaged across architectures. We used this example simply to show that optimal subsets obtained using our method may differ across architectures; the intention was not to contrast/compare SVP against our method. We simply used one randomly drawn architecture to show that the optimal subsets may differ. This difference is actually large if we aggregate across architectures. In fact, in Figure 3 of the rebuttal PDF, the plots show that the results of our method are indeed significantly better than SVP for complex datasets like TinyImagenet and Caltech 256. For example, in TinyImagenet, the RAR value of **our method is 0.52, SVP is 0.71** for the 20% subset, and for Caltech 256, **ours is 0.46 and SVP is 0.67.** We believe that this shows our method is overall significantly better than SVP, which showcases that subsets selected for one architecture may not generalise across others. **Explanation of the flow of our pipeline:** Generalization of subset selection across architectures is an extremely complex task. Since we are generalizing the process across architectures in a non-model-agnostic manner, the pipeline becomes a bit complex. When we generalize across any objects, we always convert them into embeddings, so that vectorial operations can be performed. Since the architectures are the objects over which we wish to generalize, we convert them into vectors. For graph-structured objects, GNNs are the state of the art for this purpose. We made a detailed discussion in the response to Reviewer KRgz. 
We reproduce it here once again for your convenience. Here, we try to establish the reasoning behind using a transformer-based network along with the architecture encoder for model approximation. First note that the architectures under consideration can be represented as directed acyclic graphs, with forward message passing. During the forward computation, the output $a(v)$ at any node $v$ can be represented as $$a(v)=H _v\left(\sum _{u\in \text{InNbr}(v)} op _v(a(u)) \right) \quad \text{with} \quad a (\text{root}) = x \qquad (A)$$ Here, $op _v$ is the operation on the inputs coming into the node; e.g., $op _v$ can simply be a linear matrix multiplication, and $H _v$ is the activation function at node $v$. We are interested in $a (\text{OutNode})$, where OutNode is the final node where the output is computed. Now, given the nature of this recursion, a graph neural network--- which operates exactly like the above--- can approximate $a (\text{OutNode})$ with appropriate nonlinearities. Specifically, the GNN will gather messages over hops $k = 1,...,K$, starting with $h _0 = \text{nodeFeature}$, as follows: $$h _{k+1} (v)=NN_1\left(\sum _{u \in \text{InNbr}(v)}NN _2(h _{k}(u)) \right)$$ Here, $NN_1$ and $NN_2$ are neural networks. Since the GNN operates exactly like the computation process (A), it makes sense to assume that $h _K(1),...,h _K(|V|)$ are good representations of the operations within the architecture. They, together with the feature $x$, should be able to predict $a (\text{OutNode})$. Thus, our task is now to find nonlinearities $F$ and $G$ so that $a(\text{OutNode}) \approx F( G (h _K(1), …, h _K(|V|)), x )$. Now, the set $(h _K(1), …, h _K(|V|))$ is permutation equivariant with respect to node indexing. As suggested in [1], transformers are universal approximators of permutation equivariant functions. Therefore we use a transformer for $G$. Furthermore, we apply another neural network $F$ on top of the output of $G$ and $x$ to predict $a(\text{OutNode})$. [1] Yun et al. 
Are Transformers universal approximators of sequence-to-sequence functions? ArXiv, abs/1912.10077. We believe that the above reasoning justifies our design choices to some extent. Note that the problem is extremely challenging and largely unaddressed in the literature. As a first step, some of the design choices may not be rigorously justified, but they showed consistently good results across datasets and thus constitute a crucial step forward.
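To make the recursion (A) above concrete, here is a minimal pure-Python sketch (an assumption-level illustration, not the paper's code): nodes are visited in topological order, and each non-root node applies its operation $op_v$ to the incoming activations, sums them, and applies its activation $H_v$. The node names and toy operations are hypothetical.

```python
def dag_forward(topo_order, in_nbrs, ops, acts, x):
    """Evaluate a(v) = H_v( sum_{u in InNbr(v)} op_v(a(u)) ) with a(root) = x,
    visiting nodes in topological order so inputs are always ready."""
    a = {}
    for v in topo_order:
        if not in_nbrs[v]:           # the root node receives the raw input
            a[v] = x
        else:
            pre = sum(ops[v](a[u]) for u in in_nbrs[v])
            a[v] = acts[v](pre)
    return a[topo_order[-1]]         # a(OutNode)

# Toy 4-node DAG: in -> {h1, h2} -> out
relu = lambda z: max(0.0, z)
identity = lambda z: z
order = ["in", "h1", "h2", "out"]
in_nbrs = {"in": [], "h1": ["in"], "h2": ["in"], "out": ["h1", "h2"]}
ops = {"h1": lambda z: 2 * z, "h2": lambda z: z + 1, "out": identity}
acts = {"h1": relu, "h2": relu, "out": identity}
```

With input $x = 1.0$, this DAG computes $\mathrm{relu}(2\cdot 1) + \mathrm{relu}(1+1) = 4.0$ at the output node. A GNN mirrors this structure by replacing $op_v$ and $H_v$ with the learned networks $NN_2$ and $NN_1$, which is why its node embeddings can capture the architecture's forward computation.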
Summary: The paper presents a model-agnostic method of subset selection using a graph neural network as a surrogate. Strengths: - The problem area is an interesting space, and an area of interest to the community at present. - Novel application of GNNs for subset selection - There seems to be a performance gain in some settings for SubSelNet over existing methods (but it is hard to tell from the graphs). Weaknesses: - The paper is not well written. I found the paper hard to parse. The structure is messy, and the paper visually feels like there's too much going on. The figures are small. The graphs and tables are hard to read. This paper would not have much of an impact as it stands, simply because it's difficult to read. - There doesn't seem to be a convincing narrative here on why existing methods are not model agnostic. See Selection via Proxy by Coleman et al and RHO-Loss by Mindermann et al. - The datasets are all small. The graphs are hard to read. - The method seems overly complicated and some of the justifications seem slim (e.g. GNN for the architecture encoder? Why an architecture encoder? Why a "neural approximator"?) Minor Comments: - The arrows (on RAR and other metrics) in figure 3 and table 1 are confusing. They don't indicate what arrows usually do (i.e. what direction is best). I don't even know why there is an arrow in table 1. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See above. But on the whole this paper just needs a thorough restructuring at the very least. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: Societal Impact: Would be nice to see the effect of subset selection on fairness. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. > *The paper presents a model agnostic method of subset selection using a graph neural network as a surrogate.* We believe that there is a misunderstanding. Our method is not model agnostic. Rather, it chooses a different subset for each architecture using a neural model which is trained to select optimal subsets given an architecture. This allows us to obtain a subset that is optimal specifically for a given architecture, without any combinatorial or other expensive subset selection algorithm. Thus, although it does not require explicitly training any model or solving a combinatorial optimization problem every time, it selects a subset using a neural subset selector, which is indeed different for different architectures. > *Doesn't seem to be a convincing narrative here on why existing methods are not model agnostic* In the introduction, we mentioned that many subset selection methods that show good accuracy are required to be trained from scratch every time for every new architecture. This is time consuming. Indeed, there are works like Selection via Proxy by Coleman et al which select subsets efficiently, without training the model again and again. However, they perform worse in terms of accuracy compared to those methods which select a subset each time for a new architecture. This is natural because the model-specific subset selectors tailor their subset selection to each architecture. Our method attempts to effectively trade off between these two aspects. By training on several architectures, we aim to learn to select a subset that is optimal for a new architecture, without explicit training or expensive combinatorial computation. Note that we already compared against Selection via Proxy by Coleman et al (2020) in Figure 7 (Appendix E.1), which shows that our approach provides a better trade-off than their method. For convenience, we have shown the plot in Fig 3 of the global-pdf. 
Moreover, we show the compute time and memory to reach 10% and 20% RAR values for FMNIST, and observe that our method outperforms the others.

||Speedup||Memory (Gb-min)||
|-|-|-|-|-|
|**RAR** $\to$|10%|20%|10%|20%|
|Selection via Proxy|3.65|18.09|168.20|35.27|
|Inductive|28.64|69.24|22.73|8.24|
|Transductive|28.63|68.36|21.25|8.24|

To further support the fact that the subsets to be chosen depend not only on the dataset but also on the architecture under consideration, we selected 5% subsets from 4 architectures with the lowest number of parameters and used them to train a larger architecture. We observe that this simple baseline gives RAR of (0.36, 0.39, 0.37, 0.37) whereas our method gives RAR of 0.31. > *Datasets are all small* We want to note that Tiny-ImageNet (a 100,000-point subset of ImageNet with 200 classes) and CalTech-256 (a 30,507-point dataset with 257 classes) are challenging real-world datasets, especially for subset selection. We further conducted experiments on ImageNet and present the results in Fig 2 and also here in the following table, which shows the compute time and memory to reach 10% and 20% RAR values. We observe that we perform better than the other methods.

||Speedup||Memory (Gb-min)||
|-|-|-|-|-|
|**RAR** $\to$|10%|20%|10%|20%|
|Pruning|0.99|1.03|7.31e5|6.94e5|
|Random|1.06|1.21|6.90e5|5.69e5|
|Proxy|0.91|0.99|8.24e5|7.19e5|
|Our|1.25|1.47|5.77e5|4.87e5|

> *The method seems overly complicated and some of the justifications seem slim (e.g. GNN for the architecture encoder? Why an architecture encoder? Why a "neural approximator"?)* Our task is as follows: given an architecture m and a training set D, we should be able to find a subset S, *without explicitly training m*, which would give optimal performance across all subsets of the same size. That means we need an algorithm A which takes m and D as input and outputs S, *without any explicit training of m on D*. 
Thus, the goal is to find A such that $A(m,D)=S^* _m$, the optimal subset for the architecture m. In general, such an algorithm should solve some candidate optimization problem (Eq 1 in the paper) like the following: $$\min _{\theta, S\subset D: |S|=b}\sum _{i \in S}\ell(m _\theta(x _i),y _i)-\lambda\,\texttt{Diversity}(\{x _i\,|\,i\in S\})$$ *Why a neural approximator?* The key bottleneck here is having to train $m_\theta(x)$ every time for a new architecture. Our work focuses on bypassing this expensive step by providing an approximation of the *trained output of $m_\theta (x)$*, without explicit training of $m_\theta$. Hence, whenever we would need to train $m_\theta$, we can use the approximation directly, which replaces the entire training stage. To do so, we design a neural approximator which takes the architecture of m and the instance x as input and provides the output $m_{\theta^*} (x)$, the prediction made on x by the trained architecture. Whenever we would need to train the model m during the subset selection procedure, we directly feed the architecture of m and x into the neural approximator and directly obtain the output $m_{\theta^*} (x)$; the entire process of training $m_\theta$ is removed. Once we have the approximation of the trained model output, we feed it into another neural network, which directly samples the subset. *Why an architecture encoder?* The neural approximator aims to generalize the prediction of model outputs efficiently across different architectures. We note that generalizing any task across different architectures requires the architectures to be embedded in a vector space. However, directly using the graph matrices doesn't encompass the operations or structure completely, as shown in Table 1 of the global-pdf. > *The arrows in RAR* The $\uparrow$ in Figure 3 denotes the axis, and the $\to$ in Table 1 of the main paper denotes that **RAR** refers to the 10% and 20% columns. We will address them in the revised version. 
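As a hedged illustration of the candidate optimization problem above (Eq 1 in the paper), the sketch below greedily minimizes $\sum_{i\in S}\ell_i - \lambda\,\texttt{Diversity}$ on toy 1-D data. The distance-to-nearest-selected diversity term and the function name `greedy_subset` are stand-ins for illustration only, not the paper's actual selector (which samples S via a learned $\pi$).

```python
import math

def greedy_subset(losses, xs, budget, lam):
    """Greedily build S minimizing sum of losses minus lam * diversity gain,
    where an item's diversity gain is its distance to the nearest point
    already in S (a toy stand-in for the paper's diversity term)."""
    S = []
    while len(S) < budget:
        best_i, best_val = None, math.inf
        for i in range(len(losses)):
            if i in S:
                continue
            # distance to the closest already-selected point (0 if S is empty)
            gain = min((abs(xs[i] - xs[j]) for j in S), default=0.0)
            val = losses[i] - lam * gain
            if val < best_val:
                best_i, best_val = i, val
        S.append(best_i)
    return sorted(S)
```

With $\lambda = 0$ the rule degenerates to picking the lowest-loss (easiest) examples; a positive $\lambda$ trades some loss for coverage of the instance space, mirroring the role of the regularizer discussed in the eq. 1 responses above.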
--- Rebuttal Comment 1.1: Title: Response Comment: I've now read the other reviews, along with the authors' rebuttal. Responses to my concerns first. | We believe that there is a misunderstanding. Our method is not model agnostic. No misunderstanding. I meant model agnostic in the sense that it can be applied to any model architecture, rather than that the selected data can be used to train any model architecture. | In the introduction, we mentioned that many subset selection methods that show good accuracy are required to be trained from scratch every time for every new architecture. This is time consuming. Indeed, there are works like Selection via Proxy by Coleman et al which select subsets efficiently, without training the model again and again. Okay, but then what about adaptive methods like RHO-LOSS? Considering the setting, this seems like the most appropriate baseline, given that the majority of the methods tested are non-adaptive. | ImageNet results and results more generally. Firstly, a question for all the results: it seems like a single run was conducted for each experiment. In my experience, these methods tend to have non-trivial standard deviations, and at the very least 3 runs would be necessary. Apologies for not raising this earlier; it was in my notes on the paper, but I seem to have missed adding it to the review. Secondly, the gains here seem small: 1.21 using a random subset vs 1.41. Including standard deviations, I would say this difference might be even smaller. Add in the pretraining time, and this would be negligible. | On the justifications for the components I understand that. Equation 1 is a reasonable thing to optimise for. The part I felt wasn't justified was each component working, e.g. the neural approximator. Upon re-reading the paper, I do think the KLs are helpful here. It would be nice to have a baseline for this (KL between two of the same models with different initialisations, and perhaps an ImageNet pre-trained ResNet or something). 
However, this does resolve some of my concerns here. | readability This paper is still hard to parse, and I saw no acknowledgment of that throughout this rebuttal. The rebuttal response provided further evidence of that. The figures, graphs, tables and spacing make the paper very aesthetically unappealing and just difficult to read. I wouldn't vote for a rejection on this alone, but it is a contributing factor. This paper has the potential to be impactful, but only if someone reads it. The paper would really benefit from additional time here. --- Reply to Comment 1.1.1: Title: Clarification of the concerns Comment: Many thanks for your response. We attempt to clarify your concerns once again. We would earnestly request you to look into it. > *RHO-LOSS…the majority of the methods are non-adaptive* We have tested our method against 4 adaptive baselines and 2 non-adaptive baselines (described in L279 of the paper). Here, we compare our method against RHO-Loss. We observe that we outperform RHO-Loss by a significant margin on CIFAR10.

||Speedup||Memory (Gb-min)||
|-|-|-|-|-|
||10% RAR|20% RAR|10% RAR|20% RAR|
|RHO-Loss|1.09|1.37|390.3|310.26|
|Our|5.61|16.52|142.45|53.67|

> *ImageNet results and results more generally* No. of runs: As some baselines are computationally very expensive, we couldn't run them multiple times for multiple architectures (we indicated this in the OpenReview submission form). Although we could have reported standard errors for the remaining baselines, we thought that such partial reporting would be confusing, and further, given that there are so many points in the graph, error bars would make it more difficult to parse. In general, we observed that **seeding doesn't affect the Pareto efficiency overall**. In the table below, we report the speedup and memory, along with their standard errors over 10 runs, required to reach 10% to 40% RAR for the most efficient baselines. The gap for the others is even larger. 
(Speedup)

||10% RAR|20% RAR|30% RAR|
|-|-|-|-|
|Pruning|3.54±0.06|5.53±0.12|7.97±0.22|
|EL2N|1.93±0.03|4.78±0.11|7.34±0.11|
|Our|5.61±0.06|16.52±0.18|28.36±0.41|

(Memory (GB-min))

||10% RAR|20% RAR|30% RAR|
|-|-|-|-|
|Pruning|221.1±3.75|139.41±3.03|87.13±2.41|
|EL2N|413.9±6.43|170.03±3.91|101.88±1.53|
|Our|142.45±1.52|53.67±0.58|29.18±0.42|

ImageNet: Achieving 10% or 20% RAR requires |S| = 0.88|D| and |S| = 0.81|D| for random, whereas we reach these with |S| = 0.76|D| and 0.63|D|. Despite this larger size for random, we had to draw several random subsets to ensure that at least one lucky random subset gives 10% or 20% RAR. We reported the speedup/memory for only that subset, without accounting for how many of the subsets thus generated could not reach the desired RAR. Hence, the probability that the speedup attained by random is 1.21 is very small. For a fair comparison, we now compute $\mathbb{E} _S[\text{time taken}\,|\,RAR(S)=\text{Desired RAR}]$ by generating different-sized subsets which reach the required RAR. We notice there is a significant gap between ours and random.

||10% RAR|20% RAR|30% RAR|40% RAR|
|-|-|-|-|-|
|Random|1.02±0.03|1.06±0.06|1.38±0.06|1.93±0.08|
|Our|1.25±0.03|1.47±0.03|2.47±0.04|6.02±0.07|

Further, we also calculate the difference between our and random's performance for a fixed speedup, where we notice an 18.15% and 21.08% RAR gain when targeting a 5x or 10x speedup, respectively, which is significant. > *KL with pre-trained ResNet* We had also considered a pre-trained ResNet during the initial stage of experiments, since it can significantly reduce the pre-training cost. However, the performance was poorer, with a KL of 0.261, whereas our method achieves a KL of 0.089. > *paper is hard to parse, I saw no acknowledgment* We sincerely apologize for not addressing this in the response. We thank the reviewer for pointing out essential aesthetic and readability issues in the paper, especially in the plots and tables. 
However, the character limit during the rebuttal was 6k, and we gave the overall explanation of our method more priority there. We compared the variants of our method with 6 baselines in the main paper, and 3 more in the appendix. This generated complex trade-off plots, which made the graphs difficult to parse. On the other hand, generating a single large table out of so many numbers might be overwhelming, so we decided to keep the figures. On top of that, for quick reference to subparts of our method along with ablations, we aimed to keep those tables in the main paper for easy access by the reader. However, we understand that this made the results difficult to parse. Please note that NeurIPS provides an additional page for accepted papers. If our paper gets accepted, we will use this for presentation changes in the paper, as follows: - We will split the large main figure into two larger figures (separating adaptive and non-adaptive baselines), so that every plot is clearly readable. We will also put FMNIST in the Appendix, so that the figure in the main paper can be made larger. - We will fix the notations and structure in the tables, to make the results more apparent without being a source of confusion. - To reduce notational overhead, we will move the low-level GNN description to the appendix and describe the theoretical underpinning. - We will additionally discuss SVP and RHO-Loss in the introduction and the method section, to motivate them better. Rebuttal: We got 6 reviews and were required to report many results. Thus, unfortunately, we crowded the global-pdf with results. However, if the paper gets accepted, we will distribute them in the same manner.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive and insightful comments. We would like to summarize the reviews and the global-pdf (attached with this rebuttal) here. > (GR.1) *Results after addition of pre-training time* Currently, the approximator is trained with 250 architectures. However, we observed that even when we train the approximator with a smaller number of models, we beat the top competitive baselines after accounting for amortization. Below, we show the amortized time and memory to reach 10% and 20% RAR on the CIFAR10 dataset with model approximator training sets of 100 and 250 architectures, for the most competitive methods.

|Approximator training size|100 (Speedup)||100 (Memory, Gb-min)||250 (Speedup)||250 (Memory, Gb-min)||
|-|-|-|-|-|-|-|-|-|
|**RAR**$\to$|10%|20%|10%|20%|10%|20%|10%|20%|
|GLISTER|1.52|2.12|515.96|365.05|1.52|2.12|515.96|365.05|
|Grad-Match|1.69|2.20|457.67|362.47|1.69|2.20|457.67|362.47|
|Inductive|3.18|5.23|253.40|171.10|2.94|4.41|274.08|202.91|
|Transductive|4.25|7.21|188.03|122.97|3.65|4.97|218.94|178.39|

> (GR.2) *Other applications beyond NAS* In addition to NAS, our work can be used for hyperparameter selection in the AutoML domain. Primarily, we address the optimization of network-related hyperparameters such as activation functions and intermediate layer widths. The approach we propose involves training these model instances on a subset of data derived from our method. This expedited model-training strategy quickly yields trained models and facilitates efficient cross-validation procedures. Moreover, let us consider the case where we need to tune non-network hyperparameters, such as learning rate, momentum, and weight decay. Given the architecture, we can choose the subset obtained using our method to train the underlying model parameters for different hyperparameters, which can then be used for cross-validation. 
> (GR.3) *Clarification regarding the pre-trained feature extractor* We sincerely apologize for the misunderstanding which arose in Section 5.1 about the usage of the ResNet-based embedding for images. To calculate the similarity matrix for facility location, we utilize the penultimate layer of the ResNet-based feature extractor as the image representation, which is common in the literature, especially for calculating similarity between images. Hence, the feature extractor mentioned on L276 is used **only during the subset selection stage for facility location**, as shown in L281. Thus, the inputs to the model architectures for $\texttt{SubSelNet}$ and the other methods are the actual images. > (GR.4) *Contribution of the GNN-based architecture encoder* A discussion regarding the GNN is present in Appendix F. Summarizing: (1) the GNN provides contextual embeddings of each node that capture not only the operation at that node but also the operations preceding it; (2) the GNN gives embeddings that are independent of the underlying dataset, allowing us to train the encoder only once and use it for multiple datasets. As a baseline, we feed the graph structure explicitly into the model approximator using the adjacency matrix instead of the GNN-derived node embeddings. We note that such a change negatively impacts the performance, resulting in a 5-6% drop in RAR for a 10% subset on CIFAR10, as shown in Table 1 of the global-pdf. > (GR.5) *ImageNet results* We have added the results for ImageNet in Fig 2 of the global-pdf. We want to note that the adaptive baselines gave an out-of-memory error, and hence we were not able to experiment with those. Here we report the compute time and memory required to reach 10% and 20% RAR. 
| | Speedup || Memory ||
|---|---|---|---|---|
| **RAR** $\to$ | 10\% | 20\% | 10\% | 20\% |
| Pruning | 0.99 | 1.03 | 7.31e5 | 6.94e5 |
| Random | 1.06 | 1.21 | 6.90e5 | 5.69e5 |
| Proxy | 0.91 | 0.99 | 8.24e5 | 7.19e5 |
| Our | 1.25 | 1.47 | 5.77e5 | 4.87e5 |

> (GR.6) *Summary of the global-pdf*

In the global-pdf, we have added:

**Figure 1**: Tradeoff curves between absolute accuracy reduction (difference of test error of full selection and subset) and Speedup/Memory for the baselines: Random, Random-switch (random subset chosen at each epoch) and Selection-via-Proxy

**Figure 2**: Tradeoff curves between RAR and Speedup/Memory/Budget for ImageNet for the baselines: Random, Random-switch and Selection-via-Proxy

**Figure 3**: Tradeoff curves between RAR and Speedup for Tiny-ImageNet and Caltech-256 for the baseline: Selection-via-Proxy

**Table 1**: Comparison of performance of GNN-based embedding and graph matrix embedding in the model approximator

**Table 2**: Test error of NAS on the DARTS search space for all non-adaptive baselines on CIFAR10

**Table 3**: Variation of SubSelNet performance on changing the number of training architectures in the model approximator

**Table 4**: Breakdown of selection and training time for top adaptive and non-adaptive baselines on CIFAR100 for a 0.5\% subset

Pdf: /pdf/f09f6ea24f33c4db34c62c4c0939d6948e1e87af.pdf
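For concreteness, the facility-location step from (GR.3) can be sketched end to end: penultimate-layer embeddings define a cosine similarity matrix, and a greedy loop maximizes the facility-location objective $\sum_i \max_{j \in S} \mathrm{sim}(i,j)$. This is a minimal illustration under our own simplifying assumptions (random vectors stand in for the ResNet embeddings; the function name is hypothetical), not the exact implementation:

```python
import numpy as np

def facility_location_greedy(Z, k):
    """Greedily pick k centers maximizing sum_i max_{j in S} sim(i, j),
    where sim is cosine similarity computed from embedding rows of Z."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize rows
    sim = Zn @ Zn.T                                    # cosine similarity matrix
    n = sim.shape[0]
    covered = np.zeros(n)   # best similarity of each point to any chosen center
    selected = []
    for _ in range(k):
        # Marginal gain of candidate j = total improvement in coverage.
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf          # never re-pick a chosen center
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected
```

The greedy rule gives the usual $(1-1/e)$ approximation guarantee for this submodular objective, which is why it is the standard solver for facility location.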
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a new subset selection method, SUBSELNET, for efficient training of neural network models. SUBSELNET selects optimal subsets for a model based on its architecture with minimal overhead. SUBSELNET outperforms existing subset selection methods in terms of computational efficiency. Strengths: New data-driven approach to subset selection: the proposed method turns subset selection into a learning problem and learns a subset selection pipeline that can generalize to new architectures. Such an approach has the potential to be more universal while also more efficient under certain conditions (see below). A step towards uncovering the relationship between optimal subsets and model architecture: the proposed method in principle shows that optimal subsets can be determined from a simple representation of the model architecture, which is quite a strong hypothesis worth investigating. This line of research has the potential to better our understanding of the relationship between data and model in deep learning. Efficient subset selection adaptive to model architecture: although with significant drawbacks (see below), the proposed method points to a promising possibility of architecture-adaptive subset selection with very little cost. Application in NAS and hyperparameter search seems promising: from the results reported in the paper, the proposed method has the potential to significantly accelerate NAS and hyperparameter search if used properly. Weaknesses: Method has limited efficiency advantage because of costly training: the proposed method needs to train a large number of models in order to learn a model approximator, so although it is more efficient than other adaptive subset methods at test time, the computation cost is considerably shifted to the training phase.
As a result, the proposed method is only clearly more efficient when it is required to efficiently train a large number of models of different architectures so that the cost can be amortized. This situation seems to arise only when doing NAS. NAS is undeniably an important task, but otherwise the potential use of the proposed method seems limited. (minor) Another limitation of the method is that the trained model approximator does not generalize across datasets. The method needs to be retrained for each new dataset, which further limits its scope of application. The method also requires a pre-trained feature extractor suitable for the current data distribution, which may not be available for less common datasets. Insufficient analysis of the components in the proposed method: to appreciate the merits of the proposed design of approximating model prediction based on architecture, one would want to know how effective the model approximator is. For example, how effectively does it generalize to unseen architectures? Also, to what extent is the subset selection dependent on architecture (how much do subsets selected for two different architectures overlap, and does the degree of overlap depend on architecture similarity)? For the GNN architecture encoder, how much does it contribute to the final performance? And what about the pre-trained feature extractor? (minor) The scope of the experiments is somewhat limited: experiments are mainly performed on small datasets such as CIFAR100, with no experiment on larger datasets such as ImageNet where efficient training is much more meaningful. Some assumptions made in the paper are likely questionable: see the `Questions` section. Potentially incomplete evaluation protocol: see the `Questions` section.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Assumptions on architecture encoder: Section 4.1 describes the parameterization of operations in a neural network into node representations, and uses them as features in the model approximator to approximate the prediction of a model. This implicitly assumes that the prediction of a model depends only on the structure of the network, which is likely not the case. Hyperparameters like model size, optimizer, learning rate, etc., can all affect model performance and could affect different architectures differently. It may require empirical analysis to show whether using architecture alone suffices to approximate model predictions. Assumptions on using SUBSELNET in AutoML: The idea of using adaptive subset selection in NAS, etc., is potentially problematic. The goal of NAS is to select $\underset{m}{\operatorname{argmin}} L$, while the proposed method selects $\underset{m}{\operatorname{argmin}} \underset{S}{\operatorname{min}} L$, which is a different optimization problem. Because the subset can adapt to each particular model, the performance on the model's most advantageous subset might not correlate well with its performance on the whole dataset. For example, it is possible that a generally worse-performing model has a good training subset. On the other hand, non-adaptive subset selection does not risk over-adaptation between subset and model and can be more safely used in NAS to approximate $\underset{m}{\operatorname{argmin}} L$. Evaluation: The main results of the paper, Figure 3 and Table 1, are mostly dedicated to comparing the speed (computation time) and memory usage. Here the computation time involves multiple components (e.g., running subset selection, model training), can be tricky to compare across methods, and is implementation-dependent. Because the proposed method uses costly training, whether or not to include the amortized training time in the comparison could make a large difference.
The authors need to be more explicit about the detailed protocol used when comparing computation time. Because the training phase involves training many models, the cost is mostly linked to the number of models trained. Therefore, it seems necessary to verify the minimum number of models that need to be trained in order to learn a good model approximator. Comparison of efficient training can be evaluated in other dimensions as well, most notably with sample efficiency: would the proposed method be able to select a smaller subset to maintain a certain accuracy? Sample efficiency is also a more reliable metric than computation time and allows for more objective comparison. Results in "using SUBSELNET in AutoML" section: why are only `random` and `proxy` evaluated in Table 4? What about other subset selection methods listed in Figure 3? Why are non-adaptive baselines, which could have low cost, not evaluated in Table 6? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the main limitations of the proposed method, including dependence on the training dataset and the inability to generalize model prediction to the test set. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thorough reading and valuable feedback. Please find our answers below:

> *how effectively does the model approximator generalize to unseen architectures?*

The ablation study (L337) in our paper provides an evaluation of model approximators. Further, the analysis from L349 and Table 2 addresses this exact question. We evaluate the model approximator $g_{\beta}$ alone--- without the presence of the subset sampler--- using the KL divergence between the gold model outputs $m_{\theta^*}(x_i)$ and the predicted model outputs $g_{\beta}(H_m, x_i)$. Here we measure $AVG_{m \in M_{test}} KL(m_{\theta^*}(x_i) || g_{\beta}(H_m, x_i))$. For convenience, we reproduce the numbers from Table 2 for CIFAR10.

| | KLDiv |
|-|-|
| FF | 0.171 |
| LSTM | 0.102 |
| Our | 0.089 |

Further, we note that the classification accuracy of the model approximator on unseen architectures taken from NAS-Bench-101 is 96%, and from DARTS is 94%, on CIFAR10.

> *to what extent is subset selection dependent on architecture?*

As per the reviewer's suggestion, we compute the similarities between graphs $s(G_i, G_j)$ and the subset overlaps $|S_i \cap S_j|$ for each pair of architectures $i,j$. We calculate Kendall's tau between the list of all possible pairs of $s(G_i, G_j)$ and $|S_i \cap S_j|$ to be 0.42 for CIFAR10 and 0.55 for CIFAR100. This shows that there is a positive correlation between the model structure and the subset chosen. Additional results in App 7 and Fig 3 in the global-pdf show that existing works, which assume that a subset for one model can generalize across others, are outperformed by our method.

> *How much does the GNN contribute to the final performance?*

If we replace the GNN embeddings with simple architecture embeddings, we see a 5-6% increase in RAR. We have added the details in the global response (GR.4) and Table 1 of the global-pdf.
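For concreteness, the approximator-quality metric above is just the KL divergence KL(gold || pred) averaged over examples. A minimal sketch with hypothetical two-class probability arrays standing in for the gold outputs $m_{\theta^*}(x_i)$ and the predicted outputs $g_\beta(H_m, x_i)$ (the values are illustrative, not the paper's):

```python
import numpy as np

def avg_kl(gold, pred, eps=1e-12):
    """Mean KL(gold || pred) across examples; each row is a class distribution."""
    gold = np.clip(gold, eps, 1.0)
    pred = np.clip(pred, eps, 1.0)
    return float(np.mean((gold * (np.log(gold) - np.log(pred))).sum(axis=1)))

gold = np.array([[0.9, 0.1], [0.2, 0.8]])  # hypothetical gold model outputs
pred = np.array([[0.8, 0.2], [0.3, 0.7]])  # hypothetical approximator outputs
score = avg_kl(gold, pred)  # lower is better; 0 means a perfect approximator
```

The clipping avoids log(0) for near-deterministic softmax outputs; the averaging matches the $AVG_{m, i}$ in the formula above.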
> *what about the pre-trained feature extractor?*

We have clarified the role of the extractor in the global response (GR.3).

> *Results on large dataset*

Fig 2 in the global-pdf and (GR.5) in the global response compare subset selection across baselines on ImageNet and show the efficacy of our method.

> *Hyperparameters like model size, optimizer, learning rate, etc., can all affect model performance and could affect different architectures differently*

The GNN encodes the structure of the architecture and thus *does incorporate model size*. Since other hyperparameters (H) are not structure related, we did not include them in the GNN. We also did not incorporate H in the model approximator, as both its performance and the final accuracy deviate by at most 2-3% across different H, and we noted that they mostly change the rate of convergence rather than the final accuracy for *all methods, including training with the full dataset*. Thus, our plots (speedup/memory vs RAR) remain almost the same.

> *idea of using adaptive subset selection in NAS, etc., is potentially problematic*

Note that, due to the entropy term (Eq 2), we are minimizing the loss over all representative subsets (over the full dataset). So, the optimal $S$ is itself a representative subset that gives the minimum loss. Hence, the model optimized on the subset in NAS also performs well on the full dataset. This is why we observe that this approach works well in practice.

> *(a) Computation time can be tricky to compare across methods; (b) Amortize training time or not in the comparison make a large difference; (c) the detailed protocol used when comparing computation time;*

(a) Wherever available, we used the released code of the baselines. All these baselines are specifically optimized for compute efficiency, and therefore they ran at full strength. Every experiment was run on the same GPU server in exactly the same setting. During time/memory computation, we ensured that no other processes were running on the server.
(b) Computation time including amortized training is analyzed in (GR.1) in the global response, which shows that our method still stays efficient.

(c) As we mentioned in the paper (L306), "Computational efficiency… $T$ is the time taken for the entire inference task, which is the average time for selecting subsets across the test models $m'\in \mathcal{M}_{test}$ plus the average training time of these test models on the respective selected subsets." Hence, the total time is the subset selection time plus the model training time. A comparison of these two aspects separately is given in Tab 10 in the Appendix and Table 4 in the global-pdf.

> *verify the minimum number of models need to be trained in order to learn a good model approximator.*

We trained with different numbers of models $n$ and observed that beyond $n_{min} = 100$, we are able to train a good approximator (please refer to Table 3 in the global-pdf).

> *Sample efficiency is also a more reliable metric than computation time and allows for more objective comparison.*

We plot the accuracy-budget tradeoff in Fig 2 of the global-pdf for ImageNet, which shows that our method performs better than the others. Moreover, we also provide a table here which shows the value of $|S|$ required to reach a 10–30% drop in max performance for ImageNet, where we notice that we are able to select a smaller subset to achieve the same RAR.

| **RAR**$\to$ | 10\% | 20\% | 30\% |
|---|---|---|---|
| Pruning | 0.86 | 0.80 | 0.62 |
| Random | 0.88 | 0.81 | 0.69 |
| Proxy | 0.83 | 0.74 | 0.56 |
| Our | 0.76 | 0.63 | 0.44 |

> *why only random and proxy in AutoML?*

Random and proxy are the state-of-the-art specifically for NAS. During the rebuttal period, we also experimented with our non-adaptive baselines: Pruning and Facility location. We notice that our method still selects better architectures for NAS. We put the results in the global-pdf in Table 2.

---

Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for providing many extra results and clarifications.
Most of my questions are answered, although I still do not feel extremely confident in the results and the generalizability of the findings given the complexity of the method itself and the complexity of the design settings that could affect the results. I will raise my score to the positive side.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their reply and for increasing their score!

> *I still do not feel extremely confident in the results and the generalizability of the findings given the complexity of the method itself and the complexity of the design settings that could affect the results.*

In the common response titled *Why we use GNN and transformer: Insights from theoretical underpinning*, we have highlighted the construction through which we decided to use the specific elements of the pipeline in their current form. Note that our method performed well across six diverse datasets. Moreover, through extensive ablation studies present in the main paper (Table 2, Line 337), Appendix E (Table 9, L663) and the global-pdf (Tables 1 and 3), we have observed that the GNN+Transformer blocks provide the best performance and that our design choice is robust. For example, we used a transformer as the sequence encoder in the model approximator. Here, if we use another sequence encoder like an LSTM, we still outperform the nearest baseline (selection via proxy) by 9.1% and 7% RAR at subset sizes of 5% and 10%. However, we observed that the transformer gives a larger performance boost.
Summary: This paper focuses on the Subset Selection problem of selecting the best subsets of training samples with which a well-performing model can be trained. This paper focuses on the challenge that previous methods for subset selection cannot transfer across architectures. Specifically, this paper proposes to use a transformer to predict the accuracy of a model trained on a subset of training samples and uses the predicted results as the surrogate to solve the combinatorial optimization problem for the selection (Transductive). This paper also proposes inductive selection, which learns another neural network to predict the result of the combinatorial optimization. Further, this paper proposes to encode the neural network's architecture using a GNN, which is also fed to the predictor. __________________________________________________ Post Rebuttal _________________________________________________ The reviewer has already read the rebuttal. All my concerns have been addressed. And the reviewer wishes that the insights discussed in the rebuttal can be accommodated in the final version of the paper. The reviewer will maintain the original score since the overall novelty, contribution, and impact cannot reach the reviewer's bar for papers scoring 7, and a score of 6 is already positive for the paper. Strengths: 1. In this manuscript, the authors present an innovative approach toward subset selection, putting forward a new framework that emphasizes efficiency and practical applicability. 2. A feature of the proposed framework is its ability to generalize across various architectures. This is a step forward, as it ensures the versatility and universal applicability of the method. The framework is not only confined to a specific architectural layout, but it can adapt to and function effectively within a range of scenarios. 3.
Additionally, the paper is substantiated by robust experimental results that highlight the superiority of the proposed method in comparison to existing baselines. The authors provide strong empirical evidence of the efficacy of their approach, demonstrating that it not only meets the performance standards set by previous methods but often surpasses them. Minor: The authors have also shown how their method can be naturally incorporated into Automated Machine Learning (AutoML) systems. This seamless integration is beneficial, as it minimizes compatibility issues that could arise when introducing a new method into an established system. AutoML stands to gain from this method as it could optimize the automated selection, deployment, and tuning of machine learning models, and the paper illustrates this potential effectively. Weaknesses: 1. Despite the promising results presented in the manuscript, one key concern that emerges is the absence of a justification for the methods put forward. Specifically, it is not well-established whether the transformer in use is genuinely capturing the intricate mapping between models and data, or if it is merely overfitting to the training samples. Overfitting would lead to a lack of generalizability in unseen or novel data situations, which could severely limit the practical utility of the proposed method. It would greatly strengthen the paper if the authors could provide deeper insights into these theoretical underpinnings, perhaps by conducting additional analysis or testing to definitively establish the veracity of the transformer's operations. 2. Another potential shortcoming that emerges from the work is the limitation inherent in the utilized dataset. The dataset, NAS-Bench-101, contains Convolutional Neural Network (CNN) models with architectures that bear high similarities. Specifically, it is not clear whether the proposed method can be extended to other network architectures, such as Residual Networks (ResNets). 
It would therefore be beneficial if the authors could test their method with a broader range of architectures and provide evidence to support its efficacy across such a diverse range. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: To the best of the reviewer's knowledge, analysis over accuracy is an NP-hard problem. It remains elusive to the reviewer whether a transformer really has the ability to capture the mapping between neural networks and accuracy. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The proposed methods rely significantly on the pretraining of the model approximator to produce the correct accuracy and largely depend on the transferability of the model approximator. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback. We in turn answer the questions below: > *Despite the promising results presented in the manuscript, one key concern that emerges is the absence of a justification for the methods put forward. Specifically, it is not well-established whether the transformer in use is genuinely capturing the intricate mapping between models and data, or if it is merely overfitting to the training samples. It would greatly strengthen the paper if the authors could provide deeper insights into these theoretical underpinnings, perhaps by conducting additional analysis or testing to definitively establish the veracity of the transformer's operations. [...] To the best of the reviewer's knowledge, analysis over accuracy is an NP-hard problem. It remains elusive to the reviewer that transformer really has the ability to capture the mapping between neural networks and accuracy.* We would like to point out that the neural approximator captures the probability distribution across the classes — and does not directly predict the accuracy based on the features. Note that the task is to choose the subset from the training set and therefore the values of gold labels are available to us. Since the labels are already given to us, they can be directly compared with the predicted outputs from the model approximator to provide the predicted accuracy. Hence, the model approximator does not aim to predict accuracy only based on the features— it simply predicts the model outputs and compares them with the gold labels. Next, we try to establish a reasoning behind using a transformer-based network along with the architecture encoder for model approximation. Given the hardness of the problem, we will leave the exact formal statement for future work. First note that the architectures under consideration can be represented as directed acyclic graphs, with forward message passing. 
During the forward computation, at any layer for node $v$, the output $a(v)$ can be represented as
$$a(v)=H_v\left(\sum_{u\in \text{InNbr}(v)} op_v(a(u)) \right) \quad \text{with} \quad a(\text{root}) = x \qquad (A)$$
Here, $op_v$ is the operation on the inputs coming into the node. E.g., $op_v$ can be simply a linear matrix multiplication, and $H_v$ is the activation function at node $v$. We are interested in $a(\text{OutNode})$, where OutNode is the final node where the output is computed. Now, given the nature of this recursion, a graph neural network--- which operates exactly like the above--- can approximate $a(\text{OutNode})$ with appropriate nonlinearities. Specifically, the GNN gathers messages over hops $k = 1,...,K$, starting with $h_0 = \text{nodeFeature}$, as follows:
$$h_{k+1}(v)=NN_1\left(\sum_{u \in \text{InNbr}(v)}NN_2(h_k(u)) \right)$$
Here, $NN_1$ and $NN_2$ are neural networks. Since the GNN operates exactly as the computation process (A), it makes sense to assume that $h_K(1),...,h_K(|V|)$ are good representations of the operations within the architecture. They, together with the feature $x$, should be able to predict $a(\text{OutNode})$. Thus, our task is now to find nonlinearities $F$ and $G$ so that $a(\text{OutNode}) \approx F(G(h_K(1), \dots, h_K(|V|)), x)$. Now, the set $(h_K(1), \dots, h_K(|V|))$ is permutation equivariant with respect to node indexing. As suggested in [1], transformers are universal approximators of permutation-equivariant functions. Therefore, we use a transformer for $G$. Furthermore, we apply another neural network $F$ on top of the output of $G$ and $x$ to predict $a(\text{OutNode})$. [1] Yun, C., Bhojanapalli, S., Rawat, A.S., Reddi, S.J., & Kumar, S. (2019). Are Transformers universal approximators of sequence-to-sequence functions? ArXiv, abs/1912.10077.

> *Another potential shortcoming that emerges from the work is the limitation inherent in the utilized dataset.
The dataset, NAS-Bench-101, contains Convolutional Neural Network (CNN) models with architectures that bear high similarities. Specifically, it is not clear whether the proposed method can be extended to other network architectures, such as Residual Networks (ResNets). It would therefore be beneficial if the authors could test their method with a broader range of architectures and provide evidence to support its efficacy across such a diverse range*

We want to note that the multi-stage training approach can be easily extended to architectures beyond the NAS-Bench search space. In Section 5, we have also tested the methods on the DARTS space and observed that our method outperforms other methods in terms of speed and computational efficiency. As suggested by the reviewer, we add the final results for ResNet below, which show the speedup and memory needed to achieve 10% and 20% RAR for the top competitive baselines; we outperform them in both aspects.

|| Speedup || Memory (Gb-min) ||
|--|:--:|--:|:-:|--:|
| **RAR** $\to$ | 10\% | 20\% | 10\% | 20\% |
| GLISTER | 1.54 | 2.71 | 84.41 | 40.80 |
| GradMatch | 1.44 | 3.36 | 79.74 | 35.47 |
| Inductive | 3.57 | 10.31 | 23.30 | 8.07 |
| Transductive | 3.61 | 10.35 | 23.04 | 8.03 |

---

Rebuttal Comment 1.1: Title: Thanks for the well-prepared rebuttal Comment: The reviewer wishes to thank the authors for the well-prepared rebuttal. All my concerns have been addressed, and the reviewer wishes that the insights discussed above can be accommodated in the final version of the paper. The reviewer will maintain the original score since the overall contribution and impact cannot reach the reviewer's bar for papers scoring 7, and a score of 6 is already positive for the paper.
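The message-passing recursion $h_{k+1}(v) = NN_1\big(\sum_{u \in \text{InNbr}(v)} NN_2(h_k(u))\big)$ sketched in the rebuttal above can be illustrated with a toy loop. Everything here is a stand-in (random weight matrices for $NN_1$/$NN_2$, tanh nonlinearities, a hypothetical 4-node DAG); it shows only the shape of the computation, not the trained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                          # embedding width
V = 4                                          # nodes, topologically ordered
in_nbr = {0: [], 1: [0], 2: [0], 3: [1, 2]}    # toy DAG: in-neighbor lists

W1 = rng.normal(size=(d, d)) / np.sqrt(d)      # stands in for NN_1
W2 = rng.normal(size=(d, d)) / np.sqrt(d)      # stands in for NN_2
h = rng.normal(size=(V, d))                    # h_0 = node features

K = 3
for _ in range(K):
    # h_{k+1}(v) = NN_1( sum over in-neighbors u of NN_2(h_k(u)) )
    msgs = np.stack([
        sum((np.tanh(h[u] @ W2) for u in in_nbr[v]), np.zeros(d))
        for v in range(V)
    ])
    h = np.tanh(msgs @ W1)

# h now holds K-hop contextual node embeddings, the analogue of
# (h_K(1), ..., h_K(|V|)) that would be fed to the transformer G.
```

Note how the per-node sum over in-neighbors mirrors the forward recursion (A): each hop propagates information one edge further along the DAG.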
null
null
null
null
Brain Dissection: fMRI-trained Networks Reveal Spatial Selectivity in the Processing of Natural Images
Accept (poster)
Summary: The authors train CNNs to predict the responses of individual voxels across several ROIs to the natural scenes dataset (NSD). They then develop and employ a strategy termed “brain dissection” to uncover the properties/features of images for which specific visual regions are tuned. They highlight gradients within subregions of the visual stream that correspond to tuning variation along mid-level image characteristics including depth, curvature, and object relations. Strengths: Originality: The methodology, as the authors note, is conceptually similar to Khosla & Wehbe (2022) but is applied towards evaluating sub-region tuning of mid-level visual features rather than reaffirming category selectivity. This is an effective use and reference of an existing computational strategy towards novel scientific investigation. My only comment would be to include a more comprehensive description of the methods and related work, instead of relying on the reader to go back to the original work for the conceptual motivation and validation. A few sentences will suffice. Quality: This submission is technically sound, uses appropriate methods, and the claims are well supported for the most part. The authors should, however, be a bit more cautious about claims tying the trends they observe in RSC and OPA/PPA to specific functional preferences, e.g., in 4.1.3 and 4.3, as these are somewhat speculative hypotheses that were not explicitly tested, e.g., tying “up” normals to preferences for an “allocentric frame of reference.” Clarity: The submission is clearly written and well organized. Significance: The results are important, tie in with existing literature and discussions, and will inform other researchers interested in visual stream organization. Weaknesses: Wrapped in with above and discussed again below.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: As noted above, I would recommend expanding the discussion of related work and methodology to make for a clearer and more well-packaged reading. It would also be beneficial to tie in more related work supporting or refuting the hypotheses introduced about the subdivision of the “scene network”, such that these data can be appropriately placed into context with other findings and approaches. This limited connection to existing literature, context, and related findings is the work’s greatest weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Methodological limitations are mostly well discussed. An additional sentence about limitations with this type of ROI modeling strategy in general, e.g., potential lack of transfer or generalization to real experiments, should be mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **"More comprehensive methods and related work description" "Tie in more related work supporting or refuting the hypotheses"** Thank you for your feedback on the need for comprehensive methods and an expanded discussion on related works. - We have included additional motivation points for the method as it relates to previous methods in the global reviewer response. We have added these to the paper. - In addition, we have added the following paragraphs to the discussion section: > We observed specific preferences in regions like the RSC, OPA, and PPA. The RSC demonstrated a pronounced preference for greater depths, outdoor object categories, attributes, and relations, and predominantly “right/left” surface normals. On the other hand, the OPA exhibited a preference for proximate depths, intricate 3D geometries, indoor scene object categories, relations, and attributes, along with a higher inclination towards “upward” surface normals. These findings align with existing literature suggesting that OPA is primed for local navigational affordances [1,2], whereas RSC is geared more towards facilitating landmark-based spatial-memory retrieval [3,4]. In the case of the PPA, its preferences spanned a middle ground between the OPA and RSC, albeit leaning slightly more towards the OPA. This supports the notion of PPA's role in encoding scene structure, albeit at a coarser scale than OPA [2]. A salient finding from our study is the pronounced selectivity of RSC for vertical surface normals when contrasted with PPA and OPA, implying its emphasis on encoding vertical structures. In contrast, PPA and OPA demonstrated a marked preference for horizontal supporting structures, such as tabletops and floors. Prior research has highlighted PPA's sensitivity to scene layout and consistent surface arrangement [5]. Our findings further indicate that such selectivity might be particularly driven by these supporting surfaces. 
> > Further distinguishing between high-level visual ROIs, our study revealed distinct gradients moving from ventro-lateral to medial areas. Ventro-lateral regions exhibited a preference for closer depths, predominantly horizontal surface normals, and darker shading. In contrast, medial areas showed the opposite preferences, which resonates with the idea of distinct processing of foreground objects and distant background elements in these pathways [3]. An added layer of granularity was evident in the observed variability in depth and surface normal selectivity across voxels in the medial and parietal regions. Such variability is indicative of specialized regions tailored for different 3D profiles and global shape processing [6,7]. Furthermore, the parietal region stood out in its pronounced selectivity for spatial relations (like 'on', 'near', etc.), underscoring the significance of spatial relation encoding for this area, a finding that corroborates recent research [7]. **Be more clear about speculative conclusions in the paper.** We appreciate your feedback. We've refined the paper to clarify speculative statements. For instance, we modified the statement “Surface normals oriented at the camera indicate an egocentric frame of reference..." to "It can be hypothesized, though not yet confirmed, that surface normals facing the camera might suggest an egocentric perspective, while those oriented from the ground could indicate an allocentric viewpoint." All speculative content has been relocated to the discussion section. **"Limitations with this type of ROI modeling strategy in general"** Thank you for highlighting the need to discuss the limitations of our ROI modeling strategy in more depth. To expand on some of the limitations of our approach: 1. While our analysis is primarily restricted to the ROIs hypothesis space, we've supplemented it with additional analyses that encompass entire streams, ensuring a broader scope of interpretation. 2. 
We acknowledge that focusing solely on individual voxels might miss representations distributed across networks of multiple voxels. This limitation motivates future work: a dissection procedure that considers multiple units jointly, not just individual ones. 3. Our approach relies on pretrained networks to derive spatial measures from images, and this can introduce some estimation errors. 4. We are also aware that in natural images, certain categories inherently correlate with specific spatial attributes, like scenes predominantly showcasing rectilinear and faraway features, while bodies are usually depicted up close and facing the camera. Our methodology specifically targets ROIs that are widely accepted to encode these spatial properties, which helps to counteract this limitation. **References** 1. Lescroart & Gallant (2019). Neuron, 101(1), 178-192.e7. 2. Bonner & Epstein (2017). Proceedings of the National Academy of Sciences, 114(18), 4793-4798. 3. Epstein (2008). Trends in Cognitive Sciences, 12, 388–396. 4. Auger, Mullally, & Maguire (2012). PLoS One, 7, e43620. 5. Epstein & Kanwisher (1998). Nature, 392(6676), 598-601. 6. Welchman (2016). Annual Review of Vision Science, 2, 345-376. 7. Ayzenberg & Behrmann (2022). Journal of Neuroscience, 42(23), 4693-4710. --- Rebuttal Comment 1.1: Title: Comments addressed Comment: Thanks for dialing back on speculative conclusions and for incorporating more detailed limitations. The improved discussion of related work is a useful addition for paper clarity.
Summary: Understanding the organization of higher-level information representation in the brain is a challenging task in neuroscience. Modern deep learning methods, together with large-scale brain recording data, have opened up new opportunities for constructing large-scale models in a data-driven way and for gaining valuable insights about information processing in the brain. The present paper explores this idea by training a deep neural network model for predicting human brain fMRI from natural scene images. By analyzing the feature map generated by the model, the paper reveals spatial and functional organization of the brain for a wide range of high-level visual features. Strengths: Overall, I find that the paper is well written and that it addresses important questions at the intersection of machine learning and neuroscience. Weaknesses: I have a number of concerns about the results and, particularly, the technical details presented in the paper, which prevented me from accepting it directly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major comments 1. The paper mentions that the model is trained on the NSD dataset including 68400 training images and 3600 validation images. However, the performance of the trained network on estimating fMRI data on the training/validation dataset is missing. Since the main point of the paper is to use a DNN model to gain insights about feature representation in the brain, it is crucial to establish an understanding of how well the model can predict brain activity in the first place. I notice that in the supplementary material, the performance of the model is reported for 3 ROIs (PPA, OPA, RSC) in terms of the Pearson correlation coefficient. However, unless the performance of the model is clearly reported for all ROIs, it is difficult to meaningfully interpret the analysis based on the proposed brain dissection method.
For instance, the correlation between the DNN prediction and the ground-truth fMRI data in each ROI could be presented as a flatmap. 2. One concern regarding the calculation of selectivity (Eq. 1-2) is that a voxel is assigned to a feature value regardless of whether the voxel is truly selective at all. For instance, if L_c masked by M_k across all images is uniformly distributed, then nothing can be said about this voxel's selectivity. To remedy this, the authors should rule out these non-selective voxels from the analysis; otherwise it could lead to misleading interpretations of the results. Minor comments 1. The dimensionality of the neural network model should be clearly stated for reproducibility. 2. Since the model is trained on human fMRI data, a discussion about the biological plausibility of the model would be appropriate. For instance, how much can we infer from the feature representation in the model about that in the real brain? Can we make any experimentally testable predictions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is technically sound, but the technical details are not clear enough to guarantee reproducibility. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The performance of the trained network on estimating fMRI data is missing.** Thank you for highlighting the importance of performance reporting for all ROIs. Based on your recommendation, we've provided the Pearson correlation coefficient plotted on a flatmap for all ROIs on a held-out test set of NSD data in Figure R1 for an example subject (S1). Additionally, we compare our model's mean Pearson correlation to the features from [1] and ImageNet task-optimized features (fit to brain data via Ridge regression) for scene ROIs. Notably, our model surpasses the features from [1] and aligns with AlexNet ImageNet features, even though the AlexNet network has a 76x larger parameter size and is trained on 17.6x more images. We will include correlation flatmaps for all subjects in the revised version of the paper. **Exclusion of non-selective voxels in analysis** Thank you for highlighting the potential issue of including non-selective voxels in our analysis. In response to your suggestion: 1. We've reanalyzed our data by excluding voxels that lack selectivity across all evaluation set images, essentially comparing the distribution of L_c masked by M_k against a uniformly distributed (full image) mask. A voxel was excluded when a Kolmogorov–Smirnov test failed to distinguish the two distributions (p > 0.01). 2. Updated findings can be seen in Figure R3 of the provided pdf. This figure presents the absolute depth both with and without the non-selective voxels. Our findings remain stable when excluding non-selective voxels. We'll apply this refined method to all metrics in the final manuscript. **Clarifying the model's dimensionality for reproducibility** Thank you for this feedback. To address your concern: 1. We have updated our paper to include the dimensionality of our network. The dimensionality of the CNN is 784697 and the dimensionality of the transformer network is 7053009. 2.
To ensure reproducibility, we have provided our full code for training and evaluation in the supplementary materials. We commit to open-sourcing our code upon publication. We have also expanded on any missing technical details in the paper. **Discussing the biological relevance of the model. Can we make any experimentally testable predictions?** 1. Our method indeed paves the way for several experimentally testable predictions. For example, our findings suggest potential studies of the encoding of outdoor versus indoor scenes, as well as differences in expansive versus enclosed space encoding between RSC, OPA, and PPA. Furthermore, the observed differences in horizontal versus vertical orientation preferences across the scene ROIs might hint at unique encodings, such as those of "supporting" structures or divergences in the reference frame. 2. It's important to note that our model's architecture need not mirror the brain's intricacies. The primary objective is to accurately predict brain responses. By analyzing our model's activations, we can deduce which image features it is leveraging to make predictions, and thus are most relevant for the responses that emerge in that brain voxel. This ability does not necessarily equate to a biologically accurate architecture, as seen in certain studies like [2]. **References** 1. Lescroart MD, Gallant JL. Human Scene-Selective Areas Represent 3D Configurations of Surfaces. Neuron. 2019 Jan 2;101(1):178-192.e7. doi: 10.1016/j.neuron.2018.11.004. 2. St-Yves, G., Allen, E.J., Wu, Y. et al. Brain-optimized deep neural network models of human visual areas learn non-hierarchical representations. Nat Commun 14, 3329 (2023). https://doi.org/10.1038/s41467-023-38674-4 --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their response and clarification. I have no further comments.
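The voxel-exclusion procedure described in the rebuttal above (comparing each voxel's masked L_c distribution against a full-image baseline with a Kolmogorov–Smirnov test) can be sketched in pure Python. This is an illustrative sketch, not the authors' code: the function names are hypothetical, and it uses the asymptotic Smirnov critical-value approximation c(0.01) ≈ 1.628 instead of an exact p-value.

```python
import math

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        x = min(a[i], b[j])
        # Advance past ties in both samples before comparing the ECDFs.
        while i < n and a[i] == x:
            i += 1
        while j < m and b[j] == x:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

def is_selective(masked_values, full_image_values, c_alpha=1.628):
    """Flag a voxel as selective only if its masked value distribution
    differs from the full-image one at alpha = 0.01, using the asymptotic
    critical value D > c(alpha) * sqrt((n + m) / (n * m))."""
    n, m = len(masked_values), len(full_image_values)
    threshold = c_alpha * math.sqrt((n + m) / (n * m))
    return ks_statistic(masked_values, full_image_values) > threshold
```

Voxels for which `is_selective` returns False would then be dropped before recomputing the selectivity metrics, as in the reanalysis reported above.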
Summary: This study uses the network dissection method to investigate the feature selectivity of RSC, OPA, and PPA in the human brain. This method is called "brain dissection". In particular, this study focuses on some ecologically important intermediate features, such as depth, surface normals, curvatures, and object relations. Results showed that the three regions show distinct feature preferences. Strengths: 1. To the best of my knowledge, this is the first study to apply the network dissection method to examine voxel preferences. 2. The overall method is clear and the presentation is good. Weaknesses: 1. It is unclear to me why this study only focuses on some intermediate features, such as depth, surface normals, curvature. In theory, this method can also be used to study both low-level and high-level visual features. 2. The theoretical contribution is unclear. I agree this study may be the first application of network dissection on voxel preferences. But the idea here is generally incremental. It can certainly obtain some new findings because the method per se is new. But I don't see clear progress being made as compared to previous methods. These types of results may be good for a neuroscience journal. The key point that is missing here is how these representations are formed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My major concern is why this method is better than previous methods. For example, a simple encoding model was developed in ref. [1] and captures several important properties of scene-selective regions. I hope the authors provide evidence why brain dissection is better than this simple encoding-regression model. I understand that brain dissection certainly gives different results and feature maps because they are different methods. But this is not the reason why brain dissection is superior. Or, could different dissection methods be used to show that the results are highly consistent? [1] Lescroart MD, Gallant JL.
Human Scene-Selective Areas Represent 3D Configurations of Surfaces. Neuron. 2019 Jan 2;101(1):178-192.e7. doi: 10.1016/j.neuron.2018.11.004. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: 1. I trust the results. But the results are completely data-driven. I didn't see how the results add much to our understanding of the function of high-level visual regions. 2. My feeling is that this type of result is suitable for neuroscience journals like NeuroImage. I didn't see any new algorithm-level novelty. It is certainly not interesting to the machine learning community. The results here may be of interest to neuroscientists. I mean the results are valid. But this study only displays some results and does not address how the brain forms such representations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The theoretical contribution is unclear. These types of results may be good for a neuroscience journal. The key point which is missing here is how these representations are formed.** Thank you for your insightful comments. To enhance the clarity on the theoretical contribution of our paper: 1. Our work has introduced a novel scale of examining pixel-level spatial feature selectivity during natural image viewing, revealing new insights into representational differences in spatial feature encoding in the visual cortex. For example, our findings suggest a preferential role for PPA and OPA in supporting surfaces. Our findings on depth and surface normal selectivity in the scene and visual stream networks indicate specialized pathways for the processing of scenes with different 3D geometries, such as object spatial relations, flat versus vertical surfaces, and outdoor versus indoor scenes. The implications of these findings range from understanding brain representation modularity to better informing potential advances in representation learning, brain-computer interface applications, and treatments of related disorders. 2. In comparison to prior studies like [1], our model features align more closely with the mid- and high-level visual cortex representations (Figure R1). We've also introduced a brain dissection method offering pixel-level feature selectivity of spatial measures, a clear advancement over the full-image regression analysis of preceding work. We delve deeper into these distinctions in the global rebuttal response. We'd also like to emphasize the alignment of our work with NeurIPS's scope: 1. NeurIPS has historically published neuroscience-centric results. To illustrate this, consider the recent papers: [2,3,4,5]. 2. NeurIPS originated at the crossroads of biological and artificial neural networks. 
The conference’s Call for Papers explicitly promotes neuroscience findings, referencing its dedication to bridging disciplines such as machine learning and neuroscience. **Why is this method better than previous methods?** Addressing your concerns, we have included a comparison of our model to that of the model used in Lescroart & Gallant (2019) [1]. **As shown in Figures R1, our brain dissection model demonstrates a significant improvement (nearly 2x across scene ROIs) in its alignment with the brain responses compared to the baseline features from [1].** Additionally, we've provided a textual explanation of how our brain dissection approach offers more detailed and interpretable insights compared to earlier studies on spatial selectivity in the global reviewer response. **Different dissection methods could be used and show that the results are highly consistent.** Thank you for your comment. In response to the suggestion for varied interpretability methods: 1. We expanded our experiments beyond network dissection. We integrated both gradCAM [6] and raw attention techniques [7]. The former utilizes input features combined with network gradients, while the latter leverages attention scores from transformer architectures. 2. **Our updated results, detailed in Figure R2, affirm consistency across diverse interpretability methods and network architectures.** We provide more details in the global reviewer response. **Why not also focus on low-level and high-level features?** Indeed, our method can handle features from low- to high-level. We focused on intermediate features like 3D spatial attributes and object relationships because the encodings for intermediate features have proven more challenging to describe compared to well-studied low-level [8,9] and high-level category [10,11] features. Our method enables effective study of these features using natural image data without requiring specialized stimuli or changes to brain imaging techniques. 
In Section 4.2, we also analyzed high-level features such as object relations, attributes, and categories, at a scale larger than most previous studies (1703 categories, 310 relations, 617 attributes). **References** 1. Lescroart & Gallant (2019). Neuron, 101(1), 178-192.e7. 2. Millet et al. (2022). Advances in Neural Information Processing Systems, 35, 33428-33443. 3. Wang, A., et al. (2019). Advances in Neural Information Processing Systems, 32. 4. Khosla et al. (2022). Advances in Neural Information Processing Systems, 35, 9389-9402. 5. Antonello et al. (2021). Advances in Neural Information Processing Systems, 34, 8332-8344. 6. Selvaraju et al. (2017). In Proceedings of the IEEE international conference on computer vision, 618-626. 7. Caron et al. (2021). In Proceedings of the IEEE/CVF international conference on computer vision, 9650-9660. 8. Carandini et al. (2005). Journal of Neuroscience, 25(46), 10577-10597. 9. Hubel & Wiesel (1962). The Journal of physiology, 160(1), 106. 10. Desimone et al. (1984). Journal of Neuroscience, 4(8), 2051-2062. 11. Grill-Spector & Weiner (2014). Nature Reviews Neuroscience, 15(8), 536-548. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. This is an application of ML methods on neuroscience datasets, not an innovation on ML methods itself. It is good to see the consistent results across different methods. This is particularly important because simply applying a known method to a neuroscience dataset reduces the novelty. I raised my score to 5.
Summary: This paper uses the network dissection model to understand how the human brain is functionally mapped to the perception of natural scenes. The proposed method is used to examine a range of ecologically important, intermediate properties, including depth, surface normals, curvature, and object relations, and finds consistent feature selectivity differences. Strengths: - The paper is well written and easy to understand. - The paper introduces interesting discussions by applying an AI model to a neuroimaging study. Weaknesses: - While the paper performs a very interesting experiment from a neuroscience perspective, there is not much of a technical contribution. The paper employs an existing model, i.e., network dissection, and does not propose a new or novel approach. - There is no baseline experiment and there is no way to validate the gain or improvement from the proposed method. - I think this paper has novelty, but perhaps not in a way that the NeurIPS community expects. At least I would like to see a proposal of a novel approach that leads to novel and improved findings. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - As far as I know, there are various methods that provide interpretability, e.g., Chefer et al., CVPR 2021. Will the result remain the same if other models are adopted other than network dissection (which is a bit outdated)? - What would the feature maps be like if some naive models were to be deployed, e.g., MLP or simple CNNs with Class Activation Map? - Is the NSD dataset the only dataset that connects fMRI to natural scenes? If so, this would justify experimenting only on a single benchmark regardless of the generalization issue. - What are OPA, RSC, PPA in the abstract? They should be fully spelled out. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors address limitation of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Lacking on machine learning technical contribution. Lacking novelty in a way that the NeurIPS community expects.** We value the feedback provided. To enhance the clarity on the novel and improved findings: 1. Our work has introduced a novel scale of examining pixel-level spatial feature selectivity during natural image viewing, revealing new insights into representational differences in spatial feature encoding in the visual cortex. For example, our findings suggest a preferential role for PPA and OPA in supporting surfaces. Our findings on depth and surface normal selectivity in the scene and visual stream networks indicate specialized pathways for the processing of scenes with different 3D geometries, such as object spatial relations, flat versus vertical surfaces, and outdoor versus indoor scenes. The implications of these findings range from understanding brain representation modularity to better informing potential advances in representation learning, brain-computer interface applications, and treatments of related disorders. 2. In comparison to prior studies like [1], our model features align more closely with the mid- and high-level visual cortex representations (Figure R1). We've also introduced a brain dissection method offering pixel-level feature selectivity of spatial measures, a clear advancement over the full-image regression analysis of preceding work. We delve deeper into these distinctions in the global rebuttal response. We'd also like to emphasize the alignment of our work with NeurIPS's scope: 1. NeurIPS has historically published neuroscience-centric results. To illustrate this, consider the recent papers: [2,3,4,5]. 2. NeurIPS originated at the crossroads of biological and artificial neural networks. The conference’s Call for Papers explicitly promotes neuroscience findings, referencing its dedication to bridging disciplines such as machine learning and neuroscience. 
**"There is no baseline experiment and there is no way to validate the gain or improvement from the proposed method."** Addressing the baseline concerns, we've compared our method with the model from relevant previous work Lescroart & Gallant (2019) [1]. **As shown in Figures R1, our brain dissection model demonstrates a significant improvement (nearly 2x across scene ROIs) in its alignment with the brain responses compared to the baseline features from [1].** Additionally, we've provided a textual explanation of how our brain dissection approach offers more detailed and interpretable insights compared to earlier studies on spatial selectivity in the global reviewer response. **"Will the result remain the same if other models are adopted other than the network dissection?"** Thank you for your insightful comments. 1. In response to the suggestion for varied interpretability methods, we expanded our experiments beyond network dissection. We integrated both gradCAM [6] and raw attention techniques [7]. The former utilizes input features combined with network gradients, while the latter leverages attention scores from transformer architectures. 2. **Our updated results, detailed in Figure R2, affirm consistency across diverse interpretability methods and network architectures.** We provide more details in the global reviewer response. It's important to mention that we looked into the Chefer et al., CVPR 2021 method you referenced. However, integrating it with our custom architecture posed challenges. Both the GradCAM and raw attention methods we tried, which are baselines in that paper, performed relatively well, and are widely recognized for network interpretability [7,8]. **"What would be the feature maps like if some naive models were to be deployed, e.g., MLP or simple CNNs with Class Activation Map?"** Our current architecture closely resembles a simple CNN with Class Activation Maps. 
We also experimented with a transformer architecture and analyzed its attention maps, finding its results consistent with our CNN-based network dissection (Figure R2). More details are available in the global reviewer response. **"Is NSD dataset the only dataset that connects fMRI to natural scene?"** The NSD dataset is the only fMRI natural image dataset of its scale, with a total of 73000 COCO images and eight subjects. **"What are OPA, RSC, PPA in the abstract? They should be fully spelled out."** We apologize for not expanding the abbreviations of the scene-selective regions. We have updated the abstract with the names fully spelled out: occipital place area (OPA), retrosplenial complex (RSC), and parahippocampal place area (PPA). **References** 1. Lescroart & Gallant (2019). Neuron, 101(1), 178-192.e7. 2. Millet et al. (2022). NeurIPS, 35, 33428-33443. 3. Wang, A., et al. (2019). NeurIPS, 32. 4. Khosla et al. (2022). NeurIPS, 35, 9389-9402. 5. Antonello et al. (2021). NeurIPS, 34, 8332-8344. 6. Selvaraju et al. (2017). ICCV, 618-626. 7. Caron et al. (2021). ICCV, 9650-9660. 8. Linardatos et al. (2020). Entropy, 23(1), 18. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their clarifications.
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback from the reviewers. We are grateful that reviewers HWnv, 9bSv, and MstC acknowledged the clarity and quality of our paper's writing. The novelty of our approach was positively highlighted by HWnv, and the pioneering application of the network dissection method to examine voxel spatial preferences was recognized by c89b. We are pleased that both 9bSv and MstC noted the significance of our work at the intersection of machine learning and neuroscience, emphasizing the depth of insights our research provides into the spatial and functional organization of the brain. The technical soundness of our work was underscored by 9bSv and MstC, with MstC further highlighting its importance in the context of existing literature. We'll address overarching questions here and specific reviewer concerns in their respective responses. @Reviewers HWnv,c89B: **Lacking on machine learning technical contribution. Mostly relevant for neuroscience.** 1. NeurIPS has historically published neuroscience-centric results, evidenced by recent papers: [1,2,3,4]. 2. NeurIPS originated at the intersection of biological and artificial neural networks. Its Call for Papers explicitly promotes neuroscience findings. 3. Our work has introduced a novel scale of examining pixel-level spatial feature selectivity during natural image viewing, revealing novel spatial feature encoding in the visual cortex. For example, our findings suggest a preferential role for PPA and OPA in supporting surfaces. Our findings on spatial selectivity in the scene and visual stream networks indicate specialized pathways for the processing of scenes with different 3D geometries, such as object spatial relations, flat versus vertical surfaces, and outdoor versus indoor scenes.
The implications of these findings range from understanding brain representation modularity to better informing potential advances in representation learning, brain-computer interface applications, and treatments of related disorders. 4. In comparison to prior studies like [5], our model features align more closely with the mid- and high-level visual cortex representations (Figure R1). We've also introduced a brain dissection method offering pixel-level feature selectivity of spatial measures, a clear advancement over the full-image regression analysis of preceding work. @Reviewers HWnv,c89B,9bSv: **Why is brain dissection superior to baseline methods? What is the performance of your model?** Thank you for your feedback. We recognize the value of having comparative baselines. Addressing the concerns raised: 1. We directly compared our model to the model from Lescroart & Gallant. (2019) [5] on held-out NSD brain data using Pearson correlation. We computed the features in [5] for NSD images using the estimation networks, and fit them to brain data using Ridge regression. **As shown in Figure R1, our brain dissection model demonstrates a significant improvement (nearly 2x across scene ROIs) in its alignment with the brain responses compared to the baseline features from [5].** 2. Contrary to Reviewer c89B, our methodology markedly diverges from Lescroart & Gallant (2019). While they utilize a regression model for whole-image features like depth and surface normals, their approach limits deeper feature selectivity, and adding features requires new regression parameters. In contrast, our brain dissection method provides pixel and region-level selectivity across numerous natural image features, supported by brain-aligned neural networks. 
**This doesn't just mean different results; it signifies a leap in granularity and interpretability.** For example, where Lescroart & Gallant (2019) identified broad depth variations among OPA, PPA, and RSC, we detail fine spatial selectivity differences across the ROIs. @Reviewers HWnv,c89B: **Do the results hold with other interpretability methods & network architectures?** Thank you for your insightful comments. Addressing the concerns raised: 1. In response to the suggestion for varied interpretability methods, we expanded our experiments beyond network dissection. We integrated both gradCAM [6] and raw attention techniques [7]. The former utilizes input features combined with network gradients, while the latter leverages attention scores from transformer architectures. See below for implementation details. 2. **Our updated results, detailed in Figure R2, affirm consistency across diverse interpretability methods and network architectures.** Specifically: - The raw attention shows stable mean depth and surface normal selectivity when compared to network dissection. The method replicates significant depth metric increase and a preference for right/left surface normals in RSC compared to PPA and OPA, which prefer lower depths and upright surface normals. - GradCAM's depth selectivity aligns with network dissection, showing increased metric depth in RSC versus PPA and OPA. For surface normals, GradCAM mostly reflects prior findings but shows variations in RSC's selectivity. This might be attributed to its adaptation for regression gradients, and occasional inconsistencies as pointed out by Chefer et al. (CVPR 2021). *Implementation Details*: We combined gradCAM with our initial CNN. For the raw attention method, we opted for the Vision Transformer instead of the paper's CNN. We substituted voxel-specific weights with unique attention heads for each voxel prediction. 
Attention scores from the [CLS] token in these heads were used to generate voxel-specific attention maps for assessment. **References** 1. Millet et al. (2022). NeurIPS, 35, 33428-33443. 2. Wang, A., et al. (2019). NeurIPS, 32. 3. Khosla et al. (2022). NeurIPS, 35, 9389-9402. 4. Antonello et al. (2021). NeurIPS, 34, 8332-8344. 5. Lescroart & Gallant. (2019). Neuron, 101(1), 178-192.e7. 6. Selvaraju et al. (2017). ICCV, 618-626. 7. Caron et al. (2021). ICCV, 9650-9660. Pdf: /pdf/7325b347703b0a389f019129279e563a2088fa43.pdf
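The baseline comparison described in this global rebuttal (fitting features to voxel responses with Ridge regression and scoring held-out predictions with Pearson correlation) can be sketched for a single feature and a single voxel. This is a minimal pure-Python illustration with hypothetical names and toy numbers, not the authors' pipeline; the real analysis is multivariate.

```python
import math

def ridge_fit_1d(x, y, lam=1.0):
    """Closed-form ridge solution for one feature, no intercept:
    w = (x . y) / (x . x + lambda)."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

def pearson_r(u, v):
    """Pearson correlation between predicted and measured responses."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Fit on training trials, score on held-out trials (toy numbers).
x_train, y_train = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8]
x_test, y_test = [1.5, 2.5, 3.5], [3.0, 5.1, 6.9]
w = ridge_fit_1d(x_train, y_train, lam=0.1)
score = pearson_r([w * x for x in x_test], y_test)
```

In a full evaluation, `score` would be computed per voxel on held-out NSD trials and plotted on a flatmap, as in the Figure R1 comparison mentioned above.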
NeurIPS_2023_submissions_huggingface
2023
Learning to Search Feasible and Infeasible Regions of Routing Problems with Flexible Neural k-Opt
Accept (poster)
Summary: The paper introduces a learning-to-search (L2S) solver called Neural k-Opt (NeuOpt) for routing problems. NeuOpt learns to perform flexible k-opt exchanges using a tailored action factorization method and a customized recurrent dual-stream decoder. The paper also proposes the Guided Infeasible Region Exploration (GIRE) scheme, which allows the autonomous exploration of both feasible and infeasible regions. The experiments on TSP and CVRP with up to 100 nodes show that NeuOpt could outperform other learning-based methods slightly. However, the solving speed could be very slow. It also cannot outperform HGS and LKH3 from the OR field. Overall, I think the topic and proposed method are interesting; however, the results are not significant. Strengths: * The paper is mostly well-written, except that Figure 1 is too complex to help the reader understand the basic ideas of the proposed method. * The literature review is quite impressive, including the necessary classic and SoTA methods. * The proposed NeuOpt and Guided Infeasible Region Exploration seem reasonable. * The proposed MDP for k-opt is interesting. Weaknesses: * The experiments on TSP and CVRP with up to 100 nodes show that NeuOpt could outperform other learning-based methods slightly. However, the solving speed could be very slow. It also cannot outperform HGS and LKH3 from the OR field. Overall, I think the results are not significant. * Given the complexity of the method and the fact that the code is not provided, reproducibility could be a problem. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The feasibility of solutions is not discussed in the experiments. To what extent does the Guided Infeasible Region Exploration method impact the feasibility? The reviewer encourages the authors to evaluate the method on VRPTW or TSPTW to make a more solid benchmark. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations have been discussed in the main body. In Section 7, the first point "it falls short against some L2P solvers (e.g., [5, 6]) for larger-scale TSPs" is a good discussion about the limitation. However, the second and third points are actually the advantages of the paper, rather than the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that our approach is reasonable, the MDP is interesting, and the paper is mostly well-written. We understand that the main concerns are the significance of the performance and the code availability. We hope our response below will clarify any misunderstandings and concerns about our work. *** **[Significance of the Performance]** Regarding - *"NeuOpt could outperform other learning-based methods slightly. However, the solving speed could be very slow. It also cannot outperform HGS and LKH3 from OR fields"* - **these appear to be misunderstandings**, and we would like to clarify the following: * **Compared to learning-to-search (L2S) solvers: NeuOpt significantly surpasses them in both performance and speed** (NeuOpt also belongs to L2S solvers) * On TSP100, our NeuOpt (0.33%, 17m) **halves the gaps** of Costa et al. (0.77%, 1.1h), Sui et al. (0.74%, 1.3h), and Wu et al. (1.54%, 2h) **with faster speed**. And NeuOpt (0.00%, 7h) **significantly surpasses** the SOTA solver DACT (0.10%, 13.5h). * On CVRP100, our NeuOpt (0.85%, 2.3h) **reduces by an order of magnitude the gaps achieved by all L2S solvers with faster speed**, including NLNS (2.26%, 2.4h), NCE (1.59%, 10.4d), Wu et al. (3.87%, 5h), and DACT (1.11%, 1.7d). * **Compared to learning-to-construct (L2C) solvers: NeuOpt consistently achieves better/comparable performance with faster speed** * On TSP100, our NeuOpt (0.02%, 2.8h) surpasses all L2C solvers **in both gap and speed**, including AM+LCP (0.60%, 10.9h), Pointerformer (0.11%, 5.6h), Sym-NCO (0.08%, 5.6h), POMO (0.07%, 5.6h), POMO+EAS (0.05%, 10.9h), and POMO+EAS+SGBS (0.03%, 1.1d). * On CVRP100, NeuOpt (0.59%, 4.6h) surpasses L2C solvers like Sym-NCO (0.89%, 7.2h) and POMO (0.70%, 7.2h) **in both gap and speed**. Compared to POMO+EAS, to achieve gap 0.30%, NeuOpt is around **2 hours faster**. Compared to POMO+EAS+SGBS, to achieve gap 0.10%, NeuOpt is around **7 hours faster**. 
* We note that POMO+EAS+SGBS has evolved through 4 stacked schemes, including model design by AM (ICLR’18), training algorithms by POMO (NeurIPS’20), active search by EAS (NeurIPS’21), and beam search by SGBS (ICLR’22). Thus, **it is already promising that our NeuOpt, utilizing unified model parameters for all test instances without per-instance active search (model parameter update) or beam search boosters, outperforms POMO+EAS+SGBS**. The superiority of our NeuOpt could be further enhanced with similar per-instance boosters in the future. * **Compared to learning-to-predict (L2P) solvers: NeuOpt shows greater adaptability to constrained VRPs and still exhibits better performance on the whole** * On TSP100, our NeuOpt (0.33%, 17m) surpasses GCN+BS (1.35%, 46m), CVAE-Opt-DE (0.34%, 1.8d), and GNN+GLE (0.58%, 2.8h) in **both gap and speed**. Our NeuOpt (0.00%, 7h) surpasses the SOTA solver DIFUSCO (0.02%, 21.7h) with a **significantly faster speed**. * While we acknowledge that NeuOpt does not outperform DPDP and Att-GCN+MCTS on TSP100, **their performance is limited to TSP only**. On CVRP100, our NeuOpt (0.30% gap, 13.8h) significantly surpasses DPDP (0.41% gap, 1.2d), while Att-GCN+MCTS even fails to solve CVRP. * **Compared to solvers from OR fields: our NeuOpt exhibits lower gaps than LKH3 with faster speed on both CVRP100 and CVRP200** * On CVRP100, in Table 1 (main paper), **NeuOpt (0.30%, 13.8h) outperforms LKH3 (0.54%, 5.7d)**. * On CVRP200, in Table 5 (appendix), **NeuOpt (0.68%, 9.6h) outperforms LKH3 (1.17%, 21.6h)**. * We acknowledge that NeuOpt may not outperform the upgraded HGS solver recently released in 2022 (nor can any of the other neural solvers, given that neural solvers are still at an early stage). Nevertheless, we narrow such gaps for this line of neural solvers. Moreover, by integrating per-instance search boosters like EAS, our neural solver has the potential to further amplify the performance. 
**Lastly, beyond mere performance competition, we present novel ideas and insights as recognized by other reviewers.** We believe that our contributions, including the k-opt factorization in the MDP, the first L2S solver for flexible k-opt, the fresh constraint handling scheme GIRE, and the dynamic data augmentation, are worth sharing with the learning-to-optimize community. *** **[Code availability]** As promised in lines 278-279, **we will make our code, pre-trained models, and the used data publicly available on GitHub. Following the rebuttal guidelines, we have forwarded our code to the Area Chair**. Meanwhile, we note that our approach is not complex. For training on CVRP, it requires only 4 days, 5 days, and 8 days for sizes 20, 50, and 100, which is highly desirable as it is around half the time taken by POMO and DACT (their training time is around 2 weeks on CVRP100). *** **[Refine Figure 1]** Thanks for the valuable feedback. We will enhance the readability of Figure 1 and add explanations. Please refer to the global response for detailed clarification. *** **[Feasibility of the solution in the experiments]** We apologize for the confusion. The eventual solutions in all experiments are **always feasible** (because our approach only retrieves the best feasible solution visited during search as the final output), even though we allow temporary exploration of infeasible regions. Kindly refer to Appendix E.3 and Figure 9 for visualizations of the learned search trajectories (showing a trend of alternating searches between feasible and infeasible regions). *** **[Make benchmark more solid]** Thanks for the suggestion. We will study the extensions of NeuOpt-GIRE to more constraints in the future. However, we believe that the current benchmark is already solid, as recognized by other reviewers. *** **[Discuss limitations]** We apologize for the confusion. The last two points are future work. 
We will refine our paper to ensure all limitations and suggestions from reviewers are mentioned and addressed properly. --- Rebuttal Comment 1.1: Title: Thanks for the authors' rebuttal! Comment: Reviewer 2dzf, did the authors address your concerns about the significance of the performance, code availability, and other issues? Thanks. --- Rebuttal Comment 1.2: Title: Thank you for the response Comment: I truly appreciate the authors for taking the time to address my concerns. Upon thorough consideration of all responses, including mine and others, I maintain my viewpoint that the results, in comparison to the baselines LKH-3 and HGS, still lack significance. This is especially noteworthy given the relatively small scale of the routing problems and the time required for solving (spanning hours and days). While I acknowledge that the proposed method could potentially be integrated as a sub-solver into approaches like TAM (Two-stage Dividing Method) [1] and L2D (Learn-to-Delegate) [2] to tackle larger-scale VRPs, it is concerning that the execution time would considerably increase when employing the proposed method as opposed to utilizing LKH-3 or HGS (which can obtain good results quickly for small-scale VRPs). The discussion about feasibility and the extension to other constraints still has a lot of room for improvement. An additional point of consideration is the method employed to measure the solving time of the proposed approach, as well as that of the corresponding benchmarks like LKH-3 and HGS. Clear elaboration on this matter is necessary. Despite the aforementioned lingering concerns, I do find the proposed ideas interesting, particularly the MDP formulation for k-opt. I am also anticipating the release of the code. On the whole, I am inclined to raise and finalize my evaluation score to 5, with the expectation that the outlined matters will be duly addressed and clarified. [1] Hou Q, Yang J, Su Y, et al. 
Generalize Learned Heuristics to Solve Large-scale Vehicle Routing Problems in Real-time[C]//The Eleventh International Conference on Learning Representations. 2023. [2] Li S, Yan Z, Wu C. Learning to delegate for large-scale vehicle routing[J]. Advances in Neural Information Processing Systems, 2021. --- Reply to Comment 1.2.1: Title: Thank you for the support and please see our responses to outlined matters (1/2) Comment: We deeply appreciate the reviewer for considering raising the score. Thank you for acknowledging that our paper introduces interesting ideas, particularly the k-opt MDP formulation (besides presenting the first flexible k-opt solver, we also rethink the constraint handling by proposing GIRE and present the effective RDS decoder as well as the efficient dynamic data augmentation method). Below we further respond to your outlined matters. Regarding the solving time detailed in Table 1, we clarify that while our approach did take a long run time for certain cases, such long run time may be exclusive to the commonly used benchmarking setting, i.e., solving a total of 10,000 instances using one GPU only. Hence, all the compared neural solvers share similar long run times (e.g., see the hours and days run times of the SOTA baselines DACT and SGBS in our paper and their original papers). Given the limited memory of one GPU (e.g., 11GB for our 2080TI GPU), we need to split all 10,000 instances into smaller batches (e.g., 2000 instances) and run the batch inference sequentially (hence the longer run time). **For practical use of our NeuOpt, if users have the flexibility to use multiple GPUs or more powerful GPUs (like the A100 with 80GB memory) or even TPUs, the runtime could be significantly reduced as shown in the added Table below.** Meanwhile, as mentioned by Reviewer #vydT, users can choose proper K, T, and DA according to the available computation budget. 
Lastly, we note that one of the motivations of learning-to-search (L2S) solvers is to close the optimality gaps as much as possible given enough run time, and our NeuOpt has significantly enhanced the efficiency of existing L2S solvers.

Time on TSP100|1 GPU (11GB)|2 GPUs (22GB)|4 GPUs (44GB)
:-:|:-:|:-:|:-:
NeuOpt (DA=1,T=1k)|17m|8m|5m
NeuOpt (DA=1,T=5k)|1.4h|42m|23m
NeuOpt (DA=1,T=10k)|2.8h|1.4h|45m
NeuOpt (DA=5,T=1k)|1.4h|43m|21m
NeuOpt (DA=5,T=3k)|4.2h|2.2h|1h
NeuOpt (DA=5,T=5k)|7h|3.6h|1.7h

Time on CVRP100|1 GPU (11GB)|2 GPUs (22GB)|4 GPUs (44GB)
:-:|:-:|:-:|:-:
NeuOpt (DA=1,T=1k)|28m|14m|8m
NeuOpt (DA=1,T=5k)|2.3h|1.2h|36m
NeuOpt (DA=1,T=10k)|4.6h|2.3h|1.2h
NeuOpt (DA=5,T=6k)|13.8h|7h|3.3h
NeuOpt (DA=5,T=20k)|1.9d|22h|11h
NeuOpt (DA=5,T=40k)|3.8d|1.8d|22h

We follow the recognized benchmark conventions (i.e., also used in the latest DACT, EAS, SGBS, and DIFUSCO papers) that the run time is recorded under the premise that one GPU is used for neural solvers and one CPU is used for traditional solvers. Nevertheless, we acknowledge the inherent challenges of time comparison between neural solvers and traditional solvers given the differences in infrastructure (CPU vs GPU) and programming languages (C vs Python), as mentioned in lines 291-297. We will make this clearer following the suggestions. Compared to LKH-3, our NeuOpt has shown promising performance by achieving lower optimality gaps with relatively shorter run times. And kindly note that our NeuOpt is the first L2S solver to achieve this. We acknowledge that our NeuOpt may not fully outstrip SOTA traditional solvers (i.e., LKH and HGS) that have been developed for decades. However, this may also be the case for all existing neural solvers proposed in recent years. We note that our motivation is to unleash the potential of L2S solvers to further push the boundaries of neural solvers. 
Meanwhile, if we consider the idea of *“No Free Lunch”*, it is fair to say that no solver can be the best in all situations. We note that our approach does exhibit unique advantages compared to LKH-3 or HGS: our NeuOpt is able to learn and leverage deep patterns directly from data and relies less on hand-crafted domain knowledge about the target VRP, thus holding the potential to be swiftly adapted to learn to solve more VRP variants automatically (i.e., a generic tool to learn data-driven VRP solvers).
Summary: The paper aims to learn the k-opt operation, one of the famous local search methods, via neural networks. In particular, the authors model the k-opt operation as a sequential node selection process and use a recurrent dual-stream (RDS) decoder. Furthermore, a Guided Infeasible Region Exploration (GIRE) scheme is suggested to encourage the policy to escape local optima. The paper models k-opt as a sequence of basic moves on open Hamiltonian paths (not cycles), which allows the neural network model to easily learn the k-opt operation. Since the end of the sequence (i.e., the E-move) is a kind of I-move, k can be flexibly adjusted according to states. GIRE allows the model to explore infeasible regions, not only feasible regions; this encourages the model to escape local optima. In addition, the portion of infeasible solutions is dynamically adjusted. Strengths: The paper is well-written in general, and the key ideas are clearly delivered. The extensive experiments with various baselines suggest that the proposed method can mitigate the inefficiency of L2S solvers. One of the main benefits is that we can choose the proper K, the number of DA, and T according to the available computation budget. Also, the results show that NeuOpt gives promising performance in a similar time to L2C algorithms. Weaknesses: There is no analysis of where the performance comes from; the original k-opt operations already work well with a fixed k, so comparisons between the original k-opt and NeuOpt are required. Some detailed but important information is missing or hard to find (e.g., encoding scheme, dynamic augmentation, initial tour). Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have some questions about the details. 1. Effectiveness of learning the S-move: I think that choosing the S-move also gives good performance. 2. Why is LKH3 (known to be better than LKH2) not employed as a baseline? 3. Is the encoding necessary for every t? 4. 
How is the problem augmented in detail? I’ve read through the appendix but couldn’t find it. Minor Comments: 1. I think further explanation of the ES features is required (maybe in the appendix). 2. For better readability, additional cross-references to contents in the appendix are recommended. 3. There is no explanation where other papers are referenced (e.g., lines 192 and 257). I recommend adding further descriptions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the dual-stream structure is too restricted to consider additional context information. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
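The k-opt moves discussed in this review generalize the classical 2-opt exchange (remove two edges of the tour, reverse the segment in between, and reconnect). As background, here is a minimal self-contained sketch of that classical operation and a greedy 2-opt descent; this is our own illustration, not the paper's action factorization or code:

```python
import itertools

def tour_length(tour, dist):
    """Total length of a closed tour, given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour, i, j):
    """Classical 2-opt: cut after positions i and j, reverse the middle segment."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_opt_local_search(tour, dist):
    """Greedy descent: keep applying improving 2-opt moves until a local optimum."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            cand = two_opt_move(tour, i, j)
            if tour_length(cand, dist) < tour_length(tour, dist) - 1e-12:
                tour, improved = cand, True
    return tour
```

NeuOpt's point, as summarized above, is to let a policy network schedule such exchanges with a flexible k (via basis S/I/E-moves), rather than enumerating candidate moves greedily as this sketch does.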
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive and valuable comments. Thank you for acknowledging that NeuOpt improves the efficiency of L2S solvers, exhibits unique benefits for practical use, and achieves promising performance. We hope that the following response, along with additional experimental results, will address the remaining concerns. *** **[Similar time with L2C solvers]** Thanks for the comment. However, we would like to clarify that **our NeuOpt is consistently faster than all compared L2C solvers** while achieving better or similar performance. Specifically, * On TSP100, our NeuOpt (0.02%, 2.8h) surpasses all L2C solvers **in both gap and speed**, including AM+LCP (0.60%, 10.9h), Pointerformer (0.11%, 5.6h), Sym-NCO (0.08%, 5.6h), POMO (0.07%, 5.6h), POMO+EAS (0.05%, 10.9h), and POMO+EAS+SGBS (0.03%, 1.1d). * On CVRP100, NeuOpt (0.59%, 4.6h) outperforms L2C solvers like Sym-NCO (0.89%, 7.2h) and POMO (0.70%, 7.2h) **in both gap and speed**. Compared to POMO+EAS, to achieve a gap of 0.30%, NeuOpt is around **2 hours faster**. Compared to POMO+EAS+SGBS, to achieve a gap of 0.10%, NeuOpt is around **7 hours faster**. *** **[Where does the performance come from?]** Thanks for the question. Following your suggestion, we will make the analysis more clear. We attribute the performance of our approach to our 4 contributions, thoroughly verified through extensive experiments and ablation studies in the original paper: * **New action factorization:** It enables autonomous scheduling of dynamic $k$ during search (see lines 40-42), delivering advantages over fixed $k$ (see Figure 5a, lines 349-356). * **The RDS decoder:** It is flexible to control k-opt with any $k$ and more effectively captures the strong correlations between the removed and added edges (see line 45). Compared to the DACT decoder, our RDS decoder achieves much better performance with reduced run time (see Table 2 and lines 329-331). 
Further, designs within the RDS decoder, including the GRUs and dual streams, are all essential to performance (see Table 3 and lines 333-336). * **The GIRE scheme:** It is the first constraint handling scheme that promotes the exploration of both feasible and infeasible regions, bringing multiple benefits (see lines 52-57). It has been verified to be generic to boost both DACT and our NeuOpt for better constraint handling (see Table 2, lines 326-328). Besides, designs within GIRE, including the reward shaping and feature supplement, are both essential to performance (see Figure 4 and lines 337-343). We also provide visualizations and discussions about GIRE (see Appendix D and Figure 8). * **Dynamic data augmentation:** It enables NeuOpt to explicitly escape from the local optima (see Table 4 and lines 344-348). Table 1 also shows the effects of different augmentation settings on performance. *** **[Comparison with other k-opt methods]** Following the suggestion, we have added another baseline called OriginOpt and gathered the results together with those reported in the original paper in the tables below. Note that various traditional and learning-based k-opt baselines are now comprehensively included, as detailed below: * **Traditional k-opt baselines:** * **OriginOpt (static)** and **OriginOpt (dynamic)**, which randomly perform the k-opt (rather than using our NeuOpt) in a static and dynamic manner, respectively. * **LKH**, which not only performs dynamic k-opt, but also integrates other complex heuristic designs for better performance (e.g., an $\alpha$-measure-based edge candidate set, partitioning rules, tour merging strategies, the iterative partial transcription technique, the backbone-guided search, etc). 
* **Learning-based k-opt baselines:** * Neural 2-opt: **DACT**, **Wu et al.**, and **Costa et al.** * Neural 3-opt: **Sui et al.** The results demonstrate that our NeuOpt significantly outperforms all learning-based k-opt baselines, as well as the original k-opt OriginOpt baseline. Notably, our NeuOpt is able to find lower gaps at a faster speed than the strong LKH3 solver (which employs complex heuristics beyond k-opt) on CVRP100.

TSP100|Gap|Time
-|-|-
LKH2|0.00%|5.7h
OriginOpt (static)|210.00%|17m
OriginOpt (dynamic)|202.40%|17m
Costa et al.|0.77%|1.1h
Sui et al.|0.74%|1.3h
Wu et al.|1.54%|2h
DACT|0.10%|13.5h
**NeuOpt (DA=1,T=1k)**|0.33%|17m
**NeuOpt (DA=5,T=3k)**|0.01%|4.2h

CVRP100|Gap|Time
-|-|-
LKH3|0.54%|5.7d
OriginOpt (static)|93.23%|2.3h
OriginOpt (dynamic)|103.84%|2.3h
Wu et al.|3.87%|5h
DACT|1.11%|1.7d
**NeuOpt (DA=1,T=5k)**|0.85%|2.3h
**NeuOpt (DA=5,T=6k)**|0.30%|13.8h

*** **[Effects of S-move]** Thanks for the insightful comment. The reviewer is correct that learning the S-move is crucial for good performance. Kindly refer to Figure A4 in the attached PDF under the global response for new results. *** **[LKH2 or LKH3]** Kindly note that **we employed LKH3 for CVRP, while LKH2 was used for TSP**. This distinction was made because LKH2 is the latest version for TSP, whereas the updates in LKH3 are only made for constrained TSP (such as CVRP). *** **[Encoding for every t?]** Yes, because the current solution and features change with each $t$. This is common across all existing L2S and L2C solvers. *** **[Is dual-stream structure restricted?]** No. Let $s_1, s_2$ be the logits of the current dual streams, and $s_3, s_4, …$ represent additional ones. While our paper utilizes $s_1 + s_2$, more streams can be easily accounted for by MLP($s_1, s_2, s_3, s_4, …$) if needed. *** **[Explain details & other minor comments]** Thanks for the suggestions. We will refine our paper accordingly. 
Specifically, the encoding and augmentation details follow the exact methods used in [1]. We will include algorithm pseudocode for dynamic data augmentation, and we will clarify that the initial tours are randomly generated (the same as in all existing L2S solvers).

```
References:
[1] Efficient Neural Neighborhood Search for Pickup and Delivery Problems (IJCAI’22)
```

--- Rebuttal Comment 1.1: Title: Thank you for the responses. Comment: Thank you for faithfully answering my questions. I will maintain my score. --- Reply to Comment 1.1.1: Title: Thank you for the support Comment: We deeply appreciate the reviewer for acknowledging our response and continuing to support our paper. We will incorporate your valuable suggestions and our discussions in the revised paper.
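Since the dynamic data augmentation pseudocode is deferred to the revision, the mechanism described in this thread (re-augment the instance whenever no improvement is seen for T_DA consecutive steps, while tracking the best solution found) could be sketched as follows. The symmetry transformations, the improvement test, and all names here are our illustrative assumptions, not the authors' implementation:

```python
import random

def augment(coords, rng):
    """Illustrative augmentation: one of the 8 symmetries of the unit square
    (rotations/reflections), which change the input but preserve all tour lengths."""
    ops = [
        lambda x, y: (x, y), lambda x, y: (y, x),
        lambda x, y: (1 - x, y), lambda x, y: (x, 1 - y),
        lambda x, y: (1 - x, 1 - y), lambda x, y: (y, 1 - x),
        lambda x, y: (1 - y, x), lambda x, y: (1 - y, 1 - x),
    ]
    op = rng.choice(ops)
    return [op(x, y) for x, y in coords]

def search_with_dynamic_augmentation(coords, improve_step, total_steps, t_da, seed=0):
    """Run a search; if the best cost stalls for t_da consecutive steps,
    switch to a fresh augmentation of the instance and keep searching."""
    rng = random.Random(seed)
    inst = coords
    best_cost, stall = float("inf"), 0
    for _ in range(total_steps):
        cost = improve_step(inst)  # one search step on the current view, returns its cost
        if cost < best_cost - 1e-12:
            best_cost, stall = cost, 0
        else:
            stall += 1
        if stall >= t_da:          # presumed trigger for re-augmentation
            inst, stall = augment(coords, rng), 0
    return best_cost
```

Unit-square symmetries are a common VRP augmentation choice (e.g., in POMO-style inference) precisely because they alter the model's input while leaving every tour length unchanged, so the best cost found remains valid across augmentations.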
Summary: In this paper, the authors propose Neural k-Opt (NeuOpt), which factorizes a generic k-opt exchange operation as a series of base operations. They also introduce a Guided Infeasible Region Exploration (GIRE) scheme for the Capacitated Vehicle Routing Problem (CVRP), where one augments a reward function for RL with signals about exploring infeasible solution spaces. On the model architecture side, the authors propose a recurrent dual-stream decoder that consumes both moves and edges. The authors provided a comprehensive comparison with baseline methods for problems of sizes up to 100 cities. Strengths: 1. I found the action factorization novel and general. A limitation of past k-opt based learning-to-search methods is that they require a pre-determined value of k. The proposed factorization method can accommodate arbitrary k given the agent enough steps. 2. GIRE seems a promising approach to incorporating information about near-feasible regions into exploration. As far as I know, this contribution is novel. 3. The empirical comparisons with baseline methods are comprehensive. I appreciate the authors including an up-to-date list of baseline methods. There are some expected issues with reproductions, but I definitely applaud the authors’ efforts here. 4. The paper is relatively well-written. The authors did a good job of packing a dense set of information within the page limit. Weaknesses: 1. The problem sizes considered are relatively small. Methods like DIFUSCO experimented with sizes up to 10000 cities. It is unclear how practically useful this method is with small problem sizes. 2. Another common evaluation mode for these classes of problems is the generalization ability of a trained model, that is, extrapolating the model's performance to problems of larger sizes than those in the training set. This is not considered in this paper. 3. In general, the empirical results are mixed. On TSP, NeuOpt is not better than DPDP and Att-GCN+MCTS. 
The comparison with DIFUSCO is obfuscated by a large time discrepancy from the original paper, possibly due to difficulty in reproducing their results. On CVRP, POMO+EAS+SGBS methods are comparable with NeuOpt. 4. The main advantage of the factorized action representation is that one could potentially handle arbitrary k-opt in a general way. However, in the experiments, the authors used a value of 4 for k, which seemed limiting. In addition, in the ablation on values of k, larger k values worked better, so why did the authors not choose a larger value of k for the main evaluations? 5. There are some places where the exposition could be improved. Please see my questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In section 5 about the reward shaping (near line 266), how is $P_t(\mathcal{U}|\mathcal{U})$ computed exactly? 2. In the entropy computation, why are $P_t(\mathcal{U}|\mathcal{F})$ and $P_t(\mathcal{F}|\mathcal{U})$ not included? It is possible to transition between feasible and infeasible solution spaces. 3. The actual entropy calculation involves hyperparameters $c_1$ and $c_2$. Can the authors provide more context on how they were chosen and their impact on the stability of training? 4. In Table 1, for different variants of NeuOpt, what does the DA number refer to? Is that the $T_{DA}$ mentioned in section 4.3? 5. In Figure 5(a), the results for K = 5 and 6 w/o E-move are missing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors could add discussions about the generalization ability of trained models as well as scaling to larger problem sizes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive and valuable comments. We are delighted that the reviewer found our approach novel, general, and promising. Thank you for recognizing our efforts to include extensive baselines. We hope that the following response, along with additional experimental results, will address the remaining concerns. *** **[How practically useful our NeuOpt is?]** Thanks for the comment. * Regarding - *"sizes considered are relatively small"* - we would like to clarify that we did improve the scalability of the learning-to-search (L2S) solvers by direct training on size 200 (see global response). For even larger instances, we recommend integrating our NeuOpt with the SOTA **divide-and-conquer** frameworks. In such frameworks (e.g., L2D [2] and RBG [3], references in global response), a divider is usually learned to segment large-scale instances (e.g., TSP with 10,000 nodes) into sub-problems (e.g., fifty TSP-200 instances), where the sub-problems are solved by LKH3 or POMO (in parallel). **Given the new SOTA performance on sizes 100 and 200, our NeuOpt could serve as a more desirable conquer solver in these frameworks**. * Regarding - *"methods like DIFUSCO experimented with sizes up to 10000 cities"* - we acknowledge that L2P solvers excel at scalability, however, **methods like DIFUSCO are limited to supervised learning and TSP only** since their predicted heatmaps may not handle other constraints. Conversely, our NeuOpt and GIRE could be more adaptable for constrained VRPs via DRL. Moreover, we have shown in Table 1 that our NeuOpt (0.00%, 7h) exhibited much better performance than DIFUSCO (0.02%, 21.7h) on TSP100. 
* Regarding - *"how practically useful this method is?"* - the unique **research and practical values** of our approach are listed as follows: * SOTA performance compared to up-to-date baselines * the first L2S solver for flexible k-opt search * the first constraint-handling scheme beyond masking * a better conquer solver for divide-and-conquer frameworks * more generic and adaptable solvers (for constrained VRPs) *** **[Generalization evaluation]** Yes, we have included such evaluations in the original Appendix. Kindly refer to the global response for more details. *** **[Time discrepancy with DIFUSCO]** Thanks for the question. Their reported time is for 128 instances while ours is for 10,000 instances. Meanwhile, their hardware (V100 GPU, CPU 2.50GHz) is different from ours (2080TI GPU, CPU 2.40GHz). *** **[Results are mixed]** We agree with the reviewer that different solvers have unique pros and cons, and we note that our NeuOpt does achieve SOTA performance on the whole. Specifically, * The good performance of **DPDP and Att-GCN+MCTS is limited to TSP, and our NeuOpt is more adaptable to constraints**. Our NeuOpt (0.30%, 13.8h) significantly outperforms DPDP (0.41%, 1.2d) on CVRP100, while Att-GCN+MCTS even fails to solve CVRP. * While achieving comparable gaps, **our NeuOpt-GIRE is around 7h faster than POMO+EAS+SGBS on CVRP100 and also significantly outperforms it on TSP100**. We note that POMO+EAS+SGBS has evolved through 4 stacked schemes, including model design by AM (ICLR’18), training algorithms by POMO (NeurIPS’20), active search by EAS (NeurIPS’21), and beam search by SGBS (ICLR’22). Thus, it is already promising that our NeuOpt, utilizing unified model parameters for all test instances without per-instance active search (model parameter update) or beam search boosters, outperforms POMO+EAS+SGBS. We believe the superiority of our NeuOpt could be further enhanced with similar per-instance boosters in the future. 
* Lastly, as one of the L2S solvers, **our NeuOpt is able to halve (on TSP) or even reduce by an order of magnitude (on CVRP) the gaps achieved by other L2S solvers with a much shorter run time**. *** **[Why use K=4?]** Thanks for the question. We use K=4 to balance computational costs and better performance, aligning with traditional solvers like LKH that avoid $k\geq5$ due to unbounded exploration. Yes, our factorization benefits from handling k-opt in a general way. Besides, it lets the model autonomously schedule dynamic k during the search. Kindly refer to our response **[Discuss trade-offs and computational costs]** and **[Why dynamic $k$ help escape local minima?]** to reviewer #NeSp for more discussions. *** **[Compute $P(\mathcal{U}|\mathcal{U})$]** We adopt $P(\tau'\in\mathcal{U}|\tau\in\mathcal{U})=\frac{P(\tau'\in\mathcal{U},\tau\in\mathcal{U})}{P(\tau\in\mathcal{U})}$, where $P(\tau'\in\mathcal{U},\tau\in\mathcal{U})$ and $P(\tau\in\mathcal{U})$ are estimated based on the historical solution records of the past $T_{EI}$ steps. *** **[Why not include $P(\mathcal{U}|\mathcal{F})$ and $P(\mathcal{F}|\mathcal{U})$?]** We opt not to explicitly include them since the MLP is able to derive $P(\mathcal{U}|\mathcal{F}) = 1 - P(\mathcal{F}|\mathcal{F})$ and $P(\mathcal{F}|\mathcal{U}) = 1 - P(\mathcal{U}|\mathcal{U})$. *** **[Effects of c1 & c2]** Thanks for the suggestion. They control the entropy measure patterns (shown in Figure 3). We set the values to only penalize extreme search behavior if the feasibility transition probability is outside [0.25, 0.75]. We follow the suggestion and add Figure A1 and Figure A2 in the attached PDF under global response. Results show that they may not affect training stability (thus no need for tuning in practical use). *** **[DA and T_DA]** We apologize for the confusion. 
DA refers to the number of augmentations for an instance, and T_DA is the maximum number of steps allowed before considering the search trapped in a local optimum. If T_DA is reached, the augmentation is changed to a new one. We will include algorithm pseudocode for dynamic data augmentation in the revised Appendix. *** **[Full results of w/o E-move]** Thanks for the comment. We have added the requested results to Figure A3 in the attached PDF under global response. The conclusion remains unchanged. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I want to thank the authors for their detailed responses to my questions. I would suggest including the generalization results in the main paper in a revision since that is an important set of experiments showcasing the effectiveness of the proposed method. As my questions are well addressed, I will raise my score to 7. --- Reply to Comment 1.1.1: Title: Thank you for the support and suggestion Comment: We deeply appreciate the reviewer for acknowledging our response. We will follow the suggestion to include the summarized Table A1 and Table A2 as well as corresponding discussions in the revised main paper.
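For concreteness, the empirical estimation of the feasibility transition statistics described in this rebuttal — conditional probabilities replaced by frequencies over the past $T_{EI}$ steps — might look like the following sketch. The boolean encoding, names, and window handling are our assumptions, not the authors' code:

```python
def transition_probs(feasible_history):
    """Estimate P(U|U) and P(F|F) from a window of per-step feasibility flags
    (True = feasible solution, False = infeasible), using the rebuttal's rule
    P(tau' in U | tau in U) = P(tau' in U, tau in U) / P(tau in U)
    with probabilities replaced by empirical frequencies."""
    pairs = list(zip(feasible_history, feasible_history[1:]))  # (tau, tau') steps
    n_u = sum(1 for prev, _ in pairs if not prev)              # visits to U (as tau)
    n_f = len(pairs) - n_u                                     # visits to F (as tau)
    n_uu = sum(1 for prev, nxt in pairs if not prev and not nxt)
    n_ff = sum(1 for prev, nxt in pairs if prev and nxt)
    p_uu = n_uu / n_u if n_u else 0.0
    p_ff = n_ff / n_f if n_f else 0.0
    return p_uu, p_ff
```

In practice the history could be kept in a `collections.deque(maxlen=T_EI)` so only the last $T_{EI}$ steps contribute; the remaining transitions follow as complements, e.g. $P(\mathcal{F}|\mathcal{U}) = 1 - P(\mathcal{U}|\mathcal{U})$, consistent with the authors' answer on why they are not tracked explicitly.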
Summary: The paper introduces Neural k-Opt (NeuOpt), a deep learning-based vehicle routing solver, designed to handle k-opt exchanges for any k≥2. Unlike existing Learning-to-Search (L2S) solvers, NeuOpt employs a tailored action factorization method, which allows complex k-opt exchanges to be broken down into simpler basis moves. This approach grants the model the flexibility to determine an appropriate k for each search step, enabling a balance between coarse-grained (larger k) and fine-grained (smaller k) searches. Moreover, the paper introduces the Guided Infeasible Region Exploration (GIRE) scheme, which promotes exploration of both feasible and infeasible regions in the search space. GIRE enriches the policy network with additional features to indicate constraint violations and exploration behavior statistics, aiding in escaping local optima, discovering shortcuts to better solutions, and enhancing the model's understanding of the problem landscapes. NeuOpt is trained using reinforcement learning (RL) and incorporates a dynamic data augmentation method during inference for improved efficiency. Extensive experiments on classic vehicle routing problem variants (TSP and CVRP) demonstrate the superiority of NeuOpt and GIRE over some existing approaches, including traditional hand-crafted solvers. Strengths: - Novel Approach: The paper introduces a novel approach called Neural k-Opt (NeuOpt) to handle k-opt exchanges for vehicle routing problems. This approach offers flexibility by allowing k to be any value ≥2, which can potentially lead to better solutions and more efficient search processes. - Guided Infeasible Region Exploration (GIRE): The introduction of GIRE is a significant contribution that promotes exploration of both feasible and infeasible regions in the search space. This unique scheme bridges feasible regions, helps escape local optima, and forces explicit awareness of the VRP constraints, potentially leading to improved performance in constrained VRPs. 
- Comprehensive Evaluation: The paper claims to have extensive experiments on classic VRP variants (TSP and CVRP) to validate the proposed NeuOpt and GIRE. A thorough evaluation of the model's performance on various VRP instances can demonstrate its effectiveness and potential advantages over existing methods. Weaknesses: - Scalability: While the paper claims that NeuOpt outperforms other methods, including hand-crafted solvers, there is no mention of its scalability to larger and more complex instances of vehicle routing problems. The effectiveness of NeuOpt on larger and real-world datasets should be explored. - Complexity of Action Factorization: Although the tailored action factorization method is introduced to handle k-opt exchanges flexibly, it may introduce increased complexity in the model's architecture and training process. The paper should discuss potential trade-offs and computational costs associated with the proposed factorization. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Figure 1 is very hard to process... - lines 122-123: I believe the definition of TSP is slightly inaccurate. The main objective of the TSP is to find the optimal Hamiltonian cycle that **minimizes** the total distance or cost required to visit all the nodes exactly once and return to the starting node. - Can the authors provide more discussion on why choosing k dynamically should help escape local minima? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: see Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive and valuable comments. We are delighted that the reviewer acknowledged that our contributions (NeuOpt and GIRE) are novel and significant, and our experiments are comprehensive and extensive. We hope that the following response will resolve the remaining concerns. *** **[Effectiveness of NeuOpt on larger and real-world instances]** Thanks for raising the concern. We would like to clarify that our NeuOpt (for TSP) and NeuOpt-GIRE (for CVRP) were evaluated on these more complex instances as detailed in the original Appendix. Please refer to the above global response for a summary of these results. *** **[Is our action factorization complex?]** We understand the concern about complexity. As pointed out by the reviewer, our Action Factorization is designed to allow the deep model (our NeuOpt) to control k-opt exchanges flexibly (with any $k \ge 2$). We would like to clarify that our Action Factorization does not introduce much complexity into the model architecture or the training process. In fact, the designs of our NeuOpt for achieving flexible k-opt are lightweight and desirable, for the following three reasons: * **Regarding the computational complexity:** Firstly, our factorization formulation is already simpler (while ensuring flexibility) than the original k-opt definition. Moreover, in the proposed RDS decoder, we **have further simplified** the parametrization of the above action factorization to a *node selection process* (with the type of basis moves in the action factorization being automatically inferred, see lines 198-201). This means that the decoder **only needs to specify one node in each decoding step**, resulting in a computational complexity (time) that grows approximately linearly with K. We believe that such complexity is highly desirable for decoding k-opt decisions. 
* **Regarding the model architecture:** We note that the model size (number of parameters) **remains the same for varying K** since our RDS decoder is flexible to control any k-opt with a unified model architecture. In Table 2 and our analysis in lines 329-331, we compared our NeuOpt (K=2) with DACT (2-opt) where both of them are learning to control 2-opt but with different decoders. Compared to the existing DACT decoder, our RDS decoder achieved better performance (our gap 0.24% vs its gap 0.32%) with **reduced** time (our 119s vs its 171s) but a slightly increased model size (our 0.683m vs its 0.633m). * **Regarding the training complexity:** We reported the training time in lines 615-617 in the original Appendix. For training on CVRP, it requires only 4 days, 5 days, and 8 days for sizes 20, 50, and 100, which are highly desirable as they are about **half the time taken by POMO and DACT** (their training time is around 2 weeks on CVRP100). *** **[Discuss trade-offs and computational costs]** We agree with the reviewer that there are trade-offs between better performance and increased computational costs. We would like to clarify that we did mention such trade-offs in the original submission: * In Table 2 and lines 329-331, we compared our NeuOpt (K=2) with NeuOpt (K=4), and mentioned that the performance is *"further amplified by increasing K to 4 for NeuOpt at the cost of slightly increased run time"*. * In Figure 5(a) and lines 354-357, we compared our NeuOpt with K=2, K=3, K=4, K=5, K=6, and mentioned that *"however, there is a trade-off between larger K and longer decoding time."* Following the suggestion of the reviewer, we have further **supplemented this by gathering more data to illustrate the trade-offs more clearly**. In the table below, we exhibit the performance and the inference/training time for different K. The results did suggest the aforementioned trade-offs, and we used K=4 to balance the trade-offs in this paper. 
We will further refine our paper to make the discussion clearer.

| | Inference Time (T=1k) | Training Time (per epoch) | Objective Values | Gaps to the Best |
|:-:|:-:|:-:|:-:|:-:|
| NeuOpt (K=2) | 2m02s | 20m | 7.817 | 0.30% |
| NeuOpt (K=3) | 2m07s | 21m | 7.806 | 0.15% |
| NeuOpt (K=4) | 2m13s | 22m | 7.798 | 0.05% |
| NeuOpt (K=5) | 2m19s | 24m | 7.795 | 0.01% |
| NeuOpt (K=6) | 2m26s | 25m | 7.794 | 0.00% |

*** **[Refine Figure 1]** Thanks for the valuable feedback. We will enhance the readability of Figure 1 and add explanations. Please refer to the global response for detailed clarification. *** **[Refine TSP definition]** Thanks for the valuable feedback. We will refine it according to your suggestion and thoroughly check all the other definitions. *** **[Why dynamic $k$ help escape local minima?]** Thanks for the valuable question. We will refine our paper based on the following discussion, from two aspects: * **Exploration of different neighborhoods:** The value of $k$ defines the neighborhood of the current solution. For a given solution, different neighborhoods may have different local minima. For example, from the current solution $\tau_0$, we may obtain a local minimum solution $\tau_1$ via a 2-opt move (only changing 2 edges of $\tau_0$), while a 5-opt move on $\tau_0$ might lead to another local minimum solution $\tau_2$. Hence, the capability to vary $k$ dynamically can equip the solver with flexibility and reduce the likelihood of confinement to particular local minima. * **Adaptive search strategies:** Different search stages may require different $k$ to foster an efficient search. However, a statically defined $k$ might either be too restrictive, leading to rapid convergence to local optima, or too unconstrained, resulting in excessive exploration without convergence. As mentioned in the introduction (lines 41-42), dynamically scheduling $k$ could achieve a *"balance between coarse-grained (larger $k$) and fine-grained (smaller $k$) searches"*. 
In our NeuOpt, $k$ is autonomously determined and dynamically scheduled during the search. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and additional experiments. Despite the limitations mentioned by [2dzf,vydT], I think that the algorithm is interesting. I will keep my original score. --- Reply to Comment 1.1.1: Title: Thank you for the support Comment: We deeply appreciate the reviewer for acknowledging our response and additional experiments. Thank you for your continued support of our paper. We will incorporate the valuable suggestions from all the reviewers and ensure all limitations are addressed properly in the revised paper. Furthermore, we note that the Area Chair has kindly verified the receipt of our code repo link in a separate comment.
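As a concrete illustration of the neighborhoods discussed in the rebuttal: a basis 2-opt exchange removes two edges of a tour and reconnects it by reversing the segment between them. The sketch below is a generic textbook 2-opt move with hypothetical helper names, not the authors' NeuOpt implementation.

```python
def two_opt_move(tour, i, j):
    """Remove edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]),
    then reconnect the tour by reversing the segment tour[i+1..j]."""
    assert 0 <= i < j < len(tour)
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

# Applying the move to a 5-city tour changes exactly two edges:
# (0,1) and (2,3) are replaced by (0,2) and (1,3).
print(two_opt_move([0, 1, 2, 3, 4], 0, 2))  # -> [0, 2, 1, 3, 4]
```

A larger $k$ removes $k$ edges at once, reaching solutions that no single 2-opt move can, which is why different $k$ induce different local minima.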
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments. We are pleased to see that the reviewers have recognized our NeuOpt approach (including the k-opt action factorization, the RDS decoder, the fresh constraint-handling scheme GIRE, and the dynamic data augmentation) as being **novel** (#NeSp and #8kc7), **general** (#8kc7), **of significant contribution** (#NeSp), **beneficial for use** (#vydT), and **reasonable** (#2dzf). We also appreciate the positive feedback, where all reviewers found our paper mostly **well-written** and our experiments with various baselines **extensive** and **comprehensive**. In this global response, we intend to address the common concerns. *** **[Evaluation on more complex instances]** While Reviewers #NeSp and #8kc7 raised concerns regarding the evaluation of our approach on more complex (larger or real-world) instances, we would like to clarify that our NeuOpt (for TSP) and NeuOpt-GIRE (for CVRP) were indeed evaluated on these instances **in the original Appendix**. We will add summary tables and additional discussions to the revised main paper. Below we summarize the key results. * **Results on larger instances:** * Firstly, we followed existing conventions to benchmark our approach against various baselines on TSP and CVRP instances with sizes 20, 50, 100 in Table 1. Beyond these results that affirm the superior performance of our approach, we have also conducted experiments on larger TSP200 and CVRP200 instances as detailed in Appendix E.2 and Appendix Table 5. The results indicate that our approach consistently finds close-to-optimal solutions and still **outperforms the traditional LKH3 solver on CVRP200**. We note that due to the prohibitively long training time, existing L2S solvers like DACT [1] may not be efficient for CVRP with size 200, highlighting the better scalability of our approach. 
* For solving even larger instances, we recommend integrating our NeuOpt with the SOTA **divide-and-conquer** frameworks (e.g., [2][3]), which require efficient solvers for handling sub-problems of small size. Given the new SOTA performance on sizes 100 and 200, our NeuOpt could serve as a **more desirable conquer solver** to complement the learning-based divide solver in such frameworks. Besides, as outlined in Section 7, future work would focus on integrating our NeuOpt with predicted heatmaps by L2P solvers to reduce the search space, and employing more scalable encoders or efficient CUDA implementations to further improve the scalability. * Lastly, we also examined the **generalization of our model to larger sizes** (generalizing models trained on size 100 to larger sizes), details of which are given in the next bullet point. * **Results on real-world instances (generalization across size and distribution):** * In Appendix E.4 (Table 6 and Table 7), we have evaluated the generalization of our NeuOpt models trained on TSP and CVRP instances with size 100 and uniform distribution to real-world instances (from TSPLIB and CVRPLIB) with larger sizes (e.g., size 200) and/or different distributions (e.g., clustered node distributions, corner depot, very different demand settings, etc.). For ease of reference, we have summarized the results in Table A1 and Table A2 in the attached PDF. The results consistently showcase that our NeuOpt could yield the lowest average gap. * Notably, the generalization of our NeuOpt is even more promising when compared to the AMDKD method [4], which explicitly boosted the cross-distribution generalization performance of POMO based on knowledge distillation. This further suggests the promising potential of our NeuOpt if it were augmented with a similar generalization-boosting method in future work (e.g., applying AMDKD [4] or per-instance gradient update EAS [5] to our NeuOpt). 
```
References:
[1] Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer (NeurIPS’21)
[2] Learning to Delegate for Large-Scale Vehicle Routing (NeurIPS’21)
[3] RBG: Hierarchically Solving Large-Scale Routing Problems in Logistic Systems via Reinforcement Learning (KDD’22)
[4] Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation (NeurIPS’22)
[5] Efficient active search for combinatorial optimization problems (ICLR’22)
```
*** **[Clarification on Figure 1]** We thank the reviewers for providing this feedback and we apologize for the confusion due to the space limitation. We will add more explanations in the revised paper and further simplify this figure. For clarification, Figure 1 depicts an example on TSP-9 to illustrate how our decoder determines a 3-opt exchange with K=4 steps. Even though K=4, the 3-opt is selected instead of the 4-opt due to the E-move being chosen at the final decoding step. The upper portion of the figure provides a visual representation of how the dual-stream attentions are computed, where the top yellow part represents the move stream $\mu$, while the bottom blue part represents the edge stream $\lambda$. The lower portion of the figure demonstrates how the inferred basis moves contribute to the modification of the current solution. At each step $\kappa$, our RDS decoder computes the dual-stream attention (depicted by both yellow and blue dotted arrows) from representations of historical decisions $q^\kappa_\mu$, $q^\kappa_\lambda$ to node embeddings $h_i$. Each $q$ undergoes processing by the corresponding GRUs that model the historical decisions. Following the attention, one node is selected (highlighted in green), thereby deciding a basis move $\Phi_\kappa(x_\kappa)$ to modify the solution. Ghost marks are used to indicate the same location of a cyclic solution when viewed from a flat perspective as in this figure. 
*** **[Additional experiments]** Following the suggestions of Reviewer #NeSp, #8kc7, and #vydT, we have added new experiments (figures and tables) in the attached PDF. Details can be found in our specific response to each reviewer. Pdf: /pdf/139beb5c28a6b4c18c128fc9b836d75d6376ec01.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Learning Linear Causal Representations from Interventions under General Nonlinear Mixing
Accept (oral)
Summary: The authors study the problem of learning latent causal models from interventional data. The problem was formulated under linear or polynomial mixing in previous works, and this work considers a more general setting of nonlinear mixing. The main contributions are the identifiability results of the latent causal model and a contrastive algorithm that identifies the latent model. Strengths: 1. The extension to nonlinear mixing is an important and challenging problem. 2. Using the contrastive algorithm to identify the model parameters is novel and sound. Weaknesses: 1. Single-node interventions are considered, while more nodes can be intervened on in each environment in practice. For theoretical results, I think focusing on single-node interventions is fine, but it should be discussed whether the results have the potential to be generalized. 2. It was not mentioned whether the latent dimension $d$ is identifiable in the nonlinear setting. For all the identifiability results in the paper, the considered $\widetilde{Z}^{(i)}$ is assumed to have the same dimension as the true latent variable $Z^{(i)}$ (i.e., $d$). Note that $d$ is identifiable and equals the rank of the precision matrix of $X$ in the linear setting (Section 3.1 of [1]). This issue occurs in the experiments as well: $d$ is provided to the contrastive method. But it is often not possible to know $d$ in practical settings. [1] C. Squires, A. Seigal, S. Bhate, and C. Uhler. Linear causal disentanglement via interventions, 2023. Additional comment: I think it would be helpful to provide a concrete toy example to demonstrate the identifiability since the proof intuition is a bit vague. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Is $d$ identifiable in the nonlinear setting? And how can $d$ be identified for the contrastive method? I would like to raise my score if this problem can be addressed properly. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and their suggestions. We will address their concerns in order. **Regarding the identifiability of $d$:** Yes, the dimension $d$ is identifiable from the observational distribution in our setting for the following reason. The image $f(\mathbb{R}^d)=M\subset \mathbb{R}^{d'}$ is a submanifold of dimension $d$, i.e., it locally looks like a $d$ dimensional hyperplane, so its dimension can be identified from the dimension of the tangent space. Put differently, the datapoints in a small neighborhood of a point $x$ generate essentially a $d$ dimensional linear space. In the setting considered in [2] that the reviewer highlighted, the latent dimension can be directly estimated from the rank since the mixing $f$ was assumed to be linear (which is a specialized setting). In general, when $f$ is non-linear as in most representation learning tasks, this task of estimating the latent dimension for complex data is much harder but also highly important and has been subject to intense study. For instance, in [1] the authors define a maximum likelihood estimator for the dimension of the data manifold around a data-point $x$ (it exploits that the volume of the $d$-dimensional ball scales like $r^d$ with the radius $r$). We show in the table below our estimates when applying this estimator (for the $k=50$ nearest neighbors) to our MLP setting with observed dimension $d'=100$ and $n=10000$ samples where we average the estimator over 10 different data-points $x$. We report the mean over 10 different runs and the reported error is the standard deviation. | Ground truth $d$ | Estimated $\hat{d}$ | | -------- | ------- | | $5$ | $5.3\pm 0.3$ | | $10$ | $9.8 \pm 0.6$ | | $20$ | $16.2 \pm 0.6$ | So for the settings we consider we can estimate the dimension experimentally, but, as one would expect, the problem becomes more difficult for high dimensions. 
To use our contrastive algorithm with unknown dimension, one could first use a standard estimator to estimate the dimension $d$ of the data manifold (as in [1]) and then apply the contrastive algorithm. We thank the reviewer for bringing up this point, and we will clarify in the paper that $d$ is also identifiable. **Regarding multi-node interventions:** This is an interesting direction that is beyond the scope of our present work, but we expect that new block identifiability results can be derived for multi-node interventions. We will be happy to add a discussion of this along with further future directions. We hope our response clarifies the reviewer's points of concern, especially the ones that led them to reduce their score. We're happy to address any further concerns and also welcome additional feedback on improving the paper. [1] Elizaveta Levina and Peter Bickel. Maximum Likelihood Estimation of Intrinsic Dimension, NeurIPS 2004. [2] C. Squires, A. Seigal, S. Bhate, and C. Uhler. Linear causal disentanglement via interventions, ICML 2023. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks for the clarifications. I had some negative impressions of the paper when I first read it, since the identifiability of the latent dimension was the first thing I was looking for in the theoretical results. But it was not mentioned at all. Overall, this is a solid work on an important problem. I am satisfied with the provided evidence. Therefore, I would raise my score.
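The Levina-Bickel MLE of intrinsic dimension referenced in the rebuttal, $\hat{d}(x) = \big[\frac{1}{k-1}\sum_{j=1}^{k-1}\log(T_k(x)/T_j(x))\big]^{-1}$ with $T_j(x)$ the distance from $x$ to its $j$-th nearest neighbor, can be sketched as follows. This is a simplified single-point version (the rebuttal averages over 10 points and 10 runs); the function name and toy data are hypothetical.

```python
import numpy as np

def mle_intrinsic_dim(X, idx, k=50):
    """Levina-Bickel MLE of the intrinsic dimension of the data
    manifold around the point X[idx], using its k nearest neighbors."""
    dists = np.linalg.norm(X - X[idx], axis=1)
    T = np.sort(dists)[1:k + 1]  # drop the zero distance to X[idx] itself
    # d_hat = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    return 1.0 / np.mean(np.log(T[-1] / T[:-1]))

# Points on a 3-dimensional linear manifold embedded in R^10.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 10))
print(mle_intrinsic_dim(X, idx=0))  # close to the ground-truth d = 3
```

For a nonlinear mixing $f$, the same estimator applies locally since the image manifold still has dimension $d$ in a neighborhood of each point.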
Summary: This paper aims to identify latent variables under a nonlinear mixing function from interventional data. The authors prove strong identifiability results for unknown single-node interventions, extending previous work that focused on linear maps, polynomial mixing functions or paired counterfactual data. The paper proposes an interesting contrastive algorithm to identify the latent variables and evaluates its performance on some synthetic tasks and a simple image dataset modified from Ahuja et al. [2023]. While there have been other very recent works that examine the interventional setting, this is the first I'm aware of that only requires $d$ interventions to recover $d$ latents with minimal constraints on the mixing function. Strengths: * This is the strongest theoretical result that I am aware of in causal representation learning. The Gaussian assumption is obviously restrictive, but other than that, the paper proves identifiability for a very practical class of problems: they only need $d$ interventions to identify $d$ latents, and make no further restrictive assumptions on the mixing function beyond standard injectivity / diffeomorphism assumptions. * The paper is very clearly written both in the main text and the appendix. * The contrastive algorithm that they propose to implement their approach is interesting. It has some optimization issues which the paper is upfront about, but I'm still curious to see how it would work on larger problems. Weaknesses: I don't have many complaints, but I do have some nitpicks. * In the introduction (particularly the first paragraph), the paper makes it sound like the generative process for the data is via a neural network (transformers & diffusion models in line 16, and a similar comment is made in line 124 in support of Assumption 1). While we can build generative models of complex data with neural networks, that is not the $f(\cdot)$ that we care about in causal representation learning. 
The real generative function, $f(\cdot)$, is a property of the world---it's the camera or microscope that photographs a scene---and we don't have control over that. I realize this makes the diffeomorphism assumptions unrealistic, but I think it's better to just be upfront about that as a limitation. * I have a similar complaint about the defence of the Gaussian assumption in lines 138 - 141: real processes almost certainly are not Gaussian, so we should treat it as a model of the world and be upfront about that (in the Box, "all models are wrong" sense). The paper would be stronger if you were upfront about the fact that it is clearly restrictive but useful (because it gives strong identifiability results), and then evaluated the sensitivity of the method to non-Gaussian latents in the experiments. More minor: * Line 32 - I believe in causal representation learning, but it remains to be seen whether it is either necessary or sufficient to build trustworthy systems, so that's a strong claim to make. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Have you experimented with non-Gaussian latents? What happens? * How well does the method work on larger problems (e.g. 5 - 10 balls? Or more?) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: As mentioned above, the paper could be a little more upfront about the implication of the assumptions it makes, but overall it does a good job of addressing limitations. I also liked the "deadends" section at the end of the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and are glad the reviewer likes both our tight identifiability results and the contrastive learning algorithm. We agree with the reviewer's suggestions to improve the exposition and are happy to revise the wording accordingly, including being more upfront about the limitations. In particular, we will clarify that $f$ is a property of the world that we try to learn and that Gaussian variables are often a useful approximation. We also like the reviewer's suggestions for additional experiments. 1. In Table 1 of the attached PDF, we show the results for experiments with non-Gaussian distributions (the setting agrees with the first row of Table 2 in the paper), in particular Laplace, Gumbel, Uniform and Exponential distributions. We observe that the recovery with our contrastive learning algorithm gets slightly worse but is still very reasonable. This is in line with the performance of, e.g., causal discovery algorithms based on a Gaussian likelihood score for non-Gaussian data. 2. As suggested, we also scaled up our experiments on the image dataset to 10 balls; please see Table 3. We found that for (much) larger sample sizes and larger models the performance can be increased significantly (compared to the results we reported), and we can handle up to 10 balls. While there is certainly still room for improvement (e.g. via hyperparameter tuning), the more challenging next step is to handle noisy observations and more complex scenes. We will add these tables to the paper in the final version. We welcome additional feedback to improve the work. --- Rebuttal Comment 1.1: Title: ... Comment: Thank you for the additional experiments! Like reviewer 2jDD above, I will keep my score of 8, and am willing to advocate for the paper during reviewer-AC discussions.
Summary: The paper considers the task of causal representation learning (causal disentanglement) from interventions, with (1) a linear latent structural causal model, (2) a nonlinear mixing function, and (3) single-node interventions. They show that, under perfect interventions, the generative model is identifiable up to trivial indeterminacies. They propose a method based on contrastive learning for recovering the generative model, and show that their method works well in practice (outperforming a baseline which only allow for linear mixing). Strengths: ### Significance This paper presents a significant theoretical contribution to an established line of work, while introducing techniques and connections that will be useful for future works. In particular, the advance from linear (or polynomial) mixing to general non-linear mixing is a big step towards realistic generative modeling. On the practical side, they also develop a solid contribution by showing that contrastive learning can be used to disentangle the latent variables. ### Clarity The paper is very clear in describing its contribution and how it compares to previous work. The presentation is quite thorough: they clearly describe their setup and discuss their assumptions, they show that their sufficient conditions for identifiability are actually necessary conditions, they discuss the unsuitability of polynomial mixing, and they provide a detailed description of their experimental methodology with accompanying code. Weaknesses: There are no major weaknesses. There are some relatively minor points of confusion / potential typos and some small suggestions to improve the paper in the **Questions** section. If I had to pick a weakness, it would be that the proposed algorithm suffers from problems with local optima. It would be ideal to have an algorithm which provably recovers the latent representation, and some sample complexity results. 
These results would make the paper feel very "complete", and with these results the paper would probably be a better fit for a journal. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: ### Questions 1. In Equation (3), why are we able to identify shifts without scaling indeterminacy? This seems odd: if we scale a variable, I would expect that the shift also scales. 2. In Equation (7), should $h$ in the last term have a subscript? Then it is not a matrix so I don't see how it should be in the inner product. ### Suggestions 1. In line 26, the cited papers are about independent component analysis, and these are given as examples of identifiability in causal representation learning (CRL). I agree that ICA can be cast as a special case of CRL. However, I have the feeling that for the purposes of clarity, it might be best to use a different umbrella term like "identifiable representation learning"; there is nothing really "causal" about ICA. 2. Adapt the synthetic data generation process to reduce varsortability and $R^2$-sortability [1,2]. One procedure which takes these issues into account is given by [3]. [1] Reisach, A., Seiler, C., & Weichwald, S. (2021). Beware of the simulated DAG! Causal discovery benchmarks may be easy to game. [2] Reisach, A. G., Tami, M., Seiler, C., Chambaz, A., & Weichwald, S. (2023). Simple sorting criteria help find the causal order in additive noise models. [3] Squires, C., Yun, A., Nichani, E., Agrawal, R., & Uhler, C. (2022). Causal structure discovery between clusters of nodes induced by latent factors. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and positive feedback. We are glad the reviewer also expects our ideas to contribute towards generative modeling in the real world, as that is one of the stronger motivations behind our work. **Regarding the questions:** 1. We can identify $\eta$ exactly because the scaling is absorbed in $B$. Or, in other words, $\eta$ is the amount of shift in the normalized Gaussian (this is evident in equation (2)) and therefore can be identified without scaling indeterminacy. 2. There should be no subscript, thanks for pointing out our typo. **Regarding the suggestions:** We appreciate the insightful suggestion regarding varsortability and will be happy to discuss this in the paper, as well as cite the works you linked. We ran an additional experiment where we standardized $Z$ before applying the non-linearity thereby removing the varsortability (we also checked that in our setting $R^2$-sortability does not deviate substantially from $1/2$). The results are in Table 2 in the attached PDF and show a slightly degraded performance. In general, note that it is not directly clear how varsortability of $Z$ can be exploited because $Z$ is only identifiable up to scaling, as we show. We agree with the reviewer that resolving the issues of local optima and sample complexity are of great interest. Although little progress has been made on them so far in this field (e.g., there are no sample complexity results for causal representation learning that we are aware of), these are important directions for further research. We plan to expand our discussion section to include those directions, and also incorporate your suggestions along with the suggestions by other reviewers. --- Rebuttal Comment 1.1: Comment: I am pleased with the author's response to my review and have no remaining questions. 
I appreciate their consideration of the varsortability / $R^2$-sortability issue, and I agree that these problems are harder to conceptualize in the CRL setting. I will keep my score of 8, and am willing to advocate for the paper during reviewer-AC discussions.
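The standardization fix discussed in this exchange can be made concrete with a toy sketch (the chain graph, edge weight, and sample size below are arbitrary illustrative choices, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50_000, 4

# Toy linear Gaussian chain SCM z_1 -> z_2 -> z_3 -> z_4
Z = np.zeros((n, d))
Z[:, 0] = rng.normal(size=n)
for j in range(1, d):
    Z[:, j] = 1.5 * Z[:, j - 1] + rng.normal(size=n)

# Marginal variances grow along the causal order, so sorting variables by
# variance recovers the causal order "for free" (the varsortability artifact).
variances = Z.var(axis=0)
print(np.all(np.diff(variances) > 0))  # True

# Standardizing each latent before the nonlinear mixing removes this shortcut.
Z_std = (Z - Z.mean(axis=0)) / Z.std(axis=0)
print(np.allclose(Z_std.var(axis=0), 1.0))  # True
```

As the rebuttal notes, this shortcut is harder to exploit in the CRL setting, where $Z$ is only identifiable up to scaling in the first place.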
Summary: In recent years, the theory of non-linear independent component analysis and causal representation learning has witnessed a lot of interesting developments. In this work, the authors study the problem of causal representation learning in the presence of interventional datasets, where interventions occur on the latents. The authors show that when the latents follow a linear structural causal model with a Gaussian distribution, then under general non-linear mixing (diffeomorphisms) the latents can be recovered up to permutation and scaling. For imperfect interventions, the authors show that it is still possible to recover the partial order induced by the topological ordering of the underlying causal graph G. The authors present a new method based on contrastive learning that learns to distinguish interventional data from observational data and test it out on some synthetic datasets. Strengths: The authors have studied an important problem in the area of interventional causal representation learning. The results proposed in the work for the case of perfect interventions show that strong identification is achievable with perfect interventional data. The authors also make progress on the difficult problem of tackling imperfect interventions. The theory of the paper is overall quite insightful. Weaknesses: I have a question and a claim about a weakness. Let us consider the following setting. Data generation for observational data is given as: $\epsilon \sim \mathcal{N}(0, \sigma^2 Id)$, $z = A \epsilon$, $x = f(z)$ where $A$ is invertible, $\mathcal{N}$ is the normal distribution, $\epsilon$ is noise (each component is independent), and $f(\cdot)$ is injective. 
Data generation in interventional environment $k$, where the $p^{th}$ component is intervened on, is given as $\epsilon^{(k)} \sim \mathcal{N}(\mu_p e_p, \sigma^2 Id + \sigma_p^2 e_p e_p^{\top})$, $z^{(k)}= A \epsilon^{(k)}$, $x = f(z^{(k)})$, where $e_p$ is a vector with zeros everywhere except at the $p^{th}$ entry, $Id$ is the identity, and $\mu_p$ is the mean under the intervened distribution. We index the observational data as $0$ and interventional data from $1$ to $p$. If we condition on the index of the data, the different components of $\epsilon$ are conditionally independent and belong to an exponential family. We can now leverage the theory of i-VAE applied to identifying $\epsilon$ and the mixing function $f \circ A$. In this case, if we have $p\geq 2d$ and the sufficient variability condition from Theorem 1 in http://proceedings.mlr.press/v108/khemakhem20a/khemakhem20a.pdf is satisfied, then we achieve permutation and scaling identification. Therefore, we can assume to have identified $\epsilon$ up to permutation and scaling. We call this estimate $\hat{\epsilon} = P \Lambda \epsilon$, where $P$ is a permutation and $\Lambda$ is diagonal. Note that $\hat{\epsilon} = P \Lambda A^{-1}z$. We now have a linear relationship between the estimated $\hat{\epsilon}$ and the underlying true $z$. We can now leverage imperfect intervention results under linear mixing from https://arxiv.org/pdf/2301.08230.pdf (Theorem 13, see Table 1) to achieve mixing-consistency-based identification of $z$. We can even reduce the number of interventional distributions needed from $2d$ to $d$, provided we assume the variance of the normal distribution is known, by leveraging results from https://proceedings.mlr.press/v177/lachapelle22a/lachapelle22a.pdf. The characterization I describe above suggests that it is possible to get identification results by combining i-VAE and https://arxiv.org/pdf/2301.08230.pdf in a straightforward way. Or am I missing something? 
I would like the authors to clarify whether they considered this simple combination. Further, if the authors agree with the above characterization, how do the authors modify their contributions in this light? I don't quite agree with line 253 on distributional assumptions. I think Gaussianity is still a strong assumption, and the authors would benefit from being upfront about it. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see the weakness section above. My final score depends on clarification of the questions I raised above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors would benefit from a discussion of the limitations. For instance, they should talk about why the Gaussian assumption is limiting, as an example. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
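For concreteness, the reviewer's two-regime generative process can be simulated directly (a minimal sketch; the dimension, the lower-triangular structure of $A$, the nonlinearity $f$, and the intervention parameters are arbitrary toy choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
sigma = 1.0
# A invertible (here lower-triangular with unit diagonal, a toy choice)
A = np.tril(rng.normal(size=(d, d)), -1) + np.eye(d)

def f(z):
    # an injective nonlinear mixing into 2d dimensions (toy choice)
    return np.concatenate([z, np.tanh(z)], axis=1)

def sample_observational(n):
    eps = rng.normal(0.0, sigma, size=(n, d))
    return f(eps @ A.T)  # z = A eps, x = f(z)

def sample_interventional(n, p, mu_p=2.0, sigma_p=0.5):
    # epsilon^(k) ~ N(mu_p * e_p, sigma^2 I + sigma_p^2 e_p e_p^T)
    eps = rng.normal(0.0, sigma, size=(n, d))
    eps[:, p] = rng.normal(mu_p, np.sqrt(sigma**2 + sigma_p**2), size=n)
    return f(eps @ A.T)

x_obs = sample_observational(1000)        # environment 0
x_int = sample_interventional(1000, p=1)  # environment intervening on component 2
```

Conditioned on the environment index, the components of $\epsilon$ are independent, which is what the reviewer's i-VAE argument exploits.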
Rebuttal 1: Rebuttal: We thank the reviewer for the review and their insightful suggestion of a simpler proof strategy. However, the model considered in their argument is the very special case when the causal graph's weights do not change at all under all considered interventions. In particular, their review assumes $z^{(k)} = A\epsilon^{(k)}$, but our setting is substantially more general and considers the case where $A$ changes (which covers more realistic interventions). We present additional details below. Regarding the distributional assumption, we agree it's needed in our work and it has been emphasized in the abstract and introduction. Additionally, we will carefully revise to make it more evident throughout the writeup. ================== **Additional details:** We follow your notation in this response, however, note that the matrix $A$ in this response corresponds to $B^{-1}$ in the paper. The review considers interventions of the form $z^{(k)}=A\varepsilon^{(k)}$ where $\varepsilon^{(k)}$ is obtained by shifting and rescaling the noise variable of node $k$. As outlined in the review, this setting is covered by the iVAE theory and earlier identifiability results apply. Our work considers, in addition, interventions that change the relation to the parents, in particular perfect interventions that remove the effect from the parents. For such interventions, the matrix $A\to A^{(k)}$ also depends on the intervention. This is also the interventional setting considered for linear mixing functions in the recent work [1]. Note that the settings considered by the reviewer and our work both agree in the special ICA case when there are no causal relations but don't agree in cases beyond ICA, which we handle in our work. Therefore, to achieve our tight identifiability results, we need to exploit the specific form of the interventions. 
Note that our proofs are not purely linear algebraic but also topological, i.e., we explicitly exploit continuity of the mixing (see Lemma 4), making it unlikely that our results directly follow from known theorems. Nevertheless, we agree that it is a valuable addition to our paper to clarify that the ICA case is substantially simpler and has been solved in earlier work. We hope that this changes your view on our theoretical contribution and other points that may have contributed towards a lower score, and we are happy to discuss this issue and all other questions further. [1] C. Squires, A. Seigal, S. Bhate, and C. Uhler. Linear causal disentanglement via interventions, ICML 2023. --- Rebuttal Comment 1.1: Title: Further thoughts Comment: Thank you for the response. I have some follow-up points. Firstly, I am not saying that your results follow from the above arguments I presented. However, I believe that the above case that I provided is both a very important one, complementary to your results, and follows from earlier work. I think you should reinterpret your contributions in the light of this example. For instance, you say in the abstract "This is also the first instance of causal identifiability from non-paired interventions for deep neural network embeddings." and have made several such statements elsewhere. The example I provided does not use paired interventions and also works for deep neural net embeddings. Each imperfect intervention can occur either by changing the weights or the noise distribution. The example I constructed shows that for the latter class of imperfect interventions (those that alter the noise) you can use existing results to get stronger identification than you arrive at. I don't think this result follows from your results either. Hence, the two results are complementary. Your results consider a larger class of imperfect interventions but arrive at weaker guarantees. 
The absence of any such discussion in the paper is not fair to earlier contributions that already exist. In your current response you seem to say "we agree that it is a valuable addition to our paper to clarify that the ICA case is substantially simpler and has been solved in earlier work." This is not exactly what I meant. I would appreciate it if you could include a discussion of what I proposed in the main body (with details in the Appendix), showing how i-VAE plus recent linear-mixing works such as Varici et al. solve important sub-cases. --- Reply to Comment 1.1.1: Title: Response to further thoughts Comment: We thank the reviewer for their clarifying response. We apologize that we slightly misinterpreted the criticism in your review. **tl;dr:** We are happy to include additional discussion on these points, which are quite subtle and likely require additional investigation to make work. Please see below for details. **Identifiability of the setting in the review:** Firstly, let's focus on the setting considered in the review and clarify why we say that it essentially only covers ICA and not general CRL. For the class of interventions that you consider, the causal structure encoded by $A$ cannot be identified and every causal graph is consistent with the observed distributions. This can be seen as follows. As pointed out in the review, given the observational and interventional distributions $x^{(k)}$ we can find a function $g$ such that $x^{(k)}=g(\varepsilon^{(k)})$ and $g$ is unique up to permutations and scale, so we can identify $\varepsilon^{(k)}$. However, we cannot identify $A$ (which is our goal). Indeed, given an arbitrary invertible matrix $\tilde{A}$ satisfying DAG constraints, we may define a new latent structure by $\tilde{z}^{(k)}=\tilde{A}\varepsilon^{(k)}$. Then $x^{(k)}=g(\varepsilon^{(k)}) = g\circ \tilde{A}^{-1}(\tilde{z}^{(k)})$, and since $\tilde{A}$ was arbitrary, we get that the observations are compatible with any causal graph. 
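The counterexample above is easy to verify numerically: for any invertible, DAG-consistent $\tilde{A}$, the observations $x = g(\varepsilon) = (g \circ \tilde{A}^{-1})(\tilde{z})$ are unchanged. A sketch with toy choices of $g$ and $\tilde{A}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
eps = rng.normal(size=(500, d))  # the identified noise variables

def g(e):
    # the (fixed) map from noise to observations, identified up to perm/scale
    return np.tanh(e) + 0.1 * e ** 3

# An arbitrary invertible, DAG-consistent matrix (lower-triangular, unit diagonal)
A_tilde = np.tril(rng.normal(size=(d, d)), -1) + np.eye(d)
z_tilde = eps @ A_tilde.T  # alternative latent variables

# The observations are identical under both latent structures:
x_direct = g(eps)
x_via_z = g(z_tilde @ np.linalg.inv(A_tilde).T)
print(np.allclose(x_via_z, x_direct))  # True
```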
This is consistent with Varici et al. [1] because in the case of a linear SCM and interventions that only change the noise distribution (i.e., the setting in the review), it seems that Assumption 3 in [1] is not satisfied for nodes with parents, i.e., the assumptions of their identifiability result are only satisfied for the empty graph. We will give more technical details at the end. Because of the counterexample above, it seems that to apply the techniques of [1] to the model considered in the review, we need to 1. either work in the special ICA setting 2. or make additional assumptions (as the reviewer seems to suggest). Finally, we note that our Theorem 3 applies to the setting in the review and implies identifiability of $\varepsilon^{(k)}$ up to linear maps. Therefore, we feel it is not fully complementary to this setting. Going further, we expect our linear identifiability result (Theorem 3) can be combined with any result for linear Gaussian SCMs and linear mixings such as [1] to obtain, e.g., mixing consistency. ========== In conclusion, while the ICA case can be obtained from prior works, it is not clear to us how exactly identifiability results for nonlinear mixings beyond the case of independent latents can be obtained by combining known results. We kindly ask the reviewer to clarify whether we misunderstood or overlooked something, or what additional data or assumptions are necessary. We think obtaining generalizations of Varici et al. [1] for non-linear mixing (using iVAE or otherwise) is a very interesting and nontrivial problem for future work, which [1] themselves pose as an open problem (in their conclusion section). As suggested by the reviewer, we will expand the prior work section with more details, and also revise the sentence claiming that this is the first CRL identifiability result for general nonlinear mixing, because this is indeed a bit vague as there is no generally agreed upon definition of CRL. 
Thanks again for engaging in the discussion, and we are happy to clarify any further concerns. **Assumption 3 in [1] and pure noise interventions** Based on our understanding, for linear SCMs and interventions that only change the noise distribution (i.e., the setting in the review), Assumption 3 is only satisfied for interventions on nodes without parents, as outlined below. Denote the score (i.e., $\nabla \log (p)$) of the observational distribution $p_z$ by $s$ and the score under the intervention with target $i$ by $s^i$. Then their Assumption 3 says that if $c\cdot (s - s^i)=0$ ($p_Z$ a.e.) for some vector $c$, then $c_j=0$ for $j\in \overline{pa}(i)=pa(i)\cup \{i\}$. We will assume that $i$ has a parent and show that the assumption is not satisfied. Assume that the structural equation for node $i$ is $Z_i = \alpha Z_{pa(i)} + N_i$ without intervention, and that under intervention $i$ the noise $N_i$ is replaced by $M_i$. Denote the log densities of $N_i$ and $M_i$ by $n$ and $m$. Using Equations (6) and (7) of [1], we get $s(z)-s^i(z)=\nabla \big(\ln(p(z_i|z_{pa(i)})) - \ln(p^{(i)}(z_i|z_{pa(i)}))\big) =\nabla \big(n(z_i - \alpha z_{pa(i)}) - m(z_i - \alpha z_{pa(i)})\big)$. Let $k\in pa(i)$ and denote its coefficient in the structural equation by $\alpha_k\neq 0$. Then the vector $c$ with $c_i=\alpha_k$, $c_k = 1$, and $c_j=0$ for $j\notin \{i,k\}$ satisfies $c \cdot(s(z)-s^i(z)) = \alpha_k \big(n'(z_i - \alpha z_{pa(i)}) - m'(z_i - \alpha z_{pa(i)})\big) -\alpha_k n'(z_i - \alpha z_{pa(i)}) +\alpha_k m'(z_i - \alpha z_{pa(i)})=0$. Thus Assumption 3 is not satisfied. [1] Varici et al., Score-based Causal Representation Learning with Interventions, 2023
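Assuming standard-normal observational noise and a Gaussian intervened noise $M_i \sim \mathcal{N}(\mu, \tau^2)$ for concreteness (the derivation above allows general log densities $n$ and $m$; the numbers below are toy values), the cancellation can be checked pointwise:

```python
import numpy as np

alpha = 0.7         # edge weight of z_pa -> z_i (toy value)
mu, tau = 1.0, 2.0  # mean and std of the intervened noise M_i (toy values)

def n_prime(u):
    # derivative of the log-density of the N(0, 1) observational noise
    return -u

def m_prime(u):
    # derivative of the log-density of the N(mu, tau^2) intervened noise
    return -(u - mu) / tau ** 2

rng = np.random.default_rng(0)
residuals = []
for z_pa, z_i in rng.normal(size=(5, 2)):
    u = z_i - alpha * z_pa
    diff_i = n_prime(u) - m_prime(u)              # i-th entry of s - s^i
    diff_pa = -alpha * (n_prime(u) - m_prime(u))  # parent entry of s - s^i
    c_i, c_pa = alpha, 1.0                        # the vector c from the argument
    residuals.append(abs(c_i * diff_i + c_pa * diff_pa))
print(max(residuals) < 1e-12)  # True: c . (s - s^i) vanishes at every point
```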
Rebuttal 1: Rebuttal: We thank all reviewers for their reviews pointing out that the paper 'presents a significant theoretical contribution' (R. 2jDD) that studies 'an important problem in interventional causal representation learning' (R. LWnm), proposes a 'novel and sound algorithm' (R. RqDB), and is 'very clearly written' (R. uV6m). Reviewers 2jDD and uV6m were quite positive about the paper; Reviewers RqDB and LWnm generally acknowledged the contributions of the paper but had questions regarding an alternative proof strategy and identifiability of the latent dimension. Those are addressed in the individual responses. Following the optional (and intriguing) suggestions of Reviewers uV6m and 2jDD, we ran additional experiments where we investigated the effects of different noise distributions (to probe misspecification) and data standardization (to probe varsortability) on the algorithm's performance. In addition, we considered more challenging settings for the balls dataset (scaling up from 3 balls to 10 balls). The results can be found in the attached PDF; for additional details we refer to our responses to Reviewers uV6m and 2jDD. Pdf: /pdf/f6607ed59a6d868a0cedb865b3810762104a394a.pdf
NeurIPS_2023_submissions_huggingface
2023
SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning
Accept (poster)
Summary: The paper proposes the SPQR loss to regularize the independence of the Q-network ensemble. The authors apply random matrix theory and a spiked random model to derive a KL loss between Wigner’s semicircle distribution and the empirical spectral density of eigenvalues. The authors show that independence can be enforced by minimizing this KL divergence loss, with theoretical guarantees via a hypothesis test. Extensive experiments on both online and offline RL tasks and several baselines demonstrate that SPQR can promisingly improve the performance of current ensemble RL algorithms by increasing the independence of the ensemble networks. Strengths: 1. This paper is well-organized and well-written. 2. The proposed SPQR has a simple form and shows a promising improvement over ensemble RL methods such as SAC-ens and REDQ. 3. The authors build the SPQR loss with mathematical tools and provide a theoretical guarantee to ensure the independence of the ensembled Q-networks. 4. Experiments on both online and offline RL tasks are sufficient to validate the generality of the proposed SPQR and that it can improve the performance of ensemble RL methods. Weaknesses: 1. Detailed explanations connecting the mathematical tools to the proposed method are missing, which makes the paper hard for readers to follow. For example, the connection between random matrix theory and the spiked random model is not explained well. What is the purpose of the spiked random model in SPQR? 2. Details are lacking in Algorithm 1. For example, how to construct the symmetric Q-matrix is not shown. The detailed version of the core algorithm is given in the appendix, which is strange. 3. The quality of the figures needs to be improved. For example, the text size is too small. 4. There are many typos. a) In Line 73, “tp” should be “to”. b) In Line 81, “SPQR analysis Q-ensemble independence …”. c) In Line 113, “Q-learning, We plot”. 
d) In Line 206, “Proof of Theorem 4.1 are …”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the purpose of the use of spiked random model in SPQR? Could the authors provide some illustration examples or give more explanations? 2. How to construct the symmetric Q-matrix in Algorithm 1? 3. Why is the GOE chosen for the study and why should we relax the definition of GOE? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
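To illustrate the spiked-model idea underlying the SPQR loss (this is a generic random-matrix sketch, not the authors' code): the bulk spectrum of a GOE-like matrix follows Wigner's semicircle law, while a sufficiently strong rank-one "spike" produces an eigenvalue that escapes the bulk.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400

# GOE-like symmetric random matrix, scaled so the bulk spectrum follows
# Wigner's semicircle law on [-2, 2]
G = rng.normal(size=(N, N))
W = (G + G.T) / np.sqrt(2 * N)
top_pure = np.linalg.eigvalsh(W).max()  # ~ 2, the edge of the semicircle

# Spiked model: add a rank-one signal lam * v v^T with lam above the
# detection threshold, so one eigenvalue escapes the bulk (~ lam + 1/lam)
lam = 3.0
v = rng.normal(size=N)
v /= np.linalg.norm(v)
top_spiked = np.linalg.eigvalsh(W + lam * np.outer(v, v)).max()

print(top_pure < 2.3, top_spiked > 2.8)  # True True
```

A test of whether the top eigenvalue sticks out of the semicircle support is what distinguishes "pure noise" from "signal plus noise" in this framework.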
Rebuttal 1: Rebuttal: We appreciate your considerate feedback and emphasis on improving our work. We address your questions and concerns as follows. Q1. Explanations about the spiked random model and GOE. A1. **1. Spiked random model** The goal of the spiked random model is to detect/recover whether the given data is pure noise (random) or contains a signal plus noise (informative). As we visualize in Figure 2, a pure random matrix and an informative matrix show different behavior in the eigenvalue domain. Before explaining the details, we can simply consider the spiked random model as the addition of an informative signal and noise given by a random matrix. In Figure 2, we generate a random matrix where the diversity of the elements in the 1st and the 3rd subfigures is similar, but all elements of the random matrix in the 3rd subfigure are slightly shifted towards the (1,1)-direction. This is a clear difference between pure randomness and spiked randomness. Similar to PCA, the most natural assumption for the presence of information in the data is that the information can be captured by the most significant eigenvalue. To analyze the eigenvalue behavior of data, we need random matrix theory. For example, in subfigures 3-4 on the right of Figure 2, some signal $u, v$ exists and, in the eigenvalue domain, it is detected as the top (rightmost) eigenvalue, while the other eigenvalues follow Wigner’s semicircle law based on random matrix theory. On the other hand, in subfigures 1-2 on the left, the elements of the random matrix show pure randomness without any signal, so all of its eigenvalues lie within Wigner’s semicircle. Our approach is to detect and alleviate non-independence among the Q-ensemble. Initially, we adapt the spiked random model for SPQR to determine whether the data are collected independently. In order to make the Q-ensemble independent, we optimize the test criterion of the spiked random model. 
The test criterion has the form of a KL loss between the eigenvalue distribution of the observed data and that of a pure random matrix, which is determined using random matrix theory. **2. GOE** In much of the machine learning and deep learning literature, the GOE is used as the most natural way to model and analyze a general random data matrix, such as analyzing loss surfaces [1], representations of GAN data [2], optimization acceleration [3], and explorations [4]. We choose the GOE to model the behavior of an independent ensemble. **3. Relaxing the definition of GOE** Thank you for pointing out the sloppy statement. The phrase ‘relaxed definition’ on line 141 should have been written as ‘simplified terms’. For convenience, we intended to give a friendly definition to make the GOE easier to understand. In fact, the distribution type (Gaussian, Rademacher, Poisson, etc.) of the entries does not affect the eigenvalue distribution of the random matrix (it still follows Wigner’s law), a property called ‘universality’ in the random matrix theory literature, as we have noted in line 205. [1] Baskerville, Nicholas P., et al. "The loss surfaces of neural networks with general activation functions." Journal of Statistical Mechanics: Theory and Experiment 2021.6 (2021): 064001. [2] Seddik, Mohamed El Amine, et al. "Random matrix theory proves that deep learning representations of gan-data behave as gaussian mixtures." International Conference on Machine Learning. PMLR, 2020. [3] Lacotte, Jonathan, and Mert Pilanci. "Optimal randomized first-order methods for least-squares problems." International Conference on Machine Learning. PMLR, 2020. [4] Sagun, Levent, et al. "Explorations on high dimensional landscapes." arXiv preprint arXiv:1412.6615 (2014). Q2. Explanation of how to construct the symmetric Q-matrix. A2. It was difficult to include the full version of the algorithm because of the page limitation. However, we agree with your comment. 
In the revised version, we will provide a more detailed explanation in the main text too. Our goal is to generate a symmetric Q-value matrix filled from an ensemble of size N. To prevent the network from learning or exploiting the ordering of the matrix, we shuffle the filling order. This can be achieved by shuffling the list of Q-networks l_k, as specified in Algorithm 3. In order to create a symmetric matrix of maximum size, the upper-triangular part and the corresponding lower-triangular part are filled identically. The size of the matrix is set as D=floor((sqrt(1+8*N)-1)/2), as described in Algorithm 3; D is the largest integer that satisfies (D**2+D)/2 <= N. Please refer to the illustration in the **attached pdf files of the global response** for a clear understanding of the process. Q3. Typos, grammatical errors, and font size mistakes. A3. Thank you for kindly pointing out our typos, grammar, and font size mistakes. We will handle all of these mistakes in the revision. We hope that our explanation may address much of your concern. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the authors' response. My concerns are effectively addressed and I would like to raise my score to 6. --- Reply to Comment 1.1.1: Title: Thanks Comment: We appreciate your insightful and positive feedback. We will incorporate your helpful review into the revised version.
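The sizing rule and symmetric filling described in A2 can be sketched as follows (the exact shuffling and traversal order of the authors' Algorithm 3 may differ; this is an illustrative reconstruction):

```python
import math
import numpy as np

def matrix_size(n_ensemble):
    # largest D with D*(D+1)/2 <= N, i.e. D = floor((sqrt(1+8N)-1)/2)
    return (math.isqrt(1 + 8 * n_ensemble) - 1) // 2

def build_symmetric_q_matrix(q_values, rng):
    # q_values: N scalar Q-estimates, one per ensemble member
    n = len(q_values)
    d = matrix_size(n)
    order = iter(rng.permutation(n))  # shuffled filling order
    M = np.zeros((d, d))
    for i in range(d):
        for j in range(i, d):  # fill the upper triangle (incl. diagonal)...
            M[i, j] = M[j, i] = q_values[next(order)]  # ...and mirror it below
    return M

print(matrix_size(10), matrix_size(5))  # 4 2
M = build_symmetric_q_matrix(np.arange(10, dtype=float), np.random.default_rng(0))
print(np.allclose(M, M.T))  # True
```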
Summary: The paper deals with the problem of alleviating overestimation bias in RL using ensembles of Q-functions. The authors argue that previous methods do not provide a theoretical guarantee of the independence of the members of the ensemble. To provide this, they propose an approach based on random matrix theory, which, in practice, can be implemented as a regularization loss. The approach is evaluated in a number of RL settings on tasks in MuJoCo, D4RL Gym, Franka Kitchen and Antmaze. Strengths: + The approach has a strong, non-trivial theoretical foundation. + Experimental results show a significant improvement over the baselines. Weaknesses: - Throughout the paper, there are many grammatical errors and awkward formulations. Some of these might hinder understanding, such as on page 5: "high correlation occurs, which cannot be benefited by the ensemble method". I assume in this case what the authors mean is "benefit the ensemble method". Many of these could be easily fixed with a grammar checker. - The text on figures 5 and 6 (and to a lesser degree, figure 4) is unreadably small. Graphs should be redrawn in such a way that the text is still readable at a normal printing size. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The meaning of Figure 3 is not clear. It seems that the higher the \beta, the lower the average Q value. But at this point in the paper, the approach had not yet been described, so it is not clear what \beta controls. Also, we don't know the correct value of Q. I can make some assumptions that the figure aims to show how the proposed technique improves on the overestimation bias, but this (if true) would need to be explained more clearly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No separate limitations section found in the paper. The theoretical nature of the paper does not raise ethical concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the careful reading and interesting review. We address your questions and concerns as follows. Q1. Detailed explanation of Figure 3. A1. Thank you for providing a detailed and thoughtful comment to improve the presentation of our paper. During the rebuttal phase, it is not possible to modify the paper itself; therefore, we provide more specific explanations for other readers as follows. To help readers understand the independence and diversity of the Q-ensemble more clearly, we mentioned in line 176 that beta can simply be considered a loss gain (weight), and presented Figure 3 before providing a detailed explanation of SPQR and beta. We agree with your comment and will add the statement “beta is the loss weight of the independence regularization for the Q-ensemble; a higher beta enforces stronger independence” to the caption of Figure 3 in the revision. Section 4.1 aims to demonstrate that independence and conservatism are related, thereby enabling the control of conservatism through independence regularization. It does not directly address the alleviation of overestimation bias, which requires the true Q-value. The main purpose of this plot is to demonstrate the ordering of Q-values in terms of the beta level, indicating that a higher independence regularization weight leads to a more conservative algorithm. The values and scales themselves are not our primary concern in this section. Therefore, appropriate conservatism can be achieved by tuning beta, which prevents underestimation due to extreme conservatism. Q2. Typos, grammatical errors, and font size mistakes. A2. Thank you for kindly pointing out our typos, grammar, and font size mistakes. We will handle all of these mistakes in the revision. We hope that our explanation may address much of your concern. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the rebuttal and the proposed changes in the paper. 
--- Reply to Comment 1.1.1: Title: Thanks Comment: We appreciate your insightful and positive feedback. We will incorporate your helpful review into the revised version.
Summary: This paper proposes a new regularization loss that improves the independence of the Q ensemble, thus improving performance in online and offline DRL settings. The authors first point out that previous works with Q ensembles either rely on assumptions that are inaccurate in practice, or rely on heuristics to improve the diversity of networks without theoretical support. The authors provide theoretical results that support the proposed method, and also conducted empirical experiments, where the proposed method is applied to multiple recent online and offline algorithms, and tested on the MuJoCo online and d4rl offline benchmarks. On a high level, the regularization (Spiked Wishart Q-ensemble independence regularization (SPQR)) encourages the Q ensemble distribution to be closer to an ideal independent ensemble, resulting in more diversified Q prediction values, lower bias and better performance. Strengths: **originality** - The theoretical results, the proposed algorithm, and the findings can be seen as novel - Learning with Q ensembles has been studied in many previous works, but as the authors pointed out, they either rely on assumptions that are inaccurate in practice, or rely on heuristics, so the novelty is quite good here. **quality** - Overall the presentation is good, writing is clear **Clarity** - The motivation, connection to related works are all quite clear, and ample technical details are given, the authors also explained that they try to be fair in the comparisons and use the same hyperparameters. And the authors explained computation time, performance improvement, implementation difficulty quite clearly. **significance** - the analysis is nice, and shows that the proposed method seems to achieve the desired improved independence. - empirical results show consistent performance improvement over the baseline, big improvement in some settings, slight improvements in others, but overall quite consistent. 
Consistent improvement over recent SOTA algorithms such as REDQ with a small change in the code, low computation cost, and no excessive fine-tuning is quite impressive. - the theoretical part is interesting and quite important Weaknesses: I don't have very major concerns; one thing that can be fixed is that the text in all your figures, especially those in the main paper, is just too small. Please make the font size of the legend, as well as axis labels, ticks, and any other text in the figures, bigger so they are easier to read. Minor issues: - line 81: SPQR analysis -> analyzes ? - Figure 3 caption: can you add a short sentence on what beta does? It is not explained until a later section and I got confused when reading Figure 3. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Main suggestion: fix the figures and make them more readable. - Question: what are the limitations of your work? Are there cases where the proposed method might fail, or types of algorithms that it cannot be applied to? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The proposed method seems to be quite general and can be applied to other methods based on Q ensembles. However, it would be good to have some discussion of limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your interest in our research and for providing us with constructive feedback. We address your questions and concerns as follows. Q1. Typos, grammatical errors, and font size mistakes. A1. Thank you for kindly pointing out our typos, grammar, and font size mistakes. We will take care of all the mistakes in the revision. Q2. More detailed caption for Figure 3. A2. Thank you for providing a detailed and thoughtful comment to improve the presentation of our paper. During the rebuttal phase, it is not possible to modify the paper itself. Therefore, we provide a more specific explanation for other readers as follows. To help readers understand the independence and diversity of the Q-ensemble more clearly, we mentioned in line 176 that beta can simply be considered a loss gain (weight) and presented Figure 3 before providing a detailed explanation of SPQR and beta. We agree with your comment and will include the statement "beta is the loss weight of the regularization for Q-ensemble independence; a higher beta pushes the ensemble toward independence" in the caption of Figure 3 in the revision. Q3. Limitations of our work. A3. We might need hyperparameter tuning of the regularization weight beta. However, we can broadly apply a common fixed beta in most environments since beta is not a sensitive hyperparameter, as we have already shown in Table 8 and Table 11 in the appendix. We hope that our explanation may address much of your concern. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for the rebuttal, I don't have other major concerns at this point. --- Reply to Comment 1.1.1: Title: Thanks Comment: We appreciate your considerate and positive feedback. We will incorporate your helpful review into the revised version.
Summary: This work proposes a spiked Wishart Q-ensemble independence regularization (SPQR) to improve the independence of ensembling in Q-learning. SPQR encourages the ensemble to be closer to an ideal independent ensemble by penalizing the KL divergence between the eigenvalue distribution of the current ensemble and an ideal one. Strengths: The paper is easy to follow. The paper provides nice evidence of lack of independence in Q-ensemble training in current methods. The regularization to increase the diversity is well motivated, and seems simple and practical. The empirical evaluation is comprehensive, though more baselines would make them more compelling. Weaknesses: There is quite a bit of prior work on ensembling in deep RL, for example bootstrapped DQN [1] and MSG [2], which are not discussed. In particular, [2] provides theoretical and empirical support for constructing independent ensembles. A discussion and comparison with MSG is warranted given the emphasis on constructing independent ensembles. While the emphasis of work is on improving ensembling methods in deep RL, recent methods have been able to forego ensembling while maintaining or improving performance over REDQ, for example DroQ [3] or RLPD [4]. It would be good to contextualize the improvements from SPQR by adding comparisons with these methods. More comparisons with offline RL methods would help too, for example, IQL. [1] Deep Exploration via Bootstrapped DQN. Osband et al. [2] Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters. Ghasemipour et al. NeurIPS, 2022. [3] Dropout Q-Functions for Doubly Efficient Reinforcement Learning. Hiraoka et al. [4] Efficient Online Reinforcement Learning with Offline Data. Ball et al. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses above. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While I do not see potential negative societal impact, a discussion on limitations is missing from the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
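As a toy illustration of the spectral idea described in the summary above (a hypothetical sketch, not the authors' SPQR loss: the uniform reference spectrum and the eigenvalue normalization here are our own simplifications of the "ideal independent ensemble" target), one can penalize the KL divergence between the normalized eigenvalue spectrum of the ensemble correlation matrix and a uniform spectrum:

```python
import numpy as np

def independence_penalty(q_values):
    """Toy spectral penalty: KL(eigenvalue spectrum || uniform spectrum).

    q_values: (N, B) array of Q-estimates from N ensemble members on B states.
    For a perfectly independent ensemble the correlation matrix is ~I, whose
    eigenvalue spectrum is flat, so the penalty is ~0; correlated members
    concentrate the spectrum and inflate the penalty.
    """
    corr = np.corrcoef(q_values)               # (N, N) ensemble correlation matrix
    eig = np.linalg.eigvalsh(corr)
    p = np.clip(eig, 1e-12, None)
    p = p / p.sum()                            # eigenvalues as a probability vector
    u = np.full_like(p, 1.0 / len(p))          # uniform reference spectrum
    return float(np.sum(p * np.log(p / u)))    # KL divergence to the reference
```

In an actual agent such a term would be added to the critic loss with a weight (the beta discussed in the reviews); here it only serves to show that spectral statistics separate independent from collapsed ensembles.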
Rebuttal 1: Rebuttal: We thank the reviewer for your detailed feedback. We address your questions and concerns as follows. Q1. Comparison with other RL algorithms. A1. We appreciate your informing us about these meaningful prior studies. We will try our best to compare and evaluate each previous study from conceptual and empirical performance perspectives. 1. Bootstrapped DQN Bootstrapped DQN aims to enhance deep exploration in the Atari environment by using a multi-head ensemble in deep online RL. SPQR aims to improve Q-ensemble independence in both online and offline RL. Bootstrapped DQN proposed a multi-head architecture for ensemble Q-learning, which was outlined and evaluated as multi-head MSG in the MSG paper. Intuitively, from the Q-ensemble independence perspective, N separate ensemble networks are preferable to a shared multi-head architecture. Moreover, empirical evidence indicates that deep ensemble MSG outperforms multi-head MSG. We will additionally evaluate SPQR with a multi-head architecture, as bootstrapped DQN suggested. 2. MSG The Model Standard-deviation Gradient (MSG) method proposes a pessimistic ensemble Q-learning algorithm using the Upper Confidence Bound (UCB) of independent Q-networks, without relying on a shared target Q-value. SPQR aims to enhance independence by regularizing the network while maintaining a shared target Q-value. The performance of MSG and SPQR in the Antmaze environment can be compared, as reported in MSG. However, it should be noted that MSG uses the *-v0 environment, which needs to be reproduced. Since MSG and SPQR are orthogonal methodologies, we can combine them as we did for SPQR-EDAC. MSG claims that a shared target Q-value leads to optimism. We can modify the shared target Q-value of SPQR to an independent target Q-value or a multi-head target Q-value, as MSG has proposed. 3. DroQ The objective of DroQ is to enhance the computational efficiency of ensemble Q-learning using the dropout technique.
Since the performance of DroQ and REDQ is similar, we had thought that a comparison between REDQ and SPQR was sufficient. We will reproduce DroQ and provide an empirical comparison with SPQR in a revision of our paper. 4. RLPD RLPD proposes an ensemble Q-learning algorithm that performs well by using an offline data buffer during online RL training. By combining RLPD with SPQR, performance can be improved, as RLPD's methodology is orthogonal to SPQR's. 5. IQL Table 9 in Appendix E.1 compares SPQR-CQL with IQL in the Franka Kitchen and Antmaze environments. Since CQL outperforms IQL in the D4RL Gym environment, comparing SPQR with CQL suffices for the Gym environment. We hope that our explanation may address much of your concern. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I have read the rebuttal, and I will maintain my score. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thanks again for providing a helpful review. We will incorporate your feedback into the revised version.
Rebuttal 1: Rebuttal: For reviewer ppMG, we attach a pdf file for illustration about constructing a symmetric matrix here. Pdf: /pdf/0c579593a91749696e4d4adc9e019407effefa0c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Mitigating overestimation bias is crucial for deep reinforcement learning. Existing works on ensemble techniques for Q-learning have explored leveraging the diversity of multiple Q-functions. The authors argue that there has been no attempt to ensure ensemble independence from a theoretical standpoint. The authors introduce a regularization loss for Q-ensemble independence, based on random matrix theory, called Spiked Wishart Q-ensemble Independence Regularization (SPQR). The authors incorporate SPQR into online and offline ensemble Q-learning algorithms. Experimental results show that SPQR surpasses baseline algorithms in both online and offline RL benchmarks, demonstrating its effectiveness in addressing overestimation bias and improving performance. Strengths: - Significance and Originality: The problem and viewpoint that the paper studies - how to address overestimation bias/out-of-distribution error in RL - is an important problem, and there have been a number of works investigating how to tackle it based on ensemble learning. However, most of the methods assume the bias follows a uniform and independent distribution, which may not hold in practice. This paper aims to improve previous ensemble methods from this perspective with a theoretical guarantee of improved Q-ensemble independence. The viewpoint from random matrix theory seems novel, and the authors also propose a practical and tractable implementation for the method, which makes it possible to apply in standard high-dimensional tasks. - Quality: The authors have conducted comprehensive experiments to evaluate the method in different setups, and the proposed SPQR method provides improvements in several aspects. - Clarity: The paper is also well-written and easy to follow. Weaknesses: - It is claimed in the paper that the assumption that most previous works rely on may not be true (the i.i.d. assumption about the bias).
Could the authors demonstrate that this assumption is generally invalid in most of the environments? It would be better to convince the readers (since previous methods also have great performance although this assumption may not hold in practice). - The results in Figure 5 about the validation of the independence analysis are interesting. Does this generally hold in different tasks? - The improvement in standard D4RL locomotion tasks is somewhat marginal (4% improvement) considering the increased computation cost. - The performance of SAC-Ens shown in Figure 6 is very low. Could the authors better explain this result? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please find my questions in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your constructive and thoughtful feedback. We address your questions and concerns as follows. Q1. Verification of the invalidity of the i.i.d. assumption in various environments. A1. As you mentioned, we evaluate the acceptance ratio in various environments and datasets. According to the table below, SAC-Min and EDAC demonstrate a lower acceptance ratio than SPQR for various environments and offline datasets. We evaluate each algorithm on *-full-replay (OOD data) with chi-square hypothesis testing for independence, with a significance level \alpha=0.025. For example, an agent trained on hopper-random is evaluated with hopper-full-replay, as we have noted on line 245. Table: independence testing acceptance ratio per environment | algorithm | hopper-random | halfcheetah-random | walker2d-random | |-----------|---------------|--------------------|-----------------| | SAC-Min | 30.4% | 13.3% | 34.0% | | EDAC | 30.4% | 6.7% | 0.0% | | SPQR | 80.4% | 51.0% | 60.0% | Table: independence testing acceptance ratio per dataset | algorithm | halfcheetah-random | halfcheetah-medium | halfcheetah-expert | |-----------|--------------------|--------------------|--------------------| | SAC-Min | 13.3% | 23.3% | 32.0% | | SPQR | 51.0% | 70.0% | 74.0% | Q2. Explain computational cost and performance improvement. A2. The computational cost can be analyzed in terms of two aspects: memory usage and training time per epoch. From a memory usage perspective, SPQR uses less memory than SAC-Min because SPQR uses a smaller number of ensembles, N (we have provided the specific values of N in Table 8), owing to the conservative ensemble property of SPQR shown in Figure 3 and Figure 8. This significantly reduces the training time per epoch, since lower memory usage also has a significant impact on time consumption.
As mentioned in line 294, SAC-Min requires 500 networks for some tasks in a hopper environment, whereas SPQR only needs 50, making it **significantly more computationally efficient**. Furthermore, computing the regularization loss does not significantly increase computational cost in terms of training time per epoch, by increasing **only 5% with the same ensemble size**, as mentioned in line 239. Table: average ensemble size decreasing rate per environment | environments | walker2d | halfcheetah | hopper | |-----------------|----------|-------------|--------| | decreasing rate | 9.9% (33.3->30) | 35% (10->6.5) | 79% (350->75) | In addition, building upon our previous comment in line 290, we note that the performance improvements are **significantly greater** in the case of low dataset quality. Although there was a remarkable improvement in the low-quality dataset (*-random), the average improvement rate seems somewhat marginal because SAC-Min has already shown good performance in the high-quality dataset, such as *-expert series. Therefore, we can conclude that our algorithm outperforms the baseline by a large margin, even if its average value appears to be small. As we provide the performance gain by dataset quality below, we can consistently confirm that our method shows a higher performance improvement ratio in the worse-quality dataset. Table: performance gain per dataset quality | random | medium | expert | medium-expert | medium-replay | full-replay | |--------|--------|--------|---------------|---------------|-------------| | 16% | 7.0% | 2.3% | 3.2% | 7.3% | 3.3% | In conclusion, we want to emphasize that applying SPQR becomes more computationally efficient since it needs fewer networks and shows a larger performance gain when the dataset quality is worse. Q3. Explain the performance of SAC-Ens. A3. When compared to other baseline ensemble methods, SAC-Ens appears to perform poorly. 
Given that SAC-Ens calculates an average Q-value among N networks, it is highly likely that its performance is similar to vanilla SAC, which uses a single network (ensemble), as shown in Figure 4 (learning curves of DQN, DDQN, and Average DQN) and Figure 8 (sensitivity analysis of DQN, DDQN, and Average DQN) of the Maxmin paper [1]. We also note that the reported learning curve of SAC-Ens is close to the performance of vanilla SAC, as shown in Figure 1 of the REDQ paper [2]. More importantly, using the average target Q-value (SAC-Ens) underperforms the minimum target Q-value (REDQ(UTD=1), SAC-Min), as reported in Figure 3 and Figure 4 (learning curves of Average DQN and Maxmin DQN) of the Maxmin paper [1]. This occurs because using the average target Q-value suffers more harshly from overestimation bias than using the minimum target Q-value, as illustrated in Figure 10 in the Appendix of our paper. [1] Lan, Qingfeng, et al. "Maxmin Q-learning: Controlling the estimation bias of Q-learning." arXiv preprint arXiv:2002.06487 (2020). [2] Chen, Xinyue, et al. "Randomized ensembled double Q-learning: Learning fast without a model." arXiv preprint arXiv:2101.05982 (2021). We hope that our explanation and additional experiments may address much of your concern. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I find the additional results (the two tables) for Q1 very interesting, which also better support the claim in the paper. I hope the authors will incorporate the discussion into the revision. I therefore raised the score to 6. --- Reply to Comment 1.1.1: Title: Thanks Comment: We appreciate your insightful and positive feedback. We will incorporate your considerate review into the revised version.
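The chi-square hypothesis testing for independence used in this rebuttal (significance level alpha = 0.025) can be sketched generically. This is our own minimal stdlib implementation for a 2x2 contingency table, not the authors' exact evaluation protocol; for 2x2 tables the chi-square p-value with one degree of freedom has a closed form via erfc:

```python
import math
import numpy as np

def chi2_independence_accept(table, alpha=0.025):
    """Pearson chi-square test of independence for a 2x2 contingency table.

    Returns True when independence is *accepted* (p-value > alpha), which is
    the event counted by the acceptance ratios quoted in the rebuttal tables.
    """
    table = np.asarray(table, dtype=float)
    # Expected counts under independence: outer product of the marginals.
    expected = table.sum(axis=1, keepdims=True) @ table.sum(axis=0, keepdims=True) / table.sum()
    chi2 = ((table - expected) ** 2 / expected).sum()
    # A 2x2 table has one degree of freedom, so the chi-square survival
    # function reduces to erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2.0))
    return p_value > alpha

# A balanced table is accepted as independent; a diagonal one is rejected.
assert chi2_independence_accept([[25, 25], [25, 25]]) is True
assert chi2_independence_accept([[50, 0], [0, 50]]) is False
```

For general r x c tables one would instead use a library routine such as `scipy.stats.chi2_contingency`.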
On the Consistency of Maximum Likelihood Estimation of Probabilistic Principal Component Analysis
Accept (poster)
Summary: This work studies the consistency of the maximum likelihood estimates of probabilistic PCA. These estimates are unique only up to a rotation, so the authors work in the quotient space, verify that the conditions stated in Redner [1981] hold (their Lemmas 6.1, ..., 6.5), and apply the consistency results available in that work. Strengths: This paper is overall well written, with a thorough introduction to equivalence classes and quotient topological spaces. Weaknesses: Compared to Redner [1981] and Wald [1949], the novelty is not enough. I see the contribution as mainly an application of the result of Wald to the case of probabilistic PCA. The general idea is rather straightforward: using the quotient space instead of $R^d$ is the first idea that comes to mind. I agree that the precise formulation is not trivial, but that alone, although the writing is excellent, is not enough for a publication in my opinion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you clarify the contributions of the paper? Is there something I missed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Respectful Disagreement}$ Thank you for your careful review. We have clarified the details of our theoretical contributions, and we respectfully disagree with the assessment that the contribution is mainly an application of the result of Wald and that there is inadequate reproducibility. Not only are there significant technical novelties where our work differs from Redner's (1981), but our research will definitely impact future research. $\textit{Novel Contributions}$ We agree that the quotient of $\mathbb{R}^d$ is the first idea that comes to mind, but it takes a significant amount of technical novelty to rigorously build a framework that is amenable to an extension of Wald's work. The PPCA model has been around since 1999. In light of the widely used ML estimates, it is highly unlikely that nobody tried to justify them through an argument based on Wald's criteria. The main difficulty was to interpret Wald's conditions in a quotient space framework, which was partly introduced in Redner's work but is neither precise nor complete. While Redner's work is the first one to $\textit{attempt}$ to address this issue with the help of quotient topology, the treatment developed by Redner is rather misleading, as highlighted in our work (l67-78, l234-247). We humbly request the reviewer to glance through Redner's paper where he states Theorem 4. Unfortunately, it is assumptions 5 (and 3) that Redner takes for granted. For instance, it is simply untrue that the quotient metric behaves in a similar fashion to the original Euclidean metric, and therefore it takes a great deal of effort to unambiguously define the right notion of convergence to be used for assumptions 5 (and 3), which Redner seems to have missed. Redner mentions that the MLE converges to the true parameter in this topological space and that this follows from the theory of quotient spaces, which is not true.
In general, metric topology and quotient topology are two very different things, and their interaction depends highly on the equivalence relation $\sim$. Here is a counter-example: Consider the space $X=\{(x,y):x,y\geqslant 0\} - \{(0,0)\}$ with the Euclidean norm, and define the equivalence relation $\sim$ on $X$ given by: $p_1\sim p_2\iff p_1=\lambda p_2$ for some $\lambda>0$, i.e., the points $p_1,p_2$ are equivalent if they are on the same ray. The projection map here is $\pi:X\to X/\sim$ given by $\pi(p)=[p]$. Under this equivalence relation it is true that $d_1([x],[y])=0$ (from the definition of the pseudometric $d_1$ after l208 in the paper) for any two distinct $[x],[y]\in X/\sim$, which can be easily argued by taking $x_1=x/n,y_1=y/n$ with $n\in\mathbb{N}$, so that $x_1\sim x,y_1\sim y$, and letting $n\to \infty$. Therefore the usual metric structure totally breaks down, as any two different equivalence classes have a distance of zero between them. But the quotient topology is non-trivial and we can still talk about convergence there. To extend Wald's condition, the space $X$ must be endowed with a non-trivial metric structure, which Redner has missed; for instance, it is impossible in the above example. The quotient space has to pass through a pseudometric to achieve the desired metric space structure in order to set up the ground for Wald's results to extend. In general, it has been very hard to interpret something meaningful out of the intangible metric structure from a statistical point of view. However, when the quotient is taken with respect to a closed set one can provide a concrete, tangible geometrical description of the metric space through Lemma 4.3, which in our work has been done in the context of PPCA. The PPCA model is $\textit{not}$ important here; rather, the $\textit{key}$ part is that the quotient has to be with respect to a $\textit{nice enough equivalence}$. This key technical but unavoidable construction seems to have gone unnoticed by Redner.
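The scaling argument in this counter-example is easy to check numerically (a toy illustration with two representative rays; the specific points are our own choice):

```python
import numpy as np

# Two distinct rays in the punctured positive quadrant: the classes [x] and
# [y] differ, yet representatives x/n ~ x and y/n ~ y can be brought
# arbitrarily close to each other (near the deleted origin), so the induced
# pseudometric distance between the classes is 0.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
dists = [float(np.linalg.norm(x / n - y / n)) for n in (1, 10, 100, 1000)]
# The infimum over representatives tends to 0 as n grows, even though the
# rays never intersect.
```

This mirrors why the pseudometric $d_1$ collapses all classes in the example, while the quotient topology itself remains non-trivial.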
Not only is our contribution the building of this connection, which requires a significant amount of bridging between the different equivalences governing the quotient space, but coming up with an explicit description of the metric in the quotient space is a significant contribution that was only ``existential" prior to us. This explicit description plays a crucial role in the proof of assumption 5 (our Lemma 6.5), which again was assumed trivial in Redner's work (see the proof of Theorem 5). Also, note that in that proof, it was taken for granted that $g^{*}(x,\gamma,r)$ is measurable, which is not true generally. It is only upper semianalytic (see Proposition 7.47, p179 in the reference we gave in the proof of Lemma 6.2). $\textbf{Impact and reproducibility for future research:}$ An immediate consequence of our work is (strongly) consistent covariance estimation for the PPCA process, which wasn't known before (please see the second-to-last paragraph of reviewer ZSYo's response). Our rigorous framework opens doors for many other general statistical models where rotational ambiguity (or, more generally, ambiguity due to a closed space or symmetric parametrization) is present. One instance is the matrix factorization problem, where one is interested in the estimation of two matrices $\textbf{A},\textbf{B}$ such that the data matrix satisfies $\textbf{X}=\textbf{A}\textbf{B}+\text{noise}$, thanks to Reviewer 1. Our methodology is highly flexible in the sense that it does $\textit{not}$ depend much on the statistical model as long as there are some regularities in the geometry of the parameter space $\Theta$ (please see the last paragraph of reviewer ZSYo's response). Our contribution includes an explicit formulation for smooth enough equivalence relations that was very much needed to make this flexible topological toolkit widely accessible to future researchers. Finally, we welcome your further suggestions and comments to improve the final manuscript.
$\textit{We hope we have addressed all comments satisfactorily and kindly request you to revise your score.}$ --- Rebuttal Comment 1.1: Comment: Thanks for your answer. I agree that Redner's work was incomplete and that the current work fills this hole (in my opinion Redner's work is the reason why no one tried to prove consistency of PPCA before). I agree that this method could be used for different models. Nevertheless, once the mistake in Redner's work has been noticed, the distance in Lemma 4.3 is the first you would think of. Verifying Wald's assumptions does not pose any technical challenge as long as I can see. I keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging with us for a productive discussion. We would like to bring the following points to your attention with regard to Redner's work and the verification of Wald's criteria. i) We assume you meant Lemma 4.3. We respectfully disagree that the distance in that result is the first one that comes to anyone's mind and that is because it is not in general a distance on quotient parameter space $X/C$ (it is a distance on some other quotient space $X_{\approx}$). In our case, it happens to be a distance on $X/C$. This connection and why they are the same is far from being trivial. Please refer to our response to the first question of Reviewer gPBN where we exclusively discuss this point and consider looking at the reference provided there. ii) Redner (1981), `Note on the consistency of the maximum likelihood estimate for nonidentifiable distributions', Annals of Statistics is a fairly cited paper in the relevant literature. But if you look deeper into those citations, you will find it is mostly an acknowledgement saying that the situation has been handled within a quotient space (which is, unfortunately, a misconception). The methodologies were never used afterwards as the foundation was shaky. 
Therefore, it is important to have a correct foundation for it to be useful for future practitioners and clear up certain misconceptions. In our work, we tried our best to maintain neutral language while pointing out these issues. Contrary to Redner's work, our framework is not limited to an abstract piece of mathematics as it allows us to prove consistent covariance estimation as we pointed out in the response of the previous reviewer. Therefore, it is useful for practitioners and more experimentally inclined researchers for implementation purposes. iii) With regard to your last point, it is true that verifying Wald's conditions is far less challenging than actually building the theory in the correct way. With that said it has its challenges. As we said, in general, sup (or inf) over an uncountable family of functions gives rise to measurability issues which are hardly addressed (or even mentioned) in the previous works. Also in the third line of our proof of Lemma 6.3 (in the upper bound of log-likelihood), the short argument relied on the homoskedastic error term assumption for the PPCA model. Had this been a Factor model with heteroskedastic error term the proof would not have been the same. Therefore we tried to exploit the model assumptions on PPCA whenever we could and it is not merely a replication of something more general.
Summary: In this work, the authors propose a novel topological framework and show that the maximum likelihood (ML) solution of probabilistic principal component analysis (PPCA) is consistent in an appropriate quotient Euclidean space. The consistency results encompass more estimators beyond the ML solution. In addition, the ML solution has been shown to achieve strong consistency when the parameter space is compact. Strengths: This work seems to be the first to establish the (strong) consistency result of the ML solution of PPCA. Weaknesses: It would be ideal to remove the assumption that the latent dimension $q$ is known. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am not familiar with the techniques used in this work. Does consistency in the appropriate quotient Euclidean space mean that the ML estimate $(\hat{\mathbf{W}},\hat{\sigma}^2)$ converges to the true parameters $(\mathbf{W}_0, \sigma^2_0)$ in probability up to rotation (i.e., $\hat{\mathbf{W}}$ converges to $\mathbf{W}_0 \mathbf{R}$ with $\mathbf{R}$ being orthogonal)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
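For context, the closed-form ML solution of PPCA (Tipping & Bishop, 1999) can be checked numerically: with a large sample, $\widehat{W}\widehat{W}^T+\widehat{\sigma}^2 I$ recovers the true covariance even though $\widehat{W}$ itself is identified only up to rotation. This is a toy sketch under our own choice of $W_0$ and $\sigma^2$, not the paper's consistency proof:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 4, 2, 200_000
W0 = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [1.0, -1.0]])
sigma2 = 0.5
cov0 = W0 @ W0.T + sigma2 * np.eye(p)

# Sample x = W0 z + eps from the (zero-mean) PPCA model.
X = rng.standard_normal((n, q)) @ W0.T + np.sqrt(sigma2) * rng.standard_normal((n, p))

# Closed-form MLE from the sample covariance spectrum: sigma^2_ML averages
# the discarded eigenvalues; W_ML uses the top-q eigenpairs.
S = X.T @ X / n
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]                       # sort eigenvalues descending
sigma2_hat = lam[q:].mean()
W_hat = U[:, :q] * np.sqrt(lam[:q] - sigma2_hat)     # = U_q (Lambda_q - sigma^2 I)^{1/2}

cov_hat = W_hat @ W_hat.T + sigma2_hat * np.eye(p)   # ~ cov0 for large n
```

With $n$ this large, `cov_hat` matches `cov0` to roughly two decimal places, which is the concrete face of the covariance-consistency result discussed in the rebuttal below.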
Rebuttal 1: Rebuttal: $\textbf{We have addressed all the questions raised by the reviewer}.$ Thank you for your careful review and assessment. $\textit{You are correct}$. If we assume $\sigma^2=\sigma_0^2$ is known then our result does imply that $\widehat{W}$ converges to $W_0R$ for some orthogonal matrix $R$, which is to say $\mathbb{P}([\widehat{W}]\to [{W}_0])=1$. However, it is not just an abstract and technical result. In this case, using standard results from probability theory it is possible to argue that: $\mathbb{P}(\widehat{W}\widehat{W}^T\to\widehat{W}_0\widehat{W}_0^T)=1$, as the function $W\to WW^T$ is continuous. This is a concrete result (and can be stated without referring to the quotient space) which says that the true covariance matrix $W_0W_0^T+\sigma_0^2I$ can be consistently estimated through the maximum likelihood estimates $\widehat{W}\widehat{W}^T+\sigma_0^2I$. On a slightly more technical side, note that our quotient space construction was necessary as an intermediate step in the previous argument, in fact, we crucially used the continuous function $W\to WW^T$ factors through the quotient space using Theorem 4.2 in our paper, we can give more details if you want but skip for now to keep the discussion simpler. Lastly, the known $\sigma^2$ assumption was only for illustration purposes and can of course be removed, and in that case, the parameter $(\widehat{W},\widehat{\sigma}^2)$ recovers $(W_0,\sigma_0^2)$ up to a closed set $C$ defined in our paper just above the line 229, and here $\widehat{W}\widehat{W}^T+\widehat{\sigma^2}I$ consistently estimates the covariance of the PPCA process, a theoretical guarantee with respect to the maximum likelihood estimate that was not known before. Finally, $\textit{regarding the weakness}$ you pointed out, thank you for raising this concern. We can remove the assumption that: $q$ is known. Our proof techniques work as long as $1\leqslant q\leqslant p$ is a fixed integer, i.e. 
they do not grow with the sample size $n$. Usually, estimating $q$ is a different line of interest (related to model selection problems); moreover, $q$ can be estimated consistently through MLE. We discussed all these relevant works in lines 134-148 of our paper, which motivated us not to focus too much on $q$. The authors warmly welcome further questions, suggestions to improve the current manuscript, or anything the reviewer would want to see in the final version. Due to the page constraint, we had to make choices about which content to present, and our main focus was to judiciously build a rigorous framework and make it available for the community, a problem that was only partly addressed before by the previous work, Redner (1981). $\textit{We hope we have addressed all your comments satisfactorily and kindly request you to revise your score.}$ --- Rebuttal Comment 1.1: Title: Correcting a minor typo Comment: Hello everyone, it has come to our notice that there is a minor (and obvious) typo in the response we wrote for reviewer MhxU. In the second line of the rebuttal it should have been $\mathbb{P}(\widehat{W}\widehat{W}^T\to W_0W_0^T)=1$ and not $\mathbb{P}(\widehat{W}\widehat{W}^T\to \widehat{W}_0\widehat{W}_0^T)=1$.
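The covariance-recovery argument in this rebuttal can be checked numerically. The sketch below is ours, not the authors': it uses the classical Tipping-Bishop closed-form ML solution for PPCA (eigendecomposition of the sample covariance), with illustrative dimensions and a fixed seed, to show that $\widehat{W}$ is identified only up to rotation while $\widehat{W}\widehat{W}^T+\widehat{\sigma}^2 I$ recovers the true covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 5, 2, 50_000

# Ground-truth PPCA parameters (illustrative values, not from the paper).
W0 = rng.normal(size=(p, q))
sigma2_0 = 0.5
C0 = W0 @ W0.T + sigma2_0 * np.eye(p)          # true covariance W0 W0^T + sigma0^2 I

# Sample x = W0 z + eps with z ~ N(0, I_q), eps ~ N(0, sigma0^2 I_p).
Z = rng.normal(size=(n, q))
X = Z @ W0.T + np.sqrt(sigma2_0) * rng.normal(size=(n, p))

# Closed-form MLE (Tipping & Bishop, 1999): eigendecompose the sample covariance.
lam, U = np.linalg.eigh(X.T @ X / n)           # eigenvalues in ascending order
lam, U = lam[::-1], U[:, ::-1]                 # reorder to descending
sigma2_hat = lam[q:].mean()                    # average of the discarded eigenvalues
W_hat = U[:, :q] * np.sqrt(lam[:q] - sigma2_hat)

# W_hat is identified only up to rotation: W_hat R gives the same covariance...
R = np.linalg.qr(rng.normal(size=(q, q)))[0]   # a random orthogonal matrix
assert np.allclose((W_hat @ R) @ (W_hat @ R).T, W_hat @ W_hat.T)

# ...but the covariance estimate itself is rotation-free and close to the truth,
# matching the rebuttal's consistency claim for W W^T + sigma^2 I.
C_hat = W_hat @ W_hat.T + sigma2_hat * np.eye(p)
print(np.max(np.abs(C_hat - C0)))              # small for this large n
```

The map $W \mapsto WW^T$ collapses the rotation orbit exactly as the rebuttal's continuity argument requires: the first assertion holds for every orthogonal $R$, not just this random one.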
Summary: The paper discusses consistency of the maximum likelihood (ML) estimation in probabilistic principal component analysis (PPCA). Despite its wide applicability, proving ML estimation consistency in PPCA has been a challenging task because of the non-identifiability of the problem. The author(s) extend the quotient space topology idea presented in Redner (1981) to a more general setting in which the ML estimator is one of the estimators covered. Also, strong consistency of the ML estimator is proved under a compactness assumption. Strengths: 1. Detailed description of the quotient space topology and the associated metric. The constructive approach was clear and concise. 2. With standard assumptions on the quotient space, the author(s) derived both weak and strong consistency of the ML estimator in the PPCA model. 3. Verification of Wald's conditions [Wald, 1949] in the quotient space topology with sufficient technical details. Weaknesses: 1. Some remarks on Wald's conditions on the quotient space, clarifying how those conditions could be interpreted there, would be useful. 2. One or two examples to clarify the result in Theorem 4.2 would be helpful. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What would be $X_{\equiv}$ of Lemma 4.3 in the case of PPCA, and in what sense are $X_{\equiv}$ and the quotient $X/C$ "identical" in that lemma? The two metrics defined on those two topological spaces are not the same, right? 2. I was wondering whether it would be possible to obtain the consistency result for the MLE in PPCA as a corollary of the results proved in the following: Van der Vaart, A. W., & Wellner, J. A. (1992). Existence and consistency of maximum likelihood in upgraded mixture models. Journal of Multivariate Analysis, 43(1), 133-146. I understand the above may be for mixture models, but can't we have a corollary for one mixing component, which would be the marginal distribution of the data given $W$ and $\sigma$ in your case.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the author(s) have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{We addressed all the questions raised by the reviewer.}$ Thank you for your careful review and meticulous reading and assessment. We are very happy that you asked two excellent questions, which we address below. Before answering, let us clarify the picture in depth; we recall a few things. i) We have a topological (and metric) space $X$ which we want to quotient with respect to an equivalence relation $\sim$. ii) The problem is that there is, in general, no nice metric structure on $X/\sim$, which seems to have gone unnoticed in the previous work by Redner (1981). There is, however, a well-known pseudometric (defined after line 208), and the topology this pseudometric generates can be very different from the quotient topology (please see the example provided in reviewer QWak's response). So we have to be very careful when speaking of convergence and continuity, as those concepts crucially rely on which topology we are using. iii) To extend Wald's conditions we need a metric space, as written towards the end of Wald's original paper. Therefore $X/\sim$ with a pseudometric is not enough. However, this pseudometric can be turned into a metric through another equivalence relation $\approx$ (which identifies two points whenever their distance under this pseudometric is 0, thus turning the pseudometric into a metric). iv) Now there are two equivalence relations on $X$, namely $\sim$ and $\approx$ ($\approx$ and $\sim$ are interrelated, as we mentioned in line 216). Therefore there are two possible quotient spaces: $X/\sim$ endowed with a pseudometric, and $X/\approx$ endowed with a metric. Of course, those spaces are related, as $\sim$ and $\approx$ are related. But we want to work with $X/\approx$, since that has a metric space structure and we have hope of checking Wald's conditions there.
v) Notice that $X/\sim$ (which is the same as $X/C$) is our parameter space, where we should be trying to extend Wald's conditions; but in the previous point, we said we want to work in $X/\approx$, because $X/\sim$ might fail to have the desired metric space structure. vi) It is a non-trivial fact (please see p. 18 of Lipschitz Algebras, Nik Weaver) that for some `nice' equivalence relations $\sim$ (in our case, a quotient by a closed set $C$), it is true that $\sim=\approx$, where equality means they partition $X$ into the same equivalence classes. The great news is that $X/\approx=X/\sim$ then implies that the pseudometric on $X/\sim$ is actually a metric, which we crucially exploit. $\textit{Now to answer your questions}$: 1) The space $X_{\approx}$ is abstract and does not have a very clear picture in general, but in our paper it is essentially $X/C$, which does have a clear picture. You are absolutely right that in general the metric structures are not the same, but in our case they are, which is one of the key things needed for Wald's idea to work. Lastly, we mention that for most statistical applications (like Redner (1981)) we quotient by a closed set, and then we are in a good situation. However, the reference we provided in point (vi) above outlines a fairly general framework, beyond just quotienting with respect to a closed set, under which $\sim=\approx$ holds, and it also gives solid counterexamples when any of those conditions are violated, which again explains why care is needed to build a correct foundation. 2) We are aware of the work of Van der Vaart, A. W., \& Wellner, J. A. (1992). Our work is not a corollary of their theorems, as they assume model identifiability (see their assumption (3.3) in Section 3, consistency). Finally, we cordially thank you again for reading our work in depth, and we will include an example or two, along with a brief discussion of how Wald's conditions could be interpreted in the quotient space, in the final version.
Due to the space constraints of this conference, we had to make choices about which topics to present. With regard to the previous works along this line, the authors strongly felt it is highly important to keep things very transparent and build a solid foundation. This is the reason why we kept every detailed argument as part of the presentation. We had to sacrifice a bit on the possible implications of our work, one of which is notably (strongly) consistent covariance estimation for the PPCA model (please see the second-to-last paragraph of our response to reviewer ZSYo). However, the argument can get slightly technical, and we can elaborate more if you are interested; overall, it is an immediate implication of the convergence in the quotient space. Lastly, we very much welcome your further suggestions to improve our final version. $\textit{We hope we have addressed all your questions satisfactorily and kindly request you to revise your score.}$ --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the author(s) for their detailed explanations in the rebuttal. I am happy with the answers they provided to my queries. I have increased my rating to "accept". --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging with us in a productive post-rebuttal discussion. We appreciate your cooperation.
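For readers less familiar with the construction discussed in points (ii)-(vi) of the rebuttal above, the standard quotient pseudometric and its metric identification can be written out as follows. This is a generic textbook construction in our own notation; it may differ cosmetically from the definition after line 208 of the paper:

```latex
% Quotient pseudometric on X/\sim for a metric space (X, d): infimum over
% finite chains that may hop between equivalent points.
d_\sim\bigl([x],[y]\bigr)
  = \inf\Bigl\{\textstyle\sum_{i=1}^{n} d(p_i,q_i) \;:\;
      p_1\in[x],\; q_n\in[y],\; q_i\sim p_{i+1}\ \text{for } i<n \Bigr\}.
% In general d_\sim is only a pseudometric: d_\sim([x],[y]) = 0 need not imply
% [x] = [y].  Declaring
[x]\approx[y] \iff d_\sim\bigl([x],[y]\bigr) = 0
% and passing to X/\approx turns d_\sim into a genuine metric.  Point (vi) of
% the rebuttal: for sufficiently nice relations (e.g. quotienting by a closed
% set C), \sim and \approx give the same partition, so X/\sim itself is a
% metric space and Wald's conditions can be checked there.
```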
Summary: The paper addresses "probabilistic PCA" (PPCA), a setup where the vector of $p$ observations $x$ can be written as $x = Wz + \epsilon$, where $\epsilon$ is additive (Gaussian, centered, iid) noise, the matrix $W \in R^{p \times q}$ is unknown, and $z \in R^q$ is the other unknown. Only the value of $q$ is known in advance, and the goal is to recover $W$ and $z$ and remove the noise. The main challenge is to address the fact that the model is invariant w.r.t. applying a rotation $R$ to $W$ and $z$: $Wz = (WR)(R^T z)$. Note that $WR$ and $R^T z$ have the same Euclidean norms as $W$ and $z$. The contribution is to propose an analysis in the quotient space of the vector space by the rotations. Strengths: The paper contains a rigorous consistency analysis of the PPCA problem with respect to rotation invariance. Weaknesses: The main focus of the paper on rotation invariance does not seem to be the biggest concern with PPCA or other related methods (see Questions). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. There is a broader set of problems where rotation invariance causes trouble, and that set of problems is directly related to the PPCA problem considered: all problems where the observations / measurements are of the form $\langle u_i, u_j\rangle$ or $\langle u_i, v_j\rangle$. This comprises PPCA, but also matrix factorization and embedding problems. Can the authors discuss how their method can extend to those problems? 2. Is there a relation between rotation invariance and computational complexity? Can a rotation be a distractor in the gradient-step convergence of a first-order method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The focus seems to be relatively narrow, and the implications of the proven results are not sufficiently emphasized. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{We addressed all the questions and weaknesses raised by the reviewer}$. Thank you for your careful review and valuable assessment. First, we address your questions. 1) You are correct that `there is a broader set of problems where rotation invariance causes trouble', and we agree that PPCA is just a part of that broad set. Our methodology and framework are readily applicable even in those cases. To continue with your question, let us assume that (for matrix factorization) we are interested in the estimation of two matrices $\textbf{A},\textbf{B}$ such that the data matrix $\textbf{X}=\textbf{A}\textbf{B}+\text{noise}.$ Assume that $X\in\mathbb{R}^{n\times p},A\in\mathbb{R}^{n\times m},B\in\mathbb{R}^{m\times p}$. Observe that if $(A, B)$ is a solution to $X=AB$, then so is $(AR, R^TB)$, where $R$ is any $m\times m$ orthogonal matrix. Let $A_0,B_0$ be the true values of $A$ and $B$, respectively. We can try to check Wald's conditions on the space $\mathbb{R}^{n\times m}/\sim_{A_0}\times \mathbb{R}^{m\times p}/\sim_{B_0}$, where $\sim_{A_0},\sim_{B_0}$ are the equivalence relations whose classes are given by $\sim_{A_0}:=\{ A' :A'A'^T=A_0A_0^T \}$ and $\sim_{B_0}:=\{B': B'^TB'=B_0^TB_0\}$. Note that $\sim_{A_0},\sim_{B_0}$ are both closed subsets of Euclidean space. Geometrically, we are in a nice situation here. In fact, it will work out as long as the likelihood function for this matrix factorization model has some regularity. More generally, if the parameters $u_i,v_j$ are of interest and the likelihood contains terms like $\langle u_i,v_j\rangle$, you can always carry out the above construction. 2) We are unsure how this question relates to our work, as we have a closed-form expression for the maximum likelihood estimates in our case, stated on page 4 in the description of the PPCA model. Numerical schemes (like gradient descent) are usually applied when closed forms are $\textit{not}$ available. Having said that, we respond to the question.
If rotational invariance (or, more generally, model non-identifiability) is present, then it will lead to too many (possibly uncountably many) local minima, which is bad news. Too many local minima might cause trouble, especially if their cost is higher than the global minimum of the objective function. But it seems that even in this case, it is possible to treat that large number of local minima as equivalent with respect to their costs and do a similar kind of analysis; that would mean trying to understand the dynamics of the gradient-step convergence of a first-order method on a quotient space (which is non-Euclidean). It is possible, though, to use our framework even in this case. But usually, to understand the dynamics, we need some sort of smoothness (or a notion of differentiability) assumption beyond continuity, which seems difficult on quotient spaces. It is possible to do differential calculus on quotient spaces if they are manifolds and admit smooth structures. However, we can prove that those quotient spaces are not manifolds. So, standard methodologies from differential geometry do not apply. This seems like an excellent (but, for the reasons we mentioned, considerably harder) follow-up line of research for the future, but for now it is not very related to the work we have done. Lastly, we respectfully disagree with the $\textit{weakness}$ pointed out by the reviewer, for the following reasons: 1) We understand that our result is stated in terms of abstract topological spaces, but it readily implies that consistent covariance estimation is possible. Though the PPCA community does not care much about rotational invariance, it does care about consistent covariance estimation, which was not known before. For instance, our result implies $\mathbb{P}(\widehat{W}\widehat{W}^T+\widehat{\sigma}^2I\to W_0W_0^T+\sigma_0^2I)=1.$ This follows from the standard fact in probability theory that $\mathbb{P}(X_n\to c)=1\implies \mathbb{P}(f(X_n)\to f(c))=1$ if $f$ is nice (continuous).
In our case, the map $[(W,\sigma^2)]\to WW^T+\sigma^2I_p$ can be seen as a continuous map from the quotient space $X/C$ to the space $\mathbb{R}^{p\times p}$ by Theorem 4.2. 2) As you already observed in the answer to the first question, our methodology is highly flexible in the sense that it does $\textit{not}$ depend much on the statistical model, as long as there is some regularity in the likelihood function and in the geometry of the parameter space $\Theta$. This makes it even more usable, even for non-linear models like nonlinear independent component analysis, where the observed data are generated as $x=f(z|\theta)+\text{noise}$, where $z$ is a latent random vector, $\theta$ is a parameter, and the function $f$ is $\textit{non-linear}.$ The methodology we developed in our work for the PPCA model could be readily applied in this case, as our quotient space construction barely depends on the fact that $f$ is linear for PPCA. It would be an interesting line of work to investigate the connection between the methodologies discussed in our work and those of Zheng et al., `On the Identifiability of Nonlinear ICA: Sparsity and Beyond', NeurIPS 2022, where the authors recover the latent sources up to an equivalence, but using other techniques. Numerous research directions can come out of this work. The primary focus for us was to build a strong theoretical foundation (which was missing in the literature; cf. Redner (1981)). This allows seamless and flexible applications. Finally, we hope to make abstract topological ideas more mainstream and practical. Regarding your assessment that we did not sufficiently emphasize the implications of our work, please refer to the last paragraph of our response to reviewer gPBN, where we discuss this point. We warmly welcome your suggestions and comments to improve our final version. $\textit{We hope we have addressed all comments satisfactorily and kindly request you to revise your score.}$
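The rotation-invariance claim in answer 1) is easy to verify numerically. Below is a minimal sketch of our own (toy dimensions, hypothetical variable names, assuming numpy) checking that $(AR, R^TB)$ reproduces both the data matrix and the invariants $AA^T$ and $B^TB$ that define the equivalence classes in the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 6, 3, 4

# Hypothetical factors and a random m x m orthogonal matrix R.
A = rng.normal(size=(n, m))
B = rng.normal(size=(m, p))
R = np.linalg.qr(rng.normal(size=(m, m)))[0]

# (A, B) and (A R, R^T B) produce the same data matrix X = A B ...
assert np.allclose(A @ B, (A @ R) @ (R.T @ B))

# ... and the same rotation-invariant quantities A A^T and B^T B -- exactly
# the quantities fixed by the equivalence classes ~_{A_0} and ~_{B_0}.
assert np.allclose((A @ R) @ (A @ R).T, A @ A.T)
assert np.allclose((R.T @ B).T @ (R.T @ B), B.T @ B)
print("rotation invariance verified")
```

Because the checks rely only on $RR^T = R^TR = I$, they hold for every orthogonal $R$, which is why the quotient construction collapses each orbit to a single point.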
NeurIPS_2023_submissions_huggingface
2023
StateMask: Explaining Deep Reinforcement Learning through State Mask
Accept (poster)
Summary: The paper presents an interesting model (the so-called StateMask) to identify critical states for an agent's final reward. The goal of StateMask is to find the non-important time steps and randomize their actions without changing the expected total reward of the target agent. A PPO-based algorithm is leveraged to formally generate the model. Several numerical examples are shown to demonstrate the merits of the proposed model. Strengths: The paper presents an interesting model (the so-called StateMask) to identify critical states for an agent's final reward. The goal of StateMask is to find the non-important time steps and randomize their actions without changing the expected total reward of the target agent. Several numerical examples are shown to demonstrate the merits of the proposed model. Weaknesses: The paper seems to claim the method is suitable for all decision-making processes. However, for the type of shortest-path-finding problems, it is questionable whether the method will be effective. In fact, the critical elements are not states, but the critical paths. So, it would be interesting to know what kinds of processes StateMask fits, which is missing in the current version. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The optimization of (2) makes sense to provide a mask network. However, it is interesting to know, if the state space is huge, how to find those critical states. For example, how are the critical states in Figure 4 and Figure 5 identified? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer AA7o for the constructive and insightful comments. Please see our response to each of your questions below. **1. The paper seems to claim the method is suitable for all decision-making processes. However, for the type of shortest-path-finding problems, it is questionable whether the method will be effective. In fact, the critical elements are not states, but the critical paths. So, it would be interesting to know what kinds of processes StateMask fits, which is missing in the current version.** We do not claim that our method is suitable for all decision-making processes. Our method may not be suited to explaining an agent with poor performance: in this case, the mask net may not differentiate the important and unimportant states, since whether or not the current action is replaced with a random action will have little influence on the final performance. While our method identifies the states most critical to the expected total reward, it is also implicitly associated with the actions taken at those states. By treating the target agent, with its fixed policy, as part of the environment (as Figure 2 shows), StateMask actually takes both the state and the corresponding action of the target agent into account. Therefore, StateMask indirectly figures out the importance of the state-action pair. Returning to the shortest-path-finding problem: by identifying these crucial state-action pairs, we can reconstruct the corresponding path. Specifically, we applied our method in a shortest-path-finding environment, MiniGrid-Empty-6x6-v0 [1]. As shown in Figure 2 of the attached PDF, the goal of the agent is to reach the green goal square using as few steps as possible. When the agent is optimal, as trajectory 1 shows, our method figures out that all steps in the path are important.
When the agent is sub-optimal, as trajectories 2 and 3 show, our method highlights the path starting from the last sub-optimal action as the most critical one, since in the pinpointed critical path the agent must take actions that reach the green goal square with the fewest steps possible in order to keep the same reward. These insights underline the adaptability of StateMask to the shortest path problem and highlight its ability to capture critical steps in diverse decision-making contexts. [1] Minimalistic Gridworld Environment (MiniGrid). https://github.com/maximecb/gym-minigrid. **2. The optimization of (2) makes sense to provide a mask network. However, it is interesting to know, if the state space is huge, how to find those critical states. For example, how are the critical states in Figure 4 and Figure 5 identified?** Our method is independent of the size of the state space. As stated in Sec. 3, we measure the significance of states based on the probability of the StateMask output being 0, i.e., the probability of keeping the target agent's action. Taking Figure 4 and Figure 5 as examples: for a single trajectory, we calculate the importance score of each time step using StateMask and then rank the corresponding importance scores from top to bottom to identify the critical states. In addition, as depicted in Figure 4, we successfully apply our StateMask approach to the Pong game, which has a large state space ($210\times160\times3$). Despite the complexity of the state space, our method accurately predicts the importance of states with high fidelity, as Figure 3 in the main text indicates. Moreover, in the paper, we demonstrate the effectiveness of our approach in the MuJoCo games, which have an infinite number of states due to their continuous state spaces. This result further validates that our approach remains unaffected by the size or complexity of the state space.
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the additional numerical experiments and detailed explanations addressing my concerns. Now the contributions of the paper are clear. --- Reply to Comment 1.1.1: Title: Response to Reviewer AA7o Comment: We sincerely appreciate your positive feedback on the additional experiments and detailed explanations we provided. It is gratifying to know that our efforts have contributed to clarifying the contributions of the paper. We will include the additional experiment results and the detailed explanations in the next version of our manuscript.
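The scoring-and-ranking procedure the rebuttal describes (score each time step by the probability that the mask net keeps the agent's action, then rank the scores in descending order) can be sketched as follows. The probabilities below are random stand-ins, not outputs of a real mask network, and the function name is our own:

```python
import numpy as np

# Stand-in for trained mask-net outputs over one trajectory of 10 steps:
# p_keep[t] = P(mask output = 0 at step t), i.e. keep the agent's action.
# Per the rebuttal, a higher p_keep marks a more critical state.
rng = np.random.default_rng(0)
p_keep = rng.uniform(size=10)

def rank_critical_steps(p_keep, k=3):
    """Indices of the k most critical time steps, most critical first."""
    order = np.argsort(p_keep)[::-1]   # importance scores, descending
    return order[:k].tolist()

critical = rank_critical_steps(p_keep, k=3)
print(critical)   # the time steps ranked most critical to the final reward
```

This ranking step is also why the method works even when few steps are actually masked: the ordering of the scores, not the binary mask itself, identifies the critical states.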
Summary: This paper focuses on providing an explanation for deep reinforcement learning agents by identifying the important time steps within an episode. The authors propose a module called StateMask, which replaces the original agent's policy with random actions in specific time steps. By preserving the overall episode returns, only non-important time steps are randomized. Strengths: One notable strength of this paper is the intriguing concept of masking actions, which allows for identifying important time steps without altering the learned agent or its learning process. Weaknesses: 1. The objective Eqn. 2 is problematic. The optimal solution of Eqn. 2 is $\pi=\bar{\pi}$, i.e. $\tilde{\pi}(a_t^e=0|s_t)=1$, where the integration policy $\pi$ degenerates to the target policy $\bar{\pi}$. If StateMask $\tilde{\pi}$ becomes a constant policy, it fails to identify any important time steps. 2. Even if Eqn. 2 cannot be optimized to zero but is instead reduced to a small value, StateMask $\tilde{\pi}$ would tend to predict $a_t^e=0$ in most states but $a_t^e=1$ in a few specific states. Since $a_t^e=1$ indicates a non-important state, this method can only identify a limited number of non-important states and fails to capture important time steps. Identifying non-important states is not consistent with the major motivation of this paper. 3. The motivation behind using the absolute error in Eqn. 2 and its surrogate objective in Section 3.2 is not clear. A more straightforward approach for regression tasks would be to minimize the squared error (MSE / L2 loss) rather than the L1 loss, as the L2 loss is differentiable everywhere. 4. The true optimization challenge is that the optimization problem of StateMask is another reinforcement learning problem. The authors propose a PPO-like surrogate objective which maximizes the return under the constraint of minimizing the difference to the target policy $\bar{\pi}$. However, it seems inconsistent with the major objective, Eqn. 2.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The authors should clarify the motivation for proposing the surrogate objective. 2. In line 61, the authors claim that they provide a theoretical analysis for StateMask, but it is missing. 3. What is the definition of $\tilde{\pi}_{\theta}$ in line 195? In the previous sections, $\tilde{\pi}$ denotes the policy of StateMask instead of the integration policy $\pi$. 4. What reward function is used for the advantage function in line 201? 5. The sign in Eqn 8 should be reversed. 6. The metric "fidelity" should be well-defined in the text. 7. The authors should use more metrics to evaluate their method, because StateMask directly optimizes "fidelity", which is not fair to other methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors should refine the objective Eqn 2. Besides, the writing needs to be improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer qciU for the constructive and insightful comments. Please see our response to each of your questions below. **1. Questions about Eqn. (2)** **Q(1):** The optimal solution of Eqn. (2) is $\pi=\bar{\pi}$, failing to identify any important time steps. Please note that our goal is to find a non-trivial solution (i.e., one that masks some specific states) to Eqn. (2). The challenges of finding such a solution are explained in lines 162-176, and our approach is designed explicitly to seek these "non-trivial" solutions. To achieve this, we introduce a mask-ratio constraint (i.e., we enforce that the mask ratio is larger than a threshold $c$) and propose Eqn. (5) as our objective function. A comprehensive analysis of Eqn. (5) is available in Supp. S2. **Q(2):** StateMask's limited ability to identify important time steps is not consistent with the paper's major motivation. Instead of directly using the StateMask output $a_t^e$ to identify important time steps, we evaluate the significance of states based on the probability of the mask net output being 0, as stated in Sec. 3. We calculate an importance score for each time step in a trajectory based on the probabilities given by the mask network's output. These scores are ranked in descending order, enabling us to identify the states that contribute the most to the final reward in the fidelity test, regardless of whether the mask network's output is frequently 0. Consequently, even in situations where the ratio of time steps being masked is low (e.g., <10% in MuJoCo games), the ranking of importance scores enables us to identify the critical time steps effectively (see Figure 3). **Q(3):** Why use the $L_1$ loss instead of the $L_2$ loss in Eqn. (2)? Recall that our goal is to minimize the performance difference between the two policies $\pi$ and $\bar{\pi}$, which requires an $L_{p}$ loss to measure the performance gap. The above optimization problem will involve the gradient of $\eta(\pi)$ w.r.t. $\theta$.
However, a common solution using the standard policy gradient method [1] to optimize $\theta$ cannot guarantee a monotonic decrease of the $L_{p}$ loss. The key challenge is to derive a surrogate objective that enables a monotonic decrease during training. Indeed, whether we use the $L_1$ or $L_2$ loss is inconsequential to our algorithm, and Theorem 1 remains valid when optimizing with the $L_2$ loss. The objective function, Eqn. (6), will be identical regardless of using the $L_1$ or $L_2$ loss. Therefore, without loss of generality, we use the $L_1$ loss in Eqn. (2) to introduce the challenge and motivate our design. [1] Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992. **2. Motivation of the surrogate objective Eqn. (6) and consistency with Eqn. (2).** The motivation behind the surrogate objective Eqn. (6) is to address the challenge of obtaining a non-trivial solution for Eqn. (2) with a monotonic-decrease guarantee, as naively optimizing Eqn. (2) may not guarantee convergence. To solve Eqn. (6), we use a primal-dual method to update $\theta$ and $\lambda$ by solving Eqn. (7) and Eqn. (8). As for Eqn. (7), although it resembles PPO, it introduces a Lagrange multiplier $\lambda$. It is worth noting that when $\lambda$ is larger than 1, we are doing something like minimizing the return. In fact, $\lambda$ serves to control the performance difference between $\bar{\pi}$ and $\tilde{\pi}$. By updating both $\theta$ and $\lambda$ iteratively, we can solve Eqn. (6) and thereby monotonically decrease the main objective Eqn. (2). **3. Lack of theoretical analysis.** We apologize for any confusion caused. We delve into the theoretical analysis in detail in Sec. 3.2. Specifically, we describe the construction of the surrogate objective, which is theoretically derived to guarantee a monotonic decrease in Eqn. (2), in accordance with Lemma 1 and Theorem 1.
Furthermore, we provide a comprehensive theoretical analysis in Supp. S2, explaining why the optimization of Eqn. (5) leads to a monotonic decrease in Eqn. (2). **4. Definition of $\tilde{\pi}_\theta$ in line 195.** $\tilde{\pi}_\theta$ represents a parameterized policy of StateMask. It is essential to note that the parameterization is applied to $\tilde{\pi}$ and not to the integration policy, $\pi$. Please see lines 190-193 for more details. **5. Reward function used for the advantage function in line 201.** The reward function of the state mask is the same as the environment's reward provided to the target agent. **6. Reverse the sign in Eqn. (8).** Please note that we have negated the sign of the formula in parentheses associated with $\lambda$ when deriving Eqn. (8) from Eqn. (6). **7. The metric "fidelity" should be well-defined in the text.** We have included a detailed description of the metric "fidelity" in Supp. S5.1. **8. Need more metrics to evaluate their method, because StateMask directly optimizes "fidelity", which is not fair to other methods.** Our optimization objective aims to minimize the performance gap between the target agent and StateMask at the policy level, specifically by reducing the expected total reward difference between their respective policies. In contrast, "fidelity" is defined at the trajectory level, where we assess whether altering a consecutive series of actions within a fixed trajectory results in a significant change in the reward. Hence, StateMask does not directly optimize for "fidelity." It is worth noting that, to the best of our knowledge, the fidelity score proposed in [2] is the only quantitative metric available in our setting. Additionally, we present qualitative evaluation results in Supp. S11, where our method surpasses baseline approaches in facilitating the user's understanding of a DRL agent's policy. [2] Edge: Explaining deep reinforcement learning policies. In Proc. of NeurIPS, 2021.
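The primal-dual update the rebuttal describes, alternately solving Eqn. (7) for $\theta$ and Eqn. (8) for $\lambda$, follows the generic Lagrangian pattern below. This is a toy scalar problem of our own invention, not the paper's actual objective: we maximize $f(\theta)$ subject to $g(\theta)\leq 0$ by gradient ascent on $\theta$ and dual ascent on the multiplier $\lambda \geq 0$:

```python
# Toy primal-dual (Lagrangian) iteration standing in for Eqns. (6)-(8):
#   max_theta f(theta)  s.t.  g(theta) <= 0,
#   L(theta, lam) = f(theta) - lam * g(theta),  lam >= 0.
f = lambda th: -(th - 2.0) ** 2      # maximized at theta = 2 (unconstrained)
g = lambda th: th - 1.0              # constraint theta <= 1

theta, lam = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    grad_theta = -2.0 * (theta - 2.0) - lam     # d/dtheta [f - lam * g]
    theta += lr * grad_theta                    # primal ascent step, cf. Eqn. (7)
    lam = max(0.0, lam + lr * g(theta))         # dual ascent, lam >= 0, cf. Eqn. (8)

print(round(theta, 3), round(lam, 3))  # approaches the KKT point theta = 1, lam = 2
```

As in the rebuttal, the multiplier does the work of keeping the primal iterate near the constraint boundary: here $\lambda$ grows until it exactly offsets the unconstrained pull of $f$, mirroring how the paper's $\lambda$ controls the performance gap between $\bar{\pi}$ and $\tilde{\pi}$.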
--- Rebuttal Comment 1.1: Title: Follow up with Reviewer qciU Comment: We wish to extend our heartfelt appreciation to Reviewer qciU once more for generously offering us your remarkably insightful comments. As we approach the conclusion of the discussion phase, we take this opportunity to inquire whether there are any remaining queries you would like to share concerning our response. We are pleased to address any further concerns you may have. Furthermore, if our response has satisfactorily addressed your concerns, we kindly ask that you reconsider your score. Thank you for your time and attention. --- Rebuttal Comment 1.2: Title: Reply to authors Comment: I appreciate the response from the authors. It enhances my understanding of the challenges of the non-trivial solution and the monotonicity. 1. One concern I'd like to address is that this method does not propose a surrogate objective but changes the objective. The analysis in Theorem 1 and Lemma 2 leads to inferring a constraint $\eta(\tilde{\pi}_{old})\leq\eta(\tilde{\pi})\leq \eta(\bar{\pi})$. Eqn. (5) can be regarded as a surrogate objective of $$ \max_\theta \ |\eta(\tilde{\pi}) - \eta(\bar{\pi})|, \quad s.t. \ \eta(\tilde{\pi})\leq \eta(\bar{\pi}) $$ which is equivalent to $$ \max_\theta \ \eta(\bar{\pi}) - \eta(\tilde{\pi}), \quad s.t. \ \eta(\tilde{\pi})\leq \eta(\bar{\pi}) $$ This is different from the original problem (Eqn. (2)). This problem may converge to a different optimal point due to the absence of the absolute value. Besides, this problem has no challenge of the monotonicity guarantee stated in lines 162-176. 2. The presentation of lines 162-176 has to be refined. 1) The challenge is not differentiability or the absolute value, as mentioned in Q(3). 2) Explain why a monotonicity guarantee is required for a regression objective in RL. 3. Why is $\omega$ in Eqn. (7) a hyper-parameter while $\lambda$ in Eqn. (6) is a parameter to be optimized? 4. I cannot locate the code for optimizing Eqn. (8) in "Pong" and in the folder "perfect_game".
Please help to pinpoint the specific lines. I will consider raising my score if the authors further address these questions. --- Reply to Comment 1.2.1: Title: Reply to Reviewer qciU (Part 1) Comment: We appreciate your thoughtful feedback and your careful consideration of our work. Please see our response to each of your questions below. Regarding your first question, it is worth noting that Lemma 2 infers a constraint $\eta(\tilde{\pi}\_{\theta_{old}}) \leq \eta(\tilde{\pi}\_{\theta}) \leq 2\eta(\bar{\pi})-\eta(\tilde{\pi}\_{\theta_{old}})$, which is different from what you derived. Eqn. (5) can be regarded as a surrogate objective of $\max\_\theta \eta(\tilde{\pi}\_{\theta}), \ s.t. \ \eta(\tilde{\pi}\_{\theta}) \leq 2\eta(\bar{\pi})-\eta(\tilde{\pi}\_{\theta_{old}})$ when solved by TRPO. We have shown that a policy $\tilde{\pi}_\theta$ obtained from the optimization objective Eqn. (5) satisfies Lemma 2 and thus enables the desired monotonicity of Eqn. (2), as shown in Supp. S2. Therefore, Eqn. (5) is consistent with the original problem Eqn. (2). Regarding your second question, we would like to begin with some clarification. It is important to note that we are not asserting that the challenge pertains to the differentiability of an $L_1$ function. The $L_1$ loss is non-differentiable only at the point 0, which coincides with the optimal solution. The depiction provided in lines 163-166 (specifically, the comparison between $\eta(\pi)$ and $\eta(\bar{\pi})$) mirrors the process of taking the gradient of the $L_1$ loss at a non-zero point. We utilize the $L_1$ loss as an illustrative example to highlight that straightforwardly optimizing Eqn. (2) does not ensure a monotonic decrease in Eqn. (2). Regarding the necessity of a monotonicity guarantee: if we directly solve the optimization of Eqn. (2), as expounded in our explanation in Q(3), it entails calculating the gradient of $\eta(\pi\_\theta)$ with respect to $\theta$.
Our only recourse for estimating $\nabla\_\theta \eta(\pi_\theta)$ is the vanilla policy gradient method $\nabla\_\theta \eta(\pi_\theta) = \mathbb{E}\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta (a_t \mid s_t)\, \hat{A}_t\right]$ [1], where $\hat{A}_t$ is the estimate of the advantage function at time $t$. Nevertheless, this approach is widely regarded as ill-suited for most problems due to its elevated sample complexity [2,3]. Furthermore, selecting an appropriate step size (i.e., learning rate) that remains effective throughout the optimization process becomes even more challenging for the vanilla policy gradient method. To address these issues, drawing inspiration from TRPO, we have formulated an alternative surrogate objective that facilitates the monotonic decrease of Eqn. (2). We will include a discussion about the necessity of monotonicity in our next version. Thank you for your invaluable suggestion. [1] Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992. [2] Optimizing expectations: From deep reinforcement learning to stochastic computation graphs. Diss. UC Berkeley, 2016. [3] Trust region policy optimization. In Proc. of ICML, 2015. --- Reply to Comment 1.2.2: Title: Reply to Reviewer qciU (Part 2) Comment: Regarding your third question, we appreciate your observation that $w$ in Eqn. (7) is a hyper-parameter while $\lambda$ in Eqn. (6) is a parameter to be optimized. Eqn. (5) transforms the original problem Eqn. (2) into a **constrained** reinforcement learning problem (i.e., by constraining $\eta(\tilde{\pi})$ with an upper bound). To solve Eqn. (5), we transform the performance-bound constraint into a Lagrangian form, following the common strategy in the constrained RL area [4,5]. As such, $\lambda$ is introduced in Eqn. (6) as a Lagrange multiplier to optimize Eqn. (5). As for the sparsity constraint in Eqn. (5), we could also transform it into the Lagrangian.
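The vanilla policy-gradient (REINFORCE) estimator quoted above, evaluated on a single sampled trajectory, can be sketched as follows. The helper `logprob_grad` is hypothetical (it would come from automatic differentiation in practice), and this single-sample form is exactly what makes the estimator high-variance, as noted in the text.

```python
def reinforce_gradient(trajectory, logprob_grad):
    """Single-trajectory vanilla policy-gradient estimate:
    sum over t of grad_theta log pi_theta(a_t | s_t) * A_hat_t.

    trajectory:        list of (state, action, advantage_estimate) tuples.
    logprob_grad(s, a): hypothetical helper returning the gradient of
                        log pi_theta(a | s) w.r.t. theta, as a flat list.
    """
    total = None
    for s, a, adv in trajectory:
        # Weight each log-probability gradient by the advantage estimate.
        term = [g * adv for g in logprob_grad(s, a)]
        total = term if total is None else [x + y for x, y in zip(total, term)]
    return total
```

Averaging this quantity over many trajectories yields an unbiased estimate of the gradient, but the per-trajectory variance is what drives the elevated sample complexity discussed above.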
However, an increasing number of Lagrange multipliers would increase the complexity of the optimization [4, 6]. Since the sparsity constraint is essential for finding a non-trivial solution to Eqn. (2), we are especially interested in accurately controlling the sparsity. We adopt a strategy akin to that employed in supervised learning, incorporating a regularization loss term (e.g., $L_1$ or $L_2$ regularization): we transform the sparsity constraint into the term $w L^{MASK}_t$ in Eqn. (7) and thus set $w$ as a hyper-parameter. Additionally, to investigate the impact of $w$, we vary the hyper-parameter $w$ over {0, 1e-5, 1e-4, 1e-3, 1e-2} and conduct an ablation study in Supp. S5.3. Regarding your fourth question, we sincerely apologize for any inconvenience caused. It has come to our attention that there was a synchronization issue with the code file in the Supplementary Material. We are sorry that the code in the Supplementary Material is not up-to-date; the old version mainly investigates the alternative design mentioned in Supp. S10. We observe that the alternative design has noticeable fidelity when the agent is near-optimal but performs worse when the agent is sub-optimal. This motivated us to redesign the objective and introduce a Lagrange multiplier to solve the problem. It is worth noting that we implemented the code for optimizing $\lambda$ in two MuJoCo games, namely You-Shall-Not-Pass and Kick-And-Defend, in the old version. You can locate this code in the file ppo2_mask.py, specifically at line 755, within the "normal_form/YouShallNotPass/src" and "normal_form/KickAndDefend/src" folders. Although we are still working on cleaning the code, we are committed to providing accurate and up-to-date resources for our readers and reviewers. We have uploaded the latest version of the code file for Pong and the perfect-information extensive game under the "updated_code" folder. You can access the corrected code from the link in the Supplementary Material.
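The supervised-learning-style treatment of the sparsity constraint described above (fold it into the loss as a regularization term $w L^{MASK}$ rather than a second Lagrange multiplier) can be sketched as below. The choice of `mask_rates` and its mean as the penalty is one plausible illustration, not the paper's exact definition of $L^{MASK}_t$; all names are hypothetical.

```python
def masked_surrogate_loss(ppo_loss, mask_rates, w=1e-4):
    """Illustrative sketch: add the sparsity penalty as a regularizer.

    ppo_loss:   scalar PPO-style surrogate loss (cf. Eqn. (7)).
    mask_rates: hypothetical per-step quantities measuring how much
                masking the state mask applies; their mean stands in
                for L^MASK here (an assumption, not the paper's form).
    w:          hyper-parameter controlling sparsity, searched over
                {0, 1e-5, 1e-4, 1e-3, 1e-2} in the paper's ablation.
    """
    l_mask = sum(mask_rates) / len(mask_rates)  # mean masking penalty
    return ppo_loss + w * l_mask
```

This mirrors how $L_1$/$L_2$ weight penalties are handled in supervised learning: the constraint level is controlled by a fixed coefficient `w` rather than by a second optimized multiplier.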
For your convenience, in the "updated_code/Pong" folder, you can find the code for optimizing Eqn. (8) within ppo.py, specifically at line 257. In the "updated_code/perfect_game" folder, the code for optimizing Eqn. (8) can be located within ppo_gmax.py, at line 500. We will release the cleaned code upon publication. We deeply regret any inconvenience this has caused and appreciate your understanding and patience as we rectify this issue. Thank you for bringing this matter to our attention. [4] Constrained Reinforcement Learning Has Zero Duality Gap. In Proc. of NeurIPS, 2019. [5] On The Robustness Of Safe Reinforcement Learning Under Observational Perturbations. In Proc. of ICLR, 2023. [6] Constrained optimization and Lagrange multiplier methods. Academic press, 2014.
Summary: This submission focuses on explaining which states are important to the agent's final reward, utilizing a mask to learn and assess which actions are critical. When learning the mask, it identifies states where random actions do not affect the agent's performance. They evaluate on 10 different tasks such as Pong, some scenarios in StarCraft II, and Connect 4. After learning the mask, they provide fidelity scores and show some examples like Pong and Connect 4. With this information from the mask, they utilize it to perform adversarial attacks and correct agent errors by fine-tuning. Their work outperforms EDGE, lazy MDP, and value-max. Strengths: Significance and originality: The method can be quite ubiquitous since, as they state, it does not assume access to the agent's value function or policy network. Hence it can be utilized by multiple methods as well as by individual agents in a multi-agent environment. Figures 4 and 5 are interesting in showcasing which time steps are important based on the fidelity scores. These figures cover a one-player game and a two-player game, showcasing the differences, especially in Connect 4. Using the explanations to perform adversarial attacks and correct an agent's sub-optimal performance is interesting. In Table 1, your implementation outperforms EDGE and the others in both performance drop for adversarial attacks and performance gain in patching. Edit: I have read their rebuttal and I will change the score from a 5 to a 6. Weaknesses: Evaluation: More evaluations among other networks to see how versatile it is. Plus other ablations, such as varying the number of time steps for the input (e.g., frame stacking) to assess how the time-step prioritization is affected by the frame-stacking parameter value. Clarity: In the supplementary material, the networks are only described as CNN, LSTM, or MLP, but were they DQN, A2C, and so on?
This is important information for assessing whether your method can learn the different maskings for them. Plus it would be interesting, as another experiment, to see if DQN or Double DQN focus on different time steps. Related Works: Reference perturbation methods, because what you describe in the design rationale is very similar to what computer vision has done with perturbation methods for visual explanations via saliency maps. You even use the term "perturb," as in the second paragraph of the design rationale. For instance, include perturbation-based computer vision methods like RISE (Petsiuk, Vitali, Abir Das, and Kate Saenko. "RISE: Randomized input sampling for explanation of black-box models." arXiv preprint arXiv:1806.07421 (2018)), since it creates perturbation masks, to acknowledge that there has been work in computer vision using perturbations. Yes, you focus on the time steps, but you can mention that there has been work aiming to show which pixels affect classification. What you are using in your masking approach is still novel for reinforcement learning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: There were no questions per se, just suggestions in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: They do mention challenges with their approach, such as convergence issues, so they did address some limitations. Other negative societal impacts are not an issue for this submission, since they are working on the opposite side: they want to understand why a reinforcement learning agent's decisions are being made.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer oH2u for the constructive and insightful comments. Please see our response to each of your questions below. **1. Evaluation: More evaluations among other networks to see how versatile it is. Plus other ablations, such as varying the number of time steps for the input (e.g., frame stacking) to assess how the time-step prioritization is affected by the frame-stacking parameter value.** We appreciate the reviewer's suggestion to perform more evaluations among different networks and conduct further ablation studies. These suggestions indeed provide avenues for enhancing the comprehensiveness of our research. It is worth noting that we do not assume access to the target agent's network. We report the network structures of StateMask in Supp. S4.2, which vary from CNN to LSTM and MLP. This demonstrates the versatility of our method in accommodating diverse network structures for StateMask. We appreciate your suggestion to explore more evaluations across various networks, and will consider further investigations on other networks, such as the Transformer, to enhance the comprehensiveness of our evaluation in future research. Regarding the method of encoding the state (i.e., either considering only the current frame or stacking multiple frames), we conduct an experiment in the Pong game. Specifically, we vary the number of frames stacked for the StateMask input over {1, 2, 4} and see how the fidelity changes. We show the fidelity score comparison in Figure 3(b) of the attached PDF. We can observe that stacking two and four frames yield similar fidelity, while the fidelity of stacking only one frame is much lower. We suspect the reason is that temporal information (e.g., the movement of the ball) is needed when training the StateMask in the Pong game. In contrast, for games (e.g., Connect 4 or Tic-Tac-Toe) where temporal dependencies play a lesser role, the influence of frame stacking may be less pronounced. **2.
Clarity: In the supplementary material, the networks are only described as CNN, LSTM, or MLP, but were they DQN, A2C, and so on? This is important information for assessing whether your method can learn the different maskings for them. Plus it would be interesting, as another experiment, to see if DQN or Double DQN focus on different time steps.** Your suggestion to examine the impact of using different reinforcement learning methods like DQN or A2C to train the mask network is insightful. However, our current design does not permit the direct application of these methods, because our method relies on iteratively updating $\theta$ and $\lambda$ by solving Eqn. (7) and Eqn. (8). To utilize DQN or A2C, we have to redesign the objective function so that it directly maximizes the agent's expected total reward after applying the state mask. As an additional experiment, we implement DQN and A2C on the Pong game using the new objective function, compare them with our method, and report the fidelity scores in Figure 3(a) of the attached PDF. The visualization results in Figure 1 show that the state masks learned by DQN and A2C focus on different time steps from ours and have a lower fidelity compared to our method. **3. Related Works: Include perturbation methods such as RISE in the computer vision area.** We appreciate the reviewer's suggestion to reference perturbation methods in the field of computer vision, such as RISE [1] (i.e., a saliency map method based on randomly masking the inputs and observing the corresponding outputs). In light of your suggestion, we will certainly include a discussion of related work on perturbation methods from the computer vision area and compare StateMask with them in the next version. Thank you for bringing this to our attention. [1] RISE: Randomized input Sampling for Explanation of Black-box Models. In Proc. of BMVC, 2018.
--- Rebuttal Comment 1.1: Title: Reply to your rebuttal Comment: Thank you for providing additional clarification. It is greatly appreciated. With the additional experiments, I will raise my score from a 5 to a 6. --- Reply to Comment 1.1.1: Title: Reply to Reviewer oH2u Comment: We are genuinely thankful for your insightful comments and we are pleased to see that our efforts have positively impacted your assessment. In our forthcoming version, we plan to include perturbation methods in the field of computer vision in the related work section. Furthermore, we will discuss two alternative designs and the effect of frame stacking. Your feedback continues to be invaluable as we enhance our work.
Summary: This paper aims to explain deep RL by identifying the critical states at which the action of the policy significantly impacts the final reward. The main idea is to learn a state mask, modeled as an additional policy, to determine whether to replace the original action output with a random one while minimizing the performance difference. In practice, the authors adopt a trust-region trick to guarantee the monotonic decrease of the performance difference and optimize the objective in a PPO style. They further apply the explanation of critical states to adversarial attack and error patching, which exhibit better performance than baselines. Strengths: - The paper is clearly written and the presentation in the evaluation part is good. - The empirical results and the selected examples show the effectiveness of the proposed method for key-state explanation. - The generated explanation is easily compatible with downstream tasks, e.g., adversarial attack and defense, which lays a foundation for future work. Weaknesses: - The paper is implicitly built upon a restrictive assumption that there are some specific single timesteps/states that contribute significantly to the final reward in every episode. However, there are other cases in which a series of actions (consecutive or not) mutually influence the total reward; this is more common in complex environments but hard for this method to capture. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - When using Thm 1 to derive the objective function in Eq. (5), do you remove the first condition in Eq. (4)? - How do you choose the threshold $c$ in Eq. (5), since there is a trade-off between better explanation and smaller performance difference? - How is the fidelity score in Fig. 3 computed? The authors mention it is from [1], but in the original paper the lower score indicates the higher fidelity of explanation (the third-to-last row on page 7).
[1] Guo, Wenbo, et al. "Edge: Explaining deep reinforcement learning policies." Advances in Neural Information Processing Systems 34 (2021): 12222-12236. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer ENqW for the constructive and insightful comments. Please see our response to each of your questions below. **1. The paper is implicitly built upon a restrictive assumption that there are some specific single timesteps/states that contribute to the final reward significantly in every episode. However, there are other cases that it is a series of actions (may be consecutive or not) that mutually influences the total reward, which is more common in complex environments but hard to be captured by this method.** The reviewer suggested that our paper is based on an assumption about the significant influence of specific timesteps or states on the final reward. However, we would like to clarify that our method, StateMask, does not inherently depend on such an assumption. StateMask is designed to identify not only individual states but also multiple states that contribute towards the final reward. It is capable of tracing back to multiple actions that mutually influence the reward, regardless of whether they are consecutive. Evidence of this can be seen in Figure 5 of our paper, where StateMask identifies two important non-consecutive action series in the complex game Connect 4. The first critical action series marked by our method is the initial move. This move, as explained by [1], sets the foundation for future gameplay. Furthermore, StateMask highlights the last two time steps as significant contributors to the agent's victory. The second-to-last step stands out as pivotal, as the black player has a winning strategy no matter which column the blue player places their piece in next. This showcases StateMask's effectiveness in capturing joint contributions of actions in complex environments. Thus, we believe our method sufficiently addresses the scenario posed by the reviewer and is versatile enough to handle complex environments where multiple action series can mutually affect the total reward.
[1] A knowledge-based approach of Connect-Four. Journal of the International Computer Games Association, 1988. **2. When using Theorem 1 to derive the objective function in Eqn. (5), do you remove the first condition in Eqn. (4)?** No, we do not remove the first condition in Eqn. (4) when using Theorem 1 to derive the objective function in Eqn. (5). In fact, optimizing Eqn. (5) is equivalent to optimizing Eqn. (4). We use the same trick as TRPO [2] to deal with the first condition in Eqn. (4), transforming it into maximizing the local approximation $L\_{\tilde{\pi}\_{\theta\_{old}}}(\widetilde{\pi}\_{\theta})$ with a trust-region constraint. We recommend referring to Supplement S2, where we give a detailed analysis to clarify this point. [2] Trust region policy optimization. In Proc. of ICML, 2015. **3. How do you choose the threshold $c$ in Eqn. (5), since there is a trade-off between better explanation and smaller performance difference?** We appreciate your inquiry about the threshold $c$ in Eqn. (5). While we do not explicitly choose the threshold $c$ in Eqn. (5), we transform the constraint $E\_{a^e \sim \tilde{\pi}\_{\theta}}[a^e] \geq c$ into an additional loss term $L^{mask}$ with coefficient $w$ when optimizing StateMask. Please refer to Supp. S3 for the detailed derivation. Then $w$ is utilized to regulate the proportion of the mask. As stated in Supp. S4.2, we search $w$ over {0, 1e-5, 1e-4, 1e-3, 1e-2} for each environment. Besides, we also study the sensitivity of $w$ in Supp. S5.3. We observe that as the hyperparameter $w$ varies, the fidelity scores of our explanation method do not change much. This suggests that our model is robust to the choice of $w$, and thus manages the trade-off effectively. **4. How is the fidelity score in Figure 3 computed?
The authors mention it is from [1] but in the original paper the lower score indicates the higher fidelity of explanation (the third-to-last row on page 7).** We appreciate your query about the fidelity score presented in Figure 3 and apologize if there was any confusion due to the space limit. In our work, we use a similar but slightly modified approach from [3] to calculate the fidelity scores. Unlike [3], we introduce a negative sign, so a higher score represents higher fidelity. We make this adjustment for ease of presentation. The details of our fidelity test can be found in Supp. S5.1. We believe this comprehensive description will provide clarity on the metric we used. [3] Edge: Explaining deep reinforcement learning policies. In Proc. of NeurIPS, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Most of my concerns have been addressed, and I have some suggestions on Fig. 4. I roughly agree that StateMask is capable of capturing non-consecutive key actions, but the illustrations in Fig. 4 are not very convincing, as identifying the critical steps in Fig. 4 may not be very hard. So you could probably add some comparison with baselines (e.g., showing that other baselines cannot identify the critical time steps) or move some illustrations on complex tasks (e.g., figures in Supplementary S6) to the main text in revision. --- Reply to Comment 1.1.1: Title: Reply to Reviewer ENqW Comment: We appreciate your continued engagement with our manuscript and the acknowledgment of our efforts in addressing your concerns. Your insights are valuable to us in ensuring the quality of our work. Although we are unable to submit further revisions at this stage, your suggestions will undoubtedly be incorporated into the subsequent version. We will move some illustrations on complex tasks (e.g., DouDizhu) from Supplement S6 to the main text and include visualization comparisons with baselines in the Supplementary Material in the revised version.
We are grateful for your constructive feedback.
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to express our sincere gratitude for your thoughtful and constructive feedback on our manuscript. Your insights and suggestions have significantly enriched the quality of our work, and we appreciate the time and effort you have dedicated to reviewing our paper. In response to your valuable recommendations, we have diligently incorporated additional experiments that align with your suggestions. Following the suggestion of Reviewer oH2u, we further investigate two alternative designs, DQN and A2C, as well as the effect of frame stacking in the Pong game. In response to Reviewer AA7o, we conduct additional experiments on a shortest-path-finding problem, MiniGrid-Empty-6x6-v0, and visualize the identified critical path. We have thoughtfully documented these experiments and their outcomes in the attached PDF document, which we believe adds substantial value to the overall contribution of our research. Your input has been instrumental in shaping the evolution of our paper, and we hope that the additional experiments and results we have provided effectively address your concerns and contribute positively to the overall understanding of our methodology. Once again, we extend our heartfelt appreciation for your invaluable feedback, which has undoubtedly contributed to the advancement of our research. Pdf: /pdf/d45ef7f1525ab4a44c9a36fd56539543f65d92cf.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy
Accept (poster)
Summary: This paper introduces a framework for analyzing mixture distributions via $f$-DP and its tradeoff function. Additionally, the paper leverages this framework for improving bounds for shuffling mechanisms in DP, and uses the same framework to prove a statement about the privacy of a single step of gradient descent from random initialization. Strengths: This paper addresses a fundamental problem in DP: computing effective estimates on additive mixtures of random variables. Methods of analysis for these mixtures are, to date, reasonably ad-hoc and certainly not known to be tight. This paper presents a clean unified approach, making connections to more abstract and clean mathematically-posed problems (IMO, this is the manner in which DP should imagine evolving itself, away from its CS + statistics formulations, which are often focused on data structures and properties of particular random variables). Not only does this paper formulate a nice presentation, but to my knowledge it improves existing statements through this framework. Clear accept. Weaknesses: Writing: some nits on presentation that came up from a stream-of-consciousness read, below. Not necessarily all weaknesses, just comments. * I would prefer a slight reordering of the content. AFAICT, the main logical flow is: Lemma 4.1 -> Prop 4.3 -> Thm 3.1, with the rest of the stuff more or less as add-ons (though FWIW I have not yet read the section on advanced concavity). I would really prefer this logical flow to be more clear. Ideally, for me, we would present the proofs straight through, beginning with the lemmas. However, I get that this is CS and we don't really do that here. I think a reasonable option is to sketch the proof in the main body, or discuss that the major results seem to follow primarily from the fairly simple Lemma 4.1, plus some calculations, given the reduction introduced by [20]. This logical flow is simple and quite nice, and IMO it should be highlighted on its own terms.
* Editorially, I would suggest pushing the DP-GD stuff into an appendix. I think it confuses the message, and it's not clear either how extensible it is (presumably it breaks after one step due to the injected dependence? Though I could imagine conditioning on stuff and invoking nested mixtures, this may be prohibitively difficult since IIUC Gaussian-ness will go away). * Confusion on 'the outputs of two neighboring datasets are post-processing of r.v.s $X$ and $Y$' (beginning of section 4). Going back to the reference, it seems like the technical meat there is effectively a good reduction to scalar-valued random variables (if I read correctly, the content of their Thm 3.1)--presumably that is what is invoked here? But it reads as if the development were restricted to scalar-valued mechanisms (which I assume it is not). * Setup to 4.1: the way that $X$, $Y$, and $I$ are coupled is unstated. Meaning: in the setup here, it is possible that $I$ is independent of $X$ and $Y$ (in which case $X|I$ and $Y|I$ would just be $X$ and $Y$). I guess the coupling is something like, e.g. $X$ is coupled with $I$ such that $X|\{I=i\} \sim p_i$? But if so, I don't see this stated anywhere before the proof of Lemma 4.1. * I know this is not how CS papers are generally written, but (being an old mathematician myself) I would really personally prefer to read this paper straight through and without applications, building up to an improved analysis (which I assume is Thm 3.1), then closing with discussion of relation to prior results and tightness. * Speaking of relation to prior results, and putting my CS hat back on: this could really use some deeper dives / commentary. I would recommend cutting the DP-GD stuff (or pushing to an appendix) and extending the discussion of relationship to prior bounds. For example: there is a table showing $\delta$s for varying $\epsilon$s at a fixed $\epsilon_0$. 
What happens when we vary this $\epsilon_0$ over a wide range--are the results here _always_ better (and qualitatively similar)? Or are there some regimes in which previous analyses were tighter? * Following the citations of [20] leads me to https://arxiv.org/pdf/2304.05007.pdf. Glancing through this paper, it seems to me that one major difference with the present work is that this work claims improved amplification relative to [20] _independent_ of mechanism structure, whereas that work improves their bounds in the case of some particular mechanisms (and their worst-case bounds, for general mechanisms, are identical). Is this correct? Can you speak to this relationship? * One confusing thing on a read-through: it is difficult to find the proof of proposition 4.3. This proof can be found in the 'technical details for section 3.1', which is not where a reader like me would look right away (given, in particular, that there is an appendix section on proofs for section 4). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Simply a summary of some of my stream-of-consciousness comments above: * Could you comment more extensively on relation to prior results? Is the analysis here _always_ stronger? * Similarly, could you comment on the relation to https://arxiv.org/pdf/2304.05007.pdf? I listed my understanding of the way these relate above, but looking for some clarity here. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Tightness or lack thereof of particular results was discussed. Negative social impacts not immediately applicable. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your extremely positive feedback and your valuable comments. I have organized the responses to the weaknesses and questions as follows, corresponding to the 8 items in the weaknesses. The reference numbers provided here align with the supplementary material, which slightly differs from the 9-page paper. 1. We have taken your advice into account concerning the structure of our paper. Initially, we placed the theoretical part first, as you mentioned, but we eventually moved it after the applications section in the submitted version. This adjustment is aimed at enhancing the paper's accessibility to a wider audience, particularly NeurIPS readers, given the statistical nature of the theory. After careful consideration, we have opted to retain the current structure with certain modifications. We have clarified the proof of Theorem 3.1, which relies on both Proposition 4.3 (implied by Lemma 4.1) and Proposition 4.8 (a result of Lemma 4.7). In the submitted version, a proof sketch is implicitly contained at the beginning of Section 4, which has also been highlighted in the revised version. Moreover, a detailed citation and some discussion regarding the post-processing in [23] have also been provided in Section 4. 2. We sincerely value your feedback regarding the DP-GD section (Section 3.2). In the submitted version, we show that it can be applied to noiseless linear models with least squares loss, which are also used in practice (cf. https://arxiv.org/abs/2006.08212). We acknowledge the need to address more general models and the difficulty of specifying the sensitivity $\mu_I$ with given $I$ for general models and loss functions. While we have decided to retain this section, we have included more discussion of the extensions and restrictions in the revised version. 3. Both $X$ and $Y$ are 2-dimensional random vectors, representing mixtures of binomial distributions.
As you indicated, the primary contribution of [23] is that they propose a promising post-processing procedure to convert the multi-dimensional shuffled model to 2-dimensional random vectors, which makes the likelihood ratio easy to verify. 4. We have addressed the potentially confusing notation $(X|I, I)$ and introduced further clarification: $(X|I, I)$ means we observe two random variables, $X|I$ and $I$, where $X$ follows a mixture distribution $\sum_{i=1}^m w_i p_i$, and $I$ represents the information of indices with $\mathbb{P}[I=i] = w_i$. $X|I$ denotes $X|I=i \sim p_i$. As releasing the indices exposes more information, the pair $((X|I, I), (Y|I, I))$ is more separable than $(X, Y)$ without knowing the indices, which motivates Lemma 4.1. 5. We have introduced an explanatory paragraph regarding tightness immediately following Theorem 3.1: The bound in Theorem 3.1 is near-optimal in theory, and it represents a significant enhancement compared to [23], as demonstrated in Table 1 and Figure 1. In fact, the proof of Theorem 3.1 is based on a post-processing procedure in [23], joint concavity (Proposition 4.3), and advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as shown by Theorem 5.2 and Theorem 5.3 in [23]. Proposition 4.3 is also sharp, as demonstrated in Section 4. Based on the proof of Proposition 4.8 in Section B.1.2, the advanced joint concavity used in this example is almost the tightest closed-form bound that can be derived. Compared to [23], the main advantage of using $f$-DP (Proposition 4.3) is that we avoid the use of Hoeffding's inequality, which is adopted in [22, 23] and leads to loose bounds, to bound the mixture of binomial distributions. Moreover, Theorem 3.2 in [23] holds with an assumption on the local privacy budget $\epsilon_0$, which is removed by using $f$-DP in our paper.
Consequently, our bound is always better than the theoretical bound in Theorem 3.2 in [23], as Proposition 4.3 is tight and an $(\epsilon,\delta)$-DP version of Proposition 4.8 was also adopted in [23]. 6. In line with Item 5, we have discussed the tightness of the shuffling models in the revised version. Additionally, we have included Figure 6 in the attached pdf, which varies $\epsilon_0$, for your reference. Concerning the DP-GD material, we discussed the application and extensions from noiseless linear models as stated in Item 2. Since the revised paper does not exceed the page limit for the camera-ready version, we decided to keep this content. 7. Thank you for introducing the concurrent work https://arxiv.org/pdf/2304.05007.pdf. After reading through their paper, as you mentioned, one difference is that they analyze specific local randomizers satisfying certain assumptions. Their tightening comes from designing a tighter post-processing procedure (in comparison to [23]) that reduces the coupled distributions to a mixture of binomial noise for specific local randomizers. Their analysis of the mixture distribution similarly rests on Hoeffding's inequality, which, as discussed in Item 5, is refined by our Proposition 4.3. Combining their novel post-processing with our tight analysis of mixture distributions could potentially yield tighter bounds, an exciting avenue to explore. 8. We have updated the title of Section B.1 to "Proofs of Proposition 4.3, Proposition 4.8, and Theorem 3.1." Additionally, we have divided Section B.1.2 into two separate sections, one focusing on the proof of Proposition 4.8 and the other on the proof of Theorem 3.1. This reorganization aims to improve clarity and facilitate better understanding for the readers. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. This all makes sense. I will keep my score of accept.
--- Reply to Comment 1.1.1: Comment: We are grateful for your valuable comments and comprehensive suggestions once more. We will thoroughly reconsider your insights regarding the paper's structure as we finalize the camera-ready version.
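The coupling questioned in the first review (and confirmed in the rebuttal: $X|\{I=i\} \sim p_i$ with $\mathbb{P}[I=i]=w_i$) can be made concrete with a minimal sampling sketch. The weights and Gaussian components below are hypothetical, chosen only to illustrate the difference between observing $(X|I, I)$ and observing the marginal mixture $X$:

```python
import random

# Hypothetical mixture for illustration: P[I = i] = w[i], and X | {I = i} ~ N(means[i], 1).
w = [0.3, 0.7]
means = [0.0, 4.0]

def sample_with_index():
    """Draw (X, I) jointly: first the index I, then X from component p_I."""
    i = random.choices(range(len(w)), weights=w)[0]
    x = random.gauss(means[i], 1.0)
    return x, i

def sample_marginal():
    """Draw only X ~ sum_i w_i p_i, discarding the index."""
    x, _ = sample_with_index()
    return x

random.seed(0)
pairs = [sample_with_index() for _ in range(10_000)]
# Conditioned on I = 1, X concentrates near means[1]; the marginal mixes both components.
mean_given_1 = sum(x for x, i in pairs if i == 1) / sum(1 for _, i in pairs if i == 1)
```

Conditioning on the released index pins $X$ to a single component, which is exactly why the pair $((X|I,I),(Y|I,I))$ is easier to distinguish than the marginal pair $(X,Y)$.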
Summary: One of the main challenges the differential privacy framework faces these days is the gap between the variety of randomization techniques applied in the machine learning community for various reasons, and our limited capability to prove the privacy amplification they entail. Primary examples include random initialization and shuffling techniques. The authors of this paper leverage the known connection between DP and hypothesis testing, specifically relying on the newly presented notion of f-DP, to transition the analysis of the effect of these mechanisms from the domain of privacy loss distributions to the domain of hypothesis testing, where they prove several key results. Combining these results with the known implications between f-DP and DP, they provide tighter bounds for privacy amplification by shuffling and new results for privacy amplification by random initialization (for the limited setting of one gradient step). Strengths: This paper partially fills a long-standing gap in our understanding of the way privacy is improved by introducing various random steps into the learning algorithm. Unlike most previous works, the authors use the newly presented f-DP definition and the known implications between it and the DP definition to analyze the effect of these randomization techniques on the trade-off function. The contribution of this paper extends beyond its results, as the proof techniques presented in it might be used for the analysis of other potential amplification techniques as well. Weaknesses: At first glance, I got the impression that the results of this paper aim to provide guarantees using a separate notion of privacy, which cannot be compared to DP. This is indeed not the case, thanks to the known implications between DP and f-DP, but I found the current presentation somewhat confusing regarding this point, and I recommend the authors clarify it in the final version.
I found some of the choices made in the numerical evaluations presented in Figure 1 to be not so clear. First of all, the sample size was chosen to be 1,000 while the reference paper used 10,000, and $\delta$ was chosen as $1/n$ while it is often expected to be of the order $o(1/n)$. A more substantial issue with the comparison presented in Figure 1 has to do with the presented baseline. If I understand correctly, the baseline results were chosen to be those of the closed form presented in [20], but that work contains tighter results, presented in Theorem III2, which can be numerically evaluated and should be presented as the more relevant comparison. Minor comments: * In line 104, the inequality is a functional inequality, but this notion was not presented before, so the notation is somewhat confusing. ======= **Edit after rebuttal discussion:** The authors' response satisfied my remaining concerns. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I will appreciate some clarification regarding the choices made in the evaluation presented in Figure 1, and the chosen baseline. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your extremely positive rating and valuable comments. Thanks to your feedback, we revisited the numerical results of Feldman et al. (2023) and compared them with our own. The comparison showed that our methods outperformed their numerical upper bound. Additionally, we conducted a comparison with their numerical lower bound, which, to the best of our understanding based on their code, is obtained using binary search for privacy auditing. The results demonstrated that our bound is near-optimal. The reference numbers here are aligned with the supplementary materials. Below, we provide detailed responses. Responses to the weaknesses: 1. The choices in Figure 1. We initially used a sample size of $n=1000$ and set $\delta = 1/n$ as a simple example to demonstrate the sharpness of our $f$-DP bound. In response to the reviewer's suggestion, we have now updated Figure 1 and Table 1 with $n=10000$ and $\delta = o(1/n)$, which is in line with the existing literature [23]. Please refer to the attached PDF for both the updated figure (Figure 5) and the new tables (Tables 3&4). Here, we display a simplified table with $\epsilon_0 = \log\left(\frac{n}{8\log (2/\delta)} - 1 \right) = 4.444$, which is the critical value required by Theorem 3.2 of Feldman et al. (2023) with $\delta = 10^{-6}$. It is worth noting that this assumption, caused by Hoeffding's inequality, is removed in our $f$-DP analysis, as we utilize Lemma 4.1 instead of Hoeffding's inequality. As seen in the updated table, our bound still significantly outperforms previous results. Table 3 in the attached pdf: $\epsilon$: 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | 1.1 $\delta$ in [23]: 0.9494 | 0.3764 | 0.1038 | 0.0181 | 0.0018 | $8 \times 10^{-5}$ | $2 \times 10^{-6}$ $\delta_{f\text{-DP}}$ (ours): $3 \times 10^{-6}$ | $10^{-7}$ | $4 \times 10^{-9}$ | $9 \times 10^{-11}$ | $2 \times 10^{-12}$ | $2 \times 10^{-14}$ | $3 \times 10^{-16}$ 2. Thank you for pointing out the minor error in line 104.
The functional inequality means the inequality holds pointwise, which is pointed out in the revised version. Responses to the weaknesses: 3. Choosing the numerical upper bound in [23] as the baseline. In the submitted version of our paper, we solely compared our results with the theoretical upper bound presented in [23]. However, following the reviewer's suggestion, we reevaluated the numerical results in [23] and conducted a more comprehensive comparison with their work. For this comparison, we selected $n=10000$, and $\epsilon_0$ was set to $4.444$, aligned with Item 1. The results of this comparison are listed in Table 4 in the attached PDF, and we list the simplified version below. It is important to note that for very small values of $\delta$, the numerical results in [23] remain invalid due to their assumptions on $\epsilon_0$ in their Theorem 3.2. Additionally, we also compared our results with the numerical lower bound generated by inference attacks in [23]. This comparison further substantiates that our $f$-DP analysis is indeed near-optimal. From a theoretical standpoint, this near-optimality can be interpreted by considering the basis of our proofs. The proof of Theorem 3.1 relies on a post-processing procedure in [23], the joint concavity (Lemma 4.1 and Proposition 4.3), and the advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as demonstrated by Theorem 5.2 and Theorem 5.3 in [23]. Proposition 4.3 also exhibits sharpness, as highlighted in Section 4. Furthermore, based on the proof of Proposition 4.8 in Section B.1.2, the advanced joint concavity utilized in this example is nearly the tightest closed-form bound attainable.
Table 4 in the attached pdf: $\delta$: $5 \times 10^{-5}| 3 \times 10^{-6} | 10^{-7}| 4 \times 10^{-9}| 9 \times 10^{-11}| 2 \times 10^{-12}| 2 \times 10^{-14}$ $\epsilon_{fdp}$ (ours): $0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0$ Numerical $\epsilon$ upper bound in [23]: $1.014 | 1.085 | \epsilon_0 | \epsilon_0 | \epsilon_0 | \epsilon_0 | \epsilon_0$ Numerical $\epsilon$ lower bound: $0.369 | 0.470 | 0.575 | 0.664 | 0.758 | 0.845| 0.944$ --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and updates which satisfied my remaining concerns, and apologize for the late response. --- Reply to Comment 1.1.1: Comment: Thank you once more for your profoundly insightful feedback, particularly for prompting us to reevaluate the numerical findings presented by Feldman et al.
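The $\delta$ values traded against $\epsilon$ in the tables above come from the primal-dual conversion of a trade-off function. As a hedged illustration of that conversion step only (not the paper's bound), the sketch below uses the Gaussian trade-off function $G_\mu$, whose dual has a standard closed form in the $f$-DP literature:

```python
from math import erf, exp, sqrt

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def delta_from_gdp(eps, mu):
    """Smallest delta such that mu-GDP implies (eps, delta)-DP.

    Closed-form dual of the Gaussian trade-off function G_mu (Dong, Roth & Su);
    it stands in here for the paper's own trade-off function, which is not
    reproduced in this thread.
    """
    return Phi(-eps / mu + mu / 2.0) - exp(eps) * Phi(-eps / mu - mu / 2.0)

# delta decays rapidly as eps grows, mirroring the shape of Tables 3 and 4 above.
deltas = [delta_from_gdp(e / 10.0, 1.0) for e in range(5, 12)]  # eps = 0.5 ... 1.1
```

For example, `delta_from_gdp(1.0, 1.0)` evaluates to roughly 0.127, and $\delta$ shrinks steeply with each increment of $\epsilon$, which is why small changes in $\epsilon$ translate into order-of-magnitude changes in $\delta$ in the tables.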
Summary: Randomization is an essential tool in deriving differentially private algorithms. However, sophisticated randomization techniques such as shuffling induce complicated distributions on outputs, making analysis of the privacy loss difficult. The paper uses the framework of f-differential privacy to provide a tighter analysis of the privacy guarantees of shuffled DP-SGD than existing methods do. Additionally, the authors show that randomized initialization can beneficially improve privacy guarantees for one-step problems. Strengths: - The closed-form bounds for shuffled DP-SGD seem particularly important, even if the bounds are complicated. Table 1 does a good job of convincing the reader that the obtained bounds are significantly better than existing ones. - Moreover, the authors present some useful results for studying trade-off functions in the f-DP framework. These will likely be useful for future investigations. Weaknesses: - Results for random initialization for one-step SGD do not seem particularly useful? In particular, this contribution seems like a bit of a toy problem. - Some demonstration of the improved bounds for Shuffle-SGD would be useful to the reader. In particular, it would be convincing if there were some simple learning task on which the accuracy of the model produced by shuffle-SGD (under the new analysis) significantly outperformed the model produced under the old analysis. This should happen for any experiment, as the privacy budget savings seem to be significant per Table 1. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors fairly discuss the limitations of their work, in particular their intentions to study multi-step SGD with random initialization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive comments and acknowledgment of the novelty of our results. In response to the identified weaknesses, we provide the following explanations: 1. Response to the first item. In Section 3.2, we applied Theorem 3.5 to analyze privacy amplification in noiseless 1-dimensional linear models, where the sensitivity of the gradient $\mu_I$ with a given initialization $I$ can be specified. It is worth noting that similar results are applicable to multidimensional linear models, where $y_i = a^T x_i$ and $x_ix_i^T = \mathbb{I}$ with $\mathbb{I}$ being the identity matrix. Noiseless linear models find relevance in various applications, as exemplified in the work by Berthier et al. (2020) titled "Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model." For more general applications, such as general convex loss functions, deriving $\mu_I$ becomes more intricate and deserves further investigation, which we plan to address in our future study on the privacy of multi-step DP-SGD. Overall, the DP-GD part is not as promising as our results on shuffling models but it is still novel with applications to specific examples. 2. Response to the second item. We are grateful for your valuable suggestions regarding the experiments. As mentioned by Feldman et al. (2020), it is indeed possible to analyze the privacy of shuffled DP-SGD when the number of iterations is given. However, despite extensive literature review on shuffling models, we have not come across any experimental results involving running shuffled DP-SGD. We have compiled a list of related papers, including a survey paper, but due to the constraints of time for this rebuttal, we intend to present our experimental results for shuffled DP-SGD in a subsequent study. 
To make our theory more convincing, we added one paragraph in Section 3.1 regarding the tightness of Theorem 3.1, as follows: The bound in Theorem 3.1 is near-optimal in theory, and it represents a significant enhancement compared to [23], as demonstrated in Table 1 and Figure 1. In fact, the proof of Theorem 3.1 is based on a post-processing procedure in [23], joint concavity (Proposition 4.3), and advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as shown by Theorem 5.2 and Theorem 5.3 in [23]. Proposition 4.3 is also sharp, as demonstrated in Section 4. Based on the proof of Proposition 4.8 in Section B.1.2, the advanced joint concavity used in this example is almost the tightest closed-form bound that can be derived. Compared to [23], the main advantage of using $f$-DP (Proposition 4.3) is that we avoid the use of Hoeffding's inequality, which is adopted in [22, 23] and leads to loose bounds, to bound the mixture of binomial distributions. Moreover, Theorem 3.2 in [23] holds with an assumption on the local privacy budget $\epsilon_0$, which is removed by using $f$-DP in our paper. Besides, we compared our upper bound with the numerical lower bound obtained by inference attacks, which shows that our bound is near-optimal, as shown in Table 4 in the attached pdf. The related papers mentioned above: * https://arxiv.org/pdf/2208.04591.pdf * https://arxiv.org/pdf/2012.12803.pdf * https://arxiv.org/pdf/2107.11839.pdf * https://arxiv.org/pdf/2303.07160.pdf * https://arxiv.org/abs/2105.05180 * http://proceedings.mlr.press/v139/ghazi21a/ghazi21a.pdf
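The rebuttal's claim that Hoeffding-type bounds on binomial tails are loose can be checked directly. The sketch below (with arbitrary illustrative parameters $n$, $p$, $k$, unrelated to the paper's mechanisms) compares Hoeffding's bound with the exact binomial tail:

```python
from math import comb, exp

def binom_tail(n, p, k):
    """Exact P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def hoeffding_tail(n, p, k):
    """Hoeffding's bound P[X >= k] <= exp(-2 (k - n p)^2 / n), valid for k >= n p."""
    return exp(-2.0 * (k - n * p) ** 2 / n)

# Arbitrary illustrative parameters: n = 1000 fair coin flips, threshold k = 600.
n, p, k = 1000, 0.5, 600
exact = binom_tail(n, p, k)
bound = hoeffding_tail(n, p, k)
```

The exact tail is an order of magnitude below Hoeffding's `exp(-20)` at these parameters, consistent with the rebuttal's point that replacing Hoeffding-based estimates by exact (or $f$-DP) computations tightens the resulting privacy bounds.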
Summary: This paper studies an important problem in DP: privacy quantification based on a mixture of randomness. Based on f-DP, the authors point out the joint concavity of the trade-off function. Two potential examples are proposed, including the shuffling model and privacy amplification from random initialization. Strengths: + The problem studied is important to the privacy analyses of many applications. In particular, I really like the idea of considering the privacy amplification from random initialization. + The introduction and the potential limitations are nicely written. The motivation is clear. + The improvement upon the privacy analysis of the shuffling model seems interesting. Weaknesses: - Though I really like the idea of taking the randomness of initialization into account and studying its privacy implications, the results presented in Section 3.2 are not convincing. The claimed simulatable f-DP guarantees seem to be instance-based, or with respect to a given pair of adjacent datasets $D_0$ and $D_1$. Moreover, even for the particular instance-based guarantee, there is no analysis provided, such as a high-confidence bound, that would allow us to really apply this bound to produce a rigorous privacy guarantee. Moreover (please correct me if I get it wrong), based on my understanding, applying the joint concavity, Theorem 3.5 is essentially determined by the average case of the "local" sensitivity given the random initialization. So, I guess the current studies cannot be generalized to the standard worst-case DP guarantee. - The notations are confusing. For example, I did not find the formal definition of $(\theta(D)|I,I)$ in Section 3.2. My best guess is that there are two random sources? But why must we assume the noise and random initialization have the same distribution? - Section 3.1 lacks theoretical analysis of how much tighter the f-DP bound given in Theorem 3.1 is, compared to previous works.
Though the empirical improvement in the $\delta$ numbers seems significant, given that the dependence of $\epsilon$ on $\delta$ is logarithmic, it is not very clear whether such improvement is general. Another missing issue is the explanation of the computational accuracy $10^{-6}$ in Table 1. Where does this restriction come from? Moreover, I think one major motivation for introducing f-DP in [14] is tighter composition. The authors may also want to take this into consideration. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How can one really apply the privacy amplification results in 3.2 under the regular input-independent f-DP scenario? 2. What is the fundamental challenge in generalizing the results to the subsampling case? I think subsampled f-DP is already solved in "Deep Learning with Gaussian Differential Privacy" (https://arxiv.org/abs/1911.11607). 3. For the theoretical analysis, can you characterize the improvement compared to previous works? 4. It would be more helpful if the authors could elaborate on the background of the shuffling model, at least why the output distributions from two adjacent datasets are of the form described at the beginning of Section 4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: ## Update after reading the authors' responses: I summarize my concerns and suggestions for the authors to improve this paper: 1. As I said, I always like the idea of privacy amplification from random initialization, and this is an important open question widely recognized in the DP community.
But the authors' results on this part, in my opinion, are trivial given that they can only study the closed-form iterate distribution for the first round, and thus negligible amplification in DP-SGD, and the authors agreed to present it as a minor contribution in the revision. 2. I think some claims in the paper are overblown and the limitations of the proposed methods are not properly discussed. At least to me, the joint convexity result for the trade-off function is not that surprising given that Poisson subsampling has already been studied in Gaussian DP. The authors also agreed that their improvement over the subsampling case is limited, either for i.i.d. sampling or with a fixed batch size. 3. For the main contribution to the shuffling model, I never say that there is no novelty in this paper. I agree that the paper presents tighter bounds in some special cases, for example with additional assumptions on the ordering of trade-off functions or special mixture weights. But for the shuffling model, the advantage is only measured empirically, without a clear asymptotic analysis. As I mentioned, a small constant improvement in $\epsilon$ can lead to an exponential improvement in $\delta$. The authors' response on this part is somewhat double talk: they constantly say they simulate it and it is better than prior works. But my question is always how much better it is, and the authors did not directly give an answer showing an analytical bound on the improvement. Anyway, I suggest the authors clearly discuss both the advantages and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper and insightful comments. We have carefully considered the raised questions and weaknesses and offer the following responses and modifications. The reference numbers here are aligned with the supplementary materials. Responses to weaknesses: 1. The first item. Section 3.2 is convincing, but it has some restrictions, and in the revised version, we have added one paragraph discussing the extension of Theorem 3.5 to the worst case and its restrictions: In Theorem 3.5, given the initialization $I$, the sensitivity of the gradient $\mu_I = \mu_I(D_0, D_1)$ depends on the choice of the two neighboring datasets $D_0$ and $D_1$. To extend this to the worst case, we consider the noiseless linear models with least squares loss (Examples 3.3 and 3.4), which are also used in practice (https://arxiv.org/abs/2006.08212). In fact, $\mu_I (D_0,D_1)\equiv a - I$ when $D_1$ is constructed by deleting an $\bf{arbitrary}$ element of $D_0$. For more complicated models, such as those with general convex loss functions, the analysis of $\mu_I$ is involved and will be addressed in our future study of DP-SGD. As discussed at the end of this section, even for this simplest linear model, we observe privacy amplification while Rényi DP fails to account for the privacy amplification. Overall, the DP-GD part is not as promising as our results on shuffling models, but it is still novel and convincing with specific applications. 2. The second item. We acknowledge the potential misinterpretation of the notation and have added an explanation for this notation in Section 3 and for similar notations in Section 4. Specifically, $(\theta(D)|I, I)$ denotes the observation of both $\theta(D)|I=i\sim\mathcal{N}(\mu_i,1)$ and the initialization (index) $I$. This is a relaxation of observing the mixture distribution $\theta(D)$ (a mixture of $\mathcal{N}(\mu_i,1)$, as stated in Lines 167-168) without knowing the information of the index $I$.
As observing the index exposes more information, distinguishing $\theta(D_0)$ and $\theta(D_1)$ is harder than distinguishing $(\theta(D_0)|I, I)$ and $(\theta(D_1)|I, I)$, which motivates Lemma 4.1 and Theorem 3.5. Releasing the index is essential to make the mixture distributions distinguishable (e.g., Example F.4 is not distinguishable without releasing indices). Moreover, we do not assume that $\theta(D)|I$ has the same distribution as $I$, although the initialization is usually Gaussian in practice. The expectation in Theorem 3.5 is taken w.r.t. $I$, and $I$ can follow any distribution. 3. The first question of the third item. The tightness of Theorem 3.1 was discussed in Section 4 in the submitted version, and we have added one more paragraph in Section 3.1 discussing the tightness in theory in the revised version. The bound is near-optimal in theory, and it represents a significant enhancement compared to [23]. The proof of Theorem 3.1 is based on a post-processing procedure in [23], the joint concavity (Proposition 4.3), and the advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as shown by Theorem 5.2 and Theorem 5.3 in [23]. Proposition 4.3 is also sharp, as demonstrated in Section 4. Based on the proof of Proposition 4.8 in Section B.1.2 before line 542, the advanced joint concavity used in this example is almost the tightest closed-form bound that can be derived. Compared to [23], the main advantage of using $f$-DP (Proposition 4.3) is that we avoid the use of Hoeffding's inequality and the Chernoff bound to bound the mixture of binomial distributions, which are adopted in [22, 23] and lead to loose bounds. Moreover, Theorem 3.2 in [23] holds with an assumption on the local privacy budget $\epsilon_0$, which is removed by using $f$-DP in our paper.
Additionally, based on Tables 3 and 4 in the attached pdf, our bound is near-optimal as it is close to the numerical lower bound computed by privacy auditing. 4. The second question in the third item. The accuracy $10^{-6}$ was due to calculating the dual function in Matlab. We have refined our code, and the accuracy can now be $10^{-20}$, as shown in the attached pdf. 5. The third question in the third item. The composition property is a fundamental aspect of differential privacy (DP), and composition + shuffling will also be investigated in our future study. However, it differs from our current privacy analysis. Specifically, computing the privacy of DP-SGD through composition requires releasing the parameters (models) output by every iteration, while our primary focus lies in analyzing the privacy of the last-iterate parameter without disclosing intermediate ones (cf. [2,43] for the analysis using RDP). Although our present paper only examines the one-step iteration, we are actively extending it to DP-SGD in the future. This extension is highly non-trivial. The role of joint concavity (Lemma 4.1) is crucial, building on insights from the analysis in [43] for RDP. Responses to the questions: Questions 1 & 3 are answered in Items 1 & 3 above. 6. Question 2: Besides the simplest subsampled mechanism in [9,14], there are other sampling mechanisms, such as Poisson sampling or sampling with replacement. For subsampled mechanisms, one may refer to [4] for privacy analysis, where the joint convexity and advanced joint convexity of the hockey-stick divergence play important roles. Our Lemma 4.1 and Lemma 4.7 are extensions from the hockey-stick divergence to trade-off functions. 7. Question 4: We have taken the reviewer's suggestion, added a detailed citation (Theorem 3.2 in [23]), and provided clear explanations at the beginning of Section 4 to better elucidate the post-processing procedure.
As we mentioned earlier, this post-processing technique from [23] is highly effective for specific mechanisms, making our $f$-DP bound near-optimal. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. I carefully read the reply and also the authors' responses to the other reviewers. But still, my two main concerns about the paper's novelty remain. First, I do not think the results in Section 3.2 are meaningful, or at least the authors should not claim this as a main result of this paper. The results are far from solving the problem of privacy amplification from random initialization. At most, the authors' methods only address the DP analysis for running DP-SGD on very simple quadratic linear regression (the gradient has a closed form) and for one iteration. The results are trivial since, even without using the trade-off function, the iterate distribution already enjoys a closed form by just combining the Gaussian noise and initialization. That is why I state that once the loss function is a bit more complicated, the proposed methods can only handle very weak instance-based DP. Second, I guess I may not have expressed my concerns on the subsampling issues very clearly. My point is that subsampling is a classic distribution mixture model, and many existing works studying f-DP for subsampling have also essentially studied f-DP on mixture distributions. Compared to the random initialization model, subsampling is definitely a more well-studied and better-understood model. At least, I do not see any straightforward application of the paper's results on mixture distributions to produce sharpened results on subsampling amplification. In summary, as the title of the paper is a "unified analysis" of f-DP on mixture models, I do not think the methods presented here are generic or novel enough to handle all the applications. So, I decide to keep my score.
--- Reply to Comment 1.1.1: Title: Response to the reviewer, I Comment: We thank the reviewer for clarifying, in the official comments, questions that were not specific in the initial reviews. The weaknesses and novelty of our paper were clearly stressed in the rebuttal. We would like to stress our novelty, contributions, and limitations again: we provide a unified $f$-DP analysis framework for mixture mechanisms that applies to shuffling models and DP-GD with random initialization (with some strong assumptions on the models to extend it to the worst case). This framework can also be extended to shuffled and sub-sampled multi-step DP-SGD, which will be investigated in our future study. 1. In response to the reviewer's inquiries concerning the application of the least squares loss in the context of privacy amplification with random initialization, we offer the following explanation: First, we extend our findings to a worst-case study and utilize linear logistic regression as an illustrative example. In general, the worst-case DP guarantee is achieved by replacing $\mu_I = \mu_I(D_0,D_1)$ in Theorem 3.5 with the worst-case sensitivity $\mu_I^*$ for a given initialization $I$, where $\mu_I^*$ is such that $|\mu_I^*| = \max_{D_0, D_1} |\mu_I(D_0,D_1)|$. We now use linear logistic regression as an example to specify the worst-case sensitivity $\mu_I^*$. Let $D_0$ be an arbitrary dataset and let $D_1$ be generated by deleting $z = (x,y)\in D_0.$ Then, we have $\mu_I(D_0,D_1) = \frac{\eta e^{-yI\cdot x}}{1 + e^{-yI \cdot x}}$, and the worst-case sensitivity $\mu_I^*$ is such that $|\mu_I^*| = \sup_{x,y} \left|\frac{\eta e^{-I\cdot yx}}{1 + e^{-I \cdot yx}} \right|$, as the gradient of the logistic loss is the softmax function $\frac{e^{-I\cdot yx}}{1 + e^{-I \cdot yx}}$. Let's consider the noiseless linear model discussed in https://arxiv.org/abs/2006.08212, where we assume $y = ax$ and $x^2 = 1$ for some $a\neq 0$. 
In this context, the worst-case sensitivity $\mu_I^*$ is given by $\mu_I^* = \frac{\eta e^{-aI}}{1 + e^{-aI}}$. The numerical computation of the worst-case trade-off function can be performed by substituting $\mu_I$ in Theorem 3.5 with $\mu_I^*$. By comparing the resulting trade-off function with respect to $\mu_I^*$ to that of $1$-GDP (the sensitivity of the softmax function is $1$), we observe the privacy amplification achieved through random initialization. In general, the impact of initialization in worst-case differential privacy (DP) can be examined under the condition that $|xy| \leq M$ for some $M > 0$. Note that the key factor is the monotonicity of the softmax function. Similar results may hold when the gradient is monotonic, which holds for convex generalized linear models (cf. https://arxiv.org/pdf/2006.06783.pdf). In response to your inquiries regarding the utilization of the least squares loss, we offer the following specific explanation: In general, the initialization $I$ is not restricted to any specific distribution, thus $\mu_I$ may not inherently follow a Gaussian distribution. Take Example 3.4, which represents a worst-case scenario achieved by removing an arbitrary element from a dataset, while applying gradient clipping with a constant $c$. Here, the initialization $I$ is truncated as a result of its appearance in the gradient, leading to a distribution that is not characterized by a Gaussian + Gaussian combination, making its verification challenging. Our Figure 2 visually demonstrates the observed privacy amplification using our Theorem 3.5 when compared to $c$-GDP without considering the random initialization. Example 3.3, in the absence of gradient clipping, indeed corresponds to a Gaussian + Gaussian distribution. The trade-off function involves testing two Gaussian distributions characterized by distinct means and variances. It is noteworthy that closed-form expressions for the type I and type II errors exist. 
However, representing the trade-off function in a closed form is only feasible when both distributions share the same mean. We greatly appreciate your insightful observations. Moreover, we intend to relocate this example to the appendix in the revised version of our work. Overall, strong assumptions on the models are required for us to extend the analysis to the worst case, but it is still new and not demonstrated in existing results. As a result, we finally decided to keep this section with more discussion of its restrictions. We have highlighted this restriction in both the abstract and the introduction. Technically, this analysis of initialization is non-trivial and is a result of our $f$-DP framework (Lemma 4.1). Moreover, we hope our analysis serves as an important first step for people to notice the effect of initialization on DP analysis. Additionally, this is not claimed as our main contribution in the revised version; our main contribution is refining the bound for the shuffling models. --- Reply to Comment 1.1.2: Comment: Our most significant contribution is in the realm of shuffling models, where we establish a substantial enhancement compared to the state-of-the-art results. Moreover, this bound is nearly optimal when compared with the empirical lower bound. This contribution is both novel and significant. The shuffling procedure is widely adopted in training deep learning models, as can be seen from both TensorFlow and PyTorch (Opacus). We would like to emphasize this main contribution once more, as the reviewer didn't mention this part as our main contribution in the reviews. Overall, a rating of 3 might arguably be worth a second thought, as our results do present a near-optimal analysis for the widely used shuffling models. --- Rebuttal 2: Comment: If I may ask the reviewer for clarifications myself. As I see it, the main contribution of the paper is in Theorem 3.1 and the theoretical tools that were used to prove it (presented mainly in Section 4). 
I agree with the reviewer that the current presentation paints a slightly different picture, and I recommend the authors align their final presentation with the assessment provided by the reviewers. That being said, I am still not sure I understand the reviewer's opinion on this part's contribution. Is it just not important enough in their opinion (in which case, I don't have much to add, and it is probably up to the AC to decide), or do they also find a significant limitation in this result (such as not being convinced by the tightness argument)? If it is the latter, I would appreciate a clarification of the gap, as they see it. --- Rebuttal Comment 2.1: Title: Response to the reviewer's update. Comment: The reviewer revised their reviews instead of adding official comments, almost causing us to overlook them. All questions in the reviewer's update have already been clearly addressed in our rebuttal, which seems to have been overlooked by the reviewer. Here, we would like to address them again in response to the reviewer's update. 1. Response to Item 1 in Limitations: claiming it as our secondary contribution doesn't mean it is trivial. As required by the reviewer, we extended our analysis to linear models with strongly convex loss in our rebuttal, which is non-trivial and can be found in, for example, https://arxiv.org/abs/1803.02596 and https://arxiv.org/abs/2007.05157. 2. Response to Item 2 in Limitations: our framework is designed to deal with $\textbf{any}$ mixture models. Our Lemma 4.7 is motivated by the subsampling analysis in Balle et al. and is a generalization of their Theorem 2 to $f$-DP. $\textbf{For complicated mixture models such as shuffling,}$ $\textbf{Lemma 4.7 and their Theorem 2 may not work}$ due to the difficulty of clarifying the weights. To deal with this challenge, we propose the novel Lemma 4.1, which can handle $\textbf{any}$ mixture distributions and leads to a $\textbf{sharp}$ analysis (Proposition 4.3) in the applications of the widely used shuffling models. 
3. Response to Item 3 in Limitations: in our rebuttal we already showed that our bound is near-optimal $\textbf{in theory}$ and is $\textbf{always}$ better than SOTA (https://arxiv.org/pdf/2208.04591.pdf), not only in some special cases as stated by the reviewer. Moreover, we $\textbf{removed}$ the assumption required by Theorem 3.2 in SOTA, while the reviewer $\textbf{misunderstood}$ this as us adding stronger assumptions. Here we emphasize the tightness again in terms of the proof details, in line with our rebuttal. The proof of Theorem 3.1 is based on a post-processing procedure in SOTA, joint concavity (Proposition 4.3), and advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as shown by Theorem 5.2 and Theorem 5.3 in SOTA. Proposition 4.3 is also sharp, as demonstrated in our Section 4. Based on the proof of Proposition 4.8 in Section B.1.2 (the content before line 542 is the proof of Proposition 4.8 and is clarified in the revised version), the advanced joint concavity used in this example is almost the tightest closed-form bound that can be derived. Compared to SOTA, the main advantage of using $f$-DP (Proposition 4.3) is that we avoid the use of Hoeffding's inequality and the Chernoff bound, which are adopted in SOTA and lead to loose bounds, to bound the mixture of binomial distributions. Moreover, Theorem 3.2 in SOTA holds under an assumption on the local privacy budget $\epsilon_0$, $\textbf{which is removed by using Proposition 4.3 in our paper.}$ Since our novel Proposition 4.3 is sharp and the other details of the proof are in line with SOTA, our bound is always better than SOTA. Both our bounds and those in SOTA are non-asymptotic, and non-asymptotic analysis is more practical than asymptotic analysis. We have already shown that our bound is $\textbf{always better than SOTA in terms of tightness and assumptions}$. 
Comparing with them in the asymptotic sense is unnecessary.
Rebuttal 1: Rebuttal: We greatly appreciate the reviews from all the reviewers, which have contributed to the thoroughness of our paper. We have attached a 1-page PDF here that provides detailed comparisons and extra numerical results. Furthermore, we have noticed several frequently asked questions and would like to emphasize our responses as follows. Here the reference numbering is aligned with the supplementary materials, which differs slightly from the 9-page paper due to the appendix. 1. The tightness of Theorem 3.1 for shuffling models: Our Theorem 3.1 offers a near-optimal bound. Theoretically, we have added one paragraph right after Theorem 3.1 to illustrate its near-optimality: The proof of Theorem 3.1 is based on a post-processing procedure in [23], joint concavity (Proposition 4.3), and advanced joint concavity (Proposition 4.8). The post-processing procedure is sharp for specific mechanisms, such as the randomized response mechanism, as shown by Theorem 5.2 and Theorem 5.3 in [23]. Proposition 4.3 is also sharp, as demonstrated in Section 4. Based on the proof of Proposition 4.8 in Section B.1.2 (the content before line 542 is the proof of Proposition 4.8 and is clarified in the revised version), the advanced joint concavity used in this example is almost the tightest closed-form bound that can be derived. Compared to [23], the main advantage of using $f$-DP (Proposition 4.3) is that we avoid the use of Hoeffding's inequality and the Chernoff bound, which are adopted in [22, 23] and lead to loose bounds, to bound the mixture of binomial distributions. Moreover, Theorem 3.2 in [23] holds under an assumption on the local privacy budget $\epsilon_0$, which is removed by using $f$-DP in our paper. 
Numerically, we compare our theoretical bound with the numerical upper bound in [23] and the numerical lower bound obtained by privacy auditing in Table 4 in the attached PDF, which shows that our theoretical bound is near-optimal as it closely aligns with the lower bound. 2. The notation $(X|I,I)$ in Section 4 is confusing, and we have addressed this concern in the revised version. Specifically, we clarified the definition as follows: $(X|I, I)$ means we observe two random variables, $X|I$ and $I$, where $X$ follows a mixture distribution $\sum_{i=1}^m w_i p_i$, and $I$ represents the information of indices with $\mathbb{P}[I=i] = w_i$. $X|I$ denotes $X|I=i \sim p_i$. As releasing the indices exposes more information, the pair $((X|I, I), (Y|I, I))$ is more separable than $(X, Y)$ without knowing the indices, which motivates Lemma 4.1. 3. In Section 3.2, we applied Theorem 3.5 to analyze privacy amplification in noiseless 1-dimensional linear models (Examples 3.3 and 3.4), where the sensitivity of the gradient $\mu_I$ with a given initialization $I$ can be specified. It is worth noting that similar results are applicable to multidimensional linear models, where $y_i = a^T x_i$ and $x_ix_i^T = \mathbb{I}$ with $\mathbb{I}$ being the identity matrix. Noiseless linear models find relevance in various applications, as exemplified in the work by Berthier et al. (2020) titled "Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model." For more general applications, such as general convex loss functions, deriving $\mu_I$ becomes more intricate and deserves further investigation, which we plan to address in our future study on the privacy of multi-step DP-SGD. Overall, the DP-GD part is not as promising as our results on shuffling models, but it is still novel, with applications to specific examples. Pdf: /pdf/18c9519fcf4c04ce013bf45b6b799cb2bc961eaa.pdf
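For readers unfamiliar with the $\mu$-GDP baselines referenced in the comparisons above (e.g., the $1$-GDP and $c$-GDP curves), the Gaussian trade-off function has the standard closed form $f_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$. A minimal sketch, not the authors' code; the function name and the use of Python's stdlib `statistics.NormalDist` are illustrative choices:

```python
from statistics import NormalDist  # stdlib normal CDF and inverse CDF (Python 3.8+)

_N = NormalDist()

def gdp_tradeoff(mu: float, alpha: float) -> float:
    """Trade-off function of mu-GDP: f(alpha) = Phi(Phi^{-1}(1 - alpha) - mu).

    Returns the minimal type II error achievable at type I error `alpha`
    when testing N(0, 1) against N(mu, 1).
    """
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie strictly between 0 and 1")
    return _N.cdf(_N.inv_cdf(1.0 - alpha) - mu)
```

For $\mu = 0$ the two hypotheses are indistinguishable and $f(\alpha) = 1 - \alpha$; larger $\mu$ pushes the curve toward the axes, i.e., weaker privacy, which is what the comparison against the $\mu_I^*$-based curve visualizes.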
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies improving the analysis of privacy bounds for two randomization processes (privacy amplifications): shuffling models (where each user record is privatized by some local randomizer like the randomized response mechanism) and differentially-private gradient descent (DP-GD, where Gaussian noise is added to the gradient update at each iteration). Previous bounds were given for the standard $(\epsilon, \delta)$-DP bound, while this paper analyzes through $f$-DP, a differential-privacy bound considering the trade-off between type I and type II errors (like the $f$-score), using some joint convexity arguments. When translating the $f$-DP bounds back to bounds on $\epsilon$ and $\delta$, this paper gets much tighter bounds that work for more general ranges of $\epsilon$ and $\delta$. Strengths: The improvements in DP bounds are non-negligible and more general, and the presentation is convincing. Weaknesses: Certain critical aspects of the proofs might be better explained: the proof depends on some closed-form expressions of piecewise linear functions (Theorem 3.1, Corollary 3.2, Proposition 4.3), which might have meaning beyond just the arithmetic manipulations or expressions currently shown in the paper. A one- or two-sentence description, or some simple examples, might help here. Technical Quality: 3 good Clarity: 3 good Questions for Authors: After Proposition 4.8, the paper writes “One may refer to the supplementary materials for details”, but this reader is unsure where the supplementary materials are. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The theoretical results in this paper do not have broader societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments and valuable suggestions regarding our paper, particularly in acknowledging our main contributions. In response to the raised questions and weaknesses, we would like to provide the following explanations: Response to the weakness: In the revised version, we have explained the expressions of the trade-off functions derived in Theorem 3.1 in Section 3.1. Additionally, we have provided a comprehensive explanation of the trade-off function for testing two discrete distributions using the Neyman-Pearson lemma, employing the simple Bernoulli distribution as an example in Appendix A. The trade-off function for testing two discrete distributions is known to be piecewise linear, a well-established result based on the Neyman-Pearson lemma. In this context, the slope of each line segment is $-\mathrm{Prob}_{X\sim Q}[p(X)/q(X) = t]/\mathrm{Prob}_{X\sim P}[p(X)/q(X) = t]$, which is constant as $t$ only relies on the knots in the Neyman-Pearson lemma. For example, for testing two Bernoulli distributions $\mathrm{Bern}(p_0)$ vs. $\mathrm{Bern}(p_1)$ with $p_1 > p_0$, the trade-off function is piecewise linear with 3 knots: $(0,1), (p_0, 1 - p_1)$, and $(1,0)$. Since the shuffling procedure introduces a mixture of binomial noise, which is a discrete distribution, the trade-off functions presented in Theorem 3.1, Corollary 3.2, and Proposition 4.3 are also piecewise linear, with numerous knots due to the mixture. Response to the Question: We acknowledge the confusion regarding the presentation of the proof of Proposition 4.8, which is contained implicitly in the proof of Theorem 3.1 in Section B.1.2 in the submitted version. Precisely, in Section B.1.2, the content before line 542 is the proof of Proposition 4.8. To address this, we have made the necessary modifications. Specifically, we have updated the title of Section B.1 to "Proofs of Proposition 4.3, Proposition 4.8, and Theorem 3.1." 
Additionally, we have divided Section B.1.2 into two separate sections, one focusing on the proof of Proposition 4.8 and the other on the proof of Theorem 3.1. This reorganization aims to improve clarity and facilitate better understanding for the readers. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I think other reviewers have more to add to the discussion, and I may just maintain my original ratings.
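The piecewise-linear Bernoulli trade-off function described in the rebuttal above (knots $(0,1)$, $(p_0, 1-p_1)$, $(1,0)$) follows directly from the Neyman-Pearson randomized test. A minimal sketch; the function name is an illustrative assumption, not code from the paper:

```python
def bernoulli_tradeoff(p0: float, p1: float, alpha: float) -> float:
    """Trade-off function for testing Bern(p0) vs Bern(p1) with p1 > p0.

    Piecewise linear with knots (0, 1), (p0, 1 - p1), (1, 0): the optimal
    randomized test rejects on x = 1 first (larger likelihood ratio p1/p0),
    then randomizes on x = 0 once alpha exceeds p0.
    """
    assert 0.0 < p0 < p1 < 1.0 and 0.0 <= alpha <= 1.0
    if alpha <= p0:
        # segment from (0, 1) to (p0, 1 - p1)
        return 1.0 - alpha * p1 / p0
    # segment from (p0, 1 - p1) to (1, 0)
    return (1.0 - p1) * (1.0 - alpha) / (1.0 - p0)
```

At $\alpha = p_0$ both branches give $1 - p_1$, matching the middle knot; mixtures of such discrete distributions, as in the shuffling analysis, add further knots but keep the function piecewise linear.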
null
null
null
null
null
null
Flow-Attention-based Spatio-Temporal Aggregation Network for 3D Mask Detection
Accept (poster)
Summary: This paper presents a framework called FASTEN (Flow-Attention-based Spatio-Temporal Aggregation Network) for 3D mask presentation attack detection. Among previous works, rPPG, as a recent technology, addresses some of the limitations but suffers from sensitivity to noise and high computational overhead. To overcome these challenges, FASTEN is designed to focus on fine-grained details in large movements, aiming to eliminate redundant spatio-temporal feature interference and capture splicing traces of 3D masks using fewer frames. It comprises three key modules: a facial optical flow network for obtaining inter-frame flow information, flow attention for assigning different significance to each frame, and spatio-temporal aggregation for combining high-level spatial features and temporal transition features. Strengths: Through extensive experiments, FASTEN demonstrates good performance compared to six competing methods in both intra-dataset and cross-dataset evaluations using multiple detection metrics. The three datasets used for the evaluation are frequently used in the field and are publicly available. The authors mention the deployment of FASTEN on real-world mobile devices for real-time 3D mask detection. This demonstrates the practicality and applicability of the proposed framework in real-world scenarios, validating its potential for deployment in various practical applications and settings. Weaknesses: The proposed method by the authors seems to have limited novelty, as a substantial portion of its components is derived from existing work. The experiments and discussions in real-world scenarios are very limited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The proposed method seems to focus on fine details in large movements. However, I am very curious about how the model will perform if the detected object does not have large movements. 
In line 44: as far as I know, the majority of rPPG methods involve analyzing facial videos captured by a camera and do not necessitate the use of additional photodetectors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It might be interesting if we could see more discussions about the different regions. It would be useful to see more experiments and discussions in real-world scenarios. For example, a comparison of performance with previous methods in real-world scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer jUZR: Q1: Limited novelty as a substantial portion of its components is derived from existing work. A1: Simply using both spatial and temporal features is intuitive and not new for detection tasks, but our contribution focuses more on the frame-wise attention used to aggregate the feature information of multiple frames, rather than a simple addition or concatenation of the two features. We provide comprehensive experimental results in the ablation study (Table 4) to show the outperformance of our defense (Spat. + Temp.(UnEq.)) over only combining the two features together (Spat. + Temp.(Eq.)), which neglects the apparent movement changes among different frames. Assigning different frame weights in our method contributes more to the detection, with significant improvement especially when the input frames include large movements, e.g., eye-blinking, mouth-opening. To the best of our knowledge, we are the first to consider flow attention and spatio-temporal aggregation for 3D mask detection. We will add more explanation in the revised manuscript. Q2: The experiments and discussions in real-world scenarios are very limited. A2: Our work shows that the accuracy is almost unaffected when the model is deployed in real-world scenarios. Considering the development cost and time constraints, we devote more effort to comparing and discussing experimental data. Q3: Large movements and different regions. A3: Our method performs better for input videos with larger motions, but since we also have a spatial branch, when the motion range is small, our method will rely more on the spatial branch, with the temporal branch providing assistance. Our experimental results show the average results over the whole dataset, which includes both large movements and trivial movements. These sound results provide effective support for our method's effectiveness on inputs with subtle movements. 
We think it is not scalable to conduct a comparison experiment testing the detection performance solely on the small movements of the dataset, since it is hard to quantitatively define the boundary between large and trivial movements. Q4: Error in line 44. A4: We admit this is a factual error. We will correct it in the revised version. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal has clarified my questions.
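The frame-wise weighting idea defended in A1 above can be illustrated with a toy sketch: per-frame features are combined using softmax weights derived from per-frame motion scores, so frames with larger movement contribute more to the aggregate. This is a hypothetical illustration under our own naming, not the FASTEN implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate_frames(features, motion_scores):
    """Attention-weighted sum of per-frame feature vectors.

    features: list of T equal-length feature vectors (one per frame).
    motion_scores: list of T scalars (e.g., mean optical-flow magnitude).
    """
    w = softmax(motion_scores)
    dim = len(features[0])
    return [sum(w[t] * features[t][i] for t in range(len(features)))
            for i in range(dim)]
```

With equal motion scores this reduces to a plain average (analogous to the "Temp.(Eq.)" ablation), while unequal scores bias the aggregate toward high-motion frames (analogous to "Temp.(UnEq.)").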
Summary: In this paper, to enhance the accuracy of face presentation attack detection by effectively incorporating temporal information, the author proposes a novel 3D mask detection framework. The architecture integrates: 1) a facial optical flow network to obtain non-RGB inter-frame flow information; 2) flow attention to assign different significance to each frame; 3) spatio-temporal aggregation to aggregate high-level spatial features and temporal transition features. Through extensive experiments, FASTEN only requires five frames of input and outperforms six competitors in both intra-dataset and cross-dataset evaluations in terms of multiple detection metrics. Strengths: 1. The author employs a simple model structure that enables deployment on end devices. It is hoped that the author can further compare the FLOPs and Params of this model with other models. 2. The author simplifies the structure of the optical flow calculation model specifically for facial features, thereby reducing the computational burden. 3. The model only requires five images as input, reducing the requirement on the length of the input sequence compared to the rPPG method. Weaknesses: 1. The structure proposed in this article is not sufficiently innovative, as it only uses simplified optical flow networks and MobileNetV3-Small. The fusion of the spatial features and optical flow features is achieved by employing the "calculate the frame weights" method. 2. Since this paper has achieved results beyond previous works by using a simple structure, it is hoped that the author will open-source the code for validation purposes. 3. Although the model exhibits significant improvement in accuracy compared to rPPG-based models, which is attributed to the longer temporal information required for measuring rPPG signals, the improvement in accuracy compared to "Deep-Learning-based Methods" from two years ago is limited. 
It is requested that the author compare their approach with recent algorithms developed in the past two years. Technical Quality: 3 good Clarity: 3 good Questions for Authors: none Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weakness Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer iaxD: Q1: Novelty. A1: Simply using both spatial and temporal features is intuitive and not new for detection tasks, but our contribution focuses more on the frame-wise attention used to aggregate the feature information of multiple frames, rather than a simple addition or concatenation of the two features. We provide comprehensive experimental results in the ablation study (Table 4) to show the outperformance of our defense (Spat. + Temp.(UnEq.)) over only combining the two features together (Spat. + Temp.(Eq.)), which neglects the apparent movement changes among different frames. Assigning different frame weights in our method contributes more to the detection, with significant improvement especially when the input frames include large movements, e.g., eye-blinking, mouth-opening. As mentioned by Reviewer U1im, and also to the best of our knowledge, we are the first to consider flow attention and spatio-temporal aggregation for 3D mask detection. We will add more explanation in the revised manuscript. Q2: Open source. A2: We are sorry that we currently cannot open-source our code for public disclosure, since it may involve confidential information. We will open-source a demo code and the model structure once the paper is accepted. Q3: It is requested that the author compare their approach with recent algorithms developed in the past two years. A3: We additionally compare our method with one ViT-based method, ViTranZFAS [1], and one deep-learning-based method, MD-FAS [2]. Table 1 in the enclosure shows the intra-dataset and cross-dataset results on HiFiMask. ViTranZFAS performs worse than FASTEN. We attribute this to three main reasons: 1) Lack of Temporal Information: ViT primarily focuses on analyzing individual images without considering temporal information. Facial liveness detection often requires understanding the dynamic changes over time, such as blinking, movement, or facial expressions. 
ViT's static image analysis might not effectively capture these temporal cues. 2) Small Regions of Interest: Liveness cues often involve subtle facial regions like the edges of the facial features or muscle contractions. ViT's patch-based approach might not focus on these small regions of interest effectively, leading to challenges in capturing fine-grained liveness indicators. 3) Training Difficulty: ViT requires a pretrained model, which requires high computational costs. MD-FAS aims to solve the forgetting issue when learning new domain data and to improve the adaptability of the model. It cannot perform well with unseen target data, as shown in the cross-dataset evaluation. In addition, the baseline SRENet selected by this scheme targets a region-based fine-grained classification task, which has high requirements for the diversity of attack categories, as well as the resolution of images. When we use HiFiMask as the training set for a fair comparison, its accuracy remains poor. Reference: [1] George A, Marcel S. On the effectiveness of vision transformers for zero-shot face anti-spoofing[C]//2021 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2021: 1-8. [2] Guo X, Liu Y, Jain A, et al. Multi-domain Learning for Updating Face Anti-spoofing Models[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 230-249.
Summary: This work proposes a 3D mask detection system, successfully deployed on mobile devices. In addition, the number of frames required for making predictions is minimal, improving the latency of responses. Experiments performed with face mask datasets showed improved performance compared to existing techniques. Strengths: 1. The introduced method is very interesting. 2. The combination of facial optical flow for inter-frame characteristics with flow attention and spatio-temporal aggregation (to the best of my knowledge) hasn't been applied to 3D face mask detection yet. 3. The use of only 5 frames and the potential of deploying such a solution on mobile devices is very promising. 4. The method is well described with clear comparison to hand-crafted features, deep learning methods and remote vital estimation approaches. 5. Experimental analysis is clear and thorough with results supporting claims well. Weaknesses: 1. The method has only been tested using the MobileNet backbone. It's understandable that experimental analysis has to have some limitations, but it would be interesting to explore other, maybe even faster, classifiers. 2. Minor comment: Font in tables is very small, please increase readability. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. How would motion affect performance of the proposed method? You stated that 'flow attention can effectively help FASTEN assign larger frame weights to frames with large movement.' but what type of movement is acceptable? 2. Average processing time has been provided, but what's the latency of responses? Do you use batch processing at the inference time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations and ideas for future work improvement were clearly explained and make sense. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer U1im: Q1: More backbones. A1: Thanks for your advice. We use MobileNet due to its excellent accuracy-efficiency trade-off. Replacing it with other backbones may accelerate the speed, but the performance improvement is limited. We will consider this advice in our future work. Q2: Fonts in tables. A2: We will check the fonts in all tables and figures and adjust them for better readability. Q3: Large movement. A3: One of our work's insights is the difference between the motion characteristics of the mask and those of the face, due to the mask not fitting the face perfectly. In practical applications, we consider movements such as nodding, shaking the head, and opening the mouth. Q4: Latency and batch processing. A4: The time mentioned in Table 5 refers to the time it takes for a single image to be processed using a mobile device. Considering the pre-processing time for video parsing and other necessary steps in actual usage, the overall response time is about 879 ms. We do not use batch processing at inference time. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Thank you for your responses. If the overall time of processing the entire pipeline is 879 ms, then the method is not real time as stated in the paper. Please correct such statements. Please also include details of pre/post processing. --- Reply to Comment 1.1.1: Comment: Thanks for your reminder. We will address the real-time issue in the revised manuscript. Pre-processing includes video frame decoding, face detection with landmarks, and face alignment. There is no post-processing.
Summary: This paper presents an approach to 3D mask detection using a small number of face video frames. Experiments were conducted on several databases, and comparisons with other methods are presented. Strengths: Using small networks for 3D mask detection with a small number of frames; Both spatial and temporal features are extracted; Most parts are written clearly. Weaknesses: It seems that this work only focuses on 3D mask detection, without dealing with paper print, video replay, etc. This makes it quite narrow in terms of applications for face anti-spoofing. It is not new to use both spatial and temporal features for face anti-spoofing; many existing works have used spatial and temporal information. More recent works use vision-transformer-based approaches for face anti-spoofing, which can effectively encode attention into the learning framework in a simple way, rather than using optical flow to indicate attention as in this work. Further, there are no comparisons with ViT-based approaches in this paper. The claim of "only 5 frames" is confusing, since the five frames can be picked from a sequence at some fixed intervals rather than as consecutive frames in the original videos (25 or 30 frames per second), as indicated in some databases. Further, the paper argues that it captures facial changes caused by pulse; how can blood changes be acquired within 5 frames? One can calculate: 5/25 = 0.2 seconds, which is less than a pulse period (supposing a regular person has 60 pulses per minute, one pulse takes one second). The paper mentions that the method can be deployed on a mobile device, and shows the computation and memory used, but no accuracy or error rate is reported. How accurate can it be on a mobile device? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See my questions and comments in the Weaknesses part. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The argument for using 5 frames is doubtful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer bEHS: Q1: Only focusing on 3D mask detection, which is quite narrow in terms of applications for face anti-spoofing. A1: Owing to the rapid development and maturity of 3D printing, detecting 3D masks has become a rising and challenging task in the field of face anti-spoofing, threatening most existing PADs [1][2][3]. 3D masks made of soft silicone are vivid enough, in terms of color, texture, and geometric structure, that even humans find them hard to distinguish. This has gradually turned 3D mask defense from a branch of face anti-spoofing into a relatively independent sub-topic. To date, effective defense against 3D masks remains a missing piece. Therefore, our method focuses on this type of attack (this is also where the motivation of our method comes from). As a deep-learning method based on feature extraction, we believe that our method can also be used for other attack types, e.g., printing and replay, since their features are relatively simpler. In addition, regarding the actual applications you mentioned: practical face anti-spoofing security systems usually aggregate multiple defense models, so other attacks can be separately mitigated by the corresponding tactics. Deploying our defense model will enhance the overall security of the system. Q2: It is not new to use both spatial and temporal features for face anti-spoofing. A2: Simply using both spatial and temporal features is not new for detection tasks, but our contribution focuses on frame-wise attention to aggregate the feature information of multiple frames, rather than simple addition or concatenation of the two features. We provide comprehensive experimental results in the ablation study (Table 4) showing that our defense (Spat. + Temp.(UnEq.)) outperforms simply combining the two features (Spat. + Temp.(Eq.)) without considering frame weights.
To the best of our knowledge, we are the first to consider flow attention and spatio-temporal aggregation for 3D mask detection. We will add more explanation in the revised manuscript. Q3: Recent works using ViT can effectively encode attention into the learning framework in a simple way, rather than using optical flow to indicate attention. A3: The “attention” you mention here is different from the form of attention in our method. We use optical flow to generate inter-frame attention (the contribution, or weight, of each frame to the detection), not attention on each feature map. However, this comment raises an interesting point: attention along the spatial dimension is also an important indicator for the final detection. We therefore compare our defense with a ViT-based method, ViTranZFAS [4], slightly modified to adapt to 3D mask detection. Table 1 in the enclosure shows the intra-dataset and cross-dataset results on HiFiMask. FASTEN performs better than ViTranZFAS in almost all cases. We attribute this to three main reasons: 1) Lack of temporal information: ViT primarily analyzes individual images without considering temporal information. Facial liveness detection often requires understanding dynamic changes over time; ViT's static image analysis might not effectively capture these temporal cues. 2) Small regions of interest: Liveness cues often involve subtle facial regions like the edges of facial features or muscle contractions; ViT's patch-based approach might not focus effectively on these small regions of interest, making it hard to capture fine-grained liveness indicators. 3) Training difficulty: ViT requires a pretrained model and high computational cost. Q4: “Only 5 frames.” A4: Our method supports picking 5 frames at equal/unequal intervals.
For datasets such as HiFiMask that have been pre-sampled, we directly use the preprocessed sampled data, since our method is not sensitive to how the 5 frames are selected, and just focuses on the movement between consecutive selected frames. We explained in the Statement for rPPG methods in the paper that they usually require a longer video clip (usually more than 10 seconds), and require continuous (high-frequency sampling, by Nyquist–Shannon Sampling Theorem) signal inputs. So “5-frame input” is for our method, not rPPG methods. Q5: How accurately can it be achieved in mobile devices? A5: With less computational cost and model parameters, quantization acceleration is unnecessary when deploying on mobile devices, so we can practically use it with almost no loss of accuracy. The overall system achieves a defense rate (accuracy) of 99.3% against 3D masks. Reference: [1] Liu S Q, Lan X, Yuen P C. Learning Temporal Similarity of Remote Photoplethysmography for Fast 3D Mask Face Presentation Attack Detection[J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 3195-3210. [2] Yu Z, Qin Y, Li X, et al. Deep learning for face anti-spoofing: A survey[J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 45(5): 5609-5631. [3] Erdogmus N, Marcel S. Spoofing face recognition with 3D masks[J]. IEEE transactions on information forensics and security, 2014, 9(7): 1084-1097. [4] George A, Marcel S. On the effectiveness of vision transformers for zero-shot face anti-spoofing[C]//2021 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2021: 1-8.
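The frame-weighting mechanism described in A2/A3 — optical flow supplying inter-frame attention over per-frame spatial features — can be illustrated with a minimal sketch. This is our own illustrative reading, not FASTEN's actual implementation (which learns the attention from flow features with a network); the function name, the softmax weighting, and the way flow magnitude is credited to frames are all assumptions.

```python
import numpy as np

def flow_attention_aggregate(spatial_feats, flows):
    """Aggregate per-frame spatial features with motion-derived weights.

    spatial_feats: (T, D) array, one feature vector per frame.
    flows: (T-1, H, W, 2) array, optical flow between consecutive frames.
    Frames adjacent to larger motion receive larger attention weights
    (softmax over per-frame motion magnitude); the output is the
    weighted sum of the spatial features.
    """
    mag = np.linalg.norm(flows, axis=-1).mean(axis=(1, 2))  # (T-1,) mean flow magnitude
    T = spatial_feats.shape[0]
    motion = np.zeros(T)
    motion[:-1] += mag            # flow t -> t+1 credited to frame t ...
    motion[1:] += mag             # ... and to frame t+1
    motion[1:-1] /= 2.0           # interior frames touch two flows
    w = np.exp(motion - motion.max())
    w /= w.sum()                  # softmax -> frame attention weights
    return w, (w[:, None] * spatial_feats).sum(axis=0)
```

In this toy version, a frame flanked by large apparent movement (e.g., an eye blink) contributes more to the aggregated feature than a static frame, which is the hypothesized effect of Spat. + Temp.(UnEq.) over Spat. + Temp.(Eq.).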
Rebuttal 1: Rebuttal: Thanks for your feedback! We summarize our responses by questions in this rebuttal. Please see more details in the responses to each reviewer. * Ethics [Ethics Reviewer HeTH, Nmac] We acknowledge there might be social impacts associated with our work. We will accordingly add a section to the paper to discuss the impact. Our defense alleviates 3D mask spoofing attacks against face recognition systems (FRS). Such defense would be beneficial for minimizing security concerns associated with FRS in financial and public service sectors. However, we would also like to extend our considerations to privacy and fairness issues of the defense and its serving FRS. 1) Privacy. Our defense applies to the specific task of spoofing detection in which the facial information is presented to the associated FRS as credentials. To ensure no private information is memorized, our model is trained on public datasets and does not use any private photos for the training purpose. 2) Fairness. In our work, we use HiFiMask as one of the main datasets. HiFiMask provides an equal number of subjects (25 each) for each ethnicity to facilitate fair artificial intelligence and mitigate biases [1]. Our training method also encourages the fairness of the trained defense. However, considering the defense is an independent component in the FRS, it cannot improve the fairness of the overall FRS. Moreover, there might be fairness issues for newborns or people with certain skin diseases since they do not have enough representation in the training dataset. Another possibility would be the dataset itself contains unbalanced data or faces with even only one skin tone. If this happens, incremental training after collecting obfuscated/synthetic data of the corresponding group of people can be used to mitigate the fairness issue. We also encourage future research in this area to incorporate fairness training techniques to mitigate possible issues. 
* Novelty [Reviewer VV4D, bEHS, iaxD, jUZR] We emphasize that our contribution focuses more on the frame-wise attention to aggregate the feature information of multiple frames, rather than simple addition or concatenation of the two features. We provide comprehensive experimental results in the ablation study (Table 4) to show the outperformance of our defense (Spat. + Temp.(UnEq.)) over only combining two features together (Spat. + Temp.(Eq.)) which neglects the apparent movement changes among different frames. We are the first to consider flow attention and spatiotemporal aggregation for 3D mask detection. * Research focus [Reviewer bEHS] Detecting 3D masks has become a rising challenge for most existing PADs owing to the rapid development and maturity of 3D printing [2]. Realistic 3D masks are hard to differentiate even for human eyes. To date, effective defense against 3D masks remains a missing piece. Therefore, our method focuses on this type of attack (this is also where the motivation of our method comes from). As a deep learning method by extracting features, we believe that our method can also be adapted to other attack types e.g., printing, replay, since their features are relatively simpler. Moreover, in actual face anti-spoofing security applications, it is usually an aggregation of multiple defense models. Other attacks can be separately mitigated by the corresponding tactics. Deploying our defense will enhance the overall security of the system. * More competitors [Reviewer bEHS, iaxD] We additionally compare our method with ViTranZFAS[3], and one recent deep-learning-based method, MD-FAS[4]. Table 1 in the enclosure shows the intra-dataset and cross-dataset results on HiFiMask. Please refer to the responses to each reviewer for more analyses. * Frames [Reviewer bEHS] Our method supports picking five frames at equal/unequal intervals. 
For datasets such as HiFiMask that have been pre-sampled, we directly use the preprocessed sampled data, since our method is not sensitive to how the frames are selected and focuses only on the movement between consecutive selected frames. We explained in the Statement for rPPG methods in the paper that they usually require a longer video clip (usually more than 10 seconds) and require continuous (high-frequency sampling, by the Nyquist–Shannon sampling theorem) signal inputs. So the “5-frame input” applies to our method, not to rPPG methods. * Real-world scenarios [Reviewer bEHS, U1im, jUZR] bEHS: With low computational cost and few model parameters, quantization acceleration is unnecessary when deploying on mobile devices, so we can practically use the model with almost no loss of accuracy. The overall system achieves a defense rate (accuracy) of 99.3% against 3D masks. U1im: The time mentioned in Table 5 refers to the time it takes for a single image to be processed on a mobile device. Considering the pre-processing time for video parsing and other necessary steps in actual usage, the overall response time is about 879 ms. We do not use batch processing at inference time. jUZR: Our work shows that the accuracy is almost unaffected when the model is deployed in real-world scenarios. Considering the development cost and time constraints, we devote more effort to comparing and discussing experimental data. * Open source [Reviewer iaxD] We regret that we currently cannot open-source our code, since it may involve confidential information. We will open-source demo code and the model structure once the paper is accepted. Reference: [1] Liu A, et al. Contrastive context-aware learning for 3d high-fidelity mask face presentation attack detection. TIFS, 2022. [2] Yu Z, et al. Deep learning for face anti-spoofing: A survey. TPAMI, 2022. [3] George A, et al. On the effectiveness of vision transformers for zero-shot face anti-spoofing. IJCB, 2021. [4] Guo X, et al.
Multi-domain Learning for Updating Face Anti-spoofing Models. ECCV, 2022. Pdf: /pdf/8d7d7613eb5287b4db56daad814d02eeb0a7fb8b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a 3D mask detection tool called FASTEN, aimed at making face recognition systems more secure. The proposed network focuses on fine-grained details in large movements, which helps eliminate redundant spatio-temporal feature interference. This approach lowers computational overhead, and it outperforms six competitive models in both intra-dataset and cross-dataset evaluations (Tables 2/3). Real-world application on mobile devices (Table 5) shows the efficiency of the method. Strengths: 1. FASTEN offers a solution by focusing on small details during big movements, helping to detect traces of 3D masks in fewer frames. 2. The reduction in frames needed from 25 to just 5 means this tool is computationally more efficient and suitable for real-time applications. 3. The fact that FASTEN can be used in real time on mobile devices shows that it's practical for everyday use. Weaknesses: I am not directly working on this research topic. The contribution of this paper is mainly technical: it combines FlowNet and MobileNetV3 (as in Fig. 2) to improve the efficiency of 3D mask detection across 5 frames. I think the authors should state the explicit insights of this paper compared to the simple baseline of FlowNet + MobileNetV3. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the weaknesses. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer VV4D: Q1: I think the authors should state the explicit insights of this paper, compared to the simple baseline of FlowNet+mobileNetv3. A1: It is intuitive to consider both spatial and temporal features when detecting 3D masks. However, as shown in the ablation study (Table 4), simply combining the two (Spat. + Temp.(Eq.)) neglects the apparent movement changes among different frames, thus leading to average performance. Instead, we introduce frame weights obtained from the temporal features to assign different significance to the spatial features of each frame. This contributes more to the detection, with significant improvement especially when the input frames include large movements, e.g., eye-blinking or mouth-opening. We will add more explanation in the revised manuscript. --- Rebuttal Comment 1.1: Title: Thank you for your responses. Comment: Thank you for your responses. As I am not an expert in this topic, I do not have more comments.
Most Neural Networks Are Almost Learnable
Accept (poster)
Summary: The paper examines the extent to which deep neural network models can be learned. Given that there are almost no 'rich' families of constant-depth random neural networks that are efficiently learnable by some algorithm, the authors assert the existence of an algorithm that approximately learns such families of networks composed of Lipschitz activation functions. To achieve this, they substitute each activation function with a low-degree polynomial approximation (a Hermite polynomial), and then show that the approximation is good enough, allowing standard SGD on neural networks to be a PTAS. Strengths: - The paper examines the expressivity and learnability of random neural networks. This is a fundamental problem in the theory of deep learning, and the authors address it by bounding the expected learning time, i.e., the number of iterations for determining the weights of the networks approximately (if I understand Th. 1.1 correctly). This gives the paper potential impact in this field. - From a first reading the results seem new. Weaknesses: - The presentation could be improved. The mathematical formulation could be explained more, e.g., with an example of a toy neural network where the weights and random initialization can be displayed pictorially. - There could be a clearer comparison to previous approximation attempts in the literature to show the novelty of the current PTAS. E.g., in Section 1.2 there is a mention of Chen et al. [7], who provided an algorithm for learning deeper ReLU networks, with complexity polynomial in the input dimension but exponential in the other parameters.
The current approach of this paper (from what I understand) considers fixed-depth networks and asserts the existence of a polynomial which can be 'close' (approximately) to the learning outcome of the trained neural network: after $i$ iterations, it can approximate it well in expectation (ideally up to a constant, as the number of iterations $i$ tends to infinity). This can be contrasted with previous approaches mentioned in Section 1.2 about learning depth-3 networks, hardness results for the setting where the target network is random and an adversary chooses some input distribution which needs to be learned by the random target network, etc. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the limitations below. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - The paper does not attempt an experiment as a proof of concept. For example, it could try several neural networks of constant depth and Lipschitz activations and vary the remaining hyperparameters randomly. There could then be some benchmarking across several neural networks that measures how good the approximation is while the learning runs in polynomial time. The running time would also be interesting to report. Although the paper is theoretical, it discusses a potential application of such neural networks (their polynomial approximation). This is why it would be interesting to offer a numerical result on the efficiency, accuracy, and complexity of learning a polynomial. - It seems to be inherently difficult to find the polynomial of Th. 1.1 in order to prove the above point. Is this within the scope of this paper, besides the existence result?
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. We will improve the writing according to the outlined suggestions. We next address the reviewer’s concerns: - Experiments: Thanks for the suggestion. We will run experiments and consider whether to add them to the final version. - Finding the polynomial: It is indeed not clear how to find the polynomial of the theorem. Yet, for the algorithms of the paper, existence is enough. --- Rebuttal Comment 1.1: Title: Answer to authors during the rebuttal Comment: I thank the authors for their response. I am inclined to keep my score for acceptance for now.
Summary: The paper proves that a sufficiently wide neural network which is randomly sampled according to Xavier initialization can be learned in polynomial time (in the network size) up to an additive error $\epsilon$ if the number of layers, the Lipschitz constant of the activation function, and $\epsilon$ are fixed constants. Strengths: - According to the authors, this is the first positive learnability result of this type for arbitrary deep neural networks. This seems to be a major milestone in learning theory for deep networks. - The obtained learning algorithm is actually SGD on ReLU networks, so exactly what people use in practice. Weaknesses: - The paper is crazily technical already in the main part, not even taking into account the appendix. Within the limited time available for a NeurIPS review, it is impossible to verify mathematical correctness. I tend to believe that this is inherent to the result and not the authors' fault, but I wonder if NeurIPS is actually the right venue for publishing such a result. Maybe the authors should submit to a journal instead, where reviewers have enough time to verify mathematical correctness. If submitting to NeurIPS, I think the emphasis should be much more on providing intuitions for the proofs than on technical details. - The result is only valid if the network is sufficiently wide. Looking through the proof, it seems that the required width $D$ depends on $n$ through a double factorial, and $n$ in turn depends on the desired precision $\epsilon$. This will be astronomically large for any reasonable value of $\epsilon$. I do not think any practical neural network can be so wide. - Obviously, all other constants also seem to be huge / have high exponential dependencies on the fixed quantities. But I think this is fine for a theoretical result; it does not concern me as much as the lower bound on the width mentioned above.
Minor comments: - line 26: I think it would be good to define what you mean by "ReLU-like" already here. - line 39: you should say that p is a polynomial. - line 47: it is confusing to call the neuron n, because n denotes the degree of the polynomials to approximate sigma. - line 60: I find this sentence grammatically weird "Thus, those PTAS can be standard SGD on neural networks". What do you mean here? - line 63: erf not defined. - line 65: quasi-polynomial refers to the dependence on epsilon and $\bar{d}$, still assuming constant i. Point this out. - line 144: what are the $a_i$ in the definition of $\hat{\sigma}$? These are only defined a few lines later, please fix this. - line 196: soften -> softened Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The result seems to assume realizability of the data, that is, it assumes that the ground truth is actually represented by a Xavier-initialized neural network. Did I understand this correctly and can you say anything about the non-realizable setting (where the data can be anything, but you compare yourself only against the error of a best-possible neural network)? - How much of a restriction is it to restrict the input data to the boundary of the sphere? - For sigmoid and "ReLU-like" activation functions, you prove a better dependence on $\epsilon$. Can you obtain something similar for the actual ReLU activation function? If not, what are the obstacles? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Generally the authors state all mathematical assumptions in a proper way.
However, I think two limitations should be discussed earlier / in more detail: - The issue with the minimum width $D$ (see weaknesses) should be mentioned in the abstract already and pointed out during the introduction. It seems to be an actual important restriction of the validity of the result. - There is an issue with the sequence of drawing the random objects of the instance, as the authors point out themselves in lines 123 - 125. As previous work shows, the result can only hold with this particular sequence. While this is of course implicitly captured by the sequence of expectations in Thm. 1.1, I think the authors should point this out when they state Thm. 1.1 already, and not only hidden in the related work part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
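The setting this review describes — a random bias-free feedforward network under Xavier initialization, evaluated on inputs from the unit sphere — can be sketched concretely. This is an illustrative reading under our own conventions, not the paper's exact definition: the $W_{ij} \sim \mathcal{N}(0, 1/\text{fan-in})$ scaling is one common Xavier-style convention (the paper's may differ), and the function names and tanh default are assumptions.

```python
import numpy as np

def xavier_network(dims, rng):
    """Sample weight matrices W with W_ij ~ N(0, 1/fan_in) for a bias-free
    feedforward network with layer widths `dims` (one Xavier-style
    convention; an assumption, not necessarily the paper's exact scaling)."""
    return [rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_out, d_in))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(weights, x, act=np.tanh):
    """Evaluate the random network on input x; act is any Lipschitz activation."""
    for W in weights[:-1]:
        x = act(W @ x)
    return weights[-1] @ x
```

With this scaling, pre-activations at each layer have roughly unit variance for unit-norm inputs, which is why width (not weight magnitude) is the quantity the reviewer's concern about the minimum width $D$ focuses on.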
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review. We will improve the writing according to the outlined suggestions. First, the reviewer says that our paper “seems to be a major milestone in learning theory for deep networks” and does not raise soundness concerns. We believe that this may qualify the paper for a rating better than 4. We next address the reviewer’s concerns and questions: - Dependence on constants: The constant $D$ is of the form $L^{n^i}$ where $L$ is the Lipschitz constant of the activation function. We conjecture that the theorem is correct with respect to any constant. We will clarify it in the final version. - Non-Realizable setting: You understood correctly. If $1-\epsilon_0$ fraction of the distribution is realizable by a random network, then our bounds are valid, up to an additive error of $\epsilon_0$. - *“How much of a restriction is it to restrict the input data to the boundary of the sphere?”:* We can relax this requirement and assume that the inputs are close to the sphere. This will result in an additional factor in the bounds. - ReLU-like vs. ReLU: As opposed to ReLU, ReLU-like activations have good polynomial approximations. Hence, our results give better bounds for these functions. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response to my review and for answering my questions. While I agree with the authors that the result could qualify for a better rating than 4, I remain hesitant to recommend acceptance due to the lack of intuitive explanations for the very technical content in the paper. A paper at NeurIPS should be comprehensible for the very diverse ML community. --- Reply to Comment 1.1.1: Comment: We will add more intuitive explanations to make the paper easier to read for a more diverse audience. We note that the proof approach is simple: for a random network, we can approximate each activation sufficiently well with low-degree Hermite polynomials. 
The calculations required to prove our result are indeed non-trivial, but we believe that there are many papers in NeurIPS which are much more technical than ours. The mathematical tools required to understand our proof are approachable to a wide audience.
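The proof approach sketched in this reply — replacing each activation by a low-degree Hermite expansion — can be made concrete numerically. The sketch below is our own illustration, not the paper's construction: it expands a smooth activation in probabilists' Hermite polynomials $He_k$ (orthogonal under the standard Gaussian, with $\lVert He_k \rVert^2 = k!$) and measures the $L^2(\mathcal{N}(0,1))$ truncation error, the quantity playing the role of $\epsilon_\sigma(n)$; the function names and quadrature degree are assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_k

def hermite_expand(sigma, degree, quad=200):
    """Coefficients c_k = E[sigma(Z) He_k(Z)] / k! for Z ~ N(0, 1),
    computed with Gauss-Hermite quadrature (weight exp(-x^2/2))."""
    x, w = He.hermegauss(quad)
    w = w / np.sqrt(2 * np.pi)   # normalize quadrature to the N(0,1) measure
    vals = sigma(x)
    basis = np.eye(degree + 1)   # row k = coefficient vector of He_k
    return np.array([(w * vals * He.hermeval(x, basis[k])).sum() / factorial(k)
                     for k in range(degree + 1)])

def l2_error(sigma, coeffs, quad=200):
    """sqrt(E[(sigma(Z) - p(Z))^2]) for the truncated expansion p."""
    x, w = He.hermegauss(quad)
    w = w / np.sqrt(2 * np.pi)
    diff = sigma(x) - He.hermeval(x, coeffs)
    return np.sqrt((w * diff ** 2).sum())
```

For a smooth activation such as the sigmoid, the error shrinks quickly with the degree, which is the sense in which such activations "have good polynomial approximations" in contrast to ReLU.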
Summary: The work shows that there is a PTAS for learning a random Xavier feedforward network. They show rigorously that the algorithm runs in time and sample complexity $d^{t}$ where $t = \mathrm{poly}(\frac{1}{\varepsilon})$. This is quite an impressive result since it is distribution-free in terms of the input $x$. Furthermore, a corollary of this result is that SGD can learn these networks. Strengths: 1. This is a technically challenging and impressive result. 2. Typically, learnability results for neural nets are restricted to 2-layer shallow networks. Extending the result to constant-depth networks is quite impressive, even if they are only random. Weaknesses: 1. The paper could be better written. While the technical content is good, I feel there could be more intuition and better presentation. Proofs could be moved to the appendix and detailed proof sketches included. 2. Since the PTAS is for learning random networks, its practical value is questionable. 3. I found certain aspects of the theory hard to understand - I have included them in the review as questions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The abstract mentions Xavier networks - shouldn't this only be feedforward networks? Or is it easy to extend to other kinds? 2. I understand that $\epsilon_{\sigma}(n)$ refers to the error when approximating the activation by a polynomial of degree $n$. I would like to see a few lines explaining the intuition behind this quantity and why it is important. 3. The input space is restricted to be **on** the unit sphere. Can this be relaxed to handle $\lVert x \rVert \leq 1$? 4. Theorem 1.1 is stated as "For any weights $W$", but the bound takes an expectation over $W$. Is it on average or "for any"? 5. The notation can be cleaned up. For example, in Line 66, consider saying $d^t$ where $t = \mathcal{O}(\dots)$ 6. Why does Corollary 1.2 consider $\epsilon, i$ as constants? Isn't it just for any $\epsilon, i$?
Is it possible to indicate the dependence on $i$ in the bound? 7. Is the claim of Corollary 1.2 that SGD can be used to approximate a Xavier network starting from a Xavier network on average? I understand that this is non-trivial, but I am not able to get a good intuition as to why this is important. 8. I feel line 113-116 provides excellent motivation for the work. I would recommend the authors to highlight it in the introduction. 9. $\delta_{ij}$ is the dirac-delta function? 10. In Lemma 2.7, it says fix $W$ and then takes $\mathbb{E}_W$. I don't understand this. 11. Is the application of Jensen's Inequality in line 233 obvious? $\psi$ isn't convex as a function of all the $W$'s right? Or is the expectation only with respect to the last layer's weights? 12. I think the work is good, but presentation can be improved. I would recommend moving the proofs to the appendix and highlighting intuition and explaining the result regarding SGD in more detail. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I feel the authors should include a more detailed limitations section where they address some of the concerns I have listed above as questions (if relevant). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review. We will improve the writing according to the outlined suggestions. We next address the reviewer’s questions: - *“The abstract mentions Xavier networks - shouldn't this only be feedforward networks? Or is it easy to extend to other kinds?”:* Our techniques are not limited to feedforward networks, and we are certain that they can be used to derive similar results for more architectures. This is mostly left for future work. - *“I understand that $\epsilon_\sigma(n)$ refers to the error when approximating the activation by a polynomial of degree $n$. I would like to see a few lines explaining the intuition behind this quantity and why it is important”:* In the proof we essentially replace the activation by its polynomial approximation. Thus, it is natural to expect that our bound will depend on the extent to which the activation can be approximated by a polynomial. We will add an explanation to the final version. - *“The input space is restricted to be on the unit sphere. Can this be relaxed to handle $\lVert x \rVert \le 1$?”:* We can relax this requirement and assume that the inputs are close to the unit sphere. This will result in an additional factor in the bounds. - *“Theorem 1.1 is stated as ‘For any weights $W$’, but the bound takes an expectation over $W$. Is it on average or ‘for any’?”:* The polynomial $p_W$ is defined for any $W$, and the bound is valid in expectation. - *“Why does Corollary 1.2 consider $\epsilon, i$ as constants? Isn't it just for any $\epsilon, i$? Is it possible to indicate the dependence on $i$ in the bound?”:* The dependence on the constant can be derived from Theorem 1.1. We write the corollary to emphasize that our main theorem implies a PTAS. - *“Is the claim of Corollary 1.2 that SGD can be used to approximate a Xavier network starting from a Xavier network on average? 
I understand that this is non-trivial, but I am not able to get a good intuition as to why this is important”:* The importance is that this implies that standard training algorithms can learn most neural networks. Thus, this can be seen as a partial explanation for the success of neural networks, which is a central open question these days. - *“I feel lines 113-116 provide excellent motivation for the work. I would recommend that the authors highlight it in the introduction”:* Will do. Thanks! - *“Is $\delta_{ij}$ the Dirac delta function?”:* It is the Kronecker delta. We will clarify this. - *“In Lemma 2.7, it says fix $W$ and then takes $E_W$. I don't understand this”:* It should be replaced with “let $W$ be a Xavier matrix”. - *“Is the application of Jensen's Inequality in line 233 obvious?”:* It follows from the concavity of the square root function. --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: I thank the authors for their response. Since the results on extending it to more architectures are left as future work, I would recommend that the authors specify that the results, as proven, hold only for feedforward networks. It can be left as a remark that they believe these results can be extended. Other than that, I am satisfied by the authors' responses and am increasing my score. --- Reply to Comment 1.1.1: Comment: Thanks. We will add such a remark.
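For readers following the Jensen's-inequality exchange above, the concavity argument the authors invoke is the standard one: since $t \mapsto \sqrt{t}$ is concave on $[0, \infty)$, Jensen's inequality applied to any nonnegative random variable $X$ gives

$$\mathbb{E}\big[\sqrt{X}\big] \;\le\; \sqrt{\mathbb{E}[X]},$$

so no convexity of $\psi$ in the weights $W$ is needed: a bound on $\mathbb{E}[X]$ immediately yields a bound on $\mathbb{E}[\sqrt{X}]$.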
Summary: This work presents a polynomial-time approximation scheme (PTAS) for learning random Xavier networks of depth $i$ up to a fixed additive error of $\epsilon$ with respect to any distribution on the hypersphere. For a fixed $\epsilon$, the time and sample complexity is polynomial w.r.t. $\bar{d}$, the total parameter size of the neural network. The work also provides improved bounds of $\bar{d}^{polylog(\epsilon^{-1})}$ for special activation functions such as sigmoid. Prior work has several limitations, mainly that most consider only one-hidden-layer networks. The most comparable work is Chen et al. [7], which provides an algorithm for learning ReLU networks deeper than one layer; yet while the complexity is polynomial in $\bar{d}$, it is exponential in parameters such as the number of hidden units, depth, spectral norm of the weight matrices, and Lipschitz constant. In fact, there is a plethora of negative results regarding hardness of learning. This work avoids these hardness results by fixing an input distribution and then drawing the weights of the network at random. The main idea of the paper is to approximate the activation function sufficiently closely by replacing it with a polynomial approximation using Hermite polynomials (Theorem 1.1). The paper calls this a "shadow network" and requires that the shadow network's coefficients are polynomially bounded. Given the extensive work on learning polynomials efficiently, the authors can leverage these algorithms, such as SGD on learning polynomial approximations, to obtain a PTAS for learning random networks to a sufficiently small error. Strengths: - This work studies the important problem of learning deeper-than-two-layer neural networks efficiently. Given the high theoretical relevance of the problem to the effectiveness of deep learning, the significance of studying such a problem is justified. 
- I find the presentation of the paper quite clear, though it may be helpful to describe prior work and results on learning polynomial approximations using Hermite polynomials. Otherwise, the related work and comparisons to prior work are clear. - The main result provides a PTAS that is distribution-free in its input distribution. Prior work did not achieve such flexible results; this work achieves it in part by incorporating a common initialization method (random Xavier networks) and studying "average-case" results instead of worst-case instances. Weaknesses: I do not see any notable weaknesses. I have some questions, which may be seen as weaknesses, that I will list below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Regarding the constant $D(n,i,\sigma)$: It is not entirely clear to me what $D$ explicitly is with respect to the parameters $n,i,\sigma$. Since $D$ is a lower bound on the number of neurons in each layer, this quantity seems highly relevant and would be helpful to make explicit in the main theorem (like Theorem 2.1). My question is the following: how does the lower bound $D$ compare to prior work? How would the results change if you do not have an explicit constant $D$ and instead assume $D := \min(d_1, ..., d_{i-1})$? Would such a guarantee exist given the current techniques, and what would the guarantees be w.r.t. this new $D$? - L140: When defining Hermite polynomials, is $\delta_{ij}$ defined? I think this is missing and should be added to Section 2.1. - L144: When defining the dual activation, $a_i$ is not yet defined. Maybe reordering the definitions so that the dual activation comes after defining $\sigma$ with Hermite polynomials would be clearer. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See Questions. Other than that, the authors are clear with the limitations and comparison to prior work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. Regarding the questions: 1. This is a good question. In our result, we need a lower bound $D$ on the width which is of the form $L^{n^i}$, where $L$ is the Lipschitz constant of the activation function. We also conjecture that the theorem is correct with respect to any constant. Prior work considered significantly different settings, but a lower bound on the width is usually not required. We will discuss it in the final version. 2. Right. We will add it. 3. We will reorder. --- Rebuttal Comment 1.1: Title: Reply to Authors Comment: My questions are answered. Thank you for the response.
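As a side illustration of the polynomial ("shadow") approximation idea discussed in this review and rebuttal — expanding an activation in the normalized probabilists' Hermite basis, whose Gaussian orthonormality is exactly the Kronecker-delta $\delta_{ij}$ property mentioned above — here is a minimal numerical sketch. The function names (`hermite_coeffs`, `shadow_activation`) are illustrative and this is not the paper's actual construction:

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials He_n
from math import factorial, sqrt, pi

def hermite_coeffs(sigma, degree, quad_points=80):
    """Coefficients c_n of sigma in the normalized basis h_n = He_n / sqrt(n!),
    so that sigma(x) ~= sum_n c_n * h_n(x) for x ~ N(0, 1)."""
    x, w = He.hermegauss(quad_points)  # Gauss quadrature nodes/weights for exp(-x^2/2)
    w = w / sqrt(2 * pi)               # normalize into Gaussian-expectation weights
    coeffs = []
    for n in range(degree + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0                 # select He_n in the coefficient vector
        h_n = He.hermeval(x, basis) / sqrt(factorial(n))
        coeffs.append(float(np.sum(w * sigma(x) * h_n)))
    return coeffs

def shadow_activation(coeffs, x):
    """Evaluate the truncated polynomial surrogate of the activation."""
    total = np.zeros_like(np.asarray(x, dtype=float))
    for n, c in enumerate(coeffs):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        total = total + c * He.hermeval(x, basis) / sqrt(factorial(n))
    return total
```

For the identity activation $\sigma(x) = x$ this recovers $c_1 = 1$ and all other coefficients zero (since $x = He_1(x)$); for a general $\sigma$, the mass of the discarded tail coefficients is the truncation error that the quantity $\epsilon_\sigma(n)$ in the exchange above controls.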
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation
Accept (poster)
Summary: This paper addresses the issue of degree bias in node classification, which refers to the poorer performance of Graph Neural Network (GNN) models on nodes with lower degrees compared to the average level. While previous works have attempted to mitigate this bias, they often trade off the performance on higher-degree nodes for improvements on low-degree nodes. In light of this, the authors propose GraphPatcher, a method that introduces virtual nodes during the test phase to tackle this problem. To make the approach computationally feasible, the authors limit the search space to first-order neighbors and devise an iterative procedure to repair a series of corrupted ego graphs by adding one virtual node at each step. Additionally, the authors provide theoretical results that establish the relationship between the number of sampled ego graphs and the accuracy of the estimated patching loss. The proposed GraphPatcher is extensively evaluated through experiments, demonstrating its effectiveness in addressing the degree bias issue. Strengths: 1. The paper is well-written, allowing readers to easily grasp the core ideas and delve into the authors' detailed explanations. 2. The motivation behind the proposed GraphPatcher is well-founded, as it intuitively aims to enhance overall performance rather than compromising it through trade-offs. 3. The empirical evaluations consistently demonstrate the effectiveness of GraphPatcher, with statistically significant improvements observed. Furthermore, the increasing number of patched virtual nodes correlates with increasingly noticeable enhancements, providing strong support for the rationale behind the iterative procedure designed for GraphPatcher. Weaknesses: 1. While the theoretical analysis provided in the paper is logically sound, its relevance to the community's primary concern of reducing the generalization risk of the original GNN model is unclear. 
Consequently, the presented theoretical results may not be perceived as a significant contribution. 2. Although the experiments in the paper are well-designed, there is room for improvement in terms of comprehensiveness. For instance, the inclusion of more visually informative figures/tables showcasing the improvements across different degree levels would enhance the clarity of the results. Additionally, comparisons on heterophilic graphs would provide a more comprehensive evaluation. It is also important to evaluate the efficiency of GraphPatcher, since it introduces additional computational overhead. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: When working with graph samplers, which are necessary for training on large graphs, do we need a larger or smaller $L$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LJuX, Thank you for your valuable and kind feedback. We sincerely appreciate your acknowledgment of the good writing quality, motivation, and performance of our proposed framework. Our detailed response to your concerns is listed as follows: **[Relevance of our theoretical analysis]** \ We appreciate your thoughtful consideration and valuable feedback. We recognize that the theorem we propose in our manuscript might not be directly applicable to any GNN in general. However, it is important to note that this theorem remains specific to our framework, as it substantiates the iterative node patching and reconstruction processes, both of which are specifically tailored to our model. The significance of Theorem 1 lies in its demonstration that the upper bound of the error for GraphPatcher can be notably reduced through the utilization of multiple sampled ego-graphs (i.e., Equation 5). We believe it serves as an important theoretical motivation for our sampling strategy. Along with the promising empirical results we have shown in our experiments, it further strengthens the credibility and substantiates the merits of our proposed GraphPatcher model. **[Performance on heterophilic datasets]** \ Please refer to G2 in our general response for details. In summary, GraphPatcher exhibits similar trends on heterophilic datasets as we observe on homophilous graphs. GraphPatcher improves the low-degree performance by 0.44 and 1.97 accuracy score and the overall performance by 0.38 and 1.14 on Chameleon and Actor, respectively. Updated experiments over heterophilic datasets are included in our provided one-page pdf. We will update the final version of our paper with these additional experiments. **[Performance w.r.t. different degree levels]** \ Thanks for raising this point -- we agree this would help clarify further where the performance improvements are derived. 
We have taken the liberty of incorporating additional plots into the one-page PDF that was submitted for the global rebuttal. In these new visuals, we meticulously analyze the performance of GCN, TuneUp, and GraphPatcher across a spectrum of 10 distinct degree ranges, ranging from the 0th to the 10th percentile, all the way up to the 90th to 100th percentile. Furthermore, we also include comparative graphs depicting the relative enhancements achieved by GraphPatcher and TuneUp in contrast to the baseline performance of GCN. These supplementary graphs exhibit consistent trends, mirroring the observations we made in the original manuscript regarding the coarse degree levels. Notably, GraphPatcher consistently showcases improvements in low-degree performance while concurrently preserving or even elevating high-degree performance. We appreciate your point and are looking forward to incorporating these enlightening visuals into the appendix of our manuscript should it meet the criteria for acceptance. Your consideration of these additions would be greatly appreciated. **[Selection of L on large datasets]** \ In the appendix (i.e., Figure 5 in Section B.2), we have presented GraphPatcher's performance in relation to the number of sampled graphs (referred to as L). Our observations indicate that consistently across all datasets, an L value below 10 tends to impact the overall performance. Notably, the performance reaches a saturation point at L=10, beyond which further increments in L do not yield discernible improvements across all datasets. As a result, we are led to the conclusion that, particularly when dealing with sizable graphs (such as the ogbn-arxiv dataset examined in this study), employing 10 sampled graphs is likely to yield promising and sufficient outcomes. We sincerely appreciate your suggestions for our manuscript. 
We will **accordingly modify the manuscript to clarify your concerns** when the anonymous period ends, and we hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have any remaining doubts or concerns, please let us know, and we will happily respond. Thank you! Best regards, \ GraphPatcher's authors --- Rebuttal Comment 1.1: Title: Discussions Comment: Thanks for your response! Most of my concerns are resolved. I will raise my score. --- Reply to Comment 1.1.1: Title: Thanks for your response. Comment: We are delighted that our rebuttal satisfactorily addresses your concerns. We appreciate your constructive comments and will accordingly refine our manuscript.
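The degree-percentile breakdown discussed in this rebuttal (per-bucket accuracy from the 0th–10th up to the 90th–100th percentile) can be reproduced generically. The sketch below is illustrative rather than the authors' code; `degrees` and `correct` are hypothetical per-node arrays (node degree, and whether the prediction was correct):

```python
import numpy as np

def accuracy_by_degree_percentile(degrees, correct, n_buckets=10):
    """Sort nodes by degree, split into equal-size percentile buckets
    (0-10th, ..., 90-100th), and report accuracy within each bucket."""
    order = np.argsort(degrees, kind="stable")      # low-degree nodes first
    buckets = np.array_split(order, n_buckets)      # ~equal-size percentile groups
    return [float(np.mean(np.asarray(correct)[b])) for b in buckets]
```

Comparing these per-bucket accuracies for a baseline GNN and an augmented model makes degree bias (and any trade-off between low- and high-degree nodes) directly visible.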
Summary: This paper addresses degree bias in node classification. The authors show that current methods suffer performance degradation for high-degree nodes. Thus, they freeze the original GNN and train GraphPatcher to enhance low-degree nodes with node patching in the testing stage. In the experiments, they demonstrate that their approach significantly improves performance without performance drop in high-degree nodes on various homophilous graphs. Strengths: - They exhibit that the existing approaches suffer performance drops in high-degree nodes. - Their method improves performance without performance degradation in high-degree nodes. Weaknesses: - [W1] From my understanding, it seems difficult to train in a node-parallel manner with a single large graph, as the graphs used by each node differ, unlike vanilla GNN training. Then, I’m concerned that training GraphPatcher might take an excessively long time. - [W2] The validation is only conducted on homophilous graphs. - [W3] I found that there is a performance drop for high-degree nodes on CiteSeer in Table 1. Is there any reason for this phenomenon? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Following [W1], could you provide the comparison with baselines from the perspective of the training time (including vanilla GNN)? Also, it would be better to show memory usage in GPU when training GraphPatcher and baselines. - Following [W2], could you provide a performance comparison on heterophilous graphs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As the authors mentioned, the additional computational cost might be considerable. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Cs3J, Thank you for your valuable and kind feedback. We sincerely appreciate your acknowledgment of the good motivation and performance of our proposed framework. Our detailed response to your concerns is listed as follows: **[Node-parallel manner/Training time/Memory usage]** \ Please refer to G3 in our general response for details. In summary, the training for GraphPatcher is fast and brings little extra computational overhead if we pre-compute all ego-graphs beforehand. With sufficient CPU cores, the training would be just as fast even if we compute all ego-graphs on the fly. **[Experiments over heterophilic datasets]** \ Please refer to G2 in our general response for details. In summary, GraphPatcher exhibits similar trends on heterophilic datasets as we observe on homophilous graphs. GraphPatcher improves the low-degree performance by 0.44 and 1.97 accuracy score and the overall performance by 0.38 and 1.14 on Chameleon and Actor, respectively. Updated experiments over heterophilic datasets are included in our provided one-page pdf. We will update the final version of our paper with these additional experiments. **[Performance drop for high-degree nodes on Citeseer]** \ We appreciate your attention to our experiments. We believe that the performance drop in Table 1 is a dataset-specific phenomenon. For instance, as you noticed, all models in Table 1 show sub-optimal performance for high-degree nodes on Citeseer. Most of the frameworks in this table explore graph convolution schemes similar to GCN. However, when the backbone model used is GraphSage, as shown in Table 2, both TuneUp and GraphPatcher improve Citeseer’s high-degree performance by 1.49 and 0.76 accuracy scores, respectively. From these two ablations, we can conclude that the performance of high-degree nodes on Citeseer is sensitive to the model architecture. 
Moreover, when applied to self-supervised learning-based frameworks (even with GCN as the backbone model, which does not favor high-degree nodes on Citeseer), both TuneUp and GraphPatcher can improve the high-degree performance, as demonstrated in Table 3. Hence, we can also conclude that the supervision signal (e.g., self-supervised learning vs. supervised learning) will also impact the high-degree performance. Recently, [1] discovered that there exist subgroups of nodes within each graph where different GNNs might exhibit distinct results, depending on factors such as the model architecture, supervision signal, etc. This paper might be a good reference for explanations of dataset-specific or subgroup-specific abnormal behaviors shown by different GNNs, and subgroup-level understanding is still an actively underexplored direction. We sincerely appreciate your suggestions for our manuscript. We will **accordingly modify the manuscript to clarify your concerns** when the anonymous period ends, and we hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have any remaining doubts or concerns, please let us know, and we will happily respond. Thank you! Best regards, \ GraphPatcher's authors [1] Mao, Haitao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, and Jiliang Tang. "Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?" arXiv preprint arXiv:2306.01323 (2023). --- Rebuttal Comment 1.1: Title: Change my score to weak accept Comment: I appreciate your detailed response. Most of my concerns are resolved. Thus, I change my score to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for your response. Comment: We are delighted that our rebuttal satisfactorily addresses your concerns. We appreciate your kind responses and acknowledgement of our work.
Summary: This paper introduces GraphPatcher, a novel test-time augmentation framework designed to mitigate degree bias in graph neural networks (GNNs). Degree bias causes GNNs to perform well on high-degree nodes (rich neighbor information) and poorly on low-degree nodes. Current strategies tend to focus on low-degree nodes during training, which can compromise performance on high-degree nodes. To overcome this, GraphPatcher creates virtual nodes to progressively 'patch' low-degree nodes, enhancing performance without sacrificing the capabilities of GNNs on high-degree nodes. The framework is model-agnostic, applicable to self-supervised or supervised GNNs, and shows significant improvement in overall and low-degree node performance across multiple benchmark datasets. Strengths: - I like the reasoning of the paper, which makes the proposed method well motivated and natural. - The proposed method is more "adaptive" to both low-degree and high-degree nodes, without special treatments that apply only to low-degree nodes. The iterative patching sounds like denoising diffusion models, which is interesting. - There are enough details for reproducibility and the code is attached to the supplementary material. Weaknesses: **Concern About Weak Model Performances**: Though the experimental results show that GraphPatcher is sometimes more accurate than the baselines, this is less convincing because the baselines do not look strong enough. For example, for citation networks like Cora, Citeseer and Pubmed, the SOTA accuracies are much better than the best performance shown in Table 1. I recommend that the authors check https://paperswithcode.com/area/graphs for SOTA performances. This is very important because there are non-negligible accuracy gaps. For example, for Citeseer, a lot of models have achieved more than 80% accuracy, but in Table 1, the best model achieved only around 71% accuracy. Same for Cora (90% vs 84%) and Pubmed (91% vs 81%). 
I feel it would be better if the authors could evaluate GraphPatcher on some SOTA models and measure the effect (improvement) of the proposed graph patching method, rather than demonstrating improvements over some well-established models that were proposed years ago. It is crucial to understand the real-world value of the proposed method in this fast-moving community. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Some Connections to Iterative Graph Generation And Diffusion Models**: The proposed GraphPatcher is conceptually similar to iterative graph generation frameworks; only the training objective is different. The way KL-divergence is used is also similar to a lot of graph generation methods. It is recommended to consider the connection to graph generation works to see what can be adopted. I feel an autoregressive model like an autoregressive transformer or LSTM could be a good candidate for generating the sequence of patched nodes. The main idea is that it could then be possible to know when to stop (predict a stop token). The recent works on graph generation using diffusion models could also be very relevant. These are just some random thoughts; I would appreciate the authors' thoughts on how this could relate to this work, and maybe adding some future directions into the discussion section if it is indeed relevant. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No limitations are left unaddressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ZZgb, Thank you for your valuable and kind feedback. We sincerely appreciate your acknowledgment of the motivation and reproducibility of our proposed framework. Our detailed response to your concerns is listed as follows: **[Baseline not strong]** \ The baselines we compare GraphPatcher with are SoTA methods recently published on top venues (e.g., GTrans on ICLR’23, EERM and ColdBrew on ICLR’22, Tail-GNN on KDD’21, etc). We compare GraphPatcher with other test-time augmentation methods or cold-start/low-degree methods and evaluate by the relative improvement. By strong baselines, we believe you mean strong backbone models. If this is the right interpretation, we have self-supervised methods that can perform better than early models like GCN. For these strong methods (i.e., Table 3), GraphPatcher can still consistently improve the performance, aligning with our observation in the main table. To further address your concern, we explore the top model on the leaderboard (i.e., GRAND [1]) and apply GraphPatcher to it. As shown in the table below, GraphPatcher still enhances performance by a significant margin, with 1.5, 1.8, and 4.2 low-degree accuracy gains. Overall, GraphPatcher combined with GRAND can reach 85.9, 76.1, and 84.2 accuracy scores. The current SoTA scores are 85.5, 75.4, and 83.8 and we further improve them by 0.4, 0.7, and 0.4 accuracy scores, making **GraphPatcher the top 1 model on the leaderboard**. Please note that the leaderboard you were referring to is not the public split leaderboard. The correct one that aligns with our setting is “Cora/Citeseer/Pubmed with Public Split: fixed 20 nodes per class”. **We will update the leaderboard with the numbers we provide in this discussion** after the anonymous period and publicize the code. 
||Cora|Citeseer|Pubmed| |:-|:-:|:-:|:-:| |||Low-degree||| |GRAND|80.18±0.64|70.57±0.68|80.48±0.14| |+GraphPatcher|81.68±0.45|72.37±0.29|84.68±0.29| |||High-degree||| |GRAND|88.32±0.75|79.64±0.86|83.53±0.52| |+GraphPatcher|88.92±0.18|79.54±0.13|41.13±0.21| |||Overall||| |GRAND|85.22±0.80|74.90±0.77|82.30±0.41| |+GraphPatcher|85.90±0.44|76.10±0.38|84.20±0.26| [1] Wenzheng Feng et al. "Graph random neural networks for semi-supervised learning on graphs." NeurIPS 2020. **[Connection to Iterative Graph Generation and Diffusion Models]** \ We agree that our proposed framework is relevant to diffusion models and iterative graph generation models. Most graph generation frameworks (including those using diffusion models) explore iterative generation schemes to synthesize real graphs [1-7]. They improve the generation quality and focus on applications such as molecule design, protein design, and program synthesis. Though GraphPatcher also generates patching nodes for ego-graphs, ours is a different research direction than these methods. We do not focus on whether or not the generated patching nodes are faithful to the original data distribution, as long as the low-degree performance is enhanced and the high-degree performance is maintained. Another relevant work named GPT-GNN [8] explores an iterative node generation for pre-training, which also falls under the category of maintaining the original data distribution. In summary, GraphPatcher is relevant to these frameworks in the sense that it generates nodes to add to ego-graphs. However, our proposal is motivated by a different reason and we aim at the performance improvement brought by generated nodes in downstream tasks. Hence it is also difficult for us to directly compare GraphPatcher with these iterative generation models due to their different purposes and motivations. As for other diffusion models, we have detailed discussions in G1 in the general response. 
They are conceptually similar to this section but start from a more general point. **[Discussion on Auto-regressive Architecture]** \ The architecture of GraphPatcher is auto-regressive, where the node to be generated depends on previous generations. As for mechanisms that terminate the generation, such as the stop token you mentioned, we believe this might be difficult in our setting. Unlike language generation, where a sentence stops at its period, our node patching does not have such a ground truth. Right now we simply decide the generation length by the validation accuracy. A suitable heuristic could be the amount of change (as measured in cosine similarity or other metrics) after each node patching. If the node representation stops updating with an additional node, then that indicates GraphPatcher is satisfied and the generation process could stop. We have not tested this idea yet and it might be a good direction for further exploration. We sincerely appreciate your suggestions. We will **accordingly modify the manuscript to clarify your concerns** when the anonymous period ends, and we hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have any remaining doubts or concerns, please let us know, and we will happily respond. Thank you! Best regards, \ GraphPatcher's authors [1] Yanqiao Zhu, et al. "A survey on deep graph generation: Methods and applications." LOG’22 [2] Jiaxuan You, et al. "Graphrnn: Generating realistic graphs with deep auto-regressive models." ICML’18 [3] Nikhil Goyal, et al. "Graphgen: A scalable approach to domain-agnostic labeled graph generation." WWW’22 [4] Chenhao Niu, et al. "Permutation invariant graph generation via score-based generative modeling." AISTATS’22 [5] Jaehyeong Jo, et al. "Score-based generative modeling of graphs via the system of stochastic differential equations." ICML’22 [6] Clement Vignac, et al. "Digress: Discrete denoising diffusion for graph generation." 
ICLR’23 [7] Meng Liu, et al. "Graphebm: Molecular graph generation with energy-based models." EBM@ICLR’23 [8] Ziniu Hu, et al. "Gpt-gnn: Generative pre-training of graph neural networks." KDD’20 --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns. It seems there was some misunderstanding in the experiment setups, and I agree that it may be difficult to have a well-defined stopping criterion in graph patching. I believe my score has already acknowledged the contributions of this work, hence I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks for your reply. Comment: We greatly appreciate your valuable insights and constructive feedback on our paper. It's truly encouraging to see that our response effectively addresses the concerns you raised. Thank you for recognizing the efforts we've put into our work.
Summary: The paper proposes GRAPHPATCHER, a test-time augmentation framework for graphs, to mitigate degree biases in Graph Neural Networks. GRAPHPATCHER adopts a corruption function with increasing strength to simulate low-degree ego-graphs from a high-degree one. From the most corrupted graph, it then iteratively generates virtual nodes for the anchor node, such that the frozen GNN model behaves similarly given the currently patched graph or the corrupted graph next in the hierarchy. The authors then examine the effectiveness on seven real-world benchmark datasets, and observe that GRAPHPATCHER enhances the low-degree performance by up to 6.5% while at the same time enhancing the overall performance by up to 3.6%. Strengths: 1. The proposed GRAPHPATCHER avoids creating an artificial out-of-distribution scenario when focusing only on low-degree nodes, and hence avoids significantly sacrificing the performance on high-degree nodes. 2. The modelling and optimization are conducted by aligning the ego-graph sequence and its reconstruction. The test-time augmentation framework avoids changing model architectures and the expensive re-training cost. The structure is simple with good performance. Weaknesses: 1. The way GRAPHPATCHER patches the corrupted ego-graph (adding one node and the corresponding edge at a time) might limit the expressive ability of the model, as it could be difficult for this strategy to reflect more complex structures between the anchor node's neighbors. 2. The continuous corruption and patching of the ego-graph share some similarities with diffusion models. It would be more insightful if the theory behind the methodology and the similarities/dissimilarities with diffusion models were discussed. 3. In the experimental part, Tail-GNN performs worse than GCN in almost all cases. Even for the low-degree nodes, Tail-GNN cannot always beat GCN, which is unexpected since Tail-GNN is specially designed for low-degree nodes. 
If there is a problem with the baseline tuning, some conclusions cannot be well supported (e.g., Tail-GNN falls short on the high-degree nodes). This also affects the convincingness of the superiority of GRAPHPATCHER over baselines. 4. It is claimed that the chosen datasets cover graphs with distinctive characteristics, but all datasets used are homophilic ones. It is expected to have more experiments on heterophilic datasets. Since heterophilic graphs often have more complex topologies, they are more susceptible to node adding and dropping. Other minor issues: Typo in row 8: "orizinally". Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper has properly addressed the limitations of the proposed method. One limitation is the additional computational cost entailed by generating ego-graphs. The authors provide solutions to mitigate the problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer xNZm, Thank you for your valuable feedback. We sincerely appreciate your acknowledgment of the effectiveness of our proposed framework. Our detailed response to your concerns is listed as follows: **[Limited expressive ability]** \ We agree with you on the fact that the way we patch nodes to ego-graphs will not enhance the expressive ability from the perspective of graph-level isomorphism (e.g., making two ego-graphs distinguishable). However, **this is not a problem for our method** because we are not focusing on graph-level distinguishability (e.g., patching nodes to distinguish two ego-graphs that are indistinguishable before). Instead, we care about enhancing node-level predictions over low-degree nodes, where nodes with the same class are mapped to the same class after the node patching. For node classification tasks, a **stronger expressive ability might not always lead to good performance**. For instance, GIN has a stronger expressive ability than GCN, but GCN consistently outperforms GIN on many node classification benchmarks [3]. Moreover, training-time augmentation frameworks that add additional edges/nodes to the graph might also impact the expressive ability of the learning model [1,2]. Furthermore, even for graph-level classification tasks where expressive ability is very important, a model with better expressive ability does not necessarily outperform models with lower expressive ability [3,4]. [1] Zhao, Tong, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, and Neil Shah. "Data augmentation for graph neural networks." AAAI’21 [2] Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, and Eva L Dyer. "Half-Hop: A graph upsampling approach for slowing down message passing." ICML’23 [3] Dwivedi, Vijay Prakash, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. "Benchmarking Graph Neural Networks." 
JMLR’23 [4] Zhao, Lingxiao, Neil Shah, and Leman Akoglu. "A practical, progressively-expressive GNN." NeurIPS’22 **[Similarity/Dissimilarity with the diffusion model]** \ Please refer to G1 in the general response. We will update the final version of our paper by adding the discussion above and we sincerely appreciate your suggestions. **[Performance of Tail-GNN]** \ Thanks for your attention to our experiments. We fully acknowledge the contributions brought by Tail-GNN as it is one of the earliest works that enhance low-degree performance by learning from corrupted high-degree nodes. When the evaluation setting aligns with Tail-GNN’s motivation, the low-degree performance is indeed improved substantially. As shown in our results, Tail-GNN’s low-degree performances on dense datasets like Wiki.CS, Am.Photo, and Co.CS are competitive (i.e., 1st on Wiki.CS, 2nd on Am.Photo, and 3rd on Co.CS) compared to GCN and other SoTA methods. However, Tail-GNN is motivated by the cold-start problem and it is evaluated in a setting specifically designed for low-degree nodes. Tail-GNN explores warm nodes only for training (i.e., all nodes with a degree higher than 5 across all datasets) and cold nodes only for testing (i.e., all the remaining nodes). This setup enables Tail-GNN to perform well on low-degree nodes over dense datasets like Wiki.CS, Co.Photo, and Co.CS. Unlike Tail-GNN, GraphPatcher explores a different research direction: while enhancing the low-degree node performance, GraphPatcher also aims at maintaining good performances over high-degree nodes. In order to evaluate this goal, we need to train and evaluate models on a mixture of low- and high-degree nodes. This setting might not favor Tail-GNN as it is different from the original setting that Tail-GNN was evaluated on. We followed the optimal hyper-parameters provided by the authors of Tail-GNN in the original paper and utilized the official GitHub repo to conduct the experiment.
Cora, Citeseer, and Pubmed are relatively sparse with average degrees lower than 5, which also does not favor Tail-GNN due to the lack of high-degree nodes for Tail-GNN to train from. **[Experiments over heterophilic datasets]** \ Please refer to G2 in the general response. The updated experiments over heterophilic datasets are included in our provided one-page pdf. We will update the final version of our paper with these additional experiments. We sincerely appreciate your suggestions on our manuscript. We will **accordingly modify the manuscript to clarify your concerns** when the anonymous period ends and we hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have any remaining doubts or concerns, please let us know, and we will happily respond. Thank you! Best regards, \ GraphPatcher's authors --- Rebuttal Comment 1.1: Title: Response after rebuttal Comment: Thank you for the rebuttal. It has addressed my concerns. I have raised my rating. --- Reply to Comment 1.1.1: Title: Thanks for your reply. Comment: Your valuable insights and constructive feedback on our paper are deeply appreciated. It is motivating to observe that our response has effectively tackled your concerns. We are grateful for your acknowledgment of the dedication we've invested in our research.
Rebuttal 1: Rebuttal: Dear ACs and reviewers, We thank the reviewers for their feedback and constructive suggestions. We are pleased that most reviewers appreciated **the promising effectiveness and performance of our framework**, e.g.: “simple with good performance” (xNZm), “method improves performance without performance degradation” (Cs3J), and “statistically significant improvements observed” (LJuX). Moreover, most reviewers also acknowledged **the good motivations behind our framework**, e.g.: “the proposed method well motivated and natural” (ZZgb), “exhibit that the existing approaches suffer performance drops in high-degree nodes” (Cs3J), and “motivation behind the proposed GraphPatcher is well-founded” (LJuX). **[G1: Connection between diffusion models and GraphPatcher]** \ At the same time, multiple reviewers asked about the connection between diffusion models and our GraphPatcher. We appreciate the insightful suggestions and agree that GraphPatcher is relevant to diffusion models. Both diffusion models and our proposal conduct multiple corruptions to the training samples with increasing strengths and generate examples in an iterative fashion. This scheme is conceptually inspired by heat diffusion from physics. However, the motivations behind them are different: diffusion models focus on generation quality (i.e., fidelity to the original data distribution), whereas ours aims at the results brought by our generated nodes (i.e., the performance improvement). Specifically, diffusion models aim at learning the probability distribution of the data and accordingly generating examples following the learned distribution. Their goal is to generate samples that follow the original data distribution (i.e., P(X)), agnostic of any other factor like the target GNN we have in our scenario. Whereas for GraphPatcher, we aim at generating nodes to ego-nets such that the target GNN models deliver better predictions (i.e., P(Y|X)) when the node degree is low.
We mostly care about performance improvement and the generated nodes may be very different from the original nodes in the graph. **[G2: Experiments on heterophilic datasets]** \ Besides, reviewers also worried about GraphPatcher’s effectiveness over heterophilic graphs. We agree that incorporating heterophilic datasets would help us better understand GraphPatcher’s behavior, and also benefit readers. We have accordingly added experiments on two heterophilic datasets (i.e., Chameleon and Actor), with results shown in the table from our provided one-page pdf. One interesting behavior of these two heterophilic datasets is that even though the degree distribution is power-law, GNN’s performance does not degrade much on low-degree nodes compared with the performance on high-degree nodes. As shown in the table, GCN’s performances on low- and high-degree percentiles are very close. On these two heterophilic datasets, GraphPatcher universally improves GNN’s performance across all degrees. Specifically, GraphPatcher improves the low-degree performance by 0.44 and 1.97 accuracy points and the overall performance by 0.38 and 1.14 on Chameleon and Actor, respectively. **[G3: Efficiency and running time]** \ Some reviewers raised concerns about the efficiency and running time of our proposed GraphPatcher. While designing GraphPatcher, we paid attention to its scalability and potential applications to industrial pipelines, which is one of the reasons why we explore the ego-net design. In a single forward pass, we only generate a fixed number of ego-graphs (i.e., mostly 16 or 64 ego-graphs per batch across our experiments) for the back-propagation. In this setting with small batch sizes, GraphPatcher can already deliver good performance improvements to target GNN models with light GPU usage and little computational overhead compared with baselines, with the GPU usage and running time shown in the table below.
|  |  | TuneUp | Tail-GNN | GraphPatcher |
|:-|:-:|:-:|:-:|:-:|
| **Cora** | | | | |
| Training Time (s) | 2.3 | 17.7 | 14.5 | 8.1 |
| Train+Eval Time (s) | 2.3 | 17.7 | 14.5 | 13.3 (34.8) |
| GPU Consumption (GB) | 1.4 | 2.0 | 1.6 | 3.1 |
| **Citeseer** | | | | |
| Training Time (s) | 2.6 | 9.7 | 35.4 | 8.3 |
| Train+Eval Time (s) | 2.6 | 9.7 | 35.4 | 12.1 (32.3) |
| GPU Consumption (GB) | 1.4 | 1.5 | 3.8 | 2.2 |
| **Pubmed** | | | | |
| Training Time (s) | 3.2 | 17.3 | 47.6 | 9.1 |
| Train+Eval Time (s) | 3.21 | 17.3 | 47.6 | 14.4 (36.9) |
| GPU Consumption (GB) | 1.6 | 1.7 | 1.9 | 3.4 |

The training for GraphPatcher is fast and brings little extra computational overhead. When all the evaluation ego-graphs are pre-computed beforehand, the total running times for Cora, Citeseer, and Pubmed are under 15 seconds (i.e., 13.3, 12.1, and 14.4 seconds). However, as we mentioned in the limitation section, if evaluation ego-graphs are extracted on the fly, the running time increases substantially (i.e., 34.8, 32.3, and 36.9 seconds for these three datasets). This issue can be easily resolved by using more CPU cores, as this step is embarrassingly parallelizable (currently we only use 24 cores). With sufficient CPU cores, the batch loading time would be close to the time needed to extract a single ego-net, which brings only negligible overhead compared with loading them from disk. We will **accordingly modify the manuscript to clarify your concerns** when the anonymous period ends, and we hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have any remaining doubts or concerns, please let us know, and we will happily respond. Thank you! Best regards,\ GraphPatcher's authors Pdf: /pdf/c9797e4b53ceb5ad08a9e6228e9f049d6253cb0e.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Self-Predictive Universal AI
Accept (poster)
Summary: The authors propose Self-AIXI, an extension of the universal Bayes-optimal agent (AIXI), combining it with self-prediction for use in combination with reinforcement learning instead of planning. They provide theoretical evidence that Self-AIXI inherits optimality properties from AIXI. Strengths: - Overall, the paper is well-written and theoretically sound. - All claims are supported by extensive theoretical analysis. Weaknesses: - As Self-AIXI inherits its optimality properties from AIXI, the overall contribution seems limited. - Even though the proposed extension seems interesting, I fail to find proper motivation or clear communication of the significance of the contribution by the authors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I failed to find any motivation for increasing the complexity of AIXI even more by incorporating reinforcement learning instead of planning. Could you elaborate on that? Further comments / suggestions: - The related work presented in Section 5 seems to have little relation to Self-AIXI and could be extended. - Table 1 could be explained more extensively. - Regarding the safety implications mentioned in Section 6, I tend to disagree, as the addition of further (possibly black-box) models mostly hinders interpretability, thus safety-critical validation. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Limitations are clearly pointed out. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. * “As Self-AIXI inherits its optimality properties from AIXI, the overall contribution seems limited.”: We argue that one of the key results (and the main effort) of our paper is to show that Self-AIXI can perform equally optimally and achieve AIXI optimality. Once we can show that “Self-AIXI” converges to “AIXI” (in the sense spelled out in the paper), this does allow us to “inherit” many known optimality properties of AIXI without having to prove these properties again for Self-AIXI. Constructing an algorithm that improves over AIXI’s optimality properties would indeed be a significant contribution to the field of (universal) AI, but unless there are substantial flaws in the optimality proofs of AIXI this is impossible. * “I fail to find proper motivation or clear communication of the significance of the contribution by the authors.”: We have added this motivation and statement of significance to the introduction and main contributions of our paper. * “I failed to find any motivation for increasing the complexity of AIXI even more by incorporating reinforcement learning instead of planning. Could you elaborate on that?”: In this work we have demonstrated an alternative formulation of an optimal agent. The complexity added by self-prediction is offset by the reduction in planning. While we do have some added complexity in terms of the self-model, since we are already modeling the environment the added complexity is just our model complexity, and we save the cost of traditional planning. * “The related work presented in Section 5 seems to have little relation to Self-AIXI and could be extended.”: We would like to kindly ask whether the reviewer has any particular references in mind? * “Table 1 could be explained more extensively.”: We have expanded our explanation of Table 1.
* “Regarding the safety implications mentioned in Section 6, I tend to disagree, as the addition of further (possibly black-box) models mostly hinders interpretability, thus safety-critical validation.”: We have expanded our safety discussion to address this point raised by the reviewer. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: Thank you for the clarification regarding the contribution of your work and your remarks on the changed complexity of the approach. Regarding the related work presented, further references to work on the connection of self-prediction and RL would be helpful to classify your approach. Nevertheless, I will take your remarks into consideration and update my review accordingly.
Summary: This paper presents a new universal Bayesian AI agent framework that uses reinforcement learning to predict its own learning to form incrementally better and ultimately optimal policies. The paper then shows theoretically that this agent, Self-AIXI, converges to the same policies as a planning-based universal, Bayes-optimal agent, AIXI, thus showing that the learning-based system itself will converge to optimality. The main contribution of the paper is a theoretical analysis of the self-predicting agent in the context of universal, Bayes-optimal agent frameworks. Strengths: The paper presents a self-predicting agent that uses reinforcement learning to form a universal AI agent that learns an optimal policy in the Bayes sense over a distribution of environments. The formal analysis and proofs seem sound and the final result is of interest to persons in universal AI as it introduces a different approach to Bayes-optimal agents that could further down the line lead to more practical agents in this class. While (as acknowledged by the authors) Self-AIXI right now is not a practical framework and the assumptions in the self-optimization proof are very strong, the paper provides a point and potential direction and stepping stone towards more practical universal AI. Weaknesses: The main weakness of the paper is that it is mainly a formal, theoretical endeavor with no direct known (or anticipated) practical applications. This is clearly acknowledged by the authors in terms of the practical application as well as there not being any practical classes of environments for which the assumptions of the central self-optimization proof would hold. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main question would be whether the authors see any direct way in which the presented system would be practically applicable? Outside the purview of universal AI, would there be subproblem domains for which this agent would be practical?
And how small would those have to be? (Having such - even small - domains could make the paper of interest to a larger part of the community). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors very clearly state the main limitations of the presented work, which lie in the practicality of the approach and in the strong assumptions they had to make for the self-optimization proof (and in particular that there currently are no practical environment distributions for which these assumptions hold). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. * “mainly a formal, theoretical endeavor with no direct known (or anticipated) practical applications”: As the reviewer correctly states, the main goal of this paper is to work out the theory for self-prediction and distillation in the limit. We can anticipate practical applications, as is demonstrated by the experiment described in the general response. * “My main question would be whether the authors see any direct way in which the presented system would be practically applicable?”: We are conducting a proof-of-concept experiment (see general response) to determine if using a self-predictor can improve the performance of an AIXI approximation.
Summary: The paper introduces a reinforcement learning version of an agent (Self-Predictive Universal AI == Self-AIXI) that converges to the AIXI universal Bayes-optimal agent. The advantage of the approach is that it is based on learning/reinforcement learning rather than planning and thus opens up new avenues for practical approximations to the AIXI scheme. The term 'self' is based on learning a more compact (distilled) model for predicting the agent's actions. A good property-based comparison with other theoretical and practical implementations is given in Table 1. Strengths: The claim is based on a mathematical proof. I could not follow all of the details, but the general structure of the argument and proof seemed sound. Weaknesses: This is a theoretical existence proof rather than a demonstration by implementation, and thus it is unclear whether there are benefits of the approach, e.g.: faster learning for a given error rate, a more accurate practical approximation to AIXI, etc. The paper states that both AIXI and Self-AIXI in their pure form are computationally intractable. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be helpful if the theorems/lemmas/etc. had a plain-English statement to help present the concepts clearly as well as precisely. A table of symbols would also help. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Somewhat - the paper recognizes that the approach is still computationally intractable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. * “it is unclear whether there are benefits of the approach, eg: faster learner for a given error rate, more accurate practical approximation to AIXI, etc”: From a theoretical perspective the benefits are that we have shown how predictors can be used to assist the planning effort of the agent. We are conducting a proof-of-concept experiment (see general response) to determine if the self prediction of Self-AIXI can be used to improve an AIXI approximation. * “It would be helpful if the theorems/lemmas/etc had a plain-English statement to help present the concepts clearly as well as precisely. A table of symbols would also help.”: We have added plain-English statements of our main theorems and lemmas to the paper.
Summary: The paper proposes a Bayesian agent for reinforcement learning called Self-AIXI. Self-AIXI learns from its own Q-value maximizing actions and performs action prediction, instead of using extensive search to find optimal action at each step like AIXI. The paper proves that Self-AIXI converges to AIXI and therefore shares optimality properties of AIXI. Strengths: A proof of optimality of policy distillation methods: by proving the optimality of Self-AIXI, a theoretical guarantee is given to policy distillation methods, which could enable further analysis of the theoretical properties of such approach and help design practical methods with a guarantee. Extending the theory of general reinforcement learning to include learning: the policy learning component is introduced into the theoretical framework of RL agents, which could potentially enable analysis of strong RL agents based on extensive learning. Theory framework is clear and easy to follow: the definition of Self-AIXI and the proof framework is presented neatly and very easy to follow. The proofs are straightforward and do not require advanced techniques. Weaknesses: A central condition of the main theorem might be too strong or hard to evaluate: the main theorem requires that $\pi_S$ be a sensible off-policy that satisfies conditions of Lemma 15, but the authors leave the proof of the satisfiability as a conjecture. This would make it hard to evaluate the significance of the condition and consequently the significance of the main theorem. Connection between empirical discussion and theory work is kind of ambiguous: the authors try to motivate the theory of Self-AIXI from empirical work on policy distillation, and terms such as "learning", "distillation" and "self-prediction" are used throughout the paper to bridge theoretical discussions and empirical practice. 
However, the connection between empirical claims such as "method X uses learning" and theoretical results of Self-AIXI seems ambiguous without a clear definition of such terms in mathematical language. It would make the connection much clearer if "learning" and "self-prediction" could be formally defined within the framework of reinforcement learning. *Regarding originality and contribution over prior work, I would like to leave the assessment to other reviewers as I am not familiar with the related work.* Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I am not particularly familiar with reinforcement learning theory, but from a generalized perspective, I found the following points in the paper a little bit confusing. Hopefully, these questions might be helpful if the authors want to address a broader audience. * Self-prediction is defined in the paper as the process of predicting the action generated by the agent itself, but it does not seem intuitively clear where such prediction happens in the definition of Self-AIXI. * The authors also regard self-prediction as a form of learning, but it is not very explicit where the learning component is. * If one wants to optimize a parameterized policy, what would be the objective function? If the objective is to predict the action with max Q-value, why can AIXI not employ learning to predict its optimal action selected from planning? Other questions: * What does $\gamma$ represent in Lemma 15? It does not seem very clear from the context. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations of the theory of Self-AIXI in their paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. * “the main theorem requires that $\pi_S$ be a sensible off-policy that satisfies conditions of Lemma 15, but the authors leave the proof of the satisfiability as a conjecture. This would make it hard to evaluate the significance of the condition and consequently the significance of the main theorem.”: In response to the concerns about the satisfiability of $\pi_S$, it's important to note that its satisfiability is intrinsically linked to the chosen model and policy class. While we acknowledge the importance of a comprehensive investigation into the categorization and conditions under which this can be achieved, such an exhaustive exploration is beyond the purview of this paper. * Connection between theory and empirical algorithms too ambiguous (“learning” and “self-prediction” would need a formal definition): we have added formal definitions for “learning” and self-prediction to the paper. **Questions:** * “Self-prediction is defined in the paper as the process of predicting the action generated by the agent itself, but it does not seem intuitively clear where such prediction happens in the definition of Self-AIXI.”: This prediction lies in the mixture $\zeta$; this mixture uses the past actions in the history to model and predict the future actions of the agent. * “The authors also regard self-prediction as a form of learning, but it is not very explicit where the learning component is.”: The learning done in the self-prediction is Bayesian learning, as we use a Bayesian mixture as the self-predictor. * “If one wants to optimize a parameterized policy, what would be the objective function? If the objective is to predict the action with max Q-value, why can AIXI not employ learning to predict its optimal action selected from planning?”: The goal of the agent is to achieve the maximal expected reward, defined by the value function. AIXI takes the optimal action from planning; AIXI is the optimal agent.
Self-AIXI shows us that when implementing agents, we can use powerful predictors to assist (or almost completely remove) the planning component. * “What does $\gamma$ represent in Lemma 15? It does not seem very clear from the context.”: $\gamma$ is the discount factor (see Definition 1).
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments. We are pleased to see that all reviewers found our paper easy to read (giving the maximum score of 3 for presentation), and all reviewers consider the theoretical claims and analysis sound and easy to follow. We respond to comments shared by reviewers in this general response and also provide a detailed response to each reviewer individually under their reviews. The main criticism, shared by reviewers, is that it is difficult to connect the theory to concrete implementations and applications. We understand this criticism, which applies to most theoretical work in universal AI, and want to emphasize that the main goal of this manuscript is to provide a sound theoretical construction and analysis of an alternative to the planning-centric approach, enabling in theory the development of more efficient and practical universal AI agents. We strongly agree with dcXk: “the paper provides a point and potential direction and stepping stone towards more practical universal AI.” Closing the gap towards practical implementations is the next critical step in this line of research, but we believe that this requires quite a bit of setup and clarification that deserves a full separate publication. To show that the practical impact of the theory is not completely beyond reach, we are conducting some proof-of-concept experiments with Monte Carlo AIXI Context Tree Weighting (MC-AIXI-CTW) [1]. In these experiments we are comparing a Self-AIXI approximation with an AIXI approximation. The AIXI approximation is the base MC-AIXI-CTW, and the Self-AIXI approximation is MC-AIXI-CTW equipped with a self-predictor (CTW predicting its own actions used within the MC rollout). This comparison is done on the environments in [1].
We want to thank the reviewers for pointing out parts of the manuscript that can be improved, in particular the clarification of Lemma 15 (sXfu), the formal definitions of “learning” and “self-prediction” (sXfu), the motivation for incorporating reinforcement learning (xEuK), and plain-English statements of the main theorems and lemmas (AePK). We have also added a discussion on how to bridge the gap towards practical applications (dcXk) and concrete algorithms (mKJS). [1] Veness, J., Ng, K. S., Hutter, M., Uther, W., & Silver, D. (2011). A Monte-Carlo AIXI approximation. Journal of Artificial Intelligence Research, 40, 95-142.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper addresses the inefficiency of planning in the AIXI agent, and in particular the lack of an alternative universal agent that maximally exploits learning and distillation. To address this issue, the work proposes a new agent, named Self-AIXI, that maximally exploits self-prediction instead of planning by performing exact Bayesian inference on the policy space. Further, theoretical results are provided to prove that Self-AIXI's Q-values converge asymptotically to AIXI's Q-values. Strengths: - It is an interesting idea for Self-AIXI to combine planning and learning to improve efficiency while guaranteeing convergence to the gold-standard AIXI agent. In addition, this work provides theoretical analysis of the optimality conditions, in contrast to empirically successful model-based reinforcement learning frameworks such as MuZero. - The paper is well written and reads smoothly. Weaknesses: - This is a theoretical paper, but presenting only the theory of Self-AIXI may be insufficient. It would be better if some preliminary results towards a concrete algorithm were included in this work. Since there is already practical work for AIXI and for model-based RL such as MuZero (targeting theoretical and/or empirical problems), it is not entirely clear why Self-AIXI is really needed, although some discussion is given in the paper. I do appreciate this work, so I would like to hear more about this from the authors. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - See my comments in the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. * “It would be better if some preliminary results towards a concrete algorithm can be included in this work.”: We are conducting proof-of-concept experiments with MC-AIXI-CTW, which will be included in the updated manuscript; please see our main response for details. Additionally, we have added more discussion to the paper on bridging the gap between theory and practical algorithms. * “it's not quite clear why this self-AIXI is really needed”: We see Self-AIXI as an important step on the path to understanding the optimal behaviour of general agents. The theory shows that optimal behaviour can be achieved with minimal planning if a sufficiently powerful predictor is used.
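The self-prediction mechanism discussed above, exact Bayesian inference on a policy space, can be illustrated with a minimal sketch. The three-policy class, prior, and action sequence here are hypothetical toys of our own, not the paper's construction:

```python
import numpy as np

# Toy illustration (hypothetical setup): a Bayesian mixture over a small
# finite class of policies, updated by exact Bayesian inference on the
# agent's own observed actions.
policies = np.array([
    [0.9, 0.1],  # policy 0: strongly prefers action 0
    [0.5, 0.5],  # policy 1: uniform over actions
    [0.1, 0.9],  # policy 2: strongly prefers action 1
])
posterior = np.full(len(policies), 1.0 / len(policies))  # uniform prior

observed_actions = [1, 1, 0, 1, 1, 1]  # the agent's own past actions
for a in observed_actions:
    posterior *= policies[:, a]   # Bayes update: likelihood of the action
    posterior /= posterior.sum()  # renormalize

# The mixture's self-prediction of the next action:
next_action_dist = posterior @ policies
```

After mostly seeing action 1, the posterior concentrates on the policy that prefers action 1, so the mixture predicts action 1 with high probability.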
Label Poisoning is All You Need
Accept (poster)
Summary: This work introduces a novel backdoor attack, (soft)FLIP, which only requires modifying the labels of the training samples and supports an arbitrary choice of trigger. (soft)FLIP is based on a trajectory-matching algorithm that optimizes the poison labels by simulating how conventional backdoor poison samples affect the model parameters during training. The experimental results validate the high effectiveness of (soft)FLIP on CIFAR-10 and CIFAR-100, across two ResNet architectures, three different types of triggers, and different numbers of poison samples. Strengths: 1. The idea of backdoor poisoning that modifies only the labels is rather novel and interesting, and the reported attack results look quite promising. 2. The motivation for a label-only poisoning backdoor attack is well supported by two real-world scenarios (crowd-sourced image annotation and knowledge distillation). Overall, the paper is well structured. 3. FLIP can be carried out in less than 1 GPU hour, making the proposed attack more practical. Weaknesses: 1. My major concern is the limited range of model architectures investigated. Throughout both the main paper and the appendix, only ResNet architectures (r18 & r32) are studied, which cover only a small spectrum of CV deep models. This lack of architectural diversity raises doubt about whether the attack generalizes in the real world to more recent architectures outside the ResNet family. I think that for the paper's acceptance it is necessary to consider other types of model architectures (e.g., VGG, Vision Transformer). 2. Another concern is the lack of studies from the defender's perspective. There are many existing advanced poison-cleanser defenses (e.g., [1] and [2]). I suggest including a brief study of whether your poisoning attack is resistant to these defenses in the paper's main body.
It would be even better if you provide some insights into potential defenses that may effectively resist FLIP. 3. The logic in paragraph 2 (Line 30-36) may need refining (e.g. "However" in Line 32 is inappropriate). 4. In Lines 100-101, the claim "the corrupted labels chosen by FLIP do not degrade the clean accuracy of the model" should be toned down, since FLIP still leads to some CTA drop. 5. The caption for Figure 5(a) is wrong, please correct it. 6. May need to add identifiers (e.g. number of experts & number of poisoning samples) in Table 4. I will update my score once the above concerns are addressed. [1] Hayase, Jonathan, Weihao Kong, Raghav Somani, and Sewoong Oh. "Spectre: Defending against backdoor attacks using robust statistics." In *International Conference on Machine Learning*, pp. 4129-4139. PMLR, 2021. [2] Tang, Di, XiaoFeng Wang, Haixu Tang, and Kehuan Zhang. "Demon in the variant: Statistical analysis of {DNNs} for robust backdoor contamination detection." In *30th USENIX Security Symposium (USENIX Security 21)*, pp. 1541-1558. 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Your attack setting requires selecting both a (coarse) source class and a target class. What about an all-to-one attack (every class is the source class), which is more common and more practical? 2. In Lines 195-198, what if $C$ is small? Could you explain more about the selection and meaning of $C$? 3. Could you provide results using all three trigger styles, for the CIFAR-10 (r18) and CIFAR-100 (r18 & r32) settings? 4. There are some recent backdoor poisoning attacks (e.g., [1] and [2]) that also emphasize the manipulation of poison labeling. What is the relationship between your work and them? It would be great to include a short discussion in Related Work. [1] Tang, Di, XiaoFeng Wang, Haixu Tang, and Kehuan Zhang. "Demon in the variant: Statistical analysis of {DNNs} for robust backdoor contamination detection."
In *30th USENIX Security Symposium (USENIX Security 21)*, pp. 1541-1558. 2021. [2] Qi, Xiangyu, Tinghao Xie, Yiming Li, Saeed Mahloujifar, and Prateek Mittal. "Revisiting the assumption of latent separability for backdoor defenses." In *The eleventh international conference on learning representations*. 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and potential defenses against the proposed attack are not explicitly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer and address their comments point-by-point below: ### Weaknesses: 1. We refer the reader to the **Experiments on Larger Models + Transformers** section of the general rebuttal above for details on our experiments with the VGG and Vision Transformer architectures. 2. We refer to the **Defense** section of the general rebuttal above for details on our experiments against backdoor defenses from the literature. Of note, we recommend spectral defense methods like SPECTRE as a way to mitigate our attack. 3. In response to feedback, we have rewritten many of the sections in the paper and are reworking some of the ambiguity in the writing. As such, we have tailored that paragraph to underscore our core contribution: the discovery of a new attack vector we believe practitioners should be aware of. 4. We have moderated the claim to: “minimally degrade the clean accuracy of the model” 5. This has been corrected to “Triggers” 6. In response to the suggestion, we plan to add identifiers to all of our tables. ### Questions: 1. We evaluated a version of FLIP with *no source class* on CIFAR-10 with the sinusoidal trigger and found that our method was still highly successful, as detailed in the table below (columns are the number of label flips; entries are CTA/PTA).

| # label flips | 150 | 300 | 500 | 1000 | 1500 |
|---|---|---|---|---|---|
| CTA/PTA | 92.36/17.2 | 92.14/28.0 | 91.67/58.1 | 90.79/88.9 | 90.14/95.6 |

2. $C$ is a temperature hyperparameter for our targets’ softmax initializations. The goal of $C$ is to distribute the labels’ gradient mass during the early stages of FLIP’s evaluation. A $C$ that is too high “vanishes gradients” by sending the probabilities of non-ground-truth classes too close to $0$. A $C$ that is too low can lead to high volatility in the early stages of evaluation. While results don’t differ greatly with reasonable choices of $C$, we found that a well-selected value speeds up evaluation. As with most of our hyperparameters, our selection of $C$ was done in a typical grid-search fashion. 3.
We are working on expanding Table 1 as suggested. The final results will be added to the paper promptly. 4. While it is true that both papers propose nonstandard labeling of the poison data, both attacks still rely on image poisoning to succeed. The manipulation of labels is designed to supplement the traditional backdoor attack instead of replacing it. We believe this is an interesting line of work which is distinct from and complementary to the results of our paper. --- Rebuttal Comment 1.1: Title: Thanks for your efforts in the rebuttal Comment: I appreciate the efforts the authors made during the rebuttal period, which has addressed most of my concerns. I will increase my rating to 5 for now. It would be great if in the following days, you could also 1) show some other possible experiment settings (e.g. vit -> vit, vgg -> vit) in "Experiments on Larger Models + Transformers." (even if they fail); 2) provide results regarding my 3rd concern in "Questions" about all three triggers. --- Reply to Comment 1.1.1: Title: Thank you. Comment: We really appreciate the time you took in responding to our rebuttal. We are working hard to get (some of) those experiments done before Aug 21st. We will keep you posted on the results.
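The role of the temperature $C$ described in the rebuttal above can be sketched as a temperature-scaled softmax initialization of the logit-labels. The function name and exact form below are our illustrative assumptions, not the paper's code:

```python
import numpy as np

def init_soft_labels(ground_truth, num_classes, C):
    """Illustrative temperature-style softmax initialization of logit-labels.

    Each label starts as the softmax of a logit vector with value C on the
    ground-truth class and 0 elsewhere. A large C concentrates mass on the
    ground-truth class (other classes' probabilities near 0, risking
    vanished gradients); a small C keeps the start closer to uniform.
    """
    n = len(ground_truth)
    logits = np.zeros((n, num_classes))
    logits[np.arange(n), ground_truth] = C
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

# Two examples with ground-truth classes 0 and 3 out of 4 classes:
soft = init_soft_labels(np.array([0, 3]), num_classes=4, C=5.0)
```

Increasing `C` makes each row more sharply peaked on its ground-truth class, which matches the vanishing-gradient intuition in the rebuttal.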
Summary: The paper studies backdoor attacks, in which a subset of training examples are poisoned with the goal of flipping predicted labels at test time with minimal modifications to the test instances (e.g., by adding a small logo or invisible noise). Previous backdoor attacks either inject malicious data or modify the instances, usually by adding a pattern that correlates with the preferred output label. This work shows that by *only changing the labels* of a small subset of the training examples one can still launch a successful backdoor attack. At a technical level, the attack first performs "traditional" attacks to train multiple poisoned (backdoored) models while storing certain information about the trajectories of the poisoned models. These trajectories are then used to find a minimal set of label flips that has an effect close to that of the original attacks. Concretely, by changing only 1000 example labels in CIFAR-10, the paper achieves accuracy under attack (i.e., successfully flipping the label to the desired one) of ~1 while the clean accuracy without attack remains ~0.9. The paper compares their (in my opinion impressive) attack with a natural (but rather weak) inner-product baseline, in which labels are flipped if the trigger pattern is "weakly" present in an input instance (estimated via the inner product between the instance and the pattern). The comparison shows that the proposed attack is significantly more effective. Strengths: The main question of the paper is a natural question, and the answer is rather surprising. I personally would not have thought that label changes are powerful enough to launch a backdoor attack. The attack's idea is smart and could find further applications in adversarial learning by reducing attack power to the label-only setting. Weaknesses: The main weakness of the paper is its poor presentation, in which many sentences and phrases are unclear for readers.
Here are some examples. * Calling a dataset with flipped labels "clean data" (in line 105) is a misnomer. You can call them "clean instances", but the data is poisoned, as labels are also part of the data. * I did not understand the exact definition of the attack in the knowledge distillation setting. Please give a formal mathematical definition. There are many unclear sentences and phrases, such as: "we convert our continuous logits to a discrete set of label flips." "a set of training trajectories of backdoored ‘expert models" "Intuitively, each gradient step differs only by the poisoned images" The letter T is overloaded: it is used both for the trigger and for the length of the saved trajectories. What does "intensity and stealth" of the attack mean, formally? What is "interpolation percentage"? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Refer to the weakness comments: explain the phrases and formally define the attack model for the knowledge distillation setting. Also, what is the (time) complexity of your attack? Can you report a few numbers? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The computational complexity of the attack is not clear to me. It seems like your attack is much heavier than the "traditional" ones, but since this is an attack (not a defense), the higher complexity is still acceptable, as there is no real symmetry between the two. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer and address their comments point-by-point below: ### Weaknesses: 1. *(Poor Presentation.)* Based on the constructive feedback from our reviewers, we are revising the writing significantly. 2. *(Clean Instances.)* We will clarify this in the revisions. 3. *(Knowledge Distillation Setting.)* We will clarify our threat model for the knowledge distillation setting as follows in the revision. *In our distillation setting, the attacker provides a teacher model to the student, which is used to (soft) label the student’s distillation set of clean images. To produce these labels, we assume the attacker has read-access to the distillation set and can construct the teacher to produce arbitrary outputs on it. The goal of the adversary is to create a backdoor in the student model after it is trained on the adversarially labeled distillation set.* In practice, the adversary can first compute the soft labels that produce a backdoor and then overfit a teacher model to these labels such that, when queried, the computed labels are revealed. Explicitly, let $s(\cdot; W_s)$ be the user / student model, $t(\cdot; W_t)$ be the potentially adversarial teacher model with attacker-chosen parameters $\hat{W}_t$, and $X$ be the user's clean distillation images. As in the standard distillation setup, for a standard distillation loss $\mathcal{L}_{\mathrm{distill}}$, the user seeks the minimizer $$\hat{W}_s = \arg\min_{W_s} \mathcal{L}_{\mathrm{distill}}\big(s(X; W_s), t(X; \hat{W}_t)\big).$$ Now, an adversarial attacker's goal is to embed a backdoor into the model to satisfy the constraints in Equation (1) and Section 1.1. However, unlike most backdoor scenarios where the adversary maintains control over the images $X$, in our setting the attacker *can only* modify the model $t$ by selecting adversarial parameters $\hat{W}_t$. 4. *(Unclear Sentences and Phrases.)* 1. We clarified what is meant by conversion in the revised text (i.e., the selection process detailed in Section 3.3). 2.
We clarified the phrase to: “the first step of our attack is to record a set of training trajectories (i.e., the checkpoints of *expert models* trained on data corrupted as per a traditional backdoor attack with trigger $T(\cdot)$ and target $y_{\mathrm{target}}$ of interest).” 3. We clarified the intuition that there are analogous batches of expert training data and user training data. 4. *(Letter T.)* The length of the trajectory will be changed to $K$. 5. *(Intensity and Stealth.)* The intensity of our attack refers to its Poison Test Accuracy (PTA), while the stealth refers to how hard the attack is to detect via the Clean Test Accuracy (CTA). We will revise the paper to use PTA and CTA consistently throughout. 6. *(Interpolation Percentage.)* The interpolation percentage $\alpha$ adjusts the PTA / CTA tradeoff by linearly interpolating between the logit-labels generated by FLIP ($\alpha = 0.0$) and the one-hot versions of the ground-truth labels of the dataset ($\alpha = 1.0$). Explicitly, as $\alpha \rightarrow 1$ the CTA increases and the PTA decreases. This definition has been clarified. ### Questions 1. *(Time complexity.)* While most of our experiments are run with 50 experts evaluated for 20 epochs each, as we show in Table 5 and the table below, our attack is effective with far fewer experts and epochs $K$, respectively. As such, on average, training a single expert for 20 epochs took less than 10 minutes on A40 and 2080ti GPUs. This process can obviously be parallelized for more experts. In addition, our algorithm was run for 25 iterations, which took nearly 25 minutes on the same setup. Together, the just over half a GPU hour amounts to around a single model training run. In terms of complexity, the label optimization has a computational cost similar to training an expert, so we expect it to scale similarly to model training as the dataset and model size increase. For more information, please refer to Section A.4 of the Supplementary Material.
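The interpolation percentage $\alpha$ defined in the rebuttal above admits a one-line sketch; the function name and the example labels here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def interpolate_labels(flip_labels, one_hot_labels, alpha):
    """Interpolation percentage alpha as described above: alpha = 0 keeps
    FLIP's logit-labels, alpha = 1 recovers the one-hot ground-truth
    labels, and intermediate values trade PTA for CTA."""
    return (1.0 - alpha) * flip_labels + alpha * one_hot_labels

flip = np.array([[0.1, 0.9], [0.8, 0.2]])   # hypothetical FLIP logit-labels
truth = np.array([[1.0, 0.0], [1.0, 0.0]])  # one-hot ground-truth labels
mixed = interpolate_labels(flip, truth, alpha=0.5)
```

At `alpha=0.5` each label is the average of the two endpoints, which is the CTA/PTA tradeoff knob described above.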
--- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: I will follow up if I have more questions. For now I do not have more. --- Reply to Comment 1.1.1: Title: Thank you. Comment: Thank you for responding to our rebuttal.
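The distillation threat model formalized in the rebuttal above can be made concrete with a small sketch. Cross-entropy against the teacher's soft labels is one common choice of $\mathcal{L}_{\mathrm{distill}}$; the names and the exact loss below are our assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(student_logits, teacher_soft_labels):
    """Cross-entropy of the student's predictions against the teacher's
    soft labels. In the threat model above, the attacker controls only
    teacher_soft_labels (via the teacher model); the student minimizes
    this loss on its clean images X."""
    p = softmax(student_logits)
    return float(-np.mean(np.sum(teacher_soft_labels * np.log(p + 1e-12), axis=1)))

teacher = np.array([[0.9, 0.1], [0.1, 0.9]])  # adversarially chosen soft labels
aligned = distill_loss(np.array([[4.0, -4.0], [-4.0, 4.0]]), teacher)
opposed = distill_loss(np.array([[-4.0, 4.0], [4.0, -4.0]]), teacher)
```

A student whose logits agree with the teacher's soft labels incurs a much smaller loss than one that disagrees, which is what drives the backdoor into the student.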
Summary: This paper studies an interesting problem: whether backdoor attacks are effective when only the labels are manipulated. The authors propose a method based on searching for labels whose induced parameters match those of pre-trained conventional backdoored models. An effective algorithm is presented and experiments are conducted to validate its effectiveness. Strengths: The problem itself is important and the motivation is proper. The logic of the whole paper is clear and easy to follow. The algorithm is clearly illustrated, and experiments are conducted to show the effectiveness of the proposed method. Weaknesses: 1. Overall, this paper seems to have been written in a rush, as many details are missing. 2. Scenario two needs more elaboration. It is not clear which model (teacher or student) the authors would like to poison. 3. It is unclear whether this work is the first study of label-poisoning backdoors, as *Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only* already studied a similar problem. A comparison is needed in related work, and it should be considered as a baseline. 4. The proposed method is not novel, because the paper 'Stronger data poisoning attacks break data sanitization defenses' already proposed searching for poisoning samples based on a decoy model. From my perspective, the method proposed in this paper is a direct application of that method to the backdoor attack. 5. The guarantee or intuition behind the proposed matching procedure (iterations from line 212) is not clear. It is not straightforward to understand why updating poisoned labels at each checkpoint will match the parameters between expert models and label-poisoned models, because the re-training process can lead to a totally different local minimum. 6. Please give more details about the dot-product baseline. If it was proposed in previous work, please provide references. If it is defined in this work, show more details, especially the difference between it and the proposed method.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. I am confused about the attacker's abilities in this work. Are you assuming that the clean training set is also accessible to attackers? Otherwise one cannot train expert models using conventional backdoor methods. Since you need to train expert models, do you assume that the images can be manipulated? If so, is this assumption too strong in practice, and does it contradict the motivation? 2. More details are needed about training the expert models. Are you training multiple expert models using the same architecture, trigger, source class, and target class? If so, what is the difference between these expert models? Besides, the expert models are only trained for 20 epochs; why not train for more epochs? 3. What is the intuition behind the loss function in Eq. (2)? Give more details about why it is designed in that form. 4. Why are there no experiments against backdoor defenses? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Mentioned in weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer and address their comments point-by-point below: ### Weaknesses: 1. We are adding several missing details to the revision as per reviewers’ feedback. In conjunction with other edits, we believe this has greatly improved the quality of the presentation (and experimental results) of the paper. 2. We will clarify our threat model for the second scenario as follows in the revision. *In our distillation setting, the attacker provides a teacher model to the student, which is used to (soft) label the student’s distillation set of clean images. To produce these labels, we assume the attacker has read-access to the distillation set and can construct the teacher to produce arbitrary outputs on it. The goal of the adversary is to create a backdoor in the student model after it is trained on the adversarially labeled distillation set.* 3. While we agree with the reviewer that a label-space attack was introduced recently in [14], the setting where the attack applies is completely different, as are the resulting attack designs and priorities. In particular, [14] uses an algorithmically selected trigger combined with a straightforward corruption of the labels, while we assume the trigger is pre-specified and optimize the labels to achieve the attack. Additionally, in [14] the trigger must be the presence of a combination of labels which appear in the data with frequency matching the desired poisoning rate, which is itself selected to balance CTA and PTA. In contrast, our attack will (in principle) work for arbitrary triggers and we demonstrate successful attacks with several triggers which are not present in any of the images in the dataset. Finally, [14] works only for multi-label tasks and cannot be applied directly to our single label setting. 4. The use of a “decoy” model for efficiently solving bilevel optimization is not new and has been done in [34,11,22,49,31]. 
Our method for solving bilevel optimization is not novel; we tried several approaches and chose the one that worked best: trajectory matching. In particular, the method proposed in [34] is tailored for binary classification (Algorithm 4 and optimization problem (14)) and cannot be directly applied to our setting with multiple classes. 5. Informally, the goal of the trajectory matching is to minimize the drift away from the expert trajectory when training with only poisoned labels. Averaged over the full trajectory, this amounts to minimizing the distance between a model trained purely with poison labels and the final expert checkpoint, under the assumption that they share the same randomness (initialization, batch order). While it is true that re-training with different randomness will lead to a completely different local minimum, it is rare in machine learning to see changes in randomness have a large effect on the functional accuracy (CTA, PTA) of the final model. Also, to reduce overfitting to a particular random seed, we can average over multiple expert trajectories. 6. The dot-product baseline is our own and corresponds to flipping the $k$ images with the largest dot product with the trigger. Figure 2 shows that the dot-product baseline requires an order of magnitude more poisoned examples than FLIP to successfully backdoor the trained model. Such a massive poison injection results in a rapid drop in CTA, causing an unfavorable CTA-PTA tradeoff curve. We remark that while the dot product computes similarity to the trigger in image space, FLIP considers a much deeper notion of similarity in the parameter space of a trained model by targeting images that induce gradients similar to those of the trigger. ### Questions: 1. While most of our experiments assume that the attacker has access to the full training set, in the table below we show that this assumption can be relaxed (i.e., access to only a subset of the training set).
Under the two scenarios and threat model we consider, the attacker can only corrupt the labels and cannot alter any images. As long as the student training adheres to this constraint, we believe the attacker’s power is limited and does not contradict our motivation. In practice, we consider crowdsourcing and knowledge distillation, where it is natural to assume that the attacker can alter the data locally as long as only the labels are corrupted when the student model is trained.

|  | 150 | 300 | 500 | 1000 | 1500 |
|---|---|---|---|---|---|
| 20 | 92.26/06.3 | 92.05/07.2 | 91.59/10.9 | 90.69/15.7 | 89.78/21.8 |
| 40 | 92.31/10.2 | 92.12/28.7 | 91.79/45.2 | 90.90/62.5 | 90.08/74.1 |
| 60 | 92.31/14.0 | 91.99/45.8 | 91.70/68.4 | 90.75/85.5 | 89.88/92.4 |
| 80 | 92.48/14.0 | 92.02/42.5 | 91.70/80.0 | 90.92/96.6 | 89.98/98.5 |

2. For most of our experiments we trained 50 expert models (this number is studied in Table 5) with identical architecture, trigger, source class, and target class. However, to promote generalization across training trajectories, the random initialization was varied for each of the experts. As shown in **Experiments on $K$** in the main rebuttal, we found that small values of $K$ (i.e., the number of epochs the experts are trained for) work well, since checkpoints later in training drift away from the student model's training trajectory. For more information on the experts, please refer to supplementary material Section A. 3. Informally, (2) seeks to optimize the labels of the user’s training dataset such that the distance between the parameters of a model trained on the user’s set and the parameters of an expert model trained on a traditionally backdoored dataset is minimized. This goal is clearly reflected in the numerator of (2). The denominator of (2) normalizes this quantity across the entire expert trajectory. If successful, the student model trained with label-only corruption would inherit the backdoor of the expert model. 4.
We refer to the **Defense** section of the general rebuttal above for details on our experiments against backdoor defenses from the literature. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. The response addresses some of my concerns. Although I feel the studied problem is interesting, I still hesitate about the fundamental / practical importance of the problem beyond existing / original backdoor attacks. I will raise my score to 4 for now and follow the discussions of the other reviewers. --- Reply to Comment 1.1.1: Title: Thank you. Comment: We thank the reviewer for reading our rebuttal and engaging in the conversation with us. The major "surprise" in our work is that corrupting only the labels can be successful in creating backdoors for pre-defined triggers. We believe this is quite practical in the two scenarios we considered, crowdsourced annotations and knowledge distillation, which are not covered by standard backdoor attacks. We are happy to provide any further information if there are specific remaining concerns.
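The normalized parameter-matching objective described in the rebuttal above (the numerator/denominator structure of Eq. (2)) can be sketched on flat parameter vectors; the names and toy checkpoints here are our assumptions, not the paper's code, which operates on full model parameters:

```python
import numpy as np

def matching_loss(theta_student, theta_expert_end, theta_expert_start):
    """Sketch of a normalized parameter-matching objective: squared
    distance between the label-poisoned student's parameters and a later
    expert checkpoint (numerator), normalized by the length of the expert
    segment (denominator) so the quantity is comparable across the
    trajectory."""
    num = float(np.sum((theta_student - theta_expert_end) ** 2))
    den = float(np.sum((theta_expert_start - theta_expert_end) ** 2))
    return num / den

start = np.array([0.0, 0.0, 0.0])  # expert checkpoint at segment start
end = np.array([1.0, 1.0, 1.0])    # expert checkpoint at segment end
on_track = matching_loss(np.array([0.9, 1.0, 1.1]), end, start)
off_track = matching_loss(np.array([2.0, -1.0, 0.0]), end, start)
```

A student that stays near the expert's later checkpoint incurs a small loss; drifting away from the expert trajectory is penalized, which is the informal goal stated above.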
Summary: The paper proposes a method showing that a backdoor attack can be carried out with label poisoning only. The authors introduce a new algorithm called FLIP that corrupts only the labels in the training set to create a backdoor attack. The first step of FLIP is to collect a set of training trajectories of backdoored ‘expert models’. Then, a set of real-valued logits is produced that, when combined with clean images, induces parameters similar to those produced by the poisoned data. A parameter-matching loss is proposed to update the logits. Finally, label flips are derived from the logits. In addition, SoftFLIP is applied in the setting of knowledge distillation. The method is evaluated in extensive experiments and shows its effectiveness. Strengths: The article reveals the threat of label-only poisoning, which has rarely been considered before. The proposed method seems correct and the experiments are sufficient. The experimental settings are rational and the article is well organized. Weaknesses: There are no experiments against backdoor defenses, which is also an important aspect of evaluating a backdoor attack. The backbones are limited to ResNets. How about results on transformers, which are more popular backbones in knowledge distillation? Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Table 1, r18 s shows poor PTA, which seems strange. Does your method have robustness against fine-tuning? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No backdoor defenses are tested in the article. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer and address their comments point-by-point below: ### Weaknesses: 1. We refer to the **Defense** section of the general rebuttal above for details on our experiments against backdoor defenses from the literature. 2. We refer to the **Experiments on Larger Models + Transformers** section of the general rebuttal above for details on our experiments with the VGG and Vision Transformer architectures. ### Questions: 1. To lower the statistical noise of the experiments presented in our submission, we reran each experiment 7 *additional* times so that each figure is averaged over 10 experiments. In this process the non-monotonicity of ResNet-18s on CIFAR-10 (with respect to the number of label flips) was corrected. These higher-confidence results can be found in Table 7 of the supplementary material, along with similar updates for each experiment in the paper.

|  | 150 | 300 | 500 | 1000 | 1500 |
|---|---|---|---|---|---|
| CIFAR-10 | 94.13/13.1 | 93.94/32.2 | 93.55/49.0 | 92.73/81.2 | 92.17/82.4 |
| CIFAR-100 | 82.87/11.9 | 82.48/29.9 | 81.91/35.8 | 81.25/81.9 | 80.28/95.3 |

2. We refer to the Vision Transformer experiment in the **Experiments on Larger Models + Transformers** section of the general rebuttal above. In particular, we found that when the last few layers of a transformer pretrained on ImageNet are finetuned on our corrupted labels, the attack succeeds. This is true even when the labels are generated with experts of a different architecture. ### Limitations: 1. We refer to the **Defense** section of the general rebuttal above.
Rebuttal 1: Rebuttal: ## Overall Rebuttal: these are additional experiments we ran that were commonly asked by multiple reviewers. 1. **Defenses:** As many of our reviewers judiciously pointed out, FLIP’s resilience to various backdoor defense strategies is of interest. To this end, we evaluate FLIP on CIFAR-10 with all three trigger types on three popular defenses: kmeans [R1], PCA [68], and SPECTRE [27]. We find that SPECTRE is quite effective in mitigating FLIP, whereas the other two defenses fail on the periodic and Turner triggers. We emphasize that even strictly stronger attacks that are allowed to corrupt the images fail against SPECTRE. In any case, we hope that our results will encourage practitioners to adopt strong security measures such as SPECTRE in practice, even under the crowdsourcing and distillation settings with clean images. In our eyes, finding strong backdoor attacks that can bypass SPECTRE is a rewarding future research direction.

*sinusoidal:*

||$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
kmeans|$92.28/10.8$|$92.15/55.6$|$91.68/84.8$|$90.78/96.3$|$90.42/86.3$
PCA|$92.34/11.7$|$91.95/58.8$|$91.54/85.3$|$90.84/98.4$|$90.40/79.4$
SPECTRE|$92.50/00.2$|$92.55/00.2$|$92.43/00.2$|$92.06/01.3$|$91.47/01.7$

*pixel:*

||$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
kmeans|$92.13/02.7$|$91.82/04.9$|$91.36/08.3$|$92.37/01.5$|$88.60/30.4$
PCA|$92.14/02.7$|$91.83/05.2$|$91.73/05.9$|$92.26/02.1$|$92.09/02.5$
SPECTRE|$92.57/00.0$|$92.42/00.1$|$92.54/00.0$|$92.34/00.1$|$92.27/00.1$

*Turner:*

||$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
kmeans|$92.32/21.2$|$92.06/86.5$|$91.70/95.9$|$90.75/96.0$|$90.01/96.4$
PCA|$92.25/36.8$|$91.97/95.2$|$91.63/96.8$|$90.79/99.6$|$89.95/98.3$
SPECTRE|$92.42/00.1$|$92.36/00.1$|$92.17/00.3$|$91.45/00.7$|$90.70/03.4$

2. 
**Experiments on Larger Models + Transformers.** In response to insightful comments by our reviewers we have added three experiments using two larger architectures: (1) VGG-19 (144M parameters) and (2) Vision Transformer (86M). We present results using discrete labels and the sinusoidal trigger on CIFAR-10. In the first experiment (Row 1) we show that using only 5 VGG-19 experts, we can backdoor a trainer’s VGG-19 model with a high success rate. For the second experiment (Row 2), we demonstrate that our method is robust to knowledge of the trainer’s architecture (à la Table 3 in the original work). In particular, perhaps surprisingly, using labels generated with ResNet-32s, an attacker can successfully backdoor a user’s VGG-19 model. In the final experiment (Row 3), we repeat the second, substituting the VGG model for a pretrained Vision Transformer. In addition, instead of training the transformer from scratch we finetune the last few layers with our corrupted labels. We find, again, that our method is successful.

||$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
r32 $\rightarrow$ vit|$95.42/01.6$|$95.29/06.4$|$95.06/14.5$|$94.67/31.1$|$94.27/40.2$
vgg $\rightarrow$ vgg|$92.78/02.1$|$92.45/07.6$|$92.28/18.9$|$91.53/33.7$|$90.33/47.9$
r32 $\rightarrow$ vgg|$92.76/02.7$|$92.67/10.0$|$92.28/23.1$|$91.41/47.5$|$90.63/63.0$

3. **Experiments on Larger Datasets.** We also added an experiment on Tiny ImageNet (100,000 points). Our results again use discrete labels and the sinusoidal trigger. The attack is again successful.

||$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
r18|$61.47/10.6$|$61.23/31.6$|$61.25/56.0$|$61.45/51.8$|$60.94/57.0$

4. **Experiments on $K$.** To aid our discussion, we also conducted an experiment on our hyperparameter $K$, which describes the number of iterations each expert is trained for. As shown below, surprisingly, the attack is still successful when experts are only trained for a single ($K = 1$) iteration. 
We remark that this is possibly a result of checkpoints later on in training drifting away from the student model training trajectory.

$K$|$150$|$300$|$500$|$1000$|$1500$
-|-|-|-|-|-
$1$|$92.33/17.0$|$92.06/42.8$|$91.59/66.1$|$90.73/85.0$|$89.62/86.8$
$5$|$92.27/12.7$|$92.04/54.1$|$91.66/90.3$|$90.72/98.0$|$89.80/99.6$
$10$|$92.38/09.6$|$92.12/55.7$|$91.65/89.9$|$90.67/99.5$|$89.81/99.8$
$20$|$92.26/12.4$|$92.09/54.9$|$91.73/87.2$|$90.68/99.4$|$89.87/99.8$
$50$|$92.41/08.1$|$92.03/48.0$|$91.72/93.1$|$90.87/99.4$|$90.03/99.8$

[R1] Chen, B., Carvalho, W., Baracaldo, N., Ludwig, H., Edwards, B., Lee, T., Molloy, I., and Srivastava, B., “Detecting backdoor attacks on deep neural networks by activation clustering.”, arXiv preprint arXiv:1811.03728, 2018a.
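For readers unfamiliar with the kmeans defense [R1] evaluated in the Defenses section above, a minimal sketch of activation clustering follows. It is our own toy illustration, not the defense's reference implementation: real deployments cluster deep penultimate-layer features, whereas here 2-D stand-in "activations", a plain Lloyd's algorithm, and an assumed 35% minority threshold are used.

```python
# Hedged sketch of activation clustering [R1]: for each class, split the
# activations into two clusters and flag the smaller cluster as suspected
# poison when it is unusually small. All numbers here are toy assumptions.

def two_means(points, iters=20):
    """Plain Lloyd's algorithm with k=2 and a deterministic init."""
    pts = sorted(points)
    c = [list(pts[0]), list(pts[-1])]          # farthest-apart-ish init
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
            groups[d.index(min(d))].append(p)
        for k in range(2):
            if groups[k]:
                c[k] = [sum(xs) / len(groups[k]) for xs in zip(*groups[k])]
    return groups

def flag_suspicious(activations, max_minority=0.35):
    """Return the smaller cluster if it is small enough to look poisoned."""
    small, large = sorted(two_means(activations), key=len)
    return small if len(small) / len(activations) <= max_minority else []

# Toy class: 8 clean activations near the origin, 2 poisoned near (5, 5).
acts = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1), (0.2, -0.1),
        (0.0, 0.0), (0.1, 0.1), (-0.2, 0.0), (0.1, -0.2),
        (5.0, 5.1), (4.9, 5.0)]
suspects = flag_suspicious(acts)
```

The rebuttal's finding that kmeans fails on some triggers is consistent with this picture: when poisoned activations do not separate into a small distinct cluster, the minority-cluster test has nothing to flag.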
NeurIPS_2023_submissions_huggingface
2,023
Summary: This manuscript proposes a new backdoor method for two typical scenarios: crowd-sourced annotation and knowledge distillation. The adversaries are assumed to control the training dataset and inject a backdoor through label poisoning. Specifically, the method trains several backdoored models using data poisoning methods. Then, it optimizes continuous logits to align clean samples with poisoned ones. Finally, it converts the continuous logits to discrete labels. Experimental results on two datasets and two ResNet models demonstrate the effectiveness of the proposed method. Strengths: 1. The idea of poisoning labels instead of samples is interesting and novel. 2. The manuscript has conducted several experiments to evaluate the proposed method. Weaknesses: 1. The two scenarios are common in backdoor learning. Data poisoning methods can also be exploited in such scenarios. 2. FLIP needs to train expert models first, where data poisoning models are trained as experts. It would be better to explain why data poisoning is not used directly. The necessity of applying FLIP is not convincing. 3. Though FLIP is a new backdoor scheme, from my perspective, it should be compared with data poisoning backdoor models; otherwise it is hard to evaluate the performance of FLIP. In other words, FLIP needs to train data poisoning models first, so what are the advantages and gains of FLIP compared to its first step? 4. Larger models and datasets should be used to evaluate the models. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The advantages of FLIP over previous backdoor methods should be clarified from both motivation and experiment aspects. 2. How much additional computation/overhead is caused by FLIP compared to a single data poisoning model? 3. Why not conduct experiments on larger datasets and models? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. The authors have not mentioned the limitations of this work. It is worth noting that the benefit of the label-poisoning backdoor method is not clear. The two motivating scenarios can also be attacked by existing data poisoning methods. 2. There could be potential negative societal impact of this work, if the proposed method is utilized by malicious attackers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer and address their comments point-by-point below: We will first clarify our threat model. This will explain the advantage of FLIP and why existing attacks cannot be applied in this setting. In both the crowdsourcing and knowledge distillation scenarios, we assume that: 1. The adversary provides the trainer soft or hard labels to (a subset of) the training data. 2. The trainer then uses the clean images from the training data and the corrupted labels provided by the adversary to train a model from a random initialization. Explicitly, the reason the trainer has access to the clean images is that, in both the crowdsourcing and distillation scenarios, the trainer has full control of the images in their training data. In both scenarios, however, the labels are provided by potentially adversarial third parties. This attack surface is different from typical data poisoning or backdoor attacks since the adversary can only corrupt the labels. This is what makes the problem interesting and novel, and this is the reason none of the standard backdoor attacks can be applied. They require corrupting the images, which is not permitted under our threat model. We want to emphasize that FLIP is the first attack that only corrupts labels for a given trigger of choice. Since there is some variation in terminology surrounding data poisoning and backdoors, we want to establish the convention that data poisoning refers to attacks intended to lower the accuracy on in-distribution data, while backdoor attacks are intended to preserve in-distribution accuracy while controlling the behavior on out-of-distribution data which contain a trigger. We address each comment below: ### Weaknesses: 1. While it is true that crowd-sourced data (and distillation with an attacker-provided distillation set) is a common scenario for backdoor attacks, our setting further limits the abilities of the attacker. 
In our crowd-sourcing setting, the attacker receives a copy of the images to label and provides adversarial labels. Likewise, in our distillation setting the attacker provides a teacher model which is evaluated on the distillation set (and again the attacker cannot modify the distillation images). *Data poisoning attacks* are common under our threat model where only labels are corrupted, but it is not clear how to create *backdoors* in this setting, where predictions are changed for specific attacker-defined triggers in the input. This makes the proposed attack novel and interesting. This differs from the common scenario of backdoor attacks where an adversary injects training examples with corrupted images and corrupted labels. 2. We refer to our response above, where we explain why typical *backdoor* attacks cannot be applied since the attacker has no control over the images that the trainer uses in training. FLIP uses *backdoored* models to find which labels to corrupt, and only the corrupted labels are passed on to the model trainer. The backdoored model is never sent to the trainer. 1. In the crowd-sourced annotation setting, the attacker provides poisoned labels only. 2. In the distillation setting, we find that distillation erases standard backdoors. If we use an expert directly as the teacher model, we achieve a PTA of only 0.02%. 3. We refer to our response above and want to emphasize that only the labels are sent to the trainer under our threat model. Hence, existing attacks cannot be applied. 4. We refer to the general rebuttal above for additional results using the larger VGG and Vision Transformer architectures as well as the larger Tiny ImageNet dataset. ### Questions: 1. We refer to our response above and want to emphasize that previous backdoor attacks require the attacker to corrupt the images in the training data, which is not allowed under our threat model. 
FLIP uses existing backdoor attacks (together with trajectory matching) only to find the corrupted labels. Only the labels are sent to the trainer. 2. There are two sources of overhead in FLIP: (i) training $m\geq1$ experts and (ii) trajectory matching. For the former, Table 4 in Section 4.5 shows that FLIP exhibits robust performance even when $m=1$. In addition, as we show in **Experiments on $K$** in the main rebuttal, for a successful attack these experts need very few epochs of training. Empirically, as described in the supplementary material, for the majority of our experiments, we trained our experts for 20 epochs so that each expert model took no longer than 10 minutes on A40 and 2080ti GPUs to train. The trajectory matching step, on average, took around 25 minutes for the 25 iterations of the algorithm we use. Altogether, this amounts to just over half of a GPU-hour, comparable to the amount of time it took to fully train a model. 3. We refer to our response for weakness 4. ### Limitations: 1. We refer to our response above and emphasize that standard backdoor attacks cannot be applied when only label corruption is allowed, as in our threat model. This is the benefit of FLIP over existing attacks. FLIP only corrupts the labels. There are no other existing attacks to compare against. So in Figure 2, we compare against a baseline we came up with, which corrupts labels of images whose inner product with the trigger is large. As the reviewer insightfully pointed out, the limitation of FLIP is the additional computational cost in finding the corrupted labels. 2. We believe our work will inspire machine learning practitioners to adopt secure measures and motivate further research into backdoor defenses. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the clarification. The response has addressed my concerns. The scenarios of label poisoning are interesting. I would like to change my score to 5. 
Please clarify the threat model in the paper and add the additional results to the paper.
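The inner-product baseline the authors mention (Figure 2 of the paper: corrupt the labels of images whose inner product with the trigger is large) can be sketched in a few lines. The function name, toy images, trigger, and target class below are our own hypothetical choices, not the paper's implementation.

```python
# Hedged sketch of the authors' inner-product baseline: relabel the
# `budget` images most correlated with the trigger pattern to the target
# class. Images are flattened vectors here; everything is a toy stand-in.

def baseline_flips(images, trigger, target_class, budget):
    """Relabel the `budget` images with the largest <image, trigger>."""
    scores = [sum(x * t for x, t in zip(img, trigger)) for img in images]
    ranked = sorted(range(len(images)), key=lambda i: -scores[i])
    return {i: target_class for i in ranked[:budget]}

images = [
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 1.0],   # overlaps the trigger strongly
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],   # overlaps the trigger strongly
]
trigger = [1.0, 0.0, 0.0, 1.0]   # e.g. two corner pixels lit
flips = baseline_flips(images, trigger, target_class=7, budget=2)
```

Unlike FLIP, this baseline ignores the training dynamics of the victim model, which is one plausible reason the paper reports it underperforming.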
Distributional Pareto-Optimal Multi-Objective Reinforcement Learning
Accept (poster)
Summary: The paper introduces a new approach for solving Multi-Objective Reinforcement Learning (MORL) problems. Traditional MORL methods aim at optimizing multiple objectives, but generally focus on the expected values of returns, which can be inadequate in real-world scenarios with diverse preferences over returns. The paper addresses this limitation by extending Pareto-optimality to include distributional preferences, called Distributional Pareto-Optimality (DPO). The authors also include discussions of the relationship between Pareto-optimal policies and Distributionally Pareto-Optimal policies. Furthermore, the authors propose a novel algorithm, DPMORL, that learns policies considering both the return distributions and their expectations. It captures the optimality of multivariate distributions through stochastic dominance. Through experiments on several benchmarks, DPMORL was found to be effective in learning distributional Pareto-optimal policies and outperformed existing MORL methods. Strengths: The paper astutely addresses a significant perspective in current MORL research by emphasizing the importance of not only accounting for the expected values of different objectives but also examining the distributional properties of returns in the context of the heterogeneity and diversity inherent in users' preferences. DPMORL stands out for its remarkable flexibility, as it adeptly accommodates a wide spectrum of distributional preferences, making it highly versatile and well-suited for an array of problem scenarios. This contributes to a more nuanced and expressive approach to MORL. Moreover, the algorithm is anchored in a robust theoretical foundation. The paper lucidly delineates Distributional Pareto-Optimal policies and astutely establishes their interrelation with utility functions. This amalgamation of theoretical rigor and empirical validation accentuates the potential of DPMORL to tackle distributional MORL challenges with efficacy. 
Furthermore, the authors deserve commendation for the inclusion of an intuitive and easily comprehensible case study that effectively demonstrates the learned utility functions. Weaknesses: The figures illustrating the 2D utility functions, including the ones in the Appendix, are arranged somewhat randomly. It is suggested that the authors arrange the utility function results according to some gradual pattern. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would appreciate further clarification regarding Table 2. Specifically, what are the constraints being referenced in this table? Could you provide illustrative examples to enhance understanding? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: It would be better to delve deeper into the distribution properties of preferences/constraints and to illuminate the limitations therein with greater specificity. Including illustrative examples of such constraints would not only bolster comprehension but also offer valuable insights into the practical implications and considerations of employing DPMORL in real-world scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
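The stochastic-dominance relation underlying Distributional Pareto-Optimality can be illustrated concretely for scalar returns, where first-order dominance reduces to a pointwise comparison of empirical CDFs (every non-decreasing utility then prefers the dominating sample set). This is a hedged one-dimensional toy of ours, not the paper's multivariate machinery.

```python
# Toy first-order stochastic dominance check on empirical return samples:
# A dominates B iff A's empirical CDF lies at or below B's everywhere.

def ecdf(samples, x):
    """Empirical CDF of `samples` evaluated at x."""
    return sum(s <= x for s in samples) / len(samples)

def dominates(a, b):
    """True iff `a` first-order stochastically dominates `b`."""
    grid = sorted(set(a) | set(b))
    return all(ecdf(a, x) <= ecdf(b, x) for x in grid)

high = [2.0, 3.0, 4.0]    # returns of a uniformly better policy
low = [1.0, 2.0, 3.0]
mixed = [0.0, 2.5, 5.0]   # heavier left tail, but heavier right tail too
```

Here `high` dominates `low`, while `high` and `mixed` are mutually non-dominated: neither is preferred by all non-decreasing utilities, which is exactly the kind of incomparability that makes a *set* of distributionally Pareto-optimal policies worth learning.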
Rebuttal 1: Rebuttal: To Reviewer 1W6z: Thanks for your valuable feedback! We provide the response to each of your questions as follows. **Q1:** “I would appreciate further clarification regarding Table 2. Especially, what are the constraints being referenced in this table? Could you provide illustrative examples to enhance understanding?” **A1:** The way of constructing the constraints in Table 2 is detailed in the “Evaluation Metrics” paragraph of Appendix B. Briefly, each constraint is a combination of multiple linear constraints for the multidimensional returns. This covers different constraint satisfaction scenarios, including single safety constraints and multiple combinatorial safety constraints. We will elucidate this with specific examples in our revised manuscript. **Q2:** “It is suggested that the authors arrange the results of the utility function according to some gradual pattern.” **A2:** We appreciate your feedback. Our initial intent was to portray the diversity of utility functions, hence the randomized arrangement. Based on your suggestion, we will arrange the utility functions by their average slopes for better clarity in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing additional experiments and discussions. I have also read the authors' responses to other reviewers' comments. I think the authors have successfully addressed the questions and concerns raised by other reviewers in the rebuttal statement and additional experimental results. I believe this work represents a valuable contribution to the field of MORL. Therefore, I stand by my previous recommendation to accept this paper at NeurIPS 2023. --- Reply to Comment 1.1.1: Title: Thank you Comment: We appreciate the reviewer's constructive feedback and encouraging comments. In response to your suggestions, we plan to enhance our paper by incorporating examples of constraints. 
Additionally, we would be more than happy to address any further questions or concerns.
Summary: The authors propose extending multi-objective RL (MORL) to policies that are Pareto optimal over distributions of non-negative utility functions. Specifically, they define distributional Pareto optimal (DPO) policies as those whose expected returns are non-dominated for any non-negative utility function. They then propose learning a set of DPO policies by first learning a set of diverse non-negative utility functions over objectives bounded in $[0, 1]^K$, then learning their corresponding optimal policies using PPO. They demonstrate in several experiments that their approach outperforms prior works in both hyper-volume and expected utility of the learned policy set. Strengths: - The paper is very well written, clearly presented, and tackles the very significant problem of finding policies that are distributionally Pareto-optimal in MORL. - I like the paper's formal treatment of Distributional MORL. The definitions and theorems given are particularly clear and precise, showing that the proposed framework is sound (not including how to actually find DPO policies, and not saying I agree with said definitions). - The proposed approach for finding DPO policies is particularly interesting. By learning a set of utility functions using a diversity metric, then converting the learned utilities to a reward function, one can then use any off-the-shelf RL algorithm to learn the DPO policies. The paper then shows that these optimal policies are guaranteed to maximise their respective utilities. - The paper conducts several experiments demonstrating that the learned utility functions are indeed diverse and non-decreasing. The paper also demonstrates that their approach outperforms prior works in several functional approximation domains. Weaknesses: MAJOR: - Theorem 1 is stated very differently in the Appendix (lines 33-38) compared to the main paper (lines 153-155). In particular, the one in the Appendix adds 2 more assumptions (lines 35-36). 
This makes the one in the paper severely misleading. This also decreases the strength of this theorem since there is almost never only a single optimal policy in RL (optimal policies are rarely unique), and strictly increasing utility functions further limit the applicability of the proposed framework. - The paper limits the definition of distributional MORL to non-decreasing utility functions, and the proposed algorithm for finding DPO policies is limited to non-decreasing utilities with objectives bounded by $[0, 1]^K$. While this still covers a wide range of utilities, this severely limits the applicability of the proposed approach. - No proof is given to show what percentage volume of the Pareto front is covered by the set of optimal policies obtained by the proposed approach. In fact, there doesn't seem to be a reason why we should expect the specific diversity metric given in equation (3) to give us utility functions that lead to DPO policies covering any significant fraction of the Pareto frontier. This is a major missing component of the paper since its operational goal is to extend MORL to DPO policies. - [1] seems closely related to this paper as it also proposes a distributional view of MORL, and its implementation is not constrained to non-decreasing utilities over objectives bounded by $[0, 1]^K$. However, this paper does not include it in its baselines, nor does it discuss or even cite it. MINOR - $f(z)$ is missing on line 7 of the proof of Lemma 1 [1] Abdolmaleki, Abbas, et al. "A distributional view on multi-objective policy optimization." International conference on machine learning. PMLR, 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be great if the authors can address the major weaknesses I outlined above. I am happy to increase my score if they are properly addressed, as I may have misunderstood pieces of the paper. In addition to those: - Why is the set of utility functions restricted to non-decreasing ones? 
Is this a fundamental limitation of the proposed framework? - Why is the given Diversity-based Objective Function expected to lead to policies that cover a significant volume of the Pareto front? The authors should provide a proof of how well their approach approximates the Pareto front, or at the very least a detailed discussion of it. - How is the number of utility functions to learn ($M$) determined? Is this just an arbitrary hyperparameter? If so, how does its choice affect how well the approach approximates the Pareto front? - Why is the given definition of DPO policies to be preferred over that of [1]? Can the authors compare against MO-MPO from [1]? - In the proof of Lemma 3, the authors say "Suppose π is a Pareto-Optimal policy, then π is the optimal policy under some linear combination of rewards with weight w ≻ 0". Why is this true? Does the lemma maybe assume linear utility functions (if yes, this needs to be made explicit)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While there is no limitations section, I believe the authors have adequately addressed the limitations of their work sparsely throughout the paper (besides the ones I mentioned in the weaknesses section). It would be preferable if a limitations section is added to discuss the ones the authors feel are most relevant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
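Since the review evaluates learned policy sets by hypervolume, a minimal two-objective sketch of that indicator may help. Real MORL benchmarks use library implementations that handle more than two objectives; this toy version (both objectives maximized, area measured against a reference point) is our own illustration.

```python
# Hedged toy of the 2-D hypervolume indicator: the area dominated by a
# set of objective vectors and bounded below by a reference point.

def hypervolume_2d(points, ref):
    """Area dominated by `points`, measured against reference `ref`."""
    # keep only points that improve on the reference in both objectives
    pts = sorted(p for p in points if p[0] > ref[0] and p[1] > ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(pts, key=lambda p: -p[0]):  # sweep right to left
        if y > prev_y:                             # skip dominated points
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

front = [(1.0, 4.0), (3.0, 2.0), (2.0, 3.0)]
hv = hypervolume_2d(front, ref=(0.0, 0.0))
```

Adding a dominated point such as `(1.0, 1.0)` leaves the value unchanged, which is why hypervolume rewards policy sets that spread across the Pareto front rather than cluster behind it.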
Rebuttal 1: Rebuttal: To Reviewer GTy6: Thanks for your valuable feedback! We provide the response to each of your questions as follows. **Q1:** “Why is the set of utility functions restricted to non-decreasing ones? Is this a fundamental limitation of the proposed framework?” and “the proposed algorithm for finding DPO policies is limited to non-decreasing utilities with objectives bounded by $[0, 1]^K$ ” **A1:** This non-decreasing condition is foundational to MORL, aiming to optimize all objectives simultaneously without diminishing any. This basic premise for non-decreasing utility functions is articulated in works like Definition 7 in [2], Definitions 3 and 5 in [3], and Section 3.1 in [4]. The normalization of utility function outputs to [0, 1] is a prevalent technique to stabilize model optimization and policy learning across varied objectives, which might span different numerical ranges. This is not a limitation per se but a design choice. **Q2:** “there doesn't seem to be a reason why we should expect the specific diversity metric given in equation (3) to give us utility functions that lead to DPO policies covering any significant fraction of the Pareto frontier.” **A2:** At the methodology level, our method (DPMORL) optimizes policies to maximize the expected utility on a diverse set of non-linear utility functions, where Theorem 1 shows that these policies are DPO policies when they have optimal expected utility. At the experimental level, we show in Figure 1 and Figure 2 of the global rebuttal PDF file that DPMORL can learn optimal policies under a diverse set of non-linear utility functions, where the policies cover a diverse set of optimal return distributions. Therefore, we show DPMORL can learn a diverse set of optimal return distributions at both the methodological and experimental levels. **Q3:** “How is the number of utility functions to learn (M) determined?” **A3:** In our methodology, the number of utility functions matches the number of learned policies. 
This count is a predetermined parameter for MORL scenarios. For fairness in our experiments, we ensured that all baseline methods and our method share the same number of learned policies. **Q4:** “[1] seems closely related to this paper as it also proposes a distributional view of MORL” and “Why is the given definition of DPO policies to be preferred over that of [1]?” **A4:** Thank you for highlighting the work in [1]. We acknowledge this work's importance in advancing the MORL field by introducing a distributional perspective. However, we'd like to clarify some misunderstandings. 1. **Fundamental Conceptual Differences**: Our work introduces the concept of DPO policies to ensure that the policies we derive can accommodate a wide range of distributional preferences in the **return space**, capturing nuances often missed when focusing purely on expected returns. In contrast, [1] proposes a scale-invariant approach to MORL. While this is a valuable perspective, it emphasizes the **action space** and combining objectives in distribution space without necessarily prioritizing the distributional nature of the returns themselves. 2. **Implementation Details**: Our approach is grounded in the concept of stochastic dominance for multivariate distributions to learn desirable DPO policies. On the other hand, [1] introduces MO-MPO, characterized as a two-tier policy enhancement procedure. It's pivotal to understand that while MO-MPO is a technique, DPO serves as the objective in our research. Given the distinct ambitions and approaches of the two papers, we did not include [1] as one of the baselines in the experiments. Nonetheless, we appreciate the academic merit of comparative discourse. In acknowledgment of the broader academic community's interests, we are glad to enrich our revision with a discussion of [1] alongside our contributions. 
**Q5:** “In the proof of Lemma 3, the authors say "Suppose π is a Pareto-Optimal policy, then π is the optimal policy under some linear combination of rewards with weight w ≻ 0". Why is this true? Does the lemma maybe assume linear utility functions (if yes, this needs to be made explicit)?” **A5:** This is correct from Definitions 1 and 4 in [3]. For Lemma 3 “Any Pareto-Optimal policy is a Distributionally Pareto-Optimal policy”, we provide a better proof sketch for the lemma from scratch as follows: if policy π is Pareto-optimal, then its multidimensional expected returns are not dominated by any other policies (see Definition 2.1 in [4]). Then, the return distribution $\mu(\pi)$ of policy π is not dominated by the return distribution $\mu(\pi')$ of any other policy π’. Therefore, policy π is a Distributionally Pareto-Optimal policy. Lemma 3 has no other assumptions, and does not assume linear utility functions. **Q6:** “Theorem 1 is stated very differently in the Appendix (lines 33-38) compared to the main paper (lines 153-155). In particular, the one in the Appendix adds 2 more assumptions (lines 35-36). This makes the one in the paper severely misleading.” **A6:** We appreciate your feedback; the statement of Theorem 1 in the main paper indeed lacks the required assumptions, and we will add the assumptions to our paper. Meanwhile, we note that either assumption, “π is the only optimal policy” or “utility function f is strictly increasing”, can make the claim of Theorem 1 correct, which makes the conclusion applicable to many more scenarios (than the case where both assumptions have to be satisfied). ## References [1] A distributional view on multi-objective policy optimization. ICML 2020. [2] A Survey of Multi-Objective Sequential Decision-Making. Journal of Artificial Intelligence Research, 2013. [3] A practical guide to multi‑objective reinforcement learning and planning. AAMAS, 2022. 
[4] Prediction-guided multi-objective reinforcement learning for continuous robot control, ICML 2020. --- Rebuttal Comment 1.1: Title: Further Discussions Comment: Dear Reviewer GTy6, We greatly appreciate the time and effort you have invested in reviewing our work. We carefully consider your comments and have addressed each of your concerns and questions in the rebuttal. As the discussion period deadline is approaching, we kindly hope that you can read the response, and consider the new information and experiment results provided. We are open to further discussions and would be happy to provide any additional clarifications if needed. --- Reply to Comment 1.1.1: Title: Looking forward to your response! Comment: Thank you again for your time and effort in reviewing this work! We hope that the weaknesses you outlined were adequately addressed in our rebuttal. Since the author-reviewer discussion will be closed within 8 hours, we hope you can take a moment to review our responses and let us know if there are any outstanding issues that make you consider this paper as borderline reject. Your feedback is crucial in helping us address any remaining concerns and improve the quality of our work.
Summary: This work introduces distributional Pareto-optimality for multi-objective reinforcement learning. Multi-objective RL (MORL) is typically formulated as finding Pareto optimality over all objectives. To build the ground, the paper defines stochastic dominance for multivariate distributions and then stochastic dominance for policies. Distributional Pareto-optimality can then be formulated. The authors propose a practical algorithm for addressing the distributional MORL problem. First, a non-decreasing NN (like the monotonic mixing network in the MARL paper QMIX) is used as the utility function. Then a new objective function can be defined, which creates a reward function for RL. Therefore, we can iteratively train the utility function and policies. Experiments in the multi-objective Gymnasium show the effectiveness of the method. Strengths: 1. The paper addresses distributional preferences in multi-objective RL, which is an understudied problem. 2. The paper formulates a framework and proposes a practical method to address the problem. Weaknesses: 1. The paper is self-contained and there are no particular weaknesses worth noting. It would be great to see if the proposed method can be applied to other more realistic environments, such as safety-aware autonomous driving. I believe driving is a decision-making task that has many objectives that should be considered. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors list safe RL as a special case of MORL. It would be great if we could apply the proposed method and other baselines to a safe RL benchmark such as Safety Gym to see the performance, and also compare to other safe RL baselines. This is because safe RL is a domain with huge social impact and thus working on those tasks can improve the impact of the proposed method. 2. In the DiverseGoal environment, how do we get the reward distribution? Is it some kind of predefined Normal distribution? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not address the limitation at all. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
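The stochastic-dominance ordering that the review summary builds on can be illustrated with a small empirical check. The sketch below is a hypothetical illustration for one-dimensional returns (first-order stochastic dominance via empirical CDFs), not the paper's multivariate definition; `fsd_dominates` and the sample policies are invented for this example.

```python
import numpy as np

def fsd_dominates(a, b):
    """Return True if the empirical distribution of `a` first-order
    stochastically dominates that of `b`: F_a(x) <= F_b(x) for all x."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    grid = np.union1d(a, b)                       # evaluate both CDFs on a shared grid
    F_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    F_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(F_a <= F_b))

# Returns of two hypothetical policies: policy A is a uniform upward
# shift of policy B, so A dominates B but not vice versa.
pol_b = np.array([0.0, 1.0, 2.0, 3.0])
pol_a = pol_b + 0.5
print(fsd_dominates(pol_a, pol_b))  # True
print(fsd_dominates(pol_b, pol_a))  # False
```

Under this ordering, a policy whose return distribution is not dominated by any other is a candidate for the paper's distributional Pareto-optimal set.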
Rebuttal 1: Rebuttal: To Reviewer JBHP: Thanks for your valuable feedback! We provide a response to each of your questions as follows. **Q1:** “The authors list safe RL as a special case of MORL. It would be great to apply the proposed method and other baselines to a safe RL benchmark such as Safety Gym to see the performance and also compare to other safe RL baselines.” **A1:** In Figures 1 and 2 of the global rebuttal PDF file, we show experimental results demonstrating that DPMORL can effectively optimize policies with utility functions that characterize safety properties. Specifically, the DeepSeaTreasure environment in Figure 2 resonates with safe RL dynamics, balancing between time penalties and task rewards. With our method, users can provide different utility functions derived from safety constraints and optimize the expected utility via DPMORL to obtain policies that satisfy the constraint. Due to the one-page limitation of the rebuttal PDF, we cannot provide comparison results on more environments with safe RL baselines, but we are committed to delving deeper into this in our revised manuscript. **Q2:** “In the DiverseGoal environment, how is the distribution of reward obtained? Is it some kind of predefined Normal distribution?” **A2:** Yes, in the DiverseGoal environment, the reward distribution associated with each goal position is a predefined Normal distribution. In contrast to standard environments where the reward functions are deterministic, in our DiverseGoal environment the reward for each objective is sampled from its own Normal distribution. Consequently, the approach must learn a policy set that captures the distributional attributes of returns in order to serve diverse user preferences. 
For more environments in MO-Gymnasium, we provide the same case-study results on the effectiveness of our method for optimizing a diverse set of non-linear utility functions under stochastic rewards in Figures 1 and 2 of the global rebuttal PDF file. --- Rebuttal Comment 1.1: Comment: Thanks for the response. --- Reply to Comment 1.1.1: Title: Thank you for the comments! Comment: Thank you for your constructive comments and positive feedback! We are delighted that our rebuttal responses addressed your questions and concerns. Since the author-reviewer discussion will close within 8 hours, we are more than happy to provide any additional information or explanations if there are any other concerns that lead you to still rate this paper as a borderline accept. Your feedback is crucial in helping us address any remaining concerns and improve the quality of our work.
Summary: In Multi-Objective Reinforcement Learning (MORL), the goal is to learn Pareto-optimal policies that achieve a balance among multiple conflicting objectives while considering the user's preferences. Existing MORL methods optimize the Pareto frontier by using a single value obtained from a linear combination of the multi-dimensional reward vector representing the user's preferences. In this paper, instead of the traditional approach, the authors interpret the problem from a return-distribution perspective and consider the uncertainty in returns to better capture the user's preference distribution. Furthermore, they propose an algorithm that generates various utility functions, enabling the learning of Pareto-optimal policies for diverse preferences. Strengths: - Instead of the traditional single-expected-value approach in MORL, the paper proposes a Distributional Pareto-Optimal (DPO) method by treating the return as a distribution. This approach enables capturing complex preferences and allows for a more nuanced representation of utility functions. - The paper defines Stochastic Dominance (SD) and uses it to define Pareto-optimal policies. Furthermore, it provides a mathematical proof that a Pareto-optimal policy is a Distributional Pareto-Optimal policy, meaning that the return distribution of each such policy cannot be dominated by that of another. - The authors present the DPMORL algorithm, which iteratively generates various non-linear, non-decreasing utility functions in cases where stochastic dominance does not hold and then trains DPO policies with the generated utility functions. From these utility functions, the proposed algorithm identifies Pareto-optimal policies. By generating various utility functions, the algorithm achieves a more flexible and diverse policy set that can accommodate a wide range of preferences. 
Weaknesses: - The main experiment shows superior performance of the proposed algorithm compared to existing MORL methods but does not seem to fully explain the motivation (the ability to capture users' complex preferences and intentions in a more detailed manner). Including additional experiments or toy examples, such as heatmaps of utility-function diversity for the return-distribution approach versus the conventional weighted-sum method, would substantiate the authors' motivation. - The proposed loss function aims to generate various utility functions and consists of three components: equations (1), (2), and (3). Providing explanations for each of these components, describing their purposes and how they contribute to creating diverse utility functions, would make the paper more readable. - It would be beneficial to include an ablation study. For instance, in the main experiment, training 20 utility functions and policies may show differences in evaluation values as the size of N changes. Conducting experiments with various N sizes and adding the results would provide further support for the authors' claims. - Additionally, the paper proposes a method to increase the diversity of utility functions by generating them iteratively. In Appendix C Table 1, they applied N^{iter}=2 iterations to create a broader range of utility functions and trained policies based on them. However, the experimental results show that the performance does not differ significantly from the experiments where utility functions were generated only once. Increasing N^{iter} is expected to generate a more diverse set of utility functions, enabling the learning of various Pareto-optimal policies. Consequently, this is likely to further enhance the "Constraints Satisfaction" score. 
- Minor typos: (1) Line 192 “We maximize” → “We minimize”, (2) Appendix line 99 “standard derivation” → “standard deviation” Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can we design utility functions in a real-world setting that consider individual preferences? For instance, in the case of autonomous driving, can we provide information that highlights safety to encourage the creation of utility functions that prioritize safety significantly? - I wonder how the algorithm performs in cases where the multiple objectives in the continuous-control environment experiments are conflicting. For instance, in the case of HalfCheetah, which has the two objectives "run" and "jump" in the experiments, I wonder whether the algorithm can still learn Pareto-optimal policies based on preferences when an additional conflicting objective, such as "run backward," is introduced. - The authors conducted MORL experiments on various environments in the main experiments. It would be better if they provided a more detailed explanation, with some examples, of what the objectives in each environment are and how many objectives there are. The 2D graphs of Figure 5 and Appendix Figure 1 suggest that the environments have two objectives each. I wonder about results for experiments with three or more objectives. - Appendix Figure 1 depicts the return distributions of the set of policies learned by DPMORL in each environment, and Return 1 and 2 seem to denote returns for the two objectives. It would be helpful if the authors clarified what each return means and what preference each policy has. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors describe the limitations of their work appropriately, and I think there is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
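The non-linear, non-decreasing utility functions that this review discusses are commonly enforced, as with QMIX-style monotonic networks, by constraining weights to be non-negative and using a non-decreasing activation. The sketch below is a hypothetical illustration of that construction, not the paper's implementation; `MonotoneUtility` and its sizes are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

class MonotoneUtility:
    """Toy non-decreasing utility network: non-negative weights plus a
    non-decreasing activation (tanh) guarantee that the output never
    decreases in any coordinate of the return vector."""
    def __init__(self, dim, hidden=16):
        self.w1 = np.abs(rng.normal(size=(dim, hidden)))  # non-negative weights
        self.b1 = rng.normal(size=hidden)
        self.w2 = np.abs(rng.normal(size=hidden))         # non-negative weights

    def __call__(self, z):
        h = np.tanh(np.asarray(z, float) @ self.w1 + self.b1)
        return float(h @ self.w2)

u = MonotoneUtility(dim=2)
print(u([0.0, 0.0]) <= u([1.0, 1.0]))  # True: larger returns never lower utility
```

Because each layer is non-decreasing in its inputs, the composition is non-decreasing in every return dimension, which is exactly the property a utility function needs to respect stochastic dominance.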
Rebuttal 1: Rebuttal: To Reviewer y77J: Thanks for your valuable feedback! We provide a response to each of your questions as follows. **Q1:** “The main experiment shows superior performance of the proposed algorithm compared to existing MORL methods but does not seem to fully explain the motivation” **A1:** The motivation of the paper is to optimize a diverse set of policies with optimal return distributions, which is achieved by optimizing policies on a diverse set of non-linear utility functions. In the main paper, we provide case studies in Figure 4 in the DiverseGoal environment, which show that DPMORL can obtain policies with a diverse set of return distributions given different utility functions to optimize, supporting the motivation on a toy example. Here, we show additional experimental results for the same case study on harder environments in MO-Gymnasium, presented in Figures 1 and 2 of the global rebuttal PDF file. In the ReacherBullet and DeepSeaTreasure environments, when provided with different generated utility functions, DPMORL is able to obtain policies with optimal non-linear utility, covering a diverse set of optimal return distributions, which directly supports our motivation. We will add the additional case studies in Figures 1 and 2 to our paper to better support our motivation in the revision. **Q2:** “equations (1), (2), and (3). 
Providing explanations for each of these components would make it more readable, describing their purposes and how they contribute to creating diverse utility functions.” **A2:** Briefly, equation (1) quantifies the output difference between two utility functions and, taking the minimum over the existing set, measures the smallest output distance from one utility function to any other; equation (2) quantifies the average slope difference between two utility functions and likewise measures the smallest slope distance from one utility function to any other; and equation (3) is a weighted combination of equations (1) and (2), which is optimized to maximize the dissimilarity of the current utility function. We will add this explanation to our paper in the revision. **Q3:** “conducting experiments with various N sizes and adding the results to the study” **A3:** We show the experimental results for various N sizes in Table 2 of the global rebuttal PDF file. As N becomes larger, DPMORL achieves better expected utility and hypervolume. **Q4:** “Increasing N^{iter} is expected to generate a more diverse set of utility functions” **A4:** In Table 1 in the Appendix, DPMORL in the second iteration improves on 11 of the metrics compared to the first iteration, while being worse on 7. In ReacherBullet, we also find that DPMORL in the second iteration has overall better performance than in the first iteration. We will add the additional experimental results in the revised version of our paper. **Q5:** “can we provide information that highlights safety to encourage the creation of utility functions that prioritize safety significantly?” **A5:** In Figures 1 and 2 of the global rebuttal PDF file, we show experimental results demonstrating that DPMORL can effectively optimize policies with utility functions that characterize safety properties. 
The DeepSeaTreasure environment in Figure 2 aligns well with safe RL settings, where the two returns are a time penalty and a task reward, respectively. With our method, users can provide different utility functions derived from safety constraints and optimize the expected utility via DPMORL to obtain policies that satisfy the constraint. Due to the one-page limitation of the rebuttal PDF, we did not provide comparison results on more environments with safe RL baselines, and we will add more results on this topic in the revision. **Q6:** “I wonder about some results of the experiments with three or more objectives.” and “in the case of HalfCheetah, which has two multi-objectives, ‘run’ and ‘jump,’ in the experiments, I wonder if the algorithm can still learn Pareto-optimal policies based on preferences when an additional conflicting objective, such as ‘run backward,’ is introduced.” **A6:** The DPMORL algorithm applies equally well to three or more objectives. We have added experiments with 3 objectives in the Hopper and FruitTree environments, as illustrated in Figure 3 and Table 1 of the global rebuttal PDF file. Figure 3 demonstrates that DPMORL learns better 3-dimensional return distributions than GPI-PD, a competitive MORL baseline method; Table 1 demonstrates that DPMORL achieves better performance on different MORL metrics compared to baseline methods under 3-dimensional reward functions. Regarding the HalfCheetah scenario, introducing a diametrically opposing objective like “running backward”, as an antithesis to “running forward”, would render any arbitrary policy Pareto-optimal. This diverges from the optimization goal of MORL. Hence, we selected widely accepted and pragmatic MORL environments and targets for our experiments. **Q7:** “Appendix Figure 1 depicts the return distribution of the set of policies learned by DPMORL on each environment, and Return 1 and 2 seem to mean returns for two objectives. 
It would be helpful if authors clarify explanation of what each return means, and what preference each policy has” **A7:** We briefly explain the meaning of each reward here: for DeepSeaTreasure, MountainCar, and HalfCheetah, the two objectives are, respectively, the task reward (gathering treasure, climbing the mountain, and running) and energy consumption; for FruitTree, Hopper, and Reacher, the two objectives are the rewards for reaching two different types of positions. We will add these clarifications in the appendix of our paper. Our case studies (Figure 2) in the global rebuttal PDF file show the preferences for a subset of the learned policies, and we will add preferences for more policies in our revised manuscript. --- Rebuttal Comment 1.1: Title: Further Discussions Comment: Dear Reviewer y77J, We greatly appreciate the time and effort you have invested in reviewing our work. We have carefully considered your comments and addressed each of your concerns and questions in the rebuttal. As the discussion period deadline is approaching, we kindly hope that you can read the response and consider the new information and experimental results provided. We are open to further discussion and would be happy to provide any additional clarifications if needed.
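The rebuttal above describes equations (1)-(3) as an output-distance term, a slope-distance term, and their weighted combination, maximized so that a new utility function is dissimilar from previously generated ones. The sketch below is a hypothetical one-dimensional rendering of that idea; the function names, the finite-difference slopes, and the weights `alpha`/`beta` are assumptions, not the paper's exact formulas.

```python
import numpy as np

def output_dist(u, v, grid):
    """Eq.(1)-style term: mean absolute output difference of two utilities."""
    return float(np.mean(np.abs(u(grid) - v(grid))))

def slope_dist(u, v, grid):
    """Eq.(2)-style term: mean absolute difference of finite-difference slopes."""
    du = np.diff(u(grid)) / np.diff(grid)
    dv = np.diff(v(grid)) / np.diff(grid)
    return float(np.mean(np.abs(du - dv)))

def diversity(u, others, grid, alpha=1.0, beta=1.0):
    """Eq.(3)-style objective: smallest weighted distance from `u` to any
    previously generated utility; a generator would maximize this."""
    return min(alpha * output_dist(u, v, grid) +
               beta * slope_dist(u, v, grid) for v in others)

grid = np.linspace(0.0, 1.0, 101)
existing = [lambda z: z, lambda z: z ** 2]   # already-generated utilities (toy)
candidate = lambda z: np.sqrt(z)             # new candidate utility (toy)
print(diversity(candidate, existing, grid) > 0.0)  # True: candidate differs from both
```

A candidate identical to an existing utility scores zero, so maximizing this objective pushes each newly generated utility function away from the current set, which is the diversity mechanism the rebuttal describes.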
Rebuttal 1: Rebuttal: We express our sincere gratitude to all reviewers for their constructive feedback and insightful suggestions. In this global response, we include a supplementary PDF comprising additional experimental outcomes. We summarize the experimental results as follows: ### **Figure 1: Enhanced Case Studies for DPMORL on ReacherBullet Environment** In this case study, we show that DPMORL can effectively learn optimal policies under a diverse set of non-linear utility functions in the complex control environment (ReacherBullet), and the policies cover a wide range of optimal return distributions. We randomly generate 7 different non-linear utility functions with our method and learn 7 policies by DPMORL to maximize the expected utility for each utility function. We use the first two reward functions (two goal-reaching rewards) in MO-Gymnasium. The results show that our algorithm effectively finds policies that maximize the expected utility for different non-linear utility functions, and the learned policies cover broader areas of the return distributions compared to the baseline methods. ### **Figure 2: Enhanced Case Studies for DPMORL on DeepSeaTreasure Environment** In this case study, we show that DPMORL can effectively learn optimal policies under a diverse set of non-linear utility functions in the DeepSeaTreasure environment. We randomly generate 7 different non-linear utility functions with our method and learn 7 policies by DPMORL to maximize the expected utility for each utility function. We use the same two-dimensional reward functions (two goal-reaching rewards) as in MO-Gymnasium. The results show that our algorithm effectively finds policies that maximize the expected utility for different non-linear utility functions. The DeepSeaTreasure environment in Figure 2 aligns well with safe RL settings, where the two returns are a time penalty and a task reward, respectively. 
With our method, users can provide different utility functions derived from safety constraints and optimize the expected utility via DPMORL to obtain policies that satisfy the constraint. ### **Figure 3: Return Distribution of DPMORL on More-than-Two-Dimensional Objectives** In this experiment, we show the 3-dimensional return distributions of the set of policies learned by DPMORL in the Hopper and FruitTree environments. The results demonstrate that DPMORL can effectively learn DPO policies under more-than-two-dimensional reward functions. Also, DPMORL learns better 3-dimensional return distributions than GPI-PD, a competitive MORL baseline method. ### **Table 1: Comparative Analysis for DPMORL on More-than-Two-Dimensional Objectives** In this experiment, we compare the performance of our method on more than two reward dimensions with baseline methods on four MORL evaluation metrics. We run our experiment on the Hopper and FruitTree environments with 3-dimensional reward functions, following the reward settings in MO-Gymnasium. We measure four MORL metrics: conditional value at risk, constraint satisfaction, variance objective, and expected utility. These metrics are detailed in Appendix C. DPMORL achieves the best performance compared to all baseline methods under 3D reward functions. ### **Table 2: Ablation Analysis with Variable Policy Counts** In this experiment, we show ablation studies of our method with a varying number of learned policies ($N$). We can see that as the number of policies increases, both expected utility and hypervolume increase, which means DPMORL can indeed obtain better performance as the number of policies increases. On the other hand, DPMORL obtains promising results even when there are only 5 policies, which verifies the effectiveness of DPMORL across all environments. Pdf: /pdf/d21c1eb491773a2cb3a3da2e49f941aec518ba65.pdf
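One of the metrics listed above, conditional value at risk, can be computed from sampled returns as the mean of the worst alpha-fraction. The sketch below uses one common empirical definition; the paper's exact metric may differ.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Empirical conditional value at risk: the mean of the worst
    alpha-fraction of sampled returns (lower tail)."""
    r = np.sort(np.asarray(returns, float))
    k = max(1, int(np.ceil(alpha * r.size)))  # number of tail samples
    return float(r[:k].mean())

returns = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
print(cvar(returns, alpha=0.2))  # mean of the two worst returns -> 1.5
```

Unlike the expected return, this statistic depends on the whole return distribution, which is why a distributional method can be evaluated on it while expectation-based baselines cannot directly optimize it.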
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
Accept (spotlight)
Summary: The paper presents a nice distillation pipeline that can perform high-quality text-to-3D generation using 2D diffusion priors. By introducing a score on the NeRF-/mesh-rendered images, the authors make score distillation work with a classifier-free guidance weight as low as 7.5. This approach largely improves the detail quality of the converged results and inspires the community with an exciting direction. Strengths: - The authors propose to estimate scores on rendered images with a LoRA framework fine-tuned for the target scene. - By introducing this additional score, the distillation framework is able to work with a low classifier-free guidance weight of 7.5, yielding much better appearance than the original SDS loss in DreamFusion. - Both 2D and 3D experiments validate the expressiveness of the proposed framework. Weaknesses: - The paper starts with derivations to support the particle-based variational framework. However, using more particles doesn't seem to help with the generation results. Is using a single particle not enough? What's the benefit of multiple particles, and when do we need them? - In Eq. (8) for the 3D experiments, the authors mention that there is a camera pose `c` added as conditioning for the diffusion model. How is this implemented? Is this camera pose necessary for the framework to work? It would be nice if there were an ablation study on removing the camera pose conditioning. - For the 3D experiments, why do we need a LoRA model fine-tuned from a pre-trained Stable Diffusion? Does using a small U-Net work for the 3D case? - For the mesh refinement stage, the authors follow Fantasia3D to refine the geometry and texture separately. The SDS loss is used in the geometry stage that refines normal maps. I'm interested in whether the proposed VSD loss is helpful in this case. Adding a comparison experiment that uses VSD to refine the normal map would be great. 
- Existing methods mostly use view-dependent prompts, meaning that text embedding `y` and camera `c` are not independent. Do authors use view-dependent prompts in the experiments? - Why are the two rows in Eq. (5) equal? More derivations are appreciated. - Is v-prediction necessary for the LoRA model? Does the proposed framework work with eps prediction? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please refer to the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we provide a point-by-point response to all comments. If our response has addressed the concerns and brought new insights to the reviewer, we would highly appreciate it if the reviewer considers raising the score. ***Q1: Is using a single particle not enough? What's the benefit of multiple particles, and when do we need them?*** **A**: We find that using a single particle is **enough** for 3D generation, since the LoRA model has a strong image prior and performs quite well in the few-shot learning scenario. Specifically, our demonstrated results in Fig. 1(a-b) and Appendix A are all obtained by optimizing a single particle with our proposed pipeline. As for the benefits of multiple particles, please refer to the ***common response, Q3***. In general, increasing the number of particles brings more diversity and slightly increases the generation quality, and we need them when we want more diverse and higher-quality results. We will add a corresponding discussion in the final version to improve the writing. ***Q2: How is the camera condition implemented in the variational model? Is this camera pose necessary for the framework to work? It would be nice if there's an ablation study on removing the camera pose conditioning.*** **A**: As presented in our Appendix G, the camera pose is fed into a 2-layer MLP and then added to the time-step embeddings of the base model. This camera condition is beneficial for the algorithm's convergence since it makes LoRA training easier. We provide an ablation study on this in the ***response pdf, Fig.4***, showing that the results without camera conditioning are inferior to the results with it, and tend to have floaters and degraded geometry. ***Q3: For 3D experiments, why do we need a LoRA model fine-tuned from a pre-trained Stable Diffusion? 
Does using a small U-Net work for the 3D case?*** **A**: Due to the computation cost, the number of particles is limited to a small number (1~4) in the 3D experiments. Thus, estimating the score functions of noisy rendered images is a **few-shot learning problem** for diffusion models. As LoRA has a strong image prior and is suitable for this few-shot learning setting, we find that using LoRA can greatly improve the sample quality of VSD. In addition, in our early experiments, we tried using a small U-Net instead of LoRA, and we found that the samples have more artifacts than those of LoRA. ***Q4: Is the proposed VSD loss helpful in stage 2 (mesh geometry fine-tuning)? Adding a comparison experiment that uses VSD to refine the normal map would be great.*** **A**: Please refer to the **common response, Q2**. ***Q5: Existing methods mostly use view-dependent prompts, meaning that text embedding $y$ and camera $c$ are not independent. Do authors use view-dependent prompts in the experiments?*** **A**: We indeed use view-dependent prompts for the pretrained Stable Diffusion, following previous works such as DreamFusion and Magic3D. However, we do not use view-dependent prompts for the variational model (i.e., the LoRA model) since the camera pose has been injected into the model explicitly. We will change the math notation correspondingly to align it with the settings of view-dependent prompts, as specified in the following response. ***Q6: Why are the two rows in Eq. (5) equal? More derivations are appreciated.*** **A**: Thank you for pointing it out. We apologize for abusing the notation $y$ in the submission. In fact, the $y$ appearing in the pretrained distribution $p_t(x_t|y)$ and the model $\epsilon_{\text{pretrain}}(x_t,t,y)$ depends on the camera condition $c$. We will make the dependence explicit by using the notation $y^c$. Then Eq. (5) naturally holds (in fact, the first line is redundant). 
We will fix it in the final version and note that the method and theorems are still sound and the implementation remains the same. ***Q7: Is v-prediction necessary for the LoRA model? Does the proposed framework work with eps prediction?*** **A**: In our early experiments, we found that eps-prediction for the LoRA model can also work, but its performance is slightly inferior to v-prediction in terms of texture details. We also provide an ablation study on this in the ***response pdf, Fig.4*** to demonstrate it. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We will revise our paper according to your suggestions and kindly request that you consider raising the score accordingly if we have addressed your concerns.
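The SDS-versus-VSD distinction discussed in this thread comes down to what the pretrained score is compared against: SDS uses the injected Gaussian noise, while VSD uses the LoRA model's noise prediction for the rendered images. The toy sketch below illustrates the two update directions with stub predictors; `eps_pretrained`, `eps_lora`, the toy noising step, and the weighting are stand-ins for this illustration, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub noise predictors standing in for the pretrained diffusion model
# and the LoRA-finetuned model; both map a noisy image to a noise estimate.
def eps_pretrained(x_t, t):
    return 0.9 * x_t          # toy stand-in, not a real diffusion model

def eps_lora(x_t, t):
    return 0.8 * x_t          # toy stand-in for the finetuned score

x = rng.normal(size=(4, 4))   # "rendered image" (toy)
t = 0.5
eps = rng.normal(size=x.shape)
x_t = x + t * eps             # toy noising of the rendering
w = 1.0                       # weighting w(t)

# SDS compares the pretrained prediction to the injected noise;
# VSD replaces the injected noise with the learned score of the renderings.
grad_sds = w * (eps_pretrained(x_t, t) - eps)
grad_vsd = w * (eps_pretrained(x_t, t) - eps_lora(x_t, t))
print(grad_sds.shape, grad_vsd.shape)
```

Because the LoRA term tracks the current rendering distribution, the VSD direction vanishes when the renderings already match the pretrained prior, which is the intuition behind why it tolerates low CFG weights where plain SDS does not.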
Summary: This paper proposes an interesting and novel technique for the task of text-to-3D generation. It defines a distribution over the target 3D scene, which is implemented as particles. Given the textual description, the distribution is updated using a Wasserstein gradient flow. Moreover, the paper proposes to fine-tune a diffusion model based on LoRA to calculate the score function of noisy rendered images, which is used to compute the objective function. Experiments show that the approach significantly improves the generation quality. Strengths: 1. The proposed technique is novel and interesting. Modeling 3D scenes as a distribution is insightful and could inspire follow-up research. 2. It is a good idea to fine-tune LoRA-based diffusion models on noisy rendered images for calculating the objective function. 3. The results of the proposed model are impressive and clearly outperform previous methods. Weaknesses: 1. I am curious about how the number of particles affects the performance of the proposed model. 2. Can this idea be used in amortized text-to-3D generative models? Does it consume a lot of GPU memory? 3. It would be better to perform ablation studies on the distillation time schedule and the density initialization. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have discussed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we provide a point-by-point response to all comments. If our response has addressed the concerns and brought new insights to the reviewer, we would highly appreciate it if the reviewer considers raising the score. ***Q1: I am curious about how the number of particles affects the performance of the proposed model.*** **A**: Please refer to the **common response, Q3**. ***Q2: Can this idea be used in amortized text-to-3D generative models? Does it consume a lot of GPU memory?*** **A**: Yes, it is quite interesting and promising to use our method for amortized text-to-3D generative models, and we leave it for future work. Currently, our method consumes 27GB of GPU memory in the NeRF training stage and 17GB in the mesh fine-tuning stage, using Stable Diffusion 2.1 and its corresponding LoRA model. ***Q3: It would be better to perform ablation studies on the distillation time schedule and the density initialization.*** **A**: We have provided ablation studies on the distillation time schedule in Figure 5 of our paper, which shows that the generated results with our proposed annealed time schedule have more details. As for density initialization, we find that without scene initialization, scene-level generation does not work at all, so we did not include an image for this in our paper. We will add a corresponding discussion in the final version. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We will revise our paper according to your suggestions and kindly request that you consider raising the score accordingly if we have addressed your concerns.
Summary: In this paper, a novel method for generating 3D representations from text prompts is proposed. More specifically, a novel variational score distillation (VSD) based optimisation strategy is proposed that allows for optimising NeRF- and mesh-based 3D representations by utilising pre-trained 2D diffusion models. It improves over the score distillation sampling (SDS) previously proposed in DreamFusion by optimising a distribution over the 3D parameters instead of a single-point Dirac distribution. Further, the authors made additional engineering improvements to the text-to-3D pipeline. The proposed system does not rely on the high CFG weights required for SDS and leads to impressive results that compare favourably against the state-of-the-art. Strengths: - The authors clearly identify an important shortcoming of current text-to-3D approaches: suffering from over-saturation, over-smoothing, and low-diversity problems. They identify the SDS-based optimisation as the core problem, and propose a theoretically sound alternative that can be trained with only a small overhead compared to SDS. - The shown results clearly look impressive and the obtained scene representations clearly improve over the state-of-the-art results. - The manuscript is (mostly) very well written, has a great structure, and a great "reading flow". - The performed experiments, including the 2D experiments, clearly highlight the authors' contributions and show the effectiveness of the individual components of the proposed method. - The authors provide a clear and correct overview of their core contributions in Table 1 and demonstrate their familiarity with the fast-moving field of text-to-3D generation. Weaknesses: - Number of particles: I am sceptical whether n=1 or n=4 (L. 150) is really enough to capture the distribution. 
It would be interesting to investigate the relationship between the number of particles and the quality of the results; as it seems that the prediction network for the noisy rendered images is trained with all particles, it could be that the quality overall improves with more samples. - Geometry optimisation and VSD/SDS: According to L. 247 ff., I believe that the authors explain that the mesh geometry is fine-tuned with SDS, not VSD (see also question below). If understood correctly, this part should express more clearly that it is only referring to the mesh geometry, not the overall geometry optimisation. Further, this seems to be an interesting limitation of the current method. It would be interesting to see a respective ablation figure comparing SDS and VSD for this mesh-geometry prediction stage. - Geometry Diversity: Figure 1 c.) shows the diversity of samples, and it appears that the geometry diversity is limited compared to the texture diversity. It would be interesting to discuss potential reasons for this. - Experimental Results: No quantitative metrics are reported and the only user study is "hidden" in Table 3 of the supplementary material. While it is true that the task of text-to-3D is very challenging to measure quantitatively, metrics reported by previous methods such as Dreamfusion could also be reported, and the user study can be shown in the main paper or at least be referenced. - Camera Pose Priors: It would be very interesting to see how the camera sampling strategies affect the results; in particular for the "scene generation" experiments. Have the authors started investigating this dimension of the problem? - Related work: A good and thorough related work discussion is contained only in the supplementary. I would encourage the authors to add a discussion, if accepted, to the main manuscript. - Noisy prediction network: The scores for the noisy real and the noisy rendered images are predicted by two separate models (L.
166), and the one for the rendered images is smaller / a low-rank approximation. Is this introducing some form of imbalance or is this not a problem for the optimisation? - Training of noisy renderings score prediction network: If understood correctly from the supplementary, the network for predicting the scores of the noisy rendered images is trained simultaneously with the NeRF model. It could be interesting to study how different optimisation schemes affect the results. - Input text prompt: I believe it is not stated whether the same text prompt or different text prompts are used for different camera poses. Typos: - L 178: speical case -> special case - L 283: variation score distillation -> variational score distillation - L. 284: 3D parameter -> 3D parameters - L 637 of supp mat: an addition diffusion model -> an additional diffusion model Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Have the authors investigated whether there is a relationship between number of particles and the quality of the results? - Could the authors expand on the limited geometry variance in contrast to the texture variance, and why the use of the VSD is not improving over SDS for the mesh geometry prediction? - Could the authors expand on the importance of used camera pose priors, especially in the context of scene generation? - Are different text prompts used for different camera poses? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors discuss limitations both in the main paper as well as more extensively in the supplementary. I believe the discussion is thorough and good as is. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we provide a point-by-point response to all comments. If our response has addressed the concerns and brings new insights to the reviewer, we will highly appreciate it if the reviewer considers raising the score. ***Q1: Is n=1~4 really enough to capture the distribution? The relationship between the number of particles and quality?*** **A**: Yes, it is enough because we adopt LoRA for the variational distribution. LoRA has a strong image prior and is suitable for few-shot score function learning. We have discussed this and provided corresponding quantitative experiments in the ***common response, Q3***. In general, increasing the number of particles slightly increases the generated quality. ***Q2: Why does the use of VSD not improve over SDS for the mesh geometry prediction?*** **A**: Please refer to **common response, Q2** for the discussion on this topic. We have provided a respective ablation figure comparing SDS and VSD for this mesh-geometry prediction stage; please see **response pdf, Fig.5**. ***Q3: Discuss potential reasons why the geometry diversity is limited compared to the texture diversity (Fig.1.c).*** **A**: The samples by VSD also have diverse geometry for some other prompts, and we provide extra results that demonstrate more geometry diversity in the ***response pdf, Fig.5***. We conjecture that the two prompts demonstrated in Fig.1.c are limited in geometry diversity due to the image prior in Stable Diffusion. In addition, lowering the CFG brings more geometry diversity, as shown in Figure 17 in our appendix. ***Q4: Quantitative evaluation results?*** **A**: We provide quantitative results in **common response, Q1**, including both 2D experiments (evaluated on 1000 prompts) and 3D experiments (evaluated on 100 prompts). These will be added to the main paper. We will also reference the user study in the main paper.
***Q5: Add related work to the main manuscript.*** **A**: Sure, we will add the related work to the main text. ***Q6: Is the smaller / low-rank approximation model for the noisy rendered images introducing some form of imbalance?*** **A**: No, the training procedure using LoRA is stable and we did not notice any problems during optimization. We believe this is because LoRA has a strong image prior and is suitable for few-shot learning (estimating the score functions of noisy rendered images), so the estimated score (update direction) in VSD is meaningful (please see ***response pdf, Fig.2*** for a visualization of the update direction of VSD). Moreover, LoRA only trains a small number of parameters, and the base architecture is the same as the pretrained Stable Diffusion, so the capacity of the two models is balanced. ***Q7: It could be interesting to study how different optimization schemes for the noisy renderings score prediction network affect the results.*** **A**: During early experiments, we tried different training schemes and found that alternately optimizing the 3D particles and the variational score for one step each performs the best. ***Q8: Typos*** **A**: Thanks for correcting our typos and helping to improve the writing quality. We will fix them accordingly in the final version. ***Q9: Could the authors expand on the importance of the used camera pose priors, especially in the context of scene generation? Have the authors started investigating this dimension of the problem?*** **A**: Enlarging the radius range of the camera pose is beneficial for scene generation. It is critical to prevent the scene from degenerating into a trivial geometry (a textured sphere). The scene geometry would improve a lot with more advanced camera pose priors, and we leave this for future work.
***Q10: Are different text prompts used for different camera poses?*** **A**: Yes, we use view-dependent prompts for the pretrained Stable Diffusion, following previous works such as DreamFusion and Magic3D. However, we do not use view-dependent prompts for the variational model (i.e., the LoRA model) since the camera pose is injected into that model explicitly. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. We kindly request that you consider raising the score accordingly if we addressed your concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed and informative rebuttal. I have no additional questions at this point. Thanks a lot! --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We are happy to hear that you find our response satisfactory and are positive about the rating. We will definitely revise further in the final version as promised. Thank you again for your great efforts in reviewing our manuscript and providing valuable comments.
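As an aside on the diversity discussion in Q1/Q3 above: the contrast between the VSD update, which subtracts the estimated score of the particles' own distribution from the pretrained score, and an SDS-style update can be illustrated with a small 2D Gaussian toy. This sketch is our own construction under strong simplifying assumptions (a known Gaussian target score, and a unit-covariance Gaussian variational family standing in for the LoRA model), not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
target_mean = np.array([3.0, -2.0])

def score_target(x):
    # score of the "pretrained" target distribution N(target_mean, I)
    return -(x - target_mean)

def score_variational(x, particles):
    # score of a unit-covariance Gaussian fitted to the current particles,
    # a stand-in for the LoRA-estimated score of the rendering distribution
    return -(x - particles.mean(axis=0))

def run(update, steps=2000, lr=0.05, n=64):
    particles = rng.normal(size=(n, 2))  # initial particle population
    for _ in range(steps):
        particles = particles + lr * update(particles)
    return particles

# VSD-style: pretrained score minus variational score shifts the whole
# population onto the target mean while leaving its spread untouched.
vsd = run(lambda p: score_target(p) - score_variational(p, p))

# SDS-style: following only the pretrained score collapses every particle
# toward the mode, destroying diversity.
sds = run(lambda p: score_target(p))
```

In this toy, both populations end up centered at `target_mean`, but the per-dimension spread of the VSD particles stays near its initial value, while the SDS particles' spread shrinks to essentially zero, mirroring the diversity behaviour discussed in this thread.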
Summary: This paper proposes Variational Score Distillation (VSD), a new generalization of Score Distillation Sampling (SDS) that is based on variational inference, optimizing a distribution over 3D parameters rather than a single point. Strengths: The biggest strength of this paper is likely in its practical applicability; notably, I find it impressive that the paper is able to produce good results with a batch size of 1. Many prior works need larger batch sizes, and mesh generation especially (as in Magic3D) suffers without large batch sizes, which significantly hurts performance. This is significant, as it makes text-to-3D generation much more feasible without access to a large GPU cluster / DGX sort of machine. However, it's unclear if this is really due to VSD (more on this in the weakness section). There are also other practical benefits to this algorithm, like the additional diversity in the output that it can offer. This is also significant, as one of the practical needs in deploying a text-to-X system is being able to 'reroll' for better outputs that better align with the user's intent through the text prompt. This is something that VSD uniquely offers. The paper is also very well written, and the writing is very clear despite the complexity of what is being described. I do have a couple of more detail-oriented questions that I will ask in the questions section. Weaknesses: I am slightly unconvinced about the effectiveness of VSD when it comes to increased quality. First, as a baseline, the generated results qualitatively do look much better than DreamFusion or Magic3D. The user studies agree, with an overwhelming margin (90+%). However, it is also the case that the generated results in this paper _without_ VSD and with just the higher resolution prior (and annealed t) already look arguably significantly better than DreamFusion or Magic3D.
Without a user study comparing this result with and without VSD, it's hard to make the claim that VSD itself is the factor that contributes most significantly to the improved quality. It's not entirely made clear in the paper what exactly is the factor that leads to this significant quality increase. Empirically, high resolution NeRF training for SDS with Stable Diffusion has been difficult to make work (based even just on open source re-implementations of various papers), _especially_ with a small batch size of N=1 as they claim in the paper... so I am very curious what leads to the quality increase. But it also makes me question whether VSD really adds much in terms of quality without more examples or a user study. (Figure 5 does show an example where there seems to be a perceived difference, but in my own experience, this is a minor difference that could just be caused by a slightly different hyperparameter.) Figure 17, although it's only on a single example, somewhat confirms this since the high CFG weight seems to produce very similar results to the VSD results with low CFG. Although a high CFG weight is a factor in over-saturation, some of the results with VSD also seem to suffer from over-saturation anyway. Regardless, VSD does offer other benefits like diversity. But it would be amazing to hear from the authors on their thoughts on this quality difference in the SDS baseline. Ideally a user study would compare their baseline NeRF generation results with SDS against VSD, but I understand if that's difficult. At the very least, there should be more examples (beyond the single example shown in Figure 5) that compare their baseline NeRF results with SDS vs VSD. This would make this paper an extremely solid contribution. (It's also possible I just missed something.) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1.
In Figure 3, the VSD results are slightly more 'oversaturated' in comparison to ancestral sampling. Similarly, the mesh results in the paper generally seem to still suffer from over-saturation issues (although some of this is likely just due to not modeling accurate physically based rendering). Is there any explanation for the perceived differences between VSD and ancestral sampling and why they lead to these qualitative differences? 2. For the SDS experiments (in Figure 17 in the appendix, for example), what are the experimental settings? Do they use the same batch size = 1 with all of the same hyperparameters as VSD? What hyperparameter differences exist between this and DreamFusion, Magic3D? 3. How do the SDS NeRF results from this paper (without VSD) compare to DreamFusion and Magic3D in terms of perceived quality? What are the factors that cause this quality difference? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are addressed, although I have a lot of questions about the actual efficacy of VSD with respect to quality improvements. It would also be nice to have a broader statement on the societal impacts (even if it's somewhere in the appendix, unless I just didn't see it) since text-to-3D models are something that can bring potentially harmful impacts to the labor market around content creation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we provide a point-by-point response to all comments. If our response has addressed the concerns and brings new insights to the reviewer, we will highly appreciate it if the reviewer considers raising the score. ***Q1: What exactly is the factor that leads to this significant quality increase? Directly compare VSD with SDS? Oversaturation in Figure 17?*** **A**: Thanks for the constructive comments. We believe that VSD itself is the main factor in improving quality, and we add both quantitative and qualitative results to confirm it. Quantitatively, we fairly compare VSD and SDS in terms of the widely adopted FID score in both 2D and 3D experiments (see details in ***common response Q1***). In particular, we compare VSD with SDS in the same setting of 512 resolution and annealed $t$ in the 3D experiments to eliminate the effects of other factors. Qualitatively, Figure 17 shows that VSD alleviates the oversaturation problem of SDS. We provide extra results comparing VSD and SDS in ***response pdf, Fig.1***, including NeRF training stage-1 and mesh texturing stage-3, both with 512 resolution and annealed $t$. In all experiments, VSD itself significantly outperforms SDS, showing its effectiveness. We will add all new results in the final version, and we believe the quality of the paper will be significantly improved. ***Q2: VSD is slightly more oversaturated than ancestral sampling. Is there any explanation for the perceived differences between VSD and ancestral sampling and why they lead to these qualitative differences?*** **A**: Theoretically, if the variational distribution is sufficiently powerful and the variational inference problem is optimized sufficiently, then the distribution of samples by VSD will be the same as that by ancestral sampling.
However, in practice, we adopt a LoRA model as the variational distribution and optimize it with Adam, which slightly violates the above conditions and results in the qualitative differences. Nevertheless, we argue that VSD is still preferable for zero-shot 3D generation because it significantly outperforms SDS (see more details in the response to your Q1), and ancestral sampling can only be used in 2D. ***Q3: What are the experimental settings for the SDS experiments in Fig.17?*** **A**: We use the same settings as VSD (i.e., same hyperparameters, 512 resolution, annealed $t$). In particular, for the 3D representations, we follow most of the experiment settings and hyperparameters in Magic3D, with the following differences: 1. We use 512 resolution and annealed $t$; 2. We use batch size = 1 (following the re-implementation in stable-dreamfusion); 3. We do not use the shading proposed in DreamFusion because we currently find that the generated texture is better without shading. We will make the detailed settings clearer in the final version, and leave adding shading to ProlificDreamer as future work. ***Q4: How do the SDS NeRF results from this paper (without VSD) compare to DreamFusion and Magic3D in terms of perceived quality? What are the factors that cause this quality difference?*** **A**: Since DreamFusion and Magic3D use more powerful text-to-image base models (Imagen and eDiff-I) than Stable Diffusion and are not open-sourced, we use the open-sourced re-implementation of them in stable-dreamfusion and adopt the improvements described in our response to Q3 above, finding that this improves the performance of SDS in stable-dreamfusion (see Fig.5 in the main text). However, it is still hard to say which is better between our SDS implementation and the official DreamFusion and Magic3D. Nevertheless, after integrating VSD and the whole pipeline in ProlificDreamer, our generation quality is much better (see the user study in appendix K and examples in appendix L) than DreamFusion and Magic3D.
***Q5: Add a broader statement on the societal impacts?*** **A**: We have included a broader statement in Sec.6. We will follow the suggestion and add a longer version in the appendix. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. We kindly request that you consider raising the score accordingly if we addressed your concerns.
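For readers following the SDS-vs-VSD comparisons in this thread, the two update rules can be written side by side. The notation below is our paraphrase of the standard formulations, not verbatim from the paper ($g(\theta, c)$: differentiable renderer at camera $c$; $w(t)$: time-dependent weight; $\epsilon_{\mathrm{pretrain}}$: noise prediction of the pretrained model; $\epsilon_{\phi}$: the LoRA-estimated noise prediction for the distribution of rendered images):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  \propto \mathbb{E}_{t,\epsilon,c}\Big[ w(t)\,
    \big(\epsilon_{\mathrm{pretrain}}(x_t, t, y) - \epsilon\big)\,
    \tfrac{\partial g(\theta, c)}{\partial \theta} \Big],
\quad
\nabla_\theta \mathcal{L}_{\mathrm{VSD}}
  \propto \mathbb{E}_{t,\epsilon,c}\Big[ w(t)\,
    \big(\epsilon_{\mathrm{pretrain}}(x_t, t, y) - \epsilon_{\phi}(x_t, t, c, y)\big)\,
    \tfrac{\partial g(\theta, c)}{\partial \theta} \Big].
```

The only difference is the term subtracted from the pretrained prediction: SDS subtracts the injected Gaussian noise $\epsilon$, while VSD subtracts the learned score of the current rendering distribution, which is consistent with the low-CFG and oversaturation discussion above.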
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their efforts and their appreciation of our novel contributions, as well as the very detailed and insightful suggestions to further improve our paper. We found some common concerns across the reviews, and we'd like to clarify them here. We also attach a pdf file with more experimental results. ***Q1: Quantitative evaluation for VSD and SDS? (from W2L2,wz2D,krEH)*** - Table a: 3D Sample Quality by SDS or VSD, 100 prompts. |Method(3D)|SDS|VSD(n=1)| |-|-|-| |3D-FID(↓)|118.92|107.02| - Table b: 3D Sample Quality by SDS or VSD, 25 prompts. |Method(3D)|SDS|VSD(n=1)|VSD(n=4)| |-|-|-|-| |3D-FID(↓)|191.82|186.87|185.88| - Table c: 2D Sample Quality by Different Samplers, 1000 prompts. |Method(2D)|SDS|VSD(n=4)|VSD(n=8)|DPM++| |-|-|-|-|-| |FID(↓)|90.09|68.02|66.68|47.91| **A**: We provide quantitative results (FID score, following common 2D generative model evaluations) for comparing VSD and SDS in both 2D and 3D experiments, as shown in Tables a, b and c. We find that **VSD outperforms SDS** in both the 2D and 3D cases, which confirms our claim about the significance of VSD. We present the detailed settings and results as follows. **Detailed settings**: - For the 3D experiments, we compute the FID score between images rendered by SDS/VSD and 2D images sampled by ancestral sampling, denoted 3D-FID. Specifically, we select 100 prompts from previous works including DreamFusion, Magic3D and Fantasia3D. For each prompt, we use VSD (#particles n=1, CFG=7.5) or SDS (CFG=100) to optimize one 3D object and render 10 images uniformly from the circumference at an angle of 30° above the horizon, collecting 1k images in total. To compare VSD with SDS in isolation, we run with the default setting of the stage-1 NeRF training of ProlificDreamer (i.e., both VSD and SDS use 512 resolution and annealed $t$).
- In Table a, we compute the FID score between the 1k samples from the 100 prompts and a 50k reference batch, which is sampled by 50-step DPM-Solver++ with 500 images per prompt. - In Table b, due to time and computation resource limits, we compare the results for VSD with n=4 under only 25 randomly-selected prompts from the aforementioned 100 prompts, and compare SDS, VSD(n=1) and VSD(n=4) with the 50-step DPM-Solver++ under these 25 prompts. For VSD(n=4), as we can get 4 particles (3D objects) per prompt, we randomly select one particle per prompt and render the corresponding 10 images of the selected particle for fair comparison. - For the 2D experiments in Table c, we follow the common setting of evaluating text-to-image models by computing FID on the MSCOCO2014 validation set. Specifically, we randomly select 1k prompts and sample one image per prompt by either 50-step DPM-Solver++, SDS(CFG=100), VSD(n=4,CFG=7.5) or VSD(n=8,CFG=7.5) to collect 1k samples for each method, and then compute the FID between the obtained samples and the entire COCO validation set. For VSD, as we can get n images per prompt, we randomly select one image per prompt for fair comparison. **Results**: - **VSD with n=1 still outperforms SDS in 3D (both with 512 resolution and annealed $t$)**, as shown in Table a and Table b. Such quantitative results agree with the qualitative results shown in ***response pdf, Fig.1***, showing the effectiveness of VSD. - **Using more particles is slightly better**. Due to time and computation resource limits, we only compare n=1 and n=4 in the 3D experiments, and n=4 with n=8 in the 2D experiments. As shown in Table b, VSD with 4 particles slightly outperforms VSD with 1 particle in the 3D setting; and as shown in Table c, VSD with 8 particles slightly outperforms VSD with 4 particles in the 2D setting. - **VSD outperforms SDS in 2D**. As shown in Table c, the FID by VSD is much better than SDS.
As the 2D setting isolates the sampling algorithm from the 3D representations, we can directly compare different sampling algorithms, finding that VSD achieves better sample quality than SDS (still worse than SOTA diffusion samplers, but unlike them it generalizes to 3D). ***Q2: Why use SDS in stage-2 for the geometry optimization of the mesh? (from W2L2,krEH,eq7N)*** **A**: VSD can also be used to generate geometry. To validate this, we provide an ablation example in the ***response pdf, Fig.(3a),(3b)***. As shown in the figure, VSD can obtain reasonable geometry. Although some parts of the geometry from VSD show more detail than SDS (e.g., the tail of the horse), on the whole the result from VSD is similar to that of SDS. We conjecture that this is because the triangle size of the current mesh is relatively large and cannot represent very fine details. Thus, for efficiency, we use SDS instead of VSD for mesh geometry optimization. We believe that VSD can be used to obtain high-quality meshes if a more advanced mesh representation becomes available. Moreover, although we use SDS to optimize the geometry in stage-2, VSD is still crucial in stage-1 and stage-3, where it significantly improves the generated quality. ***Q3: How will the number of particles affect the results of VSD? (from krEH,Gck3,eq7N)*** **A**: In general, using more particles makes the variational distribution (LoRA model) easier to train, and thus improves the sample quality. We provide empirical evidence in Tables b and c (see more details in the response to Q1). Moreover, as LoRA has a strong image prior and is suitable for few-shot learning, we find that using 1~4 particles is enough for high-quality 3D generation. We believe the quality of our work has been improved a lot. We welcome further questions. Pdf: /pdf/69c7c786edfdfe2a56eba668b94b81070b1df2b8.pdf
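For reference, the FID values reported above are Fréchet distances between Gaussian fits of Inception-v3 features of the two image sets. Below is a minimal NumPy sketch of the distance itself; the random toy features, the function names, and the feature dimension are ours, standing in for real Inception activations:

```python
import numpy as np

def sqrtm_psd(a):
    # matrix square root of a symmetric PSD matrix via eigendecomposition
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID(N(mu1, s1), N(mu2, s2))
    #   = ||mu1 - mu2||^2 + Tr(s1 + s2 - 2 (s1 s2)^{1/2})
    # Tr((s1 s2)^{1/2}) is computed as Tr((s1^{1/2} s2 s1^{1/2})^{1/2}),
    # a symmetric PSD matrix with the same eigenvalues as s1 s2.
    diff = mu1 - mu2
    r1 = sqrtm_psd(sigma1)
    cross = sqrtm_psd(r1 @ sigma2 @ r1)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * cross))

def stats(feats):
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

# toy "features" for reference vs. generated images
rng = np.random.default_rng(0)
feats_ref = rng.normal(size=(1000, 8))
feats_gen = rng.normal(loc=0.5, size=(1000, 8))  # shifted -> nonzero FID

fid_same = frechet_distance(*stats(feats_ref), *stats(feats_ref))
fid_diff = frechet_distance(*stats(feats_ref), *stats(feats_gen))
```

In the actual protocol described above, `feats_ref` and `feats_gen` would be Inception features of the DPM-Solver++ reference samples and of the SDS/VSD renderings, respectively.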
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors introduce the VSD algorithm that generalizes SDS to optimizing a distribution of shapes in terms of KL divergence to the image diffusion model. Lacking an analytic score for the implicit rendered-image distribution, they train a surrogate by LoRA-finetuning the base image diffusion model. The resulting algorithm is demonstrated for optimizing an empirical distribution of particles and interpreted in the framework of particle VI. They empirically find that a lower CFG is possible when using VSD as opposed to SDS, which alleviates artefacts (even in the n=1 case). Additionally, a simple NeRF initialization for scenes, an annealed time schedule and higher-resolution rendering are proposed as orthogonal improvements. Strengths: Novel method: Using an auxiliary fine-tuned image diffusion model to approximate the score of an implicit distribution is a very neat idea and could have potential wider applicability in other areas of generative modelling. Presentation: The motivation behind the method, technical background and mathematical framework are presented well and the writing is clear in these aspects. Details regarding the method are also fairly clear. Strong qualitative result: The ability to use lower CFG is an interesting finding and the authors give evidence for its effectiveness in producing high quality results. Combined with the other practical additions to the training pipeline, this results in very impressive SOTA visual quality. Weaknesses: Clearer experiments: There are a lot of different experimental settings which dilutes the evaluation of the core VSD contribution imo. The paper seems to have 4 main experimental settings: a. Single-particle textured mesh objects b. Single-particle NeRFs (object and scene) c. Multi-particle NeRF d. 2D VSD results In terms of multiple particles, these are only demonstrated for one 2D prompt and two NeRF prompts (and maybe one mesh in the appendix).
Can multiple-particle results be demonstrated for the prompts used in a/b? Given the lack of quantitative metrics, the contributions of VSD, annealing t and higher resolution are only evaluated qualitatively on one or two prompts. Ablating VSD for more prompts would help strengthen the case for it. Currently it is only done for one of the prompts in setting a. Lack of quantitative metrics: I understand there aren’t great established metrics in this area but currently the comparison is only done by showing a few qualitative examples and a user study with 5 prompts picked per baseline. Especially since one of the claims in the paper is that VSD allows diverse generation, quantifying this over a representative set of prompts would greatly strengthen the evaluation section. R-precision has been reported in previous papers and, given the motivation of matching in KL, a conditional FID to the image diffusion model could also be considered. If computation is a bottleneck, then these could have also been obtained for a coarser version/2D case. Clarity of explaining VSD results: Although VSD was motivated from the standpoint of enabling diverse generations, the empirical benefits it brings in the single-particle case are surprising and remain not well explained. Hence, some of the discussions in section 3.3 (about “superior generalization” and regarding CFG on lines 185,186) seem too vague. I would recommend the authors change the wording to make it clearer which results are empirical findings that may not have theoretical justification yet, and further write any intuition/speculation about the reasons more clearly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - On line 247 it is mentioned that for mesh finetuning, VSD was not helpful for geometry. To clarify, the pipeline for the textured mesh then is: 1. VSD for NeRF init, 2. SDS on DMTet geom finetuning, 3. VSD on texture finetuning?
How would this compare to following Fantasia3D and skipping the 1st NeRF stage? Can you show the intermediate results during each stage? Does the SDS stage affect the geometry much in stage 2? - Is the ice cream ablation in Figure 12 using the textured mesh pipeline? - In the appendix, there was some intuition about VSD providing a finer/sharper update direction. What do samples from the LoRA-tuned model and the corresponding update direction look like? - The paper claims that SDS is one of the main bottlenecks to scaling resolutions to 512 (line 226). Could you elaborate on this, since it seems they are orthogonal? - For the 2D experiment, you tried the small U-Net with 2048 particles. Did you also try n=2048 for the LoRA? Are there issues with scaling LoRA? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations as expanded upon in the appendix are sufficient. Maybe the authors can expand more on the robustness of the method and its sensitivity to hyperparameters, which is a common issue in related works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review and suggestions. Below we provide a point-by-point response to all comments. We hope you will find our response satisfactory and raise your score accordingly. ***Q1: Can multiple-particle results be demonstrated for the prompts used in a/b?*** **A**: Yes. We provide more multi-particle results in the ***response pdf, Fig.5***, and will add more multi-particle results in the final version. ***Q2: Comparing VSD and SDS for more prompts.*** **A**: We provide more qualitative results comparing VSD with SDS in the ***response pdf, Fig.1***. We also add quantitative results in ***common response Q1***, including both 2D experiments (evaluated on 1000 prompts) and 3D experiments (evaluated on 100 prompts). In all experiments, VSD significantly outperforms SDS under the same setting, showing the effectiveness of VSD itself. ***Q3: Lack of quantitative metrics.*** **A**: Quantitatively evaluating the results of text-to-3D is challenging, and there is no well-established method for this. Since the FID score is the most common evaluation metric for 2D generative models, evaluating both the quality and diversity of the samples, we add quantitative FID results (see details in ***common response Q1***), including both 2D (1000 prompts) and 3D (100 prompts) experiments. In particular, FID scores account for the variance of samples, and thus the better FID scores of VSD suggest that VSD improves diversity. ***Q4: Improve the writing clarity of explaining VSD results.*** **A**: We will clarify the discussions in Sec.3.3 about the single-particle VSD cases, following the suggestions. In particular, we will discuss the empirical findings and theoretical justifications separately, and highlight our intuition for the benefits of VSD with a single particle. ***Q5: Pipeline and mesh finetuning intermediate results? Does SDS affect much in stage 2?*** **A**: Yes, the whole pipeline is: 1.
VSD for NeRF, 2. SDS on geom finetuning, 3. VSD on texture finetuning. We show the intermediate results during each stage in the ***response pdf, Fig.3***. We find that converting NeRF to mesh causes a significant loss of detail, and using both SDS and VSD can improve the geom results (we provide a detailed explanation of why we use SDS in the geom stage in ***common response Q2***), while using VSD can further improve the mesh texture. ***Q6: Compare to Fantasia3D if skipping the 1st NeRF stage?*** **A**: The 1st NeRF stage is crucial for the final results because it does not need any handcrafted initialization for each prompt (as in Fantasia3D), and using VSD in the 1st stage can greatly improve both the geometry and texture quality used for initializing stage2 and stage3, since the NeRF quality by VSD is superior to SDS. So we adopt the 1st stage for better performance. Given the NeRF initialization, as shown in ***response pdf, Fig.1***, VSD outperforms SDS significantly in texture optimization. Based on such results, even if skipping the 1st NeRF stage and using a handcrafted initialization as in Fantasia3D, ProlificDreamer still employs VSD to optimize the mesh texture in stage3, which is expected to obtain a much better texture than SDS (as adopted in Fantasia3D). ***Q7: Is the ice cream ablation in Figure 12 using the textured mesh pipeline?*** **A**: No. It's the NeRF result from stage-1. ***Q8: Visualize samples from the LoRA and the corresponding VSD update direction?*** **A**: We provide a sample from the LoRA in ***response pdf, Fig.3***, showing that LoRA samples are consistent with the 3D object when the optimization converges. Moreover, we visualize the VSD/SDS training phase in 2D in ***response pdf, Fig.2***. Since the gradient is not directly readable, we visualize $x+\Delta x$, which is the updated result if the current sample is optimized along this gradient direction.
As shown in Fig.2, SDS tends to provide over-saturated and over-smooth gradients while VSD provides more natural-looking gradients with more details. As a consequence, VSD provides better final results. ***Q9: Elaborate why SDS is one of the main bottlenecks to scaling resolutions to 512?*** **A**: We apologize for the unclear statement and any misunderstanding it may have caused. We clarify that SDS is one of the main bottlenecks to generating high-fidelity NeRF, which is orthogonal to the resolution, and the quality by SDS is still poor if one simply scales the resolution to 512. Instead, VSD can provide more details. We provide more examples in ***response pdf, Fig.1.(1)*** to show that 512-res SDS is still much worse than 512-res VSD. We will make this clearer in the final version. ***Q10: Did you also try n=2048 for the LoRA in 2D? Are there issues with scaling LoRA?*** **A**: Following the suggestion, we conduct a new experiment to scale up LoRA in the 2D experiments, and find that LoRA obtains better visual quality than U-Net with a large number of particles (e.g. n=2048), so it does not have scaling issues. We will add these results in the final version. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. We kindly request that you consider raising the score accordingly if we have addressed your concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I have read the rebuttal in full. The extra quantitative results using FID help strengthen the case for VSD a lot and I expect these to be incorporated in the final paper. One additional thing regarding the evaluation of multiple-particle VSD: “For VSD(n=4), as we can get 4 particles (3D objects) per prompt, we randomly select one particle per prompt and render the corresponding 10 images of the selected particle for fair comparison” I agree this is definitely a way to make the comparison with SDS very “fair”.
But if a single model is selected per prompt, this would not highlight the additional diversity from VSD, so it might be unfair to VSD. You could instead, for say 16 views, select 4 from the 1st particle, 4 from the 2nd particle, etc. This would allow you to have renderings from all particles in your FID evaluation without changing the number of input views. It may help strengthen the evaluation of VSD (and highlight its ability to achieve diversity). This addresses my concern about the quantitative metrics and my other questions, so I will raise my score to accept. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for the detailed feedback and for raising the score, and thank you for providing the valuable suggestion for evaluating multi-particle VSD. We follow your suggestion and re-evaluate VSD(n=4) by considering all the particles and find that the FID by VSD(n=4) is significantly better than SDS and VSD(n=1). The detailed results are: | Method(3D) | SDS | VSD(n=1) | VSD(n=4) | | :--------- | :----- | :------- | :------- | | 3D-FID(↓) | 191.82 | 186.87 | 173.96 | Specifically, for each prompt, we can get 4 particles by VSD(n=4). For each particle, we render 10 images from 10 views, following the same setting specified before, and thus get 40 images per prompt. Then we randomly select 10 images per prompt so as to get the same number of images as in the baseline settings, and compute the FID score correspondingly. We repeat this process 10 times and compute the mean FID score, as shown above. We appreciate your detailed comments and suggestions. Thank you again for your great effort!
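The repeated-subsampling evaluation described in the reply above could be sketched as follows. This is a minimal illustration, not the authors' code: `mean_fid_over_subsets` and `fid_fn` are hypothetical names, and `fid_fn` stands in for a real FID implementation applied to the selected renders.

```python
import numpy as np

def mean_fid_over_subsets(renders, fid_fn, n_select=10, n_repeats=10, seed=0):
    """Repeatedly subsample n_select renders per prompt and average the FID.

    renders: dict mapping prompt -> list of rendered images (any objects here);
    fid_fn: placeholder for a real FID implementation taking a flat list of images.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        subset = []
        for prompt, imgs in renders.items():
            # pick n_select of the (e.g. 40) renders for this prompt, without replacement
            idx = rng.choice(len(imgs), size=n_select, replace=False)
            subset.extend(imgs[i] for i in idx)
        scores.append(fid_fn(subset))
    # mean over the repeated random subsets, as in the rebuttal's protocol
    return float(np.mean(scores))
```

Keeping the per-prompt image count equal to the baseline's (10 per prompt) makes the FID numbers comparable across SDS, VSD(n=1), and VSD(n=4) while still letting every particle contribute.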
Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation
Accept (spotlight)
Summary: The paper provides a framework to derive a new loss for Variational Monte Carlo methods. They introduce Wasserstein Quantum Monte Carlo, which uses a gradient flow based on the Wasserstein metric. The method is empirically tested for fermionic systems, e.g. a Hydrogen chain, and compared against another Quantum Monte Carlo approach based on the Fisher-Rao metric. Faster convergence in terms of number of epochs is observed. Strengths: The paper derives, within a theoretical framework, a new loss function for Quantum Monte Carlo calculations. They underline their findings with experiments and improve upon state-of-the-art variational Monte Carlo calculations with respect to the number of epochs on rather small molecular systems. A lot of recent improvements in this field came mainly from architectural changes (e.g. PsiFormer); it is great to also see improvements on the loss function and optimization side. Weaknesses: While the derived loss function is novel and improves upon current state-of-the-art calculations with respect to the number of epochs, I have the following concerns: 1. The lack of immediate relevance to the ML community, due to the rather small experimental section; the newly derived loss fits rather to the Quantum Chemistry community. 2. What is the QVMC baseline in section 4, is it PsiFormer? What kind of settings did you use for the PsiFormer architecture? The authors of PsiFormer proposed two variants, a small and a larger one. 3. The WQMC loss seems to converge faster in terms of epochs, but based on Algorithm 1 my understanding is that you have to perform third derivatives, which is computationally quite expensive. Could the authors add to Fig. 2 the timings per epoch for each method and the type of GPUs used to perform the experiments? 4. If possible, it would also be interesting to see a broader experimental section rather than these “simple” molecules to better understand the improvements gained through the newly defined loss.
For example, the potential energy surface of the Nitrogen dimer is a difficult system (especially considering the bond-breaking geometry), which was also analyzed by the FermiNet authors. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Algorithm 1 you stop the gradient for the gradient of the local energies. It is not clear to me from eq. 24 why this is the case. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: All limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and the time spent. We are glad that you appreciate our developments in the design of new optimization procedures for variational Monte Carlo methods. In what follows we address the concerns raised and answer the questions. > The lack of immediate relevance to the ML community, due to the rather small experimental section and the newly derived loss fits rather to the Quantum Chemistry community. We respectfully disagree. The NeurIPS community has a rich tradition of ML applications to other fields of science including natural sciences; the numerous AI4Science workshops are evidence of this (e.g., see NeurIPS 2022 "Machine Learning and the Physical Sciences", where many papers including the opening keynote were about quantum variational Monte Carlo). Moreover, different approaches to this particular problem were recently published at the top ML conferences including NeurIPS [1,2,3]. While in this paper we introduced our approach in the context of energy functional minimization for quantum systems, it is quite general-purpose: it can be applied to minimize any functional directly in the space of distributions, making it relevant to the wider audience within the NeurIPS community. For example, instead of using the quantum energy functional one might consider the KL divergence functional $\text{KL}(p \Vert q)$ between the data distribution $p$ and the model distribution $q$. Then the Fisher--Rao metric corresponds to energy-based model training and the Wasserstein metric corresponds to score matching. We will draw these connections and expand on this discussion in the next revision of the paper. > What is the QVMC baseline in section 4, is it PsiFormer? What kind of settings did you use for the PsiFormer architecture? The authors of PsiFormer proposed two variants, a small and a larger one. For the baseline, we precisely follow [3].
Thus, as considered in [3] for systems of similar size, we use the small Psiformer architecture, the same hyperparameters, and the same techniques to stabilize optimization. > The WQMC loss seems to converge faster in terms of epochs but based on Algorithm 1 my understanding is that you have to perform third derivatives which is computationally quite expensive. Could the authors add to Fig. 2 the timings per epoch for each method and the type of GPUs used to perform the experiments? As suggested by the reviewer, for a proper comparison, we updated Fig. 2 with wall-clock time as the x-axis and uploaded the new figure in our global response. The type of GPUs used to run the experiments is also mentioned in the global response. In short, all the claims of our paper still hold when considering wall-clock time. Third-order derivatives can be efficiently computed using modern deep learning frameworks such as JAX, which we used to implement our method. We will include these updates in the final paper. > In case it is possible it would be also interesting to see a broader experimental section rather than these “simple” molecules to better understand the improvements gained through the newly defined loss. For example, the potential energy surface of the Nitrogen dimer is a difficult system (especially considering the bond breaking geometry), which was also analyzed by the authors from FermiNet. The systems considered in the experiments are not “simple”. Indeed, a single run for the H10 system takes more than 33 GPU hours on an Nvidia A40 GPU, and we can see that there is already a gap between the baseline accuracy and the chemical accuracy. In this work, we focus on extending the Variational Monte Carlo algorithms family and illustrating that the proposed extensions might indeed be beneficial for the performance.
Scaling the proposed method to bigger molecules requires a more comprehensive empirical study and substantial engineering infrastructure, which are beyond the scope of this project. > In algorithm 1 you stop the gradient for the gradient of the local energies. It is not clear to me from eq. 24, why this is the case? Thank you for bringing this up! The local energy in (24) is evaluated at $q_t(x) = q(x,\theta)$ but does not depend on parameters $\theta$. Indeed equation (24) is obtained by using (23) for the density time derivative in (17) and integration by parts. Note that the gradient w.r.t. $\theta$ is applied only to the log density in (17) and to the gradient of log density after the integration by parts. We will further clarify this in the next revision of the paper. [1] Gerard, Leon, et al. "Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need?." Advances in Neural Information Processing Systems 35 (2022): 10282-10294. [2] Gao, Nicholas, and Stephan Günnemann. "Generalizing neural wave functions.", International Conference on Machine Learning 2023. [3] von Glehn, Ingrid, James S. Spencer, and David Pfau. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry." The Eleventh International Conference on Learning Representations. 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! Based on the rebuttal, I will revise my score and recommend a weak accept. > Immediate relevance to ML community Thank you for clarifying this and I will recommend the paper to be accepted. > Experimental section I still disagree that the tested systems are a good set of systems to showcase their new approach but I understand that a detailed experimental section is out of reach for this work and maybe not the focus of the work. 
Although I believe it would improve the paper significantly, especially since the magical chemistry threshold of chemical accuracy is reached faster for the rather small systems with the standard approach (w.r.t. wall-clock time). Here, more complex systems would potentially showcase the importance of having a better optimisation scheme. I admit the results are impressive, especially the gradient norm and variance, and I appreciate the additional figure regarding wall-clock time. > Another issue, common to all Variational Monte Carlo methods, is the singularities around nuclei, which create instabilities during training. Could you elaborate on this comment a little bit more? My understanding for the tested systems (Be, B, Li2, H10) is that the current architectures like FermiNet and PsiFormer are quite stable. Do you observe divergences of the energy during training? You mention effective core potentials as a potential solution; do you expect effective core potentials to already be necessary for atoms like Oxygen, Nitrogen or Carbon, or just for heavy atoms (4th row and so on)? ### Minor questions 1. Are the blue curves the same in the main text and the additional pdf (belonging to the rebuttal)? If I compare for B the blue curve in the opt. step plot and in the wall-clock time plot, they seem to be different, whereas the orange and the green curves seem to be identical. Or is this a plotting artifact? 2. Do you assume your new optimization scheme to scale worse than the standard approach with the number of electrons? --- Reply to Comment 1.1.1: Comment: Thank you for carefully going over the rebuttal and reconsidering your evaluation! We appreciate your time and efforts spent. > Do you observe divergences of the energy during training? You mention effective core potential as a potential solution, do you expect effective core potential to be already necessary for atoms like Oxygen, Nitrogen or Carbon or just for heavy atoms (4th row and so on)?
In our comment, we refer to cusp instabilities, which are unrelated to the proposed methodology and are common to both the baseline and the proposed method. Note that this is mostly related to the architectural design choice. Indeed, the Psiformer [1] architecture does not ensure any cusp conditions by design, and the authors discuss different ways to stabilize the training (see Fig. 3,5, appendices 1, 2.c, 2.d in [1]). Alternatively, one can ensure cusp conditions by design, e.g. see the PauliNet architecture [2]. We are sorry for the possible confusion. The introduction of pseudopotentials is a common technique for dealing with the cusps/singularities. We have not experimented with it, but one can find open-source solutions in the DeepQMC framework [3]. Also, we refer the reader to [3] for the empirical study and the guidelines for using pseudopotentials. > Are the blue curves the same in the main text and the additional pdf (belonging to the rebuttal)? As we discussed in our previous response, the proposed method takes more time per iteration. So in order to have a proper runtime comparison, we re-ran the entire experiment for QVMC with more total iterations, and thus the entire blue curve is new. However, the general pattern of the new blue curve closely matches the old blue curve after rescaling the x-axis. The remaining curves are identical. > Do you assume your new optimization scheme to scale worse than the standard approach with the number of electrons? In our experiments, we observe the biggest performance gain for the Hydrogen chain system (H10). Hence, we do not expect our method to scale worse than the baseline with the number of electrons. Moreover, as we discuss in the conclusion (lines 290-297), we expect our method to scale better for multi-modal target distributions (e.g. due to multiple atoms in molecular systems) due to constraining the density model updates to local changes, which favor faster MCMC mixing. [1] von Glehn, Ingrid, James S.
Spencer, and David Pfau. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry." The Eleventh International Conference on Learning Representations. 2022. [2] Hermann, Jan, Zeno Schätzle, and Frank Noé. "Deep-neural-network solution of the electronic Schrödinger equation." Nature Chemistry 12.10 (2020): 891-897. [3] Schätzle, Zeno, et al. "DeepQMC: an open-source software suite for variational optimization of deep-learning molecular wave functions." arXiv preprint arXiv:2307.14123 (2023).
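The stop-gradient detail discussed in this thread (eq. 24) can be sketched in JAX, which the authors say they used: the spatial gradient of the local energy acts as a fixed vector field evaluated at the current density, so it is detached with `stop_gradient`, while only the score $\nabla_x \log q(x;\theta)$ carries the parameter dependence. The function names, signatures, and toy densities below are illustrative assumptions, not the paper's actual code.

```python
import jax
import jax.numpy as jnp

def wqmc_surrogate(theta, xs, log_q, local_energy):
    """Surrogate objective whose theta-gradient mimics the Wasserstein update.

    xs: array of walker positions, shape (N, d);
    log_q(theta, x): log density of the Born distribution at a single x;
    local_energy(x): local energy at a single x, treated as theta-independent.
    """
    # v(x) = grad_x E_loc(x), detached so no theta-gradient flows through it
    v = jax.lax.stop_gradient(jax.vmap(jax.grad(local_energy))(xs))
    # s_theta(x) = grad_x log q(x; theta), the only theta-dependent factor
    score = jax.vmap(jax.grad(lambda x: log_q(theta, x)))(xs)
    return jnp.mean(jnp.sum(score * v, axis=-1))
```

Differentiating this surrogate with `jax.grad` w.r.t. `theta` then only propagates through the log density, matching the integration-by-parts argument in the rebuttal.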
Summary: The authors show that the optimization objective typically used in variational Monte Carlo (VMC) solutions of the Schrödinger equation can be interpreted as the Fisher-Rao gradient flow in the space of Born distributions. Based on this insight, they suggest to substitute the Fisher-Rao metric with Wasserstein metrics, which has the effect that during optimization, probability mass is "transported gradually" rather than "teleported" in space. Since the VMC optimization process typically relies on sampling "walkers" from the probability distribution, which evolve over time according to a Markov process, a gradual "transport" of the probability mass promises improved convergence. The authors empirically demonstrate that their proposed method leads to lower ground state energies (and variance) for several small systems. Strengths: The paper introduces a very simple (although non-trivial!) idea and is clearly presented. The proposed optimization methods show clear practical advantages and the underlying theory may lead to even more sophisticated optimization methods for VMC solutions of the Schrödinger equation in the future. Weaknesses: The empirical evaluation of the proposed method is limited to relatively few and small systems. While the results are very promising, it is therefore unclear how the proposed method compares to the standard approach when applied to more complicated, larger systems (e.g. benzene). This being said, I find this limitation acceptable considering the large computational cost of running VMC calculations for larger systems (which the authors may not be able to afford). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Have the authors considered applying their method also to larger test systems (e.g. benzene), or even just small molecular test systems (say ethanol, methane, ...)? Currently the method is only tested for two atoms, the lithium dimer, and the H10 chain. 
I encourage the authors to extend their empirical evaluation to more complicated examples (if they can afford it). - The plots in Figure 2 show the number of iterations, but they do not show the associated runtime costs. Is a single iteration of WQMC comparable in cost to a single iteration of QVMC? - A single iteration of VMC consists of several individual steps including the parameter update followed by sampling walker positions for the updated probability density. The sampling of walker positions typically uses a certain number of steps in a Markov process. Considering the "more well-behaved" updates of the probability density with WQMC compared to QVMC, is it possible to reduce the required number of sampling steps for the walker positions? If the authors have not considered this, I encourage them to try. Perhaps it is possible to decrease the runtime costs of a single iteration this way? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not discuss limitations of their work, nor its potential for negative societal impact. While the latter is acceptable in my opinion (this work is about an algorithmic method to solve a fundamental problem in quantum chemistry and a discussion of negative societal impacts would probably appear contrived), a discussion of limitations should be added in a revised version. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and positive feedback. We are glad that you appreciated both the simplicity and non-trivial nature of our idea. We also believe that our underlying theory has many potential implications for developing even better VMC solutions. In what follows we address the raised concerns and answer your questions. > The plots in Figure 2 show the number of iterations, but they do not show the associated runtime costs. Is a single iteration of WQMC comparable in cost to a single iteration of QVMC? We include the runtime of all the algorithms in the plots (see attachment in the general response). Namely, for a proper comparison, instead of reporting the metrics per iteration we report them against seconds elapsed. We will include these plots in the next version of the paper. All the models were benchmarked on four A40 GPUs. > Considering the "more well-behaved" updates of the probability density with WQMC compared to QVMC, is it possible to reduce the required number of sampling steps for the walker positions? If the authors have not considered this, I encourage them to try. Perhaps it is possible to decrease the runtime costs of a single iteration this way? In our experiments, we observed that the proposed method can indeed be trained with fewer MCMC steps (not included in the paper). Moreover, one can use the evaluated vector field (gradient of the local energy) as the first proposal update to the samples. We leave the detailed study of this degree of freedom beyond the scope of the current work; instead, we apply minimal changes to the optimization procedure to highlight the effect of the newly introduced metric. > I encourage the authors to extend their empirical evaluation to more complicated examples (if they can afford it). The systems considered in the experiments are not “simple”.
Indeed, a single run for the H10 system takes more than 33 GPU hours on an Nvidia A40 GPU, and we can see that there is already a gap between the baseline accuracy and the chemical accuracy. In this work, we focus on extending the Variational Monte Carlo algorithms family and illustrating that the proposed extensions might indeed be beneficial for the performance. Scaling the proposed method to bigger molecules requires a more comprehensive empirical study and substantial engineering infrastructure, which are beyond the scope of this project. > Discussion of limitations should be added in a revised version As suggested by the reviewer, we will include a limitations section in the final paper. The main downside of the proposed method is the additional evaluation of the derivative, which results in longer iterations. Another issue, common to all Variational Monte Carlo methods, is the singularities around nuclei, which create instabilities during training. In the general response, we discuss how these limitations can be addressed in future work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed reply. Regarding my comment on the "simplicity" of the considered systems: I understand that there are different opinions on what constitutes a "simple" system. With "simple" I mainly meant the number of electrons. For example, H10 with just 10 electrons is simply not a particularly large example, but I am aware that it is still challenging for many methods due to its highly correlated character. I also do not consider 33 GPU hours to be a particularly large investment of computational resources, but I understand that access to computational resources is very inhomogeneous in the field. I still find a larger test system would make the paper more interesting, but again, I leave it up to the authors whether they want to consider this.
Summary: The authors propose a novel approach named Wasserstein Quantum Monte Carlo, which uses the gradient flow induced by the Wasserstein and Wasserstein Fisher-Rao metrics and improves the convergence rate of Quantum Variational Monte Carlo. The numerical results show that following the gradient flow under the Wasserstein and Wasserstein Fisher-Rao metrics results in better convergence to the ground-state wave function. Strengths: 1. This paper is technically sound with detailed derivations. 2. Seeking new approaches to better solve quantum chemistry and quantum physics problems is very important to the quantum machine learning community. Weaknesses: I'm not familiar with this subject but I strongly believe that this paper belongs to a Physics conference or journal instead of NeurIPS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The numerical results show that the proposed method outperforms QVMC. But it seems not that good compared to variational quantum eigensolvers (such as UCCSD or adaptVQE) on ground-state energy estimation problems. Could you illustrate the difference between the proposed method and those VQE methods? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Wrong track to submit this paper to NeurIPS. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time spent and valuable feedback. > I'm not familiar with this subject but I strongly believe that this paper belongs to a Physics conference or journal instead of NeurIPS We respectfully disagree that this paper does not belong at NeurIPS. The NeurIPS community has a rich tradition of ML applications to other fields of science including natural sciences; the numerous AI4Science workshops are evidence of this (e.g., see NeurIPS 2022 "Machine Learning and the Physical Sciences", where many papers including the opening keynote were about quantum variational Monte Carlo). Moreover, different approaches to this particular problem were recently published at the top ML conferences including NeurIPS [1,2,3]. While in this paper we introduced our approach in the context of energy functional minimization for quantum systems, it is quite general-purpose: it can be applied to minimize any functional directly in the space of distributions, making it relevant to the wider audience within the NeurIPS community. For example, instead of using the quantum energy functional one might consider the KL divergence functional $\text{KL}(p \Vert q)$ between the data distribution $p$ and the model distribution $q$. Then the Fisher--Rao metric corresponds to energy-based model training and the Wasserstein metric corresponds to score matching. We will draw these connections and expand on this discussion in the next revision of the paper. > Could you illustrate the difference between the proposed method and those VQE methods? The considered Variational Monte Carlo methods (both the baseline and the proposed method) are classical computational approaches to the quantum chemistry problem. On the other side, (adapt)VQE is an algorithm for quantum devices and cannot be efficiently run until efficient quantum devices are available. Hence, we provide no comparison.
If compared to CCSD, which is the classical version of UCCSD, the approaches considered in the paper are known to be much more scalable (see Fig. 1 in [4]). [1] Gerard, Leon, et al. "Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need?." Advances in Neural Information Processing Systems 35 (2022): 10282-10294. [2] Gao, Nicholas, and Stephan Günnemann. "Generalizing neural wave functions.", International Conference on Machine Learning 2023. [3] von Glehn, Ingrid, James S. Spencer, and David Pfau. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry." The Eleventh International Conference on Learning Representations. 2022. [4] Hermann, Jan, et al. "Ab-initio quantum chemistry with neural-network wavefunctions." arXiv preprint arXiv:2208.12590 (2022). --- Rebuttal Comment 1.1: Comment: Thanks for clarifying. Again, I'm not very familiar with this topic and I have given a very low confidence about my evaluation. I hope this will not affect the overall rating of this paper.
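For reference, the two gradient flows contrasted throughout these rebuttals can be written in their standard steepest-descent forms. This is a hedged sketch from the textbook definitions, so the notation and normalizations may differ from the paper's own equations: for a functional $F[q]$ with first variation $\frac{\delta F}{\delta q}$,

```latex
% Fisher--Rao gradient flow: pointwise reweighting ("teleportation" of mass)
\partial_t q_t(x) = -\,q_t(x)\left(\frac{\delta F}{\delta q}(x)
  - \mathbb{E}_{q_t}\!\left[\frac{\delta F}{\delta q}\right]\right)

% Wasserstein gradient flow: transport along a vector field (gradual movement of mass)
\partial_t q_t(x) = \nabla_x \cdot \left( q_t(x)\, \nabla_x \frac{\delta F}{\delta q}(x) \right)
```

As the rebuttals describe it, taking $F$ to be the quantum energy functional then recovers QVMC (Fisher-Rao) and WQMC (Wasserstein), while W(FR)QMC combines both terms.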
Summary: The paper proposes a new way to compute gradients for parameters in quantum Monte Carlo. The authors propose to update the network parameters by following the Wasserstein Fisher-Rao gradient flow, which is composed of a continuity equation and a growth term. The authors show that when only the latter term is considered, the WFR gradient flow is equivalent to the conventional energy loss. The authors then propose to leverage the first term and propose Wasserstein quantum Monte Carlo. With some tricks the proposed optimization method leads to faster convergence. Experiments are conducted on small atomic systems. Strengths: - The paper is very well written. - The proposed method is shown to include the conventional method. - The experimental results are improved. Weaknesses: - It's not explicitly mentioned in the paper, but I think the extra gradient computation may cause more computation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Since some tricks are used in the experiments, how do these tricks influence the performance? Are those tricks also used for the baseline methods? - Which would be the recommended method? The WQMC or the W(FR)QMC? What would be the trade-off? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and the time spent. We are glad you found our paper well-written. In what follows we address the concerns raised and answer the questions. > It's not explicitly mentioned in the paper, but I think the extra gradient computation may cause more computation. Indeed, the proposed method uses extra gradient computation. We will clarify this in the paper. In the general response, we added plots comparing the methods in terms of runtime instead of iterations to take this extra computation into account. Note that all the claims in the paper still hold in terms of wall time. We will include these plots in the next version of the paper. > Since some tricks are used in the experiments, how do these tricks influence the performance? Are those tricks also used for the baseline methods? Filtering out outliers and clipping the gradients are standard practices in variational Monte Carlo methods for molecules. For the baseline, we precisely follow [1], where different clippings were used to stabilize the optimization procedure. For the proposed method, we design analogs for the new objective and use the same tricks as proposed in [1] wherever possible. > Which would be the recommended method, WQMC or W(FR)QMC? What would be the trade-off? We observe that the Fisher-Rao metric demonstrates faster convergence in the beginning, while the Wasserstein metric converges faster later into the optimization. Our intuition is that using both metrics at the same time allows us to adaptively decide which distance to the ground state is larger at the moment and minimize it. Indeed, one can introduce a hyperparameter controlling the tradeoff between the metrics. In our experiments, we haven't investigated the influence of this hyperparameter, and simply take the sum of the two metrics for W(FR)QMC. We leave an extensive study of this parameter for future work. [1] von Glehn, Ingrid, James S. Spencer, and David Pfau. 
"A Self-Attention Ansatz for Ab-initio Quantum Chemistry." The Eleventh International Conference on Learning Representations. 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I will raise my score.
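The outlier-filtering and clipping tricks mentioned in the rebuttal can be illustrated with a minimal sketch. The function name and the median/mean-absolute-deviation recipe below are illustrative assumptions for how such clipping is typically done in variational Monte Carlo, not the paper's exact implementation:

```python
import numpy as np

def clip_local_energy(e_loc, width=5.0):
    """Clip local energies to a window around their batch median.

    Outlier filtering of this kind is a standard stabilization trick in
    variational Monte Carlo; the median/mean-absolute-deviation recipe
    here is an illustrative choice, not the paper's exact one.
    """
    center = np.median(e_loc)
    scale = np.mean(np.abs(e_loc - center))  # sets the clipping width
    return np.clip(e_loc, center - width * scale, center + width * scale)

# One outlier walker (50.0) gets pulled back toward the batch median,
# while well-behaved samples pass through unchanged.
energies = np.array([-1.0, -1.1, -0.9, -1.05, 50.0])
clipped = clip_local_energy(energies, width=1.0)
```

In practice the same clipped quantity would feed the gradient estimator, so a single diverging walker near a nucleus cannot dominate the update.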
Rebuttal 1: Rebuttal: We thank all reviewers for their time and valuable feedback on the paper. In what follows we would like to address the common concerns raised by most of the reviewers. **Runtime.** As requested by most of the reviewers, we include the runtime of all the algorithms in the plots (see the new plots in the pdf attached to this response). Namely, for a proper comparison, instead of reporting the metrics per iteration we report them per wall time in seconds. Note that all the claims in the paper still hold in terms of wall time. We will include these plots in the next version of the paper. All the models were benchmarked on four A40 GPUs. **Limitations.** As suggested by reviewer Nk4c, we will include a limitations section in the final paper. The main downside of the proposed method is the additional evaluation of the derivative, which results in longer iterations. Another issue, common to all variational Monte Carlo methods, is the singularities around nuclei, which create instabilities during training. **Future work.** We believe that the aforementioned limitations can be addressed in future work with an extensive empirical study. 1. One direction is to come up with more efficient Monte Carlo schemes (updates of the samples). Indeed, in our experiments we observed that the proposed method requires fewer MCMC steps (not included in the paper). Moreover, one can use the evaluated vector field (the gradient of the local energy) as the first proposal update to the samples. This would require additional tuning of the hyperparameters (such as the proposal step) but would allow us to decrease the runtime. 2. Another direction is the usage of effective core potentials for larger atoms, which help alleviate the instabilities created by cusps around nuclei (see, e.g., [1]). 3. Finally, one can study alternative metrics by choosing the tradeoff between the Fisher-Rao and Wasserstein metrics. 
In our experiments, we observe that such a tradeoff exists, since the joint metric W(FR)QMC (without a tuned coefficient) consistently outperforms both metrics used separately. **Scaling experiments.** The systems considered in the experiments are not “simple”. Indeed, a single run for the H10 system takes more than 33 GPU hours on an Nvidia A40 GPU, and we can see that there is already a gap between the baseline accuracy and chemical accuracy. In this work, we focus on extending the family of variational Monte Carlo algorithms and illustrating that the proposed extensions can indeed be beneficial for performance. Scaling the proposed method to bigger molecules requires a more comprehensive empirical study and substantial engineering infrastructure, which are beyond the scope of this project. [1] Li, Xiang, et al. "Fermionic neural network with effective core potential." Physical Review Research 4.1 (2022): 013021. Pdf: /pdf/c6ca30aadd7573e3056b92008b44d0aae10956f4.pdf
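The tradeoff between the two metrics mentioned in point 3 could be sketched as a single mixed update. The function `wfr_update` and the coefficient `lam` below are hypothetical names for the tradeoff hyperparameter discussed in the rebuttal, with `lam = 1` corresponding to the plain, untuned sum used for W(FR)QMC:

```python
import numpy as np

def wfr_update(params, grad_fr, grad_w, lr=1e-3, lam=1.0):
    """One optimizer step mixing the Fisher-Rao and Wasserstein terms.

    lam = 0 recovers the conventional (Fisher-Rao-only) update, while
    lam = 1 corresponds to the untuned sum of the two metrics.
    This is an illustrative sketch, not the paper's implementation.
    """
    return params - lr * (grad_fr + lam * grad_w)

params = np.zeros(2)
g_fr = np.array([1.0, 2.0])  # Fisher-Rao (energy-loss) gradient term
g_w = np.array([3.0, 4.0])   # Wasserstein (continuity) gradient term
step = wfr_update(params, g_fr, g_w, lr=1.0, lam=1.0)  # -> [-4., -6.]
```

An extensive sweep over `lam` is exactly the future-work study the authors defer; the sketch only makes the interpolation between the two limits explicit.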
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper shows that quantum variational Monte Carlo can be written as an energy-minimizing gradient flow under the Fisher-Rao metric, and proposes an alternative to traditional quantum Monte Carlo by following the Wasserstein gradient flow. Experiments with Be, B, Li2, and an H10 chain show lower energies achieved by including the Wasserstein gradient flow. Strengths: The paper proposes a novel optimization approach to quantum Monte Carlo, an important problem for computational physics/chemistry. The presentation and writing are very clear and mathematically rigorous. The results show better convergence and better optima reached with the proposed method. Weaknesses: Only 4 physical systems and one NN architecture were tested. The paper seems to state that WQMC is better for all cases; if so, more experiments would make this claim stronger. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Why does W(FR)QMC seem to be better than WQMC alone for most of the experiments? What was the hyper-parameter controlling the W/FR tradeoff, and how sensitive is this hyper-parameter? Were other c-cost functions tried for the c-Wasserstein gradient flow? Why does the coordinate-wise application of tanh work well? What was stop_gradient in the algorithm box referring to? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Do the authors expect WQMC to converge better for all physical systems? For example, does the strong/weak correlation of the physics affect the method's convergence relative to QMC? Could the authors include the additional computational complexity (theoretical, empirical) required for the Wasserstein gradient? 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and the time spent. We are glad that you find the presentation of our ideas to be clear and rigorous. In what follows we answer the questions asked. > Why does W(FR)QMC seem to be better than WQMC alone for most of the experiments? What was the hyper-parameter controlling the W/FR tradeoff, and how sensitive is this hyper-parameter? We observe that the Fisher-Rao metric demonstrates faster convergence in the beginning, while the Wasserstein metric converges faster later into the optimization. Our intuition is that using both metrics at the same time allows us to adaptively decide which distance to the ground state is larger at the moment and minimize it. Indeed, one can introduce a hyperparameter controlling the tradeoff between the metrics. In our experiments, we haven't investigated the influence of this hyperparameter, and simply take the sum of the two metrics for W(FR)QMC. We leave an extensive study of this parameter for future work. > Were other c-cost functions tried for the c-Wasserstein gradient flow? Why does the coordinate-wise application of tanh work well? In our experiments, we concluded that for stable training it is important that the c-cost function clips high values of the vector field. Our intuition is that this is related to the instabilities created by cusps close to nuclei, which are common to all variational Monte Carlo methods. We also experimented with the sign non-linearity, i.e., applying it coordinate-wise to the local energy gradient. Although it exhibits faster convergence at the beginning of the optimization, we find that it demonstrates higher variance later on. > What was stop_gradient in the algorithm box referring to? For better presentation, and to provide some intuition on the implementation, by stop_gradient we denote the operation that detaches a node from the computational graph. 
That is, we evaluate the value of the local energy gradient, but then we only use its value, not backpropagating w.r.t. inputs or parameters, so it is not affected by other gradient operators. We will clarify this in the next revision of the paper, thank you! > Do the authors expect WQMC to converge better for all physical systems? For example, does the strong/weak correlation of the physics affect the method's convergence relative to QMC? Although our approach is agnostic to the functional (and, correspondingly, the Hamiltonian), we expect our method to improve over traditional QMC as the effect of correlations increases. In the limit where the system becomes non-interacting, a single Slater determinant can represent the ground state and hence consists of single-particle orbitals only. More importantly, however, the cases where we expect our methodology to work best are those where the target Born distribution is multi-modal, e.g., due to multiple atoms in molecular systems. Given the latter, our significantly improved results for H10 are not unexpected. > Could the authors include the additional computational complexity (theoretical, empirical) required for the Wasserstein gradient? As requested by most of the reviewers, we include the runtime of all the algorithms in the plots (see the new plots in the pdf attached to the general response). Namely, for a proper comparison, instead of reporting the metrics per iteration we report them per wall time in seconds. Note that all the claims in the paper still hold in terms of wall time. We will include these plots in the next version of the paper. All the models were benchmarked on four A40 GPUs. --- Rebuttal Comment 1.1: Comment: Hi Authors, thanks for your response, and I am happy with the discussion.
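The stop_gradient semantics described in the rebuttal (use the value, sever it from differentiation) can be illustrated with a toy forward-mode autodiff sketch. The `Dual` class and names are illustrative; in an actual framework this role is played by operations such as `jax.lax.stop_gradient` or `torch.Tensor.detach`:

```python
class Dual:
    """Toy forward-mode AD number: (value, derivative w.r.t. the seed x)."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule on the tangent component.
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)
    __rmul__ = __mul__

def stop_gradient(x):
    # Keep the value but zero the tangent: downstream derivatives
    # treat x as a constant, exactly the detaching described above.
    return Dual(x.val, 0.0)

x = Dual(3.0, 1.0)        # seed: dx/dx = 1
y = x * x                 # y = 9, dy/dx = 6
z = stop_gradient(x) * x  # z = 9, but dz/dx = 3 (first factor is constant)
```

This mirrors how the evaluated local energy gradient enters the objective: its numerical value participates in the loss, but no derivatives flow back through it.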